Paper Detail

AVControl: Efficient Framework for Training Audio-Visual Controls

Matan Ben-Yosef, Tavi Halperin, Naomi Ken Korem, Mohammad Salama, Harel Cain, Asaf Joseph, Anthony Chen, Urska Jelercic, Ofir Bibi

Hugging Face score 8.8

Published 2026-03-25 · First seen 2026-03-27

General AI

Abstract

Controlling video and audio generation requires diverse modalities, from depth and pose to camera trajectories and audio transformations, yet existing approaches either train a single monolithic model for a fixed set of controls or introduce costly architectural changes for each new modality. We introduce AVControl, a lightweight, extendable framework built on LTX-2, a joint audio-visual foundation model, where each control modality is trained as a separate LoRA on a parallel canvas that provides the reference signal as additional tokens in the attention layers, requiring no architectural changes beyond the LoRA adapters themselves. We show that simply extending image-based in-context methods to video fails for structural control, and that our parallel canvas approach resolves this. On the VACE Benchmark, we outperform all evaluated baselines on depth- and pose-guided generation, inpainting, and outpainting, and show competitive results on camera control and audio-visual benchmarks. Our framework supports a diverse set of independently trained modalities: spatially-aligned controls such as depth, pose, and edges, camera trajectory with intrinsics, sparse motion control, video editing, and, to our knowledge, the first modular audio-visual controls for a joint generation model. Our method is both compute- and data-efficient: each modality requires only a small dataset and converges within a few hundred to a few thousand training steps, a fraction of the budget of monolithic alternatives. We publicly release our code and trained LoRA checkpoints.
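
Sketch note: the abstract's parallel-canvas mechanism suggests a simple shape, which the hypothetical PyTorch sketch below illustrates: control-canvas tokens join the attention layers as extra keys and values while only low-rank adapters train on a frozen base. Every name here (LoRALinear, ControlAttention, the rank of 16) is an assumption for illustration, not AVControl's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update."""
    def __init__(self, dim, rank=16):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.down = nn.Linear(dim, rank, bias=False)  # trainable
        self.up = nn.Linear(rank, dim, bias=False)    # trainable
        nn.init.zeros_(self.up.weight)                # LoRA starts as a no-op

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

class ControlAttention(nn.Module):
    """Self-attention where parallel-canvas tokens join the key/value set."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.q = LoRALinear(dim)
        self.k = LoRALinear(dim)
        self.v = LoRALinear(dim)
        self.out = LoRALinear(dim)

    def forward(self, x, ctrl):
        # x:    (B, N, D) audio-visual latent tokens
        # ctrl: (B, M, D) reference-signal tokens from the control canvas
        kv_in = torch.cat([x, ctrl], dim=1)  # queries come from x only
        B, N, D = x.shape
        h, d = self.heads, D // self.heads
        q = self.q(x).view(B, N, h, d).transpose(1, 2)
        k = self.k(kv_in).view(B, -1, h, d).transpose(1, 2)
        v = self.v(kv_in).view(B, -1, h, d).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)
        return self.out(attn.transpose(1, 2).reshape(B, N, D))

# Example: 64 latent tokens attend over themselves plus 64 depth-canvas tokens.
x = torch.randn(2, 64, 256)
ctrl = torch.randn(2, 64, 256)
out = ControlAttention(256)(x, ctrl)  # shape (2, 64, 256)

Because the base weights stay frozen and only the adapters train, each modality can be a separate, independently trained LoRA, which is what makes the framework extendable without architectural changes.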

Workflow Status

Review status
pending
Role
unreviewed
Read priority
soon
Vote
Not set.
Saved
no
Collections
Not filed yet.
Next action
Not filled yet.

Reading Brief

No structured notes yet. Add `summary_sections`, `why_relevant`, `claim_impact`, or `next_action` to `papers.jsonl` to enrich this view.
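
For reference, a hypothetical Python snippet that appends such a record; the four field names come from the note above, while the `id` key, the value types, and the file schema are assumptions.

import json

note = {
    "id": "2603.24793",  # assumed key; schema of papers.jsonl is not shown here
    "summary_sections": ["parallel canvas", "per-modality LoRA training"],
    "why_relevant": "Modular control for joint audio-visual generation.",
    "claim_impact": "Per-modality LoRAs rival monolithic control models.",
    "next_action": "Skim the VACE Benchmark comparisons.",
}
with open("papers.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(note) + "\n")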

Why It Surfaced

No ranking explanation is available yet.

Tags

No tags.

BibTeX

@misc{benyosef2026avcontrol,
  title = {AVControl: Efficient Framework for Training Audio-Visual Controls},
  author = {Matan Ben-Yosef and Tavi Halperin and Naomi Ken Korem and Mohammad Salama and Harel Cain and Asaf Joseph and Anthony Chen and Urska Jelercic and Ofir Bibi},
  year = {2026},
  abstract = {Controlling video and audio generation requires diverse modalities, from depth and pose to camera trajectories and audio transformations, yet existing approaches either train a single monolithic model for a fixed set of controls or introduce costly architectural changes for each new modality. We introduce AVControl, a lightweight, extendable framework built on LTX-2, a joint audio-visual foundation model, where each control modality is trained as a separate LoRA on a parallel canvas that provides the reference signal as additional tokens in the attention layers, requiring no architectural changes beyond the LoRA adapters themselves. We show that simply extending image-based in-context methods to video fails for structural control, and that our parallel canvas approach resolves this. On the VACE Benchmark, we outperform all evaluated baselines on depth- and pose-guided generation, inpainting, and outpainting, and show competitive results on camera control and audio-visual benchmarks. Our framework supports a diverse set of independently trained modalities: spatially-aligned controls such as depth, pose, and edges, camera trajectory with intrinsics, sparse motion control, video editing, and, to our knowledge, the first modular audio-visual controls for a joint generation model. Our method is both compute- and data-efficient: each modality requires only a small dataset and converges within a few hundred to a few thousand training steps, a fraction of the budget of monolithic alternatives. We publicly release our code and trained LoRA checkpoints.},
  url = {https://huggingface.co/papers/2603.24793},
  keywords = {LoRA, LTX-2, audio-visual foundation model, parallel canvas, attention layers, in-context methods, VACE Benchmark, depth-guided generation, pose-guided generation, inpainting, outpainting, camera control, audio-visual benchmarks, spatially-aligned controls, camera trajectory, sparse motion control, video editing, modular audio-visual controls, compute-efficient, data-efficient, huggingface daily},
  eprint = {2603.24793},
  archiveprefix = {arXiv},
}

Metadata

{}