Text-to-video (T2V) generation has made significant strides with diffusion models. However, existing methods still struggle to accurately bind attributes, determine spatial relationships, and capture complex action interactions between multiple subjects. To address these limitations, we propose MagicComp, a training-free method that enhances compositional T2V generation through dual-phase refinement. Specifically, (1) during the conditioning stage, we introduce Semantic Anchor Disambiguation (SAD), which reinforces subject-specific semantics and resolves inter-subject ambiguity by progressively injecting the directional vectors of semantic anchors into the original text embeddings; (2) during the denoising stage, we propose Dynamic Layout Fusion Attention (DLFA), which integrates grounding priors with model-adaptive spatial perception to flexibly bind subjects to their spatiotemporal regions through masked attention modulation. Furthermore, MagicComp is a model-agnostic and versatile approach that can be seamlessly integrated into existing T2V architectures. Extensive experiments on T2V-CompBench and VBench demonstrate that MagicComp outperforms state-of-the-art methods, highlighting its potential for applications such as complex prompt-based and trajectory-controllable video generation.
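The sketch below illustrates the two ideas at a high level in PyTorch. All function names, tensor shapes, the progressive injection schedule, and the attention-thresholding rule are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of SAD-style anchor injection and DLFA-style mask fusion.
# Shapes, names, and schedules are illustrative assumptions only.
import torch
import torch.nn.functional as F

def sad_inject(prompt_emb, anchor_embs, token_spans, step, num_steps, alpha=0.3):
    """Semantic Anchor Disambiguation (sketch): progressively push each
    subject's token embeddings along the direction of its semantic anchor."""
    out = prompt_emb.clone()
    scale = alpha * (step + 1) / num_steps        # assumed progressive schedule
    for anchor, (s, e) in zip(anchor_embs, token_spans):
        direction = F.normalize(anchor - prompt_emb[s:e].mean(0), dim=-1)
        out[s:e] = out[s:e] + scale * direction   # inject the directional vector
    return out

def dlfa_bias(layout_masks, attn_probs, thresh=0.3):
    """Dynamic Layout Fusion Attention (sketch): fuse grounding-prior masks
    with model-adaptive regions derived from cross-attention responses,
    then turn the fused layout into an additive attention bias."""
    adaptive = (attn_probs > thresh).float()      # model-perceived subject regions
    fused = torch.clamp(layout_masks + adaptive, max=1.0)
    # 0 inside a subject's region, large negative outside, so each subject's
    # text tokens attend only to their own spatiotemporal region.
    return (1.0 - fused) * -1e4

# Toy usage with random stand-ins for text-encoder outputs and attention maps.
prompt_emb = torch.randn(77, 768)                 # full-prompt token embeddings
anchors = [torch.randn(768), torch.randn(768)]    # per-subject anchor embeddings
spans = [(2, 5), (8, 11)]                         # token spans of the two subjects
refined = sad_inject(prompt_emb, anchors, spans, step=10, num_steps=50)

layout = torch.zeros(2, 16, 16)                   # grounding-prior masks per subject
layout[0, :, :8] = 1.0; layout[1, :, 8:] = 1.0
attn = torch.rand(2, 16, 16)                      # cross-attention responses
bias = dlfa_bias(layout, attn)                    # added to attention logits
print(refined.shape, bias.shape)
```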
A sculpture displayed behind a candle
A cat sitting on the right of a fireplace
A lion sitting behind a chicken
@misc{zhang2025magiccomptrainingfreedualphaserefinement,
      title={MagicComp: Training-free Dual-Phase Refinement for Compositional Video Generation},
      author={Hongyu Zhang and Yufan Deng and Shenghai Yuan and Peng Jin and Zesen Cheng and Yian Zhao and Chang Liu and Jie Chen},
      year={2025},
      eprint={2503.14428},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.14428},
}