1University of Maryland, College Park 2NEC Labs America 3UC San Diego
[Qualitative comparison panels: Input vs. ChatSim vs. Cosmos vs. Ours]
LangDriveCTRL is a natural-language-controllable framework for editing real-world driving videos to synthesize diverse traffic scenarios. It represents each video as an explicit 3D scene graph, decomposing the scene into a static background and dynamic object nodes. To enable fine-grained editing and realism, it introduces a feedback-driven agentic pipeline. An Orchestrator converts user instructions into executable graphs that coordinate specialized multi-modal agents and tools. An Object Grounding Agent aligns free-form text with target object nodes in the scene graph; a Behavior Editing Agent generates multi-object trajectories from language instructions; and a Behavior Reviewer Agent iteratively reviews and refines the generated trajectories. The edited scene graph is rendered and harmonized using a video diffusion tool, and then further refined by a Video Reviewer Agent to ensure photorealism and appearance alignment. LangDriveCTRL supports both object node editing (removal, insertion, and replacement) and multi-object behavior editing from natural-language instructions. Quantitatively, it achieves nearly 2× higher instruction alignment than the previous SoTA, with superior photorealism, structural preservation, and traffic realism.
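The explicit scene-graph representation described above can be pictured as a static background node plus a dictionary of dynamic object nodes, each carrying a per-frame trajectory. The sketch below is purely illustrative (the class names, fields, and the `(x, y, z, heading)` trajectory format are our assumptions, not the paper's actual data structures), but it shows how object node editing (removal, insertion, replacement) reduces to simple operations on the graph:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    """A dynamic object (e.g. a vehicle) with a per-frame trajectory.
    Trajectory format (x, y, z, heading) is an illustrative assumption."""
    object_id: str
    category: str
    trajectory: list = field(default_factory=list)

@dataclass
class SceneGraph:
    """Explicit 3D scene graph: static background + dynamic object nodes."""
    background: str  # handle to the reconstructed static background
    objects: dict = field(default_factory=dict)

    def remove(self, object_id: str) -> None:
        """Object removal: drop the node from the graph."""
        self.objects.pop(object_id, None)

    def insert(self, node: ObjectNode) -> None:
        """Object insertion: register a new dynamic node."""
        self.objects[node.object_id] = node

    def replace(self, object_id: str, node: ObjectNode) -> None:
        """Object replacement: swap one node for another."""
        self.remove(object_id)
        self.insert(node)
```

Behavior editing would then correspond to rewriting the `trajectory` field of the grounded nodes before rendering.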
Overall Pipeline. Given an input video and the user instruction, our pipeline first builds a scene graph, which decomposes the scene into a static background node and multiple dynamic object nodes with their trajectories. To execute the instruction, the orchestrator coordinates agents and tools from different modules to work together: the object query module localizes target objects in the scene graph based on textual descriptions; the object node editing module performs object removal, insertion, and replacement; the behavior editing module generates and refines multi-object trajectories based on a feedback loop; finally, the rendering and refinement module renders the edited scene graph and iteratively refines it with a video diffusion tool. While the figure illustrates single-object editing, our pipeline is capable of multi-object editing.
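Both reviewer stages described above (the Behavior Reviewer refining trajectories and the Video Reviewer refining the rendered output) follow the same generate/review/refine pattern. A minimal sketch of that feedback loop, with the agent calls abstracted as plain callables (the function signatures and the `max_rounds` cap are our assumptions for illustration):

```python
def feedback_loop(generate, review, refine, max_rounds=3):
    """Generic agentic feedback loop: produce a candidate, then iterate
    until the reviewer accepts it (returns None) or rounds run out."""
    candidate = generate()
    for _ in range(max_rounds):
        feedback = review(candidate)
        if feedback is None:  # reviewer accepts the candidate
            return candidate
        candidate = refine(candidate, feedback)
    return candidate
```

In the behavior editing module, `generate` would map to trajectory generation from the instruction, `review` to the Behavior Reviewer Agent, and `refine` to trajectory refinement conditioned on the reviewer's feedback; the rendering module would instantiate the same loop with the video diffusion tool and the Video Reviewer Agent.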
@article{he2025langdrivectrl,
  title={LangDriveCTRL: Natural Language Controllable Driving Scene Editing with Multi-modal Agents},
  author={He, Yun and Pittaluga, Francesco and Jiang, Ziyu and Zwicker, Matthias and Chandraker, Manmohan and Tasneem, Zaid},
  journal={arXiv preprint arXiv:2512.17445},
  year={2025}
}