1University of Maryland, College Park 2NEC Labs America 3UC San Diego
Qualitative comparisons (video panels): Input · ChatSim · Cosmos · Ours
LangDriveCTRL is a natural-language-controllable framework for editing real-world driving videos to synthesize diverse traffic scenarios. It leverages explicit 3D scene decomposition to represent a driving video as a scene graph containing a static background and dynamic objects. To enable fine-grained editing and realism, it incorporates an agentic pipeline in which an Orchestrator transforms user instructions into execution graphs that coordinate specialized agents and tools. Specifically, an Object Grounding Agent establishes correspondence between free-form text descriptions and target object nodes in the scene graph; a Behavior Editing Agent generates multi-object trajectories from language instructions; and a Behavior Reviewer Agent iteratively reviews and refines the generated trajectories. The edited scene graph is rendered and then refined with a video diffusion tool to address artifacts introduced by object insertion and large view changes. LangDriveCTRL supports both object node editing (removal, insertion, and replacement) and multi-object behavior editing from a single natural-language instruction. Quantitatively, it achieves nearly 2× higher instruction alignment than the previous state of the art, with superior structural preservation, photorealism, and traffic realism.
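To make the scene-graph representation concrete, here is a minimal sketch of the decomposition and the three node-editing operations (removal, insertion, replacement). All names (`SceneGraph`, `ObjectNode`, etc.) are illustrative stand-ins, not the paper's actual data structures:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the scene-graph decomposition: one static
# background node plus dynamic object nodes with per-frame trajectories.

@dataclass
class ObjectNode:
    obj_id: str
    category: str            # e.g. "car", "pedestrian"
    trajectory: list         # per-frame poses, e.g. [(x, y, yaw), ...]

@dataclass
class SceneGraph:
    background: str                          # static background node (placeholder)
    objects: dict = field(default_factory=dict)

    # The three node-editing operations supported by the pipeline:
    def remove(self, obj_id: str) -> None:
        self.objects.pop(obj_id, None)

    def insert(self, node: ObjectNode) -> None:
        self.objects[node.obj_id] = node

    def replace(self, obj_id: str, node: ObjectNode) -> None:
        self.remove(obj_id)
        self.insert(node)
```

In this view, behavior editing amounts to rewriting the `trajectory` fields of existing nodes, while node editing adds or removes entries in `objects`.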
Overall Pipeline. Given an input video and a user instruction, our pipeline first builds a scene graph, which decomposes the scene into a static background node and multiple dynamic object nodes with their trajectories. To execute the instruction, the orchestrator coordinates agents and tools from different modules: the object query module localizes target object nodes in the scene graph based on text descriptions; the object node editing module performs node removal, insertion, and replacement; the behavior editing module generates and refines multi-object trajectories via a feedback loop; finally, the rendering and refinement module renders the edited scene graph and refines it with a video diffusion tool. While the figure illustrates single-object editing, our pipeline also supports multi-object editing.
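The orchestrated control flow above can be sketched as follows. The three agent functions here are trivial stubs standing in for the actual LLM-backed agents, so only the dispatch structure and the reviewer feedback loop are meaningful:

```python
# Illustrative control flow for the orchestrator; the agents below are
# stand-in stubs, not the paper's actual implementations.

def ground_objects(scene: dict, instruction: str) -> list:
    # Object Grounding Agent (stub): match object categories named in the text.
    return [oid for oid, cat in scene.items() if cat in instruction]

def edit_behavior(targets: list, instruction: str) -> dict:
    # Behavior Editing Agent (stub): propose a trajectory per target object.
    return {oid: f"trajectory for: {instruction}" for oid in targets}

def review(trajectories: dict):
    # Behavior Reviewer Agent (stub): return (approved, refined trajectories).
    return True, trajectories

def run_pipeline(scene: dict, instruction: str, max_rounds: int = 3) -> dict:
    targets = ground_objects(scene, instruction)
    trajectories = edit_behavior(targets, instruction)
    # Feedback loop: the reviewer iteratively refines the proposed trajectories.
    for _ in range(max_rounds):
        approved, trajectories = review(trajectories)
        if approved:
            break
    return trajectories  # would then be rendered and diffusion-refined
```

The edited trajectories are then applied to the scene graph before the rendering and refinement stage.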
@article{langdrivectrl2025,
  title   = {LangDriveCTRL: Natural Language Controllable Driving Scene Editing with Multi-modal Agents},
  author  = {Yun He and Francesco Pittaluga and Ziyu Jiang and Matthias Zwicker and Manmohan Chandraker and Zaid Tasneem},
  journal = {arXiv preprint arXiv:XXXX.XXXXX},
  year    = {2025}
}