SurgSora: Decoupled RGBD-Flow Diffusion Model for Controllable Surgical Video Generation



Tong Chen1*,†, Shuya Yang2*, Junyi Wang3*, Long Bai3,†, Hongliang Ren3, Luping Zhou1,‡

1The University of Sydney    2The University of Hong Kong      3The Chinese University of Hong Kong     
*: Equal Contribution; †: Project Lead; ‡: Corresponding Author.

Abstract



Medical video generation has transformative potential for enhancing surgical understanding and pathology insights through precise and controllable visual representations. However, current models remain limited in controllability and authenticity. To bridge this gap, we propose SurgSora, a motion-controllable surgical video generation framework that operates on a single input frame and user-specified motion cues. SurgSora consists of three key modules: the Dual Semantic Injector (DSI), which extracts object-relevant RGB and depth features from the input frame and integrates them with segmentation cues to capture detailed spatial features of complex anatomical structures; the Decoupled Flow Mapper (DFM), which fuses optical flow with semantic RGB-D features at multiple scales to enhance temporal understanding and the modeling of object spatial dynamics; and the Trajectory Controller (TC), which lets users specify motion directions and estimates sparse optical flow to guide the generation process. The fused features condition a frozen Stable Diffusion model to produce realistic, temporally coherent surgical videos. Extensive evaluations demonstrate that SurgSora outperforms state-of-the-art methods in both controllability and authenticity, highlighting its potential to advance surgical video generation for medical education, training, and research.
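To make the conditioning path concrete, the sketch below mocks up the data flow described above: user-clicked sparse motion cues are densified into a flow field (a crude stand-in for the Trajectory Controller), then pooled and concatenated with RGB-D features at multiple scales (a toy analogue of the Decoupled Flow Mapper). All function names, the nearest-neighbour densification, and the average-pool fusion are illustrative assumptions; the actual SurgSora modules are learned networks, and the real conditions feed a frozen Stable Diffusion backbone.

```python
import numpy as np

# Toy sketch of the SurgSora conditioning path (names and operations are
# illustrative assumptions, not the paper's learned modules).

def sparse_to_dense_flow(points, vectors, h, w):
    """Densify user-drawn sparse motion cues with nearest-neighbour
    assignment -- a crude stand-in for the Trajectory Controller's
    sparse optical-flow estimate."""
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1)            # (h*w, 2)
    dists = np.linalg.norm(
        grid[:, None, :] - np.asarray(points)[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)                               # (h*w,)
    return np.asarray(vectors, float)[nearest].reshape(h, w, 2)

def avg_pool(x, s):
    """Average-pool an (H, W, C) map by an integer factor s."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s, -1).mean(axis=(1, 3))

def fuse_multiscale(rgbd_feat, dense_flow, scales=(1, 2, 4)):
    """Toy analogue of the Decoupled Flow Mapper: pool the flow and the
    RGB-D features to each scale and concatenate along channels, giving
    the multi-scale conditions for the frozen diffusion backbone."""
    return [np.concatenate([avg_pool(rgbd_feat, s), avg_pool(dense_flow, s)],
                           axis=-1)
            for s in scales]

# Usage: two clicked trajectory points on a 16x16 frame.
flow = sparse_to_dense_flow([(2, 2), (10, 10)], [(1, 0), (0, 1)], 16, 16)
feats = np.random.rand(16, 16, 4)      # stand-in RGB-D feature map
conds = fuse_multiscale(feats, flow)
print([c.shape for c in conds])        # [(16, 16, 6), (8, 8, 6), (4, 4, 6)]
```

Each pixel inherits the motion vector of its nearest clicked point, so the example's two clicks split the frame into two motion regions; the real system replaces this heuristic with learned flow estimation and attention-based fusion.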


Results on CoPESD Dataset



Results on Trajectory Control



Ablation Study on Trajectory Control