DynOPETs: A Versatile Benchmark for Dynamic Object Pose Estimation and Tracking in Moving Camera Scenarios
* Authors contributed equally to this work
1 ShanghaiTech University, Mobile Perception Lab
2 Fudan University, Multi-Agent Robotic Systems Lab
Abstract
In object pose estimation, scenarios involving both dynamic objects and moving cameras are prevalent. However, the scarcity of corresponding real-world datasets significantly hinders the development and evaluation of robust pose estimation models, largely because accurately annotating object poses in dynamic scenes captured by moving cameras is inherently challenging. To bridge this gap, this paper presents a novel dataset, DynOPETs, and a dedicated data acquisition and annotation pipeline tailored for object pose estimation and tracking in such unconstrained environments. Our efficient annotation method integrates pose estimation and pose tracking techniques to generate pseudo-labels, which are subsequently refined through pose graph optimization. The resulting dataset offers accurate pose annotations for dynamic objects observed from moving cameras. To validate the effectiveness and value of our dataset, we perform comprehensive evaluations of 18 state-of-the-art methods, demonstrating its potential to accelerate research in this challenging domain. The dataset will be made publicly available to facilitate further exploration and advancement in the field.
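To illustrate the refinement step named in the abstract: a pose graph optimization of this kind can fuse per-frame pose estimates (unary pseudo-labels) with frame-to-frame tracking deltas (binary constraints) via nonlinear least squares over SE(3) poses. The sketch below is our hypothetical reconstruction of that idea, not the paper's implementation; the function names, parameterization, and residual weights (`w_unary`, `w_binary`) are illustrative assumptions.

```python
"""Minimal pose-graph refinement sketch (illustrative only).
Fuses per-frame pose pseudo-labels with tracked relative poses."""
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def to_vec(rot, t):
    # (3x3 rotation, translation) -> 6-vector (rotation-vector, translation)
    return np.concatenate([R.from_matrix(rot).as_rotvec(), t])

def from_vec(v):
    # 6-vector -> (3x3 rotation, translation)
    return R.from_rotvec(v[:3]).as_matrix(), v[3:]

def residuals(x, unary, binary, w_unary=1.0, w_binary=5.0):
    n = len(unary)
    poses = x.reshape(n, 6)
    res = []
    # Unary terms: each pose should stay close to its per-frame
    # estimation pseudo-label (Ru, tu).
    for i, (Ru, tu) in enumerate(unary):
        Ri, ti = from_vec(poses[i])
        res.append(w_unary * R.from_matrix(Ri @ Ru.T).as_rotvec())
        res.append(w_unary * (ti - tu))
    # Binary terms: consecutive poses should agree with the tracked
    # relative motion (Rd, td) mapping frame i to frame i+1.
    for i, (Rd, td) in enumerate(binary):
        Ri, ti = from_vec(poses[i])
        Rj, tj = from_vec(poses[i + 1])
        Rpred, tpred = Rd @ Ri, Rd @ ti + td
        res.append(w_binary * R.from_matrix(Rj @ Rpred.T).as_rotvec())
        res.append(w_binary * (tj - tpred))
    return np.concatenate(res)

# Usage: unary = [(R_t, t_t), ...] from a pose estimator,
# binary = [(dR_t, dt_t), ...] from a pose tracker.
# x0 = np.concatenate([to_vec(Ru, tu) for Ru, tu in unary])
# sol = least_squares(residuals, x0, args=(unary, binary))
# refined = [from_vec(v) for v in sol.x.reshape(-1, 6)]
```

Weighting the tracking terms more heavily than the per-frame terms, as in this sketch, smooths jitter in the pseudo-labels while the unary terms anchor the trajectory against drift; the actual balance used by the authors is not specified here.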
Pipeline

Results
For more qualitative visualization and analysis, please refer to our supplementary materials.
BibTeX
@article{meng2025dynopets,
  title={DynOPETs: A Versatile Benchmark for Dynamic Object Pose Estimation and Tracking in Moving Camera Scenarios},
  author={Meng, Xiangting and Yang, Jiaqi and Chen, Mingshu and Yan, Chenxin and Shi, Yujiao and Ding, Wenchao and Kneip, Laurent},
  journal={arXiv preprint arXiv:2503.19625},
  year={2025}
}
Contact Us
Xiangting Meng:
Jiaqi Yang: