RoboPEPP: Teaching AI to See Robot Poses Through Physical Intuition
Imagine a robot arm working in a bustling kitchen or on a manufacturing floor. To collaborate safely with humans or other machines, this robot needs to know exactly where it is in space relative to the camera watching it. This is known as robot pose estimation. Usually, this is straightforward because we can cheat: we ask the robot’s internal motor encoders what its joint angles are. But what if we can’t trust those sensors? What if we are observing a robot we don’t control? Or what if we simply want a purely vision-based redundancy system? ...