![Cover image](https://deep-paper.org/en/paper/file-2179/images/cover.png)
Breaking the Cycle: How BIT Achieves 3D Object Tracking Without Models or Training
Introduction

Imagine you are a robot. I hand you a toy you have never seen before—a uniquely shaped, hand-carved wooden animal. I ask you to track its movement in 3D space as I wave it around. For a human, this is trivial. For a computer vision system, this is a nightmare scenario. Most state-of-the-art 3D object tracking systems rely on priors: they either need a precise 3D CAD model of the object beforehand (model-based), or they need to have “seen” thousands of similar objects during a massive training phase (training-based). If you don’t have the CAD file and you haven’t trained a neural network on that specific category of object, the system fails. ...