Breaking Robots with Geometry: How to Red-Team Manipulation Policies
Imagine you have trained a robot to pick up a screwdriver. After thousands of simulated training trials, it achieves a 95% success rate, and you are ready to deploy. But then, in the real world, you hand the robot a screwdriver that is slightly bent, or whose handle is a bit thicker than the one in the training set. Suddenly, the robot fails catastrophically: it slips, it drops the object, or it can’t find a grip.
This is a classic problem in robotics: brittleness to out-of-distribution geometry. Standard benchmarks evaluate robots on curated, “nominal” object sets. They rarely test how the system handles the messy, imperfect variations found in reality.
In this post, we are diving deep into a new framework called Geometric Red-Teaming (GRT). This research proposes a way to automatically discover “CrashShapes”—physically plausible, deformed versions of objects that cause pre-trained robot policies to fail. By treating the policy as a black box and using simulation-in-the-loop optimization, GRT exposes the hidden vulnerabilities of robotic manipulation systems.

As shown in Figure 1, the system takes standard objects (top row) and discovers subtle geometric changes (bottom row) that lead to bad grasps, slippage, or insertion failures—even when the deformation looks minor to a human observer.
The Problem: Static Benchmarks vs. Dynamic Reality
In fields like Computer Vision and Natural Language Processing (NLP), “Red-Teaming” is a standard practice. Researchers actively try to break their models using adversarial examples—images with imperceptible noise that trick a classifier, or prompts that bypass an LLM’s safety filters.
Robotics lacks a robust equivalent for 3D geometry. Most evaluation happens on static datasets like YCB (a standard set of everyday objects). If a robot can pick up the YCB mustard bottle, we assume it can pick up any mustard bottle. This assumption is dangerous. Geometric variations alter affordances—the specific parts of an object that allow for interaction (like a handle or a rim). If a grasp policy relies on a specific curvature that disappears with a slight dent, the policy is fragile.
GRT aims to answer the question: Can we automatically generate geometric deformations that induce catastrophic failure, while keeping the object physically plausible?
The Solution: Geometric Red-Teaming (GRT)
GRT is a modular framework that integrates three distinct concepts:
- VLM-Guided Selection: Using Vision-Language Models (like GPT-4o) to decide where to deform an object based on semantic reasoning.
- Jacobian Field Deformation: A mathematical method to deform the mesh smoothly and realistically.
- Black-Box Optimization: A genetic-style algorithm that evolves these shapes inside a physics simulator to minimize the robot’s success rate.

The workflow, illustrated in Figure 2, is cyclical. It starts with a nominal object, identifies critical points to manipulate, generates a population of deformed “candidates,” tests them in a simulator (Isaac Gym), and evolves the population toward failure.
Step 1: Where to Deform? (VLM Guidance)
You cannot simply move vertices of a 3D mesh at random. Doing so would result in spiky, jagged, or non-manifold meshes that look like glitches rather than real objects. Furthermore, not all parts of an object matter for a specific task. If you are testing a robot’s ability to insert a USB drive, deforming the plastic casing might not matter, but slightly bending the connector head is critical.
To solve this, the researchers employ a Vision-Language Model (VLM). They developed a two-stage prompting strategy.
- Geometric Reasoning: The VLM is shown multiple views of the object with numbered keypoints overlaid. It is asked to identify which points can serve as “handles” (points to move) and “anchors” (points to keep fixed) to create meaningful shape variations.
- Task-Critical Ranking: The VLM ranks these subsets based on the specific task (e.g., “red-teaming a grasping policy”). It looks for changes that are plausible but likely to cause trouble.

This semantic grounding ensures the optimization search space focuses on the parts of the object that actually matter, making the process much more efficient than random searching.
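As a concrete (and entirely illustrative) sketch, the two-stage prompting might look like the following, using the OpenAI Python SDK with GPT-4o. The prompt wording, the JSON output format, and the view filenames are assumptions, not the paper's actual implementation:

```python
# Illustrative sketch of a two-stage VLM prompting loop (not the paper's
# exact prompts). Assumes rendered views with numbered keypoints already exist.
import base64
import json
from openai import OpenAI

client = OpenAI()

def ask_vlm(image_paths: list[str], prompt: str) -> str:
    """Send one or more annotated renders plus a text prompt to GPT-4o."""
    content = [{"type": "text", "text": prompt}]
    for path in image_paths:
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

# Stage 1: geometric reasoning -- which keypoints are handles vs. anchors?
stage1 = ask_vlm(
    ["view_front.png", "view_side.png", "view_top.png"],
    "Each image shows the same object with numbered keypoints. Propose "
    "subsets of points to MOVE (handles) and points to KEEP FIXED (anchors) "
    "that would yield plausible shape variations. Reply with JSON: "
    '[{"handles": [...], "anchors": [...]}]',
)
candidates = json.loads(stage1)  # assumes the model returned valid JSON

# Stage 2: task-critical ranking -- which subsets matter for this task?
ranking = ask_vlm(
    ["view_front.png"],
    "Task: red-teaming a grasping policy. Rank these handle/anchor subsets "
    f"by how likely moving the handles is to disrupt the task:\n{stage1}",
)
# The top-ranked subset seeds the deformation search in Step 2.
```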
Step 2: How to Deform? (Jacobian Fields)
Once the “handle” points are selected, the system needs a way to move them while dragging the rest of the mesh along smoothly. The researchers adapted a technique called As-Plausible-As-Possible (APAP), specifically its Jacobian field deformation stage.
The mathematical goal is to find new vertex positions (\(V^*\)) that minimize distortion in the local geometry (preserving the original triangles’ orientation and scale as much as possible) while satisfying the constraints of the handle and anchor points.
\[
V^{*} \;=\; \arg\min_{V}\; \bigl\lVert L V - \nabla^{\top} J \bigr\rVert_{2}^{2} \;+\; \lambda \sum_{a} \bigl\lVert v_{a} - T_{a} \bigr\rVert_{2}^{2}
\]
In this equation:
- \(L\) represents the Laplacian (describing local mesh connectivity).
- \(J\) is the Jacobian field (the local rotation/scale transformations).
- The second term ensures that anchor points (\(T_a\)) stay where they are supposed to be.
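The paper's pipeline solves APAP's Poisson system with a cotangent Laplacian and per-face Jacobians. As a rough stand-in, the sketch below solves an analogous least-squares problem with a uniform graph Laplacian, soft handle targets, and pinned anchors; it illustrates the structure of the solve, not the exact method:

```python
# Simplified stand-in for the Jacobian-field solve: a Laplacian-based
# least-squares deformation with soft handle/anchor constraints.
# (The real pipeline uses cotangent weights and per-face Jacobians.)
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def deform(V, edges, handle_ids, handle_targets, anchor_ids, w=10.0):
    """V: (n,3) float vertices; edges: (m,2) int pairs; returns deformed (n,3)."""
    n = V.shape[0]
    # Uniform graph Laplacian L = D - A (cotangent weights in the real method).
    rows = np.concatenate([edges[:, 0], edges[:, 1]])
    cols = np.concatenate([edges[:, 1], edges[:, 0]])
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

    # Soft constraint rows: handles pulled to targets, anchors pinned in place.
    ids = np.concatenate([handle_ids, anchor_ids])
    C = sp.coo_matrix((np.full(len(ids), w), (np.arange(len(ids)), ids)),
                      shape=(len(ids), n)).tocsr()
    targets = np.vstack([handle_targets, V[anchor_ids]])

    # Solve min ||L V' - L V||^2 + w^2 ||V'[ids] - targets||^2, per coordinate,
    # i.e. preserve differential coordinates while satisfying the constraints.
    M = sp.vstack([L, C]).tocsr()
    V_new = np.zeros_like(V)
    for c in range(3):
        b = np.concatenate([L @ V[:, c], w * targets[:, c]])
        V_new[:, c] = lsqr(M, b)[0]
    return V_new
```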
Interestingly, the researchers found that the full APAP pipeline, which includes a “diffusion prior” that pulls shapes toward a learned distribution of plausible objects, was actually harmful for certain engineering objects.

As seen in Figure 6, applying the full diffusion prior to a USB plug (middle column) destroyed the connector geometry, making it impossible to insert regardless of the robot’s skill. The Jacobian-only method (right column) preserved the structural integrity of the connector while allowing for the necessary deformation.
Furthermore, omitting the diffusion prior offered a massive speedup—reducing processing time from 10 minutes per object to just 22 seconds, which is crucial when running thousands of optimization loops.
Step 3: Finding the Failure (Optimization)
With a method to deform objects, the system now needs to find the specific deformation parameters \(\theta\) (the movement vectors of the handle points) that minimize the robot’s performance \(\mathcal{J}\).
\[
\theta^{*} \;=\; \arg\min_{\theta}\; \mathcal{J}(\theta)
\]
Because the simulator (Isaac Gym) and the policy success metric are generally non-differentiable (you can’t easily calculate a gradient), standard gradient descent won’t work. Instead, GRT uses a population-based, gradient-free approach called TOPDM.

As outlined in Algorithm 1, the process works as follows (a minimal code sketch follows the list):
- Initialize a population of random deformations.
- Evaluate every candidate in the simulator (rollout).
- Select Elites: Pick the top percentage of deformations that caused the lowest success rates.
- Mutate: Create the next generation by slightly perturbing the elites.
- Repeat until a catastrophic failure is found or time runs out.
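Here is that loop as a minimal, self-contained sketch. `rollout_success_rate` stands in for the simulator-in-the-loop evaluation (the expensive step), and every hyperparameter below is an illustrative assumption rather than a setting from the paper:

```python
# Sketch of the population-based, gradient-free search in the spirit of
# TOPDM / Algorithm 1: evaluate, select elites, mutate, repeat.
import numpy as np

def red_team(rollout_success_rate, dim, pop_size=64, elite_frac=0.1,
             sigma=0.02, iters=50, fail_thresh=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    pop = rng.normal(0.0, sigma, size=(pop_size, dim))  # handle displacements
    best_theta, best_score = None, 1.0
    for _ in range(iters):
        scores = np.array([rollout_success_rate(theta) for theta in pop])
        order = np.argsort(scores)                  # lowest success = best attack
        elites = pop[order[: max(1, int(elite_frac * pop_size))]]
        if scores[order[0]] < best_score:
            best_theta, best_score = pop[order[0]].copy(), scores[order[0]]
        if best_score <= fail_thresh:               # catastrophic failure found
            break
        # Next generation: perturbed copies of the elites.
        parents = elites[rng.integers(len(elites), size=pop_size)]
        pop = parents + rng.normal(0.0, sigma, size=(pop_size, dim))
    return best_theta, best_score
```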
To ensure the deformations don’t become ridiculous (like turning a mug into a flat pancake), the researchers introduced a Smoothness Score (SS) constraint.
\[
\mathrm{SS}(\theta) \;=\; \frac{1}{|\mathcal{H}|} \sum_{h \in \mathcal{H}} \lVert \delta_{h} \rVert_{2}
\]
where \(\mathcal{H}\) is the set of handle points and \(\delta_h\) is the displacement applied to handle \(h\).
This score measures the average displacement of the handle points. The optimizer filters out any candidate that exceeds a specified “deformation budget” \(\tau\):
\[
\mathrm{SS}(\theta) \;\le\; \tau
\]
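In code, this budget check is a cheap filter applied to every candidate before it is sent to the simulator. The sketch below assumes \(\theta\) is a flat vector of per-handle displacement vectors; the function names are hypothetical:

```python
# Sketch of the deformation-budget filter: candidates whose average handle
# displacement exceeds tau are discarded before any simulator rollout.
import numpy as np

def smoothness_score(theta: np.ndarray, n_handles: int) -> float:
    """Mean displacement magnitude over handle points; theta is (n_handles*3,)."""
    disp = theta.reshape(n_handles, 3)
    return float(np.linalg.norm(disp, axis=1).mean())

def within_budget(population: np.ndarray, n_handles: int, tau: float) -> np.ndarray:
    """Keep only candidates with SS(theta) <= tau."""
    mask = np.array([smoothness_score(t, n_handles) <= tau for t in population])
    return population[mask]
```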
Experimental Results: The Collapse
The researchers tested GRT across three distinct domains:
- Rigid Grasping: Picking up YCB objects using Contact-GraspNet.
- High-Precision Insertion: Inserting a USB-like plug into a socket.
- Articulated Manipulation: Opening a drawer.
The results were stark. Policies that performed near-perfectly on nominal objects crumbled under GRT’s discovered shapes.

In Table 1, “Final Drop” indicates the reduction in success rate.
- Grasping: Dropped by ~76%.
- Articulated Manipulation: Dropped by ~61-98%.
- Insertion: Dropped by ~67-77%.
The visual evolution of these failures is fascinating. The optimization process slowly morphs the object, hunting for the policy’s blind spot.

Look at the L-bracket in the bottom row of Figure 4. The change is subtle, yet the success rate drops from 97.4% to 11.4%. This highlights how “brittle” learned policies can be; they overfit to specific geometric features of the training object.
Does VLM Guidance Matter?
You might wonder if we really need a fancy VLM to choose the handle points. Couldn’t we just pick random points? The researchers performed an ablation study to test this.

Table 2 compares VLM-Guided + Optimization (the proposed method) against heuristic (random) selection and simple Gaussian perturbation.
- VLM guidance achieved the highest drop in performance (76.3%).
- It reached a 50% failure rate in fewer iterations (7.32 on average) than heuristic selection.
- It kept the geometric complexity lower (\(\Delta\) Complexity 0.041), meaning the shapes were simpler and more realistic, yet more effective at breaking the robot.
Blue-Teaming: Fixing the Robot
The goal of Red-Teaming isn’t just to break things—it’s to make them stronger. This is where Blue-Teaming comes in.
The researchers took the “CrashShapes” discovered by GRT and fed them back into the training pipeline. They fine-tuned the policies using PPO (Proximal Policy Optimization) on these difficult geometries.
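Conceptually, the change to the training setup is small: each fine-tuning episode samples its object mesh from the nominal shape plus the discovered CrashShapes. The sketch below is schematic; `InsertionEnv`, the mesh filenames, and the mixing weights are hypothetical placeholders, not the paper's configuration:

```python
# Schematic blue-teaming setup: mix CrashShapes into the PPO fine-tuning
# episode stream. `InsertionEnv` and its `reset(mesh=...)` argument are
# hypothetical placeholders for the paper's Isaac Gym environments.
import random

CURRICULUM = [
    ("nominal.obj",      0.4),  # keep nominal performance from regressing
    ("crashshape_1.obj", 0.3),  # hard examples discovered by GRT
    ("crashshape_2.obj", 0.3),
]

def sample_mesh() -> str:
    """Draw one object mesh per episode, weighted toward hard cases."""
    meshes, weights = zip(*CURRICULUM)
    return random.choices(meshes, weights=weights, k=1)[0]

# Inside the PPO training loop (schematic):
#   env.reset(mesh=sample_mesh())   # episode uses nominal or CrashShape mesh
#   rollouts = collect_rollouts(env, policy)
#   ppo_update(policy, rollouts)
```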

The results in Table 3 are encouraging.
- For the State-based insertion policy, success on “CrashShape 1” (CS-1) jumped from 25.0% to 87.8%.
- Crucially, the performance on the Nominal (original) object remained high (87.5%).
This proves that CrashShapes are valid training signals. They aren’t “adversarial examples” in the sense of being impossible nonsense; they are valid, hard-mode examples that force the policy to generalize better.
From Simulation to Reality
A common critique of simulation-based research is the “Sim-to-Real gap.” Do these subtle geometric failures actually matter in the real world, or are they just exploiting physics engine bugs?
To verify this, the team 3D-printed the CrashShapes discovered in the simulator and tested them on physical robots (xArm 6 for insertion, Franka Emika Panda for grasping).


The real-world results mirrored the simulation closely.

As shown in Table 4:
- Insertion: The original policy succeeded 90% of the time on the nominal plug. On CS-1, it plummeted to 22.5%.
- Recovery: When they deployed the “Blue-Teamed” policy (fine-tuned in sim), the real-world success on CS-1 recovered to 90.0%.
This is a powerful validation. It confirms that GRT is discovering physical, geometric vulnerabilities that transfer to reality, and that simulation-based correction effectively repairs these vulnerabilities in the real world.
Conclusion and Implications
Geometric Red-Teaming (GRT) introduces a rigorous way to stress-test robotic manipulation. Instead of relying on static test sets that give us a false sense of security, GRT proactively hunts for the geometric “edge cases” that cause failure.
Key Takeaways:
- Geometry is a vector of failure: Small changes in shape can completely break policies that seem robust.
- Semantic Guidance is efficient: Using VLMs to guide the deformation search finds failures faster and yields more plausible shapes than random noise.
- Actionable feedback: The discovered CrashShapes are not just for evaluation; they are valuable training data that significantly improve real-world robustness.
As robots move out of controlled factory environments and into unstructured homes and offices, tools like GRT will be essential. We cannot manually curate every possible bent spoon or dented can a robot might encounter. We need automated adversaries to find these failures for us, so we can fix them before deployment.