Introduction
For centuries, sailors navigated the open oceans by looking up at the stars. Today, spacecraft orbiting Earth and traveling through the solar system do exactly the same thing. By identifying specific star patterns, a satellite determines its precise orientation in space—known as its “attitude.” This process is handled by a device called a Star Tracker.
Standard star trackers use Active Pixel Sensors (APS)—essentially the same technology found in your smartphone camera. They take a picture of the sky, process the frame to find stars, identify them against a catalog, and compute the spacecraft's orientation. While highly accurate, they are slow: most operate at 2-10 Hz (frames per second). If a satellite needs to spin quickly or stabilize against rapid vibrations, the image blurs, the standard star tracker fails, and the spacecraft loses its navigation solution.
But what if a camera didn’t wait to take a full picture? What if it worked like the human eye, reacting only to changes in light?
This is the premise of Event-Based Star Tracking. In a recent paper titled “EBS-EKF: Accurate and High Frequency Event-based Star Tracking,” researchers from Kitware and the University of Dayton propose a new method that leverages neuromorphic “event cameras” to track stars at update rates up to 1000 Hz.

As shown in Figure 1, their method (EBS-EKF) not only operates at much higher frequencies than standard APS trackers but also maintains accuracy where other event-based algorithms drift. This blog post dives into the physics, the mathematics, and the real-world experiments behind this breakthrough.
Background: Eyes vs. Shutters
To understand why this research is significant, we first need to understand the hardware.
The Limitations of Standard Cameras (APS)
Traditional cameras integrate light over a fixed exposure time to produce a frame. In the context of star tracking, this creates a trade-off. Long exposures allow you to see dim stars but cause motion blur if the satellite moves. Short exposures freeze motion but make dim stars invisible. Furthermore, processing full frames is computationally expensive, limiting the update rate.
The Promise of Event-Based Sensors (EBS)
Event-based sensors (EBS) work differently. Each pixel operates independently and asynchronously. A pixel only sends data—an “event”—when it detects a change in brightness (logarithmic intensity) that exceeds a certain threshold.

As illustrated in Figure 2(b), when a star (a point of light) moves across a pixel, the intensity rises and then falls. The sensor generates a stream of discrete events (a toy model in code follows this list):
- Positive events (Red): Intensity increasing.
- Negative events (Green): Intensity decreasing.
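A toy per-pixel model makes this rule concrete. In the sketch below, the threshold value and the Gaussian “star transit” signal are illustrative choices, not sensor specifications:

```python
import numpy as np

def events_from_intensity(t, intensity, threshold=0.2):
    """Emit (time, polarity) events whenever log intensity moves more than
    `threshold` away from the reference level set at the last event."""
    log_I = np.log(intensity)
    ref = log_I[0]                      # per-pixel reference level
    events = []
    for ti, li in zip(t, log_I):
        while li - ref > threshold:     # brightness rose enough: positive event
            ref += threshold
            events.append((ti, +1))
        while ref - li > threshold:     # brightness fell enough: negative event
            ref -= threshold
            events.append((ti, -1))
    return events

# A star drifting across one pixel looks like a Gaussian pulse in time:
t = np.linspace(0.0, 1.0, 1000)
intensity = 1.0 + 5.0 * np.exp(-((t - 0.5) / 0.05) ** 2)
print(events_from_intensity(t, intensity)[:4])  # positive events on the rise, negative on the fall
```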
Because these sensors have microsecond resolution and high dynamic range, they seem perfect for high-speed star tracking. However, previous attempts to use them have relied on simulations that treat stars as simple moving white dots. The reality of the night sky, as the authors discovered, is much more complex.
The Physics of Low-Light Electronics
The core innovation of the EBS-EKF paper isn’t just a better algorithm; it’s a better understanding of the physics of the sensor itself.
In bright conditions (like daylight), an event camera reacts almost instantly to changes. However, star tracking happens in the dark. The researchers analyzed the circuitry of EBS pixels and found that in low-light conditions, the bandwidth (speed) of the pixel’s photoreceptor drops significantly. It behaves like a low-pass filter with a cutoff frequency that depends on the intensity of the light.
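As a rough intuition pump, the sketch below applies a first-order low-pass filter whose time constant shrinks as photocurrent grows. The \(1/I\) scaling and the constants are illustrative assumptions, not the paper's circuit analysis:

```python
import numpy as np

def photoreceptor_lowpass(log_I, dt, photocurrent, tau_ref=1e-2, i_ref=1.0):
    """First-order low-pass of the log-intensity signal. Toy model: the
    time constant scales like 1/photocurrent, so a bright star (high
    current) is tracked quickly while a dim star responds sluggishly."""
    tau = tau_ref * i_ref / photocurrent    # intensity-dependent time constant
    alpha = dt / (tau + dt)                 # discrete-time smoothing factor
    out = np.empty_like(log_I)
    out[0] = log_I[0]
    for k in range(1, len(log_I)):
        out[k] = out[k - 1] + alpha * (log_I[k] - out[k - 1])
    return out
```

Events are triggered off this filtered signal, so a sluggish (dim-star) pixel fires its events late. The next subsection turns that lag into a positional offset.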
The Intensity-Dependent Offset
This creates a fascinating, albeit problematic, phenomenon. Brighter stars generate higher photocurrents, allowing the pixel to react faster. Dimmer stars generate lower currents, causing the pixel to react slower.

Look at Figure 3 above.
- Panel (a): Notice the lag between the peak light intensity (yellow) and the event likelihood (green).
- Panel (c): This plot is crucial. It shows that the “center” of the cloud of events shifts position based on how bright the star is (Magnitude \(m_s\)). A bright star (lower magnitude value) triggers events ahead of the true center compared to a dim star.
Existing methods assumed the events cluster perfectly around the star’s center. In reality, the “event cloud” leads or lags the star depending on its brightness. If you ignore this, your star tracker will be inaccurate.
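One added reasoning step makes this concrete: if the photoreceptor responds with an effective lag \(\tau(I)\) that shrinks as intensity \(I\) grows, and the star image crosses the sensor with velocity \(\bar{\mathbf{v}}\), then events fire where the star was \(\tau(I)\) seconds ago. The event cloud is displaced from the true centroid by roughly

\[
\Delta \mathbf{x} \;\approx\; -\,\tau(I)\,\bar{\mathbf{v}},
\]

an intuition-level relation rather than the paper's exact expression. Brighter stars (smaller \(\tau\)) lag less, which is why their events land ahead of a dimmer star's.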
The Mathematical Model
The researchers derived a “Low-Light (LL) Event Likelihood” model. While the full differential equation (shown below) captures the exact circuit behavior, it is too computationally heavy to solve for every single event in real time.

Instead, they proposed a clever approximation. They model the likelihood of a positive event as a Gaussian distribution, but they shift the center of that Gaussian based on the star’s brightness (\(m_s\)) and velocity (\(\bar{\mathbf{v}}\)).
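In schematic form (placeholder symbols, not the paper's exact parameterization), the positive-event likelihood becomes a Gaussian whose mean is displaced along the direction of motion:

\[
p^{+}(\mathbf{x}) \;\approx\; \mathcal{N}\!\left(\mathbf{x};\; \mathbf{c}_s + \delta(m_s, \lVert\bar{\mathbf{v}}\rVert)\,\hat{\mathbf{v}},\; \sigma^{2}\mathbf{I}\right),
\]

where \(\mathbf{c}_s\) is the true star centroid, \(\hat{\mathbf{v}} = \bar{\mathbf{v}}/\lVert\bar{\mathbf{v}}\rVert\) is the unit motion direction, and \(\delta\) is the brightness- and speed-dependent offset.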

This equation allows them to correct the position of every incoming event based on the known magnitude of the star it likely belongs to, aligning the data with the true star position.
The Algorithm: EBS-EKF
With a better signal model in hand, the authors built a tracking system using an Extended Kalman Filter (EKF). An EKF is a standard tool in robotics for estimating the state of a system (like position and velocity) given noisy measurements.
3D State Estimation
Previous works often used simplified 2D filters that tracked stars on the image plane. However, a spacecraft rotates in 3D space. The EBS-EKF instead estimates a full 3D state with six degrees of freedom:
- Attitude: A Quaternion \(\mathbf{q}\) (representing 3D rotation).
- Angular Velocity: \(\boldsymbol{\omega}\) (how fast it is spinning).
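For context, the prediction step rests on the textbook quaternion kinematics (a standard identity, not something specific to this paper):

\[
\dot{\mathbf{q}} \;=\; \tfrac{1}{2}\,\mathbf{q} \otimes \begin{bmatrix} 0 \\ \boldsymbol{\omega} \end{bmatrix},
\]

where \(\otimes\) denotes quaternion multiplication. Integrating this forward between events is what lets the filter predict the attitude at any microsecond timestamp.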
The algorithm follows a continuous loop, operating on individual events rather than waiting for frames.

As described in Algorithm 2, the loop has four steps (a simplified code sketch follows the list):
- Predict: When a new event arrives, the EKF predicts the camera’s current orientation based on the last known velocity.
- Match: It checks if the event falls near a known star from an onboard star catalog.
- Correct: If there is a match, it calculates the “offset correction” based on that star’s brightness (using the physics model discussed earlier).
- Update: It updates the camera’s estimated rotation and velocity using the corrected event position.
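Here is a minimal, self-contained sketch of that per-event loop in Python. To stay runnable it tracks a 2D image-plane shift with fixed gains rather than the paper's 3D quaternion state and proper EKF covariances; every name, constant, and the catalog are invented placeholders:

```python
import numpy as np

# Hypothetical mini-catalog: image-plane star positions (px) and magnitudes.
CATALOG = [
    {"xy": np.array([120.0, 80.0]), "mag": 2.2},
    {"xy": np.array([300.0, 210.0]), "mag": 4.7},
]

def offset_model(mag, v, k=0.02):
    """Stand-in for the brightness-dependent offset: dimmer stars (larger
    magnitude) trail further against the motion. Form and gain are assumed."""
    return -k * mag * np.asarray(v)

def process_event(event_xy, event_t, state, gate_px=3.0, gain=0.1, gain_v=0.01):
    """One predict/match/correct/update cycle for a single event."""
    dt = event_t - state["t"]

    # 1. Predict: drift the estimated image shift forward by the velocity.
    shift = state["shift"] + state["v"] * dt

    # 2. Match: nearest catalog star, gated by pixel distance.
    star = min(CATALOG, key=lambda s: np.linalg.norm(s["xy"] + shift - event_xy))
    residual = event_xy - (star["xy"] + shift)
    if np.linalg.norm(residual) > gate_px:
        return {"shift": shift, "v": state["v"], "t": event_t}   # unmatched: predict only

    # 3. Correct: subtract the magnitude-dependent event offset.
    residual = residual - offset_model(star["mag"], state["v"])

    # 4. Update: fixed gains stand in for the full EKF gain computation.
    return {"shift": shift + gain * residual,
            "v": state["v"] + gain_v * residual,
            "t": event_t}

# Usage: feed events one at a time, starting from a zero state.
state = {"shift": np.zeros(2), "v": np.zeros(2), "t": 0.0}
state = process_event(np.array([121.0, 80.5]), 0.001, state)
```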
This approach allows the tracker to update its attitude estimate up to 1000 times per second (1 kHz).
Experimental Validation
Theory is good, but does it work under the night sky? This paper is notable because the authors moved away from LCD screens and simulations, building a rigorous real-world data collection rig.
The “Frankenstein” Rig
They rigidly mounted a Prophesee EVK4-HD event camera next to a commercial, space-ready star tracker (Rocket Lab ST-16RT2), then placed the dual-camera setup on a motorized pan-tilt unit to sweep across the sky, simulating satellite maneuvers.

Figure 5 shows the setup. The critical component here is the Sync Pulse Generator: event cameras have their own internal clocks, so to compare accuracy against the commercial tracker the researchers had to synchronize the two data streams to within microseconds (illustrated in Figure 11 below).
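The alignment itself can be simple once both devices record the same pulse train. A minimal sketch, assuming the pulses are already paired one-to-one and clock drift is negligible over a recording:

```python
import numpy as np

def estimate_clock_offset(pulse_times_ebs, pulse_times_tracker):
    """Least-squares constant offset between two devices' timestamps of
    the same shared sync pulses (hypothetical sketch; a real pipeline
    also has to pair pulses and may fit drift as well as offset)."""
    ebs = np.asarray(pulse_times_ebs, dtype=float)
    trk = np.asarray(pulse_times_tracker, dtype=float)
    return float(np.mean(trk - ebs))   # tracker_time ≈ ebs_time + offset
```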

Results: Accuracy and Robustness
The researchers compared their EBS-EKF against three state-of-the-art event-based methods:
- ICP: Iterative Closest Point (matching event clouds to stars).
- Hough: Using Hough transforms to find star trails.
- 2D-KF: A 2D Kalman Filter approach.
The results were stark.

Figure 7 shows the error relative to the commercial star tracker during a “Velocity Sweep.”
- Drift: Notice the “sawtooth” pattern in the existing methods (Purple/Green). They drift off-track and only snap back when they perform a slow, absolute star identification (usually every few seconds).
- Stability: The proposed EBS-EKF (Red line) stays consistently accurate, maintaining a lock on the stars without drifting, thanks to the 3D EKF and the continuous updates.
The Importance of the Offset
Remember the physics model regarding star brightness? Figure 8 proves why it matters.

In this experiment, the tracker was running smoothly until a bright star (Magnitude 2.23) entered the field of view (shaded blue region).
- Black Line (No Offset): The error spikes. The bright star’s events are arriving “early” due to higher circuit bandwidth, confusing the filter.
- Red Line (With Offset): The error remains low. The algorithm knows the star is bright, anticipates the offset, and corrects for it.
High-Speed Performance
One of the main selling points of event cameras is speed. The commercial Rocket Lab tracker has a safety cutoff: if the satellite spins faster than 3 degrees/second, it stops providing solutions to prevent inaccurate blurred readings.

As shown in Figure 1(c) (and detailed in the paper’s text), the EBS-EKF continued to track accurately even at 7.5 degrees/second, speeds where the traditional sensor gave up entirely. This capability is vital for agile satellites that need to slew quickly to track ground targets or communicate with ground stations.
Comparison Summary
The authors aggregate their findings into a comparison table. While qualitative, it highlights the distinct advantages of the new approach.

The key takeaways from Table 1 are:
- Update Rate: 500-1000 Hz (compared to ~10 Hz for standard trackers).
- Evaluation: This is the first work evaluated on real night sky data, not just LCD screens.
- Centroiding Accuracy: ~0.4 pixels, significantly better than the ~1.8 to 3.0 pixel error of previous methods.
Conclusion
The work presented in EBS-EKF represents a significant maturation of event-based vision for space applications. By moving beyond simple simulations and grappling with the complex photophysics of event sensors in low light, the researchers demonstrated that event cameras are not just a theoretical curiosity—they are a viable competitor to established technology.
The combination of a physics-informed signal model (accounting for brightness-dependent latency) and a robust 3D Extended Kalman Filter allows for star tracking that is:
- Faster: 1000 Hz updates.
- More Robust: Works during high-speed rotations.
- Accurate: Corrects for sensor artifacts that previously caused drift.
As we look toward a future of more agile, autonomous spacecraft, “eyes” that react to the stars rather than just photographing them might become the new standard for navigation.
This blog post summarizes the research paper “EBS-EKF: Accurate and High Frequency Event-based Star Tracking” by Reed et al. (Kitware/University of Dayton).