Bias and Denoising Techniques to Improve Dim RSO Detection by up to 2.9x with an Event-based Vision Sensor

Brian McReynolds, U.S. Air Force Academy; Rachel Oliver, Air Force Institute of Technology; Peter McMahon-Crabtree, AFRL Space Vehicles Directorate; Michal Zolnowski, 6ROADS Optical Observatories; Tobi Delbruck, Institute of Neuroinformatics, UZH/ETH Zurich

Keywords: Event-based, Neuromorphic, SDA, Optimization, Sensitivity, Dim object detection

Abstract:

Neuromorphic Event-based Vision Sensors (EVS) have garnered recent interest for space sensing tasks due to performance advantages like high temporal resolution, wide dynamic range, and data sparsity resulting from their frame-free sensing paradigm. Several studies showcase their capabilities in detecting resident space objects (RSOs), but only three have reported empirical measurements of limiting performance. Early studies concluded that second- and third-generation EVS lag state-of-the-art scientific CMOS cameras in absolute sensitivity but offer significant advantages in temporal resolution and data sparsity, whereas current fourth-generation cameras have made significant improvements in low-light performance and sensitivity. However, performance limits have only been reported for a specific combination of sensor and optic, so they do not directly translate to fundamental performance limits and are not useful for predicting sensor capabilities when paired with different optical systems. Additionally, no previous studies have explored the vast parameter space of user-defined sensor biases available in EVS.

To establish the true limits of EVS performance, we analyze pixel-level operation and apply novel techniques for managing EVS noise and sensitivity, improving the detection of dim, sub-pixel, point-source objects. Using a custom-built, photometrically calibrated RSO simulator, we explore the many degrees of freedom offered by adjustable camera biases and report results from the most advanced commercially available EVS, the Sony/Prophesee IMX636.

Finally, we demonstrate a novel denoising method to offset the excess noise introduced by increasing camera sensitivity. We show that a small fraction (~2%) of pixels produces the vast majority (>99%) of background events. Additionally, we show that each pixel's average noise rate remains relatively constant in time, allowing us to apply a computationally efficient background subtraction method.
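The abstract does not give the implementation details of this background subtraction, but because per-pixel noise rates are reported to be stable in time, one plausible minimal sketch is a per-pixel rate map built from a calibration interval, with events from the noisiest pixels suppressed. The array layout, the `max_rate_hz` threshold, and the function names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def hot_pixel_mask(xs, ys, duration_s, height, width, max_rate_hz=10.0):
    """Flag pixels whose average background event rate exceeds max_rate_hz.

    xs, ys: integer pixel coordinates of events from a star-free
    calibration interval of length duration_s seconds.
    Returns a boolean (height, width) mask; True marks a noisy pixel.
    """
    counts = np.zeros((height, width), dtype=np.int64)
    np.add.at(counts, (ys, xs), 1)      # per-pixel event counts
    rates = counts / duration_s         # average noise rate per pixel (Hz)
    return rates > max_rate_hz

def denoise(xs, ys, mask):
    """Drop events that land on masked (hot) pixels."""
    keep = ~mask[ys, xs]
    return xs[keep], ys[keep]
```

Because the mask is computed once and applied with a single array lookup per event, the per-event cost is O(1), consistent with the "computationally efficient" claim in the abstract.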

This paper highlights several key results. Foremost, our optimized biases increase sensitivity by up to 1.05 visual magnitudes (2.6x) compared to the stock biases that have been used in nearly all SDA applications to date. Our detailed characterization methodology demonstrates that, when optimally biased, modern EVS are capable of detecting point-source objects producing as few as 1.3k photons per second with transit speeds of 6.8 pix/sec on the sensor's focal plane. Further, after fitting a model that accounts for the pixel's finite temporal response near the dark current limit, our results suggest that for transit speeds on the order of 1 pixel/second, the sensitivity limit can be lowered to approximately 450 photons/second. These characterization results are captured in an open-source software model allowing potential EVS users to predict sensor response as a function of object brightness (visual magnitude), optical system (focal length, f/#, point spread function), and object/scan speeds. Combining bias optimization and noise reduction during an on-sky collection, we demonstrate the ability to detect 2.9x more star streaks compared to stock biases with an effective data rate of less than 0.02 events/pixel/second.
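A first step in any such response model is converting visual magnitude and aperture into a photon rate at the focal plane. The sketch below shows only that front-end conversion; the zero-point constant and throughput value are textbook approximations for the V band, not the paper's calibrated numbers, and the function name is hypothetical:

```python
import math

# Approximate V-band zero point: a magnitude-0 star delivers roughly
# 1e10 photons per second per square meter (assumption, not from the paper).
PHOTON_RATE_M0 = 1.0e10

def photons_per_second(v_mag, aperture_diameter_m, throughput=0.8):
    """Approximate photon rate collected from a point source.

    v_mag: visual magnitude of the object.
    aperture_diameter_m: clear aperture diameter of the optic (m).
    throughput: assumed end-to-end optical transmission fraction.
    """
    area = math.pi * (aperture_diameter_m / 2.0) ** 2
    return PHOTON_RATE_M0 * 10 ** (-0.4 * v_mag) * area * throughput
```

The magnitude scale is logarithmic, so a 5-magnitude difference corresponds to a factor of 100 in photon rate; a full response model would then spread this rate over the point spread function and compare the per-pixel contrast against the biased pixel's threshold and bandwidth.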

Date of Conference: September 16-19, 2025

Track: SDA Systems & Instrumentation
