Panna Felsen, The Aerospace Corporation; Ronald Scrofano, The Aerospace Corporation; Ruben Rosales, The Aerospace Corporation; John Subasavage, The Aerospace Corporation; Nehal Desai, The Aerospace Corporation; Timothy Smith, The Aerospace Corporation; Michael Dearborn, The Aerospace Corporation
Keywords: space domain awareness, event camera detection, machine learning
Abstract:
Space is becoming increasingly congested and contested. The recent dramatic surge in the number of active satellites demands automated systems capable of efficiently detecting and tracking space objects in order to maintain requisite space domain awareness (SDA). Event-based neuromorphic vision cameras are low-SWaP (size, weight, and power) sensors capable of producing sparse data at very high temporal resolution. These characteristics are desirable for remote systems and have drawn the attention of the SDA community to event cameras. Recent work has demonstrated a proof-of-concept qualitative Resident Space Object (RSO) detection capability [1] and introduced a novel event camera SDA dataset with benchmark performance from a cascade of hand-designed filters [2]. The diversity of objects in space, which vary in size, brightness, motion pattern, and more, strongly supports a data-driven approach to detecting and tracking RSOs. A step in that direction is the recent work of [3], which fuses image and 3D point cloud features to produce RSO detections in synthetic and real data captures. In this work, we present a deep learning-based framework, operating entirely in the 3D point cloud space, for detecting RSOs and star streaks. We provide results on a novel dataset containing 2040 ground-based, nighttime event camera recordings captured with a 640×480 Prophesee Gen 3 sensor and a Zeiss 85mm f/1.4 lens. The dataset contains more than 1500 labeled RSOs, making it the largest ground-based SDA event camera dataset as measured by number of labeled targets. We include a comparison with a baseline method derived from traditional computer vision techniques, and we demonstrate substantial performance improvement over the previous state of the art for event-based RSO detection, published in [2]. Moreover, we investigate the online application of our method by examining the minimum number of events needed to detect an RSO.
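The abstract does not specify the framework's internals, but the core representation it names, treating the event stream as a 3D point cloud, can be illustrated. The sketch below (a minimal assumption-laden example, not the authors' implementation) converts raw events with (x, y, timestamp) fields into normalized (x, y, t) points suitable for a point-cloud detector; the event layout, the normalization to [0, 1], and the synthetic streak used for the demo are all assumptions. The sensor dimensions match the 640×480 Prophesee Gen 3 cited in the abstract.

import numpy as np

SENSOR_W, SENSOR_H = 640, 480  # Prophesee Gen 3 resolution (per abstract)

def events_to_point_cloud(events: np.ndarray, window_us: float) -> np.ndarray:
    """Map raw events to a normalized 3D point cloud.

    events: structured array with fields 'x', 'y', 't' (timestamps in
            microseconds); polarity is dropped here for simplicity.
    window_us: duration of the temporal slice, used to scale t into [0, 1]
               so the time axis is commensurate with the spatial axes.
    """
    t0 = events['t'].min()
    pts = np.stack([
        events['x'] / SENSOR_W,           # x normalized to [0, 1]
        events['y'] / SENSOR_H,           # y normalized to [0, 1]
        (events['t'] - t0) / window_us,   # t normalized to [0, 1]
    ], axis=1).astype(np.float32)
    return pts  # shape (N, 3): one point per event

# Tiny synthetic example: a slow, RSO-like streak crossing the sensor.
rng = np.random.default_rng(0)
n = 1000
ev = np.zeros(n, dtype=[('x', 'u2'), ('y', 'u2'), ('t', 'u8')])
ev['t'] = np.sort(rng.integers(0, 1_000_000, n))         # 1 s of events
ev['x'] = (ev['t'] / 1_000_000 * SENSOR_W).astype('u2')  # linear drift in x
ev['y'] = 240 + rng.integers(-2, 3, n)                   # ~2 px of jitter
cloud = events_to_point_cloud(ev, window_us=1_000_000)
print(cloud.shape)  # (1000, 3), ready for a point-cloud-based detector

One appeal of this representation, consistent with the abstract's framing, is that it preserves the sparsity and microsecond timing of the event stream instead of binning events into dense frames.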
References
[1] Cohen, Gregory, et al. “Event-based sensing for space situational awareness.” The Journal of the Astronautical Sciences 66.2 (2019): 125-141.
[2] Afshar, Saeed, et al. “Event-based object detection and tracking for space situational awareness.” IEEE Sensors Journal 20.24 (2020): 15117-15132.
[3] Salvatore, Nikolaus, and Justin Fletcher. “Learned Event-Based Visual Perception for Improved Space Object Detection.” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022.
Date of Conference: September 27-30, 2022
Track: Machine Learning for SSA Applications