Matt Brown, Rocket Lab; William Bidle, Rocket Lab; D. Brandon Knape, Rocket Lab; Brandon Whitchurch, Rocket Lab; Skip Williams, Rocket Lab
Keywords: Machine Learning, Deep Neural Network, SDA, Detection, Classification, RSO, Ellipse
Abstract:
We report on an artificial intelligence (AI) approach for real-time, single-shot detection and classification of unresolved resident space objects (RSO) with sub-pixel localization via implicit ellipse regression. Deep-neural-network machine learning achieves state-of-the-art performance in virtually all modern video object-detection domains where large-scale training datasets are available. However, in domains where training data is limited, overfitting and lack of generalization are major concerns. To navigate this challenge for RSO detection, we developed a comprehensive simulation framework to synthesize images of far-field point objects (stars and RSO), spanning a wide range of brightness, point spread function (PSF), and motion blur (streaks of 0-500 pixels), to train our network. We demonstrate robust generalization on real ground-based telescope data. In a single pass of an image through the network, the model detects each far-field point object consistent with the simulated PSF, extracts its total integrated pixel value (brightness), and fits a sub-pixel-accurate ellipse. These vectorized analytics allow flexible discrimination between stars and RSO across a variety of missions. We present results and timings for our deep neural network deployed on an NVIDIA Jetson Orin™ edge computer and on a space-grade Xilinx Versal™ system-on-chip (SoC).
Passive electro-optical imaging is an important sensing modality for space domain awareness (SDA). Whether deployed from ground-based telescopes or satellite-hosted imaging systems, the objective is to use advanced image processing to automatically detect, classify, and localize human-made objects. Generally, these RSO are too small and/or distant to be resolved by the imaging system. As such, both stars and RSO behave as point sources with an imaged spatial structure entirely defined by the imaging system’s PSF and any blur (streaking) due to relative motion during exposure. Therefore, to differentiate stars from RSO, a detection algorithm must extract each object’s brightness and infer its astrometric location so that it can correlate to known stars in a catalog.
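As a concrete illustration of this point-source imaging model, the sketch below renders a far-field object as a Gaussian PSF swept along a linear motion path (the streak), then adds Poisson shot noise and Gaussian read noise. The Gaussian PSF, the function names, and all numeric parameters are illustrative assumptions for exposition; they are not the paper's actual simulation framework.

```python
import numpy as np

def render_streaked_source(shape=(64, 64), center=(32.0, 32.0),
                           flux=5000.0, psf_sigma=1.5,
                           streak_len=10.0, angle_deg=30.0,
                           n_samples=200):
    """Noiseless image of a point source: a Gaussian PSF swept along a
    linear path during the exposure. A spot is the streak_len=0 case."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    ang = np.deg2rad(angle_deg)
    for t in np.linspace(-0.5, 0.5, n_samples):
        cy = center[0] + t * streak_len * np.sin(ang)
        cx = center[1] + t * streak_len * np.cos(ang)
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * psf_sigma ** 2))
        img += (flux / n_samples) * g / g.sum()  # conserve total flux
    return img

def add_sensor_noise(img, bias=100.0, read_sigma=5.0, seed=0):
    """Poisson shot noise on signal + bias, plus Gaussian read noise."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(img + bias).astype(float)
    return shot + rng.normal(0.0, read_sigma, img.shape)
```

Because each PSF sample is normalized before scaling, the noiseless image integrates to the requested flux, which is the "total integrated pixel value" the network is trained to regress.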
Some SDA missions seek to ascertain the current orbit state of an a-priori-known object, pointing the optical system to track its expected motion such that it is imaged to a spot (the PSF), while stars present as streaks. Conversely, when searching for an unknown RSO, missions point sidereally such that stars are imaged to spots, while unknown RSO present as streaks. Therefore, a general-purpose detection algorithm must be robust to a wide range of object brightness and motion blur (streaking), and it must support sub-pixel object centroiding across all cases. Extracting all object centroids within an image is critical to support accurate plate solving, which matches objects to stars in a catalog to eliminate them as potential RSO. Plate solving additionally supports refinement of the boresight knowledge beyond the prior telemetry, enabling accurate RSO orbit determination.
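For intuition on the sub-pixel centroiding requirement, the sketch below computes a classical intensity-weighted (center-of-mass) centroid of a background-subtracted detection patch. This baseline method and its median background estimate are illustrative assumptions, not the network's learned localization.

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a detection patch.

    The median is used as a crude background estimate; negative
    residuals are clipped so noise does not bias the weights.
    Returns (row, col) in sub-pixel units.
    """
    p = np.clip(patch - np.median(patch), 0.0, None)
    yy, xx = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    total = p.sum()
    return (yy * p).sum() / total, (xx * p).sum() / total
```

For a well-sampled, unstreaked PSF this recovers the object's location to a small fraction of a pixel, which is the accuracy regime plate solving needs to match detections against a star catalog.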
Our deep neural network detector is trained to detect all far-field point-source objects (star or RSO) and avoid false positives due to sensing noise and near-field artifacts. For each detected object, the network outputs a confidence in being a far-field point source, a brightness value (total image signal contributed by the object), as well as a 5-parameter sub-pixel-accurate ellipse fit, which minimizes the Hausdorff distance with respect to a ground-truth ellipse. The ellipse center represents the sub-pixel location of the object at the midpoint of the exposure period, from which a boresight model can recover the object's astrometric location. The semi-minor ellipse axis represents the PSF radius, and the semi-major axis represents the motion blur vector. Across a variety of missions, simple rules applied to the network's compact outputs can classify RSO versus stars based on the motion vector and/or by matching each object's center against a star catalog. Further, the model is able to detect dim (low signal-to-noise-ratio) objects, extending the capability of existing SDA platforms.
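One such simple rule on the network's compact outputs can be sketched as follows: under sidereal pointing, a detection whose semi-major axis greatly exceeds its semi-minor axis (a streak) is an RSO candidate, and the rule inverts under rate-track pointing. The dictionary keys, label strings, and the streak-ratio threshold are illustrative assumptions, not the paper's specification.

```python
def classify_detection(det, pointing="sidereal", streak_ratio=2.0):
    """Classify one detection from its ellipse fit.

    det: dict with semi-major axis 'a' and semi-minor axis 'b'
         (pixels), as regressed by the network.
    pointing: 'sidereal' (stars are spots) or 'rate_track'
              (the tracked object is a spot, stars streak).
    streak_ratio: a/b threshold separating streaks from spots;
                  the value here is an arbitrary illustration.
    """
    streaked = det["a"] / det["b"] >= streak_ratio
    if pointing == "sidereal":
        return "rso_candidate" if streaked else "star_candidate"
    return "star_candidate" if streaked else "rso_candidate"
```

In practice such a motion-vector rule would be combined with catalog matching of each ellipse center, so that unstreaked detections with no star counterpart are still flagged.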
With the availability of higher-resolution focal planes and multi-camera configurations, the next generation of SDA space payloads will be better equipped to monitor larger regions of space more rapidly. However, these systems will demand a commensurately high-throughput image-processing chain. Therefore, we also present results demonstrating accurate detection performance at the lowest compute cost on an NVIDIA Jetson Orin™ edge computer and on a space-grade Xilinx Versal™ system-on-chip (SoC), achieved by optimizing the network with neural architecture search and network-compression techniques.
Date of Conference: September 16-19, 2025
Track: Machine Learning for SDA Applications