Paul Day, Booz Allen Hamilton; Jeremiah Crane, Booz Allen Hamilton; Paul Cronk, Booz Allen Hamilton; Carlos Jimenez, Booz Allen Hamilton; Bruce Johnson, Booz Allen Hamilton; Rochelle Koeberle, Booz Allen Hamilton; Darin Millard, Booz Allen Hamilton; Zackary Werner, Booz Allen Hamilton
Keywords: Artificial Intelligence, Neural Networks, Convolutional Neural Networks, Computer Vision, Edge Compute, On-orbit Edge Compute, AIOps, AI Operations, MLOps, Machine Learning Operations
Abstract:
Space Domain Awareness (SDA) is crucial for ensuring the safety and sustainability of space operations, especially as the space domain transitions toward a contested, degraded, and operationally limited environment. The number of resident space objects (RSOs) continues to grow while nation-state actors seek to elude detection and characterization by extant SDA mechanisms. Traditional ground-based sensors face limitations of coverage and latency, and their predictable observation periods leave them susceptible to deception: maneuvers performed immediately before or after an RSO is observed can introduce significant error into orbit determination. In addition, observations suffer from a paucity of angular diversity because most current high-accuracy SDA assets are ground-based. Relatedly, as the lunar and Martian orbital regimes become increasingly crowded, SDA’s importance in those regions will grow, presenting significant challenges for ground-based SDA.
To address these shortcomings, we present the results of an experimental demonstration of on-orbit RSO detection using convolutional neural networks (CNNs) deployed to an on-orbit edge compute device. Wide field of view (FOV) images of orbital regimes of interest are captured, with attention paid to the location of the earth, sun, and moon relative to the region of interest (ROI). The images are preprocessed to enhance contrast and highlight the differing relative motion of RSOs with respect to stars and planets. The images are then fed to a CNN running aboard the spacecraft on edge-compute hardware to detect RSOs against the background of a starfield. The goal of the experiment is the detection of RSOs using artificial intelligence (AI), specifically a computer vision (CV) CNN model running on edge-compute hardware in orbit. The CNN is trained on a custom annotation set created using the constant false alarm rate (CFAR) detection algorithm and shape analysis to identify RSOs within imagery. The study explores the impact of the earth or the moon being present in the field of view on the detectability of RSOs. It also explores the detectability of RSOs in various orbital regimes, such as geostationary/geosynchronous (GEO) orbit or various inclinations of low earth orbit (LEO), relative to the imaging spacecraft, which will be in a sun-synchronous orbit.
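As a minimal sketch of the CFAR annotation approach described above, the following shows a standard two-dimensional cell-averaging CFAR detector; the function name, guard/training window sizes, and false-alarm probability are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cfar_detect(image, guard=2, train=8, pfa=1e-4):
    """Cell-averaging CFAR: flag pixels exceeding a threshold scaled
    from the mean of surrounding training cells (guard cells excluded).
    Window sizes and pfa are illustrative, not mission parameters."""
    img = image.astype(np.float64)
    win = 2 * (guard + train) + 1          # full window side length
    inner = 2 * guard + 1                  # guard region side length
    # Box-filter sums over the full window and the guard region.
    total_sum = uniform_filter(img, win) * win**2
    inner_sum = uniform_filter(img, inner) * inner**2
    n_train = win**2 - inner**2
    noise = (total_sum - inner_sum) / n_train   # background estimate
    # Threshold multiplier for the desired false-alarm rate
    # (exponential clutter assumption).
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    return img > alpha * noise

# A bright point source on a flat background is flagged as a detection.
frame = np.ones((64, 64))
frame[32, 32] = 100.0
mask = cfar_detect(frame)
```

In practice the resulting detection mask would be passed to shape analysis to reject star streaks and cosmic-ray hits before the surviving detections become CNN training labels.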
To accomplish this demonstration, we are using a systems engineering for AI (SE4AI) methodology that utilizes model-based systems engineering (MBSE) to capture our concepts of operations (CONOPS), system design, and AI operations (AIOps) pipeline design. Our on-satellite system integrates an optical imaging system, hardware optimized for running CNNs, an architecture for low-shot object detection, and custom flight software. This system is supported by a ground-based AIOps pipeline and a downlink strategy designed to maximize the utility of limited uplink/downlink bandwidth. Our optical imaging system, a Ubotica Technologies XE2 system, includes an electro-optical camera and a high-speed connection to a vector processing unit (VPU). The VPU runs our trained CNN, enabling fast detection and characterization of RSOs that pass through the camera’s FOV. The image preprocessing chain includes a contrast enhancement step that computes an arithmetic mean of a series of images to highlight the relative motion of the RSOs with respect to the starfield. It also includes an optional tiling step that allows the VPU to batch-process several smaller images rather than one large image, which reduces the downscaling necessary to accommodate the CNN’s input size. Utilizing this hardware/software combination, our goal is to demonstrate autonomous detection of RSOs against both starfield and planetary backgrounds. Our host satellite, the “Call to Adventure” flight demonstrator mission for Apex Space’s new “Aries” bus, is scheduled to launch on 4 March 2024, and we will share exciting preliminary results at the AMOS Conference in September 2024.
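The frame-averaging and tiling steps of the preprocessing chain can be sketched as follows; the tile size, frame dimensions, and function names are hypothetical, since the abstract does not specify the CNN's input size or the exact tiling scheme:

```python
import numpy as np

def average_frames(frames):
    """Arithmetic mean of a co-registered image stack: the static
    starfield reinforces across frames while moving RSOs smear,
    enhancing their contrast against the background."""
    stack = np.stack(frames).astype(np.float64)
    return stack.mean(axis=0)

def tile_image(image, tile=512):
    """Split a large frame into fixed-size tiles so the VPU can
    batch-process small inputs instead of heavily downscaling one
    large image. Assumes dimensions divisible by the tile size;
    remainder handling is omitted in this sketch."""
    h, w = image.shape
    return [image[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

# Hypothetical 1024 x 2048 sensor frames, eight-frame stack.
frames = [np.random.rand(1024, 2048) for _ in range(8)]
avg = average_frames(frames)
tiles = tile_image(avg)   # 2 rows x 4 cols = 8 tiles of 512 x 512
```

Each tile would then be resized (if needed) to the CNN's input resolution and submitted to the VPU as part of a batch.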
Once mature, our onboard CV solution will represent a significant advancement in SDA. Experimentation continues at the time of this writing. By developing this technology, we are paving the way for the proliferation of highly capable SDA sensors throughout multiple orbital regimes, vastly increasing the angular diversity of observations available for RSOs orbiting Earth and other celestial bodies. In this effort, we aim to contribute to safer space operations.
Date of Conference: September 17-20, 2024
Track: Space-Based Assets