Alexander Rogers, Turion Space; Mohamed Hasan, Turion Space; Ryan Westerdahl, Turion Space; Thomas Cooley, Turion Space
Keywords: Machine Learning, space-based imaging, non-earth imaging, uncertainty quantification, trustworthy AI
Abstract:
With the rapid expansion of space activity, the ability to autonomously detect, classify, and characterize resident space objects (RSOs) has become a critical challenge in space domain awareness (SDA). As the number of satellites and potential debris continues to rise, reliance on advanced computational techniques is necessary to process and interpret the vast amount of observational data generated by modern space-based sensors. Recent advancements in machine learning (ML) and computer vision have opened new avenues for leveraging non-Earth imaging (NEI) data to enhance satellite characterization. This work investigates the application of deep learning models to space-based satellite imagery, focusing on improving detection accuracy, classification reliability, and overall performance in extracting meaningful insights from narrow field-of-view (NFOV) sensors. Using space-based assets to perform RSO characterization offers a significant advantage over existing ground-based techniques in SDA, since it enables higher resolution and greater observation throughput.
Building on our work at the SDA TAP Lab, we have developed and tested a suite of machine learning models designed to detect, classify, and characterize space objects from NEI data. Our research explores the efficacy of advanced deep learning techniques in detecting faint objects, segmenting onboard payloads, and classifying RSOs based on resolvable features. A particular challenge is training convolutional neural networks (CNNs) on small RSOs, whose features can easily be washed out in the deeper layers of convolutional networks. We explore the evolution of You Only Look Once (YOLO) architectures (Terven, 2023) and existing research on small object detection, as well as different downsampling strategies that retain relevant features for small objects. This analysis will be performed on a dataset containing unresolved and semi-resolved observations of objects captured in NEI.
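As a concrete illustration of why the choice of downsampling strategy matters for small RSOs, the following minimal NumPy sketch (an illustration, not code from this work) contrasts naive stride-2 subsampling, which can discard a faint object almost entirely, with a lossless space-to-depth rearrangement that preserves every pixel by moving spatial detail into the channel dimension:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Lossless downsampling: rearrange spatial blocks into channels.

    Unlike strided subsampling or pooling, which can wash out features
    of objects spanning only a few pixels, this keeps every pixel and
    moves spatial detail into the channel dimension.
    Input:  (H, W, C) array; H and W must be divisible by `block`.
    Output: (H//block, W//block, C*block*block) array.
    """
    h, w, c = x.shape
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h // block, w // block, c * block * block)

# A faint 2x2 "RSO" on a dark 8x8 frame: naive stride-2 subsampling
# (keeping every other pixel) retains only one of its four bright
# pixels, while space-to-depth retains all four.
frame = np.zeros((8, 8, 1))
frame[3:5, 3:5, 0] = 1.0

strided = frame[::2, ::2, :]          # keeps 1 of the 4 bright pixels
s2d = space_to_depth(frame, block=2)  # keeps all 4, in channels
```

The same rearrangement underlies space-to-depth layers studied in the small-object-detection literature as a drop-in replacement for strided convolutions in CNN backbones.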
Given the increasing reliance on artificial intelligence in SDA applications, ensuring the reliability and transparency of machine learning models is crucial. We explore uncertainty quantification in our models, which provides insight into the confidence and reliability of their predictions. This aspect is necessary for fostering trust in AI-assisted space domain awareness and decision-making processes. We perform this analysis by integrating established uncertainty quantification methods into our machine learning workflows. An initial demonstration will be performed using Monte Carlo Dropout (Gal, 2015).
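A minimal sketch of the Monte Carlo Dropout idea (Gal, 2015), using a toy NumPy network rather than our actual models: dropout is kept active at inference time, and the spread across repeated stochastic forward passes approximates the model's predictive uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a one-hidden-layer network (stand-ins for a trained model).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5, stochastic=True):
    """One forward pass; dropout stays active when `stochastic` is True."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    if stochastic:
        mask = rng.random(h.shape) > p_drop  # random dropout mask
        h = h * mask / (1.0 - p_drop)        # inverted dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=200):
    """Monte Carlo Dropout: T stochastic passes -> predictive mean and std.

    The standard deviation across passes serves as an estimate of the
    model's epistemic uncertainty in its prediction.
    """
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
```

In practice the same recipe applies to a trained CNN: leave the dropout layers enabled at inference, run T forward passes, and report the per-prediction mean and spread.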
Beyond the immediate technical applications, the increasing adoption of space-based sensing capabilities signifies a broader paradigm shift in SDA. Traditionally reliant on ground-based optical and radar systems, SDA is now incorporating multi-modal sensing architectures that integrate ground-based, airborne, and space-based observations. This transition enables a more comprehensive and persistent monitoring framework for tracking RSOs and detecting potential anomalies.
The findings from this study underscore the importance of multi-modal sensing for SDA. As the proliferation of space assets continues to accelerate, the demand for accurate, real-time object characterization becomes increasingly critical. Space-based imaging offers unique observational perspectives that are unobtainable from Earth, providing a valuable complement to existing ground-based tracking systems. By addressing key challenges such as data processing limitations and environmental noise, our work demonstrates strategies to enhance operational viability. Leveraging machine learning and advanced image processing techniques, we strengthen the ability to monitor, classify, and interpret Sat^2 imagery. This approach relies on sensitive object detection models with high recall to avoid missing objects during inference. Furthermore, many formulas used for characterization require the segmentation and accurate delineation of the captured objects in order to provide reliable and trusted outputs. Our results highlight the transformative potential of AI-driven analysis for SDA applications, providing insights that will inform the next generation of space-based monitoring and classification systems.
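To illustrate why accurate delineation matters for characterization, a sketch of one standard formula: estimating an object's projected physical size from the pixel extent of its segmentation mask, the sensor's per-pixel instantaneous field of view (IFOV), and an estimated range, under the small-angle approximation. The IFOV and range values below are illustrative assumptions, not sensor specifications from this work:

```python
import numpy as np

# Illustrative sensor and geometry assumptions (not real parameters).
IFOV_RAD = 10e-6    # instantaneous field of view per pixel, radians
RANGE_M = 50_000.0  # estimated range to the object, meters

def extent_from_mask(mask):
    """Bounding-box extent (rows, cols) of a binary segmentation mask."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[-1] - rows[0] + 1, cols[-1] - cols[0] + 1

def projected_size_m(mask, ifov=IFOV_RAD, rng_m=RANGE_M):
    """Projected size (m) from mask extent, via the small-angle approximation.

    An over- or under-segmented mask scales this estimate directly,
    which is why accurate delineation is required for trusted outputs.
    """
    h_px, w_px = extent_from_mask(mask)
    return rng_m * h_px * ifov, rng_m * w_px * ifov

mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:44] = True  # a 10 x 24 pixel object
size_h_m, size_w_m = projected_size_m(mask)
```

Because the size estimate is linear in the mask extent, a segmentation error of even a pixel or two translates directly into a proportional characterization error for small, semi-resolved objects.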
Date of Conference: September 16-19, 2025
Track: Machine Learning for SDA Applications