Kimmy Chang, SSC/SZGA; Lauren Fisher, KBR; Zach Gazak, SSC/SZGA; Justin Fletcher, SSC/SZGA
Keywords: Spectroscopy, Machine Learning, Small Telescopes, Hyperspectral, Convolutional Neural Networks, Segmentation, Classification
Abstract:
We present a novel approach to advancing Space Domain Awareness (SDA) using resolved hyperspectral imaging. Traditional methods for extracting information from space assets rely on spatially resolved imagery, which is mainly applicable to low Earth orbit (LEO) objects. Even with high-resolution imagery, analysts often struggle to manually discern and accurately identify key structural components of space objects. Our solution addresses this issue by automatically segmenting and identifying structural information across spectral bands. Recent research indicates that valuable, distance-invariant information is embedded in the spectrum of an object's reflected sunlight, providing insights into the object's composition and state. We employ a custom convolutional neural network (CNN) model for image classification and segmentation. The primary 2D CNN model is trained on a diverse set of twenty simulated satellites, followed by fine-tuning smaller, satellite-specific 2D CNNs for detailed analysis. In total, we simulated 27 satellites, each annotated with material (e.g., Germanium, Titanium Alloy) and component (e.g., Attitude Control System, Payload) semantic segmentation. Our method achieves satellite classification accuracies exceeding 98%, with macro-average precision and recall exceeding 80%. Notably, we demonstrate that the primary 2D CNN model can be effectively fine-tuned on satellites not included in the initial training. Furthermore, our method requires only 80 samples from each satellite for both material and component semantic segmentation, and once the initial primary model is trained, fine-tuned model results can be obtained in less than five minutes.
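The pretrain-then-fine-tune workflow described above can be sketched in miniature: a shared convolutional backbone maps a hyperspectral cube to per-pixel features, and fine-tuning for a new satellite reuses those frozen features while fitting only a small per-pixel classification head. The layer shapes, band count, and class count below are illustrative assumptions, not the paper's architecture; this is a minimal numpy sketch of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2D convolution over a multi-band image cube.
    x: (H, W, C_in); kernels: (k, k, C_in, C_out) -> (H-k+1, W-k+1, C_out)."""
    k = kernels.shape[0]
    H, W, _ = x.shape
    out = np.empty((H - k + 1, W - k + 1, kernels.shape[-1]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]        # (k, k, C_in)
            out[i, j] = np.tensordot(patch, kernels, axes=3)
    return np.maximum(out, 0.0)                   # ReLU nonlinearity

# Hypothetical sizes: a 32x32 spatial chip with 16 spectral bands.
cube = rng.standard_normal((32, 32, 16))

# "Primary" backbone weights, standing in for the model pretrained on
# the twenty-satellite training set (random here, for illustration).
backbone = rng.standard_normal((3, 3, 16, 8)) * 0.1

# Fine-tuning keeps the backbone frozen and fits only a small per-pixel
# head (a 1x1 conv) on the new satellite's ~80 labelled samples.
n_classes = 5                                     # e.g., material classes
head = rng.standard_normal((1, 1, 8, n_classes)) * 0.1

features = conv2d(cube, backbone)                 # (30, 30, 8)
logits = conv2d(features, head)                   # (30, 30, n_classes)
seg_map = logits.argmax(axis=-1)                  # per-pixel class labels
print(seg_map.shape)                              # (30, 30)
```

Because only the lightweight head is trained per satellite, this structure is consistent with the abstract's claim that fine-tuned results are obtainable in minutes from few labelled samples.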
Date of Conference: September 17-20, 2024
Track: Machine Learning for SDA Applications