George Landon, Cedarville University; David Strong, Strong EO Imaging, Inc.; Timothy Giblin, i2 Strategic Services LLC; Benjamin Roth, U.S. Air Force Academy; Francis Chun, U.S. Air Force Academy
Keywords: Space Situational Awareness, Space Domain Awareness, Geosynchronous, Non-Resolved Objects, Characterization, Spectroscopy, Machine Learning, Deep Learning, Falcon Telescope Network
Abstract:
As the number of objects in orbit continues to grow, additional methods that use fast, broadly deployed sensors to maintain Space Domain Awareness are critical. However, densely deploying sensors capable of spatially resolved imaging of Geostationary Earth Orbit (GEO) satellites is prohibitively resource-intensive. Achieving a dense network of ground-based sensors therefore requires a reduction in resolving capability, which in turn limits the use of classic image-based identification and classification methods.
This work explores state-of-the-art deep learning classification architectures trained on non-spatially resolved spectral satellite observations. Working directly with raw spectral flux allows a classification model for satellite characterization to be developed with a simplified preprocessing pipeline. To support the methodologies developed in this work, several forms of satellite characterization are performed on five evenings of data from the United States Air Force Academy Falcon Telescope Network, in which non-resolved slitless spectroscopy observations of satellites are automatically classified by satellite, bus, and manufacturer.
While convolutional neural networks (CNNs) have produced decades of improved classification results in many computer vision tasks, the recent development of transformers has put new focus on what is possible in object classification. Both Vision Transformers (ViTs) and Shifted-window (Swin) Transformers have provided exceptional improvements in classification and have motivated more recent improvements to CNNs, notably ConvNeXt, an architecture that seeks to match or even outperform ViTs and Swin Transformers while maintaining the simplicity of classic CNNs. This work develops a 1D ConvNeXt architecture and demonstrates its efficiency in satellite classification using slitless spectroscopy observations, while also providing support for other important training choices, such as the loss function and optimizer.
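As a rough illustration of what a 1D ConvNeXt block implies, a minimal PyTorch sketch is shown below. The channel width, depth, kernel size, layer-scale value, and the simple classifier wrapper follow the published 2D ConvNeXt recipe and are illustrative assumptions, not the exact architecture, loss, or optimizer settings used in this work.

```python
# Minimal sketch of a 1D ConvNeXt-style block and classifier in PyTorch.
# Channel width, depth, kernel size, and layer-scale init follow the original
# 2D ConvNeXt recipe and are illustrative assumptions, not the exact
# architecture trained in this work.
import torch
import torch.nn as nn


class ConvNeXtBlock1d(nn.Module):
    """Depthwise conv -> LayerNorm -> pointwise MLP, with a residual path."""

    def __init__(self, dim: int, layer_scale_init: float = 1e-6):
        super().__init__()
        self.dwconv = nn.Conv1d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # pointwise expansion
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)   # pointwise projection
        self.gamma = nn.Parameter(layer_scale_init * torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, spectral_bins)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 1)                   # -> (batch, bins, channels)
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = self.gamma * x
        x = x.permute(0, 2, 1)                   # -> (batch, channels, bins)
        return residual + x


class SpectrumClassifier(nn.Module):
    """Toy stem + stacked blocks + global average pooling over wavelength."""

    def __init__(self, n_classes: int, dim: int = 64, depth: int = 4):
        super().__init__()
        self.stem = nn.Conv1d(1, dim, kernel_size=4, stride=4)
        self.blocks = nn.Sequential(*[ConvNeXtBlock1d(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, spectral_bins) of raw spectral flux
        x = self.blocks(self.stem(x))
        return self.head(x.mean(dim=-1))         # pool, then classify
```

In this style of block, the depthwise convolution mixes information along the spectral axis while the pointwise MLP mixes channels, mirroring the 2D ConvNeXt design; a cross-entropy loss paired with an optimizer such as AdamW would be a typical, though here assumed, training setup.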
Moreover, data preparation remains a critically important step in both architecture design and training. While the observations record spectral flux from 400-800 nm, extracting common bandpass values typical of Kron/Cousins filters provides dimensionality reduction without reducing classification accuracy. To build a robust satellite classification system for Space Domain Awareness, training and evaluation datasets are carefully separated to mimic real-world scenarios: training is never performed on “future” or “concurrent” observations of satellites, and evaluation is always performed on observations made after the dates used for training, avoiding the unfair advantage of learning atmospheric or other temporally varying conditions.
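The sketch below illustrates the kind of preprocessing described above: collapsing each spectrum to a few bandpass values and splitting observations strictly by date. The wavelength windows and the cutoff mechanism are placeholders, not the exact Kron/Cousins responses or observation schedule used in this work.

```python
# Sketch of bandpass reduction and a strictly chronological train/eval split.
# The wavelength windows are rough visible-band placeholders (assumptions),
# not the exact Kron/Cousins responses used in this work.
import numpy as np

BANDS = {"B": (400, 490), "V": (500, 590), "R": (560, 720), "I": (700, 800)}  # nm


def spectrum_to_bandpasses(wavelengths_nm: np.ndarray, flux: np.ndarray) -> np.ndarray:
    """Collapse a raw 400-800 nm spectrum to one mean-flux value per band."""
    values = []
    for lo, hi in BANDS.values():
        in_band = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
        values.append(flux[in_band].mean())
    return np.asarray(values)


def chronological_split(obs_dates: np.ndarray, cutoff: np.datetime64):
    """Train only on nights before the cutoff; evaluate only on later nights."""
    train_idx = np.flatnonzero(obs_dates < cutoff)
    eval_idx = np.flatnonzero(obs_dates >= cutoff)
    return train_idx, eval_idx
```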
The custom 1D ConvNeXt architecture is deployed on a large dataset of slitless spectroscopy observations of 20 satellites over five evenings. The results examine classification accuracy on calendar dates later than the training dates, evaluating whether models can accurately classify satellites from future, unseen observations. The results also address which satellite characteristics, such as bus, configuration, or platform, are the most robust for classifying future observations; bus classification appears the most consistent, exceeding 70% accuracy.
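Gauging which labeling scheme is most robust can be done by scoring each characteristic separately on the held-out later nights, as in the purely illustrative sketch below; the task names and arrays are hypothetical placeholders, not artifacts from this work.

```python
# Hypothetical per-characteristic, per-night scoring on the held-out later
# evaluation nights; the task names and arrays are illustrative placeholders.
import numpy as np


def nightly_accuracy(pred: np.ndarray, truth: np.ndarray, night: np.ndarray) -> dict:
    """Accuracy per evaluation night, to gauge consistency on later dates."""
    return {str(n): float(np.mean(pred[night == n] == truth[night == n]))
            for n in np.unique(night)}

# e.g. nightly_accuracy(bus_pred, bus_truth, obs_night) applied separately to
# the satellite, bus, and manufacturer labeling schemes.
```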
Date of Conference: September 16-19, 2025
Track: Satellite Characterization