Leveraging Unresolved Hyperspectral Signatures for Robust Deep Learning Classification of Geosynchronous Satellites

Jason Kirkendall, Rochester Institute of Technology; Bartosz Krawczyk, Rochester Institute of Technology; Francis Chun, U.S. Air Force Academy; Michael Gartley, Rochester Institute of Technology

Keywords: Machine Learning, Geosynchronous, Slitless Spectroscopy, Convolutional Neural Network

Abstract:

Space Situational Awareness (SSA) has become increasingly urgent as the need for dependable tracking and characterization of Resident Space Objects (RSOs) grows. However, the limited availability of community-accessible hyperspectral data has hindered progress. In response, the United States Air Force Academy (USAFA) recently utilized its Falcon Telescope Network (FTN) to gather unresolved hyperspectral signatures of geosynchronous satellites. This initial dataset comprises 594 hyperspectral data samples of 28 satellites, collected over 15 nights from 2022 to 2025. Most satellites were observed five to ten times a night, although the collections were not uniform, resulting in an imbalanced dataset. The acquired images were further pre-processed to extract the exo-atmospheric first-order spectrum for each collection and stored as arrays of “time-series” data. This dataset has previously been investigated for machine learning classification of unresolved RSOs (URSOs); those studies averaged the data samples from each collection day to minimize per-class noise and trained their algorithms exclusively on the time-series data.
With URSO characterization in mind, this study builds on that work and introduces two novel concepts for this dataset. The first is using all data samples individually to train deep learning (DL) models capable of positively classifying URSOs; a model robust to sample-to-sample variation is essential because individual collections differ for several reasons, including phase angle and atmospheric changes. The second is transforming the time-series data into two-dimensional images using Gramian Angular Summation Field (GASF) encoding. Previous research indicates that convolutional neural networks (CNNs) typically learn more effectively from image data than from time series, so we anticipated that our GASF-fed models would yield greater accuracy throughout the confusion matrix than models trained on the time-series data. We aimed to improve on previous studies but were concerned about data scarcity and noise when utilizing every sample. We found that the GASF process, at the image dimension we selected, reduced the repetition of noisy patterns in our data, addressing the noise concern. To address scarcity, we introduced augmented data, consisting of random Gaussian noise and scaling applied to the original GASF and time-series samples.
The models consisted of three convolutional layers and three fully connected layers, with dropout and batch normalization between each layer, and were trained using five-fold cross-validation. With this architecture, the GASF model correctly classified 83% of 119 previously unseen images, while the time-series model achieved an accuracy of only 74% on 119 unseen data samples. We then developed separate conditional variational autoencoders (CVAEs) for the time-series and GASF data, allowing us to incorporate generative and transfer learning methods to better explore the latent space of our data and provide a more diverse set of training samples. Incorporating the synthetic data generated by the CVAEs made our model more robust than training with augmented data alone.
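To make the GASF encoding step described above concrete, the following is a minimal sketch of one common way to compute a Gramian Angular Summation Field from a single 1-D spectral time series. The implementation details are not given here, so the NumPy approach, the interpolation-based resampling, and the illustrative image_size of 32 are assumptions.

```python
import numpy as np

def gasf_encode(series: np.ndarray, image_size: int = 32) -> np.ndarray:
    """Encode a 1-D time series as a Gramian Angular Summation Field image.

    image_size and the interpolation-based resampling are illustrative
    assumptions, not the settings used in the study.
    """
    # Resample the series to the target image dimension.
    x = np.interp(np.linspace(0, len(series) - 1, image_size),
                  np.arange(len(series)), series)
    # Min-max rescale into [-1, 1] so the angular transform is defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    x = np.clip(x, -1.0, 1.0)
    # GASF[i, j] = cos(phi_i + phi_j) with x = cos(phi), which expands to
    # x_i * x_j - sqrt(1 - x_i^2) * sqrt(1 - x_j^2).
    root = np.sqrt(1.0 - x ** 2)
    return np.outer(x, x) - np.outer(root, root)
```

Off-the-shelf transformers such as pyts.image.GramianAngularField (with method="summation") implement the same encoding.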
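The augmentation described above, random Gaussian noise plus random scaling applied to the original samples, might look like the sketch below; the noise level and scaling range are placeholders, since the abstract does not state the values used.

```python
import numpy as np

def augment(sample: np.ndarray, rng: np.random.Generator,
            noise_sigma: float = 0.02, scale_range: float = 0.1) -> np.ndarray:
    """Return a copy of one GASF image or time-series sample with random
    amplitude scaling and additive Gaussian noise.

    noise_sigma and scale_range are illustrative placeholders.
    """
    scale = 1.0 + rng.uniform(-scale_range, scale_range)
    noise = rng.normal(0.0, noise_sigma, size=sample.shape)
    return sample * scale + noise
```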
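A PyTorch sketch of the stated architecture (three convolutional and three fully connected layers, with batch normalization and dropout between each layer, trained with five-fold cross-validation) follows. The channel widths, kernel sizes, dropout rates, and 32x32 input size are assumptions; only the layer counts and regularization scheme come from the abstract.

```python
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

class GASFClassifier(nn.Module):
    """Three conv blocks followed by three fully connected layers, each with
    batch normalization and dropout; widths and kernel sizes are assumptions."""

    def __init__(self, n_classes: int = 28, image_size: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Dropout(0.25), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Dropout(0.25), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Dropout(0.25), nn.MaxPool2d(2),
        )
        flat = 64 * (image_size // 8) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 256), nn.BatchNorm1d(256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

def cross_validate(X: torch.Tensor, y: torch.Tensor, n_splits: int = 5) -> None:
    """Five-fold stratified cross-validation over GASF images of shape
    [N, 1, H, W]; the per-fold training loop is elided."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, val_idx in skf.split(X.reshape(len(X), -1).numpy(), y.numpy()):
        model = GASFClassifier()
        # ... fit model on X[train_idx], y[train_idx]; evaluate on X[val_idx] ...
```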
The resulting model, a straightforward CNN, was trained and validated on a blend of 2,000 CVAE-generated samples and half of the original data, and achieved 84% accuracy on 297 unseen original images, surpassing the models from the previously mentioned research. This study demonstrates the effectiveness of pairing GASF encoding of this type of data with simple yet robust DL techniques to enhance the accuracy and reliability of URSO identification. We also observed the best results when incorporating all data samples, which is promising for the future of this research in supporting SSA’s long-term objectives.
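As a hedged illustration of the CVAE-based generation of synthetic training samples, the sketch below conditions a small fully connected VAE on the satellite class and draws new samples from the latent space. The fully connected design, layer sizes, and latent dimensionality are assumptions; the abstract only states that separate CVAEs were built for the time-series and GASF data and that 2,000 generated samples were blended with half of the original data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Conditional VAE over flattened samples; layer sizes and the latent
    dimension are illustrative assumptions."""

    def __init__(self, input_dim: int, n_classes: int = 28, latent_dim: int = 16):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = nn.Sequential(
            nn.Linear(input_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x, y_onehot):
        h = self.encoder(torch.cat([x, y_onehot], dim=1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(torch.cat([z, y_onehot], dim=1)), mu, logvar

def cvae_loss(recon, x, mu, logvar):
    """Reconstruction error plus KL divergence to the standard normal prior."""
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

def sample_synthetic(model: CVAE, class_idx: int, n: int, n_classes: int = 28):
    """Draw n class-conditioned synthetic samples from the learned latent space."""
    y = F.one_hot(torch.full((n,), class_idx), n_classes).float()
    z = torch.randn(n, model.latent_dim)
    with torch.no_grad():
        return model.decoder(torch.cat([z, y], dim=1))
```

Under this sketch, the 2,000 synthetic samples mentioned above would be drawn with sample_synthetic across the 28 classes and blended with the retained half of the original data before training the final CNN.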

Distribution A. Approved for public release: distribution unlimited. (PA #USAFA-DF-2025-130)


Disclaimer:

“The views expressed in this article, book, or presentation are those of the author and do not necessarily reflect the official policy or position of the United States Air Force Academy, the Air Force, the Department of Defense, or the U.S. Government.”


Date of Conference: September 16-19, 2025

Track: Satellite Characterization
