Time Forecasting Satellite Light Curve Patterns using Neural Networks

William Dupree, Aptima, Inc.; Louis Penafiel, Aptima, Inc.; Thomas Gemmer, Aptima, Inc.

Keywords: Machine Learning, Neural Network, Unified Data Library, Ground-Based Measurements, Light Curve, Visual Magnitude, Time Series Analysis

Abstract:

As technology advances rapidly worldwide, space has become an increasingly contested domain for defense. It is imperative that the United States continue to grow and improve upon its current Space Domain Awareness (SDA) capabilities. Having greater knowledge of one’s environment leads to safer and better decision making while operating inside of it. In this study we focus on improving SDA by characterizing satellites and their patterns of life (PoL), analyzing historical data to forecast Visual Magnitude and light curve patterns. In previous work we showed that applying Machine Learning (ML) to SDA data improves the detection of satellite maneuvers as a function of time. In this study we take a similar approach and apply ML techniques to satellite observational data to predict future trends in Visual Magnitude/light curves for a single satellite. We create models that characterize light curves based on a finite input window of observable measurements and output possible patterns the light curve may exhibit. We explore how these patterns can be used to alert analysts to anomalous activity when compared against real-time data.

The data used in our study comes from Electro-Optical (EO) observations of the EchoStar 7 satellite, ranging over a period of roughly 9 months. This gives rise to our first challenge, as the EO observational data can be locally dense on small time scales but globally sparse. This analysis uses the EchoStar 7 dataset because the large number of observations it contains aids the Deep Learning approach we apply. Before applying ML models, we preprocess the data, performing general cleaning and feature engineering tasks. The feature engineering is done to mitigate characteristics in the raw data that may weaken the predictive power of our final model. The major challenges we hope to address are: how to handle the non-uniform sampling of the data, how to encode relevant categorical information into numerical data, and how best to connect the past nature of the data to current measurements.
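As a minimal sketch of the cleaning step, the snippet below sorts irregularly spaced EO observations and derives an explicit time-delta feature; the column names (`obs_time`, `vis_mag`, `sensor_id`) are hypothetical placeholders, not the dataset's actual schema.

```python
import pandas as pd

def preprocess_observations(df):
    """Basic cleaning for EO light-curve observations.

    Assumes hypothetical columns 'obs_time' (timestamps), 'vis_mag'
    (Visual Magnitude), and 'sensor_id'; these names are illustrative
    and do not come from the paper itself.
    """
    df = df.dropna(subset=["obs_time", "vis_mag"]).copy()
    df = df.sort_values("obs_time").reset_index(drop=True)
    # Encode the irregular sampling explicitly as a time-delta feature
    # so a model can learn "when" each measurement occurred.
    df["dt_seconds"] = df["obs_time"].diff().dt.total_seconds().fillna(0.0)
    return df

# Toy example: two closely spaced observations, then a 6-hour gap.
raw = pd.DataFrame({
    "obs_time": pd.to_datetime(
        ["2021-01-01 00:00:00", "2021-01-01 00:00:30", "2021-01-01 06:00:30"]),
    "vis_mag": [11.2, 11.3, 11.1],
    "sensor_id": ["A", "A", "B"],
})
clean = preprocess_observations(raw)
print(clean["dt_seconds"].tolist())  # [0.0, 30.0, 21600.0]
```

The large gap in `dt_seconds` is exactly the "locally dense, globally sparse" structure described above, made visible as a model input.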

Irregular time steps complicate what it means to predict/forecast the Visual Magnitude at the next time step. The model may be able to estimate the magnitude of a given measurement, but unless time is handled carefully it will not be able to associate “when” the measurement occurred. If the difference in time between measurements becomes too large, the relevance between samples drops drastically. Adding to the complex nature of the data, many characteristics are sensor-specific. An example is categorical data labeling which sensor a specific observation came from. This sensor information affects the sensitivity of the observation measurement. Not all sensors have the same specifications, as a variety of providers are often used when collecting EO data. Some sensors may go offline, or new ones may be added. If the model hopes to learn from sensor information, we must be especially mindful of how that information is encoded. The final task in our feature engineering is to prepare the data for time series analysis by manipulating it with appropriate windowing techniques. These techniques include creating a subset of immediate historic data as well as moving differences for specific features. We tune the model by testing performance across window size (the number of historical samples in the window) and feature inclusion.
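The windowing and encoding steps described above can be sketched as follows; the feature layout, window size, and sensor labels are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def make_windows(vis_mag, sensor_idx, n_sensors, window):
    """Build fixed-length input windows from an observation stream.

    vis_mag    : 1-D array of Visual Magnitude values (toy data)
    sensor_idx : integer sensor label per observation, one-hot encoded below
    window     : number of historical samples per window
    Returns (X, y): each X row holds the windowed magnitudes, their
    moving differences, and the one-hot sensor codes; y is the next value.
    """
    one_hot = np.eye(n_sensors)[sensor_idx]       # categorical -> numerical
    diffs = np.diff(vis_mag, prepend=vis_mag[0])  # moving differences
    X, y = [], []
    for t in range(window, len(vis_mag)):
        feats = np.concatenate([
            vis_mag[t - window:t],                # immediate historic data
            diffs[t - window:t],
            one_hot[t - window:t].ravel(),
        ])
        X.append(feats)
        y.append(vis_mag[t])
    return np.array(X), np.array(y)

mag = np.array([11.2, 11.3, 11.1, 11.4, 11.2])
sensors = np.array([0, 0, 1, 1, 0])
X, y = make_windows(mag, sensors, n_sensors=2, window=3)
print(X.shape, y.shape)  # (2, 12) (2,)
```

Sweeping `window` and dropping feature groups from `feats` corresponds to the tuning over window size and feature inclusion described above.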

After feature engineering we train several models capable of predicting the Visual Magnitude at the next time step. We focus on two Neural Network implementations to show the prediction power for Visual Magnitude measurements from ground-based sensors, comparing popular models such as simple Feed-Forward networks and Long Short-Term Memory (LSTM) networks. The inspiration for this problem arose from the use of LSTMs in time series analysis, since we expect satellites with regular and stable orbits to follow repeatable light curve patterns. This approach is further aided by the ability of a Neural Network to make connections in the data that current classical equation-of-motion analysis does not capture. By applying the LSTM approach, we allow the model to tune which information is important and which may be discarded/forgotten when predicting future Visual Magnitude values.
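In practice such a model would be built with a deep learning framework, but the gating mechanism that lets an LSTM keep or forget information can be shown in a few lines of NumPy; the weight shapes and random initialization below are purely illustrative, not the paper's trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update (illustrative sketch).

    x: input features at this time step; h/c: previous hidden and cell
    state. W, U, b hold the stacked gate parameters in the order
    [forget, input, candidate, output].
    """
    n = h.shape[0]
    z = W @ x + U @ h + b            # all four gate pre-activations
    f = sigmoid(z[0:n])              # forget gate: what to discard
    i = sigmoid(z[n:2 * n])          # input gate: what new info to keep
    g = np.tanh(z[2 * n:3 * n])      # candidate cell contents
    o = sigmoid(z[3 * n:4 * n])      # output gate
    c_new = f * c + i * g            # cell state carries long-term memory
    h_new = o * np.tanh(c_new)       # hidden state feeds the prediction head
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # run over a short input window
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (8,)
```

The forget gate `f` is what "tunes what information is important and what may be discarded"; a final dense layer on `h` (omitted here) would produce the next Visual Magnitude estimate.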

We validate our Neural Network models by comparing them to a baseline windowed-averaging model for Visual Magnitude versus time. The window size of our feature data plays an important role in model training time, model configuration, and the accuracy of the results. Prediction for prediction’s sake is an interesting exercise; however, the end goal of accurate predictions is to identify irregular behavior in real-time data. Future scenarios of interest include identifying measurements mis-tagged to the wrong satellite and irregular light curves caused by changes in rotational state. In both cases the light curve can deviate from previously exhibited patterns, and comparison against an accurate forecast may distinguish these anomalies.
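A minimal version of the baseline windowed-averaging forecaster, paired with a simple residual-threshold anomaly flag, might look like the following; the threshold value and toy data are purely illustrative assumptions.

```python
import numpy as np

def windowed_average_forecast(series, window):
    """Baseline: predict the next Visual Magnitude as the mean of the
    previous `window` observations (a simple moving-average forecaster)."""
    return np.array([series[t - window:t].mean()
                     for t in range(window, len(series))])

def flag_anomalies(series, preds, window, threshold):
    """Flag observations whose residual from the forecast exceeds
    `threshold` magnitudes (the threshold is illustrative)."""
    residuals = np.abs(series[window:] - preds)
    return residuals > threshold

# Toy light curve: stable values, then a sudden jump in brightness.
mag = np.array([11.2, 11.3, 11.1, 11.2, 13.0])
preds = windowed_average_forecast(mag, window=3)
flags = flag_anomalies(mag, preds, window=3, threshold=0.5)
print(flags.tolist())  # [False, True]
```

The same residual comparison applies unchanged when `preds` comes from a Neural Network forecaster instead of the moving average, which is how an accurate forecast could alert analysts to deviations from a satellite's established pattern.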

Date of Conference: September 14-17, 2021

Track: Machine Learning for SSA Applications
