Nicholas Perovich, MIT Lincoln Laboratory; Zachary Folcik, MIT Lincoln Laboratory; Rafael Jaimes, MIT Lincoln Laboratory
Keywords: SSA, SDA, Machine Learning, Neural Network, Artificial Intelligence, Maneuver Detection
Abstract:
This study explores methods to improve the performance of satellite maneuver detection software by comparing traditional statistical methods with machine learning and neural network-based methods. Detecting satellite maneuvers is an important component of Space Domain Awareness (SDA), which requires Space Operators to have accurate knowledge of the location of objects in Earth orbit. Software tools allow Operators to maintain SDA by estimating and predicting satellite orbital trajectories. The software compares predicted locations to observed values gathered from optical and radar sensors. Differences, or residuals, between predictions and observations are useful to determine if satellites deviate from their expected trajectories. Residuals are often the basis for maneuver detection algorithms. However, residuals combine the error sources of maneuvering motion and sensor observational noise. Current statistical methods for differentiating between maneuvers and noise use threshold tests that produce high false alarm rates. An example method is able to correctly detect a maneuver 94% of the time (detection rate), but also detects a maneuver when no maneuver occurred 8% of the time (false alarm rate). As the number of satellites in Earth orbit increases, the number of reported false maneuvers also increases. Space Operators, who can be overwhelmed by thousands of false alarms, respond by choosing smaller numbers of satellites to track, resulting in a loss of full SDA.
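To make the threshold-test idea concrete, the toy sketch below flags a maneuver when windowed residuals exceed a noise-scaled threshold. This is an illustration, not the operational algorithm: the window length, the mean-absolute-residual statistic, and the k-sigma threshold are all assumptions.

```python
import random
import statistics

random.seed(0)

# Hypothetical threshold test: flag a maneuver when the mean absolute
# residual over a window exceeds k times the assumed observation noise.
def threshold_detect(residuals, sigma, k=3.0):
    """Return True if the residual window exceeds a k-sigma threshold."""
    return statistics.mean(abs(r) for r in residuals) > k * sigma

sigma = 1.0  # assumed 1-arcsecond observation noise
noise_only = [random.gauss(0, sigma) for _ in range(20)]
with_maneuver = [random.gauss(5, sigma) for _ in range(20)]  # 5-arcsec offset

print(threshold_detect(noise_only, sigma))     # False: noise stays under threshold
print(threshold_detect(with_maneuver, sigma))  # True
```

Because observational noise occasionally mimics a small maneuver, any fixed threshold trades detection rate against false alarm rate, which is the limitation this study targets.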
A subsequent task of Operators is to determine when the maneuver was initiated so that orbital state estimation systems can compute updated orbital state vectors. The same example method mentioned above uses another threshold test to timestamp the beginning of a maneuver. This method can produce a result accurate to within 24-48 hours of the beginning of a maneuver. The Operator then manually inspects the data to accurately place the timestamp or applies algorithms to estimate the maneuver time. As maneuver detections increase with more satellites in orbit, this time-consuming task will add to Operator overload and/or require more executions of complex algorithms, increasing computational cost. It is the goal of this research to employ artificial intelligence to improve detection rates and false alarm rates of SDA software and to automate maneuver timestamping to reduce Operator workload and subsequent maneuver estimation costs.
In this study, the methods being evaluated for maneuver detection are the Random Forest (RF) classifier algorithm and a Deep Neural Network (DNN) classifier. The method being developed for maneuver timestamping uses a Long Short-Term Memory (LSTM) neural network. The Python machine learning library Scikit-Learn is used to implement RF algorithms. We develop the DNN and LSTM using Keras, a Python open-source library that allows users to interface with artificial neural networks in the TensorFlow library. Observation residuals in Right Ascension (RA) and Declination (Dec) coordinates are used as input data. For the purposes of training and validating the algorithms, residual data for both observation types is artificially generated using a Monte Carlo simulation. Data is generated in two sets: cases in which the satellite did and did not maneuver. All cases are programmed to contain simulated, natural observational noise, which often looks similar to maneuvers. The simulated observation errors are designed to behave similarly to real data. We generate 1000 maneuvering and non-maneuvering cases, each containing four different types of data: RA and Dec residuals (in arcseconds) and their standard deviations. The data is split into training, validation, and testing batches. For each case, 20 features are computed from the residual data and used to train RF and DNN classification algorithms.
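A minimal sketch of this pipeline using Scikit-Learn's RandomForestClassifier is shown below. The residual generator, the ramp-shaped maneuver signature, and the handful of summary features are illustrative stand-ins: the abstract does not specify the actual Monte Carlo model or the 20 features used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for the Monte Carlo residual generator: non-maneuvering cases are
# pure Gaussian noise; maneuvering cases add a growing residual ramp.
def simulate_case(maneuver, n_obs=50):
    ra = rng.normal(0.0, 1.0, n_obs)   # RA residuals (arcsec), assumed 1-sigma noise
    dec = rng.normal(0.0, 1.0, n_obs)  # Dec residuals (arcsec)
    if maneuver:
        ramp = np.linspace(0.0, 4.0, n_obs)
        ra, dec = ra + ramp, dec + ramp
    return ra, dec

# A few hypothetical summary features per case (the study's 20 features
# are not listed in the abstract).
def features(ra, dec):
    return [ra.mean(), ra.std(), np.abs(ra).max(),
            dec.mean(), dec.std(), np.abs(dec).max()]

X = [features(*simulate_case(m)) for m in [0] * 200 + [1] * 200]
y = [0] * 200 + [1] * 200
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```

With a separable synthetic signature like this ramp, the classifier scores near perfectly; realistic noise that mimics maneuvers is what makes the actual problem hard.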
Hyperparameters for both the RF and DNN are optimized using Scikit-learn's grid-search cross-validation (GridSearchCV) algorithm. For the RF classifier, a detection rate of 91% and false alarm rate of 1% are achieved. For a DNN classifier, a detection rate of 97% and false alarm rate of 0% are achieved. An examination of performance versus epochs shows that the DNN is not over-training.
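The grid-search step might look like the following sketch. The parameter grid and the stand-in data from make_classification are assumptions for illustration, not the study's actual search space or features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in for the 20 computed residual features per case.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Illustrative hyperparameter grid; each combination is scored with
# 5-fold cross-validation and the best is refit on the full data.
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, f"{search.best_score_:.2f}")
```

The same pattern extends to the DNN by wrapping the Keras model in a Scikit-Learn-compatible estimator so that GridSearchCV can sweep its hyperparameters as well.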
Compared to 94% and 8% for current methods, RF shows an improvement in false alarm rate and degradation in detection rate. DNN shows improvement in both metrics. Most remarkable is the 0% false alarm rate, especially considering the network does not appear to be over-training. The next step for this research is to employ these algorithms on real, observational data. Even though the simulated data was created to mimic real data as closely as possible, we anticipate that the introduction of real data will cause a reduction in performance. These results will be included in the final paper. Additionally, the LSTM method for timestamping maneuvers is still being constructed. LSTMs inherently require a substantial amount of data preprocessing; current progress includes manipulating data into a format that can be used to train an LSTM network. We anticipate that the LSTM will perform much better than the current method for timestamping maneuvers.
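As an example of the kind of preprocessing LSTMs require, the sketch below (all names and shapes are assumptions) slices a per-case residual time series into fixed-length overlapping windows with the (samples, timesteps, features) layout that Keras LSTM layers consume.

```python
import numpy as np

# Hypothetical preprocessing step: convert a (n_obs, n_features) residual
# series into overlapping windows of shape (n_windows, window, n_features).
def make_windows(series, window=10):
    """Slice a 2-D series into overlapping fixed-length windows."""
    n_obs = series.shape[0]
    # Index matrix: row i selects observations i .. i + window - 1.
    idx = np.arange(window)[None, :] + np.arange(n_obs - window + 1)[:, None]
    return series[idx]

# 50 observations x 4 features (RA/Dec residuals and their standard deviations)
case = np.random.default_rng(0).normal(size=(50, 4))
windows = make_windows(case, window=10)
print(windows.shape)  # (41, 10, 4)
```

Each window could then be labeled by whether a maneuver begins inside it, turning timestamping into a sequence-classification problem for the LSTM.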
Date of Conference: September 27-30, 2022
Track: Machine Learning for SSA Applications