Shape Identification of Space Objects via Light Curve Inversion Using Deep Learning Models

Roberto Furfaro, University of Arizona; Richard Linares, Massachusetts Institute of Technology; Vishnu Reddy, University of Arizona

Keywords: Deep Learning, Convolutional Neural Networks, Recurrent Neural Networks, Light Curve Inversion

Abstract:

Over the past few years, Space Situational Awareness (SSA), generally concerned with acquiring and maintaining knowledge of resident Space Objects (SO) orbiting Earth and potentially the broader cis-lunar space, has become critical to preventing the loss, disruption, and/or degradation of space capabilities and services. Importantly, threats to operational satellites are also increasing due to the emerging capabilities of potential adversaries. As space becomes more congested and contested, developing a detailed understanding of the SO population has become one of the fundamental SSA goals. Currently, the SO catalog includes only simplified SO characteristics, e.g. solar radiation pressure and/or drag ballistic coefficients. This simplified description limits the dynamic propagation models used for predicting the catalog to those that assume cannonball shapes and generic surface properties. Future SO catalogs will have more stringent requirements and shall provide a detailed picture of SO characteristics. An analysis of the current state of the art shows that traditional measurement sources for SO tracking, such as radar and optical sensors, can be employed to extract information on SO characteristics. Such measurements have been shown to be sensitive to SO properties such as shape, attitude, and angular velocity, as well as surface parameters.
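To illustrate the cannonball simplification mentioned above, under that assumption the drag behavior of an SO collapses to a single scalar ballistic coefficient, B = C_d A / m. The sketch below uses hypothetical values (drag coefficient, radius, mass are illustrative, not from the paper):

```python
import math

# Hypothetical cannonball SO: all numeric values below are illustrative.
C_d = 2.2      # drag coefficient commonly assumed for a sphere
radius = 0.5   # sphere radius (m)
mass = 100.0   # mass (kg)

area = math.pi * radius**2   # cross-sectional area of a sphere (m^2)
B = C_d * area / mass        # ballistic coefficient, C_d*A/m convention (m^2/kg)
print(round(B, 5))           # 0.01728
```

Note that the inverse convention, m / (C_d A), is also common in the literature; either way, a single scalar cannot capture shape- or attitude-dependent effects, which motivates the richer characterization discussed here.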

Recent advancements in deep learning (e.g. Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Generative Adversarial Networks (GAN), and Deep Autoencoders (AE)) have demonstrated impressive results in many practical and theoretical fields (e.g. speech recognition, computer vision, robotics). Whereas deep learning methods are becoming ubiquitous in many aspects of our lives, they have been barely explored for SSA applications in general, and for SO characterization in particular. Recently, our research team has shown CNNs to be an effective method for SO classification using photometric data. In this paper, we report the results obtained in designing and training a set of deep models capable of retrieving SO shapes from light curves. Traditional shape retrieval methods employ some form of physically-based model inversion. One of the most advanced approaches, the Multiple Models Adaptive Estimator (MMAE), runs a bank of Extended Kalman Filters based on a set of physical models accounting for different space object properties. The model that minimizes the uncertainty during the retrieval (least residual) is considered to be the model that best represents the SO properties. Nevertheless, physically-based model inversion is generally ill-posed and computationally expensive. Here, we show how deep learning methods can provide effective shape retrieval in a fast and accurate fashion. Both CNNs and deep recurrent models comprising Long Short-Term Memory (LSTM) layers are designed, trained, and validated for SO shape retrieval. A cluster analysis based on t-Distributed Stochastic Neighbor Embedding (t-SNE) is employed to analyze how the data classes cluster in the corresponding 2-D and 3-D embeddings. Data visualization provides a critical understanding of the system's ability to learn the correct inverse functional relationship between light curves and SO shape.
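A recurrent model of the kind described above can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's architecture: the layer sizes, number of shape classes, and light-curve length are hypothetical, and the input is a single brightness channel per time step:

```python
import torch
import torch.nn as nn

class LightCurveLSTM(nn.Module):
    """Sketch of an LSTM classifier mapping a light curve to a shape class."""
    def __init__(self, n_shapes: int = 4, hidden: int = 64):
        super().__init__()
        # One brightness value per time step -> stacked LSTM layers.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        # Classify from the final hidden state.
        self.head = nn.Linear(hidden, n_shapes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)        # out: (batch, time, hidden)
        return self.head(out[:, -1])  # logits over shape classes

model = LightCurveLSTM()
light_curves = torch.randn(8, 500, 1)  # 8 synthetic light curves, 500 samples each
logits = model(light_curves)           # shape (8, 4)
```

Training such a model with a cross-entropy loss on labeled simulated light curves would then amortize the expensive physics-based inversion into a single fast forward pass.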
Importantly, the trained deep networks are tested on a set of light curves extracted from optical measurements of SO collected by the RAPTORS network at the University of Arizona.
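The t-SNE cluster analysis mentioned above can be sketched with scikit-learn. Here the "features" are a random stand-in for learned network representations of light curves from two hypothetical shape classes; the perplexity and dimensionality are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for learned features of 100 light curves (two well-separated classes).
features = np.vstack([
    rng.normal(0.0, 1.0, size=(50, 32)),  # class A
    rng.normal(5.0, 1.0, size=(50, 32)),  # class B
])

# Embed the 32-D features into 2-D for visual cluster inspection.
embedding = TSNE(n_components=2, perplexity=30,
                 init="pca", random_state=0).fit_transform(features)
print(embedding.shape)  # (100, 2)
```

Plotting the embedding colored by class label then reveals whether the network has learned features that separate the shape classes, which is the diagnostic role t-SNE plays here.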

Date of Conference: September 17-20, 2019

Track: Machine Learning for SSA Applications
