Attitude Propagation of Resident Space Objects with Recurrent Neural Networks

Davide Amato, University of Arizona; Roberto Furfaro, University of Arizona; Aaron J. Rosengren, University of Arizona; Mohammad Maadani, University of Arizona

Keywords: Attitude propagation, recurrent neural networks, long short-term memory, astrodynamics, SSA

Abstract:

Knowledge of the spacecraft rotational state is crucial for the modelling of both atmospheric drag and solar radiation pressure perturbations. However, the straightforward propagation of coupled orbital-attitude dynamics is a computationally daunting task, because the rotational motion introduces perturbation time scales that are much smaller than the orbital period. The maximum step size attainable by a numerical solver is thus heavily constrained, even with efficient orbit propagation techniques. This is even more critical for HAMR (High Area-to-Mass Ratio) objects, whose flexibility and high area-to-mass ratio result in very irregular rotational motion. Past works on this topic have focused on decoupling attitude and orbit dynamics during propagation intervals, and then applying an Encke-type correction to recover the coupled motion. In another approach, the two dynamics are propagated separately and are coupled only when a threshold based on entropy measures is triggered. Modern parallel computing algorithms have also been exploited to develop highly efficient numerical solvers for the integration of attitude dynamics. While these approaches certainly improve performance over traditional integration, they cannot deal efficiently with highly irregular rotational motion (characteristic of HAMR objects); moreover, they may require information that is often not readily available, such as the state covariance.

We propose using machine learning techniques based on Recurrent Neural Networks (RNNs) for the propagation of attitude dynamics. The incorporation of a feedback loop in RNNs makes them easily adaptable to the prediction of a time series, unlike classical feedforward networks. A particular type of RNN, the Long Short-Term Memory (LSTM) network, provides gating structures that selectively remember or forget characteristics of past data in the time series, which greatly increases the efficiency of the training process. In addition, evaluating a trained RNN/LSTM is relatively inexpensive, its structure makes it easily parallelizable, and a variety of RNN/LSTM architectures are already implemented in publicly available software libraries such as Google TensorFlow. Thus, they are ideally suited to the prediction of time series representing rotational motion, and are able to deal with multiple dynamical time scales.
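To make the remember/forget mechanism concrete, the following is a minimal sketch of a single LSTM time step, written with scalar states and illustrative weights (the weight names and values are hypothetical, not from the paper; real applications would use a library implementation such as TensorFlow's):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step with scalar input and state (illustrative only).

    W maps gate names to scalar weights/biases: forget (f), input (i),
    candidate (g), and output (o), each with input (x), recurrent (h),
    and bias (b) terms.
    """
    z = lambda k: W[k + "x"] * x + W[k + "h"] * h_prev + W[k + "b"]
    f = sigmoid(z("f"))        # forget gate: fraction of c_prev to keep
    i = sigmoid(z("i"))        # input gate: how much new info to write
    g = math.tanh(z("g"))      # candidate cell-state update
    o = sigmoid(z("o"))        # output gate: how much state to expose
    c = f * c_prev + i * g     # selectively remember/forget
    h = o * math.tanh(c)       # hidden output, fed back at the next step
    return h, c
```

The cell state `c` is what lets the network carry information across many time steps, which is why LSTMs cope with the multiple time scales present in rotational motion better than plain feedforward networks.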

In this work, we compare RNN/LSTMs to the numerical integration of the kinematic equations for the attitude prediction of a spacecraft. We neglect orbital motion in this initial study, since the main difficulty lies in the computation of the attitude, for the reasons described above. The RNN/LSTM and numerical integration solutions are compared in terms of accuracy and required CPU time.
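As a baseline for what "numerical integration of the kinematic equations" involves, here is a sketch of a fixed-step RK4 integrator for the quaternion kinematic equation dq/dt = ½ q ⊗ (0, ω), assuming a body-frame angular velocity ω; the function names and the choice of RK4 are illustrative, not the paper's specific solver:

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return [aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw]

def qdot(q, omega):
    """Attitude kinematics: dq/dt = 0.5 * q ⊗ (0, omega_body)."""
    return [0.5 * c for c in quat_mul(q, [0.0, *omega])]

def rk4_step(q, omega, dt):
    """One classical RK4 step for the quaternion, assuming omega is
    constant over the step; renormalizes to keep ||q|| = 1."""
    k1 = qdot(q, omega)
    k2 = qdot([qi + 0.5 * dt * ki for qi, ki in zip(q, k1)], omega)
    k3 = qdot([qi + 0.5 * dt * ki for qi, ki in zip(q, k2)], omega)
    k4 = qdot([qi + dt * ki for qi, ki in zip(q, k3)], omega)
    q_new = [qi + dt / 6.0 * (a + 2*b + 2*c + d)
             for qi, a, b, c, d in zip(q, k1, k2, k3, k4)]
    n = math.sqrt(sum(c * c for c in q_new))
    return [c / n for c in q_new]
```

When ω is highly irregular, as for HAMR objects, the step size of such a solver must shrink to resolve the fastest rotational time scale, which is exactly the cost that a trained RNN/LSTM predictor aims to avoid.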

Date of Conference: September 11-14, 2018

Track: Astrodynamics
