Automated 6DOF Satellite Pose Estimation From Resolved Ground-Based Imagery

Thomas Dickinson, AFIT/CI, Rochester Institute of Technology Center for Imaging Science; Derek Walvoord, Rochester Institute of Technology Center for Imaging Science; Michael Gartley, Rochester Institute of Technology Center for Imaging Science

Keywords: deep learning, machine learning, pose estimation, 6DOF, computer vision, domain gap, automation

Abstract:

Automated satellite pose estimation enhances spacecraft health assessment and behavior monitoring and is a valuable part of future Space Domain Awareness (SDA) architectures. Large, ground-based electro-optical (EO) telescopes equipped with adaptive optics (AO) produce well-resolved imagery of Low Earth Orbit (LEO) satellites, but this imagery is rarely employed for pose estimation because visual interpretation is difficult and manual processing is labor-intensive. In recent years, supervised deep learning (DL) approaches for automated pose estimation have excelled. However, sufficient real training data is challenging to collect and label, and models trained solely on simulated data often perform poorly on real imagery. Our research focuses on developing a methodology to simulate realistic AO imagery and train robust deep learning models, facilitating automated, real-time satellite pose estimation and bridging the simulation-to-real domain gap. This would provide the ability to accurately track a satellite’s position and orientation in real time, enhancing SDA. This work assumes images contain a single satellite and that an accurate satellite CAD model is available.

We first demonstrated simpler DL models designed for 3 degrees of freedom (3DOF) pose estimation (rotation only). Ultimately, practicality demands a pose model designed for 6 degrees of freedom (6DOF) pose estimation – 3DOF rotation, 2DOF translation, and 1DOF scale. Additionally, the model must handle the challenges of partial illumination along with blur and noise. A Seasat (SATCAT 10967) CAD model was used to render 110,000 unique 6DOF poses and illuminations. Illumination geometries were chosen randomly and constrained to be physically valid for a full year of Seasat passes over a given ground site during terminator conditions. Random augmentations were used to degrade the pristine renders and create training and test sets representative of real imagery.
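The degradation step described above can be illustrated with a minimal sketch: Gaussian blur standing in for atmospheric/AO residual blur, plus additive noise standing in for the detector. The function name and parameter values here are illustrative assumptions, not the paper's actual augmentation pipeline.

```python
import numpy as np

def degrade_render(img, rng, blur_sigma=2.0, noise_sigma=0.02):
    """Degrade a pristine render (float image in [0, 1]) with a separable
    Gaussian blur and additive Gaussian noise. Parameter values are
    illustrative, not the paper's settings."""
    # Build a normalized 1-D Gaussian kernel.
    radius = int(3 * blur_sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / blur_sigma) ** 2)
    k /= k.sum()
    # Separable blur: convolve rows, then columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # Additive Gaussian noise as a stand-in for sensor noise.
    noisy = blurred + rng.normal(0.0, noise_sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0  # toy stand-in for a rendered satellite
out = degrade_render(img, rng)
```

In practice such augmentations would be randomized per sample (blur width, noise level, illumination scaling) so the training set spans the range of degradations seen in real AO imagery.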

First, a coarse localizer network containing roughly 30M parameters was trained to regress satellite bounding boxes. The bounding box regressor achieved mean Intersection over Union (IoU) = 0.84 for the degraded test set. The predicted bounding boxes were used to crop the initial images to the satellite region of interest (RoI) and resize them for input to the 6DOF pose model. The 6DOF pose model totals 83M parameters and was trained from scratch in stages to ultimately predict a 6-dimensional representation for 3DOF rotation – essentially the first two columns of a 3×3 rotation matrix. Normalized X/Y translation and universal scale were also predicted.
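The 6-dimensional rotation representation mentioned above (the first two columns of the rotation matrix) can be mapped back to a full, valid rotation by Gram-Schmidt orthonormalization, as popularized by Zhou et al.'s continuous 6D representation. A minimal sketch of that mapping, not the authors' code:

```python
import numpy as np

def rot6d_to_matrix(r6):
    """Map a 6-D rotation representation (two stacked 3-vectors,
    i.e. the predicted first two columns of R) to a valid 3x3
    rotation matrix via Gram-Schmidt orthonormalization."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)       # first column: unit length
    a2 = a2 - np.dot(b1, a2) * b1      # remove component along b1
    b2 = a2 / np.linalg.norm(a2)       # second column: orthonormal to b1
    b3 = np.cross(b1, b2)              # third column: right-handed completion
    return np.stack([b1, b2, b3], axis=1)
```

Because any (non-degenerate) pair of predicted 3-vectors maps to a proper rotation, this representation avoids the discontinuities that make quaternions and Euler angles harder to regress.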

Passing the full test set first through the coarse localizer then through the 6DOF pose model, we demonstrated a mean rotational error of 13.2° (2.7° median), a mean 2-axis translation error of 31 cm (18 cm median), and a mean slant range error of 1.4% or 13.7 km (1.2% or 11.8 km median). Translation error can also be expressed angularly as a mean of 300 nrad (170 nrad median) or in pixel space (relative to the uncropped images) as a mean of 2.1 pixels (1.2 pixels median). These performance metrics are for individually tested images. When time-series data were available, a Kalman filter was applied to smooth out noisy pose predictions. The full model (coarse localizer, 6DOF pose model, and temporal Kalman filter) achieved a mean rotational error of 4° (3° median, 11° maximum) on 105 frames of real video of Seasat captured by a ground-based telescope, successfully bridging the domain gap.
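The rotational errors reported above are presumably geodesic angles between predicted and ground-truth rotations, the standard metric for 3DOF attitude error; the exact metric definition is an assumption here. A sketch of that computation:

```python
import numpy as np

def rotation_error_deg(R_pred, R_true):
    """Geodesic angle (degrees) between two rotation matrices:
    the magnitude of the single rotation taking R_true to R_pred."""
    R_rel = R_pred @ R_true.T
    # For a rotation matrix, trace(R) = 1 + 2*cos(theta);
    # clip guards against round-off outside [-1, 1].
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

A per-frame sequence of such errors is what a temporal filter like the Kalman filter described above would smooth, damping outlier pose predictions using the motion model across frames.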

DISTRIBUTION A. Approved for public release: distribution is unlimited.
Public Affairs release approval #AFRL-2024-3237

Date of Conference: September 17-20, 2024

Track: Satellite Characterization
