Learned Satellite Radiometry Modeling from Linear Pass Observations

Kimmy Chang, Odyssey Systems–Space Systems Command (A&AS); Justin Fletcher, USSF SSC/SZG

Keywords: Neural Radiance Fields, Machine Learning, Imaging, SSA

Abstract:

Models of resident space objects enable change detection, provide functionality insights, and can improve orbit propagation accuracy. This paper explores the viability of pairing Neural Radiance Fields (NeRF) with data preprocessing on synthetic and real satellite images taken from linear pass observations to produce 3D models of satellites. The original NeRF method proposed by Mildenhall et al. created high-fidelity 3D reconstructions from real and synthetic datasets, but it entails several limitations: (1) it requires known camera parameters for each image, (2) it assumes the scene is geometrically, materially, and photometrically stable, (3) it converges well only on low-noise images, (4) it needs a substantial number of input views, and (5) it requires lengthy training. Imagery of satellites captured during real observational passes often fails to meet several of these criteria. As such, new approaches are needed to build a successful NeRF model for satellite applications.
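For context, the original NeRF represents a scene as a neural field of volume density and view-dependent color, and renders each camera ray \(\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}\) with the standard volume-rendering integral of Mildenhall et al.:

```latex
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```

Here \(\sigma\) is the predicted density, \(\mathbf{c}\) the predicted color, and \(T(t)\) the accumulated transmittance along the ray. Because \(\mathbf{r}(t)\) depends on the camera pose, limitation (1) above follows directly: without accurate camera parameters the rays are wrong and the reconstruction degrades.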
NeRF extensions published since the original have addressed challenges from this set of limitations. Nevertheless, none is suited to the unique demands of a satellite imaging application. We demonstrate that no current neural radiance field method is able to reconstruct a 3D model that is representative of the input satellite. Gaiani et al. provide evidence that preprocessing images prior to training can improve the performance of 3D reconstruction. Inspired by this approach, we test the efficacy of data preprocessing as a method to overcome the noisy and variable conditions present in satellite images.
To test this hypothesis, we use the SPEED+ synthetic and sunlamp datasets to fine-tune our model and data preprocessing pipeline. The synthetic dataset represents an ideal satellite object plane. The sunlamp dataset features different sources of illumination that capture corner cases, stray light, shadowing, and other visual effects; it thus mimics direct high-intensity homogeneous sunlight and helps ensure model robustness. From the results of synthetic and sunlamp dataset testing, we determine NeRF– to be the most viable model architecture for our purpose and the following combination of data preprocessing steps to be the most effective for high-quality 3D reconstructions: crop and center, denoise, and Gaussian area blur. Finally, we demonstrate the success of this approach on real satellite imagery.
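A minimal sketch of such a crop-and-center, denoise, and Gaussian-blur pipeline is shown below. The specific operators (centroid-based cropping, a median filter for denoising, and the threshold heuristic) are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def preprocess(img: np.ndarray, out_size: int = 128) -> np.ndarray:
    """Illustrative crop/center -> denoise -> Gaussian blur pipeline."""
    # 1) Crop and center: threshold the bright satellite against the dark
    #    sky, then take a square crop centered on its pixel centroid.
    #    (mean + 2*std is a heuristic threshold, assumed for this sketch.)
    thresh = img.mean() + 2 * img.std()
    ys, xs = np.nonzero(img > thresh)
    cy, cx = int(ys.mean()), int(xs.mean())
    half = out_size // 2
    crop = np.zeros((out_size, out_size), dtype=img.dtype)
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    y1, x1 = min(cy + half, img.shape[0]), min(cx + half, img.shape[1])
    crop[: y1 - y0, : x1 - x0] = img[y0:y1, x0:x1]

    # 2) Denoise: a small median filter suppresses hot pixels / shot noise
    #    (stand-in for whichever denoiser the full pipeline uses).
    den = median_filter(crop, size=3)

    # 3) Gaussian blur: softens residual speckle before NeRF training.
    return gaussian_filter(den, sigma=1.0)
```

Centering the object before training is the key step: it keeps the satellite at a consistent image location across the pass, which compensates partially for the unknown camera parameters that NeRF– must otherwise recover.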
In conclusion, our work makes the following contributions: (1) we show that the method of Neural Radiance Fields works for the satellite application field with both the synthetic SPEED+ dataset and real satellite images sourced from 1.6-meter and 3.6-meter Advanced Electro-Optical System (AEOS) telescopes located at the Air Force Maui Optical and Supercomputing (AMOS) Observatory; (2) we show that data preprocessing makes the Neural Radiance Fields approach viable; (3) NeRF– with data preprocessing is able to account for the lack of camera parameters and for various lighting conditions; and (4) we can create a 3D model of a satellite from only 32 images of limited angular range. This paper explores two geometric transformations and fourteen photometric transformations, which are used to propose an optimal data preprocessing pipeline for satellite images. Quantitatively, we show that our method surpasses the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) values of current state-of-the-art NeRF models and variants. To our knowledge, this paper is the first to explore the effects of data preprocessing on NeRF 3D reconstruction, both in the space domain and in the general computer vision community.
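For reference, the two metrics cited above can be computed as follows. PSNR is standard; the SSIM shown here is a simplified single-window (global) version of the usual sliding-window formulation, sufficient to illustrate the quantity being compared:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between a reference and a render."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def ssim_global(x: np.ndarray, y: np.ndarray, max_val: float = 1.0) -> float:
    """Single-window SSIM: compares luminance, contrast, and structure
    over the whole image (a simplification of windowed SSIM)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )
```

Higher is better for both: PSNR grows as the rendered novel view approaches the held-out ground-truth image, and SSIM approaches 1.0 for structurally identical images.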

Date of Conference: September 19-22, 2023

Track: Machine Learning for SDA Applications
