Kimmy Chang, Space Systems Command (A&AS); Justin Fletcher, Space Systems Command (A&AS);
Keywords: Image resolution, telescopes, imaging, atmospheric turbulence, space domain awareness
Abstract:
Spatially extended imaging of objects in low Earth orbit (LEO) provides useful satellite health and status information but is complicated by wavefront errors introduced by atmospheric turbulence. Image recovery through turbulence is a long-standing and extensively studied problem, but existing approaches often struggle to achieve high-quality restoration under the conditions typical of ground-based astronomical imaging. In this work, we propose an alternative approach in which modern computer vision models for learned image restoration are applied to the problem of recovering satellite images degraded by atmospheric turbulence.
Historically, most image restoration in the space domain has relied on blind deconvolution methods. These methods attempt to estimate the point spread function (PSF) and rely heavily on prior information about the PSF as well as on the length of the observation period. Recent works by Chen et al. and Shu et al. reflect an increased interest in using autoencoders and other neural network approaches for image restoration. However, no study has assessed the viability of large state-of-the-art image restoration models in the space domain.
In order to evaluate image restoration, we construct a custom dataset from the Scored Images of LEO Objects (SILO) dataset. The SILO dataset uses wave-optics simulations to generate Space-object National Imagery Interpretability Rating Scale (SNIIRS)-scored images of LEO satellites observed from a ground-based optical observatory under varied turbulence conditions. Whereas previous studies have examined degradation at the qualitative levels of “moderate” and “severe,” we provide a quantitative study of nineteen levels of degradation based on evenly distributed SNIIRS scores in the range 2.5–7 in steps of 0.25 SNIIRS. Additionally, we evaluate image restoration on real-world images sourced from the 3.6-meter Advanced Electro-Optical System (AEOS) telescope located at the Air Force Maui Optical and Supercomputing (AMOS) Observatory.
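As an illustration, the nineteen-level degradation grid described above can be sketched as follows; the array name is ours for illustration and is not part of the SILO dataset:

```python
import numpy as np

# Nineteen SNIIRS degradation levels, evenly spaced from 2.5 to 7.0
# in steps of 0.25 ((7.0 - 2.5) / 0.25 + 1 = 19 levels).
sniirs_levels = np.linspace(2.5, 7.0, num=19)

# Each simulated image in the curated dataset is tagged with one of
# these target SNIIRS scores, giving a quantitative degradation grade.
```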
We explore a total of seven image restoration methods: IMDN, BSRGAN, Real-ESRGAN, SwinIR, MemNet, CBDNet, and DPIR. These methods span deep learning deconvolution and non-deconvolution approaches, Generative Adversarial Networks (GANs), and Vision Transformers. We analyze the time and computational complexity of each method alongside the quality of its restored images.
Given the results of our study, we select the best-performing image restoration method for further optimization. We demonstrate that our finetuned model achieves superior image restoration with less data and shorter training time than traditional deconvolution methods. We present our selected model's efficacy at a simulated degradation level of 2.5 SNIIRS, a level from which image restoration was previously infeasible. For observations with the AEOS 3.6-meter telescope, this corresponds to the moderate to severe turbulence that can be expected.
In conclusion, our work makes the following contributions: (1) we curate a custom simulated satellite dataset with nineteen grades of degradation and propose a quantitative framework for evaluating the restoration of images degraded by atmospheric turbulence; (2) we conduct a comprehensive study of the application of state-of-the-art models to the restoration of satellite images; and (3) we show that it is possible to restore satellite images at higher levels of atmospheric turbulence degradation than was possible with previous methods. Our finetuned model surpasses the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) values of both traditional and state-of-the-art approaches.
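For reference, the two reported metrics can be sketched as below. The function names are ours, and the SSIM shown is a simplified single-window variant rather than the standard sliding-window implementation used in most evaluation toolkits:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0, k1=0.01, k2=0.03):
    """Simplified SSIM computed over the whole image as a single window."""
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Higher values of both metrics indicate a restored image closer to the turbulence-free reference.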
Date of Conference: September 19-22, 2023
Track: Machine Learning for SDA Applications