Deep Multi-Task And Representation Learning Method for Atmospheric Turbulence Prediction and Correction from Focal Plane Speckle Images

Vignesh Kumar Pandian Sathia, Georgia State University; Nick Murphy, Georgia State University; Ruchir Namjoshi, Georgia State University; Dustin Kempton, Georgia State University; Fabien Baron, Georgia State University; Stuart Jefferies, Georgia State University; Berkay Aydin, Georgia State University

Keywords: machine learning, adaptive optics, speckle image, zernike coefficient, atmospheric turbulence

Abstract:

A major challenge when using ground-based telescopes to view objects in space is the distortion produced by layers of turbulence in the Earth's atmosphere. The current state of the art to counteract this problem is adaptive optics. Such systems use additional wavefront sensors to measure atmospheric conditions and then adjust mirrors in the telescope's optical path to correct the perceived image. While this process improves the clarity of the image, it also requires many small, interconnected adjustments that may not scale well as the number of mirror segments in a telescope increases.

In this work, we propose a deep multi-task learning approach coupled with representation learning to determine monochromatic wavefront aberrations at a single telescope aperture in real time. This approach uses shared weights to learn the relevant, interrelated wind-layer parameters, which allows for better scaling and control of multi-segment mirror adaptive optics. Through machine learning, we aim to learn both the characteristics of the atmospheric layers that caused the distortion and the information needed to correct the applied convolution.
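To make the shared-weight, multi-task idea concrete, the following is a minimal PyTorch-style sketch of a single encoder feeding separate prediction heads. The layer sizes, head names, and output dimensions (number of Zernike modes, number of wind-layer parameters) are illustrative assumptions, not the architecture described in the paper.

```python
# Minimal sketch of a shared-encoder multi-task model (assumed layout,
# not the paper's architecture).
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    def __init__(self, latent_dim=64, n_zernike=36, n_wind_params=4):
        super().__init__()
        # Shared convolutional encoder applied to each speckle frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Task-specific heads share everything upstream of the latent vector.
        self.wind_head = nn.Linear(latent_dim, n_wind_params)   # e.g. wind speed/direction per layer
        self.zernike_head = nn.Linear(latent_dim, n_zernike)    # Zernike coefficients for correction

    def forward(self, frames):
        # frames: (batch, 1, 256, 256) speckle images
        z = self.encoder(frames)
        return self.wind_head(z), self.zernike_head(z)

model = SharedEncoderMultiTask()
wind_pred, zernike_pred = model(torch.randn(8, 1, 256, 256))
```

Sharing the encoder means gradients from both tasks shape a single representation, which is what allows the parameter count to stay modest as more correction outputs are added.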

To accomplish this task, we first generate a realistic training dataset of time series of speckle image frames. Each sample contains 2,000 frames of 256 by 256 pixels, representing an observation from a telescope with a one-meter aperture. These rasters are further distorted by a point spread function that simulates the effects of relevant atmospheric conditions. Each data sample is annotated with the simulated atmospheric conditions that produced it, as well as a range of Zernike coefficients that could be used to correct its distortions.
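The abstract does not detail the simulator, but the link between a Zernike-described wavefront error and a distorted frame follows standard Fourier optics. The NumPy sketch below is only an illustration of that relation under simplifying assumptions (a point source, three hand-picked low-order modes, made-up coefficients); it is not the data-generation pipeline used in this work.

```python
# Toy sketch: form one aberrated frame from a Zernike-described wavefront.
# All modes and coefficient values are illustrative assumptions.
import numpy as np

N = 256                                   # frame size in pixels
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
r, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = (r <= 1.0).astype(float)          # circular aperture, normalized radius

# A few low-order Zernike modes (tip, tilt, defocus) as an example basis.
zernike_modes = np.stack([
    2 * r * np.cos(theta),                # tip
    2 * r * np.sin(theta),                # tilt
    np.sqrt(3) * (2 * r**2 - 1),          # defocus
])
coeffs = np.array([0.5, -0.3, 0.8])       # radians of wavefront error (made up)

phase = np.tensordot(coeffs, zernike_modes, axes=1) * pupil
field = pupil * np.exp(1j * phase)        # complex field in the pupil plane
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()                          # point spread function of the aberrated system

# Convolving a pristine object image with this PSF yields the distorted frame.
obj = np.zeros((N, N)); obj[N // 2, N // 2] = 1.0   # point source for simplicity
frame = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
```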

This sparse dataset was then processed with multi-task, self-supervised learning models to learn sufficiently low-dimensional representations, extract important features, and use those features to predict the atmospheric conditions. Additional ablation studies were then conducted to optimize parameters such as the number of latent dimensions used during model training.

Instead of reconstructing the images (which is memory intensive and prone to collapse), we exploit the temporal aspect of the data to align the representations in lower dimensions, thus reducing the complexity of the model (a sketch of this idea follows below).
Applications of this work include predicting the precise adjustment values required for adaptive optics systems in real time, reducing the need for external wavefront sensors.
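As a sketch of the temporal-alignment idea: embeddings of neighboring frames in a speckle sequence can be pulled together with a contrastive objective rather than by reconstructing pixels. The InfoNCE-style loss below is one common way to do this and is only an assumed illustration; the specific self-supervised objective used in this work may differ.

```python
# Hypothetical temporal-alignment objective: neighboring frames act as
# positives, other samples in the batch act as negatives (assumed, not
# the paper's exact loss).
import torch
import torch.nn.functional as F

def temporal_alignment_loss(z_t, z_next, temperature=0.1):
    """z_t, z_next: (batch, latent_dim) embeddings of frames t and t+1."""
    z_t = F.normalize(z_t, dim=1)
    z_next = F.normalize(z_next, dim=1)
    logits = z_t @ z_next.T / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(z_t.size(0), device=z_t.device)
    return F.cross_entropy(logits, targets)

# Usage with the shared encoder sketched earlier (hypothetical):
# z_t = model.encoder(frames_t); z_next = model.encoder(frames_t_plus_1)
# loss = temporal_alignment_loss(z_t, z_next)
```

Because only low-dimensional embeddings are compared, the memory cost stays far below that of frame reconstruction, which is the motivation stated above.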

Date of Conference: September 16-19, 2025

Track: Atmospherics/Space Weather
