Closely Spaced Object Classification Using MuyGPyS

Kerianne Pruett, Lawrence Livermore National Laboratory; Nathan McNaughton, University of California, Berkeley; Michael Schneider, Lawrence Livermore National Laboratory

Keywords: SDA, SSA, machine learning, classification, RPO, CSO, optical imaging, modeling, simulation

Abstract:

The safety and security of space missions require understanding how objects are moving and operating within the space domain, making accurate detection methods a crucial capability. Detecting rendezvous and proximity operation (RPO) maneuvers challenges traditional ground-based optical space domain awareness methods, as two objects close together can fall within the point-spread function of the optical system and thus appear as a single object. Machine learning can be used to disambiguate closely spaced objects (CSOs) in images; however, most traditional methods rely on a large number of training samples, and their classification accuracy breaks down rapidly in low signal-to-noise regimes. In contrast, Gaussian processes enable a probabilistic approach to the classification problem, requiring fewer training samples and performing better in the presence of noisy data. Gaussian processes additionally produce probabilistic outputs, which enable reliable flagging of ambiguous images for follow-up human inspection. RPO data exist in limited quantities, and optical images are often of limited quality due to seeing, weather, or sensor parameters, making a Gaussian process approach superior to traditional machine learning methods. We present CSO classification results using MuyGPyS, a fast and flexible Gaussian process Python package developed by Lawrence Livermore National Laboratory (LLNL). We evaluate MuyGPyS' ability to detect CSOs as a function of angular distance and magnitude difference between objects and across different signal-to-noise regimes. Conducting the analysis in this way yields results that are independent of orbital regime or optical sensor specifications, as the optical effects of sensor performance and satellite altitude fall within the sampled parameter spaces. Images used in this analysis are simulated using GalSim, an open-source Python package for simulating highly accurate ground-based optical images.
Satellites are then injected into GalSim images using the LLNL-developed orbital dynamics simulation package for space situational awareness, SSAPy. We assume Lambertian sphere emission models and commercial-off-the-shelf ground-based optical sensors at a variety of locations, and our image simulations include accurate atmosphere and noise models given the various locations and observing conditions. We find that our Gaussian process approach outperforms conventional machine learning methods, especially in the more challenging regions of simulation parameter space. 
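The core ambiguity the abstract describes, two point sources falling within the point-spread function and appearing as one, can be illustrated without the full GalSim/SSAPy pipeline. The sketch below is a NumPy-only toy (the paper's actual simulations use GalSim's atmosphere and noise models with SSAPy-injected satellites); all pixel scales, FWHM values, and the peak-counting heuristic are hypothetical choices for illustration.

```python
# Toy illustration of the CSO blending problem. NOT the paper's pipeline:
# the real images come from GalSim + SSAPy with realistic atmosphere,
# sensor, and orbit models. All numbers here are hypothetical.
import numpy as np

def render_pair(sep_pix, flux_ratio, fwhm_pix=4.0, size=33):
    """Render two Gaussian-PSF point sources separated along x."""
    sigma = fwhm_pix / 2.355  # FWHM -> Gaussian sigma
    yy, xx = np.mgrid[:size, :size].astype(float)
    cx = cy = (size - 1) / 2
    def psf(x0, y0, flux):
        return flux * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    return psf(cx - sep_pix / 2, cy, 1.0) + psf(cx + sep_pix / 2, cy, flux_ratio)

def count_peaks(img):
    """Count local maxima above 10% of the global peak (crude separability proxy)."""
    core = img[1:-1, 1:-1]
    is_peak = ((core > img[:-2, 1:-1]) & (core > img[2:, 1:-1]) &
               (core > img[1:-1, :-2]) & (core > img[1:-1, 2:]) &
               (core > 0.1 * img.max()))
    return int(is_peak.sum())

blended  = render_pair(sep_pix=1.5, flux_ratio=1.0)   # well inside the PSF width
resolved = render_pair(sep_pix=10.0, flux_ratio=1.0)  # well separated
```

At a 1.5-pixel separation the two sources merge into a single local maximum, while at 10 pixels two distinct peaks survive; sweeping `sep_pix` and `flux_ratio` mirrors the angular-distance and magnitude-difference axes the abstract sweeps over.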
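The probabilistic-output advantage the abstract attributes to Gaussian processes, posterior class probabilities that let ambiguous images be flagged for human inspection, can be sketched with a generic GP classifier. MuyGPyS has its own nearest-neighbors-based API that this does not reproduce; scikit-learn's `GaussianProcessClassifier` stands in purely to show the idea, and the synthetic features, kernel length scale, and flagging threshold are all hypothetical.

```python
# Generic GP-classification sketch. scikit-learn stands in for MuyGPyS here;
# the features, kernel, and ambiguity threshold are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic stand-ins for flattened image cutouts:
# class 0 = single object, class 1 = closely spaced pair.
n_per_class, n_features = 40, 16
X0 = rng.normal(0.0, 1.0, size=(n_per_class, n_features))
X1 = rng.normal(0.8, 1.0, size=(n_per_class, n_features))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

gpc = GaussianProcessClassifier(kernel=RBF(length_scale=2.0), random_state=0)
gpc.fit(X, y)

# Posterior class probabilities: values near 0.5 indicate the classifier
# cannot confidently separate CSO from single-object, so flag those
# images for human follow-up rather than forcing a hard label.
X_test = rng.normal(0.4, 1.0, size=(10, n_features))
proba = gpc.predict_proba(X_test)[:, 1]
ambiguous = np.abs(proba - 0.5) < 0.15  # hypothetical flagging threshold
```

The key design point is that a hard classifier would silently mislabel low signal-to-noise cutouts, whereas the GP posterior makes its own uncertainty visible and actionable.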

Date of Conference: September 19-22, 2023

Track: Conjunction/RPO
