Felicitas Hernandez, Northrop Grumman; Scott Almond, Northrop Grumman; Max Li, Northrop Grumman
Keywords: Space Domain Awareness, SDA, SSA, satellite tracking, sensor networks, optical telescopes, infrared, visible, space-based observations, artificial intelligence, machine learning, CONOPs, sensing, simulation, image processing, mission design, constellation
Abstract:
From 2015 through 2020, a Russian Luch/Olymp satellite made approaches as close as 2 km to several geosynchronous satellites, raising immediate concerns of a collision and resulting space debris. In 2019, the Chinese TJS 3 demonstrated synchronized maneuvers with its apogee kick motor during a day/night terminator crossing, illustrating a technique for spoofing satellite position. Events like these probe the limits of existing sensor networks, necessitating continued enhancement to address gaps against evolving and future threats. This talk presents a novel approach to anti-spoofing and identification of non-resolvable objects using space-based hyperspectral imagery with machine learning for space domain awareness.
The capability presented here is built atop the traditional space domain awareness baseline: information about target orbits and concept of operations (light curves) is generated using standard techniques from a panchromatic sensor co-located with the hyperspectral sensor. This metadata is used to self-cue the hyperspectral sensor and is fed, along with the corresponding hyperspectral imagery, into a hybrid machine learning model to extract operational information such as country of origin, age of materials, thruster chemical composition, spacecraft physical temperature, status (debris vs. operational), laser communication usage, and continuous custody (unique spectral fingerprint). These properties can then serve as high-fidelity inputs for surveillance applications or for actionable decisions.
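As a minimal sketch of the fusion idea described above (not the authors' model), panchromatic-derived metadata and per-target spectral features can be concatenated into one feature vector before classification. All names, dimensions, and the nearest-centroid classifier below are illustrative assumptions standing in for the hybrid machine learning model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_targets, n_bands, n_meta = 200, 64, 4

# Synthetic stand-ins for the two co-located sensor products
spectra = rng.normal(size=(n_targets, n_bands))   # per-target mean spectra
metadata = rng.normal(size=(n_targets, n_meta))   # panchromatic-derived cues
labels = rng.integers(0, 2, size=n_targets)       # e.g., debris vs. operational

# Hybrid fusion: concatenate metadata with spectral features
features = np.hstack([metadata, spectra])

# Nearest-centroid classifier as a toy stand-in for the hybrid ML model
centroids = np.stack([features[labels == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)  # one class estimate per target
```

In a real system the classifier would be far more capable, but the early fusion of orbit/light-curve metadata with spectral features is the point being illustrated.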
Hyperspectral imagery has been used extensively in Low Earth Orbit applications to image the sunlit side of the Earth in both the visible and infrared spectra. Because of the high volumes of combined spatial and spectral data, algorithms are commonly employed to extract only the features of interest, making optimal use of finite downlink bandwidth. Conversely, the hyperspectral space domain awareness capability operates in a photon-starved environment: frame rates and data volumes are notably lower than in related missions, motivating the application of more complex deep machine learning models and opening the option of ground-based processing. In this case, machine learning is applied as a tool for extracting intelligence about characteristics of interest in a high-dimensional space rather than for compression.
The extraction of hyperspectral target parameters is inherently nonlinear, and a closed-form math model is insufficient to precisely capture expected sensor performance across all scenarios. Thus, both real-world data and an end-to-end simulation of the processing chain are needed to evaluate the capability of the hyperspectral sensor architecture. This talk presents a four-stage simulated photons-to-knowledge data pipeline in which real-world spectral inputs and a synthetic scene generator create representative inputs to subsequent on-board processing and machine learning intelligence extraction. Real-world data inputs consist of both Bidirectional Reflectance Distribution Function lab collects of spacecraft material samples and telescope observations of orbiting man-made objects; both data types are used to inform target simulation in the scene generator. The synthetic scene generator produces representative scenes based on measured hardware performance parameters and first-principles physics, with particular care taken to capture star clutter down to the dimmest magnitudes. Both panchromatic and hyperspectral on-board algorithms are used to generate intermediate data products in the simulated payload. The output from this pipeline is used for both training and performance assessment of the machine learning classifier model. The development of this end-to-end payload math model also supports rapid iteration of design parameters to optimize the hyperspectral sensor hardware and software solution. This synthetic scene approach can also extend, or be augmented by, training as on-orbit data becomes available. A broad parameter sensitivity assessment is performed, examining classifier skill across different features, distance-to-target impact, revisit rate in a constellation configuration, and other target orbital parameters.
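The four-stage structure of such a photons-to-knowledge pipeline can be sketched as a chain of functions. Everything below is a hypothetical skeleton, not the actual simulator: stage names, the Poisson photon model, and the toy "classifier" are all assumptions chosen to show how stages hand data forward under photon-starved conditions:

```python
import numpy as np

rng = np.random.default_rng(1)

def stage1_spectral_inputs(n_bands=64):
    """Stand-in for measured BRDF / telescope reflectance spectra."""
    return rng.uniform(0.1, 1.0, size=n_bands)

def stage2_scene_generator(reflectance, n_frames=8, photon_rate=50.0):
    """Photon-starved synthetic scene: Poisson photon counts per band, per frame."""
    return rng.poisson(photon_rate * reflectance, size=(n_frames, reflectance.size))

def stage3_onboard_processing(frames):
    """Co-add frames and subtract a background pedestal (greatly simplified)."""
    spectrum = frames.sum(axis=0).astype(float)
    return spectrum - spectrum.min()

def stage4_intelligence_extraction(spectrum):
    """Toy classifier: compare energy in the two spectral halves."""
    half = spectrum.size // 2
    return "class_A" if spectrum[:half].sum() >= spectrum[half:].sum() else "class_B"

truth = stage1_spectral_inputs()
frames = stage2_scene_generator(truth)
spectrum = stage3_onboard_processing(frames)
label = stage4_intelligence_extraction(spectrum)
```

Because the scene generator is parameterized (band count, frame count, photon rate), sweeping those inputs gives exactly the kind of design-parameter iteration and sensitivity assessment the abstract describes.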
Applying visible and infrared hyperspectral imagery machine learning to space domain awareness increases the fidelity of the extracted information, but also the complexity of the processing chain. Articulating the performance of such a solution requires the development of new metrics, based on anticipated use cases, that go beyond probability of detection and additionally capture the nuanced effects of the spectral information. The work presented here demonstrates a proof of concept that can be extended to fully accomplish these goals.
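One example of a metric that goes beyond probability of detection is per-class classifier skill summarized from a confusion matrix. The choice of balanced accuracy below is illustrative only; the metrics in the talk are use-case specific and not stated here:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes):
    """Mean per-class recall computed from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    # Recall per true class; clip guards against empty classes
    per_class_recall = cm.diagonal() / cm.sum(axis=1).clip(min=1)
    return per_class_recall.mean()

# Hypothetical labels, e.g., 0 = debris, 1 = operational, 2 = unknown
y_true = np.array([0, 0, 1, 1, 1, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])
score = balanced_accuracy(y_true, y_pred, 3)
```

Unlike a single detection probability, a metric of this shape exposes which target classes (or extracted spectral features) the system handles well and which it does not.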
Date of Conference: September 19-22, 2023
Track: Space-Based Assets