Object Detection from Radon Transformations using Machine Learning Techniques

Thomas Walker, Lockheed Martin Australia; Boris Repasky, Lockheed Martin Australia; Timothy Payne, Lockheed Martin Australia; Greg Madsen, Lockheed Martin Australia

Keywords: machine learning, artificial intelligence, regression, Radon transformation, Space Situational Awareness, object detection, Space Domain Awareness

Abstract:

Efficiently examining astronomical images for objects is becoming increasingly important as the space domain becomes more crowded. Increasing the effectiveness of object detection algorithms would improve space debris management, increase launch safety and inform future satellite placement.

To enable wide-area, low-cost coverage of the ground-to-space domain, the FireOPAL Network uses consumer-grade digital single lens reflex (DSLR) cameras to provide persistent space situational awareness. Using off-the-shelf DSLR cameras on Earth to detect objects in Earth orbit poses a number of unique challenges, including low signal-to-noise ratios (SNR), extremely small object sizes, sparse object density, and a large amount of interfering light from stars. Using standard lenses also introduces chromatic aberration and complex vignetting functions.

To deal with these unique challenges we present a novel approach that combines the Radon transform's superior line-feature extraction with techniques from machine learning and computer vision. Our approach applies a heatmap-regression-based convolutional neural network (CNN) to the Radon space rather than to raw pixel values. By leveraging the Radon transform, our approach is able to detect even the faintest of low Earth orbit (LEO) satellites, pushing the detection capability of off-the-shelf DSLR cameras to its limit.
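As a concrete illustration of the line-integration property the method relies on (this is not the authors' implementation; the image size, streak amplitude, and noise level are arbitrary assumptions), a minimal discrete Radon transform can be sketched by rotating the image and summing along one axis. A faint streak that is barely visible per pixel integrates into a single bright peak in Radon space:

```python
import numpy as np
from scipy.ndimage import rotate

def radon_transform(image, angles):
    """Minimal discrete Radon transform: rotate the image and sum
    intensities down the columns for each projection angle."""
    sinogram = np.empty((image.shape[0], len(angles)))
    for i, angle in enumerate(angles):
        rotated = rotate(image, angle, reshape=False, order=1)
        sinogram[:, i] = rotated.sum(axis=0)
    return sinogram

# Synthetic frame: Gaussian background noise plus a faint diagonal
# streak standing in for a satellite trail (amplitudes are illustrative).
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.05, (64, 64))
for t in range(64):
    frame[t, t] += 0.5  # the streak: faint per pixel

angles = np.arange(0.0, 180.0, 1.0)
sino = radon_transform(frame, angles)

# The streak's energy concentrates into one bin at the aligned angle.
offset, theta_idx = np.unravel_index(np.argmax(sino), sino.shape)
print(f"peak at offset {offset}, projection angle {angles[theta_idx]:.0f} deg")
```

Because the projection sums the whole streak along its length, the peak stands far above the per-pixel SNR of the trail itself, which is what makes Radon-space detection attractive for faint LEO objects.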

In this paper we show that our approach is superior to methods relying on Radon transforms alone. Many approaches for identifying objects in Radon space rely on peak detection and thresholding. This is effective for detecting objects that create clear streak-like signals, but is less robust when noise or other artefacts dominate the scene, causing spurious peaks in the Radon space. We show that our machine learning approach is able to distinguish between the true streak-like signal and other line-like artefacts, such as imperfectly removed stars or hot pixels inherent to the sensor, even when the signal from the true satellite is over an order of magnitude fainter than the interfering artefact.
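The failure mode of naive thresholding can be sketched with a toy sinogram (all amplitudes and the MAD-based threshold below are illustrative assumptions, not the paper's data or method). A hot pixel is a point source, so it projects into every angle, tracing a sinusoidal ridge across Radon space; a simple threshold then flags hundreds of spurious bins alongside the single compact peak of the genuine, fainter streak:

```python
import numpy as np

rng = np.random.default_rng(1)
n_offsets, n_angles = 64, 180
sino = rng.normal(0.0, 0.4, (n_offsets, n_angles))  # background noise

# A hot pixel projects into every angle: a sinusoidal ridge of
# roughly constant amplitude across the whole sinogram.
ridge = (32 + 20 * np.sin(np.deg2rad(np.arange(n_angles)))).astype(int)
sino[ridge, np.arange(n_angles)] += 8.0

# The genuine streak: one compact peak, fainter than the artefact ridge.
sino[40, 45] += 4.0

def detect_peaks(s, k=5.0):
    """Naive detector: flag bins more than k robust sigmas above the median."""
    med = np.median(s)
    sigma = 1.4826 * np.median(np.abs(s - med))  # MAD-based scale estimate
    return np.argwhere(s > med + k * sigma)

hits = detect_peaks(sino)
print(f"{hits.shape[0]} bins exceed the threshold; only 1 is the true streak")
```

Thresholding recovers the true peak but buries it among roughly one spurious detection per projection angle of the hot-pixel ridge, which is the ambiguity the learned heatmap regressor is designed to resolve.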

Our method significantly outperforms traditional Radon-transform-based methods for satellite detection in low-SNR settings where significant artefact noise is present, increasing both the average precision (AP) and the probability of detection (Pd). These results ultimately yield more true-positive detections of orbiting objects and demonstrate the effectiveness of machine learning in this scenario.

Date of Conference: September 14-17, 2021

Track: Machine Learning for SSA Applications
