Seeing Stars: Learned Star Localization for Narrow-Field Astrometry

Violet Felt, U.S. Space Force; Ian McQuaid, KBR; Peter Thomas, KBR; Sean Sullivan, Pacific Defense Solutions; Jeff Houchard, EO Solutions; Justin Fletcher, USSF SSC/SZG

Keywords: Astrometry, Machine Learning, Convolutional Neural Networks, Transformers, Object Detection, Instance Segmentation, Line Segment Detection

Abstract:

Space domain awareness (SDA) includes the detection, astrometric localization, and identification of artificial Earth satellites. While deep learning solutions for satellite detection and identification have been explored, astrometric localization still relies on traditional methods. The zoo of learned star localization models presented in this work is a first step toward a deep learning solution for satellite localization: high-precision, high-sensitivity star detection enables astrometric fitting and, by extension, satellite localization.
Existing star detection methods use traditional computer vision techniques, often relying on hand-crafted features optimized for a specific telescope. In the SDA community, where telescope track rates deviate from sidereal, these methods can also require image metadata such as telescope track rate and exposure time. In contrast, our deep learning models detect stars using only the uncorrected image pixels, achieving higher recall than, and precision similar to, traditional methods. This higher recall yields an astrometric fit rate double that of traditional methods, significantly increasing the quantity of usable satellite datapoints in SDA applications.
We create a StarNet dataset of 139,089 real images and their corresponding 11 million stars to train our deep learning models. These images are captured in rate-track mode by six sensors at four geographic locations against LEO, MEO, and GEO targets. They range in size from 512×512 to 1024×1024 pixels, in field of view (FOV) from 0.3 to 0.9 degrees, and in instantaneous field of view (IFOV) from 2.0 to 4.2 arcseconds per pixel. The star streaks in these images range in angle from 0 to 360 degrees, in length from 1 to 56 pixels, and in quantity from 5 to 1000 stars per image.
To generate ground truth, stars are identified in each image using a traditional star detection technique (such as SExtractor), human annotation, or model ensembling (employing the trained object detection models described below). Detected stars are submitted to astrometry.net for astrometric fitting, and the resulting world coordinate system (WCS) is paired with a star catalog to extract all the ground truth stars in the frame. These stars are represented in three formats suitable for deep learning: bounding boxes, segmentation masks, and line segments. Model ensembling is used to bootstrap the size of our dataset, and keeping “astrometry.net-in-the-loop” ensures that every star included in the ground truth is a real star.
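The catalog-to-pixel projection step implied here (a solved WCS from astrometry.net paired with a star catalog) might look roughly like the astropy-based sketch below; the file handling, the catalog input, and the fixed box size are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS

def catalog_stars_to_boxes(wcs_path, catalog_radec_deg, image_shape, half_width=4):
    """Project catalog (RA, Dec) stars through a solved WCS into pixel-space boxes."""
    with fits.open(wcs_path) as hdul:           # WCS header written by astrometry.net
        wcs = WCS(hdul[0].header)

    ra, dec = np.asarray(catalog_radec_deg).T   # catalog positions, in degrees
    x, y = wcs.world_to_pixel_values(ra, dec)   # 0-indexed pixel coordinates

    h, w = image_shape
    inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)

    # Fixed-size boxes for illustration only; streaked-star labels would be
    # stretched along the streak (or rendered as masks / line segments).
    boxes = np.stack([x - half_width, y - half_width,
                      x + half_width, y + half_width], axis=1)
    return boxes[inside]
```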
We train a variety of state-of-the-art object detection, instance segmentation, and line segment detection models on the StarNet dataset, including YOLOX, Deformable DETR, Faster RCNN, RetinaNet, Mask2Former, QueryInst, Mask RCNN, HTC, LETR, and F-Clip. This set of models spans a selection of CNN and transformer architectures, most with a pre-trained ResNet-50 backbone. Each model is modified to detect a maximum of 1000 stars per image, then trained with the optimal hyperparameters from its respective paper.
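Several of the listed detectors have MMDetection reference implementations; a hedged sketch of the “maximum of 1000 stars per image” modification, assuming an MMDetection-style config (the base file name, proposal counts, and single “star” class are illustrative assumptions, not the paper's settings), might look like:

```python
# e.g. configs/starnet_faster_rcnn.py  (hypothetical file name)
_base_ = './faster_rcnn_r50_fpn_1x_coco.py'      # pre-trained ResNet-50 backbone

model = dict(
    roi_head=dict(bbox_head=dict(num_classes=1)),  # single "star" class
    test_cfg=dict(
        rpn=dict(max_per_img=2000),    # keep more proposals in dense star fields
        rcnn=dict(max_per_img=1000),   # report up to 1000 detections per image
    ),
)
```

For query-based models such as Deformable DETR, QueryInst, and Mask2Former, the analogous change is raising the number of object queries to at least 1000, since that count bounds the detections per image.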
The performance of our models is evaluated using object-wise metrics (precision, recall, F1) and pixel-wise metrics (AP, IoU) against SExtractor at various thresholds. Experiments address the relationship between model performance and factors such as the number of stars in the image (the transformer models struggle to detect over 500 stars per image), the length of star streaks in the image (the models increasingly outperform SExtractor as streak length grows), and the visual magnitude and SNR of stars in the frame (the models reliably detect fainter stars than SExtractor). The distribution of star localization residuals (comparable to SExtractor) and the astrometric fit rate across the dataset (double that of SExtractor) are reported.
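As a concrete reading of the object-wise metrics, a minimal scheme pairs each predicted star center with at most one ground-truth star within a small pixel radius; the radius and the greedy nearest-first matching rule below are assumptions rather than the paper's exact protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def star_detection_scores(pred_xy, truth_xy, match_radius=2.0):
    """Object-wise precision/recall/F1 via one-to-one center matching."""
    pred_xy = np.atleast_2d(pred_xy)
    truth_xy = np.atleast_2d(truth_xy)
    tree = cKDTree(truth_xy)
    matched, tp = set(), 0
    for p in pred_xy:
        # Candidate ground-truth stars within the radius, nearest first.
        idxs = sorted(tree.query_ball_point(p, r=match_radius),
                      key=lambda i: np.linalg.norm(truth_xy[i] - p))
        hit = next((i for i in idxs if i not in matched), None)
        if hit is not None:
            matched.add(hit)
            tp += 1
    precision = tp / max(len(pred_xy), 1)
    recall = tp / max(len(truth_xy), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```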
Finally, we conduct a calibration satellite collection campaign and compute calibration satellite localization residuals in arcseconds, using our trained StarNet model to detect star locations and the SatNet model trained in previous work to detect satellite locations. Sub-pixel calibration satellite accuracy and a significant increase in the number of calibration satellite datapoints (due to the increase in astrometric fit rate) are presented, underscoring the high-precision and high-sensitivity capabilities of learned star detectors.
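A minimal sketch of this residual computation, assuming astropy conventions: the detected satellite pixel position is pushed through the WCS obtained from the star fit and compared with a reference (ephemeris or catalog) position. The function and argument names are illustrative.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS

def residual_arcsec(wcs: WCS, det_x, det_y, ref_ra_deg, ref_dec_deg):
    """Angular residual (arcsec) between a detected satellite and its reference position."""
    ra, dec = wcs.pixel_to_world_values(det_x, det_y)   # WCS from the StarNet-based star fit
    detected = SkyCoord(ra=ra * u.deg, dec=dec * u.deg)
    reference = SkyCoord(ra=ref_ra_deg * u.deg, dec=ref_dec_deg * u.deg)
    return detected.separation(reference).to(u.arcsec).value
```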

Date of Conference: September 19-22, 2023

Track: Machine Learning for SDA Applications
