Machine Learning for Event-Based Vision Sensor Space Domain Awareness Object Tracking

Rachel Oliver, Air Force Institute of Technology; Michael Albert, The University of Texas at El Paso; Olac Fuentes, The University of Texas at El Paso; Dmitry Savransky, Cornell University

Keywords: Artificial Intelligence, Machine Learning, Space Domain Awareness, Object Tracking, Event-based Vision Sensors, Space-based

Abstract:

Event-based vision sensors (EVS) are a promising technology for space-based space domain awareness (SDA) applications. These sensors produce a time-series list of events whenever individual pixels experience changes in their induced current. Their sparse data format enables optimization of communication link bandwidth, the independent nature of their pixels provides very high dynamic range, and their microsecond-level precision could enable rapid orbit estimation via algorithms such as the Advanced Uni-sensor Rapid Orbit Reconstruction Algorithm and Sensing (AURORAS) algorithm. Additionally, their lower power consumption and reduced cost compared to traditional integrating sensors provide advantages in spacecraft size, weight, power, and cost (SWaP-C) for space-based applications. In this paper, we demonstrate the utility of the temporal information encoded in event-based data for distinguishing between point source objects. This work builds upon previously developed Bayesian methods by developing machine learning models capable of functioning online, which further improve classification accuracy.
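To make the data format concrete, each event pairs pixel coordinates with a timestamp and a polarity, and an observation is simply a time-ordered list of such events. A minimal sketch (field names are illustrative, not a vendor API):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A single EVS event: one pixel reporting a change in induced current."""
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds (microsecond-level precision)
    polarity: int  # +1 (ON, brightness increase) or -1 (OFF, decrease)

# An observation is a time-ordered event list, sparse in both space and
# time compared to full frames.
stream = [Event(x=412, y=87, t=0.000013, polarity=1),
          Event(x=413, y=87, t=0.000021, polarity=1)]
```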

Traditional processing methods assume frame-based imagery rather than an event list that contains both spatial and temporal data. While one could assemble traditional frames from EVS data, doing so discards two of the data's benefits: its sparsity and the temporal signature between events. Because the paradigm is relatively new, implementing EVS processing that exploits the time series output is non-trivial; no pre-packaged methods or accepted optimal solutions are currently available for building effective and computationally efficient algorithms. Therefore, we explore what can be derived from the time series output to develop a tracking methodology that leverages the temporal information. Using ground-based resident space object (RSO) observations, we begin the development of a probability-informed tracking algorithm with a classic multiple hypothesis tracker (MHT) as our inspiration. At a high level, an MHT has four steps: process new information into clusters, develop hypotheses, confirm hypotheses, and prune hypotheses and clusters.
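A structural sketch of that four-step loop follows; the thresholds, scoring, and association logic are placeholders of our own, not the algorithm from the paper.

```python
from dataclasses import dataclass

CONFIRM, PRUNE = 0.9, 0.1  # illustrative thresholds, not tuned values

@dataclass
class Hypothesis:
    cluster_ids: list   # clusters tentatively attributed to one source
    score: float = 0.5  # accumulated belief in that attribution

def mht_step(new_cluster_ids, hypotheses):
    """One pass over the four high-level MHT steps (structure only)."""
    # Step 1 (cluster new events) happens upstream; cluster ids arrive here.
    # Step 2: develop hypotheses, tentatively starting one per new cluster.
    hypotheses = hypotheses + [Hypothesis([cid]) for cid in new_cluster_ids]
    # Step 3: confirm hypotheses whose accumulated score is high enough.
    confirmed = [h for h in hypotheses if h.score >= CONFIRM]
    # Step 4: prune hypotheses that have become unlikely.
    hypotheses = [h for h in hypotheses if h.score > PRUNE]
    return hypotheses, confirmed
```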

For live event clustering, we explore two primary methods: proximity-based clustering and random sample consensus (RANSAC). We define metrics for optimal grouping. The primary metric is the number of noise events left ungrouped, plus the number of real (non-noise) events placed in groups whose events come predominantly from the same source, divided by the total number of events. A secondary metric additionally counts noise events captured in majority-noise groupings, on the assumption that a downstream classifier will reject those groupings. We explore the parameter space for the grouping methods, selecting the parameters with the best performance on the training set and validating performance on the validation set. Parameters explored include the distance to the nearest event group and the timescale of events relative to the pixel scale (e.g., 1 s = 1 pixel in the distance calculation).
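A minimal sketch of the proximity grouping idea under the stated time-to-pixel scaling (parameter values here are illustrative, not the tuned ones):

```python
import numpy as np

def proximity_group(events, max_dist=2.0, time_scale=1.0):
    """Greedy spatiotemporal proximity grouping of time-sorted events.

    events: (N, 3) array with columns (x, y, t); time_scale converts
    seconds into pixel-equivalent distance (e.g., 1 s = 1 pixel).
    Singleton groups are the noise candidates left effectively ungrouped.
    """
    pts = events.astype(float).copy()
    pts[:, 2] *= time_scale               # express time in pixel units
    labels = np.zeros(len(pts), dtype=int)
    next_label = 1
    for i in range(1, len(pts)):
        d = np.linalg.norm(pts[:i] - pts[i], axis=1)  # distance to prior events
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            labels[i] = labels[j]         # join the nearest prior event's group
        else:
            labels[i] = next_label        # too far from everything: new group
            next_label += 1
    return labels

ev = np.array([[10, 10, 0.001], [10, 11, 0.002], [200, 5, 0.500]])
print(proximity_group(ev))  # -> [0 0 1]
```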

We then classify hypotheses through Bayesian probabilities and various machine learning classification techniques, including random forests and dense and convolutional neural networks (DNNs/CNNs). Classification models are trained on a training set consisting of 70% of the labeled data sets. Models are initially trained on batch data and later trained against the live output of the grouping methods.
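The training setup can be sketched as follows; the per-group features and the random forest configuration are stand-ins of ours, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-group features: event count, group duration,
# spatial extent, and ON/OFF polarity ratio (random stand-in values).
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = rng.integers(0, 2, 1000)  # 1 = RSO group, 0 = star/noise group

# 70% of the labeled data for training, as described above.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=0.70, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"validation accuracy: {clf.score(X_val, y_val):.3f}")
```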

Next, we validate classifier performance on the validation set. Receiver operating characteristic (ROC) curves help us determine an optimal threshold per classifier and analyze classifier performance versus the number of events and the total time spent observing events from a group. In addition to these classification methods, we implement and experiment with a simplifying assumption for events associated with stars: that the resultant groups will have similar slopes, since stars traverse the focal plane at a common apparent rate. The benefit of isolating the star signals rather than simply rejecting their information is two-fold: it improves the isolation of RSO information for satellite tracking, and the star information can inform spacecraft attitude. Pixel locations associated with stars, together with relative brightness from ON event totals, provide enough information for classical astrometry techniques to recover a pointing vector of the sensor boresight in the International Celestial Reference Frame (ICRF).
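One common way to pick a per-classifier operating point from a ROC curve is to maximize Youden's J statistic (TPR minus FPR); both the criterion and the stand-in scores below are our illustration, not necessarily the paper's choice:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Stand-in validation labels and classifier scores with class overlap.
rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, 500)
scores = np.clip(0.35 * y_val + 0.65 * rng.random(500), 0, 1)

fpr, tpr, thresholds = roc_curve(y_val, scores)
best = np.argmax(tpr - fpr)  # Youden's J: maximize TPR - FPR
print(f"threshold ~ {thresholds[best]:.3f} "
      f"(TPR={tpr[best]:.3f}, FPR={fpr[best]:.3f})")
```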

Finally, we generate synthetic data from a custom EVS simulator and evaluate the performance of the combined clustering and classification algorithms. In their most effective configuration, the models achieve a 0.9827 true positive rate (TPR) and a 0.9924 true negative rate (TNR) for satellite event grouping and classification on the validation data sets.
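For reference, the two reported rates in terms of standard confusion-matrix counts (the example counts are illustrative values consistent with the rates above, not the actual validation counts):

```python
def tpr_tnr(tp, fn, tn, fp):
    """True positive rate (sensitivity) and true negative rate (specificity)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts reproducing rates matching those reported above.
print(tpr_tnr(tp=9827, fn=173, tn=9924, fp=76))  # -> (0.9827, 0.9924)
```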

Results indicate that machine learning models can discriminate between grouped events from RSO signals. These methods, when properly tuned, outperform a classically built Bayesian statistical model trained on the same data. Their promising performance indicates that algorithms leveraging the sparse data from EVS can enable online, autonomous discrimination between RSOs during tracking activities, which is ideal for space-based systems.

Date of Conference: September 16-19, 2025

Track: Machine Learning for SDA Applications
