Masashi Nishiguchi, Purdue University; Carolin Frueh, Purdue University; Brian McReynolds, Institute of Neuroinformatics
Keywords: dynamic vision, physics-based modeling
Abstract:
Dynamic vision cameras are not frame based; they react to differential light at the single-pixel level. Their use has long been intriguing for space situational awareness problems. However, faint object magnitudes and low signal-to-noise ratios have proven to be a challenge. Furthermore, non-linear time biases occur in the event registration, thwarting precise astrometry.
In this paper, we develop a physics-based digital twin of the DAVIS dynamic vision camera. We show explicitly what effect the bias currents have and how time biases evolve as a function of the photocurrent. The results are compared to an actual DAVIS camera and its measurements. In contrast to other dynamic vision camera simulations, a faithful representation is achieved without rendering full frames as a starting point and without making broad simplifications; the model hence allows far deeper insights. The utility and limits of dynamic vision cameras for Space Situational Awareness applications are shown from first principles. Observation scenarios shown explicitly are astrometric observations for the detection of untracked objects and brightness-variation measurements for use in characterization, comparable to classical frame-based light-curve measurements.
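The per-pixel event-generation principle described above can be illustrated with a minimal sketch: a pixel emits an ON/OFF event when the log photocurrent changes by more than a contrast threshold, and the event timestamp is delayed by a latency inversely proportional to the photocurrent, a simple stand-in for the non-linear time bias discussed in the abstract. This is not the authors' digital twin; the function name, threshold, and latency constant are illustrative assumptions only.

```python
import math

def simulate_dvs_pixel(times, photocurrents, threshold=0.3, tau_coeff=1e-15):
    """Toy event-camera pixel model (illustrative, not the paper's model).

    Emits (timestamp, polarity) events whenever the log photocurrent has
    changed by at least `threshold` since the last event. Each event is
    delayed by a latency inversely proportional to the photocurrent,
    mimicking the photocurrent-dependent time bias: dim signals register
    later than bright ones.
    """
    events = []
    log_ref = math.log(photocurrents[0])  # reference level after a reset
    for t, i_photo in zip(times, photocurrents):
        delta = math.log(i_photo) - log_ref
        if abs(delta) >= threshold:
            latency = tau_coeff / i_photo          # time bias ~ 1 / photocurrent
            polarity = 1 if delta > 0 else -1      # ON (+1) or OFF (-1) event
            events.append((t + latency, polarity))
            log_ref = math.log(i_photo)            # reset reference at event
    return events

# A steadily brightening source produces ON events whose latency shrinks
# as the photocurrent grows (the dimmest step registers the latest).
events = simulate_dvs_pixel([0, 1, 2, 3], [1e-12, 2e-12, 4e-12, 8e-12])
```

In this toy model, each factor-of-two brightness step exceeds the log-contrast threshold and yields one ON event, with the earliest (dimmest) step carrying the largest timing offset.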
Date of Conference: September 17-20, 2024
Track: SDA Systems & Instrumentation