Konrad Bojar, KB-Innotech
Keywords: SST, triangulation, algorithm toolpack, low power, processing optimization, timeliness
Abstract:
A typical design of currently deployed SST optical triangulation sensors follows the edge computing paradigm and therefore contains powerful computing units for processing the data acquired by the cameras; the raw data is too large to be sent to the central node of the triangulation network. The most heavyweight component of the data reduction software is the image processing block. Since timeliness is one of the key performance indicators, data cannot be stacked, which leaves on the order of 100 ms for image processing. This becomes a challenge when we impose an energy budget equivalent to a mean power consumption of 45 W for the image processing block handling six camera streams, yielding a limit of only 7.5 W per stream.

In this paper we present a lightweight image processing toolpack optimized for low-power COTS x86 platforms to be used in optical SST triangulation stations for cataloguing in the LEO regime, deployed in areas with no infrastructure whatsoever, such as deserts or desolate mountain peaks. The first level of the toolpack breakdown structure yields the following blocks: background modeler, star extractor, streak detector, Markov Random Field (MRF) block, and library block (network-level processing, such as matching of streaks from different stations for orbit determination, is out of the scope of this paper). This functional breakdown structure does not allow the performance to be optimized globally, because intermediate results within blocks are not shared and the constituent image processing techniques are handled in an architecture-blind manner. Hence we introduce a parallel breakdown structure developed alongside the main one. This structure provides architecture awareness across the toolpack and divides blocks into two groups: pipelineable and neighborhood-based. The background modeler module, a simple background region mask extractor, and the star extractor module, which prepares the input for the astrometry module, operate on pipelines only. The streak detector, implementing contour-based methods for short (and therefore salient) streaks and a maximum likelihood method for long (and therefore faint) streaks, contains processing based on 8-adjacency and mixed adjacency neighborhoods; mixed adjacency is used in order to obtain an unambiguous edge representation of streaks. The MRF block estimates the streak model and works on 8-adjacency neighborhoods, while the library block contains all other functions, both pipelineable and neighborhood-based.
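To illustrate what a purely pipelineable block can look like, the C++ sketch below implements a running-average background model with a k-sigma deviation mask using OpenCV; the class name, the exponential averaging scheme, and the threshold are illustrative assumptions of this sketch rather than the actual toolpack code.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Running-average background model: every operation is a per-pixel
// (pipelineable) primitive, no neighborhood access is needed.
class RunningBackgroundModel {
public:
    explicit RunningBackgroundModel(double alpha = 0.05) : alpha_(alpha) {}

    // Feed one frame, get back a mask of pixels deviating from the background.
    cv::Mat update(const cv::Mat& frame16u, double k = 3.0) {
        cv::Mat frame;
        frame16u.convertTo(frame, CV_32F);
        if (mean_.empty()) {
            frame.copyTo(mean_);
            var_ = cv::Mat::ones(frame.size(), CV_32F);
        }
        cv::Mat diff = frame - mean_;
        cv::accumulateWeighted(frame, mean_, alpha_);   // mean <- (1-a)*mean + a*frame
        cv::Mat sq = diff.mul(diff);
        cv::accumulateWeighted(sq, var_, alpha_);       // var  <- (1-a)*var  + a*diff^2
        cv::Mat sigma, absdiff, thresh, mask;
        cv::sqrt(var_, sigma);
        absdiff = cv::abs(diff);
        thresh = k * sigma;
        cv::compare(absdiff, thresh, mask, cv::CMP_GT); // 255 where |diff| > k*sigma
        return mask;                                    // candidate star/streak pixels
    }

private:
    double alpha_;
    cv::Mat mean_, var_;
};

Every call in this sketch is a per-pixel primitive, so consecutive operations can be streamed without buffering neighborhoods, which is what makes such blocks pipelineable.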
Our test bench is a single industrial PC based on the Intel Comet Lake architecture, with no GPU onboard. It is used to simultaneously acquire and process streams from six industrial cameras, and the total mean power dissipated by the whole processing system during observations does not exceed 100 W, the six cameras included; in idle mode we use ACPI control to put most devices into the D3 state. The test hardware runs under Linux, which allows full control over the cameras and other peripherals and their power states. When implementing the image processing toolpack we decided to use the OpenCV library because it supports the Intel Integrated Performance Primitives (IPP) library. We use IPP and OpenCV primitives wherever possible in our development. For example, the MRF block contains the Iterated Conditional Modes (ICM) algorithm and the Expectation-Maximization (EM) algorithm, both relying heavily on IPP arithmetic primitives, and in the streak extractor module an OpenCV Levenberg-Marquardt solver is used for least-squares fitting.
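As an illustration of how the EM and ICM algorithms can be expressed in terms of OpenCV primitives and 8-adjacency neighborhoods, consider the sketch below; the two-component intensity mixture, the Potts-style smoothing term, and the function names are assumptions made for this example, not the actual MRF block implementation.

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

// Fit a two-component Gaussian mixture (background vs. streak) to the
// intensities of a region of interest; returns a 0/1 label image (CV_32S).
cv::Mat emLabels(const cv::Mat& roi32f)
{
    cv::Mat samples = roi32f.reshape(1, (int)roi32f.total()); // one intensity per row
    samples.convertTo(samples, CV_64F);

    cv::Ptr<cv::ml::EM> em = cv::ml::EM::create();
    em->setClustersNumber(2);
    em->setCovarianceMatrixType(cv::ml::EM::COV_MAT_DIAGONAL);
    cv::Mat logLik, labels, probs;
    em->trainEM(samples, logLik, labels, probs);               // arithmetic-heavy step
    return labels.reshape(1, roi32f.rows);
}

// One ICM sweep: each pixel takes the label that minimizes a Potts-style
// disagreement count over its 8-adjacent neighbors (the data term coming
// from the EM posteriors is omitted here for brevity).
void icmSweep(cv::Mat& labels)
{
    cv::Mat prev = labels.clone();
    for (int y = 1; y < labels.rows - 1; ++y)
        for (int x = 1; x < labels.cols - 1; ++x) {
            int disagree[2] = {0, 0};
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int n = prev.at<int>(y + dy, x + dx);
                    ++disagree[1 - n];                         // picking label l costs one per neighbor != l
                }
            labels.at<int>(y, x) = (disagree[0] <= disagree[1]) ? 0 : 1;
        }
}

The EM step consists almost entirely of dense arithmetic that OpenCV can dispatch to IPP, whereas the ICM sweep is an inherently neighborhood-based operation, which is why the two kinds of processing are tracked separately in the parallel breakdown structure.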
We have tested our toolpack using industrial cameras, and the whole test setup was capable of detecting objects of magnitude up to 10.5. In order to provide reliable test conditions for the toolpack, all accessible video streams were recorded during several consecutive observation nights with good weather, and the toolpack was then benchmarked using these streams. We present the obtained results and discuss the worst-case scenarios encountered. We also argue that the complexity of all algorithms used, viewed from the perspective of the size of existing and prospective LEO object populations, allows undisturbed observations at any latitude with excellent timeliness parameter values.
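A minimal replay-and-timing harness in the spirit of this benchmarking procedure could look as follows; the processFrame callback and the container holding the recorded frames are hypothetical and stand in for the actual toolpack entry point and stream reader.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <functional>
#include <vector>

// Replay one recorded stream and compare per-frame latency with the ~100 ms budget.
void benchmarkStream(const std::vector<std::vector<unsigned char>>& frames,
                     const std::function<void(const std::vector<unsigned char>&)>& processFrame,
                     double budget_ms = 100.0)
{
    double worst_ms = 0.0, total_ms = 0.0;
    for (const auto& f : frames) {
        auto t0 = std::chrono::steady_clock::now();
        processFrame(f);                                    // hypothetical toolpack entry point
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        worst_ms = std::max(worst_ms, ms);
        total_ms += ms;
    }
    std::printf("frames: %zu  mean: %.1f ms  worst: %.1f ms  budget: %.1f ms\n",
                frames.size(), total_ms / frames.size(), worst_ms, budget_ms);
}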
Date of Conference: September 27-30, 2022
Track: Optical Systems & Instrumentation