Zachary Bergen, Ball Aerospace; Naomi Owens Fahrner, Ball Aerospace; Carl Stahoviak, Ball Aerospace
Keywords: 3D model, CAD, photogrammetry, least squares, bundle adjustment, computer vision, Groebner bases, image, pose, point cloud, feature matching
Abstract:
We address the problem of automatically identifying a satellite from a sensing spacecraft. The central questions are: What is its shape? In what direction is it headed? What is its orientation?
These considerations are at the heart of Space Domain Awareness (SDA). Knowledge of individual objects, combined with the catalog of known space objects, enables prediction of encroachment or future close approaches en masse. Knowing what you are tracking greatly adds to ISR (Intelligence, Surveillance, Reconnaissance) capability. 3D shape can help identify and assess the condition of a space object. An increasingly important role for SSA is to distinguish active satellites, expired satellites, and orbital debris. The premise of Space-Based Surveillance Systems (SBSS) is to keep a much closer watch on space from space itself.
We present a case study of techniques for remote spacecraft/satellite 3D modeling and pose analysis using a hybrid approach that combines simulation, classical photogrammetric bundle adjustment, and pose estimation with Perspective-n-Point (PnP) algorithms.
The study comprises three main areas:
Exemplar data rendered for simulation input
Photogrammetric algorithms for 3D point cloud generation
Pose estimation using PnP algorithms
We have a variety of tools for simulating a satellite in motion in space. There are two simulation inputs:
Exemplar as a CAD model
Point cloud representation
The first area we discuss is simulation input creation:
In the first case, we render a CAD model at various orientations to produce image frames as seen from the sensor. In-house simulation programs are used. The orbit of the spacecraft collecting imagery in the simulation is designed to create reasonable parallax, as the sensor motion is controllable. During the orbit, frame capture is simulated using a staring mode. Each frame image contains an aspect of the object that is used as input to a feature detection algorithm. A feature is a keypoint/descriptor pair: the keypoint is a pixel location in an image, and the descriptor is a vector that uniquely characterizes the keypoint. The features are matched (when visible) across frames, resulting in a match set of features and frames.
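The descriptor-matching step described above can be sketched in a few lines of Python. This is a minimal brute-force nearest-neighbor matcher with a ratio test on toy descriptor vectors; the actual feature pipeline (detector, descriptor, and matcher choices) is an assumption and is not specified by the study.

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbor matching with a ratio test.

    desc_a, desc_b: (N, D) arrays of feature descriptor vectors.
    Returns (i, j) pairs: descriptor i in frame A matched j in frame B.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in frame B
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]        # best and second-best candidates
        if dists[j] < ratio * dists[k]:     # ratio test rejects ambiguous matches
            matches.append((i, int(j)))
    return matches

# Toy data: frame B contains slightly perturbed copies of frame A's descriptors
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(5, 32))
desc_b = desc_a + 0.01 * rng.normal(size=(5, 32))
print(match_features(desc_a, desc_b))  # -> [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

Running this per frame pair, and chaining matches across frames, yields the match set of features and frames described above.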
The second case is a point cloud imaged from the same sensor locations, resulting in frames with known point correspondences; the match set of features is known a priori. The Blender 3D modeling program was used to build point cloud exemplars from the CAD model by computing surface facet/ray intersections (i.e., a LIDAR simulation), creating noiseless object truth sets. We then imaged the point clouds in a simulation to produce synthetic images and arrive at the same output as in the first case: frames with known matched features.
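Imaging a point cloud from a known sensor location amounts to a pinhole projection. The sketch below shows one way to generate such synthetic correspondences; the focal length and principal point values are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def project_points(points_w, R, t, f=1000.0, cx=512.0, cy=512.0):
    """Project world-frame points into a pinhole camera.

    points_w: (N, 3) world points; R, t: world-to-camera rotation and
    translation; f, cx, cy: assumed focal length (px) and principal point.
    Returns (N, 2) pixel coordinates with known 3D-2D correspondence.
    """
    pc = points_w @ R.T + t                  # world -> camera frame
    u = f * pc[:, 0] / pc[:, 2] + cx         # perspective divide by depth
    v = f * pc[:, 1] / pc[:, 2] + cy
    return np.stack([u, v], axis=1)

# Identity pose, two points 10 units in front of the camera
pts = np.array([[0.0, 0.0, 10.0], [1.0, -1.0, 10.0]])
uv = project_points(pts, np.eye(3), np.zeros(3))
print(uv)  # -> [[512. 512.] [612. 412.]]
```

Because each pixel is generated from a known 3D point, the match set is exact, which is what makes these truth sets useful for isolating algorithm error from matching error.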
The second area we discuss is 3D point cloud creation:
Given our set of matched features in images, we produce ray bundles from the camera perspective center at each collect location. These are input into a bundle adjustment. A bundle adjustment optimizes (e.g., in a least-squares sense) the ray intersections between several rays from matching features in several images. The truth set allowed us to test the bundle adjustment without concern for the errors produced with feature matching.
Matched feature points are collected from the imagery using computer vision techniques and sorted for optimal parallax. The collecting craft's interior orientation is used to project rays from features, forming a ray bundle in inertial space. A least-squares bundle adjustment is applied to the feature points to produce a representative point cloud. The two point cloud creation options allow us to determine the effects of noise as we perform pose estimation.
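The core geometric operation, intersecting rays from several camera perspective centers in a least-squares sense, can be sketched directly. This is a minimal single-point ray intersection under ideal assumptions, not the full bundle adjustment, which also refines the camera parameters.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of rays from camera perspective centers.

    origins: (M, 3) perspective centers; directions: (M, 3) unit ray
    directions. Returns the 3D point minimizing the summed squared
    perpendicular distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)   # projector onto plane normal to ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras on the x-axis observing the point (0, 0, 5)
target = np.array([0.0, 0.0, 5.0])
origins = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dirs = target - origins
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(intersect_rays(origins, dirs))  # -> approx. [0. 0. 5.]
```

A bundle adjustment extends this idea by jointly optimizing all object points and camera poses over the full match set rather than solving each point independently.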
To analyze the impact of noise, we compared a standard triangulation method with the least-squares bundle adjustment, both with and without noise. Without noise, the triangulation worked far better than the bundle adjustment; when noise was added, the bundle adjustment worked much better.
The third area we discuss is pose estimation:
Given our point cloud, we perform an analysis of pose. The point cloud is fed into a PnP algorithm, producing a pose estimate. Characterizing the attitude/ephemeris of the satellite's motion enables maneuvers such as docking. A variety of techniques will be presented, including model and point cloud generation using Blender and algorithms for combining the classical and ML information.
We implemented several Perspective-n-Point (PnP) algorithms for pose estimation. These approaches use Groebner bases to optimize a given objective function, yielding the quaternion for the rotation. With the rotation found, we substitute known values to solve for the translation between the target body frame and the camera frame. The specific PnP algorithms implemented were the Unified PnP, O(n) PnP, Efficient PnP, and Perspective-3-Point.
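The second step, recovering the translation once the rotation is known, is a linear problem. The sketch below assumes a known rotation R (e.g., from the Groebner-basis solve), normalized image rays, and an idealized camera; it illustrates the substitution step in general terms, not the exact formulation used in any of the named PnP algorithms.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def translation_given_rotation(R, pts_body, rays_cam):
    """Recover camera translation t once the rotation R is known.

    Each normalized image ray x_i is parallel to R @ X_i + t, so
    cross(x_i, R @ X_i + t) = 0, which is linear in t. Stack all
    correspondences and solve in a least-squares sense.
    """
    A = np.vstack([skew(x) for x in rays_cam])
    b = np.concatenate([-skew(x) @ (R @ X) for x, X in zip(rays_cam, pts_body)])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

# Synthetic check with a known pose
R = np.eye(3)
t_true = np.array([0.2, -0.1, 4.0])
pts = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 0]])
rays = pts @ R.T + t_true
rays /= np.linalg.norm(rays, axis=1, keepdims=True)
print(translation_given_rotation(R, pts, rays))  # -> approx. [0.2 -0.1 4.0]
```

Each correspondence contributes two independent linear constraints on t, so a handful of points over-determines the three unknowns and the least-squares solve is well conditioned for non-degenerate geometry.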
Exploitation of the process:
The output of our process allows for exploitation of the data followed by dissemination. We discuss an in-house tool for rendering the objects and motions in AR/VR. The tool is useful for SDA when many objects are involved and are too distant to see without the aid of enhanced graphics. Such tools are being used for situational awareness.
Given a point cloud, it is also possible to compare the cloud sensed by our process with an exemplar, attempting to identify a known object from the point cloud. One of the authors holds a patent introducing a novel way of matching point clouds to objects. This would aid in the identification of space objects and subsequent accurate rendering. Dissemination of intelligence data is more useful with an accurate rendering than with a raw point cloud.
We will present the results of our study including exemplar creation, algorithms, statistical plots of the outcomes of each area of analysis, and movies of the point clouds rendered from the sensor perspective.
The output of our process is useful for automated satellite docking, tracking to monitor maneuvers, and other SDA algorithms that determine trajectory vectors and proximity notification. We will show a movie of a simulated docking process driven by our automated process.
Date of Conference: September 27-30, 2022
Track: Non-Resolved Object Characterization