Reuben Settergren, BAE Systems
Keywords: resection, measurement, mono, stereo, monoscopic, stereoscopic
Abstract:
Measurement of 3D objects observed at a distance is a fundamental photogrammetry task. Modern photogrammetry relies heavily on metric sensors that have a precise understanding of their position and orientation relative to a stationary scene. But if an object of interest has moved or rotated between collected images, the necessary condition of stationary scene content does not hold. Existing capabilities to measure 3D objects in arbitrary poses are focused on close-range use cases such as industrial part inspection, and are not applicable to long-range cases such as Space Domain Awareness (SDA) analysis of Non-Earth Imaging (NEI). In SDA/NEI contexts, an orbiting sensor that images another orbiting object will likely have an accurate understanding of its own position and pose, and a rough range to the target, but not a precise understanding of the natural axes of the observed body, or of how they are moving and rotating over time.
We present a capability that creates synthetic long-range sensor models, which treat the axes of the observed rigid body as a fixed reference and reposition the sensor to a pose in the object’s coordinate system. The user marks vectors on the NEI imagery along the observed object’s X, Y, and Z axes. For long-range image collection there are no (measurable) vanishing points, and all co-axial vectors present as parallel; multiple vectors marking each visible axis add redundancy and robustness to the solution. The lengths of the marked vectors are not critical (except that longer axes minimize the effect of measurement error); the three two-dimensional unit vectors in image space determine a unique sensor rotation for the synthetic viewing pose of the hypothetically stationary object.
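The rotation recovery described above can be sketched as a small nonlinear fit: under a long-range (effectively orthographic) projection, the image-space direction of each body axis is the normalized top two rows of the corresponding column of the sensor rotation matrix, so a rotation can be estimated by matching those projections to the marked unit vectors. The following is a minimal illustrative sketch, not the paper's implementation; all names are ours, and a simple multi-start least-squares solver stands in for whatever the actual capability uses.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project_axes(R):
    """Image-plane directions of the body X/Y/Z axes under orthographic
    projection: normalize the top two rows of each column of R."""
    top = R[:2, :]                               # 2x3 image components of the axes
    return top / (np.linalg.norm(top, axis=0) + 1e-12)

def solve_rotation(marked_dirs, n_starts=20):
    """Estimate a sensor rotation whose projected body axes match the
    marked 2x3 array of image-space unit vectors.  Multiple random starts
    guard against local minima; the fit is unique only up to the usual
    depth-reversal ambiguity of orthographic views."""
    def residual(rotvec):
        R = Rotation.from_rotvec(rotvec).as_matrix()
        return (project_axes(R) - marked_dirs).ravel()
    rng = np.random.default_rng(0)
    best = None
    for start in rng.uniform(-np.pi, np.pi, size=(n_starts, 3)):
        fit = least_squares(residual, start)
        if best is None or fit.cost < best.cost:
            best = fit
    return Rotation.from_rotvec(best.x).as_matrix()
```

Marking several vectors per axis, as the abstract suggests, would simply average the redundant measurements into each of the three direction columns before the fit.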
The lack of vanishing points means that focal length and scale are not recoverable from the imagery alone, so scale information must also be provided: focal length, pixel size, and range (which are typically available), or, lacking those, a scale bar. Together with an arbitrary choice of a point on the observed body to serve as the origin of the XYZ coordinate system, these inputs suffice to solve for a complete camera model.
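As a concrete illustration of the scale inputs above, the standard pinhole small-angle relation gives object-space meters per pixel from range, pixel pitch, and focal length, and a scale bar gives the same quantity directly. This is a hedged sketch with illustrative names, not the paper's code.

```python
def ground_sample_distance(range_m, pixel_pitch_m, focal_length_m):
    """Object-space meters spanned by one pixel under a pinhole model:
    GSD = range * pixel_pitch / focal_length (small-angle approximation)."""
    return range_m * pixel_pitch_m / focal_length_m

def scale_from_bar(known_length_m, measured_pixels):
    """Fallback when sensor metadata is unavailable: a scale bar of known
    length, measured in pixels, yields meters per pixel directly."""
    return known_length_m / measured_pixels
```

For example, a 10 µm pixel pitch, 1 m focal length, and 100 km range give a 1 m ground sample distance.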
A single camera model already supports a variety of monoscopic measurements, such as lengths and angles (even for structures not parallel to the imaging focal plane, if it is known or assumed how they align with the body XYZ axes). But having camera models for two or more images enables full stereo exploitation: determining the 3D XYZ coordinates of every point that can be observed in multiple images. This opens the door to 3D reconstruction of the object, stereo viewing for visual assessment, and more.
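Per point, stereo exploitation with two or more camera models reduces to intersecting image rays. A common textbook approach (not necessarily the one used here) is the linear least-squares ray intersection below; names are illustrative.

```python
import numpy as np

def triangulate(origins, directions):
    """Point minimizing summed squared distance to a set of 3D rays:
    solve  sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i,
    where d_i is a unit ray direction and o_i its origin."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)         # needs at least two non-parallel rays
```

With noisy marks the rays do not meet exactly, and the solution is the least-squares closest point; additional images simply add terms to the normal equations.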
Date of Conference: September 17-20, 2024
Track: Space Domain Awareness