Julian Gamboa, Northwestern University; Mohamed Fouda, Northwestern University; Stephen Johnson, Northwestern University; Selim Shahriar, Northwestern University
Keywords: Image Recognition, Surveillance, Shift Invariance, Scale Invariance, Rotation Invariance, Digital Holography, Hybrid Opto-electronic Correlator, Polar Mellin Transform
Abstract:
An important aspect of space situational awareness (SSA) is the monitoring of ground sites involved in controlling space vehicles and activities, using unmanned aerial vehicles or space platforms. For this type of surveillance, it is necessary to recognize objects in a speedy and robust manner. It is well known that rapid image recognition can be carried out using holographic techniques. However, these techniques have not become of practical utility, due largely to the lack of suitable materials for dynamic holography. Recently, we showed how to overcome this constraint by resorting to digital holography. In particular, we proposed the Hybrid Opto-electronic Correlator (HOC), which achieves the same functionality as a holographic correlator but uses only Focal Plane Arrays (FPAs), spatial light modulators (SLMs), phase-stabilization circuits, and VLSI chips [J. Opt. Soc. Am. A 31, 41-47, 2014]. In the HOC, the amplitude and phase information are recorded via FPAs through interference with plane waves. The HOC is able to detect objects in a shift-invariant manner. We later showed that the HOC could be augmented via incorporation of the polar Mellin transform (PMT), making it possible to achieve shift, rotation, and scale invariance simultaneously [J. Opt. Soc. Am. A 31, No. 6, 1259-1272, 2014]. Recently, we demonstrated experimentally the basic functionality of the HOC [Applied Optics 56, Issue 10, 2754-2759, 2017]. Here, we demonstrate the ability of the PMT-augmented HOC to detect images in a shift-, rotation-, and scale-invariant manner.
The experimental configuration we have used for the PMT-augmented HOC is as follows. First, the beam from a 532 nm laser is expanded in diameter and then split into two paths using a beam-splitter (BS). One path leads to a Mach-Zehnder interferometer (MZI): it is redirected by a mirror mounted on a piezo-electric transducer, passes through a shutter (S1), and is used as a phase-stabilization and scanning circuit (PSSC). This allows us to control the relative phase of the plane waves that subsequently interfere with the image beams. The second path also passes through a shutter (S2) and is then split into the reference and query arms. Each of these two beams reflects off an amplitude-modulated (AM) SLM and is then directed towards a bi-convex lens, which produces the Fourier transform (FT) of the image at its focal plane. The image beam then interferes with its corresponding plane wave before being detected by an FPA. We can choose to detect just the image beam by closing S1 and opening S2, just the plane wave by opening S1 and closing S2, or the interference pattern by opening both shutters. The use of shutters can be avoided by using one FPA for each detected signal (six in total). In a fully automated version of the PMT-augmented HOC, the PMTs of the images would be carried out with an FPGA. Furthermore, the FPA signals would be sent to VLSI-based chips for fast computations, producing signals that would be sent to a fast SLM for carrying out the final correlation. Much technical work remains to be done to realize an HOC with this degree of automation. However, the functionality of the PMT-augmented HOC can be demonstrated by using a computer to carry out the computational steps. This is the approach we have used for the results reported here.
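As an illustration of the post-detection processing, the complex cross term carrying the amplitude and phase of the image-beam FT can be isolated numerically from the recorded intensities. The Python sketch below assumes two interference frames acquired with the PSSC set to relative plane-wave phases of 0 and pi/2, together with the image-only and plane-wave-only frames; the frame names and the normalization are conventions chosen for this sketch, not a prescription taken from the cited HOC papers.

import numpy as np

def complex_cross_term(i_phi0, i_phi90, image_only, plane_only):
    """Recover the complex cross term S*conj(R) from four FPA frames
    (hypothetical names): interference recorded at relative plane-wave
    phase 0 and pi/2 (set by the PSSC), plus the image-only (|S|^2) and
    plane-wave-only (|R|^2) intensities. Uses
    |S + R*exp(i*phi)|^2 = |S|^2 + |R|^2 + 2*Re(S*conj(R)*exp(-i*phi))."""
    real_part = 0.5 * (i_phi0 - image_only - plane_only)   # Re(S*conj(R))
    imag_part = 0.5 * (i_phi90 - image_only - plane_only)  # Im(S*conj(R))
    return real_part + 1j * imag_part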
The steps to obtain the PMT of each image are as follows: (1) Find the Fourier transform (FT) using a lens. (2) Determine the amplitude of the FT. (3) Eliminate a small circular area at the center of the FT; this step is essential for carrying out the PMT. (4) Transform to polar coordinates. (5) Replace the radial coordinate with its natural logarithm. The resulting PMT images are then used as inputs to the HOC.
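The following Python sketch illustrates these five steps numerically. The function name, the sampling densities (n_r, n_theta), and the radius r_min of the blocked central area are assumptions made for illustration, not parameters taken from the experiment.

import numpy as np

def polar_mellin_transform(image, r_min=4, n_r=128, n_theta=256):
    """Illustrative Polar Mellin Transform of a grayscale image, following
    the five steps listed in the text."""
    # Steps (1)-(2): Fourier transform (performed by a lens in the optical
    # system) and its amplitude.
    ft_amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))

    ny, nx = ft_amp.shape
    cy, cx = ny / 2.0, nx / 2.0
    r_max = min(cy, cx)

    # Steps (4)-(5): sample on a log-polar grid; rho = ln(r), theta in [0, 2*pi).
    # Starting at r_min implements step (3), the blocked central area.
    rho = np.linspace(np.log(r_min), np.log(r_max), n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(np.exp(rho), theta, indexing='ij')

    # Map the log-polar grid back to Cartesian FT coordinates (nearest neighbour).
    x = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, nx - 1)
    y = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, ny - 1)
    return ft_amp[y, x]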
We started with two images of the same object, with the query image shifted, scaled down significantly, and rotated by a large angle with respect to the reference image. We first carried out the correlation process using the HOC without applying the PMT. As expected, no correlation peak was found in this case. We then produced the PMT version of each of these images and carried out the correlation process again. This time, we found a strong correlation peak, representing a match. The observed signal was also found to be in close agreement with results produced via numerical simulations.
The close agreement between the experiment and the numerical simulations demonstrates that the PMT-augmented HOC system behaved as expected. Furthermore, we were able to analyze the correlation signal to determine, with high fidelity, the relative scale factor between the reference image and the query image, as well as the angle by which the query image is rotated with respect to the reference image. However, the vector by which the query image is shifted with respect to the reference image could not be recovered from the correlation signal. This is a necessary price to pay for using the PMT process. It should be noted, however, that if this information is important, it can be recovered as follows. First, the scaling and rotation information obtained using the PMT-augmented HOC is used to produce a modified version of the query image that has the same size and angular orientation as the reference image. Then, using the modified query image, a normal correlation, without the PMT, is carried out using the HOC. The resulting correlation signal reveals the magnitude and direction of the shift vector.
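For illustration, if the PMT is sampled as in the sketch above, the offset of the PMT-domain correlation peak along the log-radial and angular axes maps directly to the relative scale factor and rotation angle. The calibration below assumes that sampling and a particular sign convention; it is not a description of the actual HOC correlation plane.

import numpy as np

def scale_and_rotation_from_peak(d_row, d_col, r_min, r_max, n_r, n_theta):
    """Map a correlation-peak offset (d_row, d_col) in the PMT domain to a
    relative scale factor and rotation angle, assuming the log-radial axis
    spans [ln(r_min), ln(r_max)] over n_r samples and the angular axis spans
    2*pi over n_theta samples, as in the PMT sketch above. The signs of
    d_row and d_col depend on the correlation convention used."""
    d_rho = (np.log(r_max) - np.log(r_min)) / (n_r - 1)
    scale = np.exp(d_row * d_rho)            # shift along ln(r) -> scale factor
    angle = d_col * (2.0 * np.pi / n_theta)  # shift along theta -> rotation angle
    return scale, angle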
When fully automated, the PMT-augmented HOC would be able to carry out robust image recognition at a rate as high as a million images per second, much faster than what can be achieved using digital signal processing. Thus, this work paves the way for developing an ultra-fast and robust image recognition system for surveillance relevant to space situational awareness. This work has been supported by AFOSR Grant No. FA9550-18-01-0359.
Date of Conference: September 17-20, 2019
Track: Adaptive Optics & Imaging