Mark S. Schmalz (University of Florida), Gerhard X. Ritter (University of Florida), Eric T. Hayden (University of Florida), Gary Key (Frontier Technology, Inc.)
Keywords: Automated signature detection, Pattern recognition
Abstract:
Accurate spectral signature classification is key to reliable nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications, especially where linear mixing models are employed, signature classification accuracy depends on accurate spectral endmember determination. In previous work, it has been shown that class separation and classifier refinement results for Bayesian rule-based classifiers and for classical neural nets (CNNs) based on the linear inner product tend to be suboptimal. For example, the number of signatures accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. Such problems are exacerbated by input nonergodicity. Computed pattern recognition, like its human counterpart, can benefit from processes such as learning and forgetting, which in spectral signature classification can support adaptive tracking of input nonergodicities. For simplicity, we model learning as the acquisition or insertion of a new pattern into a classifier's knowledge base. In neural nets (NNs), for example, this insertion process could correspond to the superposition of a new pattern onto the NN weight matrix. Similarly, we model forgetting as the deletion of a pattern currently stored in the classifier's knowledge base, for example, as a pattern deletion operation on the NN weight matrix, which is difficult to achieve with CNNs. In practice, CNNs suffer from poor classification accuracy, limited information storage capacity, poor convergence, and long training times; these disadvantages have been remedied by the development of neural networks based on lattice algebra.
The first two authors have elsewhere shown that such lattice neural networks (LNNs) can be configured as auto- or hetero-associative memories and are amenable to pattern insertion and deletion operations on the LNN weight matrix. In this paper, we detail the implementation of pattern insertion and deletion in lattice associative memories (LAMs), in support of signature classification. It is shown that, for an n-input LAM having an n×n-element weight matrix, pattern insertion into and deletion from the weight matrix can be computed exactly in O(n) addition operations, with a small proportionality constant. Adaptive classifiers based on LNN technology can thus achieve accurate signature classification in the presence of time-varying noise, closely spaced or interleaved signatures, and imaging-system optical distortions. As proof of principle, we demonstrate classification of multiple closely spaced, noise-corrupted signatures from a NASA database of space material signatures.
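To make the mechanism concrete, the sketch below illustrates the standard min-type lattice auto-associative memory from the Ritter-Sussner formulation that LNNs build on: the weight entry w_ij is the minimum over stored patterns of (x_i - x_j), pattern insertion is an elementwise min update of the weight matrix, and recall uses the max-plus matrix-vector product. This is an illustrative assumption, not the paper's implementation: the function names (`build_lam`, `insert_pattern`, `recall`) are invented for this example, and the naive insertion shown touches all n×n entries rather than achieving the O(n) cost reported in the paper, which relies on implementation details not reproduced here. Deletion likewise requires extra bookkeeping and is not sketched.

```python
import numpy as np

def build_lam(patterns):
    """Build the min-type lattice auto-associative memory W_XX.

    w_ij = min over stored patterns x^k of (x_i^k - x_j^k).
    """
    X = np.asarray(patterns, dtype=float)   # shape (k, n): k patterns of length n
    diffs = X[:, :, None] - X[:, None, :]   # diffs[k, i, j] = x_i^k - x_j^k
    return diffs.min(axis=0)                # elementwise min over the k patterns

def insert_pattern(W, x):
    """Insert a new pattern: w_ij <- min(w_ij, x_i - x_j).

    Note: this naive update is O(n^2); the paper reports an O(n) scheme.
    """
    x = np.asarray(x, dtype=float)
    return np.minimum(W, x[:, None] - x[None, :])

def recall(W, x):
    """Max-plus recall: x_hat_i = max_j (w_ij + x_j)."""
    x = np.asarray(x, dtype=float)
    return (W + x[None, :]).max(axis=1)
```

A key property of this memory (proved by Ritter and Sussner) is perfect recall of every stored pattern, with no capacity limit in the number of patterns, which is what makes the insertion operation above well behaved: inserting a pattern never disturbs recall of those already stored.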
Date of Conference: September 14-17, 2010
Track: Non-resolved Object Characterization