Processing Signals for Automatic Recognition


Automatic Signal Recognition – an emerging need

Imagine a car or ‘phone that responds only to its rightful owner’s voice; a second sensor that warns of an impending axle, gearbox or engine fault; an intruder alarm that tells you if an event is benign or hostile; or a low-cost silicon chip that recognizes words or people.
Smart Sensors, new-generation Smart Cards, Intelligent Security Systems, time- and money-saving Diagnostic Instruments, low-cost Speech Response devices and Biometric Verification Systems are just a few examples of applications for Domain Dynamics’ TESPAR/FANN technology. Excluded until now on the grounds of cost and complexity, such concepts are creating new markets where there is a growing and as yet unsatisfied appetite.

All these devices call out for automatic recognition and classification of the events and conditions represented by signals, either directly or via sensors, and often demand a “metric” and “linearity.” For certain parameters, such as temperature and pressure, the requirement to classify and interpret significant changes and events automatically may be quite straightforward. With parameters such as vibration, noise and the human voice, the problem for designers and operators of commercial systems is much more challenging. Classical Fourier-based methods are proving inadequate in several key practical respects. One such goal, which these methods struggle to meet, is processing complex signal data into a “metric” or “signature” form that permits effective exploitation of the classification power of Fast Artificial Neural Networks (FANNs).

Large-scale plant and process monitoring systems may afford plenty of DSP computing resource for the sophisticated analysis and interpretation algorithms needed to extract meaningful and reliable data from noisy signals. Even in these cases, the power of the TESPAR/FANN approach is exemplified in its ability to separate and classify many signals that remain indistinguishable in the Frequency Domain. In portable instruments and smart sensors, the problem is acute. Despite the rapid advances in DSP power available in microchip form, these demands translate into severe limitations on battery life, performance and cost.

TESPAR/FANN technology

TESPAR/FANN technology, developed and patented by Domain Dynamics Limited, now makes automatic recognition and classification devices technically and commercially feasible. TESPAR/FANN methods provide a new digital data / neural network combination which is proving highly useful in all kinds of automatic signal recognition applications. TESPAR/FANN integrates novel Time Encoded Signal Processing And Recognition (TESPAR) waveform coding procedures with orthogonal Fast Artificial Neural Networks (FANNs) in purpose-designed structures. These permit highly flexible decision-making / data-fusion hierarchies to be tailored to match the needs of the recognition or classification task, whether simple or complex.

TESPAR coding

TESPAR is a new simplified digital language, first proposed by King and Gosling [1] for coding speech. The process is equally valid for any band-limited waveform: from, for example, seismic signals with frequencies and bandwidths of fractions of a hertz, to radio frequency signals in the gigahertz region, and beyond.
TESPAR is based on a precise mathematical description of waveforms, involving polynomial theory, which shows how a signal of finite bandwidth (“band-limited”) can be completely described in terms of the locations of its real and complex zeros. This contrasts with the more conventional approach of linear transformations based on “amplitude” sampling at regular intervals, as described by Fourier, Nyquist, Shannon and others. The real and complex zero descriptors of TESPAR and the time-bandwidth data produced by a Fourier transform are mathematically equivalent, and both result in 2TW (the Shannon Number) of digital sample data points describing the waveform. The mathematical background to this zero-based approach is outlined in Voelcker and Requicha.

Given the real and complex zero locations of the signal, a vector quantization procedure is used to code these data into a small series of discrete numerical descriptors, typically around 30 (the TESPAR symbol alphabet). Holbeche [5] gives an account of one version of this coding. TESPAR coders can be implemented in hardware or software, and produce a stream of numerical symbols (from the alphabet set) which naturally follow the input waveform in the time domain.
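As an illustration only, the epoch-based coding flow can be sketched in a few lines of Python. The (duration, shape) → symbol mapping below is a crude stand-in for Domain Dynamics’ patented quantization table, which is not public; only the overall flow (split the waveform at its real zeros, describe each epoch by duration and shape, map to a small alphabet) follows the description above.

```python
import numpy as np

def tespar_encode(x, n_symbols=29):
    """Toy TESPAR-style coder: split at zero crossings, describe each
    epoch by duration D (samples) and shape S (interior minima of the
    magnitude), then fold (D, S) into a small symbol alphabet.
    The fold used here is an illustrative placeholder, not DDL's table."""
    x = np.asarray(x, dtype=float)
    # indices where the sign changes: the real zeros of the waveform
    crossings = np.where(np.diff(np.signbit(x)))[0] + 1
    symbols, start = [], 0
    for end in list(crossings) + [len(x)]:
        epoch = x[start:end]
        start = end
        d = len(epoch)
        if d == 0:
            continue
        mag = np.abs(epoch)
        # shape descriptor: count interior local minima of |epoch|
        s = int(np.sum((mag[1:-1] < mag[:-2]) &
                       (mag[1:-1] < mag[2:]))) if d >= 3 else 0
        symbols.append((d * 7 + s) % n_symbols)   # placeholder (D,S) fold
    return symbols
```

Encoding a simple sinusoid yields one symbol per half-cycle, and the symbol stream follows the waveform in time, as the text describes.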

Matrix formation

The output from a TESPAR coder may be converted into a variety of progressively informative matrix data structures. For example, the single-dimension vector (or S-matrix) is a histogram recording the frequency with which each TESPAR coded symbol occurs in the data stream. A more discriminating data set is the two-dimensional histogram or A-matrix, formed from the frequencies of symbol pairs; the paired symbols need not be adjacent. Extending this to three dimensions would improve the discrimination power still further. Typical A and S matrices are shown in Figures 1 and 2.
Various coding strategies are available to make the best use of information provided by waveform amplitude, duration or shape descriptors, and by introducing the idea of ‘lag’ into matrix formation, to exclude or emphasize artifacts that recur at characteristic rates.
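A minimal sketch of the two histograms, assuming a symbol stream already produced by a TESPAR coder; the `lag` parameter implements the ‘lag’ idea mentioned above (lag = 1 pairs adjacent symbols, larger lags pair symbols further apart):

```python
import numpy as np

def s_matrix(symbols, n=29):
    """S-matrix: one-dimensional histogram of symbol occurrences."""
    s = np.zeros(n, dtype=int)
    for sym in symbols:
        s[sym] += 1
    return s

def a_matrix(symbols, n=29, lag=1):
    """A-matrix: two-dimensional histogram of symbol pairs separated
    by `lag` positions in the stream."""
    a = np.zeros((n, n), dtype=int)
    for i, j in zip(symbols, symbols[lag:]):
        a[i, j] += 1
    return a
```

Both structures are of fixed size (n and n × n respectively) regardless of the input signal's length, which is the property the next paragraph relies on.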


TESPAR data structures are of fixed size, dependent upon the alphabet used. This makes for processing regimes that are both stable and straightforward to implement. In a typical classification task, TESPAR matrices for several samples of known operational conditions or events may be collected and used to produce a reference matrix or archetype which embodies the unique characteristics of that event.

Subsequently, during live monitoring, new matrices are created in situ and continuously compared against the trained archetypes for a classification judgment to be made. All standard statistical methods, such as angular correlation, can be applied in the decision-making process, and yield useful results.
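The archetype-and-compare step can be sketched as follows. Angular correlation is used here because the text names it; the mean-of-matrices archetype is an assumption about how the reference is formed, and any standard statistical similarity measure would serve equally:

```python
import numpy as np

def archetype(matrices):
    """Reference matrix: element-wise mean of several training
    S- or A-matrices from the same known condition (an assumed
    but typical way of forming an archetype)."""
    return np.mean(matrices, axis=0)

def angular_similarity(m, ref):
    """Cosine of the angle between the flattened matrices:
    1.0 = same direction (same condition), 0.0 = orthogonal."""
    v = np.ravel(m).astype(float)
    r = np.ravel(ref).astype(float)
    return float(np.dot(v, r) / (np.linalg.norm(v) * np.linalg.norm(r)))
```

During live monitoring, each newly formed matrix would be scored against every trained archetype, and the highest-scoring archetype taken as the classification.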

The Artificial Neural Network dimension

Potentially far more powerful is the possibility of applying Artificial Neural Network methods of pattern classification to the TESPAR matrices. Because TESPAR matrices are of fixed size and dimension, they are ideally matched to the input requirements of Neural Networks. Recent practical experience [6] confirms that the TESPAR/FANN combination enables the introduction of powerful classification procedures, producing system performances previously considered unattainable, including an ability to classify many signals that remain inseparable in the frequency domain.

Performance advantages

In many important cases of interest, TESPAR-based classification techniques show significant performance advantages over conventional Fourier-based methods. For example:
Typically two orders of magnitude less computer processing power are required, with consequent lower power consumption.
The simple data structures are both compact and of fixed dimension, such that limited processing and memory resource is no barrier to efficient implementation. This has important benefits for data storage and communication operations, as TESPAR coding provides a very efficient data reduction method.
The data structures offer very high degrees of discrimination and are optimally suited to classification using FANN architectures.
Input can be obtained from low-cost sensors. In many applications, classification results indistinguishable from those obtained using the high-cost alternatives (whose linearity is often essential when using frequency domain signal analysis methods) have been obtained.
For many real world applications, false alarm and other system errors can, by routine system design, be made vanishingly small.
Classification time is minimal, e.g. less than 1 second for a single-pass classification using current popular microprocessor technology.
For these and other reasons, TESPAR/FANN is being used to implement the biometric functions required in the European Union CASCADE Esprit Smart Card project, which is developing a 32-bit RISC processor 20 mm² in area for a new generation of Smart Card and Secure Pocket Intelligent Device applications.

Implementation issues

The availability of simple and cheap processing capabilities, with embodiments appropriate to all levels of production volume, is crucial to the viability of smart sensors, structures and instruments. Such devices combine, or even integrate, a sensor and an electronic processing device to give local and immediate classification without the need for central computation. These concepts are currently receiving much academic and industrial attention.
The TESPAR coding and vector quantization process is already available both as a software algorithm and in a low-power ASIC silicon design. Beyond this, TriTech Microelectronics of Singapore are in the process of producing a range of very low cost, low power TESPAR embodiments in silicon which offer a high degree of flexibility for integration into a wide range of potential high-volume TESPAR applications.

In association with this activity, a collaboration with King’s College and University College London is now adapting their pRAM Neural Network architecture to the task of classifying TESPAR data structures [8, 9, 10]. pRAM technology provides Neural Networks that can be trained on the silicon itself. Thus the realization of complete TESPAR/FANN single chip solutions is in sight, capable of training in situ and adaptable to widely differing low cost, high volume applications.

Design strategies

The use of a multiple network architecture, with the classification decision based on data fusion / vote-taking decision logic across the network set, offers the possibility of making system errors vanishingly small by design. As an example, a typical speaker verification architecture (fig. 3) may consist of 15 or more networks, with practical system error performances moving towards the 1 in 100,000 target FRR performance figure set as a “requirement” by the UK banking community for biometric verification methods.
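The claim that voting across multiple networks can drive errors towards vanishingly small values can be illustrated with a simple binomial calculation. This assumes the per-network errors are statistically independent, which is an idealization; in practice the networks' errors are partially correlated, so real gains are smaller:

```python
from math import comb

def majority_error(n_nets, p_err):
    """Probability that a strict majority of n independent networks
    err simultaneously, forcing a wrong fused (vote-taking) decision."""
    k_min = n_nets // 2 + 1
    return sum(comb(n_nets, k) * p_err ** k * (1 - p_err) ** (n_nets - k)
               for k in range(k_min, n_nets + 1))

# Under independence, 15 networks that are each individually wrong
# 5% of the time give a fused majority-vote error well below the
# 1-in-100,000 banking target.
fused = majority_error(15, 0.05)
```

The per-network 5% figure here is purely illustrative; the point is the rapid shrinkage of the fused error as the number of networks grows.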

Massively Parallel Network Architectures

The data needed to store a TESPAR/FANN classification architecture is already much smaller than that of many competing methods. The latest work, however, shows that alternative methods can capitalize on the strengths of the multiple network architecture without requiring its significant training procedures, enabling all the describing information to be contained in as little as 50 or 60 eight-bit bytes of data, irrespective of the size, dimensionality and complexity of the input data matrices.
The new technique of Massively Parallel Network Architectures embodies the immense power of massively parallel networks and data fusion enhancements to achieve the performance associated with a large number N of trained networks, where N may be, for example, between 100 and 1500.

In this technique, an ordered set of N networks, all different, may be generated a priori in non-real time using input data from a large number of representative samples. The N networks are then used as an interrogation set, against which all signal samples are to be compared, both at registration (‘training’) and subsequent interrogation (classification).

When registered against the N-net interrogation set, a signal sample is first converted to appropriate TESPAR matrices and compared against each of the N nets in turn. Each net will produce an output on one of its nodes, indicating to which of the pre-trained signals in the net the input sample was closest. By examining the pattern of data outputs from all N nets, a variety of strategies is available for characterizing the signal. For example, using an ordered set of N=100 nets, a signal can be characterized by the corresponding data set of 100 three-bit words, i.e. circa 38 eight-bit bytes.
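The “100 three-bit words ≈ 38 bytes” arithmetic can be checked by packing the winning-node indices directly. Node indices in the range 0–7 are an assumption here, implied by the three-bit figure:

```python
def pack_3bit(indices):
    """Pack a sequence of 3-bit winning-node indices (0..7) into bytes:
    100 indices -> 300 bits -> 38 bytes (the last byte zero-padded)."""
    bits, nbits, out = 0, 0, bytearray()
    for idx in indices:
        if not 0 <= idx < 8:
            raise ValueError("index must fit in 3 bits")
        bits = (bits << 3) | idx
        nbits += 3
        while nbits >= 8:
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:
        out.append((bits << (8 - nbits)) & 0xFF)   # pad the final byte
    return bytes(out)
```

This fixed, tiny signature size holds irrespective of the length or complexity of the original signal, which is what makes the technique attractive for Smart Card storage.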

This new, patented technique has exceptional potential in Biometric Verification and Word Recognition applications.

Development tools

All the work described has been conducted using Domain Dynamics’ proprietary PC-based development system, the TADS-XS 50. The system includes an extensive library of both conventional and TESPAR signal processing and data analysis software, operating under the popular MATLAB graphical user interface. FANN classification architectures are created, trained, tested and interrogated within the system using the proprietary FasTEST software suite. This development facility is proving invaluable in enabling third parties to evaluate TESPAR/FANN architectures in a broad range of real-world classification tasks.

As a result of over sixty real world case and feasibility studies to date conducted by Domain Dynamics for a variety of organizations, TESPAR and TESPAR/FANN techniques are now being applied to:
monitoring the particle size and state of ore granules en route through a series of crushing mills to the smelter
“listening” to drill head vibration to detect the condition of the rock in mining processes
intelligent security and intruder detection systems
portable instruments that make a quick and accurate diagnosis of vehicle electrical and engine management system faults
detecting and locating sticking and faulty valves in diesel engines and compressors
identifying and locating electrical faults in HV power transformers
early warning of conditions that precede failure in helicopter drive gearboxes and rotors
systems for quality monitoring of cellular network communication channels
word recognition devices that perform well in noise
As a practical illustration of a TESPAR/FANN application, three operating conditions of a high-speed rotary reciprocating compressor are examined. Two of the conditions may result in costly repair and downtime. Monitoring was carried out by recording noise signals via a simple acoustic sensor placed within a few centimeters of the compressor.

Previous investigations had shown that the three essential conditions of interest were difficult to separate using conventional frequency domain procedures.

Signal samples from the three conditions were encoded to TESPAR A-matrices and used to train an 841-10-3 floating point Artificial Neural Network. Random signal condition samples not used in the training process were then encoded to A-matrices and applied as inputs to the FANN. The FANN produced the correct condition classification for every input signal in turn, with a worst-case score of 0.824 and an average rating of 0.970 across 30 interrogations.
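The 841-10-3 topology (841 inputs, consistent with a flattened 29 × 29 A-matrix given the roughly 30-symbol alphabet mentioned earlier; 10 hidden units; 3 condition outputs) can be sketched as a plain feed-forward network trained by gradient descent. The compressor data are not available here, so the sketch below trains on synthetic stand-in “A-matrices” with an artificial per-condition signature:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 841-10-3 topology: flattened 29x29 A-matrix in, 3 condition outputs
W1 = rng.normal(0.0, 0.1, (841, 10))
W2 = rng.normal(0.0, 0.1, (10, 3))

# synthetic stand-in data: 10 samples per condition, each condition
# boosted in a different region of the matrix (purely illustrative)
X = rng.random((30, 841))
X[:10, :100] += 2.0
X[10:20, 100:200] += 2.0
X[20:, 200:300] += 2.0
X /= X.sum(axis=1, keepdims=True)          # normalize symbol counts
y = np.eye(3)[np.repeat(np.arange(3), 10)]  # one-hot condition labels

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 0.5
for _ in range(2000):
    h, out = forward(X)
    g = (out - y) * out * (1 - out) / len(X)       # output-layer delta
    W1 -= lr * (X.T @ ((g @ W2.T) * h * (1 - h)))  # backprop to hidden
    W2 -= lr * (h.T @ g)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
```

At interrogation time, the winning output node gives the condition classification and its activation the confidence score, analogous to the 0.824–0.970 figures quoted above.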


Experience to date with TESPAR/FANN hardware and software indicates:
The TESPAR/FANN combination is a powerful, robust, flexible and economical technology for a wide range of automatic classification and signal recognition applications.
TESPAR/FANN procedures permit system errors to be made vanishingly small over a wide range of real-world applications.
New massively parallel network strategies allow vital classification embodiment data to be stored within a few tens of 8-bit bytes.
TESPAR/FANN hardware and software development tools are readily available for solving real world signal classification and verification problems in a cost-effective manner.
On-going developments will enable low-cost smart sensors to be realized via trainable TESPAR/FANN classifiers integrated with a sensor on silicon.



