
Signal Processing and Pattern Recognition of AE Signatures
Athanasios A. Anastasopoulos
Envirocoustics S.A., El. Venizelou 7 & Delfon, 14452 Athens, Greece
nassos@envirocoustics.gr
ABSTRACT
The paper presents the essential elements of signal processing and pattern recognition as applied to the analysis of
Acoustic Emission (AE) data from different materials, structures and processes. Signal processing and pattern recognition are
extensively used in AE data evaluation in order to discriminate relevant from non-relevant indications, characterize the source
of emission and correlate it with the associated failure mechanism. The underlying idea is that each AE source is
characterized by its signature, to be identified during the analysis process. Signal processing is performed at the waveform
level, either by applying digital filtering, Fourier transforms or other processing such as the wavelet transform, or by extracting AE
features as a means to describe the shape and content of a detected AE waveform. In either case the aim is to discriminate
one type of waveform from another and correlate it with different source mechanisms. The use of histogram analysis and/or two-dimensional
correlation plots is discussed as the conventional AE signature identification process. The respective limitations are
discussed and the alternative of multi-dimensional feature sorting is presented as an introduction to the necessity of pattern
recognition in AE data analysis. The use of both supervised and unsupervised pattern recognition techniques is presented and
representative examples are given. Demonstrating that pattern recognition is not a panacea, the paper shows that the
algorithms and respective software offer all the necessary tools for evaluating the complexity of the problem and proceeding with
classifier design for AE signature recognition.
Introduction
Since the early days of Acoustic Emission (AE), when analog systems were used to measure a single AE parameter, such as
RMS (root mean square), or simple counters measured threshold crossings, tremendous progress has been made in AE
technology. Acoustic Emission [1] instrumentation for research and industrial applications is now based on modern, powerful
and fast multi-channel digital systems capable of simultaneously recording and processing waveforms, time-driven data
(independent of threshold sensitivity), hit-driven data (threshold dependent) as well as external parameters (such as load,
temperature etc.). In addition, AE systems have taken full advantage of state-of-the-art developments in computers and
digital signal processing, incorporating the latest technological advances into their hardware and software. Modern AE systems
perform complex signal processing such as waveform filtering and multi-parameter AE feature extraction, while at the same time
displaying multiple screens of processed AE signals and data. The basic principles of AE signal processing are presented
herein, together with a comparison between waveform-based and feature-based processing strategies. This paper
does not address the architecture of digital AE systems or the hardware design issues of Digital Signal Processors (DSP) or
Field Programmable Gate Arrays (FPGA).
Pattern recognition techniques are presented as an alternative and/or complementary AE data processing technique to the
traditional amplitude distribution [2] and two-dimensional correlation plots [3], aiming to help operators in noise identification
and/or filtering as well as assisting the overall evaluation. The paper discusses the basic principles of supervised classification [4,
5], consisting of a learning process where representative AE data are used as examples to train a classifier for subsequent
classification and automatic evaluation of unknown AE data. It also discusses the use of unsupervised pattern recognition [4-6], as a multidimensional sorting technique aiming to identify and separate noise-related AE (EMI, friction, mechanical impacts,
flow noise) from legitimate AE. A case study from AE testing and pattern recognition analysis is presented. The advantages and
limitations of the entire methodology are discussed in relation to conventional evaluation techniques.
Signal Processing
An emerging technology implemented in modern digital AE systems is waveform streaming. Data streaming extends the
waveform recording capabilities of modern systems and allows the capture of the continuous (non-stop) AE signal during
an entire test. The important point regarding data streaming is that we now have a record of the true acoustic emission, as
close as possible to the analogue signal at the piezo-crystal output. In addition, recording is independent of acquisition
parameters, such as threshold and hit lockout time. Advantages of such an approach include the lower risk of losing data due
to waveform length restrictions, increased data rates or overlapping events. A typical example of data streamed during 2
minutes of AE monitoring of reciprocating machinery is presented in figure 1a, while a zoom on the single-cycle AE response is
presented in figure 1b.
Although data streaming offers several advantages, the amount of data increases drastically and therefore streaming is usually restricted
to between 4 and 8 channels. When more AE channels are needed for recording AE waveforms, voltage-threshold-based
triggering is performed, resulting in a collection of short-duration waveforms. The advantage of including waveform recording and
processing in the AE system is the extra information provided, helping the user with source characterization or location. Besides
traditional signal processing techniques such as the Fourier Transform, digital filtering and correlation, advanced digital signal
processing techniques, such as the Short-Time FFT or wavelet transforms, can be applied for better signal interpretation and information
representation. In addition, reprocessing the waveforms and generating new AE features using different thresholds, filtering
and setup parameters is a further advantage.
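As a rough illustration of threshold-based triggering, the following sketch splits a streamed record into short-duration hit waveforms. It is a simplified assumption of the logic (real systems implement this in DSP/FPGA hardware): a hit opens at the first threshold crossing, closes when no crossing occurs within the hit definition time, and re-triggering is inhibited during the lockout time.

```python
import numpy as np

def extract_hits(stream, threshold, hdt, hlt, fs):
    """Split a continuous (streamed) record into hit waveforms.

    Simplified sketch: a hit starts at the first threshold crossing, ends when
    the signal stays below threshold for HDT, and a new hit cannot start for
    HLT after the previous one ends. hdt/hlt are in seconds, fs in Hz.
    """
    hdt_n = int(hdt * fs)          # hit definition time in samples
    hlt_n = int(hlt * fs)          # hit lockout time in samples
    hits, i, n = [], 0, len(stream)
    while i < n:
        if abs(stream[i]) >= threshold:
            start = i
            last_cross = i
            i += 1
            while i < n and i - last_cross <= hdt_n:
                if abs(stream[i]) >= threshold:
                    last_cross = i     # hit stays open while crossings continue
                i += 1
            hits.append(stream[start:last_cross + 1])
            i = last_cross + 1 + hlt_n  # lockout: ignore immediate re-triggers
        else:
            i += 1
    return hits
```

Applied to a streamed record such as that of figure 1a, each returned segment corresponds to one triggered waveform of the kind a multi-channel system would store.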
Figure 1: a. (top) Streamed waveform b. (bottom) single cycle AE response – zoomed from streamed waveform.
The purpose of AE feature extraction is to extract as much information as possible about the shape and content of a waveform, in order to
differentiate it from waveforms with different source mechanisms. With enough of these waveform descriptors, AE features
can be used in combination to virtually reconstruct the AE waveform. In several practical applications [1], AE features are
measured in real time by the hardware using DSPs or FPGAs, without simultaneous recording of AE waveforms. By measuring only the
AE features, one achieves considerable data reduction, resulting in much faster digital signal processing and pattern recognition
than dealing with the actual waveform. On the other hand, the greatest disadvantage of features measured in real time is their
dependence on the threshold used for extraction and its effect on the resulting AE signatures. Figure 2 shows an Acoustic
Emission waveform with some of the AE features superimposed on it. These time-domain AE features are known as Hit Driven
Data, since they refer to each AE hit as defined by the threshold and the respective timing parameters, such as the hit definition time
(HDT) and hit lock-out time (HLT).
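A minimal sketch of how such hit-driven features could be computed from one digitized hit is given below. The feature definitions are simplified assumptions (exact definitions vary between AE systems); the amplitude line follows equation (1).

```python
import numpy as np

def hit_features(wave, threshold, fs, preamp_gain_db=0.0):
    """Illustrative time-domain AE features for one hit waveform (volts)."""
    rect = np.abs(wave)
    above = rect >= threshold
    idx = np.flatnonzero(above)                 # samples at or above threshold
    peak = int(np.argmax(rect))
    # counts = number of rising-edge threshold crossings
    counts = int(above[0]) + int(np.sum(~above[:-1] & above[1:]))
    duration = (idx[-1] - idx[0]) / fs          # first to last crossing
    rise_time = (peak - idx[0]) / fs            # first crossing to the peak
    counts_to_peak = int(above[0]) + int(np.sum(~above[:peak] & above[1:peak + 1]))
    # Eq. (1): dBAE = 20 log10(Vmax / 1 uV) - preamplifier gain
    amplitude_db = 20.0 * np.log10(rect[peak] / 1e-6) - preamp_gain_db
    return {"counts": counts, "duration": duration, "rise_time": rise_time,
            "counts_to_peak": counts_to_peak, "amplitude_dB": amplitude_db}
```

The dictionary returned for each hit corresponds to one row of the pattern matrix, i.e. one features vector.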
[Figure 2 plots the AE signal (potential in V versus time), with the threshold, signal arrival time, rise time, duration, amplitude, counts and counts to peak marked on the waveform.]

vi = [ArrivalTime, Param1, Param2, RiseTime, Counts, Amplitude, Duration, Energy, ..., feature_n]^T
Figure 2: Typical AE signal. Some of the features extracted are shown. The Features Vector for each AE hit (record) is the
representation of that hit as a vector in a multidimensional space.
Among the features shown in Figure 2, the best-known AE feature, and one of the very few that is not affected by the set
trigger threshold, is the Amplitude, i.e. the maximum (positive or negative) AE signal at the sensor output, described by the
formula:

dBAE = 20 log10 (Vmax / 1 μV) – (Preamplifier Gain in dB)    (1)

At this stage it is important to note that amplitude is an absolute measure of the signal at the sensor output and should be
considered independent of the system and preamplifier used. On the other hand, features such as the Time of Hit, i.e. the time
at which the AE signal exceeds the AE threshold, or Counts (in other words, threshold crossing counts) are strongly affected
by the threshold setting. A lower threshold would result in a very different time of hit, affecting also the wave velocity used for location,
and would increase the number of counts. Other parameters, such as energy and duration, are also affected by threshold changes,
though to a lesser extent than counts.
On the other hand, frequency-domain features are less affected by threshold changes. In addition, frequency-domain
features are very important for AE and source mechanism identification. Commonly used frequency-based features, calculated
in real time by the systems, are:
- Frequency Centroid, the first moment of inertia of the FFT magnitude.
- Peak Frequency, i.e. the frequency which contains the largest magnitude.
- Partial Power AE features, representing the percentage of energy contained in a certain frequency range. They are calculated by summing the power spectrum in a user-specified range of frequencies and dividing it by the total power (across the full range of frequencies).
Other commonly used frequency features, usually calculated in post-processing, relate to the frequency bandwidth,
expressed as the FFT width at 10%, 30% or 50% of maximum magnitude (kHz), i.e. the bandwidth over 10%, 30% or 50% of the maximum magnitude.
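The frequency-domain features above can be sketched as follows (a NumPy illustration; the partial-power band limits are hypothetical user settings, and real systems compute these on the hardware):

```python
import numpy as np

def frequency_features(wave, fs, band=(100e3, 300e3)):
    """Frequency centroid, peak frequency and partial power of one waveform.

    `band` is an assumed user-specified frequency range for the partial power.
    """
    spec = np.abs(np.fft.rfft(wave))                 # FFT magnitude
    freqs = np.fft.rfftfreq(len(wave), d=1.0 / fs)   # frequency of each bin (Hz)
    power = spec ** 2
    centroid = np.sum(freqs * spec) / np.sum(spec)   # first moment of FFT magnitude
    peak_freq = freqs[np.argmax(spec)]               # bin with largest magnitude
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    partial = np.sum(power[in_band]) / np.sum(power) # fraction of energy in band
    return centroid, peak_freq, partial
```

For a hit dominated by a single resonance, the centroid and peak frequency coincide; for mixed-mode signals they diverge, which is what makes them useful descriptors.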
Finally, it is worth mentioning that it is also possible to extract the features from only a segment of the waveform, rather than the entire
waveform. Usually the first part of the waveform is of interest, as in the example of figure 3, since it better represents the characteristics of
the source. Extracting features from different parts of the waveform is particularly useful when different wave
modes can be identified and processed.
Figure 3: Example of a waveform segment that starts 29.75 μsec after the threshold crossing and has a 69.75 μsec duration.
Features can be extracted from a section of each waveform.
Pattern Recognition
Once signal processing is completed and features are extracted or measured in real time, the next step in developing a pattern
recognition process for an AE application is the inter-comparison of the collected data in order to sort unknown objects (AE waveforms,
hits or events) into groups by similarity, or the comparison against a database in an attempt to match the object(s) with known ones. For the
needs of applying pattern recognition during AE testing, pattern recognition is treated as a pure classification process assigning an input to a category called a class, where:
- AE patterns (AE waveforms, hits or events) in the same class exhibit similar properties; they are SIMILAR,
- while AE patterns (AE waveforms, hits or events) in different classes exhibit different properties; they are NOT SIMILAR, they are DISSIMILAR.
The use of histogram analysis and/or two-dimensional correlation plots between features has been proposed [2, 3] for AE
signature identification as an empirical pattern recognition process. Such an approach is usually based on histograms of a single
feature, such as amplitude, or on two-dimensional correlation plots of counts versus amplitude, where graphical
alarms can be used to group the data into two or three categories/classes. Although such an approach might be sufficient in
some analysis cases, it does not permit solving complicated problems such as the recognition of multiple failure-mode signatures
from composite materials. For such cases of complicated signature recognition, or whenever automatic classification is
required, advanced multidimensional classifier design is used. Classification can be performed by two main methodologies:
Supervised Pattern Recognition [4, 5] involves a learning process where each new (unknown) set of AE data is
processed and classified into previously known classes by comparing its features to a database or using rules derived from the
learning process. In this case, classifier design is a process of "Learning from Examples" and is called Supervised Pattern
Recognition. The following three types of supervised algorithms have been used in AE data analysis [5]:
⇒ k-NNC [7]: The k-Nearest Neighbour Classifier is a simple distance-based algorithm. The algorithm classifies the unknown
pattern to the class label most frequently occurring among its k nearest samples. It is a simple yet powerful method, the
performance of which depends mainly on the completeness and accuracy of the training set.
⇒ Linear [7]: The Linear Classifier, as the name suggests, is dedicated to the classification of linearly separable problems.
Training is based on an iterative process, aiming to estimate the weights of the linear discriminant functions. The Linear
classifier might be considered as a special version of single-layer neural network.
⇒ BP Net [8]: The Back Propagation Neural Network is characterised by its multilayer perceptron topology, where connection
weights and processing element biases are modified using the generalised delta rule. The BP encoding (training) process
is an iterative one and thus needs to be repeated until a satisfactory output is attained. The performance depends mainly
on the neural network topology and correction functions, as well as on the completeness and accuracy of the training set.
Unsupervised Pattern Recognition [4-7] is the process by which AE data are classified into general groups according to their
similarity. This process does not require any previous knowledge or database. Objects are classified into groups by
comparing their features and deciding upon their similarity. In the absence of a priori knowledge about the recognition problem, as
is often the case in acoustic emission, "Unsupervised Pattern Recognition" techniques are employed.
In unsupervised pattern recognition, the number of classes/categories must be estimated, as well as a meaningful grouping of
the AE data for further use as a training set during classifier design. Unsupervised pattern recognition requires a lot of
experimentation and intuition by the user in order to achieve acceptable results. Application [9-18] of unsupervised pattern
recognition to Acoustic Emission data is even more difficult, due to the transient and often non-reversible phenomena
monitored by the method. As inverse problems might not be uniquely determined, the application of unsupervised pattern
recognition does not always result in solutions corresponding to the physical phenomena.
Figure 4: AE data in 3D view. The addition of the 3rd dimension enables understanding of the true data structure
compared with a 2D correlation plot.
Clustering algorithms are numerical methods for the partition (grouping) of N patterns (hits/signals) into M
classes/categories, without a priori information on the number of classes and the class characteristics. Clustering algorithms are
used to overcome the difficulties arising from the human inability to visualize the geometrical properties of data in a
multidimensional space (refer to figure 4) and to help the analyst discover the structure of the data by identifying families of
patterns (and the respective hits/signals) with similar characteristics. The Euclidean distance is usually used as the measure of
pattern similarity, while the implementation of other distance types, such as City Block, Square and
Octagonal, offers increased flexibility in analysis. Five traditional clustering algorithms and a Kohonen LVQ neural network are
discussed as the core of unsupervised pattern recognition:
⇒ Max-Min Distance [4-7]: A simple heuristic algorithm aiming to identify cluster regions, which are farthest apart. Depending
on the selected features and the selection of algorithm parameters, it can be used either for identifying “outliers” (such as
extreme noise), or to define the initial partition for further optimization by other algorithms.
⇒ K-Means [4-7]: A well-known algorithm for the minimization of a performance index throughout an iterative process. In case
of Euclidean distance, the performance index is the sum of squared errors. It is a simple algorithm, which, besides the
distance selection and initial partition, requires input of the desired number of clusters.
⇒ Forgy [4-7]: An algorithm based on K-Means, with added heuristic criteria for controlling the number of clusters by deleting
small classes with few points and creating a new cluster if a pattern is sufficiently separated from its closest cluster. It
offers additional flexibility compared to the K-Means algorithm.
⇒ Cluster Seeking [4-7]: An algorithm that, in its original version, was based on a distance threshold expressed as the radius of
a hyper-sphere (if the Euclidean distance is used) and required only one pass through the data set. When iteration controls and
heuristic criteria for deleting small classes and leaving unclassified data outside a certain distance are added, the result is
an algorithm known as the "Wish" algorithm.
⇒ ISODATA [4-7]: The most versatile of the traditional algorithms, also based on K-Means, it offers additional
flexibility for controlling the resulting partition by merging and splitting clusters during each iteration. Compared with the
Forgy algorithm, ISODATA requires better insight into the data structure, and the selection of its parameters is more difficult.
⇒ LVQ [5, 8]: The Learning Vector Quantizer (LVQ) is a two-layer (input-output) artificial neural system introduced by Kohonen. The version
usually implemented uses unsupervised single-winner encoding to perform classification of vectors into any one of a predefined number of classes. The single-winner LVQ is a network of two fully connected layers. LVQ exhibits conceptual
similarities to K-Means, the main difference being in the cluster centre modification scheme.
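As an illustration of the clustering core, a minimal K-Means sketch minimizing the sum of squared Euclidean errors is given below. Random initialization is used here only for self-containment; in the interactive methodology the paper describes, the Max-Min Distance results would supply the initial partition instead.

```python
import numpy as np

def k_means(X, m, n_iter=100, seed=0):
    """Plain K-Means on an (N x d) pattern matrix `X` with `m` clusters.
    Iteratively assigns each pattern to its nearest centre and moves each
    centre to the mean of its members, minimizing the sum of squared errors.
    Returns (labels, centres)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), m, replace=False)]   # random initial centres
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)                   # nearest-centre assignment
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(m)])
        if np.allclose(new, centres):                   # converged
            break
        centres = new
    return labels, centres
```

The Forgy and ISODATA variants described above add heuristics (deleting, creating, merging and splitting clusters) on top of exactly this assignment/update loop.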
Unsupervised pattern recognition relies on clustering evaluation and cluster validity [4-6], which can be performed by means of
class statistics (evaluating the inter-class distances versus class compactness) or by using specific indexes such as the Rij index or
other vector discriminant functions. In cases where extensive experimentation and cluster validity analysis are not performed, the
clustering algorithms can be used as a multidimensional sorting tool. An unsupervised analysis methodology suitable for AE
data analysis and signature recognition was proposed [ ], consisting of the combined use of the Max-Min Distance and K-Means or
Forgy algorithms. The methodology is based on "interactive" clustering, where results from Max-Min Distance are used to
initialize the K-Means or Forgy algorithms. A generalized approach to the "interactive" clustering method has been implemented [5].
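Cluster validity by class statistics (inter-class distance versus class compactness) can be illustrated with an index of the form R_ij = (S_i + S_j) / d_ij. The sketch below uses the Davies-Bouldin formulation, which is assumed here to be representative of the Rij-type indexes mentioned above; lower values indicate compact, well-separated classes.

```python
import numpy as np

def db_index(X, labels):
    """Davies-Bouldin style validity index over a clustered pattern matrix:
    R_ij = (S_i + S_j) / d_ij, where S is the mean within-class scatter and
    d_ij the distance between class centres; each class contributes its worst
    pairing. Lower is better."""
    classes = np.unique(labels)
    centres = np.array([X[labels == c].mean(axis=0) for c in classes])
    scatter = np.array([np.mean(np.linalg.norm(X[labels == c] - centres[i], axis=1))
                        for i, c in enumerate(classes)])
    worst = []
    for i in range(len(classes)):
        r = [(scatter[i] + scatter[j]) / np.linalg.norm(centres[i] - centres[j])
             for j in range(len(classes)) if j != i]
        worst.append(max(r))
    return float(np.mean(worst))
```

Comparing such an index across candidate partitions (e.g. different numbers of clusters) is one way to support the choice of the final grouping.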
Demonstration Case Study
Three resonant AE sensors (PAC-R15I, 150 kHz) were mounted on a thick metallic plate in a triangular pattern
(320 mm x 540 mm). A four-channel PAC-DiSP board was used for real-time data acquisition, and the NOESIS [15] pattern
recognition software for the analysis and pattern recognition studies. Typical AE waveforms and their associated feature values
used in the present case study are presented in Figure 5.
Waveform | Feature 1 (Rise Time) | Feature 2 (Amp dB) | Feature 3 (Counts) | ... | Feature k (MARSE)
R1       | 89                    | 83                 | 79                 | ... | 157
R2       | 93                    | 87                 | 92                 | ... | 994
R3       | 249                   | 97                 | 19                 | ... | 155
...      | ...                   | ...                | ...                | ... | ...
Rn       | 76                    | 52                 | 11                 | ... | 2
Figure 5: Pattern Matrix of a typical set of AE data.
The simulated Acoustic Emission signals of figure 5 were produced by a Hsu-Nielsen source (mechanical pencil lead breaks,
0.3 mm, 2H) at various positions on the plate. Friction-like simulated signals were produced by sliding a
small metal piece across the surface of the plate. Finally, Electromagnetic Interference (EMI) signals were generated by
unplugging the sensor cable during acquisition. The Time-Amplitude plot of Figure 6 presents the sequence of
the experiment: during the first 40 sec, simulated AE signals were generated using the Hsu-Nielsen source; friction-like
emissions followed for the next 15 sec, and finally the EMI simulation was performed.
Figure 6: Simulated AE used for Supervised PR example
Since the experiment was performed in a controlled way and representative AE hits (examples) from each signal class are
known a priori, the associated pattern matrix can be used for supervised pattern recognition. In order to demonstrate
classifier design, the available data were divided at random into two parts, a training and a testing set, used to train and test the
classifier.
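The random division into training and testing sets can be sketched as follows (a hypothetical helper, not part of any named package):

```python
import numpy as np

def split_train_test(X, y, frac=0.5, seed=0):
    """Randomly split a labelled pattern matrix into training and test sets.
    `frac` is the fraction of hits assigned to the training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))   # shuffle hit indices
    cut = int(frac * len(X))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]
```

Repeating the split with different seeds, as suggested later for stability checks, gives several independent training-test pairs from the same experiment.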
The various supervised algorithms available have both advantages and disadvantages [4]. For the demonstration needs of
the present study, a Back Propagation neural network was trained. The network topology consists of 3 layers (input, hidden
and output). The input layer contained ten nodes, representing the dimension of the pattern vectors, i.e. the 10 features used
(Rise Time, Counts to Peak, Counts, MARSE, Duration, Amplitude, Absolute Energy, Frequency Centroid and Peak
Frequency); the hidden layer contained three nodes, as did the output layer. The three nodes of the output layer
represent the three different classes of interest, i.e. simulated AE, friction and EMI.
The network was successfully trained, resulting in a 1.8% overall error, which represents 1 misclassified point (from the AE class to the
friction class). The error is due to poor definition of the training set. More specifically, the two low-amplitude hits at time
22.5 sec (Figure 6) were labeled and used in both the training and test sets as simulated AE, although they represent a type of friction
just before the pencil break. Deleting those two hits from the training and test sets, 100% recognition (0% error)
was achieved.
Further optimization of classifier performance, aiming to improve classification speed without sacrificing accuracy, can be
performed by reducing the size of the pattern vector, i.e. removing features. Generally speaking, the use of highly correlated
features should be avoided. For this purpose, the covariance or the correlation matrix of the AE features should be examined or
subjected to hierarchical clustering in order to assist the selection of uncorrelated features [4, 6]. Where this is not available, a step-by-step
approach may be adopted: reducing the size of the pattern vector by one (e.g. in the previous example, using 9 features instead of 10),
repeating the training and evaluating the overall and within-class error. In any case, classifier stability should be
assessed by evaluating the classifier performance using different training-test sets.
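Examining the correlation matrix for highly correlated feature pairs can be sketched as follows; the 0.95 cut-off is an arbitrary illustrative value, and one member of each flagged pair is a candidate for removal from the pattern vector.

```python
import numpy as np

def highly_correlated_pairs(features, names, thresh=0.95):
    """Flag feature pairs whose absolute correlation exceeds `thresh`.
    `features` is the (n_hits x n_features) pattern matrix; `names` labels
    its columns. Returns (name_i, name_j, correlation) tuples."""
    corr = np.corrcoef(features, rowvar=False)   # feature-by-feature correlation
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > thresh:
                pairs.append((names[i], names[j], float(corr[i, j])))
    return pairs
```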
Presuming that a priori knowledge about the data of Figures 5 and 6 is not available, unsupervised PR techniques are used
in order to demonstrate the classification sequence. Following the unsupervised PR methodology discussed in the previous
paragraph and presented in depth elsewhere [6], the interactive coupling of the Max-Min Distance and Forgy algorithms resulted in
the five different classes presented in Figure 7. Close observation of the resulting partition indicates that all the data
from pencil breaks were grouped together (named class 3, red circles in this case), resulting in 100% recognition. Similarly,
the EMI data recorded at times greater than 50 sec were all grouped together (named class 1, green boxes in this case),
resulting in 100% recognition. However, the AE data due to friction, between 37 sec and 47 sec, were grouped into three
different classes (named 0, 2 and 4 in Figure 7).
Figure 7: Results of UPR applied on data of Figure 6
Figure 8: 3D Scatter plot of UPR results.
Close investigation of the 3D scatter plot of Figure 8 indicates the wide scatter of the friction-like AE hits when viewed in the rise
time versus peak frequency plane. In other words, the grouping of these data into 3 different classes reflects hidden correlations, not
expected from the AE point of view. It is worth noting that different results should be expected if a subset of the 10 features
were used (for example, only Counts, Amplitude and Duration).
Validating the results is a complicated task and requires a great deal of intuition and experimentation. However, validity
studies concerning the results of clustering algorithms are necessary in order to increase our confidence in the estimated
number of clusters and to decide on the best partition of the data. Once a desired clustering scheme has been achieved, the
resulting partitioning can be used to automate the classification process on other data sets from similar-in-nature AE testing, by
designing a classifier, i.e. supervised pattern recognition.
Discussion and Conclusions
Feature extraction from both the time and frequency domains can provide a full description of the AE waveform at very high
compression ratios. Real-time feature extraction is the preferred method for real-time pattern recognition, while waveform
recording and post-processing can be used for better representation of AE sources and in order to lower classification error rates,
although waveform recording and processing result in considerably slower classification speeds.
Where complicated AE signatures are present (e.g. in cases where high background noise exists, or in composite structures
where several failure mechanisms have to be discriminated), conventional graphical analysis may not provide the necessary
resources for discrimination. In such cases, automated statistical and/or neural network techniques extend the AE user's
capabilities in identifying the hidden structure and correlation of data categories in a multidimensional space. Unsupervised
pattern recognition techniques can be used to determine classes of similar AE signals and subsequently train a supervised
pattern recognition algorithm, so that the established classification schemes can be reapplied to new, unknown data.
Pattern recognition techniques can prove to be a fast and effective tool for AE data analysis. Care, though, must be taken to
understand the limitations of the technique so as to use it properly. Pre-processing should be considered an important step
in classifier design. A generalized classifier applicable to all types of AE tests cannot be designed; a number of reasons (e.g.
sensor spacing, different thresholds and acquisition parameters, the variety of noise sources per test type, different source
mechanisms etc.) limit such generalization. However, once a large database from similar tests is established, and once PR
results are validated, the technique can be used for real-time interpretation of AE data. Optimizing the speed performance of
the classifier should be considered when attempting to use PR for real-time data classification.
References
1. A. A. Pollock, "Acoustic Emission Inspection", in Metals Handbook, Vol. 17, Ninth Edition, ASM International, 1989, pp. 278-294.
2. A. A. Pollock, "Acoustic Emission Amplitude Distributions", International Advances in Nondestructive Testing, Vol. 7, Gordon and Breach Science Publishers, 1981, pp. 215-239.
3. D. Short, J. Summerscales, "Amplitude Distribution Acoustic Emission Signatures of Unidirectional Fibre Composite Hybrid Materials", Composites, Vol. 15, No. 3, 1984, pp. 200-206.
4. A. A. Anastasopoulos, ASNT Handbook, 3rd Edition, Vol. 6: Acoustic Emission Testing, Chapter 5, Part 2, Technical Editors R. K. Miller & E. v. K. Hill, ASNT, 2005.
5. NOESIS V4.1, Overview & Reference Manuals, Envirocoustics S.A., 2004.
6. A. A. Anastasopoulos, T. P. Philippidis, "Clustering Methodologies for the Evaluation of AE from Composites", J. of Acoustic Emission, Vol. 13, No. 1/2, 1995, pp. 11-21.
7. J. T. Tou, R. C. Gonzalez, "Pattern Recognition Principles", Addison-Wesley, Reading, Massachusetts, 1974.
8. P. K. Simpson, Artificial Neural Systems, Pergamon Press, 1990.
9. D. Kouroussis, A. Anastasopoulos, P. Vionis, V. Kolovos, "Unsupervised Pattern Recognition of Acoustic Emission from Full Scale Testing of a Wind Turbine Blade", J. of Acoustic Emission, Vol. 18, 2000, pp. 217-223.
10. A. A. Anastasopoulos, T. P. Philippidis, "Pattern Recognition Analysis of AE from Composites", Proceedings of EWGAE, 23rd European Conference on AE Testing, Vienna, 6-8 May 1998, pp. 15-20.
11. T. Philippidis, V. Nikolaidis, A. Anastasopoulos, "Damage Characterisation of C/C Laminates Using Neural Network Techniques on AE Signals", NDT&E International, Vol. 31, No. 5, Elsevier, 1998, pp. 329-340.
12. A. Anastasopoulos, A. Tsimogiannis, D. Kouroussis, "Acoustic Emission Proof Testing of Insulated Aerial Man Lift Devices", J. of Acoustic Emission, Vol. 18, 2000, pp. 224-230.
13. A. Tsimogiannis, B. Georgali, A. Anastasopoulos, "Acoustic Emission / Acousto-Ultrasonic Data Fusion for Damage Evaluation in Concrete", J. of Acoustic Emission, Vol. 18, 2000, pp. 21-28.
14. A. G. Dutton et al., "Acoustic Emission Monitoring from Wind Turbine Blades Undergoing Static and Dynamic Fatigue Testing", Insight, British Journal of NDT, Vol. 42, No. 12, December 2000, pp. 805-808.
15. A. A. Anastasopoulos et al., "Structural Integrity Evaluation of Wind Turbine Blades Using Pattern Recognition Analysis on Acoustic Emission Data", J. of Acoustic Emission, Vol. 20, 2002, pp. 229-237.
16. A. N. Tsimogiannis, V. N. Nikolaidis, A. A. Anastasopoulos, "Hydrogen Cylinder Acoustic Emission Testing and Data Evaluation with Supervised Pattern Recognition", NDT.net, September 2002, Vol. 7, No. 09, http://www.ndt.net/v07n09.htm.
17. V. Godinez et al., "Semi-Real Time Classification of Acoustic Emission Signals for Drive System Coupling Crack Detection", EWGAE, 26th European Conference on Acoustic Emission Testing, Berlin, 15-17 September 2004, DGZfP Proceedings BB 90-CD, ISBN 3-931381-57-9, pp. 481-493.
18. A. A. Anastasopoulos, A. N. Tsimogiannis, "Evaluation of Acoustic Emission Signals During Monitoring of Thick-Wall Vessels Operating at Elevated Temperatures", J. of Acoustic Emission, Vol. 22, 2004, pp. 59-70.