
IEEE Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 5-7 September 2005, Sofia, Bulgaria

Intelligent Data Acquisition and Processing Using Wavelet Neural-Networks

Andrea Kulakov 1), Danco Davcev 2)

1) Computer Science Department, Faculty of Electrical Engineering, University "Sts Cyril and Methodius", Skopje, Macedonia, kulak@etf.ukim.edu.mk

2) Computer Science Department, Faculty of Electrical Engineering, University "Sts Cyril and Methodius", Skopje, Macedonia, etfdav@etf.ukim.edu.mk

Abstract - Most of the problems of data management in today's wireless sensor networks were already dealt with during the past thirty years of the artificial neural-networks tradition, and that kind of algorithm can easily be implemented on wireless sensor network platforms. These problems include the need for simple parallel distributed computation, the possibility of distributed storage, fault-tolerance and, in some cases, the possibility of auto-classification of sensor readings. We present data acquisition through a hierarchical two-level architecture whose algorithms use wavelets for initial processing of the sensory inputs and neural-networks with unsupervised learning for categorization of the sensory inputs. They are tested on data obtained from a set of 4 motes, equipped with seven sensors each.

Keywords - Wireless sensor networks, wavelets, neural-networks, data robustness, data clustering, unsupervised pattern learning

I. INTRODUCTION

Sensor networks deal with huge amounts of data and are usually deployed to monitor the physical world. A completely centralized data collection strategy would be hard to put into practice due to the energy constraints on sensor node communication. Moreover, it would be very inefficient given that sensor data have considerable redundancy both in time and in space. In cases when the application demands compressed summaries of large spatio-temporal sensor data and similarity queries, such as detecting correlations and finding similar patterns, the use of a neural-network algorithm seems to be a reasonable choice.

Still, until recently, the only application of neural-network algorithms for data processing in the field of sensor networks was the work of Catterall et al. [5], where they slightly modified the Kohonen Self-Organizing Maps model. In our previous work ([8], [9], [11]) we have shown how the same data can be classified using a more sophisticated model of a neural-network, namely the ART model ([6], [7], [2], [3], [4]). In this paper we use different data, obtained at our partner lab at Politecnico di Torino, Italy, specifically for the purpose of using the architecture proposed in [8], [9] and [11].

Artificial Neural Networks that use unsupervised learning typically perform dimensionality reduction or pattern clustering. They are able to discover both regularities and irregularities in the redundant input data by an iterative process of adjusting the weights of the interconnections between a large number of simple computational units (called artificial neurons). As a result of the dimensionality reduction obtained easily from the outputs of these algorithms, lower communication costs and thus bigger energy savings can also be obtained.

The rationale underlying the use of wavelets is that they offer a successful and easily obtained time-frequency signal representation, with the ability to view the data at multiple spatial and temporal scales and to extract important features in the data, such as abrupt changes at various scales. Generally, the wavelet transform is a procedure of filtering and subsampling operations [12].
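This filtering-and-subsampling view can be made concrete with a one-level Haar decomposition; the short Python sketch below is illustrative only (the example signal and the 1/sqrt(2) normalization are our own choices, not taken from [12]).

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar DWT: low-pass / high-pass filtering of
    adjacent sample pairs followed by subsampling by two.
    The signal length is assumed to be even."""
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)   # low-pass: local averages
    detail = (even - odd) / np.sqrt(2.0)   # high-pass: local differences
    return approx, detail

# An abrupt change in the signal shows up as a large detail coefficient.
sig = [1.0, 1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
approx, detail = haar_step(sig)
print(approx)   # coarse, half-length view of the signal
print(detail)   # zeros everywhere except at the jump
```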

In this paper we first review the ART1 [7] and FuzzyART [3] models. We then show a possible architecture for incorporating the Neural Networks into a small network of units, combining each with wavelet preprocessing, as we proposed earlier in [10]. We briefly present the hardware platform that was used to obtain the data on which we test our proposal, and we give some results of the classification of the data within this architecture. We also show how the wavelet transform of the signals can be incorporated in the proposed architectures.

II. ART AND FuzzyART

Several neural-network models have been proposed, such as the Multi-layer Perceptron (MLP), Self-Organizing Maps (SOMs), and Adaptive Resonance Theory (ART) [7]. Out of these we have chosen the ART models for implementation in the field of sensor networks, because they do not constrain the number of different categories into which the input data will be clustered and can add new categories while comparing the current input with previous ones. ART has been developed primarily for pattern recognition. Models of unsupervised learning include ART1 [7] for binary input patterns and FuzzyART [3] for analog input patterns.


ART networks develop stable recognition codes by self-organization in response to arbitrary sequences of input patterns. They were designed to solve the so-called stability-plasticity dilemma: how to continue to learn from new events without forgetting previously learned information. ART networks model several features such as robustness to variations in intensity, detection of signals mixed with noise, and both short- and long-term memory to accommodate variable rates of change in the environment. There are two main variations of ART-based unsupervised networks: ART1 (a three-layer network with binary inputs) and FuzzyART (with analog inputs, representing neuro-fuzzy hybrids which inherit all key features of ART). More details can be found in [1].

Fig. 1. Architecture of the ART network (layers L0, L1 and L2, with the sensor inputs at the bottom and the classified or learned category at the top).

In Fig. 1 a typical diagram of an ART Artificial Neural Network is given. L2 is the layer of category nodes. If the degree of category match at the L1 layer is lower than the sensitivity threshold ρ, originally called the vigilance level, a reset signal is triggered, which deactivates the current winning L2 node (the one with maximum incoming activation) for the period of presentation of the current input.

An ART network is built up of three layers: the input layer (L0), the comparison layer (L1) and the recognition layer (L2), with N, N and M neurons, respectively. The input layer stores the input pattern, and each neuron in the input layer is connected to its corresponding node in the comparison layer via one-to-one, non-modifiable links. Nodes in the L2 layer represent categories into which the inputs have been classified so far. The L1 and L2 layers interact with each other through weighted bottom-up and top-down connections that are modified when the network learns. There are additional gain control signals in the network, not shown in Fig. 1, which regulate its operation; they will not be detailed here.

The learning process of the network can be described as follows. At each presentation of a non-zero binary input pattern p (p_j ∈ {0, 1}; j = 1, 2, ..., N), the network attempts to classify it into one of its existing categories based on its similarity to the stored prototype of each category node. More precisely, for each node i in the L2 layer, the bottom-up activation A_i is calculated, which can be expressed as

A_i = \frac{\sum_{j=1}^{N} w_{ij}\, p_j}{\varepsilon + \sum_{j=1}^{N} w_{ij}}, \qquad i = 1, \ldots, M, \qquad (1)

where w_i is the (binary) weight vector or prototype of category i, and ε > 0 is a parameter. Then the L2 node C that has the highest bottom-up activation, i.e. A_C = max{A_i | i = 1, ..., M}, is selected. The weight vector of the winning node (w_C) is then compared to the current input at the comparison layer. If they are similar enough, i.e. if they satisfy the matching condition

\frac{\sum_{j=1}^{N} w_{Cj}\, p_j}{\sum_{j=1}^{N} p_j} \geq \rho, \qquad (2)

where ρ is the sensitivity threshold (0 < ρ ≤ 1), then the L2 node C captures the current input and the network learns by modifying w_C:

w_C^{\mathrm{new}} = \gamma\, (w_C^{\mathrm{old}} \wedge p) + (1 - \gamma)\, w_C^{\mathrm{old}}, \qquad (3)

where γ is the learning rate (0 < γ ≤ 1) and ∧ denotes the element-wise logical AND. All other weights in the network remain unchanged. The case γ = 1 is called the "fast learning" mode.

If, however, the stored prototype w_C does not match the input sufficiently, i.e. if condition (2) is not met, the winning L2 node is reset for the period of presentation of the current input. Then another L2 node (or category) with the next highest A_i is selected, whose prototype is matched against the input, and so on. This "hypothesis-testing" cycle is repeated until the network either finds a stored category whose prototype matches the input well enough, or allocates a new L2 node, in which case learning takes place according to (3).
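For concreteness, the following Python sketch implements the cycle described by equations (1)-(3) in the fast-learning case (γ = 1); the class name, parameter names and the default values of ρ and ε are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

class ART1:
    """Minimal ART1 sketch following equations (1)-(3), fast learning (gamma = 1)."""

    def __init__(self, n_inputs, rho=0.75, eps=0.001):
        self.N = n_inputs    # input dimensionality
        self.rho = rho       # sensitivity (vigilance) threshold in (2)
        self.eps = eps       # small positive parameter in (1)
        self.w = []          # binary prototype vectors, one per L2 category

    def present(self, p):
        """Classify a non-zero binary pattern p; returns the category index,
        allocating a new L2 node if no stored prototype matches."""
        p = np.asarray(p, dtype=int)
        # bottom-up activations A_i, eq. (1), tried in decreasing order
        order = sorted(range(len(self.w)),
                       key=lambda i: (self.w[i] @ p) / (self.eps + self.w[i].sum()),
                       reverse=True)
        for i in order:                                  # hypothesis-testing cycle
            if (self.w[i] @ p) / p.sum() >= self.rho:    # matching condition (2)
                self.w[i] = self.w[i] & p                # eq. (3) with gamma = 1
                return i
        self.w.append(p.copy())                          # no match: new category node
        return len(self.w) - 1

# Usage: repeated presentation of similar binary patterns reuses a category.
net = ART1(n_inputs=6, rho=0.7)
print(net.present([1, 1, 0, 0, 1, 0]))   # -> 0 (new category)
print(net.present([1, 1, 0, 0, 0, 0]))   # -> 0 again, since the match passes rho
```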

III. DATA ACQUISITION USING WAVELET NEURAL NETWORKS

We have enhanced the ART classifiers with simple preprocessing units that perform a wavelet transformation on each signal. Each unit has a FuzzyART implementation classifying only its own sensor readings.


One of the units can be chosen to be a Clusterhead, collecting and classifying only the classifications obtained at the other units. Since the cluster or category identification numbers at each unit can be represented with integer values, the neural-network implementation at the Clusterhead is ART1 with binary inputs.

With this architecture a great dimensionality reduction can be achieved, depending on the number of sensor inputs in each unit (in our case it is a 7-to-1 reduction). At the same time, communication savings benefit from the fact that the cluster number is a small binary number, unlike the raw sensory readings, which can be several-byte-long real numbers converted from the analog inputs. Since communication is the biggest consumer of energy in the units, this also leads to bigger energy savings. Another benefit of this architecture is that the classifications at the Clusterhead can be viewed as an indication of the spatio-temporal correlations of the input data.
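As an illustration of this two-level flow, the sketch below pairs a minimal FuzzyART (analog inputs in [0, 1], complement coding, fast learning, in the spirit of [3]) at the unit level with a binary encoding of the resulting category IDs for the Clusterhead; the class and function names, the 4-bit ID width and the random readings are our own assumptions, not the authors' code.

```python
import numpy as np

class FuzzyART:
    """Minimal FuzzyART sketch (cf. [3]): analog inputs in [0, 1],
    complement coding, fast learning."""

    def __init__(self, n_inputs, rho=0.75, alpha=0.001):
        self.n_inputs = n_inputs   # number of raw sensor channels (informational)
        self.rho = rho             # vigilance
        self.alpha = alpha         # choice parameter
        self.w = []                # category prototypes (complement-coded)

    def present(self, x):
        x = np.asarray(x, dtype=float)
        I = np.concatenate([x, 1.0 - x])          # complement coding
        order = sorted(range(len(self.w)),
                       key=lambda j: np.minimum(I, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()),
                       reverse=True)
        for j in order:
            m = np.minimum(I, self.w[j])          # fuzzy AND of input and prototype
            if m.sum() / I.sum() >= self.rho:     # vigilance test
                self.w[j] = m                     # fast learning
                return j
        self.w.append(I.copy())                   # new category
        return len(self.w) - 1

def id_to_bits(category_id, width=4):
    """Small binary code for a unit-level category ID (width is illustrative)."""
    return [(category_id >> b) & 1 for b in range(width)]

# One acquisition round: each of the 4 units clusters its 7 normalized readings
# (the 7-to-1 reduction); the concatenated binary IDs form the Clusterhead input,
# which would then be presented to an ART1 network such as the sketch in Section II.
units = [FuzzyART(n_inputs=7) for _ in range(4)]
readings = np.random.rand(4, 7)
ids = [unit.present(r) for unit, r in zip(units, readings)]
clusterhead_input = np.concatenate([id_to_bits(i) for i in ids])
```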

Fig. 2. One Clusterhead collecting and classifying the data after they have once been classified at the lower level (FuzzyART classifiers at the units).

Fig. 3. Each unit can be enhanced with a wavelet preprocessor of the raw sensor data (the ART layers L0, L1 and L2 of Fig. 1 preceded by a wavelet stage).

IV. HARDWARE PLATFORM AND EXPERIMENTAL RESULTS

The platform for the experiments, from which the data analyzed in this paper were obtained, is a collection of 4 units of the Xbow platform, each of them equipped with a light sensor, a microphone, 2 accelerometers, 2 magnetic sensors, and a thermometer. The test set-up is composed of a sensor node (node 0) directly connected to a PC by a serial interface, and three nodes distributed in the surrounding environment. Each node transmits its data, directly or through multi-hop, to node 0, which forwards the data to the PC. The data were collected in separate files accordingly and later analyzed in the simulation environment.

Fig. 4. Results of the categorization of the sensory inputs for the architecture shown in Fig. 2, with different sensitivity thresholds (ρ = 0.99, 0.85 and 0.75); the horizontal axis is the sample number (0-1600).

We compare these results with the ones previously obtained in [10], where we used the data from the experiments of Catterall et al. [5]. We have calculated the wavelet coefficients over different time windows, although we present here only the calculations over 512 samples. The results of the clustering with the first architecture, augmented with the computation of the wavelet coefficients calculated over 512 samples, are given in Fig. 5. We have also analyzed signals which were purposefully made faulty (sensor 17 giving zero or random values); the results are also shown in Fig. 5. The architecture for data acquisition shows significant data robustness, which can be seen from the graphs in Fig. 4 and Fig. 5, where similar behavior can be noticed.

Fig. 5. Clustering of the 1D Haar wavelet coefficients calculated over 512 samples, with the second architecture, ρ = 0.80, where the 17th sensor is faulty (zero or random readings); the horizontal axis is the sample number (0-1600).
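For completeness, the windowing and the two kinds of sensor faults used in this test can be sketched as follows; the array shapes, function names and the random generator are our own assumptions rather than the authors' code.

```python
import numpy as np

def inject_fault(readings, sensor_idx, mode="zero", seed=0):
    """Return a copy of a (samples x sensors) array with one sensor made
    faulty: either stuck at zero or replaced by uniform random values."""
    faulty = np.asarray(readings, dtype=float).copy()
    if mode == "zero":
        faulty[:, sensor_idx] = 0.0
    else:
        rng = np.random.default_rng(seed)
        faulty[:, sensor_idx] = rng.random(faulty.shape[0])
    return faulty

def haar_window_coeffs(signal, window=512):
    """One-level Haar coefficients over non-overlapping windows of
    `window` samples; each row holds the approximation coefficients
    followed by the detail coefficients of one window."""
    signal = np.asarray(signal, dtype=float)
    rows = []
    for start in range(0, len(signal) - window + 1, window):
        x = signal[start:start + window]
        even, odd = x[0::2], x[1::2]
        rows.append(np.concatenate([even + odd, even - odd]) / np.sqrt(2.0))
    return np.array(rows)
```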


In order only to test the possibility of preprocessing the signals before analyzing them with a neural-network, we have implemented only the calculation of one-scale wavelets and taken them as inputs for the neural-network clustering.
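A minimal sketch of such one-scale preprocessing is given below: the Haar coefficients of a sample window are rescaled to [0, 1] so that they can serve directly as an analog input vector for the clustering; the rescaling and the guard against constant windows are our own illustrative choices.

```python
import numpy as np

def window_to_input(window_samples):
    """One-scale Haar coefficients of a sample window, rescaled to [0, 1]
    so they can be fed to an analog (FuzzyART-style) classifier."""
    x = np.asarray(window_samples, dtype=float)
    even, odd = x[0::2], x[1::2]
    coeffs = np.concatenate([even + odd, even - odd]) / np.sqrt(2.0)
    lo, hi = coeffs.min(), coeffs.max()
    return (coeffs - lo) / (hi - lo + 1e-9)   # guard against a constant window

# e.g. inputs = [window_to_input(w) for w in signal.reshape(-1, 512)]
```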

V. CONCLUSION

In this paper we have demonstrated a possible adaptation of one popular Neural Network algorithm (the ART model) to the field of wireless sensor networks. The positive features of the ART class of algorithms, such as simple parallel distributed computation, distributed storage, data robustness and auto-classification of sensor readings, are demonstrated within the proposed architecture, consisting of one Clusterhead collecting only the clustering outputs from the other units.

This architecture provides a big dimensionality reduction and at the same time additional communication savings, since only classification IDs (small binary numbers) are passed to the Clusterhead instead of all input samples. The Discrete Wavelet Transform of the signals is incorporated as a preprocessing unit of the neural-networks, giving the ability to extract important features in the data, such as abrupt changes at various scales.

ACKNOWLEDGMENT

We gratefully thank the researchers Maurizio Spirito and Federico Stirano, working at Politecnico di Torino, Italy, for providing the sensor data from measurements conducted at their lab.

REFERENCES

[1] G. Bartfai and R. White, "ART-based Modular Networks for Incremental Learning of Hierarchical Clusterings," Connection Science, vol. 9, no. 1, 1997, pp. 87-112.
[2] G. A. Carpenter and S. Grossberg, "A massively parallel architecture for a self-organizing neural pattern recognition machine," Computer Vision, Graphics, and Image Processing, vol. 37, 1987, pp. 54-115.
[3] G. A. Carpenter, S. Grossberg, and D. B. Rosen, "Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system," Neural Networks, vol. 4, 1991, pp. 759-771.
[4] G. A. Carpenter and S. Grossberg, "Integrating symbolic and neural processing in a self-organizing architecture for pattern recognition and prediction," in V. Honavar and L. Uhr (Eds.), Artificial Intelligence and Neural Networks: Steps Towards Principled Prediction, San Diego: Academic Press, 1994, pp. 387-421.
[5] E. Catterall, K. Van Laerhoven, and M. Strohbach, "Self-Organization in Ad Hoc Sensor Networks: An Empirical Study," in Proc. of Artificial Life VIII, the 8th Int. Conf. on the Simulation and Synthesis of Living Systems, Sydney, NSW, Australia, 2002.
[6] S. Grossberg, "Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors," Biological Cybernetics, vol. 23, 1976, pp. 121-134.
[7] S. Grossberg, "Adaptive Resonance Theory," in Encyclopedia of Cognitive Science, Macmillan Reference Ltd, 2000.
[8] A. Kulakov and D. Davcev, "Tracking of unusual events in wireless sensor networks based on artificial neural-networks algorithms," in Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC 2005), April 4-6, 2005, Las Vegas, NV, USA.
[9] A. Kulakov, D. Davcev, and G. Trajkovski, "Implementing artificial neural-networks in wireless sensor networks," in Proceedings of the IEEE Sarnoff Symposium on Advances in Wired and Wireless Communications, April 18-19, 2005, Princeton, NJ, USA.
[10] A. Kulakov, D. Davcev, and G. Trajkovski, "Application of wavelet neural-networks in wireless sensor networks," in Proceedings of the ACIS Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2005), May 23-25, 2005, Towson University, Maryland, USA.
[11] A. Kulakov and D. Davcev, "Distributed data processing in wireless sensor networks based on artificial neural-networks algorithms," in Proceedings of the 10th IEEE Symposium on Computers and Communications, La Manga del Mar Menor, Cartagena, Spain, June 27-30, 2005.
[12] G. Strang and T. Q. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, 2nd ed., 1997.
