Spatio-Temporal Perception Nets

Martin Pongratz∗, Rosemarie Velik† and Jana Machajdik‡

∗Vienna University of Technology, Institute of Computer Technology
Gußhausstraße 27-29 / E384, A-1040 Wien, Austria
Email: [email protected]

†Tecnalia Research and Innovation, Department of Biorobotics and Neuro-Engineering
Paseo Mikeletegi 1, E-20009 Donostia-San Sebastian, Spain
Email: [email protected]

‡Vienna University of Technology, Institute of Computer Aided Automation
Favoritenstraße 9, A-1040 Wien, Austria
Email: [email protected]

Abstract—State-of-the-art approaches to autonomous systems face the challenge of sensor data fusion, abstraction, classification, and prediction of events. The trend is going towards the integration of more and more sensors into automation systems, which will reach a number of sensors comparable to the amount of sensory receptors in the human body in the not too distant future. While today's technical systems cannot cope with such a flood of information to be processed rapidly, these challenges are mastered exceptionally well by the human brain. Based on this observation, in prior work, a biologically inspired model for sensor data processing has been proposed [1]. This so-called neuro-symbolic information processing model is based on a functional model of the human perception system. Here, an extension of this concept to spatial and temporal aspects of perception is presented. The challenges for solving these tasks as well as the strategies to master these challenges based on perception nets are presented.

I. INTRODUCTION

A higher level of automation is seen as the key to a more efficient, comfortable, safe, and independent life for the earth's population, including the young and especially the older generations. Having initially found application primarily in factory environments [2], the focus of interest has now begun to shift towards domains like ambient assisted living [3], resource and energy saving [4], safety and security surveillance [5], comfort and entertainment [6], and similar ones. This development faces the challenge of a system's interaction with the environment and, as a crucial prerequisite for this, particularly a system's 'perception' of its surroundings. Current systems map physical models to their sensory data and apply mathematical or statistical models in order to act on this information [7], [8], [9]. Even systems at the research stage (compare Section II) use only basic abstraction layers for learning and perception, like the usage of meta-parameters [10]. Due to the very restricted capabilities of these systems, their adoption and usage have been limited. Especially in environments where not all influencing aspects can be measured or where the sensitivity to measurement errors is high, a clear mathematical description of the process is

not possible. This clearly restricts the usage of technical systems to relatively simple processes. The outcomes achieved so far for more complex applications, targeting automated recognition and classification of everyday-life situations and the prediction of physical events, have been far from ideal [4].

In comparison to existing technical approaches, the human brain/mind is able to master these tasks nearly perfectly based on the personal experience acquired over years. Without any consideration of the underlying fundamental physical or mathematical models, humans achieve prediction and classification tasks with very high reliability. For example, humans – even small children – can distinguish thousands of different events and objects of different sizes, forms, and colors, at different locations, moving or non-moving. Tennis players are able to accurately predict the trajectory of tennis balls served at more than 200 km/h and are even able to return these balls using a tennis racket. Other examples of outstanding tasks achieved by the human perception system are baseball (batting and catching of the ball), squash and badminton (batting), crossing a road and estimating the traffic, etc.

In perception, anticipation (unconscious behavior) and prediction (conscious behavior) based on experience are required. This knowledge is furthermore the basis for a high number of tasks where classification and template matching are required, like the recognition of a person based on her/his voice, of objects based on visual information, or of spoilt milk based on the abnormal proprioception when moving the bottle or liquid packaging board.

The high reliability of the human brain and perception system in performing such tasks motivates the extraction and application of biological principles in technical systems. Due to this fact, human and animal models of the brain have given inspiration to many technical systems. However, the results of concepts and implementations in this field are so far of only very limited use [11]. A high number of approaches have simply dealt with an abstract implementation of the basic processing

IEEE Africon 2011 - The Falls Resort and Conference Centre, Livingstone, Zambia, 13 - 15 September 2011

978-1-61284-993-5/11/$26.00 ©2011 IEEE


Fig. 1. Modular Hierarchical Structure of a Perception Network

units of the brain – neurons – and their arrangement in neural networks [12], [13], [14], [15], [16], [17], [18], [19] without considering the function of the whole system. This approach is similar to reconstructing the function of a modern central processing unit or graphics processing unit, containing millions of transistors, based only on the information that it is made of transistors. Theoretically, the same function could, however, be built on different systems with ternary logic, relays instead of transistors, or other hardware implementations. The key information is the structure, information processing principle, and function of the system [20], [21].

A new brain-inspired approach considering exactly this structural and functional background of the human perception system is the neuro-symbolic information processing principle proposed by R. Velik [1], [22]. In Section II, an overview of this developed model is given, which shall be the basis for a further extension of this neuro-symbolic network model in Sections III and IV, with particular focus on spatial and temporal aspects of perception, giving the new model the name of spatio-temporal perception nets.

II. PRIOR WORK – NEURO-SYMBOLIC NETWORKS

The approach of neuro-symbolic networks having the structural organization of the human perceptual cortex (compare Figure 1) is a major step ahead in modeling human perception functions for a technical realization. In contrast to artificial neural networks, an interpretable meaning is assigned to single neurons in these networks, transforming them into so-called "neuro-symbols". Other differences to neural nets lie in the fact that neuro-symbols can have properties and pass these properties on to other neuro-symbols, and that the information processing inside these nets is event-based (offering a computation-time advantage for processes with a low number of events over time when being simulated on common computer systems based on serial processing principles). In order to avoid misinterpretation of the neuro-symbolic nets and the related information processing due to the similar name to neural nets, this approach will be referred to as "perception net" from now on.

Perception networks were created in order to deal with the increasing complexity of sensor data and the extraction and merging of relevant information from these multi-domain sensor sources [4]. The information of multiple domains is merged in a hierarchical concept of several layers (composed of further sub-layers) mimicking the primary, secondary, and tertiary cortex of the human brain.

The starting point for information processing is input data from sensory receptors. The information coming from these sensory receptors is then processed by neuro-symbols in several stages: feature level, sub-unimodal level, unimodal level, and multimodal level. Furthermore, for the processing of dynamic events, a scenario level is introduced above the multimodal level (see Section IV-C). In the first three stages, information from each sensory modality is processed separately and in parallel. In the last two stages, neuro-symbolic information from all modalities is merged and results in a unitary multimodal perception of the environment.

A simple example of a sensor network and the merging of the sensory input in the related perception net, used for surveillance of a room based on a video camera, a microphone, and a number of "tactile" sensors (light barrier, tactile floor sensors, motion detector), is shown in Figure 2. Each sensor is assigned to a modality. The modality-specific information processing is done in the lowest level of the hierarchical concept. The perception net shown in Figure 2 is used to detect that a person enters a room based on the sensory input.
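For illustration, the event-based propagation between neuro-symbols described above can be sketched in a few lines of Python. The class design and the AND-style fusion rule are simplifying assumptions for this example, not part of the published model; only the symbol names are taken from the room-surveillance scenario.

```python
# Minimal, illustrative sketch of event-based neuro-symbol fusion.
# The class and the fusion rule are assumptions, not the published model.

class NeuroSymbol:
    """A symbol that activates once all of its lower-level inputs are active."""

    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)   # lower-level neuro-symbols feeding this one
        self.active = False
        self.outputs = []            # higher-level symbols to notify on change
        for sym in self.inputs:
            sym.outputs.append(self)

    def set_active(self, value):
        """Event-based update: only propagate when the state actually changes."""
        if value != self.active:
            self.active = value
            for out in self.outputs:
                out.on_input_event()

    def on_input_event(self):
        # Simple AND-style fusion: fire when every input symbol is active.
        self.set_active(all(sym.active for sym in self.inputs))


# Unimodal symbols from the room-surveillance example (Figure 2).
person = NeuroSymbol("person")                 # visual modality
steps = NeuroSymbol("steps")                   # acoustic modality
object_enters = NeuroSymbol("object enters")   # tactile modality

# Multimodal symbol fusing the three unimodal ones.
person_enters = NeuroSymbol("person enters",
                            inputs=[person, steps, object_enters])

person.set_active(True)
steps.set_active(True)
print(person_enters.active)   # False: the tactile modality is still missing
object_enters.set_active(True)
print(person_enters.active)   # True: all modalities agree
```

Because updates are only propagated on state changes, no computation happens while the scene is static, which is the computation-time advantage of event-based processing mentioned above.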


Fig. 2. Example perception net to recognize that a person is entering a room

III. LOCATION-DEPENDENT PERCEPTION

To form networks, neuro-symbols of different levels have to be interconnected. Without considering the location where a perceptual image was perceived, a neuro-symbol of a lower level can in principle contribute to the activation of all connected neuro-symbols in the next higher level. However, when different situations happen in parallel in the environment, this can lead to a merging of neuro-symbols which in fact do not belong together. That way, an activation of inadequate higher-level neuro-symbols can occur. Location information about the position where the perceptual images represented by particular neuro-symbols have been detected can help to resolve this problem. Accordingly, only lower-level neuro-symbols that lie within a certain spatial area are bound together into a higher-level neuro-symbol. For this purpose, in the feature layer, location information is coded in the form of topographic maps, meaning that sensors neighboring in spatial location project to neighboring neuro-symbols. In contrast, in the higher layers, location information is coded as a property of neuro-symbols. Information about the location of neuro-symbols can be sent to other connected neuro-symbols. Neuro-symbols can only be activated by other neuro-symbols if the perceptual images they represent are located within a certain spatial area.

The principle is illustrated by adapting the example of Figure 2, in which the event of a person entering a room is detected. By a concurrent triggering of different sensors, the unimodal symbols person, steps, and object enters are activated. Similar to the example given in Figure 2, these three neuro-symbols can activate the multimodal neuro-symbol "person enters". However, as illustrated in Figure 3, the unimodal neuro-symbols are perceived in different spatial areas. Therefore, it is very likely that they originate from different events and that their fusion to the neuro-symbol person enters would be incorrect. To avoid an undesired activation of neuro-symbols, it is useful to define how much certain lower-level symbols may deviate in location to still be bound to a higher-level symbol. This information can

Fig. 3. The Role of Location Information in Fusion between Different Layers

either be predefined, which requires knowledge from the system designer, or it can be learned from examples presented to the system (see [21]).
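The location gating described above can be sketched as follows. The helper name, the centroid-distance criterion, and the coordinate values are assumptions chosen for this illustration; the paper itself does not prescribe a particular distance measure.

```python
# Illustrative sketch: gate the binding of lower-level neuro-symbols on their
# spatial distance. The centroid criterion is an assumption for this example.
import math

def within_binding_area(activations, max_deviation):
    """Return True if all activated symbols lie within max_deviation
    of their common centroid, i.e. they may be bound together."""
    xs = [loc[0] for _, loc in activations]
    ys = [loc[1] for _, loc in activations]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return all(math.hypot(x - cx, y - cy) <= max_deviation
               for x, y in zip(xs, ys))

# Unimodal activations with hypothetical (x, y) positions in the room, in meters.
same_event = [("person", (1.0, 1.0)), ("steps", (1.2, 0.9)),
              ("object enters", (0.9, 1.1))]
separate_events = [("person", (1.0, 1.0)), ("steps", (6.0, 4.0)),
                   ("object enters", (0.9, 1.1))]

print(within_binding_area(same_event, max_deviation=1.0))       # True
print(within_binding_area(separate_events, max_deviation=1.0))  # False
```

The `max_deviation` parameter plays the role of the predefined or learned location tolerance: in the second case, the distant "steps" activation prevents the incorrect fusion to "person enters".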

IV. TIME-DEPENDENT PERCEPTION

The concept presented in Figure 2 is able to recognize situations that occur simultaneously. Perception of dynamic events has not been fully discussed yet but constitutes a main feature that allows establishing a new basis for autonomous systems for object and situation recognition, process prediction, and many other tasks where a certain level of abstraction is necessary and the characteristics of the process can be learned from multiple observations. The main advantages of these systems lie in the lower initial configuration demands (saving human resources and costs), the broad variety of applications, and the quality of perception, recognition, and prediction. The latter especially holds true for processes that cannot be described mathematically to a full extent or where measuring all relevant influencing factors is not possible due to their complexity. Possible applications are: prediction of an object's trajectory based on fragmentary information, matching and recognition of electrical loads based on their energy consumption over time, automated and autonomous observation of the elderly, recognition of emergency situations, and many more.

A. Perceiving Temporal Relations

The information processing in perception nets is event-based and does not have any clocking or time information included – similar to humans. Still, biological archetypes do experience time, and the derived perception nets also allow considering time dependencies and dynamic aspects of events. A simple example of the necessity of considering temporal aspects is shown in Figure 4a, where an interval of overlap is needed to correctly identify relations between neuro-symbolic activations. If this overlap does not occur, events perceived by different sensory modalities cannot be correlated (see Figure 4b).
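The overlap criterion of Figure 4a amounts to a standard interval-intersection test, sketched below. The helper name and the interval values are hypothetical and only serve the illustration.

```python
# Illustrative sketch of the overlap test from Figure 4: two neuro-symbolic
# activations can only be correlated if their activation intervals overlap.

def intervals_overlap(a, b):
    """Each interval is (start, end) in seconds; touching endpoints count."""
    return a[0] <= b[1] and b[0] <= a[1]

visual = (0.0, 1.5)    # e.g. "person" visible from t = 0.0 s to t = 1.5 s
acoustic = (1.2, 2.0)  # e.g. "steps" heard from t = 1.2 s to t = 2.0 s
tactile = (2.5, 3.0)   # an activation arriving too late

print(intervals_overlap(visual, acoustic))  # True: the two can be fused
print(intervals_overlap(visual, tactile))   # False: no common interval (Fig. 4b)
```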


Fig. 4. Temporal Overlapping of Neuro-symbolic Activations

Fig. 5. Prolongation of the Activation Time by the Usage of Time Windows

The example in Figure 4 shows symbols from the sub-unimodal level, which are more likely to deliver wrong information because of sensor failures than symbols of higher levels. Also, short interruptions in the activation of these symbols are possible. In order to overcome these shortcomings, the usage of time windows (compare Figure 5) is possible.

While this might sound contradictory to the bio-inspired approach, these time windows could be achieved by feedback loops [21] between neurons and synaptic connections of different types, resulting in a longer activation of the related symbol. Especially for the information processing at the unimodal neuro-symbol level or below, the usage of time windows is supported by findings of neuroscience: On the one hand, the general information processing of humans shows signs of an input lag specific to each modality, which is explained by different signal transmission durations for the modalities' inputs located all over the body (e.g. tactile information from the toe) [23].

Fig. 6. Structure for Detecting a Certain Succession of Events

On the other hand, some aspects of time experience have been found to be modality-specific and thus have to be included in the unimodal levels of the perception net [23].
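One possible realization of such a time window is to hold a symbol's activation for a fixed duration after its last raw input, so that short interruptions are bridged as in Figure 5. The class and parameter names below are hypothetical; the feedback-loop mechanism discussed above is abstracted into a simple timestamp check.

```python
# Illustrative sketch of a time window: the symbol stays active for `window`
# seconds after the last raw input, bridging short interruptions.
# Names and the timestamp-based mechanism are assumptions for this example.

class WindowedSymbol:
    def __init__(self, window):
        self.window = window    # prolongation of the activation, in seconds
        self.last_event = None  # timestamp of the most recent raw input

    def raw_input(self, t):
        self.last_event = t

    def is_active(self, t):
        return (self.last_event is not None
                and t - self.last_event <= self.window)

steps = WindowedSymbol(window=0.5)
steps.raw_input(t=1.0)
print(steps.is_active(t=1.3))   # True: still within the window, no new input
print(steps.is_active(t=2.0))   # False: the prolonged activation has decayed
```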

B. Perceiving Successions of Events

In contrast to Figure 5, where the concurrent activation of distinct symbols, i.e. the occurrence of activations within a certain time interval, is recognized and responded to, cases may exist where the succession of the occurrences is highly relevant. For example, the detection of an object entering a room can also be defined as the activation of the following three neuro-symbols in exactly this succession:

1) Object is present (detected by the floor sensors)
2) Object passes (the light barrier)
3) Motion (an object moves inside the room, detected by the motion sensor)

A suitable structure for recognition of this sequence is shown in Figure 6.

Here, connections representing positive and negative feedback are used in the perception net. The negative feedback from b1, c, and e ensures that the succession of the activations matches the specified one, while the positive feedback from a neuron's output to the same neuron's input allows storage of a symbol's activation. Neurons from the higher levels are used to reset the neurons from lower levels through negative feedback while maintaining their own activation through positive feedback (the connections of the output of d with the input of d as well as with the input of a are illustrative examples). Examples of the activation and non-activation of the neuro-symbol "Object enters", based on the sequence of input data running through the network structure presented in Figure 6, are given in Figure 7.
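The net effect of this feedback structure can be approximated, for illustration, by a small sequence detector: latched progress plays the role of the positive self-feedback, and an out-of-order event resets it, mirroring the negative feedback. This is a strong simplification of the network in Figure 6; the function and event names are hypothetical.

```python
# Illustrative approximation of the Figure-6 structure: "object enters" fires
# only when the three input symbols activate in the specified succession.
# This is a simplification; the real net uses positive/negative feedback.

def detect_object_enters(events, expected=("present", "passes", "motion")):
    """Return True iff `events` contains the expected succession in order.
    An out-of-order event resets the latched progress (negative feedback)."""
    stage = 0  # how many steps of the succession are currently latched
    for ev in events:
        if ev == expected[stage]:
            stage += 1                 # positive feedback: latch the progress
            if stage == len(expected):
                return True            # the "Object enters" symbol activates
        elif ev in expected:
            # Reset: keep stage 1 only if the event restarts the succession.
            stage = 1 if ev == expected[0] else 0
    return False

print(detect_object_enters(["present", "passes", "motion"]))  # True
print(detect_object_enters(["motion", "passes", "present"]))  # False
```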

The advantage of the solution presented in Figure 6 over the approach presented in Figure 5 lies in the higher error rejection rate in cases where the sequence of lower-level activations is not the one specified for the target neuro-symbol. This is


Fig. 7. Examples for Detection of Correct and Incorrect Successions of Events

due to the fact that the activation of the higher-level neuro-symbol is limited to one sequence of activations of the input neuro-symbols. In all other cases, the "Object enters" neuro-symbol will not be activated.

C. Concept for High-Level Time Perception

Similar to the previous two subsections, where the aspect of time was discussed on the level of unimodal symbols, the same basic principles are applicable for time perception on higher levels. A main difference here is that the duration of the perceived events can range from some milliseconds to seconds in the unimodal layers and up to tens of seconds in the multimodal layer. Additionally, the introduction of a new layer, called the scenario layer, is necessary in order to represent high-level dynamic events resulting from a sequence of multimodal perceptions (see the highest layer in Figure 1).

V. CONCLUSION

In this article, one concept for the perception of spatial relations and two principles for the perception of temporal relations and successions of sensor data in perception nets have been presented. The temporal approaches allow time perception on different layers of the network and thus can be used for short- as well as long-term sequences.

Future work will target the prediction of events and event characteristics based on currently available perceptual information and previously established experience. Furthermore, investigation will go towards the development of a brain-inspired model of a perception-action loop in order to allow for precise, fast, and adequate reactions to occurring events.

REFERENCES

[1] R. Velik, D. Bruckner, R. Lang, and T. Deutsch, "Emulating the perceptual system of the brain for the purpose of sensor fusion," in Human-Computer Systems Interaction, ser. Advances in Soft Computing, Z. Hippe and J. Kulikowski, Eds. Springer Berlin / Heidelberg, 2009, vol. 60, pp. 17–27.

[2] D. Bruckner, R. Velik, and Y. Penya, "Perception in automation - a call to arms," Journal on Embedded Systems, vol. 2011, 2011.

[3] N. Qadeer, R. Velik, G. Zucker, and H. Boley, "Knowledge representation for a neuro-symbolic network in home care risk identification," in Industrial Informatics, 2009. INDIN 2009. 7th IEEE International Conference on, 2009, pp. 277–282.

[4] R. Velik and G. Zucker, "Autonomous perception and decision making in building automation," Industrial Electronics, IEEE Transactions on, vol. 57, no. 11, pp. 3645–3652, 2010.


[5] D. Bruckner and R. Velik, "Behavior learning in dwelling environments with hidden Markov models," Industrial Electronics, IEEE Transactions on, vol. 57, no. 11, pp. 3653–3660, 2010.

[6] R. Velik and H. Boley, "Neurosymbolic alerting rules," Industrial Electronics, IEEE Transactions on, vol. 57, no. 11, pp. 3661–3668, 2010.

[7] J. Kober, K. Mulling, O. Kromer, C. Lampert, B. Scholkopf, and J. Peters, "Movement templates for learning of hitting and batting," in Robotics and Automation (ICRA), 2010 IEEE International Conference on, May 2010, pp. 853–858.

[8] M. Pongratz, F. Kupzog, H. Frank, and D. Barteit, "Transport by throwing - a bio-inspired approach," in Industrial Informatics (INDIN), 2010 8th IEEE International Conference on, 2010, pp. 685–689.

[9] D. Barteit, H. Frank, and F. Kupzog, "Accurate prediction of interception positions for catching thrown objects in production systems," in Industrial Informatics, 2008. INDIN 2008. 6th IEEE International Conference on, 2008, pp. 893–898.

[10] J. Kober and J. Peters, "Imitation and reinforcement learning," Robotics Automation Magazine, IEEE, vol. 17, no. 2, pp. 55–62, June 2010.

[11] R. Velik, G. Zucker, and D. Dietrich, "Towards automation 2.0: A neurocognitive model for environment recognition, decision-making, and action execution," EURASIP Journal on Embedded Systems, 2010.

[12] R. Harvey and K. Heinemann, "Biological vision models for sensor fusion," in Control Applications, 1992, First IEEE Conference on, Sep. 1992, pp. 392–397, vol. 1.

[13] R. Akita, R. Pap, and J. Davis, "Biologically inspired sensor fusion," Accurate Automation Corp., Chattanooga, TN, Tech. Rep., 1999.

[14] M. Costello and E. Reichle, "Movement templates for learning of hitting and batting," in Robotics and Automation (ICRA), 2010 IEEE International Conference on, May 2010, pp. 853–858.

[15] E. P. Blasch and J. C. Gainey, “Feature-based biological sensor fusion,”in FUSION ’98 International Conference, 1998, pp. 702–709.

[16] A. Paplinski and L. Gustafsson, "Multimodal feedforward self-organizing maps," in Computational Intelligence and Security, ser. Lecture Notes in Computer Science, Y. Hao, J. Liu, Y. Wang, Y.-m. Cheung, H. Yin, L. Jiao, J. Ma, and Y.-C. Jiao, Eds. Springer Berlin / Heidelberg, 2005, vol. 3801, pp. 81–88.

[17] H. Bothe, M. Persson, L. Biel, and M. Rosenholm, "Multivariate sensor fusion by a neural network model," in Proc. Second International Conference on Information Fusion, 1999, pp. 1094–1101.

[18] D. George and B. Jaros, "The HTM learning algorithms," Numenta, Tech. Rep., 2007.

[19] I. Arel, D. Rose, and T. Karnowski, "A deep learning architecture comprising homogeneous cortical circuits for scalable spatiotemporal pattern inference," in Proc. NIPS 2009 Workshop on Deep Learning for Speech Recognition and Related Applications, 2009. [Online]. Available: http://research.microsoft.com/en-us/um/people/dongyu/NIPS2009/

[20] L. I. Perlovsky, B. Weijers, and C. W. Mutz, "Cognitive foundations for model-based sensor fusion," in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, I. Kadar, Ed., vol. 5096, Aug. 2003, pp. 494–501.

[21] R. Velik, "A bionic model for human-like machine perception," Ph.D. dissertation, Vienna University of Technology, Institute of Computer Technology, 2008.

[22] R. Velik, "The neuro-symbolic code of perception," Journal of Cognitive Science, 2010.

[23] D. M. Eagleman, Brain Time, M. Brockman, Ed. Vintage Books, 2009.
