
JAMRIS 2010 Vol 4 No 2


www.jamris.org


Editor-in-Chief:
Janusz Kacprzyk (Systems Research Institute, Polish Academy of Sciences; PIAP, Poland)

Co-Editors:
Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)
Kaoru Hirota (Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Japan)
Witold Pedrycz (ECERF, University of Alberta, Canada)
Roman Szewczyk (PIAP, Warsaw University of Technology, Poland)

Editorial Office:
Industrial Research Institute for Automation and Measurements PIAP
Al. Jerozolimskie 202, 02-486 Warsaw, POLAND
Tel. +48-22-8740109
[email protected]

Editorial Board:
Chairman: Janusz Kacprzyk (Polish Academy of Sciences; PIAP, Poland)
Plamen Angelov (Lancaster University, UK)
Zenn Bien (Korea Advanced Institute of Science and Technology, Korea)
Adam Borkowski (Polish Academy of Sciences, Poland)
Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany)
Oscar Castillo (Tijuana Institute of Technology, Mexico)
Chin Chen Chang (Feng Chia University, Taiwan)
Jorge Manuel Miranda Dias (University of Coimbra, Portugal)
Bogdan Gabryś (Bournemouth University, UK)
Jan Jabłkowski (PIAP, Poland)
Stanisław Kaczanowski (PIAP, Poland)
Tadeusz Kaczorek (Warsaw University of Technology, Poland)
Marian P. Kaźmierkowski (Warsaw University of Technology, Poland)
Józef Korbicz (University of Zielona Góra, Poland)
Krzysztof Kozłowski (Poznań University of Technology, Poland)
Eckart Kramer (Fachhochschule Eberswalde, Germany)
Andrew Kusiak (University of Iowa, USA)
Mark Last (Ben-Gurion University of the Negev, Israel)
Anthony Maciejewski (Colorado State University, USA)
Krzysztof Malinowski (Warsaw University of Technology, Poland)

Executive Editor:
Anna Ładan (PIAP, Poland), [email protected]

Associate Editors:
Mariusz Andrzejczak (PIAP, Poland)
Katarzyna Rzeplińska-Rykała (PIAP, Poland)

Webmaster:
Tomasz Kobyliński, [email protected]

Proofreading:
Urszula Wiączek

Copyright and reprint permissions: Executive Editor

Editorial Board (continued):
Andrzej Masłowski (PIAP, Poland)
Tadeusz Missala (PIAP, Poland)
Fazel Naghdy (University of Wollongong, Australia)
Zbigniew Nahorski (Polish Academy of Sciences, Poland)
Antoni Niederliński (Silesian University of Technology, Poland)
Witold Pedrycz (University of Alberta, Canada)
Duc Truong Pham (Cardiff University, UK)
Lech Polkowski (Polish-Japanese Institute of Information Technology, Poland)
Alain Pruski (University of Metz, France)
Leszek Rutkowski (Częstochowa University of Technology, Poland)
Klaus Schilling (Julius-Maximilians-University Würzburg, Germany)
Ryszard Tadeusiewicz (AGH University of Science and Technology in Kraków, Poland)
Stanisław Tarasiewicz (University of Laval, Canada)
Piotr Tatjewski (Warsaw University of Technology, Poland)
Władysław Torbicz (Polish Academy of Sciences, Poland)
Leszek Trybus (Rzeszów University of Technology, Poland)
René Wamkeue (University of Québec, Canada)
Janusz Zalewski (Florida Gulf Coast University, USA)
Marek Zaremba (University of Québec, Canada)
Teresa Zielińska (Warsaw University of Technology, Poland)

JOURNAL of AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS

Publisher: Industrial Research Institute for Automation and Measurements PIAP

All rights reserved ©, excluding advertisements and descriptions of products. The Editor does not take responsibility for the contents of advertisements, inserts, etc. The Editor reserves the right to make relevant revisions, abbreviations and adjustments to the articles. If in doubt about the proper edition of contributions, please contact the Executive Editor. Articles are reviewed.


JOURNAL of AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS
VOLUME 4, N° 2, 2010

CONTENTS

REGULAR PAPERS

3   Two dimensional model of CMM probing system
    S.H.R. Ali

8   New approach to the accuracy description of unbalanced bridge circuits with the example of Pt sensor resistance bridges
    Z.L. Warsza

16  Data fusion in measurements of angular position
    S. Łuczak

SPECIAL ISSUE SECTION

23  Brain-like Computing and Applications (Guest Editors: Tsutomu Miki, Takeshi Yamakawa)

25  The spike-timing-dependent plasticity function based on a brain sequential learning system using a recurrent neuronal network
    G. Ogata, K. Natsume, S. Ishizuka, H. Hayashi

31  A simple local navigation system inspired by hippocampal function and its autonomous mobile robot implementation
    T. Miki, H. Hayashi, Y. Goto, M. Watanabe, T. Inoue

39  Building a cognitive map using an SOM
    K. Tokunaga, T. Furukawa

48  A human robot interaction by a model of the emotional learning in the brain
    S. Sonoh, S. Aou, K. Horio, H. Tamukoh, T. Koga, T. Yamakawa

55  The study of bio-inspired robot motion control system
    T. Matsuo, T. Yokoyama, D. Ueno, K. Ishii

62  Development of antagonistic wire-driven joint employing kinematic transmission mechanism
    T. Sonoda, Y. Nishida, A.A.F. Nassiraei, K. Ishii

DEPARTMENTS

71  IN THE SPOTLIGHT
73  EVENTS


TWO DIMENSIONAL MODEL OF CMM PROBING SYSTEM

Salah H. R. Ali

Received 2nd December 2009; accepted 13th January 2010.

Abstract:
A coordinate measuring machine (CMM), as an automation technology, plays a key role in modern industry in improving measurement accuracy. Computer-controlled accurate probing is the current trend for the next generation of coordinate metrology. However, the CMM probing system is limited by its dynamic root errors, which may markedly affect its response characteristics. In this paper, the dynamic response errors of CMM measurements are analyzed. The probe stylus sizes adopted throughout the course of measurements are found to cause waviness errors during CMM operation, due both to the angle of the probe tip contact point with the specimen surface and to the radius of the stylus tip. Variations in the geometry of the stylus affect its intrinsic dynamic characteristics, which in turn cause systematic root errors in the resulting measurements. Unforeseeable geometrical errors of a CMM using a ductile touch-trigger probing system have been characterized theoretically. The results for six probe stylus tips are analyzed in order to take the effect of the dynamic root errors into account when assessing the accuracy of CMM measurements. Analytical approaches have been applied to a developed two-dimensional model (2DM) of the stylus tip to demonstrate their capability of capturing the root error concept using the strategy of the CMM ductile trigger type of probe.

Keywords: CMM, trigger probe, stylus tip, tip root errors, two-dimensional model (2DM).

1. Introduction
One challenge for advanced coordinate metrology is accurate dimensional measurement of modern engineering objects, especially in the aerospace and automotive industries. The probe is one of the systems most critical to CMM measurement accuracy. However, studies on CMMs usually cannot separate the characteristic performance of the probing system from other CMM error sources [1-9]. The contact of the stylus tip with the detected surface is the source of the electronic signals that develop the pattern on the working objects, so the performance of the overall CMM system is largely dictated by the motion precision of the probe stylus tip and its actuator. The probe stylus tip is therefore literally at the center of CMM operation and a key element of coordinate measurements.

A variety of probe designs for CMMs is available today, although most probes are compatible with most CMMs [1], [10]. The metrology engineer should understand the behavior of each type of probe. CMM probes are classified into two main categories: contact (tactile) probes and non-contact probes. As the name suggests, a contact probe gathers data by physically touching the artifact or specimen. CMM contacting probes are further divided into two specific families: hard probes, and touch-trigger or scanning probes, which maintain contact with the specimen surface during data collection [10]-[12]. The probing system of a CMM includes the stylus and the stylus tip, which have their own dynamic characteristics during measuring processes [7], [13].

1.1. CMM hard probes
To use a hard probe, the CMM operator manually brings the probe into contact with the specimen, allows the machine to settle, and manually signals the CMM to record the probe position. The CMM software processes the readings to compensate for the diameter of the probe stylus tip. CMM hard probes are available in a variety of configurations and continue to have broad application in coordinate metrology. When used in conjunction with manual CMMs, they are most frequently used to measure curved surfaces, distances between specimen features, angles, and the diameter and centerline location of bores in applications that require low to medium accuracy. Hard probes are simple to use and rugged, but their repeatability depends on the operator's touch. Because every operator has a different touch when moving and bringing the probe into contact with the specimen, this type of hard probe is not commonly used in large mass-production companies.

1.2. CMM touch trigger probe
The touch-trigger (scanning) probe is currently the most common type of probe used on CMMs. Ductile trigger probes are precision-built, touch-sensitive devices that generate an electronic digital signal each time the probe contacts a point on the specimen surface, usually indicated by an LED and an audible signal. The probe head itself is mounted at the end of one of the CMM's moving axes. It can be rotated automatically and can accommodate many different probe stylus tips and attachments. These features make the CMM trigger probe a versatile and flexible data-gathering device. Compared to the hard probe type, CMM touch-trigger probes eliminate the influence of operator touch on measuring results. They can be fitted on direct computer numerical control CMMs (CNC-CMMs) as well as manual CMMs [10]-[11]. An improvement on the basic touch-trigger probe design incorporates piezo-based sensors to translate the deflection of the probe


into a constant digital acoustic signal that is recorded by the CMM. This design improves measurement accuracy by eliminating the effect of stylus bending (caused by force variations when the touch-trigger probe contacts the specimen) and the inaccuracies caused by the probe's internal electromechanical parts. The last element in the probe stylus is the tip.

In practice, the measured waviness of a part deviates from the desired value owing to many quasi-static systematic errors, such as the inherent geometric error of the measuring probe tip, thermally induced distortions of machine and probe elements, and errors arising from the static deflection or stiffness of the machine-fixture-specimen-probe system under the touch force [1], [9]. Measurement accuracy is commonly determined by the kinematic accuracy of the CMM probe, and a large portion of machines are used with low kinematic accuracy. Software-based error compensation is a method for anticipating the combined effect of all of the above factors on a standard precise and accurate spherical artifact and suitably modifying the conventionally designed probe tip scanning trajectory. Considerable research work has been reported on improving the kinematic accuracy of the CMM probe, but it is too sophisticated to implement, and few programs focus on changing the CNC program to compensate for the probe error [6], [14]. When a cylindrical surface is generated, its profile often deviates concavely from the ideal profile, which creates the need to measure the surface profile during the measuring process with a suitable CMM strategy. It is therefore very difficult to separate and study these types of errors clearly in practice. In such cases the CMM is a reliable tool for verification, and the dimensional and geometrical waviness form-deviation accuracy for a selected surface profile needs to be determined theoretically. The probe stylus tip path is updated during measurement based on the parameters measured by newly developed techniques; thus, a high-quality inner cylindrical surface measurement has been successfully generated using software compensation. The geometrical form and undulation of spare parts for machinery and mechanical equipment play an important and active role in the technological applications of industrial metrology. During the stages of assembly and operation of mechanical systems, the ruby ball tip of the stylus used to measure deviations in geometrical features always requires thorough testing to achieve high measurement accuracy, especially when measuring the deviations of difficult forms and their waviness in the three directions (X, Y and Z).

Recently, both theoretical analyses and experimental studies have pointed to the triggering point as the main source of probe errors [1]-[6], [8], [12], [14-16]. Unfortunately, this area needs more dynamic analysis to understand stylus response error sources according to the design and construction of CMMs, especially new CMM machines [17]. The error caused by probe lobing has become a significant component of the total system error. However, most studies on scanning CMMs cannot separate the performance of the probing system from other error sources of the CMM [1], [8]. Since CMM trigger probes are precision equipment in themselves, their performance should be studied separately from the rest of the components of the CMM in order to characterize their behavior and improve measurement accuracy. This principle of operation effectively triggers the probe at a constant force regardless of the contact area between the probe stylus tip and the measured specimen.
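The tip-diameter compensation mentioned in Section 1.1 can be illustrated with a short sketch. This is a generic version of the idea, not the algorithm of any particular CMM vendor; the function name and the assumption of a known unit surface normal are ours:

```python
import math

def compensate_tip_radius(center, normal, R):
    """Shift a measured ball-center point onto the part surface.

    Generic sketch of tip-radius compensation: the touch point lies one
    ball radius R away from the recorded ball center, along the surface
    normal (assumed known, e.g. from the nominal geometry).
    """
    norm = math.sqrt(sum(c * c for c in normal))  # length of the normal vector
    return tuple(c - R * n / norm for c, n in zip(center, normal))

# Ball of radius 2 mm touching a flat surface whose outward normal is +Z:
print(compensate_tip_radius((10.0, 5.0, 2.0), (0.0, 0.0, 1.0), 2.0))  # (10.0, 5.0, 0.0)
```

When the assumed normal differs from the true surface normal, as on the sloped surfaces analyzed in Section 2, this correction is imperfect; that residual is the kind of error the 2DM below quantifies.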

2. Mathematical Model
Since the influence of some of the unforeseeable factors affecting probe inaccuracy can be small, an accurate analytical model is required for the analysis. For this investigation, a new two-dimensional model (2DM) has been used to present the root error due to the ball tip size of the CMM probe stylus during measurement operations.

2.1. Stylus ball tip error
During scanning, all CMM touch probes used in coordinate measurement have natural ball tip errors [14]. Suppose a 2DM in which the stylus ball is held steady in a horizontal position, so that only X-axis and Y-axis translational movement of the stylus is possible. Assuming no deformation of the stylus tip ball and no deformation of the surface under test, the 2DM can be presented as in Figs. 1 and 2.

Figure 1 shows the measurement principle of the proposed system, which includes the contact points 1, 2 and 3 indicated on the vertical and horizontal planes of the probe stylus tip, with stylus length l and ball tip radius R at predicate angle φ. Figure 2 shows that, due to the finite size of the probe stylus ball tip, the contact point on a cylindrical surface will not always lie along the stylus axis, but rather at some point on the side of the ball where the test surface and the stylus tip ball slope angle match horizontally.

Because the ball does not touch the test artifact specimen along the same stylus slope angle, there will be an error in the measured length for any measurement point where the test part surface slope angle (φ) is not zero degrees. The error E and the different possible positions of the ball tip are shown in Fig. 2. Case d is at a higher surface slope (180°) and thus has a larger measurement error E, while case a is located at a lower surface slope and has a smaller measurement error E. Therefore, to get the exact location of point n on the sloped surface, an error E, reduced by the value Y_E, is made, due to the fact that the position of point t on the stylus tip ball is captured every time. From Figure 2, the values E and Y_E can be expressed as follows:

Fig. 1. Horizontally placed probe stylus ball tip of radius (R).


Y_E = R - r    (1)

Y_E = R - (R cos φ) = R (1 - cos φ)    (2)

where the distance between the points t and n indicates the error E in the Y direction, while Y_E is the relative distance between the points m and t in the Y-direction for the large scale of the tip ball in case b. From the adopted 2DM it can be stated that the measurement errors E and Y_E arise only along the Y-axis and depend on the surface slope. At point 2 or 3 (according to Figure 1, where the matching angle φ = 180°), the errors E and Y_E take their maximal values, while the errors E and Y_E equal zero only when scanning flat surfaces (φ = 0°, 360°), when the points m, n and t overlay each other (m = n = t), as shown in case a.

The root error due to the CMM stylus ball tip cannot be neglected. Six different stylus balls with radii R = 4.0, 3.0, 2.5, 2.0, 1.5 and 1.0 mm have been selected according to actual common use. A relative variation in the Y direction (Y_E) of the stylus ball can be observed according to the surface slope degree, called the matching angle (φ), Fig. 3.

Fig. 2. Size of error E related to the probe tip ball radius (R) and surface slope degree (φ).

Fig. 3. Relative error of the probe ball tip at different surface slope angles.

3. Results and Discussions
Through the application of the accurate analytical 2DM proposed for the measurement of cylindrical parts using CMMs, two types of systematic unforeseeable errors emerged. The first error results from increasing the surface slope angle, while the second results from increasing the radius of the stylus tip ball. Fig. 3 shows how the margins of error are calculated theoretically against the surface slope degree (φ) of the probe tip during contact through 360°, using a complete cylindrical reference artifact for six different tip radii. It is observed that the amount of error in the Y-direction starts from zero for each probe tip at the beginning of contact at 0°, increases incrementally with the inclination angle of the tip contact point on the artifact to reach its maximum relative value of 2R% at 180°, and then declines back to zero at 360°. This means that the rotational motion occurring during probe tip scanning, due to creeping of the tip at the base of the probe as it vibrates against the surface coming into contact with the cylindrical part, also generates another, regular error.

Hence, it can be concluded that this mathematical two-dimensional model is capable of revealing two different systematic errors in the movement of the probe during CMM scanning. The first is a consequence of the creeping tip, while the second is a result of increasing the radius of the tip, with the error rate increasing to a maximum value


of 2R% during the measurement at the orthogonal point (180°) of the cylindrical artifact. The maximum relative amounts of these root errors range over 8, 6, 5, 4, 3 and 2% at the same matching point (180°) for ball tip radii of 4.0, 3.0, 2.5, 2.0, 1.5 and 1.0 mm respectively, as shown in Fig. 3.

In other words, figure 4 helps to conclude that a small probe tip of 1.0 mm can be better used to diagnose the true state of the surface form of specimens than bigger tip radii of 1.5, 2.0, 2.5, 3.0 and 4.0 mm respectively. This is because probe tips of large radii touch a large contact area on the inner surface of the standard artifact used, and vice versa. In this case, the distortion of the measurement result using the 1.0 mm probe tip becomes more visible and gives a better estimate of the measured feature profile compared to the results of the 4.0 mm probe tip.

Fig. 4. Scheme of the probe tip scanning path during the measuring process.

4. Conclusion
The proposed accurate analytical two-dimensional model (2DM) measurement technique can be used for its capability to present two types of systematic unforeseeable errors of the probe during the CMM scanning process. The first error is due to the surface slope degree of the probe tip during contact with the detected surface of the cylindrical artifact in the Y-direction, with zero error where the tip begins to rotate, a maximum error value of 2R% at the orthogonal axis point at 180°, and a return to zero error at 360°. This error always occurs through ball tip rotation during the scanning process, as a result of creeping of the probe tip while touching the measured surface. The second error results from increasing the radius of the probe stylus tip ball. From the analysis of the results, the following can be concluded:

• Increasing the probe tip radius decreases the averaged measured error signals of surface waviness; this may be due to the large number of contact points of a small tip on the artifact during the scanning trajectory. It is clear that the probe stylus tip during scanning has a significant influence on the accuracy of CMM measurements using the touch-trigger probe strategy.

• From the results obtained, an easy calibration and correction technique for the probe performance accuracy of CMM measurements can be developed, which can be built experimentally upon both the surface form and the probe stylus characteristics.
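The error curve described above follows directly from Eq. (2). A short numerical check (our own sketch, with R in mm and the maximum read as the 2R% figure used in the paper) reproduces the stated behavior for the six tip radii:

```python
import math

def tip_error(R, phi_deg):
    """Stylus ball tip error of Eq. (2): Y_E = R * (1 - cos(phi)).

    R is the ball tip radius in mm, phi_deg the surface slope (matching)
    angle in degrees.
    """
    return R * (1 - math.cos(math.radians(phi_deg)))

for R in [4.0, 3.0, 2.5, 2.0, 1.5, 1.0]:
    # Zero at 0 and 360 degrees, maximum 2R at 180 degrees (read as 2R% in Fig. 3).
    assert abs(tip_error(R, 0.0)) < 1e-9
    assert abs(tip_error(R, 180.0) - 2 * R) < 1e-9
    assert abs(tip_error(R, 360.0)) < 1e-9
```

For R = 4.0 mm this gives tip_error(4.0, 180.0) = 8.0, matching the 8% maximum quoted for the largest tip.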

AUTHOR
Salah H. R. Ali - Engineering and Surface Metrology, Length and Precision Engineering Division, National Institute for Standards (NIS), Giza (12211), PO Box 136, Egypt. Mobile: 0020-126252561, E-mail: [email protected] or [email protected].

References
[1] Hermann G., "Geometric error correction in coordinate measurement", Acta Polytechnica Hungarica, vol. 4, no. 1, 2007, pp. 47-62.
[2] Krajewski G., Woźniak W., "One dimensional kinetic model of CMM passive scanning probes", Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 3, no. 4, 2009, pp. 172-174.
[3] Jae-jun Park, Kihwan Kwon, Nahmgyoo Cho, "Development of a coordinate measuring machine (CMM) touch probe using a multi-axis force sensor", Meas. Sci. Technology, vol. 17, 2006, pp. 2380-2386.
[4] Woźniak A., Dobosz M., "Influence of measured objects parameters on CMM touch trigger probe accuracy of probing", Precision Engineering, Elsevier Inc., vol. 29, issue 3, 2005, pp. 290-297.
[5] Kasparaitis A., Sukys A., "Dynamic errors of CMM probes", Diffusion and Defect Data. Solid State Data, Part B, ISSN 1012-0394, vol. 113, 2006, pp. 477-482.
[6] Wu Y., Liu S., Zhang G., "Improvement of coordinate measuring machine probing accessibility", Precision Engineering, vol. 28, 2004, pp. 89-94.
[7] Yagüe J.-A., Albajez J.-A., Velázquez J., Aguilar J.-J., "A new out-of-machine calibration technique for passive contact analog probes", Measurement, Elsevier Ltd., vol. 42, 2009, pp. 346-357.
[8] Woźniak A., Mayer J. R. R., Bałaziński M., "Stylus tip envelop method: corrected measured point determination in high definition coordinate metrology", Int. Journal of Adv. Manuf. Technol., Springer, vol. 42, 2009, pp. 505-514.
[9] Ali S.H.R., "The influence of fitting algorithm and scanning speed on roundness error for 50 mm standard ring measurement using CMM", Int. Journal of Metrology & Measurement Systems, Polish Academy of Sciences, Warsaw, Poland, vol. 15, no. 1, 2008, pp. 31-53.
[10] Genest D. H., "The right probe system adds versatility to CMMs", available at: http://www.qualitydigest.com/jan97/probes.html
[11] Zeiss Calypso Navigator, CMM operation instructions and training manual, Revision 4.0, Germany, 2004.
[12] Dobosz M., Woźniak A., "CMM touch trigger probes testing using a reference axis", Precision Engineering, Elsevier Inc., vol. 29, issue 3, 2005, pp. 281-289.
[13] Yagüe J.-A., Albajez J.-A., Velázquez J., Aguilar J.-J., "A new out-of-machine calibration technique for passive contact analog probes", Measurement, Elsevier Ltd., vol. 42, 2009, pp. 346-357.



[14] Lin Y. C., Sun W. I., "Probe radius compensated by the multi-cross product method in free form surface measurement with touch trigger probe CMM", Int. Journal of Adv. Manuf. Technol., Springer, vol. 21, 2003, pp. 902-909.
[15] Li L., Jung J.-Y., Lee Ch.-M., Chung W.-J., "Compensation of probe radius in measuring free-formed curves and surfaces", Int. Journal of the Korean Society of Precision Engineering, Springer, vol. 4, no. 3, 2003.
[16] Xiong Z., Li Z., "Probe radius compensation of workpiece localization", ASME Transactions, vol. 125, February 2003, pp. 100-104.
[17] Zhao J., Fei Y. T., Chen X. H., Wang H. T., "Research on high-speed measurement accuracy of coordinate measuring machines", Journal of Physics: 7th Int. Symposium on Measurement Technology and Intelligent Instruments, Conf. series 13, 2005, pp. 167-170.



NEW APPROACH TO THE ACCURACY DESCRIPTION OF UNBALANCED BRIDGE CIRCUITS WITH THE EXAMPLE OF Pt SENSOR RESISTANCE BRIDGES

Zygmunt Lech Warsza

Received 28th April 2009; accepted 24th September 2009.

Abstract:
After a short introduction, the transfer coefficients of the unloaded four-arm bridge of arbitrarily variable arm resistances, supplied by a current or voltage source, are given in Table 1. Their error propagation formulas are found, and two rationalized forms of accuracy measures are introduced: one related to the initial bridge sensitivities, and one of double-component form, as the sum of the zero error and the increment error of the bridge transfer coefficients. Both forms of the transfer coefficient measures of the commonly used bridge, with similar initial arm resistances in balance and different variants of their joint increments, are given in Table 3. As an example, the limited errors of some resistance bridges with platinum Pt100 industrial sensors of class A and B are calculated (Table 4) and analyzed. The presented approach is discussed and found to be a universal solution for all bridges, and also for any other circuits used for parametric sensors.

Keywords: resistance bridge, sensor, measures of accuracy, error, uncertainty of measurements.

1. Introduction
This paper is based on the author's earlier proposals given in papers [1], [6]-[8]. As pointed out there, a generalized accuracy description of the 4R bridge of arbitrarily variable arm resistances did not exist in the literature. Only some considerations of the accuracy of bridges with sensors of very small increments have been found in [9], [10]. A generalized description is urgently needed mainly for:
- initial conditioning circuits of analogue signals from broadly variable immittance sensor sets,
- identification of the changes of several internal parameters of the equivalent circuit of an object working as a twoport X, when it is measured from its terminals for testing, monitoring and diagnostic purposes.

Near the bridge balance state, the application of relative errors or uncertainties is useless, as they rise to ±∞. In [1], [6] this obstacle was bypassed by relating the absolute value of any bridge accuracy measure to the initial sensitivity of the current-to-voltage or voltage-to-voltage bridge transfer function. These sensitivities are valuable reference parameters, as they do not change within the range of the bridge imbalance. In paper [7] a new double-component approach to describing the bridge accuracy is developed. It has the form of a sum of the initial-stage and bridge-imbalance accuracy measures. Such a double-component method of describing accuracy is commonly used for broad-range instruments, e.g. digital voltmeters. The relation of each component to the accuracy measures of all bridge arm resistances has been developed. As an example, formulas of the accuracy measures of two bridges used for industrial Pt sensors will be presented and their limited errors calculated.

2. Basic formulas of bridge transfer functions
The four-resistance (4R) bridge circuit with terminals ABCD, working as a passive twoport of type X with variable internal resistances, is shown in Fig. 1.

In measurements, the ideal supply of the bridge by current I_AB = J = const. (if R_G = ∞) or by voltage U_AB = E = const. (when R_G = 0) is commonly used. The output is also unloaded, i.e.: R_L = ∞, I_DC = 0. For single-variable measurements it is enough to know the changes of one bridge terminal parameter, and the open-circuit output voltage U_DC is mostly used. With the notations of Fig. 1, formulas (1), (2) of U_DC are given in Table 1 [2].

Fig. 1. Four-arm bridge as the unloaded twoport of type X with the voltage or current supply source branch.

Journal of Automation, Mobile Robotics & Intelligent Systems, VOLUME 4, N° 2, 2010

Table 1. Open circuit bridge voltage and its transfer functions.

a) current supply   b) voltage supply


where:
I_AB, U_AB - the current or voltage on the bridge supply terminals A, B;
r_21, k_21 - the transfer functions of the bridge with open-circuited output, i.e. the current-to-voltage transfer function (transfer resistance) and the ratio of two voltages;
- the initial bridge open-circuit sensitivities of r_21 and of k_21;
R_Σ - the sum of the bridge arm resistances; ΔR_Σ, R_Σ0 - its increment and initial value;
R_i = R_i0(1 + ε_i) - the arm resistance of initial value R_i0, with absolute increment ΔR_i and relative increment ε_i = ΔR_i / R_i0;
f(ε_i) - the normalized bridge imbalance function of r_21 and of k_21;
- the increments of the numerator and of the denominator of the function f(ε_i).

The output voltage U_DC may change its sign for some sets of arm resistances. If the transfer function r_21 = 0 or k_21 = 0, the bridge is in balance, and from (3) and (4) the balance conditions for both supply cases are the same:

R_1 R_3 = R_2 R_4    (5)

The balance of the bridge can occur for many different combinations of R_i, but the basic balance state is defined for all ε_i = 0, i.e. when:

R_10 R_30 = R_20 R_40

Formulas of the bridge terminal parameters are simplified by referencing all resistances to their initial values in balance, i.e. R_i = R_i0(1 + ε_i), and by referencing the initial resistances to that of the first arm, i.e.: R_20 = m R_10, R_40 = n R_10, and from (5) R_30 = m n R_10. The bridge transfer functions can also be normalized, as is shown in Table 1.
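The balance condition (5) can be checked numerically with the standard open-circuit output relation of an ideally voltage-supplied, unloaded four-arm bridge. This is a textbook formula consistent with the balance condition above, not a transcription of Table 1; the function name is ours:

```python
def bridge_output_voltage(E, R1, R2, R3, R4):
    """Open-circuit output U_DC of a four-arm bridge with ideal voltage supply E.

    Standard unloaded-bridge relation:
        U_DC = E * (R1*R3 - R2*R4) / ((R1 + R2) * (R3 + R4))
    so U_DC = 0 exactly when R1*R3 == R2*R4, i.e. condition (5).
    """
    return E * (R1 * R3 - R2 * R4) / ((R1 + R2) * (R3 + R4))

# Balanced bridge (R1*R3 == R2*R4): zero output.
print(bridge_output_voltage(10.0, 100.0, 100.0, 100.0, 100.0))  # 0.0
# A +1% increment of one arm unbalances the bridge and gives a small output.
print(bridge_output_voltage(10.0, 101.0, 100.0, 100.0, 100.0))
```

Note how U_DC tends to zero as the bridge approaches balance, which is exactly why relative errors referenced to U_DC rise to ±∞ there and the accuracy measures of this paper are referenced instead to the initial sensitivities.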

3. Accuracy description of broadly variable resistances

The accuracy of measurements depends in a complicated way on the structure of the instrumentation circuit, on the values and accuracy of its elements, and on various environmental influences of natural conditions and of the neighbouring equipment. Two types of problems are met in practice:

- description of circuits and measurement equipment by instantaneous and limited values of systematic and random errors, absolute or relative ones, as well as by statistical measures of these errors,
- estimation of the measurement result uncertainty, mainly by the methods recommended by the GUM guide.

Measures of accuracy (errors, uncertainty) of a single value of a circuit parameter are expressed by numbers; those of a variable parameter, by functions of its values. In both cases they depend on the equivalent scheme of the circuit, on the environment, and on the parameters of the instrumentation used, or to be used, in the experiment.

The measures of a broadly variable resistance, e.g. of a stress or temperature sensor, can be expressed by two components: one for its initial value and one for its increment, as shown by the formulas of Table 2.

The instantaneous absolute error and its relative value referenced to Ri0 are given by formulas (6) and (7), the relative limited error |δRi| of the worst case of values and signs of |δ0| and |δε| by (8), and the standard statistical measure for random errors or uncertainties by (9).
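The two-component description can be illustrated with a short sketch. The exact Table 2 formulas are not legible in this copy, so the worst-case and RSS combinations below are one plausible reading of formulas of the type (8) and (9), stated as assumptions:

```python
import math

def worst_case_rel_error(eps, d0, de):
    # Limited (worst-case) relative error of R = R0*(1 + eps), referenced to
    # the initial value R0: the zero error |d0| plus the increment error |de|
    # weighted by the relative increment eps, with adverse signs assumed.
    return abs(d0) + abs(eps) * abs(de)

def standard_uncertainty(eps, u0, ue, k=0.0):
    # Statistical counterpart: RSS combination of the zero and increment
    # components, with correlation coefficient k in [-1, 1].
    return math.sqrt(u0**2 + (eps * ue)**2 + 2.0 * k * eps * u0 * ue)
```

For k = 0 the components combine quadratically; for k = 1 the expression collapses to the arithmetic sum u0 + eps*ue, i.e. to the worst-case form.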

Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 4, N° 2 2010

Articles 9


Table 2. Two-component formulas of the sensor resistance accuracy measures.


If the errors of the increment and of the initial value of the resistance are statistically independent, then the correlation coefficient k = 0; if they are strictly related to each other, then k = ±1. The exact value can only be found experimentally. From (8) it follows that the borders of the worst cases |δRi| of the possible values depend nonlinearly on εi, even if |δ0| and |δε| are constant [1], [6] - [8].

The distribution of the initial values and relative increments of the resistances of a set of sensors depends on the data obtained in the production process. Their actual values also depend on the influence of the environmental conditions.

Instantaneous values of the measurement errors of the bridge transfer functions t21 and k21 result from the total differential of the analytical equations (3) and (4) from Table 1.

After ordering all components, the error of t21 is:

(10)

where: wi - weight coefficients of the error components; j = 3, 4, 1, 2 - subscripts of the opposite arms for i = 1, 2, 3, 4; the multiplier (-1)^(i+1) is +1 if i is 1, 3 and -1 if i is 2, 4.

If the resistances are expressed as Ri = Ri0(1 + εi), formula (10) becomes:

(10a)

The absolute error of the transfer function k21 has other forms, i.e.:

(11)

or

(11a)

From (10) and (11) one can see that if the errors of neighbouring bridge arms, e.g. 1, 2 or 1, 4, have the same sign, they partly compensate each other.
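The weight coefficients of (10) and the sign pattern just described can be checked numerically. The sketch below assumes the current-supplied open-circuit transfer function t21 = (R1 R3 − R2 R4)/ΣRi and approximates the partial derivatives by central differences:

```python
def t21(R):
    # Assumed current-supplied open-circuit bridge transfer function.
    R1, R2, R3, R4 = R
    return (R1 * R3 - R2 * R4) / (R1 + R2 + R3 + R4)

def weights(R, h=1e-6):
    # Numerical partial derivatives dt21/dRi: the weight coefficients of the
    # absolute arm errors in the total differential of t21.
    w = []
    for i in range(4):
        Rp = list(R); Rp[i] += h
        Rm = list(R); Rm[i] -= h
        w.append((t21(Rp) - t21(Rm)) / (2 * h))
    return w

w = weights((101.0, 100.0, 100.0, 100.0))
# Arms 1 and 3 enter with positive sign, arms 2 and 4 with negative sign,
# so same-sign errors of neighbouring arms partly cancel in t21:
print([round(x, 4) for x in w])
```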

If the errors of the resistances are expressed, as in (7), by their initial errors and incremental errors, then

(12)

where: (12a)

If an arm resistance is constant, εi = 0 and δεi = 0, but the weight coefficient of its component in (12) still depends on the other arm increments εj. In the initial balance state, i.e. when all arm increments εi = 0, the nominal transfer function t21(0) = 0, but the real resistances have some initial errors, and usually δ210 ≠ 0 and t21 ≠ 0.

(13a)

(13b)

where:

Relative errors are preferable in measurement practice, but it is not possible to use them for transfer functions near the bridge balance, as the ratio of the absolute error and the nominal value of t21 (or, for the voltage-supplied bridge, of k21) rises to infinity. Then other possibilities should be applied. There are two possible ways to describe the accuracy of the bridge transfer function t21 (or k21) in the form of one or of two related components:

- the absolute error of the bridge transfer function may be referenced to the initial sensitivity factor t0 of t21 (or to k0 of k21) or to the range t21max - t21min (or k21max - k21min) of the transfer function;


4. Description of the accuracy of bridge transfer functions



- the initial error has to be subtracted from the whole error, and then the accuracy can be described by two separate terms: one for zero and one for the transfer function increment, as is common for digital instrumentation.

In the first type method it is preferable to reference the error to the initial sensitivity factor t0, which is constant for each bridge, rather than to the full range of t21, as the range may change. Such a relative error can be presented as the sum:

(14)

where: δ210 - the initial (or zero) relative error of t21; δε21 - the relative error of the normalized imbalance function, also referenced to t0.

The error δ210 is similar for any mode of the supply source equivalent circuit of the bridge as a twoport.

The zero of the bridge may be corrected in different ways: by adjustment of the bridge resistances, by an opposite voltage on the output, or by digital correction of the converted output signal. In such cases, from (14):

(15)

From (15) it follows that the error of the increment related to t0 depends not only on the increment errors of the resistances but also on their initial errors, even when the initial error of the whole bridge δ210 = 0, because after (12a) the weight coefficients in (15) depend on εi. The component of a particular initial error disappears only when εi = 0. These functions may be approximated over some intervals by constant values.

In the second type method, the absolute error of the transfer function (12), after subtracting its initial value, is:

(16)

After referencing it to the increment of t21, and substituting from (12a):

(17)

where:

(17a,b)

The weight coefficients (17a,b) are finite for any value of ε, including ε = 0, because if all εi → 0 then the increment of t21 → 0 as well. The error δε21 is equivalent to the error of the resistance increment in formula (7). From (13a) and (17) one obtains:

- for the current-to-voltage transfer function: (18a)
- and similarly for the voltage transfer function: (18b)


where the first terms are the absolute errors of the initial values, e.g. of t210 or k210, and the second ones are the relative errors of the increments of t21 or k21 from the initial state.

The two-component accuracy equation of the transfer function k21 was found in the same way as for t21.

Actual values of the instantaneous errors of t21 or k21 can be calculated only if the signs and values of the errors of all resistances are known. In reality this happens very rarely. More frequently used are their limited systematic errors (of the worst case) and statistical standard-deviation measures. Formulas of these accuracy measures of t21 or k21 can be obtained by transformation of the error formulas (10) - (18a,b). All these accuracy measures can be found in one-component or two-component forms. One-component formulas for the arbitrary and the main particular cases of the 4R bridge are given in tables in [1], [6], [7].

The two-component method of representing the accuracy of the bridge transfer function, separately for its initial value (e.g. equal to zero) and for the increment, is similar to the unified one used for digital instruments and for broad-range sensor transmitters. It is especially valuable if the zero of the measurement track is set manually or automatically. Absolute measures can also be transformed, by the linear or nonlinear characteristic of the sensor set, to the units of any particular measurand, e.g. in the case of platinum sensors, to °C [6], [8].

In measurement practice, the bridges most commonly used for sensors are four-arm bridges of resistances equal in the balance state. Formulas of the accuracy measures of the transfer functions t21 and k21 of these particular resistance bridges are much simpler than the general ones [6]. They are presented in Table 3 under the assumption that all correlation coefficients kij = 0. Formulas of k21 and its errors are given mainly for comparison, as current supply is the preferable one for resistance sensors. The transfer function t21 or k21 of a bridge including differential sensors of four or two increments, and the transfer function for two equal increments in opposite arms, may be linear, but their accuracy measures depend on ε in different ways. From Table 3 it is possible to compare the accuracy measure formulas of the current- or voltage-supplied 4R bridges of few-element and single-element sensors. For example, formulas of the accuracy measures of two similarly variable opposite arm resistances are simpler than those for a single variable arm. The form of the limited error |δt21| is similar for the four linear A-D bridges, and that of |δk21| only for two of them: A and B.

A good example of broadly variable resistance sensors are the platinum sensors Pt 100 of classes A and B, commonly used in industrial temperature measurements. Tolerated differences from their nominal characteristic are given in the standard EN 60751+A2:1997. They are expressed in °C or as permissible resistance values in Ω (see the tolerances of classes A and B in Fig. 2). The characteristic of class A sensors is determined up to 650 °C and, for the less accurate class


5. Description of accuracy measures of particular 4R bridges mostly used for sensors

6. Tolerances of industrial Pt sensors


Table 3. Accuracy measures of the most common open-circuit 4R bridges with all arm initial resistances similar, ΣR0 = 4R10.


B, up to 850 °C. The initial limited errors |δ10| of both classes are 0.06% and 0.12%, respectively.

On the basis of the nominal characteristic of Pt 100 sensors, the maximum limited error |δ1| = |δ10| + |δε1| of both classes is calculated, where |δε1| is the ratio of the tolerances |ΔRmax| and the increments ΔRi of the sensor resistance [7].

The obtained values are given in Fig. 2. They change only slightly and can be approximated by a single value related to the maximum or mean value of the temperature range of each sensor. In the full range of positive Celsius temperatures the limited error |δε| does not exceed 0.2% for class A and 0.5% for class B.
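These figures can be reproduced numerically. The sketch below assumes the EN 60751 class tolerances (±(0.15 + 0.002|t|) °C for class A, ±(0.3 + 0.005|t|) °C for class B) and the standard coefficients of the nominal Pt characteristic for t ≥ 0:

```python
def pt100_resistance(t):
    # Nominal Pt100 characteristic for t >= 0 degC: R(t) = R0*(1 + A*t + B*t^2)
    A, B, R0 = 3.9083e-3, -5.775e-7, 100.0
    return R0 * (1.0 + A * t + B * t * t)

def tolerance_degC(t, sensor_class):
    # Permissible deviation from the nominal characteristic (EN 60751):
    return {"A": 0.15 + 0.002 * abs(t), "B": 0.30 + 0.005 * abs(t)}[sensor_class]

def rel_increment_error(t, sensor_class):
    # Limited relative error of the resistance increment: the tolerance
    # converted to ohms via the local slope, divided by R(t) - R0.
    A, B, R0 = 3.9083e-3, -5.775e-7, 100.0
    slope = R0 * (A + 2.0 * B * t)                    # dR/dt in ohm/degC
    return tolerance_degC(t, sensor_class) * slope / (pt100_resistance(t) - R0)

print(round(pt100_resistance(600.0) / 100.0 - 1.0, 3))    # 2.137
print(round(100.0 * rel_increment_error(600.0, "A"), 2))  # 0.2 (percent)
```

The relative increment of 2.137 at 600 °C and the class A limit of ca. 0.2% agree with the values quoted in the text.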

7. Limited errors of the 4R bridge with single industrial Pt 100 sensors

Limited errors |δ21|, |δ210| and |δε21| of the transfer function of the 4R bridge with a single industrial sensor of class A or B have been calculated from the formulas of Table 3. It was assumed that the limited errors of the constant bridge arms are equal and not higher than the sensor initial error |δ10|, that the balance is at 0 °C, and that the current of the supply source is stable enough or that the ratio of the output signal and this current is measured. The maximum temperature range (0 - 600) °C is taken for the calculations, and for it the relative

Fig. 2. Tolerances |Δ| and maximum limited relative errors |δε| of temperature sensors Pt 100 type A and B, evaluated from their standard [7].

Table 4. Limited errors of a few cases of the current-supplied 4R bridge with a single resistor sensor, e.g. Pt 100 type A or B.


increment of the sensor resistance is εmax = 2.137. As an example, numerical formulas of the limited errors of the class A sensor bridge are also estimated. Limited errors of the class B sensor bridge have been estimated similarly. To clarify the considerations, the lead resistances are taken as negligible. All results are presented in Table 4.

In Table 4 five different cases of the measuring circuit are considered, i.e.: the bridge without any adjustments, outer and internal zero setting, and negligible initial errors of the constant arms only or of the sensor arm as well. The ratio of the limited errors of the bridge without adjustments and of the Pt sensor is 1.7 for class A and 2.9 for class B.

If the errors of the bridge resistances are negligible (line 5), the limited error is only slightly higher than for the sensor, but if the initial sensor error is also adjusted, then the bridge transfer function error is even smaller than that of the sensor itself (line 1). Results for examples 2, 3 and 4 lie between those of lines 1 and 6.

For comparison: the relative limited error of the output voltage of a bridge including two similar Pt 100 sensors of class A in opposite arms, calculated from line 3 of Table 3 for the same temperature range, does not exceed 0.51% without null correction, and 0.39% if it is corrected. These errors are calculated for the twice higher signal than for the single-sensor bridge. They are slightly higher, but the signal depends linearly on the resistance increment, equal for both sensors. For lower temperature ranges, relative limited errors and type B uncertainties become higher.

Two methods of describing the accuracy measures of arbitrary imbalanced sensor bridges have been presented together and compared, i.e.:

- a one-component accuracy measure related to the initial sensitivity of the bridge transfer functions, given before in [1], [6], [7], and
- the new double-component one, with separately defined measures for zero and for the transfer function increment [8].

The second one is similar to that used for broad-range instruments, e.g. digital voltmeters. Accuracy measures of the bridge arms are defined for the initial resistances and for their increments. These methods are therefore independent of the characteristic of the sensor versus the measured quantity.

These methods are discussed using a few examples of 4R bridges of equal initial resistances, supplied by a current or voltage source and with single-, double- and four-element sensors.

The given formulas allow finding the accuracy of the 4R bridge, or the uncertainty of measurements with bridge circuits, if the actual or limited values of the errors, or the standard statistical measures, of their resistances and sensors are known.

Formulas of the general and particular cases of the bridge may be used for computer simulation of the accuracy of various sensor bridges and of measured objects of the X twoport equivalent circuit in different circumstances.

Systematic errors can be calculated like random ones for a set of sensor bridges in production or in exploitation; if the correlation coefficients are small, the obtained values should be smaller than the worst-case limited errors.
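Such a simulation can be sketched as follows. The current-supplied open-circuit transfer function t21 = (R1 R3 − R2 R4)/ΣRi and uniformly distributed initial errors are assumptions of this illustration, not the paper's exact procedure:

```python
import math
import random

def t21(R1, R2, R3, R4):
    # Assumed current-supplied open-circuit bridge transfer function.
    return (R1 * R3 - R2 * R4) / (R1 + R2 + R3 + R4)

def output_spread(delta0=0.001, n_trials=20000, seed=1):
    # Standard deviation of the near-balance output of a 4R bridge whose
    # 100-ohm arms carry independent initial errors, uniform within +/-delta0.
    rng = random.Random(seed)
    outs = []
    for _ in range(n_trials):
        R = [100.0 * (1 + rng.uniform(-delta0, delta0)) for _ in range(4)]
        outs.append(t21(*R))
    mean = sum(outs) / n_trials
    return math.sqrt(sum((x - mean) ** 2 for x in outs) / (n_trials - 1))

# Worst case: all four 0.1% errors adverse, t21 of about 100 ohm * delta0:
worst_case = 0.1
print(output_spread() < worst_case)   # True: statistical spread is smaller
```

As the text states, with small correlations the statistical spread stays well below the worst-case limited error.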

Formulas similar to those presented in this and other papers, e.g. [1], [4], [6], [7], could be formulated for any types of


8. General conclusions

impedance sensor circuits, such as DC and AC bridges of single and double supply, active bridges linearized by feedback or multipliers, the Anderson loop (developed at NASA), and impedance converters with virtual DSP processing. The methods of simplifying their accuracy description given in this paper could also be applied in many industrial measurements.

The accuracy of current- and voltage-supplied strain bridges has been analyzed by M. Kreuzer [9], [10], but such a unified approach as given above to the accuracy description of unbalanced bridges and other circuits of broadly variable parameters, developed in [1], [4], [6] - [8], has not been found so far in the literature.

The presented method is also valuable for accuracy evaluation in testing any circuit from its terminals as a twoport, which is commonly used in diagnostics and in impedance tomography. It was also used to describe the accuracy of two-parameter bridge measurements; see [3] - [5].

AUTHOR
Zygmunt Lech Warsza* - Polish Metrological Society, Warsaw, Poland. E-mail: [email protected].

References
[1] Warsza Z. L., "Four-Terminal (4T) Immittance Circuits in Multivariable Measurements" (Immitancyjne układy czterobiegunowe (4T) w pomiarach wieloparametrowych), Monograph of Industrial Institute of Control and Measurement PIAP, Warszawa 2004 (in Polish).
[2] Sydenham P. H., Thorn R. (eds.), Handbook of Measuring System Design, chapters 126: "Electrical Bridge Circuits - Basic Information" and 127: "Unbalanced DC bridges" by Warsza Z. L., John Wiley & Sons, Ltd., New York 2005, pp. 867-889.
[3] Warsza Z. L., "Two Parameter (2D) Measurements in Four Terminal (4T) Impedance Bridges as the New Tool for Signal Conditioning", Parts 1 and 2. In: Proc. of the 14th International Symposium on New Technologies in Measurement and Instrumentation and 10th Workshop on ADC Modeling and Testing IMEKO TC-4, Gdynia/Jurata, 2005, pp. 31-42.
[4] Warsza Z. L., "Two Parameter (2D) Measurements in Double-current Supply Four-terminal Resistance Circuits", Metrology and Measurement Systems, vol. 13, no. 1, 2006, pp. 49-65.
[5] Warsza Z. L., "Backgrounds of two variable (2D) measurements of resistance increments by bridge cascade circuit". In: Proc. of SPIE Photonic Applications..., Wilga Symposium, Poland, vol. 6347, ed. Romaniuk R. S., 2006, pp. 634722R-1-10.
[6] Warsza Z. L., "Accuracy Measures of the Four Arm Bridge of Broadly Variable Resistances, Parts 1 and 2". In: Proceedings of the 15th IMEKO TC4 International Symposium on Novelties in Electrical Measurement and Instrumentations, TU Iasi, Romania, vol. 1, Sept. 2007, pp. 17-28.
[7] Warsza Z. L., "Miary dokładności transmitancji mostka rezystancyjnego w przypadkach szczególnych" (Accuracy measures of transmittance of variable resistance bridge in particular cases), Pomiary Automatyka Kontrola, no. 10, 2007, pp. 17-24 (in Polish).


[8] Warsza Z. L., "Nowe ujęcie opisu dokładności mostka z przemysłowymi czujnikami Pt" (New Approach of the Accuracy of Industrial Pt Sensor's Bridge), Podstawowe Problemy Metrologii PPM'08, Prace Komisji Metrologii Oddziału PAN w Katowicach, Konferencje, no. 8, pp. 155-164 (in Polish).
[9] Kreuzer M., "Linearity and Sensitivity Error in the Use of Single Strain Gages with Voltage-Fed and Current-Fed Circuits", Experimental Techniques, vol. 8, October 1984, pp. 30-35.
[10] Kreuzer M., "Wheatstone Bridge Circuits shows almost no Nonlinearity and Sensitivity Errors when used for Single Strain Gage Measurements", Technical literature of Hottinger Baldwin Messtechnik, on the Internet: ta_sg_kreuzer_06 2004.


DATA FUSION IN MEASUREMENTS OF ANGULAR POSITION

Sergiusz Łuczak

Received 24th September 2009; accepted 3rd December 2009.

Abstract:
The paper describes a possibility of increasing the accuracy of measurements of angular position with application of data fusion. Two cases of determining the angular position are considered: accelerometer-based measurements of tilt, and measurements of angular position with application of an incremental rotation-to-pulse sensor (coupled with an original measuring system described in the paper). Application of the proposed data fusion ensures in both cases a decrease of the uncertainty of the related measurement of ca. 40%.

Keywords: data fusion, rotation-to-pulse angle sensor, tilt.

1. Introduction

The paper refers to two specific cases of measuring angular position. The first is tilt sensing realized by means of an accelerometer (which has recently become very popular due to the application of inexpensive MEMS acceleration sensors), while the second is angle measurement by means of an incremental rotation-to-pulse sensor. In both cases we usually encounter a redundancy of the measurement data, which can be used for increasing the accuracy of the considered measurements by employing appropriate data fusion.

According to one of many definitions, data fusion is a way of integrating into one whole data generated by various sources, or merging data generated by a single source, yet related to different features of an object or a phenomenon, and separated from a signal generated by a single sensor [1]. In the considered cases, we deal with a single sensor generating two or three output signals that can be merged in various ways.

As for the dynamics of the considered measurements, accelerometers sensing tilt operate under static or quasi-static conditions, as far as the existing accelerations are concerned [2]. This is connected with the low frequencies of performing such measurements, whereas in the case of incremental rotation-to-pulse sensors the conditions can be dynamic. However, because of the foreseen application, the proposed measuring system based on an incremental sensor is designed for operation at a very low rotational speed, which does not result in rigorous requirements with regard to the operation speed of the electronic circuits or of the software processing the output signals from the sensor.

2. Determining the tilt

The case of determining a tilt angle with application of accelerometers is presented in Fig. 1, where:
gx, gy, gz - components of the gravitational acceleration indicated by accelerometers with sensitive axes arranged as the Cartesian axes x, y, z,
gxz, gyz - geometric sums of pairs of the respective component accelerations,
g - gravitational acceleration,
γ - arbitrarily oriented tilt angle,
α - pitch,
β - roll.

Fig. 1. Components of the gravitational acceleration against the tilt angles.

The basic dependencies used for calculating the angular position represented by the two component angles α and β can be found e.g. in [2] - [4]. However, application of these dependencies results in a low accuracy of determining the tilt angles.

An original method that makes it possible to determine the angular position (represented by the component angles α and β) with a higher accuracy, employing appropriate data fusion, has been proposed in [5]. The idea of the measurement is based on computing these angles as a weighted average (having variable weight coefficients) of three different signals generated by the applied accelerometers:

(1)

(2)

where: w1, w2, w3, w4 - weight coefficients.

In order to determine the weight coefficients, the following dependencies have been accepted that guarantee obtaining a minimal value of the combined standard uncertainty of the component tilt angles [5]:


w1 = cos²α   (3)
w2 = sin²α   (4)
w3 = cos²β   (5)
w4 = sin²β   (6)

(7)

(8)

Various types of incremental sensors are used for measuring all sorts of physical quantities. The principle of their operation is described in numerous works, e.g. in [7]. In order to determine an angular displacement of a graduated wheel, two separate signals in space quadrature, generated by two detectors, are usually used. The phase shift of 90° results from the geometric configuration of the detectors and corresponds to a geometric angle of one quarter of the pitch angle between adjacent markers on the wheel. Such a solution makes it possible to distinguish the sense of rotation and to double the resolution of the sensor [7] (in the case when the two analogue output signals are directly converted into logical ones), which normally equals the number of markers created on the graduated wheel. The course of one period of the signals is depicted in Fig. 2 (assuming that their offsets equal 0 and their amplitudes equal 1; the electric phase angle is expressed in degrees of arc).

High accuracy systems of this type, manufactured by such companies as Renishaw or Heidenhain, feature a subdivision resolution owing to interpolation of the analogue signals generated between adjacent markers on

The initial values of the component angles α and β that occur in formulae (3) - (6) can be calculated in a few ways: using the basic arc sine or arc cosine equations, which appear in formulae (1) - (2) (the choice of the appropriate equation is determined by the values of angles α and β [2], [5], [6]), or using an arc tangent equation [4]. Additionally, an iterative method can be applied here, where in a successive step of the computations the aforementioned weight coefficients are calculated once more, yet using the values of angles α and β determined in the previous step according to formulae (1) - (2). In such a case we employ the recurrent formulae (7) - (8), and owing to that the final values of angles α and β can be determined with a higher accuracy [6].

However, in the case of the last method, when formulae (3) - (6) contain values of angles α and β determined in a previous iterative step according to (1) and (2), experimental studies indicated that no significant increase of the accuracy of determining the tilt angles is obtained in the following iterations. Additionally, application of the recurrent formulae may considerably limit the dynamics of the sensor operation.
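The weighted-average fusion with one recurrent refinement can be sketched as below. The arcsine/arccosine forms and the weights w1 = cos²α, w2 = sin²α are assumptions made for illustration (the original formulae (1), (3) - (4) are not fully legible in this copy), valid for pitch angles between 0° and 90°:

```python
import math

def fused_pitch(gx, gz, n_iter=2):
    # Weighted average of two redundant pitch estimates: the arcsine of the
    # normalized gx and the arccosine of the normalized gz, with weights
    # recomputed from the previous estimate (recurrent variant).
    gxz = math.hypot(gx, gz)
    alpha = math.atan2(gx, gz)            # initial arc tangent estimate [4]
    for _ in range(n_iter):
        w1, w2 = math.cos(alpha) ** 2, math.sin(alpha) ** 2
        alpha = w1 * math.asin(gx / gxz) + w2 * math.acos(gz / gxz)
    return alpha

g = 9.81
alpha_est = fused_pitch(g * math.sin(math.radians(30)),
                        g * math.cos(math.radians(30)))
print(round(math.degrees(alpha_est), 6))   # 30.0 for noise-free inputs
```

Near 0° the arcsine term (weight close to 1) dominates, near 90° the arccosine term does, so the estimate always relies on the more sensitive, more linear signal.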


3. Measuring the angular position with incremental sensor

the graduated wheel (same as in Fig. 2). For instance, a measuring system by Jenoptik Carl Zeiss with an incremental sensor IDW 2/16384 with 16,384 markers (which corresponds to a physical resolution of ca. 1.3 minute of arc), cooperating with a standard AE 101 counter unit, employs an 8-bit interpolation of the amplitude of the incremental signal, which makes it possible to obtain an accuracy and resolution of 1 or even 0.5 second of arc (i.e. the measurement resolution is increased almost 160 times) [8].
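A software equivalent of such subdivision can be sketched as follows; the atan2-based interpolation shown here is a common technique assumed for illustration, not the exact algorithm of the AE 101:

```python
import math

def interpolated_position(count, u_sin, u_cos, markers=16384):
    # Coarse marker count plus a subdivision fraction obtained by software
    # interpolation of the analogue quadrature signals (atan2 variant).
    phase = math.atan2(u_sin, u_cos) % (2 * math.pi)  # phase within one pitch
    pitch_deg = 360.0 / markers                       # angular pitch of markers
    return (count + phase / (2 * math.pi)) * pitch_deg

# Halfway between markers 100 and 101 (phase angle pi):
print(round(interpolated_position(100, 0.0, -1.0), 6))   # 2.208252 degrees
```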

While realizing the subdivision, one can alternately use the sine signal generated by the first detector within the range of the phase angle of -1/4 π ÷ 1/4 π, and then the cosine signal generated by the second detector within the range of 1/4 π ÷ 3/4 π (and correspondingly within the whole period), as has been proposed in an analogous case of measuring tilt angles [2]. Such an approach is also a kind of data fusion. Owing to this procedure, the accuracy of determining the angular position within one period significantly increases, because the input signals are not used within the angular range where they feature a considerable nonlinearity, which results in a small sensitivity of the measurement [2]. The same result can be obtained while measuring the difference between the instantaneous amplitudes of the sine and cosine signals [7].

In order to obtain a further increase of the accuracy related to measurements of angular position, it has been decided to employ the same principle as in the tilt measurements described in section 2. It consists in a simultaneous (instead of an alternate, as in the previous case) processing of both signals. Just as in the case of determining tilt angles, fusion of the information contained in both output signals is realized by calculating the value of the phase angle φ on the basis of a weighted average with variable weight coefficients. Analogously to formulae (1) - (2), this angle can be determined as follows:

(9)

where:
w1, w2 - weight coefficients,
U1 - analogue voltage signal from the first detector,
U2 - analogue voltage signal from the second detector,
U3 - amplitude of the signal from the first detector,
U4 - amplitude of the signal from the second detector.

Fig. 2. The measuring signals of the incremental sensor.


4. Data fusion in incremental measurements



(16)

Function (16) reaches its minimum when its derivative with respect to w1 equals zero, i.e.:

(17)

which boils down to the following equation:

2w1 - 2cos²φ = 0   (18)

Formula (18) yields dependencies identical to their counterparts (3) - (6) related to measurements of tilt angles:

w1 = cos²φ   (19)
w2 = sin²φ   (20)

Courses of the weight coefficients w1 (black curve) and w2 (gray curve) are presented in Fig. 3. The graphs indicate that for three characteristic values of the phase angle φ: 0, 1/4 π and 1/2 π, the values of the coefficients (w1, w2) will be respectively: (1, 0); (0.5, 0.5); (0, 1).

When the values of the coefficients w1 and w2 are already known, we obtain a formula for determining the phase angle φ:

(21)

The value of angle φ determined in a previous step is to be calculated analogously as in the case of tilt angles (see section 2). Neglecting the insignificant uncertainties of determining the weight coefficients, we can state that the uncertainty of determining the phase angle φ is constant and equals:

(22)

As mentioned above, the proposed idea of data fusion has already been applied in the case of tilt measurements realized by means of systems based on MEMS accelerometers [6]. It is also planned to implement the data fusion in the case of the aforementioned incremental sensor IDW 2/16384 manufactured by Zeiss.
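The fusion of (9) with the weights (19) - (20) can be sketched as below; the arcsine/arccosine form of (9)/(21) is an assumption of this sketch (the original formulae are not fully legible in this copy), valid for phase angles between 0 and 90°:

```python
import math

def fused_phase(u1, u2, u3=1.0, u4=1.0, n_iter=2):
    # Phase angle within one period from both quadrature signals at once:
    # arcsine of the normalized sine signal and arccosine of the normalized
    # cosine signal, weighted by w1 = cos^2(phi) and w2 = sin^2(phi).
    phi = math.atan2(u1 / u3, u2 / u4)      # initial estimate of the phase
    for _ in range(n_iter):
        w1, w2 = math.cos(phi) ** 2, math.sin(phi) ** 2
        phi = w1 * math.asin(u1 / u3) + w2 * math.acos(u2 / u4)
    return phi

phi_est = fused_phase(math.sin(math.radians(60)), math.cos(math.radians(60)))
print(round(math.degrees(phi_est), 6))      # 60.0 for ideal signals
```

As in the tilt case, the weights shift the estimate toward whichever signal is currently in its steep, nearly linear region.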


Fig. 3. Graphs of formulae (19) and (20).

5. Measuring System with Incremental Sensor

Formula (9) is true when the weight coefficients meet the following self-evident condition:

w1 + w2 = 1   (10)

Just as before, it is crucial to find such dependencies for determining the weight coefficients, being functions of the phase angle φ, that ensure obtaining a minimal value of the combined standard uncertainty of angle φ, which can be expressed as [9]:

(11)

where:
u(U1) - uncertainty of determining the voltage signal from the first detector,
u(U2) - uncertainty of determining the voltage signal from the second detector,
u(w1) - uncertainty of determining the first weight coefficient,
u(w2) - uncertainty of determining the second weight coefficient.

As has been supported by an analysis of experimental results related to tilt measurements, in order to simplify formula (11) a certain assumption may be accepted while determining the respective partial derivatives of the weight coefficients: the coefficients can be regarded as having a constant value, even though in fact they are functions of the measured angle. Under such an assumption, combination of formula (9) with the simplified dependency (11) yields the following equation:

(12)

Still another simplification may be accepted here: the uncertainties of determining the voltage signals from both detectors, as well as their amplitudes, are equal. It is fully justified in the cases when both detectors have an identical structure. So let us assume:

u(U1) = u(U2) = u(U)   (13)

U3 = U4   (14)

While expressing the voltages from the detectors as functions of the phase angle φ, formula (12) can be transformed as follows:

(15)

Taking into account formula (10), equation (15) can be rearranged as:



Due to numerous shortcomings, the standard counter AE 101 has been replaced with a custom-built, software-controlled electronic system. Interpolation of the sine and cosine signals with subdivision accuracy is realized digitally with application of a commercial data acquisition card equipped with 16-bit A/D converters, manufactured by Advantech [10] (cards with 12-bit A/D converters can be applied here as well). Theoretically, that would make it possible to increase the accuracy of measuring the angular position a few thousand times. It is obviously impossible because of the various sources of errors that occur here. The most significant are the following:

- noise of the output signals,
- inaccuracies of the A/D converters of the data acquisition card,
- inaccuracies of the markers on the graduated wheel,
- error of the phase shift between the detectors,
- variability of the phase shift during measurements,
- different amplitudes of the output signals generated by the detectors,
- variability of the amplitudes of the output signals during measurements,
- incorrect values of the offsets of the output signals due to alignment errors,
- variability of the offsets of the output signals during measurements.

As can easily be noted, it is very important to take into account the offsets, as well as the amplitude magnitudes, of the sine and cosine signals generated by the detectors in the incremental sensor (these values must be regarded while using formula (21)). Each sensor features individual values of these parameters (in the case of the original counters AE 101, each of them was tuned to a single incremental sensor [8]). Therefore, it is necessary to determine the offsets and amplitude magnitudes during initialisation of the system, or beforehand, for a specific sensor. Still another option, accepted in the considered measuring system, is measuring these parameters in real time and introducing possible corrections systematically. Such a solution ensures obtaining the highest accuracy of the system, as it compensates for most of the errors listed above.
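A minimal sketch of such a real-time correction follows; estimating the offset and amplitude from the signal extremes over one period is an assumption for illustration (a practical version would also reject outliers and update the extremes gradually):

```python
import math

def track_offset_amplitude(samples):
    # Offset and amplitude of one detector signal estimated from its extreme
    # values over at least one full signal period.
    u_min, u_max = min(samples), max(samples)
    return (u_max + u_min) / 2.0, (u_max - u_min) / 2.0

def normalize(u, offset, amplitude):
    # Signal reduced to zero offset and unit amplitude before interpolation.
    return (u - offset) / amplitude

# A distorted sine with offset 0.2 and amplitude 1.5, sampled over one period:
raw = [0.2 + 1.5 * math.sin(math.radians(d)) for d in range(360)]
offset, amplitude = track_offset_amplitude(raw)
print(round(offset, 3), round(amplitude, 3))   # 0.2 1.5
```

The normalized signals can then be fed directly into a phase formula of the type (21), which assumes zero offset and unit amplitude.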

A schematic of the described measuring system is presented in Fig. 4. As mentioned above, it features subdivision accuracy due to software interpolation of the measuring signals.

A shortcoming of such a system architecture is the significantly limited dynamics of its operation. For measurements performed at high rotational speeds of the graduated wheel, another kind of system should be built: an autonomous microprocessor system processing the output signals generated by the rotation-to-pulse sensor in real time, and transferring the computed results to a computer memory only from time to time, in an off-line mode.

It is also possible to eliminate the electronic interface by applying a data acquisition module featuring a high gain for the analogue inputs (such modules are designed for measurements of temperature by means of thermocouples). However, such a solution significantly limits the range of dynamic operation of the system.

6. Experimental Results

As far as application of the considered data fusion is concerned, experimental studies have so far been carried out only on a tilt sensor employing MEMS accelerometers. Fig. 5 illustrates the obtained increase of the measurement accuracy.

The indication error expressed over axis y has been defined as the absolute value of the difference between the value of the tilt angle applied by means of the test station used and the value determined with respect to the average of the respective indications of the tested sensor. The black curve relates to application of the weighted average, whereas the gray curve to alternate application of the sine and cosine signals (as described in Section 3). The increase of the measurement accuracy due to application of data fusion can be clearly observed.

Fig. 4. Schematic of the incremental measuring system.

Fig. 5. Decrease of error values owing to data fusion.

7. Summary

The author demonstrated in the paper an analogy between the case of measuring component tilt angles (pitch and roll) by means of accelerometers, and the quite different case of measuring angular position by means of incremental sensors with subdivision accuracy. Owing to application of the proposed data fusion, in both cases a significant decrease of the uncertainties of determining the angular values can be obtained (at least 40 percent with respect to the basic measuring systems, where the output signals are processed individually, using arc sine or arc cosine type formulas).

The same result can be obtained (yet in an easier way) in the case of determining tilt angles with application of an arc tangent function. A respective algorithm has been presented in [11]. The same applies to incremental sensors with subdivision accuracy, where a similar dependency can be used for calculating the phase angle realizing the subdivision [12]:

(23)

Experimental works carried out at the Faculty of Mechatronics, Warsaw University of Technology, proved that interpolation makes it practically possible to increase the sensor accuracy even 300 times (with respect to its physical resolution resulting from the number of the markers). However, it should be noted that some manufacturers realize in their incremental measuring systems a 1024-fold subdivision [7], or even a 4096-fold one [13].

Nevertheless, it should be emphasized that application of data fusion based on a weighted average, even though more complicated than application of the arc tangent function, has the advantage over the latter that it is possible to regard different uncertainties related to the two output signals. Then, formulae (7)-(8) as well as (21) change, and thus the relation between the phase angle and the weight coefficients takes on a form other than (3)-(6) and (19)-(20).
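For the first quadrant, the idea can be illustrated with inverse-variance weighting, a standard way of combining two estimates of the same quantity that have different uncertainties. The weights below are illustrative assumptions, not the paper's own formulae (3)-(6) or (19)-(20), which are not reproduced here:

```python
import math

def fuse_angle(u_sin, u_cos, sigma_sin, sigma_cos):
    """Fuse two estimates of the same first-quadrant angle, one from an
    arc-sine and one from an arc-cosine reading, by inverse-variance
    weighting; sigma_sin and sigma_cos are the (hypothetical) standard
    uncertainties of the two normalized signals."""
    phi_s = math.asin(u_sin)   # estimate from the sine signal
    phi_c = math.acos(u_cos)   # estimate from the cosine signal
    w_s = 1.0 / sigma_sin ** 2
    w_c = 1.0 / sigma_cos ** 2
    return (w_s * phi_s + w_c * phi_c) / (w_s + w_c)
```

A noisier channel automatically receives a smaller weight, so an error on the sine signal perturbs the fused angle less than it would perturb the plain arc-sine estimate.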

A case when the values of the uncertainties related to the output signals differ concerns first of all triaxial MEMS accelerometers (often used in tilt measurements), since the signal related to their vertical sensitive axis is usually a few times less accurate compared to the signals related to their horizontal sensitive axes. The author plans in the near future to carry out experimental works that are to determine how the accuracy of measuring the tilt angle increases when the aforementioned difference is regarded and compensated for.

Application of the measurement principle based on the weighted average has yet another advantage over the arc tangent function. It yields a unique result in the case when the measured angle is ±π/2, whereas while using the arc tangent function a respective angle value must be determined according to an arc cosine function, i.e. yet another data fusion must be applied.

AUTHOR
Sergiusz Łuczak - Division of Design of Precision Devices, Institute of Micromechanics and Photonics, Faculty of Mechatronics, Warsaw University of Technology, ul. Boboli 8, 02-525 Warsaw, Poland; tel. (+48)(22) 2348315, fax (+48)(22) 2348601. E-mail: [email protected].

References
[1] Sroka R., Metody fuzji danych w pomiarach parametrów ruchu drogowego (Rozprawy Monografie, no. 182), Uczelniane Wydawnictwa Naukowo-Dydaktyczne AGH: Cracow, 2008 (in Polish).
[2] Łuczak S., Oleksiuk W., Bodnicki M., "Sensing Tilt with MEMS Accelerometers", IEEE Sensors J., vol. 6, no. 6, 2006, pp. 1669-1675.
[3] "Tilt Sensing with Kionix MEMS Accelerometers", AN 005, Kionix, Ithaca, NY, 2005.
[4] Łuczak S., "Pomiary odchylenia od pionu przy użyciu akcelerometrów MEMS", Pomiary Automatyka Robotyka, no. 7-8/2008, 2008, pp. 14-16 (in Polish).
[5] Łuczak S., Oleksiuk W., "Increasing Accuracy of Tilt Measurements", Engineering Mechanics, vol. 14, no. 3, 2007, pp. 143-154.
[6] Łuczak S., "Algorytm wyznaczania odchylenia od pionu przy użyciu akcelerometrów MEMS", [in] Proc. I Kongres Mechaniki Polskiej, Warsaw, 2007, p. 119 & CD-ROM (in Polish).
[7] Zakrzewski J., Czujniki i przetworniki pomiarowe, Wyd. Politechniki Śląskiej: Gliwice, 2004 (in Polish).
[8] IDW Incremental Translumination Angle-Measuring System, Jenoptik Carl Zeiss JENA GmbH, Berlin Leipzig, 1989.
[9] Wyrażanie niepewności pomiaru. Przewodnik, Główny Urząd Miar, Warsaw, 1999 (in Polish).
[10] "PCI-1716/1716L 16-bit, 250kS/s, High-Resolution Multifunction Card. Startup Manual", Advantech Co., Ltd., Taipei, 2007.
[11] Łuczak S., "Advanced Algorithm for Measuring Tilt with MEMS Accelerometers", [in] R. Jabłoński et al., Recent Advances in Mechatronics, Springer-Verlag: Berlin Heidelberg, 2007, pp. 511-515.
[12] Chapman M., "Heterodyne and homodyne interferometry", Renishaw, New Mills, 2002.
[13] "Angle Encoders", Heidenhain, 2005.


Journal of Automation, Mobile Robotics & Intelligent Systems

SPECIAL ISSUE SECTION

Guest Editors:

Tsutomu Miki
Takeshi Yamakawa

Brain-like Computing

and Applications

VOLUME 4, N° 2 2010


Special issue section on

Brain-like Computing and Applications

It is our great pleasure to present our readers with this special issue of JAMRIS (Journal of Automation, Mobile Robotics & Intelligent Systems) on Brain-like Computing and Applications.

Today's advanced information technology, based on high-performance and inexpensive digital computers, makes our daily life convenient. Moreover, it facilitates high-speed, complicated calculation and the handling of huge amounts of data. On the other hand, it cannot perform tasks that are easy for a human being, such as recognizing facial expressions, distinguishing an animal from a human, or grasping a soft material. Researchers are now facing the limitations of system intellectualization.

To cope with this limitation, engineering applications of crucial functions of the biological brain have drawn great attention from researchers and developers. Collaborations among different fields are necessary for accomplishing the breakthrough.

We promoted brain-inspired information technology (Brain-IT) under the Kyutech 21st century COE program (2003-2008) entitled "World of brain computing interwoven out of animals and robots" (Project leader: Takeshi Yamakawa), which has been driven by the Department of Brain Science and Engineering, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology. We think that developing brain-inspired systems demands facilitating cross-disciplinary research. Five research fields, namely Physiology, Psychology, Theory and Models, Devices, and Robotics, are indispensable to realize a true brain-inspired information system. The Kyutech 21st century COE program produced several interdisciplinary technology fusions which became a base of brain-inspired information technology. Each core technology is a new approach brought about by the conjunction of research results from several different fields. As one of the results of the Brain-IT, we organized the special session on "Brain-like Computing: Theory and Applications" at SCIS & ISIS 2008, the Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems, held in Nagoya, Japan, during 17th-21st September 2008, and had deep and fruitful discussions on these impressive topics.

The papers in this issue are mainly extended and revised versions of papers that have been selected from submissions to the special session, together with related research results of the Kyutech 21st century COE program.

The first paper by G. Ogata, K. Natsume, S. Ishizuka, and H. Hayashi is focused on spike-timing-dependent plasticity (STDP) in the hippocampus. Readers of this journal are familiar with computational models but not neuroscience ones. The editors venture to put the neuroscience paper at the top of this special section. We believe that the research results of neuroscience will give suggestive information to researchers in the engineering field and promote research on brain-inspired systems. The STDP function, described in this paper, is known to relate to sequential learning in the brain, which has been shown by computational studies but not yet by neurophysiological experiments. In this paper, experiments on STDP phenomena were performed in rats. The dentate gyrus (DG) in the hippocampus binds or integrates spatial and non-spatial information in parallel. We hope this topic gives a new impression to readers.

In the second paper, T. Miki, H. Hayashi, Y. Goto, M. Watanabe, and T. Inoue present a practical, simple local navigation system inspired by hippocampal function. The authors develop human-like local navigation using a sequence of landmarks, with a simple model that is easy to implement. The proposed method adapts to slight changes of landmarks in the real world. Mechanisms for storing, recalling, and updating the landmark sequence are described. The validity of the proposed system is confirmed using an autonomous mobile robot with the proposed navigation mechanism.

The third paper, by K. Tokunaga and T. Furukawa, deals with a new, effective method for building an environmental map in a self-organizing manner, based on a Higher Rank Self-Organizing Map (SOM2). This is an important topic of great interest in mobile robot research. The proposed method creates an environmental map in a self-organizing manner from visual information obtained with a camera on a mobile robot. The effectiveness is shown by simulation results.



The fourth paper by S. Sonoh, S. Aou, K. Horio, H. Tamukoh, T. Koga, and T. Yamakawa is focused on emotional expressions in robots. The use of emotions in robotics is one of the attractive themes. The authors propose an Emotional expression Model of the Amygdala (EMA), which realizes both recognition of sensory inputs and classical conditioning of emotional inputs. The EMA was developed with a massively parallel architecture using an FPGA, and its effectiveness is confirmed by demonstrations of human-robot interaction with emotions.

The fifth paper by T. Matsuo, T. Yokoyama, D. Ueno, and K. Ishii deals with a robot motion control system using the Central Pattern Generator (CPG), which exists in the nervous systems of animals and generates rhythmical motion patterns. The authors propose a robot motion control system using a CPG and apply it to an amphibious multi-link mobile robot. The proposed system adapts dynamically to environmental changes by switching controllers according to the states of the environment and the robot. The effectiveness is confirmed by experimental results.

In the last paper, T. Sonoda, Y. Nishida, A. Nassiraei, and K. Ishii present a unique actuator: a new antagonistic joint mechanism using a kinematic transmission mechanism (KTM). The proposed model offers a solution to the difficulties in control caused by complex and nonlinear properties, the downsizing of actuators, and the response time of articular compliance. The performance of the KTM is evaluated through stiffness and position control simulations and experiments.

Guest Editors:

Tsutomu Miki

Takeshi Yamakawa

Kyushu Institute of Technology, Kitakyushu, Japan
E-mail: [email protected]

Kyushu Institute of Technology, Kitakyushu, Japan
E-mail: [email protected]


Abstract:

This paper examines spike-timing-dependent plasticity (STDP) at the synapses of the medial entorhinal cortex (EC) and the dentate gyrus (DG) in the hippocampus. The medial and lateral ECs respectively convey spatial and non-spatial information to the hippocampus, and the DG of the hippocampus integrates or binds them. There is a recurrent neuronal network between the EC and the hippocampus called the EC-hippocampus loop. A computational study has shown that, using this loop and STDP phenomena at the recurrent EC synapse, sequential learning can be accomplished. But the STDP functions at the synapses of the EC and DG have not yet been studied by neurophysiological experiments. Experiments on STDP phenomena were performed in rats. The STDP function was asymmetrical at the EC synapse and symmetrical in the DG. The medial EC mainly processes the time-series signals for spatial information about visual landmarks when a rat is running in an environment, the lateral EC processes their features, and the DG binds or integrates the information on the positions and features of the landmarks. Thus, the EC-hippocampus loop processes sequential learning of spatial and non-spatial information in parallel, and the DG binds or integrates the two kinds of signals. A system based on this biological phenomenon could have similar characteristics of parallel processing of object features and positions, and their binding.

Keywords: hippocampus, entorhinal cortex, STDP function, brain science.

1. Introduction

Animals live in a temporal world. They can experience several events and store them as episodic memories using their brains. They can also sense a sequence of the events and memorize the sequence. In the engineering field, such memorization, i.e., sequence learning, is processed with a recurrent neural network [1]. Michael Jordan developed a recurrent network and applied it to word recognition, the production of speech, etc. [2]. But how does the brain itself process such sequence learning?

The entorhinal cortex and hippocampus in the brain are thought to be involved in sequence learning. The hippocampus processes episodic memory. In episodic memory, many sensory signals flow into the hippocampus one by one. The sensory signals are processed in the cortex first. They then flow into the hippocampus via the entorhinal cortex (EC) [3]. For example, when a rat is running in an environment where some visual landmarks are located, spatial information, which is processed in the parietal cortex, first enters layer II of the medial EC (MEC) and flows into the hippocampus via the medial perforant path (mPP). Non-spatial information, such as information on color or shape, which is processed in the occipital and temporal cortices, respectively, enters layer II of the lateral EC (LEC) and flows to the hippocampus via the lateral perforant path (lPP) (Fig. 1). Then the output signal of the hippocampus returns to layer II of the EC (ECII) through layer V of the EC (ECV). Thus, the connection between the EC and hippocampus is recurrent [4]. This recurrent neuronal network (EC-hippocampus-EC, etc.) is called the EC-hippocampus loop.

Fig. 1. Spatial and non-spatial information flow in the brain. The signals on spatial information first enter the medial entorhinal cortex (MEC) and are conveyed to the dentate gyrus (DG) of the hippocampus. Signals on non-spatial information enter the lateral entorhinal cortex (LEC) and are also conveyed to the dentate gyrus.

Fig. 2. The neuronal network between the entorhinal cortex and hippocampus is recurrent. A cross section including the hippocampus and EC in the lower figure is shown in the upper picture. You can see the size of the rat brain compared with a lighter. mECII, layer II of the MEC; SUB, subiculum. The subiculum is the gateway from the hippocampus to the EC in the EC-hippocampus loop. mECV stands for layer V of the mEC. The arrows indicate the signal flow in the EC-hippocampus loop in the lower figure.

THE SPIKE-TIMING-DEPENDENT PLASTICITY FUNCTION BASED ON A BRAIN SEQUENTIAL LEARNING SYSTEM USING A RECURRENT NEURONAL NETWORK

Genki Ogata, Kiyohisa Natsume, Satoru Ishizuka, Hatsuo Hayashi

In memory processes in the brain, the synapses, i.e., the junctions between two neurons, undergo two kinds of change in the neuronal network [5]. One is long-term potentiation, LTP, and the other is long-term depression, LTD. With LTP, the signal easily passes through the synapse, while with LTD, it is difficult for the signal to pass through. In the learning process, both LTP and LTD occur, and it is thought that their occurrence forms a spatial pattern. These synaptic changes are controlled by the precise timing of the pre- and postsynaptic spikes [6]. The resulting long-lasting synaptic change in spike-timing-dependent plasticity (STDP) is expressed as a function of the spike timing Δt between pre- and postsynaptic firing. In most systems studied to date, a presynaptic neuronal spike before the postsynaptic neuronal spike leads to LTP, while a postsynaptic neuronal spike before the presynaptic neuronal spike leads to LTD [6]. Cases in which a presynaptic spike before a postsynaptic spike actually leads to LTD while the opposite timing leads to LTP have also been observed [7]. These STDP rules have an asymmetrical function. There is also a symmetrical STDP function, in which the synchronous firing of pre- and postsynaptic neurons leads to LTP while a shift of the timing results in LTD. The STDP rules are used in sequence learning [8] and in the control of synchronous activity of neuronal oscillation [9]. In research on neural networks, Hebb's rule is ordinarily used as a learning rule. This rule carries no temporal information, while the STDP rule does. When a neural network adopts the STDP rule as a learning rule, the network can use the temporal information of the neuronal spikes more effectively. Igarashi and his collaborators, including one of the coauthors of the present paper, have proposed a model in which the EC-hippocampus loop and the STDP phenomena at the recurrent synapse of ECII are used for sequence learning in the brain [10]. In the model, it takes a few tens of msec for a signal to propagate along the loop. When the first signal comes into the loop, it can be associated with the next signal, which has a delay of around a few tens of msec, according to the STDP rule at the recurrent synapses in ECII. Their model adopted a symmetrical STDP function. But the function at the synapse has not yet been clarified by a neurophysiological experiment.
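The two window shapes referred to above can be written down compactly. The exponential and Mexican-hat forms and the constants below are generic textbook choices, not the functions measured in this paper:

```python
import math

def stdp_asymmetric(dt, a_plus=1.0, a_minus=0.5, tau=20.0):
    """Classic asymmetric STDP window: pre-before-post (dt > 0)
    gives LTP, post-before-pre (dt < 0) gives LTD.
    dt is the spike timing in msec; parameters are illustrative."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def stdp_symmetric(dt, a=1.0, sigma=10.0):
    """Symmetric 'Mexican hat' window: near-coincident firing gives
    LTP, larger |dt| of either sign gives LTD."""
    x = (dt / sigma) ** 2
    return a * (1.0 - x) * math.exp(-x / 2.0)
```

In a simulation, the returned value would be added to the synaptic weight for each pre/post spike pair, so the sign of the window directly encodes LTP versus LTD.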

The dentate gyrus (DG) of the hippocampus receives a signal from the EC via the medial perforant path (mPP) and lateral perforant path (lPP) in the EC-hippocampus loop. When a rat runs through a course of objects, the mPP conveys the spatial information of the objects, and the lPP conveys their non-spatial features (Fig. 1). The DG integrates the two kinds of information. To clarify how the DG integrates the information, the characteristics of the STDP function at the synapses of the mPP and lPP must be studied. The STDP function at the synapse between the lPP and granule cells in the DG has already been measured experimentally [11]. It is symmetrical in shape. But the STDP function at the synapse between the mPP and granule cells has not yet been clarified.

In the present study, we explored the STDP rules atthe synapse between ECII and ECV neurons and at themPP synapse of the DG in the hippocampus.

2. Materials and Methods

Experiments were carried out in compliance with the Guide for the Care and Use of Laboratory Animals at the Graduate School of Life Science and Systems Engineering of Kyushu Institute of Technology. The STDP rule at the ECII synapse was recorded in EC slices cut from rat brains, as shown in Fig. 2. The STDP rule at the DG synapse was recorded in hippocampal slices cut in the same way. Rats were anaesthetized and decapitated, and the brains were removed. The slices were then cut from the brains using a microslicer. Fifty-nine slices (450 μm thick) of the EC and hippocampus were prepared from twenty-six 3- to 4-week-old Wistar rats. They were transferred to a recording chamber and perfused with oxygenated artificial nutrition solution. In the STDP experiment at the ECII synapse, a recording electrode was placed in the ECII cell layer to record the field excitatory postsynaptic potential (fEPSP) (Fig. 3), which indicates the synaptic transmission at the synapse. One of the two stimulation electrodes was placed in the axon layer to stimulate the axons of the presynaptic neurons, and the other was placed in the cell layer to stimulate the postsynaptic neurons of the mEC (Fig. 3a). Each electrode stimulated not just one axon or neuron but several. In the experiment at the DG synapse in the hippocampus (Fig. 4), the recording electrode was put into the cell layer of the DG, and the two stimulation electrodes were located so as to stimulate the presynaptic and postsynaptic neurons. The stimulation protocols were the same in the EC and hippocampus (Fig. 3). The baseline response induced by the baseline stimulus was recorded first; then the paired stimulus at both the presynaptic and postsynaptic neurons was given to induce STDP at the synapse, and the baseline stimulus was given again to check whether STDP had been induced. The strength of the baseline stimulation was adjusted so that the amplitude of the fEPSP was 50% of the maximum, so that changes in the fEPSP could be easily observed. The stimulus in the paired stimulus was adjusted to the minimum strength at which the postsynaptic neuron fired a spike. Twenty baseline stimuli were applied at intervals of 30 sec and the baseline responses of the fEPSP were recorded. After the baseline stimuli, 120 paired stimuli were applied at intervals of 5 sec for the pairing process. In the pairing, the timing of the stimulation of the presynaptic and postsynaptic neurons was shifted. After the pairing, the baseline responses were again recorded for an hour at the same interval as before (Fig. 3b). The regression line of the fEPSP at latencies between 4 and 8 msec after the stimulation (Fig. 3b, arrow) was extrapolated, and the slope of the line was calculated and used as an indication of the synaptic strength. To check whether LTP or LTD was induced, the averaged slope of the fEPSP 10 min before the pairing was compared with that between 50 and 60 min after the pairing. A statistical test (Student's t-test) was performed and the data with a significant difference (p < 0.05) were adopted.

In the STDP rule, a positive spike timing (Δt > 0) indicates that the presynaptic neurons were stimulated first, and then the postsynaptic neurons. Negative timing (Δt < 0) indicates the opposite. The synaptic change was recorded at spike timings from -60 to 60 msec at the ECII synapse, and from -40 to 20 msec at the DG synapse of the hippocampus.

Fig. 3. The recording protocol for the STDP function at the ECII synapse. A. Sites where the recording and the stimulation electrodes were located. In the right inset, an example of the electrical signal from the recording electrode is shown. The x and y axes indicate the time and the field potential, respectively. B. The stimulation protocol to record STDP phenomena. The vertical bars indicate the stimulations. The x-axis indicates the time. The inset shows a typical fEPSP. The arrow indicates the time at which the slope of the fEPSP is calculated. An extrapolated line is also shown.

Fig. 4. Schematic diagram of the recording of the STDP function in the DG of the hippocampus.

3. Results

3.1. STDP function at the synapse of ECII

In EC slices, after the pairing at the positive timing of Δt = 20 msec, the fEPSP was suppressed and the suppression lasted for at least 60 min (Fig. 5). The fEPSP slope was significantly decreased compared with the baseline slope before the pairing (significance probability p < 0.01). Therefore, LTD was induced (Fig. 5). On the other hand, the pairing at Δt = 0 msec induced a potentiation of the fEPSP that lasted for an hour; the pairing induced LTP (Fig. 6).

The STDP function at the ECII synapse (Fig. 7) shows that at spike timings Δt between the presynaptic and postsynaptic cells around 0 msec LTP was induced, while at a shifted timing around 20 msec LTD was induced. The shape of the STDP function was asymmetrical. This type of STDP function was first found at the distal synapse of the pyramidal cells in the cortex [12].
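The synaptic-strength index used in the Methods, the slope of the fEPSP trace over the 4-8 msec latency window, amounts to an ordinary least-squares fit. A minimal sketch (the sampling times and trace values are hypothetical):

```python
def fepsp_slope(times_ms, trace_mv, t_lo=4.0, t_hi=8.0):
    """Least-squares slope (mV/msec) of an fEPSP trace restricted to
    the given post-stimulus latency window, used as an index of
    synaptic strength."""
    pts = [(t, v) for t, v in zip(times_ms, trace_mv) if t_lo <= t <= t_hi]
    n = len(pts)
    mean_t = sum(t for t, _ in pts) / n
    mean_v = sum(v for _, v in pts) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in pts)
    den = sum((t - mean_t) ** 2 for t, _ in pts)
    return num / den
```

Dividing such a slope by its pre-pairing average gives the relative fEPSP slope plotted in Figs. 5-10.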


Fig. 5. LTD induced by the pairing at the positive spike timing Δt = 20 msec at the ECII recurrent synapse. The x-axis indicates the time, and the y-axis indicates the relative fEPSP slope. The relative fEPSP slope is defined as the ratio of the fEPSP slope to its average over the ten minutes before the pairing. Time zero indicates the end of the pairing. The horizontal thick bar indicates the pairing.

Fig. 6. LTP induced by the pairing.

Fig. 7. STDP function at the ECII recurrent synapse. The filled circles and bars indicate the relative fEPSP slope and the standard errors of the means (SEM). The x-axis shows the spike timing Δt, and the y-axis indicates the relative fEPSP slope. These data were obtained from forty slices of twenty male rats.

3.2. STDP function in the dentate gyrus (DG) of the hippocampus

In hippocampal slices, after pairing at the positive timing of Δt = 5 msec, LTP was slightly induced (Fig. 8), while pairings at the negative timing of Δt = -10 msec induced LTD (Fig. 9).

The STDP function at the DG synapse of the hippocampus (Fig. 10) shows that at spike timings between the presynaptic and postsynaptic neurons around 5 msec LTP was induced, while when Δt was between 10 and 18 msec, and between -20 and -1 msec, LTD was induced. The shape of the STDP function is almost symmetrical, unlike the function at the ECII synapse.

Fig. 8. LTP induced by the pairing at the positive timing Δt = 5 msec at the synapse in the DG of the hippocampus.

Fig. 9. LTD induced by the pairing at the negative timing Δt = -10 msec at the synapse in the DG of the hippocampus. The pairing suppressed the transmission.

Fig. 10. STDP function at the synapse in the DG of the hippocampus. Circles and bars indicate the relative fEPSP slope and SEM. These data were obtained from nineteen slices of six male rats.

4. Discussion

At the synapse of ECII, the STDP function was asymmetrical (Fig. 7). It had an LTP region around Δt = 0 msec and an LTD region at Δt > 0 msec. On the other hand, at the synapse of the DG in the hippocampus, it was symmetrical (Fig. 10). It had an LTP region around Δt = 5 msec, and two LTD regions, at Δt < 0 msec and around Δt = 15 msec, on both sides of the LTP region. These results suggest that the STDP function differs among brain regions [13]. It is thought that the induction of LTP or LTD depends on the rise in the intracellular Ca²⁺ concentration at the postsynaptic neuron ([Ca²⁺]i) caused by the pairing [14]. When [Ca²⁺]i increases only slightly with the pairing, the synapse does not change. When it increases moderately, LTD is induced. When the increase in [Ca²⁺]i is great, LTP is induced. [Ca²⁺]i is increased by the neuronal spike. Therefore, when the presynaptic and postsynaptic neurons fire simultaneously, [Ca²⁺]i increases greatly in the postsynaptic neurons, and LTP is induced at both the EC and the hippocampus synapse (Figs. 7 and 10). When the spike timing between the postsynaptic and presynaptic neurons shifts, [Ca²⁺]i does not increase as much, and LTD is induced at the DG synapse of the hippocampus (Fig. 10). This mechanism alone, however, does not explain the finding that there was no LTD region at the negative timing in the STDP function of the ECII synapse. The asymmetrical STDP function found here at the ECII synapse is a novel type. A similar asymmetrical STDP function was found at synapses remote from the soma of pyramidal neurons located in the neocortex [12], although that function mainly had an LTP region at the negative timing, unlike the present function. The mechanism of the present function has not yet been clarified. There are feedback inhibitory neurons in the ECII network; these neurons may contribute to the STDP function.

In an open field with landmarks, a rat runs watching the landmarks. The rat memorizes the time-series of the positions of the landmarks and associates their positions and features. From this information the rat recognizes its own position. As a result, place cells emerge in the hippocampus [15]. A place cell in the hippocampus fires in its place field whenever the rat runs into that field from any direction. As a result, the hippocampus of the rat memorizes its position in the environment.

The granule cells in the DG of the hippocampus receive information from the EC via the mPP and lPP. The mPP conveys the spatial (positional) information of the landmarks and the lPP conveys information about their features. The mPP and lPP come from the mECII and lECII, respectively. The two signals converge on the granule cells of the DG in the hippocampus, and the granule cells integrate them (Fig. 1). The STDP function at the DG synapse with the mPP (Fig. 10), like that with the lPP [11], is symmetrical. It has an LTP area near Δt = 0 msec, and there are two LTD areas, at negative and positive timing. When the granule cells fire simultaneously with the firing of the mPP or lPP, the synapses of the mPP and lPP on the granule cells are long-term potentiated. Therefore, there is a possibility that the granule cells associate the non-spatial information of a landmark brought by the lPP with its positional information conveyed by the mPP when the two inputs arrive at the granule cells simultaneously. The hippocampus thus associates the features of the landmark objects with their positional information. The DG neurons, which associate the positions and the features of the landmarks, can be regarded as place cells.

Igarashi et al. [10] have proposed a model of the brain in which the EC-hippocampus loop processes sequence learning of sensory inputs. In rats, sensory information to the EC is coded by the spike train. It is assumed that the frequency of the spike train depends on the distance between the rat and the landmark. When the rat is far from the landmark, the frequency of the spike train is low, for example, 30 Hz, and when the rat comes closer to the landmark, its frequency increases, say, to 40 Hz. The stellate cells in the ECII connect with each other via the recurrent neuronal network through the hippocampus (Fig. 2). Thus, the output of the stellate cells in the EC returns to the other stellate cells in the mEC with some delay. Igarashi's model showed that when two stellate cells receive input spike trains of 40 Hz and 30 Hz, respectively, LTP is induced at the synapse from the 40 Hz-firing stellate cell to the 30 Hz-firing stellate cell according to the STDP rule [10]. Thus, when the rat runs past the landmarks A, B, and C, the EC-hippocampus loop memorizes the sequence of the landmarks A, B, and C. Igarashi et al. [10] used an STDP function with a symmetrical Mexican-hat shape. One of the present coauthors, Hayashi, has shown that when EC stellate cells receive sensory inputs containing background noise, an asymmetrical STDP function with an LTD area at negative timing and an LTP area at positive timing induces irrelevant synaptic enhancement at the ECII synapse [16]. They therefore suggest that the LTD area at positive timing of the symmetrical STDP function prevents the irrelevant synaptic change and enables robust sequence learning for sensory inputs containing background noise. The STDP function found in the present paper has an LTD area at positive timing. Using this function, the EC-hippocampus loop could process sensory input signals with background noise more robustly.

Actually there are two loops: a mEC-mPP-hippocampus loop, which processes the time series of positions, and a lEC-lPP-hippocampus loop, which processes the time series of features, separately and in parallel. The time series of the positions and the features of the landmarks are processed in the EC-hippocampus loops in parallel, and the position and features of a landmark are integrated and bound in the granule cells of the DG in the hippocampus. Parallel processing of object features and their integration or binding may be characteristic of the information processing of a brain system based on the EC-hippocampus loop. As a result, the system can interpret the environment around it flexibly and can adjust to its environment.
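The selective potentiation described above (the cell driven at the higher rate fires earlier in each cycle, so its outgoing connection is repeatedly potentiated) can be sketched with assumed, illustrative parameters; this is not the published model, only the ordering argument it rests on.

```python
# Toy sketch of STDP-based ordering: within each theta cycle the cell
# driven at a higher rate fires earlier, so the pre-before-post pairing
# (fast -> slow) is repeatedly potentiated while the reverse pairing is
# depressed. Rates, cycle count, and rule amplitudes are assumptions.

def first_spike_latency_ms(rate_hz):
    """Assume the first spike in a cycle occurs after one inter-spike
    interval, so stronger (higher-rate) input fires earlier."""
    return 1000.0 / rate_hz

def stdp(dt_ms):
    """Asymmetric rule: potentiate pre-before-post (dt > 0),
    depress post-before-pre (dt < 0)."""
    return 0.1 if dt_ms > 0 else -0.05

w_fast_to_slow = 0.0
w_slow_to_fast = 0.0
for cycle in range(10):                          # ten theta cycles
    t_fast = first_spike_latency_ms(40.0)        # 25 ms
    t_slow = first_spike_latency_ms(30.0)        # ~33 ms
    w_fast_to_slow += stdp(t_slow - t_fast)      # pre (fast) before post (slow)
    w_slow_to_fast += stdp(t_fast - t_slow)      # pre (slow) after post (fast)

# Only the fast -> slow connection accumulates potentiation,
# embedding the order "closer landmark first" in the weights.
```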

Journal of Automation, Mobile Robotics & Intelligent Systems


VOLUME 4, N° 2 2010


In brain-inspired systems, a new concept for studies linking the fields of brain science and engineering, it is thought that the results from brain science can be applied to engineering technologies [17]. The present work suggests that the EC-hippocampus loop processes sequence learning based on the features and positions of landmarks, and that the hippocampus integrates or binds the features and the positions in parallel. Using these processes, a brain may navigate a path of movement. The proposed brain sequence-learning model using the EC-hippocampus loop may be applicable to the sequential learning of landmarks by a mobile robot. A navigation system for a robot based on the present results can be developed.

Our results suggest the possibility that a brain uses a recurrent neuronal network and an asymmetrical STDP function to achieve sequential learning. This processing has a feed-forward character. On the other hand, recurrent neural networks in the engineering field usually adopt the backpropagation-through-time (BPTT) algorithm to learn the sequence of a time-series signal [18]. If that algorithm were used in a brain, some neurons would have to fire retrospectively. Actually, hippocampal neurons do replay in reverse [19]. Which algorithm the brain adopts must be determined in future studies. In addition, which algorithm is useful for a robot navigating a path of movement will be clarified by studies of brain-inspired systems.

AUTHORS
Genki Ogata, Kiyohisa Natsume*, Satoru Ishizuka, Hatsuo Hayashi - Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kita-kyushu, Fukuoka, Japan 808-0196. Kiyohisa Natsume, tel and fax: +81-93-695-6094. Email: [email protected].
* Corresponding author

ACKNOWLEDGMENTS
This work was partially supported by the 21st Century COE (Center of Excellence) Program at the Kyushu Institute of Technology, entitled "World of brain computing interwoven out of integrating animals and robots".

References

[1] Elman J.L., "Finding structure in time", Cognitive Science, vol. 14, 1990, pp. 179-211.
[2] Jordan M.I., "Serial order: A parallel distributed processing approach", ICS Report, vol. 8604, 1986, pp. 1-40.
[3] Kloosterman F., Van Haeften T., Witter M.P., Lopes Da Silva F.H., "Electrophysiological characterization of interlaminar entorhinal connections: an essential link for re-entrance in the hippocampal-entorhinal system", Eur. J. Neurosci., vol. 18, 2003, pp. 3037-52.
[4] Witter M.P., Moser E.I., "Spatial representation and the architecture of the entorhinal cortex", Trends Neurosci., vol. 29, 2006, pp. 671-8.
[5] Bliss T.V., Collingridge G.L., "A synaptic model of memory: long-term potentiation in the hippocampus", Nature, vol. 361, 1993, pp. 31-9.
[6] Bi G.Q., Poo M.M., "Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type", J. Neurosci., vol. 18, 1998, pp. 10464-72.
[7] Markram H., Lubke J., Frotscher M., Sakmann B., "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs", Science, vol. 275, 1997, pp. 213-5.
[8] Abbott L.F., Blum K.I., "Functional significance of long-term potentiation for sequence learning and prediction", Cereb. Cortex, vol. 6, 1996, pp. 406-16.
[9] Lubenov E.V., Siapas A.G., "Decoupling through synchrony in neuronal circuits with propagation delays", Neuron, vol. 58, 2008, pp. 118-31.
[10] Igarashi J., Hayashi H., Tateno K., "Theta phase coding in a network model of the entorhinal cortex layer II with entorhinal-hippocampal loop connections", Cogn. Neurodyn., vol. 1, 2007, pp. 169-84.
[11] Lin Y.W., Yang H.W., Wang H.J., Gong C.L., Chiu T.H., Min M.Y., "Spike-timing-dependent plasticity at resting and conditioned lateral perforant path synapses on granule cells in the dentate gyrus: different roles of N-methyl-D-aspartate and group I metabotropic glutamate receptors", Eur. J. Neurosci., vol. 23, 2006, pp. 2362-2374.
[12] Kampa B.M., Letzkus J.J., Stuart G.J., "Dendritic mechanisms controlling spike-timing-dependent synaptic plasticity", Trends Neurosci., vol. 30, 2007, pp. 456-63.
[13] Sjostrom P.J., Rancz E.A., Roth A., Hausser M., "Dendritic excitability and synaptic plasticity", Physiol. Rev., vol. 88, 2008, pp. 769-840.
[14] Bienenstock E.L., Cooper L.N., Munro P.W., "Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex", J. Neurosci., vol. 2, 1982, pp. 32-48.
[15] O'Keefe J., "Place units in the hippocampus of the freely moving rat", Exp. Neurol., vol. 51, 1976, pp. 78-109.
[16] Hayashi H., Igarashi J., "LTD windows of the STDP learning rule and synaptic connections having a large transmission delay enable robust sequence learning amid background noise", Cogn. Neurodyn., no. 2(3), June 2009, pp. 119-130.
[17] Natsume K., Furukawa T., "Introduction of brain-inspired systems". In: 2009 International Symposium on Nonlinear Theory and its Applications, Sapporo, Japan, 2009, pp. 195-197.
[18] Rumelhart D.E., Hinton G.E., Williams R.J., Learning Internal Representations by Error Propagation, vol. 1, MIT Press, 1986.
[19] Foster D.J., Wilson M.A., "Reverse replay of behavioural sequences in hippocampal place cells during the awake state", Nature, vol. 440, 2006, pp. 680-683.


A SIMPLE LOCAL NAVIGATION SYSTEM INSPIRED BY HIPPOCAMPAL FUNCTION AND ITS AUTONOMOUS MOBILE ROBOT IMPLEMENTATION

Tsutomu Miki, Hatsuo Hayashi, Yuta Goto, Masahiko Watanabe, Takuya Inoue

Abstract:

We propose a practical, simple local navigation system inspired by the sequence learning mechanism of the entorhino-hippocampal system. The proposed system memorizes a route as a sequence of landmarks in the same way humans do. The proposed local navigation system includes a local route memory unit, a landmark extraction unit, and a learning-type matching unit. In the local route memory unit, the concept of the sequence learning mechanism of the entorhino-hippocampal system is implemented using a fully connected network, while a sequence of landmarks is embedded in the connection matrix as the local route memory. This system has two operation modes: learning and recall. In learning mode, a sequence of landmarks, i.e. a local route, is represented by enhanced loop connections in the connection matrix. In recall mode, the system traces the stored route, comparing current landmarks with the stored landmarks using the landmark extraction and learning-type matching units. The system uses a prospective sequence to match the current landmark sequence with the recalled one. Using a prospective sequence in the route comparison allows confirmation of the correct route and deals with any slight change in the current sequence of landmarks. A certainty index is also introduced for judging the validity of the route selection. We describe a basic update mechanism for the stored landmark sequence in the case of a small change in the local route memory. The validity of the proposed system is confirmed using an autonomous mobile robot equipped with the proposed navigation system.

Keywords: human-like local navigation, sequence learning, entorhino-hippocampal system, autonomous mobile robot.

1. Introduction

Recently, due to the rapid growth in digital technology, there has been accelerated development of highly intelligent machines. Intelligent machines have made our daily lives far more convenient. Huge quantities of data are easily handled electronically at high speed, and many electrical devices have been provided with powerful functionality. In particular, the latest vehicles come equipped with very sophisticated information devices [1]. One of the most remarkable technologies is the navigation system, which guides us to our destinations in unfamiliar territory. The guidance is supported by a GPS system and an accurate digital map. In other words, the system is highly dependent on data and therefore has a weakness with respect to recent changes and mistakes in the data. Humans, on the other hand, can handle such changes flexibly. Introducing such human-like information processing would be vital in compensating for the weakness of digital equipment. Our aim is to develop an effective human-like system which compensates for this weakness and is able to suggest a plausible route even when conventional navigation systems fail due to insufficient information.

Recently, many researchers have focused on the function of spatio-temporal representation in the hippocampus [2], [3]. Yoshida and Hayashi proposed a computational model of sequence learning in the hippocampus; that is, neurons that learn a sequence of signals can be characterized, in the hippocampal CA1, by using propagation of neuronal activity in the hippocampal CA3 [5]. A network model of the entorhinal cortex layer II (ECII) with entorhino-hippocampal loop circuitry was proposed by Igarashi et al. [6]. Loop connections between stellate cells in the ECII are selectively potentiated by afferent signals to the ECII, and consequently stellate cells connected by potentiated loop connections fire successively in each theta cycle. The mechanism has also been investigated from a neurobiological viewpoint [7]. We focus on these attractive abilities of sequential learning and apply them to a local navigation system.

Several navigation mechanisms inspired by crucial brain functions, especially in the hippocampus and its surroundings, have been proposed [4], [8], [9]. Most of these mechanisms, however, tend to become very intricate as a result of faithfully mimicking the brain mechanism. On the other hand, simplicity of the model is an important factor for practical embedded systems. We aim to develop a simple local navigation system introducing the essence of these remarkable brain functions. We have proposed a practical, simple local navigation system inspired by the sequence learning mechanism of the ECII with entorhino-hippocampal loop circuitry [10]. In the system, a route is represented as a sequence of landmarks, as is the case in humans. The proposed local navigation system consists of a simple local route memory unit, a landmark extraction unit, and a learning-type matching unit. In the local route memory unit, the sequence learning mechanism of the entorhino-hippocampal system is implemented using a fully connected network, while a sequence of landmarks is embedded in the connection matrix as the local route memory. This system has two operation modes: learning and recall. In learning mode, a sequence of landmarks is represented by enhanced loop connections in the connection matrix. In the system, a certainty index is introduced to evaluate the validity of the route selection. We have realized a flexible local navigation system in a simple architecture

using the prospective landmark sequence and certainty index. In this paper, we describe the mechanisms for storing and recalling the landmark sequence, and present a basic update mechanism for the stored landmark sequence. We confirm the validity of the proposed system and investigate its adaptability to changes in circumstances through experiments using an autonomous mobile robot with the proposed mechanism.

2. Sequence learning mechanism of the entorhinal-hippocampal system

Igarashi et al. proposed a computational model of the ECII network with entorhino-hippocampal loop connections [6]. In their model, a pair of afferent pulse trains to the ECII, with clearly different frequencies, is selected by virtue of loop connections that are selectively potentiated by the pairs of afferent signals. The frequency depends on the strength of the sensory input. The signal transmission delay through the loop connections produces the order of places. Here a “place” means a place in the real world corresponding to a landmark. We assume that the observer moves at a constant speed.

Here, let us assume that a route is coordinated by a sequence of places, A, B, C, D, and E. The signal for each place is represented by a frequency depending on the distance between the observer and the place, where a high frequency corresponds to a shorter distance. A higher-frequency signal makes a so-called “place cell” fire in a shorter period of time. When a signal observed at place A is fed into the ECII, the signal of place cell A is transmitted from the ECII to the ECV through the DG-CA3-CA1 in the entorhino-hippocampal loop circuit, as shown in Fig. 1. If signal B, observed at place B, fires another place cell B in the ECII at the time the transmitted signal A arrives at the ECV cell A, the connection between the ECV cell A and the ECII cell B is enhanced by learning mechanisms in the brain. This means that a relationship between place A and place B is established. The relationships for the following signals are established in the same manner. As a result, the order of places is embedded in the loop connection weights from the ECV to the ECII neurons.

Fig. 1. Entorhino-hippocampal loop circuit. Here “place” means a place in the real world, and represents “a place cell”, which is a neuron in the EC corresponding to a place in the real world.

In this paper, we develop a practical, simple local navigation system inspired by the sequence learning mechanism in the hippocampus and the entorhinal cortex. The system uses signals obtained from images of landmarks specifying places, and the route is stored as a chain connection of landmarks.
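The distance-to-frequency coding described above can be made concrete with a small sketch. The paper only states that a shorter distance corresponds to a higher frequency (e.g. 30 Hz far, 40 Hz near); the linear form, the rate endpoints, and the maximum distance below are assumptions.

```python
def landmark_rate_hz(distance_m, near_rate=40.0, far_rate=30.0, d_max=10.0):
    """Hypothetical distance-to-rate coding for a landmark signal:
    linear interpolation between near_rate (at distance 0) and
    far_rate (at d_max), with the distance clipped to [0, d_max]."""
    d = min(max(distance_m, 0.0), d_max)
    return near_rate - (near_rate - far_rate) * d / d_max
```

Under this coding, approaching a landmark raises its signal frequency, which is what lets the loop circuit order the places by proximity.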

3. Simple local navigation system inspired by the sequence learning mechanism in the entorhino-hippocampal loop circuit

We propose a dedicated navigation system inspired by the structure of the entorhino-hippocampal loop circuitry. In the proposed system, the sequence learning mechanism is implemented using a fully connected network (as shown in Fig. 2), while the order of landmarks is embedded in the matrix of loop connections as a local route memory unit. Here each landmark corresponds to a place in the real world. The entorhino-hippocampal loop circuitry illustrated in Fig. 1 is represented by neurons corresponding to place cells in the ECII and connecting loops with connection weights. A connection weight represents an established connection between the corresponding place cells in the ECV and ECII. The navigation system also includes a landmark extraction unit and a learning-type matching unit, as shown in Fig. 3. The landmark extraction unit extracts landmarks from an image obtained with a camera, and the learning-type matching unit evaluates the degree of matching with the current tracing route.

Fig. 2. Local route memory unit: the route is coordinated by a sequence of observed landmarks. Landmark sequence: A → B → C → D → E → F. Each landmark corresponds to a place in the real world. The loop circuit illustrated in Fig. 1 is represented by neurons corresponding to place cells in the ECII and a loop connected with a connection weight. The connection weight represents an established connection between place cells in the ECV and ECII.

Fig. 3. The proposed navigation system includes a landmark extraction unit, a learning-type matching unit, and a local route memory unit.


The system has two operation modes: learning mode and recall mode. In learning mode, the order of the obtained landmarks is stored in the local route memory unit. In recall mode, the local route memory unit recalls the prospective landmark sequence, and the learning-type matching unit evaluates the selected current route by comparing the observed landmark sequence with the recalled prospective landmark sequence. Here, we assume the following procedure. 1) First, in learning mode, the system memorizes a route by moving along the route or storing the data of the route. 2) Thereafter, in recall mode, the system traces the route automatically according to the stored one.

Fig. 4 shows the basic elements of the local route memory unit. Fig. 4a) depicts a neuron unit corresponding to a place cell in the ECII. Here I_j, a_j, u_j, and prg_j are the landmark input, activation input, output of the neuron, and a program signal, respectively. LRN/RCL is a mode select signal equal to “1” in learning mode and “0” in recall mode. Once the neuron has been activated, output u_j retains the activated state until another neuron is activated. Fig. 4b) depicts the connection weight unit, where w_ij is a connection weight. The connection weight unit stores the ordering relationship between the i-th and j-th neurons. When I_j is fed into the j-th neuron, u_j is activated in learning mode, prg_j becomes “1”, and the connection weight w_ij is set to “1”.

In the neurobiological computational model, the loop delay corresponds to the sampling period of capturing the landmark. Conversely, in the proposed system, the landmark is captured every time it is observed, and the activation state of the neuron is kept with a latching mechanism until the next landmark appears.

(a) j-th neuron unit
(b) Connection weight unit

Fig. 4. (a) Neuron unit and (b) connection weight unit. The neuron corresponds to a place cell in the ECII. Here I_j, a_j, u_j, and prg_j are the landmark input, activation input, output of the neuron, and a program signal, respectively. LRN/RCL is a mode select signal equal to “1” in learning mode and “0” in recall mode. Once the neuron has been activated, u_j retains an activated state until another neuron is activated. The connection weight unit memorizes the ordering relationship between the i-th and j-th neurons. When I_j is fed into the j-th neuron, u_j is activated in learning mode, prg_j changes to “1”, and the connection weight w_ij is set to “1”.

3.1. Learning mode

In learning mode (Fig. 5), the system coordinates the route as a sequence of observed landmarks. A signal corresponding to the landmark is fed into the local route memory unit, and a connection is made between the current landmark and the one immediately before. Here, let us assume that a signal I_j of the j-th landmark is observed after a signal I_i of the i-th landmark. The relationship between the landmarks is represented by enhanced loop connections in the connection matrix as follows:

1) u_i is activated by signal I_i, and the activation state is retained until the next neuron is activated.

2) Then, the landmark signal I_j is fed into the j-th neuron, and a program signal prg_j is set to “1”.

3) The connection weight w_ij is assigned by the following equation.

(1)

4) Moreover, an activation signal a_j is assigned by the following equation.

(2)

5) a_j activates u_j in preparation for the next landmark signal.

By repeating the above steps, the local route is stored in the local route memory unit.

Fig. 5. Assignment of the connection weight in learning mode.

3.2. Recall mode

In recall mode (Fig. 6), LRN/RCL = “0”, and the system traces the stored route in the local route memory unit as follows:

1) When the landmark signal I_i is observed and fed into the i-th neuron, I_i activates u_i.

2) If the connection weight w_ij on the activated line is “1”, then a_j is set to “1”.

3) The neuron output signal u_j is activated by a_j.

4) One recall leads to another, until the (i+k)-th landmark signal I_{i+k} is recalled in the same manner. Here, the observed and prospective sequences consist of K landmarks.


5) The recalled sequence, consisting of K landmarks, is fed into the matching unit and compared with the current sequence of observed landmarks.
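The learning and recall steps above can be condensed into a small sketch. The class below is a hypothetical implementation of the connection-matrix idea (one neuron per landmark label, w[i][j] = 1 for an enhanced loop connection); Eqs. (1)-(4) are not reproduced, and the certainty index is assumed here to be the fraction of matching positions.

```python
class LocalRouteMemory:
    """Minimal sketch of the local route memory unit (assumed encoding)."""

    def __init__(self, labels):
        self.labels = list(labels)
        self.idx = {name: k for k, name in enumerate(self.labels)}
        n = len(self.labels)
        self.w = [[0] * n for _ in range(n)]   # loop-connection matrix

    def learn(self, route):
        """Learning mode: set the loop connection between each landmark
        and the one observed immediately after it."""
        for a, b in zip(route, route[1:]):
            self.w[self.idx[a]][self.idx[b]] = 1

    def recall(self, start, k):
        """Recall mode: trace the stored route for k steps, following
        connections whose weight is "1" (first match wins)."""
        seq, cur = [start], self.idx[start]
        for _ in range(k):
            successors = [j for j, wij in enumerate(self.w[cur]) if wij == 1]
            if not successors:
                break
            cur = successors[0]
            seq.append(self.labels[cur])
        return seq

def certainty_index(recalled, observed):
    """Assumed form of the certainty index CI: fraction of positions at
    which the recalled and observed sequences of K landmarks agree."""
    return sum(r == o for r, o in zip(recalled, observed)) / len(recalled)

memory = LocalRouteMemory(["A", "B", "C", "D", "E"])
memory.learn(["A", "B", "C", "D", "E"])     # store the local route
prospective = memory.recall("A", 3)         # -> ['A', 'B', 'C', 'D']
```

Tracing then amounts to recalling a prospective sequence from the current landmark and scoring it against the observed one.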

In recall mode, the matching unit matches the current sequence of landmarks and the corresponding recalled sequence (where both sequences consist of K landmarks). The certainty index CI is introduced to verify the correctness of the currently selected route.

Fig. 6. Landmark sequence activation in recall mode.

3.3. Certainty index for judging the validity of the selected current route

The certainty index CI represents the validity of the route currently selected and is defined as

(3)

(4)

where K is the length of the recalled sequence used for route matching, and I^R_{i+k} and I^O_{i+k} represent the recalled landmark and the currently observed landmark, respectively.

A slight change in the circumstances can be represented by a change in the CI. For example, a crossing can be detected by a change in the certainty index CI. As the observer reaches the crossing, the certainty index CI decreases to 1/K, because the number of matched landmarks decreases as the observer approaches the crossing.

3.4. Update mechanism of the stored route in the local route memory

In the proposed method, the order of landmarks is embedded in a matrix of loop connections as the local route memory. The stored route in the local route memory unit can easily be updated by rearranging the loop connections. In this section, we describe the procedure for updating the connections in two different cases: a missing or an added landmark observed in the selected route. When a mismatched landmark is detected, the system investigates the subsequent sequence after the change point. If the subsequent landmarks match the stored landmarks, the system accepts that the selected route is correct and that an environmental change has occurred in the stored landmark sequence. If an update of memory is required, the connections are updated as described below.

Let us assume that the route depicted in Fig. 7a) is stored in learning mode. Fig. 7b) illustrates the case where landmark “B” disappears in the real world. In this case, first, the landmark of place A is observed. The system recalls the prospective landmark sequence and expects the landmark of place B, as shown on the left in Fig. 7b). If “C” is observed instead of “B”, the system confirms that landmark “B” is missing and generates a new connection node between “A” and “C”, as shown on the right in Fig. 7b). As a result, the stored route is updated. On the other hand, Fig. 7c) illustrates the case where landmark “b” appears

(a) Stored route.

(b) Adaptability to a missing landmark in the real world: the case in which landmark “B” disappears in the real world.

(c) Adaptability to an additional landmark in the real world: the case in which landmark “b” appears between landmarks “A” and “B”.

Fig. 7. The proposed system can adapt to stored-route changes caused by a slight change in landmarks. The stored route in the local route memory can be updated by rearranging the loop connections.


between landmarks “A” and “B”. In this case, a new route is created by generating two new connections in place of the old one, as shown in Fig. 7c).
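The two update cases of Fig. 7 can be sketched directly. For brevity the sketch uses a hypothetical dictionary encoding of the stored connections (next_of[x] is the landmark stored immediately after x) rather than the full connection matrix; the rewiring logic is the same.

```python
# Sketch of the route-update mechanism (hypothetical dict encoding).

def update_missing(next_of, prev, missing):
    """Landmark `missing` disappeared: reconnect prev directly to the
    landmark that followed it (A -> B -> C becomes A -> C)."""
    next_of[prev] = next_of.pop(missing)
    return next_of

def update_added(next_of, prev, new):
    """Landmark `new` appeared after prev: insert it by replacing the
    old connection with two new ones (A -> B becomes A -> b -> B)."""
    next_of[new] = next_of[prev]
    next_of[prev] = new
    return next_of

route = {"A": "B", "B": "C"}
update_missing(route, "A", "B")      # route becomes {'A': 'C'}

route2 = {"A": "B", "B": "C"}
update_added(route2, "A", "b")       # route2 becomes {'A': 'b', 'b': 'B', 'B': 'C'}
```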

4. Experimental results

To confirm the validity of the proposed local navigation system for practical engineering applications, we made use of an autonomous mobile robot, “WITH” [11], which is a basic omni-directional mobile robot platform developed as a result of the Kyutech 21st Century COE program. We confirm that the proposed system can extract landmarks and store a route in the form of a sequence of landmarks, and that the robot with the proposed mechanism can trace the route corresponding to the stored landmark sequence in the local route memory unit. Moreover, we show that the robot behaves like a human when circumstances change slightly.

Fig. 8 shows the autonomous mobile robot consisting of the robot base WITH, a USB camera, and a palmtop computer VAIO type-U in which the proposed navigation system is embedded. The setting signal of the operation modes and the cruise control signal in learning mode are given wirelessly by an external computer.

Fig. 8. Autonomous mobile robot consisting of the mobile robot base WITH, a USB camera, and a palmtop computer VAIO type-U in which the proposed navigation system is embedded.

The experiments were designed to investigate the following behavior: 1) route tracing according to the stored route memorized in learning mode; and 2) selecting a plausible route when a slightly altered sequence of landmarks is encountered as a result of either a missing landmark or the addition of an unknown landmark in the stored route sequence.

Landmarks are set along the path on an experimental field, as shown in Fig. 9. A path of width 24 cm is drawn using two black lines. Each landmark is about 10 cm x 10 cm and labeled with one of the colors red (R), blue (B), light-green (G), yellow (Y), or orange (O). Fig. 10 shows the arrangement of the landmarks and the route stored in learning mode in this experiment. In general, a change in lighting has an effect on the appearance of landmarks obtained with a camera. Different color images can therefore be obtained for the same landmark. Here, we avoid this problem by using a standard three-layer perceptron (MLP) trained in advance with all the landmarks used in the experiments. In this experiment, the MLP is trained using the hue data of an HSV image set for each landmark. The trained MLP works as a classifier of the landmark obtained with a camera.

4.1. Route tracing according to the route stored in the local route memory unit

First, in learning mode, an operator controls the robot, navigating it through the planned route. The system obtains landmarks beside the path along the route traversed and stores the route as a sequence of these landmarks. Once this has been completed, we make the robot trace the stored route autonomously.
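The paper classifies landmarks with a three-layer perceptron trained on hue data; since that network's weights are not given, the sketch below substitutes a nearest-reference-hue rule to illustrate hue-based landmark labeling. All reference hue values are assumptions, not measurements from the experiment.

```python
# Stand-in for the trained MLP landmark classifier: label a hue sample
# by its circularly closest reference hue (hues in degrees, assumed).

REFERENCE_HUES = {
    "red": 0.0,
    "orange": 30.0,
    "yellow": 60.0,
    "light-green": 100.0,
    "blue": 220.0,
}

def classify_hue(hue_deg):
    """Return the landmark color whose reference hue is closest on the
    360-degree hue circle."""
    def circ_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(REFERENCE_HUES, key=lambda c: circ_dist(hue_deg, REFERENCE_HUES[c]))
```

Working in hue rather than RGB is what gives the classifier some robustness to the lighting changes mentioned above.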

Fig. 11a) shows the camera view in which landmarks obtained by the landmark extraction unit are displayed. The robot traces the correct route, as shown in Fig. 11b). Fig. 11c) shows the prospective sequence recalled by the local route memory unit and the change in the certainty index CI as defined in Eq. (3). As shown in Fig. 11c), by monitoring the change in the CI, it is known when the robot reaches a crossing. The CI returns to “1” after the robot turns at the crossing. This means that the correct route has been selected.

Fig. 9. Experimental field: the field size is 5.5 m x 5.5 m, the path is drawn using two black lines, and the width of the path is 24 cm. Each landmark is about 10 cm x 10 cm and labeled with a color: red, blue, light green, yellow, or orange.

Fig. 10. Arrangement of landmarks and the stored route in the experiments.

4.2. Route tracing with a slight route change

Two situations are assumed: a missing landmark and the addition of an unknown one in the current route. The


Fig. 11. Result of route tracing according to the route memorized in learning mode. (a) Camera view: landmarks are extracted from an image obtained with a single camera. (b) The autonomous mobile robot traces the stored route automatically. (c) Prospective sequence recalled by the local route memory unit and the change in the certainty index CI.

Fig. 12. Behavior of route tracing in the case of a slight change in the route stored in the local route memory. (a) Missing landmark in the current route: at a crossing, the system evaluates all possible routes and chooses the route with the highest matching degree as the correct direction. (b) Change in the certainty index: the robot recognizes a missing landmark from a change in the CI and by comparing the current route with a shifted memory route.


behavior in the cases of a missing landmark and an additional one is shown in Figs. 12 and 13, respectively. Fig. 12 shows the case in which a landmark is missing. The robot recognizes the missing landmark from a change in the CI and by comparing the current route with a shifted memory route. Then the system generates a new route by the update mechanism described in Section 3.4. Conversely, in the case where an additional landmark appears in the real world (Fig. 13), the robot recognizes the additional landmark from a change in the CI and by comparing the current route with a shifted memory route. Consequently, the system generates new routes by the update mechanism described in Section 3.4. At a crossing, the robot decides which way to move by comparing the matching degree (CI) of the local landmark sequence for all possible paths. Despite checking all possible paths at the crossing, perfectly matching paths are not detected in these cases. Therefore, the robot selects the path with the highest matching degree (CI) as the plausible direction. When a local route stored in memory is found in the selected route, the system concludes that its decision was correct and that the route had changed slightly.

5. Conclusions

A practical, simple human-like navigation system inspired by the entorhino-hippocampal loop mechanism was proposed, and its validity was confirmed through experiments using an autonomous mobile robot. The system navigates using landmarks stored in the memory unit. By using a prospective landmark sequence, the system is able

to adapt to slight changes in the local route stored in theroute memory unit. In this paper, we confirmed that thecorrect action is taken when there is either a missing or anadditional landmark in a local route. This ability makesthe navigation system flexible. In addition, we introdu-ced the certainty index as a measure of recognizing thepresent situation in a route tracing. We also presented thebasic idea for updating the route stored in the local routememory unit in the case of a slight change in circum-stances by changing the connection weight of “the fully-connected network”. The authors emphasize that theproposed system implementing the sequence learningmechanism can be completed even by a small palmtopcomputer. This feature is very important from an engine-ering viewpoint.

The system complements the latest digital navigation systems, which require GPS and an up-to-date map for accurate navigation. In particular, the system is promising as a low-cost and effective navigation system. It can be used in places where GPS signals are unavailable and in shopping malls where costly dedicated equipment such as RFID tags is not employed.

In our future work, we aim to develop a decision mechanism for the update timing and to investigate the adaptability of the proposed method. Improvement of the representation ability of the certainty index is necessary for recognizing more complex situations. A robust landmark extraction technique that can operate in real scenes is also important for using the proposed method in practical applications.

VOLUME 4, N° 2 2010

(a) Additional landmark

(b) Change in the certainty index

Fig. 13. Behavior of route tracing in the case of a slight change in the route stored in the local route memory. (a) At a crossing, the system evaluates all possible routes and chooses the route with the highest matching degree as the correct direction. (b) The robot recognizes an additional landmark from a change in the CI and by comparing the current route with a shifted memory route.


ACKNOWLEDGMENTS

This work was supported in part by a 21st Century Center of Excellence Program <Center#J19>, "World of Brain Computing Interwoven out of Animals and Robots (PL: T. Yamakawa)", granted in 2003 to the Department of Brain Science and Engineering, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology by the Japan Ministry of Education, Culture, Sports, Science and Technology.

AUTHORS
Tsutomu Miki*, Hatsuo Hayashi, Yuta Goto, Masahiko Watanabe, Takuya Inoue - Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4, Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan. Tel/Fax: +81-93-695-6125. E-mail: [email protected].
* Corresponding author

References
[1] Makoto M., "Recent Topics of Car Electronics", Technical Report of IEICE, ICD (in Japanese), vol. 94, no. 243, 1994, pp. 69-75.
[2] Yamaguchi Y., "A Theory of hippocampal memory based on theta phase precession", Biological Cybernetics, vol. 89, 2003, pp. 1-9.
[3] Hafting T., et al., "Hippocampus-independent phase precession in entorhinal grid cells", Nature, vol. 453, 2008, pp. 1248-1252.
[4] Wagatsuma H., Yamaguchi Y., "Content-Dependent Adaptive Behavior Generated in the Theta Phase Coding Network", ICONIP 2007, Part II, LNCS 4985, 2008, pp. 177-184.
[5] Yoshida M., Hayashi H., "Emergence of sequence sensitivity in a hippocampal CA3-CA1 model", Neural Networks, vol. 20, 2007, pp. 653-667.
[6] Igarashi J., Hayashi H., Tateno K., "Theta phase coding in a network model of the entorhinal cortex layer II with entorhinal-hippocampal loop connections", Cognitive Neurodynamics, vol. 1, no. 2, 2007, pp. 169-184.
[7] Natsume K., et al., "The Possibility of Brain-Inspired Time-Series Memory System using the Recurrent Neuronal Network between Entorhinal Cortex and Hippocampus", Joint 4th Int. Conf. on Soft Computing and Intelligent Systems and 9th Int. Sympo. on Advanced Intelligent Systems (SCIS&ISIS2008), Nagoya, 2008, pp. 1778-1782.
[8] Giovannangeli C., Gaussier P., "Autonomous vision-based navigation: Goal-oriented action planning by transient states prediction, cognitive map building, and sensory-motor learning", International Conference on Intelligent Robots and Systems (IROS 2008), 2008, pp. 676-683.
[9] Barrera A., Weitzenfeld A., "Biologically-inspired robot spatial cognition based on rat neurophysiological studies", Autonomous Robots, vol. 25, no. 1-2, 2008, pp. 147-169.
[10] Miki T., et al., "Practical Local Navigation System Based on Entorhino-hippocampal Functions", Joint 4th Int. Conf. on Soft Computing and Intelligent Systems and 9th Int. Sympo. on Advanced Intelligent Systems (SCIS&ISIS2008), Nagoya, 2008, pp. 1783-1787.
[11] Ishii K., Miki T., "Mobile robot platforms for artificial and swarm intelligence researches", Brain-Inspired IT, vol. 3, 2007, pp. 39-42.


Abstract:

In this paper, we propose a new method for building an environmental map in a self-organizing manner using visual information from a mobile robot. The method is based on a Higher Rank of Self-Organizing Map (SOM²), in which Kohonen's SOM is extended to create a map of data distributions (a set of manifolds). The SOM² is expected to be capable of creating an environmental map in a self-organizing manner from visual information, since the set of visual information obtained from each position in the environment forms a manifold at every position. We also show the effectiveness of the proposed method.

Keywords: self-organizing map, map building, place cells, head direction cells, autonomous robot.

1. Introduction

1.1. Aim of this study
The ability to build an environmental map based on sensor information is necessary for an autonomous robot to perform self-localization, identification of direction, and self-navigation. In animals, a map building capability is very important for accomplishing crucial behaviors such as predation, nest homing, path planning, and so on. With regard to research on map building in animals, O'Keefe and Dostrovsky identified "place cells", which respond preferentially to specific spatial locations, in the hippocampus of the rat [1]. The place cells encode the observed sensory information as the animal explores its environment. Moreover, O'Keefe and Nadel propounded the theory that animals build a "cognitive map" within the brain, based on research on the place cells [2]. It is thus evident that animals build a cognitive map, which plays a role in path planning using landmarks (i.e., particular information of local environments) coded by the place cells. Furthermore, Taube et al. identified "head direction cells", which respond preferentially according to the direction of the head [3]. The head direction cells are considered to be involved in map building, since it is important to know one's own direction before moving to a destination. Therefore, a robot is expected to be able to perform navigation automatically using a cognitive map model that incorporates the mechanisms of place cells and head direction cells in its implementation. Moreover, it may be possible to identify, from such a model, a mechanism akin to the map building of animals.

With regard to a technical model for map building, we propose using a Higher Rank Self-Organizing Map (SOM²) in this study. The SOM², proposed by Furukawa [4], is an extended model of Kohonen's SOM [5]. The SOM² has a SOM-type modular network structure, but with nested SOMs (Fig. 1(a)). It is the task of each SOM module in the SOM² to identify a manifold, which approximates a data vector set, thereby enabling the entire SOM² to find the formation of the fiber bundle in a self-organizing manner (Fig. 1(b)). It has been suggested in [4] that, by using this feature, the SOM² can, with unsupervised learning, build a map which is able to estimate the


BUILDING A COGNITIVE MAP USING AN SOM²

Kazuhiro Tokunaga, Tetsuo Furukawa

(a) (b)

Fig. 1. Architecture of the SOM². (a) The SOM² is a nest structure of SOMs. The position of each child SOM is maintained by connecting neighboring child SOMs with paths. (b) Each child SOM approximates each episode with a graph map (i.e., a manifold) through training of the episodes. Connecting the correspondence points of the maps represents the fiber.


location and azimuth direction independently, based only on sets of the image data vectors observed from omni-directional vision sensors. It has also been suggested that the SOM² can, with unsupervised learning, build a cognitive map with the features given below, using only visual information: each module of the SOM² represents a place cell, which codes a specific location, while each reference vector unit in the SOM module represents a head direction cell. However, a detailed verification has not yet been done. Hence, this study aims to confirm through various computer simulations whether or not the position and the azimuth direction can be estimated from a map acquired by unsupervised learning of the SOM².
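The nested structure described above can be pictured as a minimal data-structure sketch. All sizes and class names below are illustrative assumptions, not from the paper; only the nesting (a parent lattice whose units are whole child SOMs) follows the text.

```python
# Minimal sketch of the SOM^2 ("SOM of SOMs") structure: a parent lattice
# whose units are whole child SOMs rather than single reference vectors.
# Sizes and names are illustrative assumptions.
import numpy as np

class ChildSOM:
    """One module of the parent map; its reference vectors play the role of
    head direction cells once trained."""
    def __init__(self, n_units, dim, rng):
        self.w = rng.standard_normal((n_units, dim))  # (L, D) reference vectors

class SOM2:
    """Parent SOM: a kx * ky lattice of child SOMs (candidate place cells)."""
    def __init__(self, kx, ky, n_units, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.shape = (kx, ky)
        self.children = [ChildSOM(n_units, dim, rng) for _ in range(kx * ky)]

    def module_error(self, episode):
        """Mean squared distance from each data vector in an episode to its
        best matching unit, computed for every child SOM."""
        errs = []
        for child in self.children:
            d = ((episode[:, None, :] - child.w[None, :, :]) ** 2).sum(-1)
            errs.append(d.min(axis=1).mean())
        return np.array(errs)

som2 = SOM2(kx=5, ky=5, n_units=16, dim=64)
episode = np.random.default_rng(1).standard_normal((32, 64))
bmm = int(np.argmin(som2.module_error(episode)))  # best matching module
```

The module with the lowest episode error acts as the "winner" at the parent level, which is the role the best matching module plays in the algorithm of Section 3.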

1.2. Related works
Map building is an important theme in studies involving autonomous robots. Consequently, in recent years, various methods for building an environmental map have been proposed. A classical map building method, dead reckoning, can estimate the location and pose (or direction) of a robot by calculating the displacement from an internal sensor, such as a rotary encoder, acceleration sensor, and so on. The proposed method builds the map from only the arrangement of the memory of sensor information, while dead reckoning builds the map by using odometry information. On the other hand, several methods for map building using external sensors, such as a laser range finder, vision sensor, and so on, have also been proposed. The most popular method, known as SLAM [6], is often used in map building using external sensors [7], [8], [9]. SLAM can perform self-localization and estimate the structure of the environment around the robot simultaneously, making it a technologically excellent method for map building. Nevertheless, SLAM requires a highly accurate observation model and locomotion model, a priori, since it is necessary to understand the correct structure of the environment [10]. The observation and locomotion models provide the physical measurements of the environment and the physical location of the robot using external sensors, respectively. SLAM builds the map based on the measurements provided by these models. However, it is difficult to develop models that flexibly build the map, since the conditions of the environment and the sensors are nonstationary. Besides, the correspondence between SLAM and map building in an animal's brain has not yet been identified. In contrast, a method has been proposed, called "topological map building", that builds a map abstracted by a graph [11]. Nodes and edges in the graph represent specific locations (areas) and pathways between areas, respectively. Typically, sensor information for landmarks is memorized in nodes, while pathways between landmarks are stored as edges. Then self-localization and path planning can be performed by matching the sensor information to the map. Since each node memorizes information that represents a local area, it is not necessary to comprehend the correct structure of the environment. In addition, the method requires no physics models beforehand. Moreover, it is very interesting because this method resembles the cognitive map based on place cells in the hippocampus of an animal's brain. However, this method has two important issues: how the allocation of nodes and the connection of paths are decided in a self-organizing manner. As a solution to these issues, applying a self-organizing neural network (SONN), such as the Self-Organizing Map (SOM), Topology Representing Network (TRN), and so on, has been suggested [12]. Reference vectors of the SONN are the nodes that memorize the information of specific areas. Moreover, paths that connect reference vectors represent the pathways between nodes. The SONN can perform the allocation of nodes and the connection of paths in a self-organizing manner with unsupervised learning. Tanaka et al. proposed an implementation model incorporating the place cells in the hippocampus using a TRN [13]. This method does not, however, build a cognitive map in the same way as an animal, because GPS information is included as training input. In addition, K. Chokshi et al. proposed a method for self-localization using the categorization of vision information by an SOM [14]. Their method is also an implementation model of the place cells. Nevertheless, since these methods do not include the functionality of the head direction cells, a robot cannot identify its own direction.

In contrast, our proposed method builds a map that can independently estimate the position and direction from only visual information with unsupervised learning. Each module of the SOM² is a node, which memorizes the visual information that represents the local environment in a self-organizing manner. In addition, the memorized visual information is ordered corresponding to direction with unsupervised learning. These features are similar to those of the place cells and head direction cells in the hippocampus. Thus, the method incorporates not only the functionality of the place cells, but also that of the head direction cells. Moreover, the topology of the map acquired with the SOM² and the topology of the environment's geography are nearly equal. Therefore, self-navigation of the robot can be performed very easily by using the map.

2. Map building using an SOM²

In this study, we aim to show that the SOM² can create a self-organizing map in which the position and orientation of a robot can be estimated using only vision sensor information. First, we explain how to acquire the map using an SOM².

When a mobile robot equipped with a vision sensor gets a bird's-eye view of the surrounding area at place A in the environment, as shown in Fig. 2(a), the episode of vision sensor information is distributed as a manifold in a multi-dimensional vector space (sensor space). If the mobile robot observes vision sensor information by rotating 360 degrees at place A, then the episode of vision sensor information is distributed as a one-dimensional toroidal manifold (Fig. 2(b)). In addition, if the mobile robot moves from place A to place B, the episode of sensor information at place B forms a manifold near that of place A (Fig. 2(b)). Thus, the episodes observed at consecutive places in the environment form continuous manifolds in sensor space (Fig. 2(b)). Moreover, the correspondence point between the manifolds corresponds to the shooting angle of the vision sensor (that is, the azimuth direction of the environment). Therefore, it is expected that the position and azimuth direction can be estimated using a map created based on the distance and correspondence point between manifolds.

(a)

(b)

Fig. 2. The episode of vision sensor information is distributed as a manifold in a sensor space. If the mobile robot moves from place A to place B, the episode of sensor information at place B forms a manifold near that of place A. Moreover, the correspondence point between the manifolds corresponds to the shooting angle of the vision sensor.

For this method, we employ the SOM² proposed by Furukawa. The SOM² is an extension of the SOM in which each reference vector unit in the conventional SOM is replaced by an SOM module. In other words, the SOM² is a nest structure of SOMs (Fig. 1(a)). In this paper, the SOM module (child level) is called the "child SOM", while the whole SOM (parent level) is called the "parent SOM". In the SOM², sets (episodes) of vector data are given to the SOM² as training data. The vector data for each episode are distributed on each of the subspaces in vector space (Fig. 1(b)). Each child SOM approximates each episode with a graph map (i.e., a manifold) through training of the episodes. Here, training of each child SOM is performed in such a way that the correspondence points on the maps of the child SOMs are uniform. Thus, connecting the correspondence points of the maps represents the fiber. Besides, the parent SOM orders the maps formed by the child SOMs. Therefore, an SOM² can create a map of manifolds.

In building an environmental map using an SOM², a Ring SOM (RSOM), which approximates the distribution of data vectors by a one-dimensional toroidal manifold, is employed for each child SOM (Fig. 3). Hereafter, this SOM² is referred to as the RSOM×SOM. The episode sets of vision sensor information observed at various places are given to the SOM² as training episodes. After training, each child SOM forms a manifold of each place's episodes. Here, the correspondence points in the map of each child SOM are constant, that is, the environmental azimuth direction is constant. Each reference vector unit of the RSOM represents a head direction cell. Moreover, the parent SOM creates a map in which the topology of the positions in the environment (i.e., the topology of the positions of the manifolds in sensor space) is preserved. As a result, the map of the parent SOM itself represents a geometrical map of the environment. Each module of the parent SOM represents a place cell. In addition, the azimuth directions of the environment in each RSOM are ordered in a self-organizing manner.

Restrictions on the method, however, include that the working environment is open, without obstacles that the robot cannot pass through or that interrupt the robot's view, and that similar visual information does not exist in the environment. It is certainly possible to apply the method in a non-limited environment by enhancing the SOM². (This is addressed in subsection 5.1.) Nonetheless, the aim of this study is to verify that an SOM² can create a self-organizing map in which the position and orientation of a robot can be estimated using only vision sensor information. The robot's working environment for this study is set as follows.

(A) The working environment is open without obstacles. Moreover, the robot can see faraway buildings and mountains, etc., as shown in Figs. 4 and 5.

(B) The robot has an omni-directional camera as its vision sensor.

(C) Only visual information is assumed to be observed by the robot's sensors.

In (A), under normal circumstances, it is preferable that the robot can build the map while looking around with a single directional camera. Building the preferable map from partial information is difficult without enhancing the algorithm of the SOM². Thus, in this study, the episodes are acquired from an omni-directional camera.

3. Algorithm for the SOM² (RSOM×SOM)

In this section, the algorithm for the RSOM×SOM is explained. The RSOM×SOM is an SOM² in which each child SOM is replaced by an RSOM. The difference between an RSOM and a SOM is the definition of the distance measure between reference vectors on the map, since the reference vectors are allocated on a one-dimensional toroid in the RSOM, but on a lattice in the SOM. Otherwise they are the same.
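The ring-versus-lattice distinction can be made concrete with a short sketch. The function names are our own; only the wrap-around property follows the text.

```python
# Sketch: on the RSOM's ring, the unit-to-unit distance wraps around,
# while on an ordinary 1-D SOM lattice it does not.

def ring_distance(l1, l2, n_units):
    """Distance between units on a one-dimensional toroid of n_units."""
    d = abs(l1 - l2) % n_units
    return min(d, n_units - d)

def line_distance(l1, l2):
    """Distance between units on an ordinary (non-wrapping) 1-D lattice."""
    return abs(l1 - l2)

# With 36 units (e.g., one per 10 degrees of heading), units 0 and 35 are
# neighbors on the ring but far apart on the line.
```

This wrap-around neighborhood is what lets the RSOM represent a full 360-degree rotation of the robot without a seam.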

First, we define certain variables. Suppose there are I training episodes, where each episode is composed of J data vectors. The i-th episode is defined as D_i = {x_i1, x_i2, ..., x_ij, ..., x_iJ}, where x_ij is a data vector. Furthermore, the parent SOM is composed of K RSOM modules. Each RSOM module (i.e., child SOM) has L reference vectors. Now, the set of reference vectors in the k-th RSOM module is defined as W_k = {w_k1, w_k2, ..., w_kl, ..., w_kL}.

In the training of an RSOM×SOM, the following three processes are repeated: (1) the evaluative process, (2) the cooperative process, and (3) the adaptive process. These processes are explained below.

(1) Evaluative process
First, the error e_ij^{kl} between each data vector and each reference vector in all child SOMs is calculated as follows:

e_ij^{kl} = || x_ij - w_kl ||² .    (1)

Here, the index l*_ij^k of the best matching unit (BMU) is defined as

l*_ij^k = arg min_l e_ij^{kl} .    (2)

Next, the error E_i^k in each child SOM module for each episode is calculated as

E_i^k = (1/J) Σ_{j=1}^{J} e_ij^{k l*_ij^k} ,    (3)

where e_ij^{k l*_ij^k} is the error of the BMU for x_ij. Thus, the error E_i^k in each child SOM module is the mean of the BMU errors over all data vectors in one episode. Moreover, the best matching module (BMM) k*_i for the i-th episode is defined as

k*_i = arg min_k E_i^k .    (4)

(2) Cooperative process
In the cooperative process, the learning rates φ_i^k and β_ij^l are calculated to decide the update values of all reference vectors. φ_i^k is defined as follows:

φ'_i^k = exp( - d(k, k*_i)² / (2 σ_P(t)²) ) ,    (5)

φ_i^k = φ'_i^k / Σ_{i'} φ'_{i'}^k ,    (6)


Fig. 3. Architecture of the RSOM×SOM, which is an engineering model of the place cells and the head direction cells. The map of the parent SOM itself represents a geometrical map of the environment. Each reference vector unit of an RSOM represents a head direction cell. Moreover, each RSOM represents a place cell.

Fig. 4. Type 1 environment. Fig. 5. Type 2 environment.

σ_P(t) = σ_Pmin + (σ_Pmax - σ_Pmin) exp(-t/τ_P) ,    (7)

where φ_i^k is the learning rate at the parent level, the exponential in Eq. (5) is a neighborhood function, and d(k, k*_i) represents the distance between the k-th module and the BMM on the map. In addition, σ_P represents the neighborhood radius, which decreases exponentially with learning step t. In this study, σ_P(t) is defined as Eq. (7). σ_Pmax and σ_Pmin are the maximum and minimum radii of the neighborhood function, respectively. Moreover, τ_P is a time constant for the decreasing speed of the neighborhood radius. Next, β_ij^l is defined as follows:

β'_ij^l = exp( - d(l, l*_ij^{k*_i})² / (2 σ_C(t)²) ) ,    (8)

β_ij^l = β'_ij^l / Σ_{j'} β'_{ij'}^l ,    (9)

σ_C(t) = σ_Cmin + (σ_Cmax - σ_Cmin) exp(-t/τ_C) .    (10)

β_ij^l is the learning rate at the child level. Note that the distance in Eq. (8) is measured, on the one-dimensional toroid, between the l-th reference vector and the BMU of the BMM. This encourages the preservation of homogeneity in the map of each child SOM. σ_Cmax, σ_Cmin, and τ_C are the maximum radius, minimum radius, and time constant at the child level.

(3) Adaptive process
All reference vectors are updated by

w_kl = Σ_{i=1}^{I} Σ_{j=1}^{J} φ_i^k β_ij^l x_ij .    (11)

In training an RSOM×SOM, the above three processes are repeated.
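The three processes of one training iteration can be sketched in NumPy. This is a compact illustrative implementation under our reading of Eqs. (1)-(11) (batch update with learning rates normalized over episodes and over data vectors); the hyperparameter values and a 1-D parent lattice are assumptions for brevity.

```python
# Sketch of one RSOM×SOM training iteration following Eqs. (1)-(11).
# x[i, j] are the data vectors of episode i; w[k, l] the reference vectors
# of module k. A 1-D parent lattice and the radii are illustrative choices.
import numpy as np

def rsom_x_som_step(x, w, t, tau_p=20.0, tau_c=20.0,
                    sig_p=(3.0, 0.5), sig_c=(4.0, 0.5)):
    I, J, D = x.shape            # episodes, vectors per episode, dimension
    K, L, _ = w.shape            # child modules, units per module

    # (1) squared error between every data vector and every reference vector
    e = ((x[:, :, None, None, :] - w[None, None, :, :, :]) ** 2).sum(-1)
    # (2) BMU per (episode, vector, module); (3) module error; (4) BMM
    l_star = e.argmin(axis=3)                                         # (I, J, K)
    E = np.take_along_axis(e, l_star[..., None], 3)[..., 0].mean(1)   # (I, K)
    k_star = E.argmin(axis=1)                                         # (I,)

    # (5)-(7) parent-level learning rate on the module lattice (1-D here)
    s_p = sig_p[1] + (sig_p[0] - sig_p[1]) * np.exp(-t / tau_p)
    dk = np.abs(np.arange(K)[None, :] - k_star[:, None])
    phi = np.exp(-dk ** 2 / (2 * s_p ** 2))
    phi /= phi.sum(axis=0, keepdims=True)        # (6) normalize over episodes

    # (8)-(10) child-level learning rate on the ring, using the BMU of the BMM
    s_c = sig_c[1] + (sig_c[0] - sig_c[1]) * np.exp(-t / tau_c)
    l_bmm = np.take_along_axis(l_star, k_star[:, None, None], 2)[..., 0]  # (I, J)
    dl = np.abs(np.arange(L)[None, None, :] - l_bmm[:, :, None])
    dl = np.minimum(dl, L - dl)                  # toroidal (ring) distance
    beta = np.exp(-dl ** 2 / (2 * s_c ** 2))
    beta /= beta.sum(axis=1, keepdims=True)      # (9) normalize over vectors

    # (11) batch update of every reference vector
    return np.einsum('ik,ijl,ijd->kld', phi, beta, x)

rng = np.random.default_rng(0)
x = rng.random((5, 8, 3))        # I=5 episodes, J=8 vectors, D=3 dims
w = rng.random((4, 6, 3))        # K=4 modules, L=6 ring units
w_new = rsom_x_som_step(x, w, t=0)
```

Because the learning rates are normalized, Eq. (11) yields each new reference vector as a convex combination of the training data.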

4. Simulation

This section presents the verification results for two types of simulations. The purpose of the simulations is to confirm whether the RSOM×SOM can build an environmental map from which the position and head direction can be estimated using only vision sensor information.

4.1. Framework for simulations
Using the "Webots" robotics simulation software developed by Cyberbotics Ltd., we created two types of working environments for the simulations. Type 1 is an environment in which the four walls are painted red, blue, green, and yellow (illustrated in Fig. 4). Type 2 is a park-like working environment, shown in Fig. 5. The area in which the robot is able to move is a meter long by a meter wide (Figs. 4 and 5). Furthermore, the robot has an omni-directional vision sensor. Fig. 6 is an example of a panoramic image taken from the omni-directional vision sensor. The size of the panoramic image is 512 x 64 pixels. In addition, the colors of the panoramic image are converted to 64 colors (in other words, red, green, and blue are each converted to 4 levels).

Fig. 6. The upper image is an example of a panoramic image observed from the vision sensor. The episode given to the RSOM×SOM is created from the panoramic image.
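The conversion from a panorama to one training episode can be sketched as below. The window layout and the 4-level-per-channel binning follow the figures quoted above (512 x 64 panorama, 64 x 64 clips, 64 color bins); the function itself and its wrap-around clipping are our illustrative assumptions.

```python
# Sketch: clipping a 512x64 panoramic image into an episode of J = 32
# color-histogram vectors with 4*4*4 = 64 color bins. Illustrative only.
import numpy as np

def episode_from_panorama(panorama, n_windows=32, win_w=64):
    """panorama: (64, 512, 3) uint8 image. Returns an (n_windows, 64) episode
    of normalized color histograms, windows wrapping around 360 degrees."""
    h, w, _ = panorama.shape
    step = w // n_windows
    episode = []
    for j in range(n_windows):
        # clip a win_w-wide window, wrapping around the panorama seam
        cols = np.arange(j * step, j * step + win_w) % w
        window = panorama[:, cols, :]
        # reduce each RGB channel to 4 levels -> bin index in 0..63
        idx = (window[..., 0] // 64) * 16 + (window[..., 1] // 64) * 4 \
              + (window[..., 2] // 64)
        hist = np.bincount(idx.ravel(), minlength=64).astype(float)
        episode.append(hist / hist.sum())
    return np.stack(episode)

pano = np.random.default_rng(0).integers(0, 256, size=(64, 512, 3),
                                         dtype=np.uint8)
D_i = episode_from_panorama(pano)   # one training episode, shape (32, 64)
```

Each row of the returned array plays the role of one data vector x_ij in the episode D_i.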

Next, we explain the episodes given to the RSOM×SOM. An episode D_i = {x_ij} is created from the observed panoramic image. Each data vector x_ij is a color histogram vector extracted from an image (size 64 x 64) clipped from the panoramic image. For all data vectors, the color histograms are extracted evenly from the entire panoramic image.

Next, the simulation flow is explained. First, the robot moves around randomly in the environment while simultaneously observing panoramic images at various positions in the working environment. Next, the set of episodes extracted from the panoramic images is given to the RSOM×SOM as training episodes. Here, note that X-Y coordinates are not included in the training episodes. The training of the RSOM×SOM is performed offline. After the training process, the two items (A) and (B) below are verified to confirm whether the RSOM×SOM has built an environmental map that can estimate the position (rough X-Y coordinates) and head direction using only vision sensor information from the two types of environments.

(A) Confirmation of estimating the position
The BMM on the RSOM×SOM is monitored when the robot moves to an arbitrary place in the working environments. If the RSOM×SOM can build a map in which the topology of the geography is preserved, then the topology of the robot's places is almost the same as that of the BMMs' positions.

(B) Confirmation of estimating the head direction
First, after the robot is put in an arbitrary place, the episode observed from this place is given to the RSOM×SOM. The robot is turned to face north, and then a BMM corresponding to the episode is decided. In addition, a BMU in the BMM is decided after the color histogram vector of the front image (64 x 64 pixels) is given to the BMM. If the robot is rotated on the spot, the BMU will change continuously on the map of the RSOM. Thus, the map of the RSOM preserves the topology of the direction. In addition, it is expected that consistency is maintained in the reference vectors of every module.
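The two read-out steps, (A) position via the BMM and (B) heading via the BMU, can be sketched as follows. `som2_modules` is assumed to be the trained list of RSOM reference-vector arrays, one (L, D) array per module; the function names are hypothetical.

```python
# Sketch of the two verification queries: (A) position = best matching
# module, (B) heading = best matching unit inside that module.
import numpy as np

def estimate_position(som2_modules, episode):
    """(A): return the BMM index, i.e. the module whose manifold best fits
    the observed episode (mean BMU error, as in the evaluative process)."""
    errors = [((episode[:, None, :] - w[None, :, :]) ** 2).sum(-1).min(1).mean()
              for w in som2_modules]
    return int(np.argmin(errors))

def estimate_direction(som2_modules, bmm, front_histogram):
    """(B): return the BMU index inside the BMM for the color histogram of
    the front image; with L ring units this quantizes the heading."""
    w = som2_modules[bmm]
    return int(((front_histogram[None, :] - w) ** 2).sum(-1).argmin())
```

On a trained map, neighboring places should yield neighboring BMMs, and rotating the robot on the spot should move the BMU continuously around the ring.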

4.2. Simulation results in Type 1 environment
First, before the training of the RSOM×SOM, the robot moved randomly in the environment and simultaneously observed the set of panoramic images. In Fig. 7, the trajectory of the robot is depicted by “-”, while “ ” denotes the positions at which the panoramic images were observed. Note that the direction of the robot was not fixed. Panoramic images were observed from 200 positions; in other words, there were 200 training episodes. In addition, there were 32 data vectors per episode.

Fig. 7. The trajectory of the robot during observation of visual information in the type 1 environment. The trajectory of the robot is depicted by “-”, while “ ” denotes the positions at which the panoramic images were observed.

Fig. 8. The trajectory of the robot in “(A) Confirmation of estimating the position” in the type 1 environment.

Fig. 9. The result of map building using the RSOM×SOM in the type 1 environment. The lattices denoted by letters of the alphabet are the BMMs at the positions shown in Fig. 8.

The results of “(A) Confirmation of estimating the position” are shown in Figs. 8 and 9. Fig. 8 shows the trajectory of the robot. In addition, Fig. 9 shows the RSOM×SOM's map, in which each lattice corresponds to an RSOM module. The episodes observed at positions “A” to “T” in Fig. 8 were given to the RSOM×SOM as test data. Moreover, the lattices denoted by letters of the alphabet in Fig. 9 are the BMMs at the positions shown in Fig. 8. Thus, it is possible for an RSOM×SOM to build a map that preserves the topology of the geography. The same result was obtained consistently despite training being repeated several times. The results of “(B) Confirmation of estimating the head direction” are shown in Figs. 10(a), (b), and (c). Each figure shows the result of placing the robot at position A, F, and J in Fig. 8, respectively. Having been placed at each position, the robot was rotated 360 degrees in intervals of 5 degrees. Figs. 10(a), (b), and (c) show the relationship between the head direction of the robot and the BMU. These results confirm that the head direction of the robot and the BMUs change continuously. Moreover, the head direction could easily be estimated from the BMU.

(a)

(b)

(c)

Fig. 10. The result of “(B) Confirmation of estimating the direction” in the type 1 environment. (a), (b), and (c) are the relationships between the head direction of the robot and the RSOM's unit at positions A, F, and J in Fig. 8, respectively.


Fig. 11. The trajectory of the robot during observation of visual information in the type 2 environment.

Fig. 12. The trajectory of the robot in “(A) Confirmation of estimating the position” in the type 2 environment.

Fig. 13. The result of map building using the RSOM×SOM in the type 2 environment. The lattices denoted by letters of the alphabet are the BMMs at the positions shown in Fig. 12.

4.3. Simulation results in Type 2 environment
Fig. 11 shows the trajectory of the robot in the environment of Fig. 5. The panoramic images were observed at 200 positions; in other words, there were 200 training episodes. In addition, there were 32 data vectors per episode. The results of “(A) Confirmation of estimating the position” are shown in Figs. 12 and 13. These results suggest that an RSOM×SOM is able to build a map that preserves the topology of the geography even if the visual information varies. The results were consistent despite training being done several times. The results of “(B) Confirmation of estimating the head direction” are shown in Fig. 14. Figs. 14(a), (b), and (c) show the results of placing the robot at positions A, G, and K in Fig. 12, respectively. In these results, the BMU corresponding to the head direction of the robot did not change continuously. This suggests that estimation of the head direction was difficult because of the existence of similar color histograms.

(a)

(b)

(c)

Fig. 14. The result of “(B) Confirmation of estimating the direction” in the type 2 environment. (a), (b), and (c) are the relationships between the head direction of the robot and the RSOM's unit at positions A, G, and K in Fig. 12, respectively.

5. Discussion

5.1. Map building in complex environments
We have described map building using an SOM² in a complex environment containing obstacles and similar


vision information. In an adaptation of the SOM², child SOMs (i.e., place cells) are allocated to unreachable places, since the parent SOM is fixed in the lattice topology. A solution to this problem is to use, as the parent SOM, a self-organizing neural network in which the network topology is not fixed, such as the NG (Neural Gas) or TRN. It is shown in [4] that the parent and child of the SOM² can be designed using any SONN besides the SOM. It is expected that the place cells would be allocated only to the subspace in which input episodes are distributed, by replacing the parent SOM with an NG. Besides, when similar vision information exists, it is considered necessary to introduce a map building method that includes a time transition into the algorithm of the SOM².

5.2. Feature extraction from vision information
In this study, the color histogram was used as a simple feature extraction method, since the study aims to verify map building by the SOM². The simulations showed that map building from only color information is possible. However, in a real environment it is difficult to distinguish the local environment from only color information. Therefore, a technique for recognizing the environment, such as SIFT [15], should be used for feature extraction.

5.3. RSOM×SOM's responses to a changing environment

Fig. 15. Type 2 environment to which the human is added.

Fig.16. The result in type 2 environment to which thehuman is added.

Even if some of the environment change, it is possibleto estimate the position and direction with map that wasbuilt by training of the SOM . We experimented as followsto verify the issue. First, the map was built in the type 2environment. Next, a human’s object was put on the envi-ronment (Fig.15). Namely, some of the environment waschanged with the human’s object. Final, (A) and (B) of si-mulation were verified. The results are shown to Fig.16.BMMs on the map changed as shown in Fig.16 when therobot was moved as shown in Fig.12 on the environment.There is little difference between result (Fig.13) of theenvironment where the human’s object is not put and thisresult (Fig.16). Thus, it is suggested that a part of changenot influence the position estimation.
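A color histogram of the kind used as the feature vector in this work can be computed in a few lines. The joint-RGB binning below is a generic sketch; the bin count is our assumption, not the paper’s setting.

```python
import numpy as np

def color_histogram(image, bins=4):
    """Normalized joint RGB histogram of an (H, W, 3) uint8 image,
    flattened to a bins**3-dimensional feature vector."""
    idx = (image.astype(np.uint32) * bins) // 256        # quantize each channel
    flat = (idx[..., 0] * bins + idx[..., 1]) * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins**3)  # joint bin counts
    return hist / hist.sum()                             # normalize to sum 1

img = np.zeros((8, 8, 3), dtype=np.uint8)  # an all-black test image
h = color_histogram(img)
```

Such a histogram is cheap and rotation-invariant, which is exactly why, as noted above, it cannot separate visually similar places the way a keypoint descriptor like SIFT can.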

In this paper, we confirmed that the SOM² could build a cognitive map that includes features of the place cells and head direction cells. It was shown that both the position and the azimuth direction could be estimated from the map acquired by unsupervised learning of the SOM². The SOM² model is not based on the neurological function of the hippocampus, but is modeled technologically in a topological way. A model that imitates the function of the cognitive map in animals more closely can be developed by creating an algorithm that introduces a time transition of information into the SOM².

- Department of Brain Science and Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan. E-mails: [email protected], [email protected].
* Corresponding author


6. Conclusion

AUTHORS
Kazuhiro Tokunaga*, Tetsuo Furukawa

References
[1] O’Keefe J., Dostrovsky J., “The hippocampus as a spatial map: preliminary evidence from unit activity in the freely moving rat”, Brain Res., vol. 34, 1971, pp. 171-175.
[2] O’Keefe J., Nadel L., The Hippocampus as a Cognitive Map, Oxford: Oxford University Press, 1978.
[3] Taube J.S., Muller R.U., Ranck J.B. Jr., “Head-direction Cells Recorded from the Postsubiculum in Freely Moving Rats”, Journal of Neuroscience, vol. 10, 1990, pp. 436-447.
[4] Furukawa T., “SOM of SOMs”, Neural Networks, vol. 22, issue 4, 2009, pp. 463-478.
[5] Kohonen T., Self-Organizing Maps, New York: Springer-Verlag, 2001.
[6] Gamini Dissanayake M.W., Newman P., Clark S., Durrant-Whyte H.F., Csorba M., “A solution to the simultaneous localization and map building (SLAM) problem”, IEEE Transactions on Robotics and Automation, vol. 17, 2001, pp. 229-241.
[7] Durrant-Whyte H.F., Bailey T., “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”, Robotics and Automation Magazine, vol. 13, 2006, pp. 99-110.


[8] Sim R., Elinas P., Griffin M., “Vision-based SLAM using the Rao-Blackwellised particle filter”. In: IJCAI Workshop on Reasoning with Uncertainty in Robotics, 2005.
[9] Se S., Lowe D.G., Little J.J., “Vision-based global localization and mapping for mobile robots”, IEEE Transactions on Robotics, vol. 21, issue 3, 2005, pp. 364-375.
[10] Yairi T., “Map Building without Localization by Dimensionality Reduction Techniques”. In: The 24th International Conference on Machine Learning (ICML-2007), 2007.
[11] Trullier O., Wiener S.I., Berthoz A., Meyer J., Berthelot P.M., “Biologically-based Artificial Navigation Systems: Review and prospects”, Progress in Neurobiology, vol. 51, 1997, pp. 483-544.
[12] Martinetz T., Schulten K., “Topology Representing Networks”, Neural Networks, vol. 7, issue 3, 1994, pp. 507-522.
[13] Takahashi T., Tanaka T., Nishida K., Kurita T., “Self-organization of place cells and reward-based navigation for a mobile robot”. In: Proceedings of ICONIP’01, 2001, pp. 1164-1169.
[14] Chokshi K., Wermter S., Panchev C., Burn K., “Image Invariant Robot Navigation Based on Self Organizing Neural Place Codes”, Lecture Notes in Computer Science, vol. 3575, 2005, pp. 88-106.
[15] Lowe D.G., “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, vol. 60, no. 3, 2004, pp. 91-110.


Abstract:

1. Introduction

Recently, emotions have been introduced into intelligent

robots [1]. The expression of emotion has advantageous effects on communication between a human and a robot, as well as on the robot’s behaviors. Communication through the expression of emotions is non-verbal and intuitive for the human. For example, internal states of the robot, which are not usually visible, can be communicated to the human by facial expressions based on the emotions the robot generates. As conventional methods for the expression of emotion, artificial intelligence models have emulated some emotions in robots [2], [3]. However, such expression of emotion must be programmed in advance for defined inputs and specific situations, and does not emerge from interactions with unknown environments. Therefore, a novel model that can generate a wide variety of emotions by learning in unknown environments is increasingly required to improve human-robot interaction.

The Emotional-expression Model of the Amygdala (EMA) has been proposed as an artificial neural network of the amygdala from an engineering viewpoint [4]. EMA is based on neuroscience findings about the amygdala, which is an emotional learning system in the brain [5]. The learning of emotions by EMA is achieved interactively by both recognition and classical conditioning of inputs from environments. Furthermore, EMA is suitable for robot applications because it has superior recognition abilities compared with other models of emotion [6], [7].

In this paper, we apply EMA to the expression of the

In this paper, we propose an emotional expression system as a brain-inspired system. The emotional expression is achieved by the Emotional-expression Model of the Amygdala (EMA), an engineering model inspired by emotional learning in the brain. EMA realizes both recognition of sensory inputs and classical conditioning of emotional inputs. Furthermore, dedicated hardware for EMA was developed with a massively parallel architecture using an FPGA, achieving a calculation speed over 20 times faster than an embedded general-purpose computer. Finally, we confirmed the effectiveness of human-robot interaction with the emotions generated by the proposed emotional expression system.

Keywords: emotional learning, amygdala, classical condi-tioning, human-robot interaction.

emotion for an autonomous mobile robot as a brain-inspired system. First, we demonstrate the effectiveness of EMA using a robot simulator. Furthermore, we develop an accelerator for EMA, which allows real-time interaction, using an FPGA. Finally, we implement EMA in the robot as an emotional expression system including sensors, expression devices and the EMA accelerator. The effectiveness of the developed system is confirmed by interactive training of emotional expression between the human and the robot.

The limbic system of the brain is an information processing system involved in emotion and memory. The amygdala, which is a part of the limbic system, is involved in emotional learning. The amygdala receives various sensory stimuli from the inside and the outside of the body via

the sensory thalamus [5]. The sensory stimuli are integrated in the lateral nucleus of the amygdala (LA) and are localized and recognized based on their characteristics. Furthermore, the value of a stimulus is evaluated for the corresponding emotions in the central nucleus of the amygdala (CE). As a consequence, emotional reactions, such as freezing and stress-hormone release, arise throughout the body as emotional responses.

The relationship between the sensory stimulus and the emotional responses is acquired by classical conditioning using the sensory stimulus and the emotional stimulus [8]. The sensory stimulus, such as an auditory stimulus, acts as a conditioned stimulus (CS) and is naturally irrelevant to the emotional responses. On the other hand, the emotional stimulus, such as an electrical shock, is called an unconditioned stimulus (US) since it inherently generates the emotional responses. Classical conditioning is achieved by simultaneously presenting the CS and the US. After conditioning, the emotional responses are induced by observing the CS only. The amygdala is particularly related to conditioning with the fear emotion.

EMA emulates two essential functions among the complex and diverse functions of the amygdala: recognition of the sensory stimulus and conditioning of the emotional response. These two functions are imperative for emotional human-robot interaction. It is preferable that the recognition system is self-organized through the interaction. Furthermore, the recognition should be

2. Emotional-expression model of the amygdala

2.1. Amygdala

2.2. Architecture of EMA


A HUMAN ROBOT INTERACTION BY A MODEL OF THE EMOTIONAL LEARNING IN THE BRAIN

Satoshi Sonoh, Shuji Aou, Keiichi Horio, Hakaru Tamukoh, Takanori Koga, Takeshi Yamakawa

Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 4, N° 2 2010

Articles48

Page 50: JAMRIS 2010 Vol 4 No 2

adaptive to environmental changes. In order to realize an interactive and adaptive recognition system, we adopted the Self-Organizing Map (SOM) [9] and its adaptive learning rule in EMA.

The EMA architecture satisfying these two functions is inspired by anatomical findings of the amygdala. The architecture has three layers to integrate the sensory stimuli, as shown in Fig. 1. The sensory input layer has several input units and corresponds to the entrance area of the amygdala, including the sensory thalamus. The LA layer has a number of competitive units and receives the sensory stimulus. The competitive units are arranged in a two-dimensional array and can extract characteristics of the sensory stimulus as a feature map in a self-organizing manner. Finally, emotional values, which represent the strength of the emotional responses, are evaluated in the CE layer. Several types of emotions, such as fear, pleasure and surprise, are available in EMA, although the amygdala is specifically related to the fear emotion.
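The three-layer structure described above maps directly onto a small data structure. The following sketch is illustrative only; the 8×8 LA grid and the two emotions are assumptions for the example, not values fixed by EMA.

```python
import numpy as np

class EMA:
    """Three layers of EMA: a sensory input of dimension M, a 2-D array
    of competitive LA units, and CE emotional weights for K emotions."""
    def __init__(self, M=3, rows=8, cols=8, K=2, seed=0):
        rng = np.random.default_rng(seed)
        # Lattice coordinates of each LA unit (used for the neighborhood).
        self.grid = np.array([(r, c) for r in range(rows)
                              for c in range(cols)], dtype=float)
        self.W = rng.random((rows * cols, M))   # LA reference vectors
        self.u = np.zeros((rows * cols, K))     # CE emotional weights

ema = EMA()
```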

Let x(t) = (x_1(t), …, x_m(t), …, x_M(t)) be an input vector that represents the sensory stimulus at time step t, and let w_i(t) = (w_{1,i}(t), …, w_{m,i}(t), …, w_{M,i}(t)) be the reference vector of the i-th unit on the LA layer. The best matching unit (BMU) for the sensory stimulus is selected by the following equation.

BMU(t) = argmin_i ||x(t) − w_i(t)||²,  error(t) = min_i ||x(t) − w_i(t)||    (1)

EMA regards the BMU as the classified CS of the sensory stimulus. The reference vectors are updated by the following equations.

w_i(t+1) = w_i(t) + α(t) h_{i,BMU(t)} (x(t) − w_i(t))    (2)

h_{i,BMU(t)} = exp(−D(i,BMU(t))² / (2σ(t)²))    (3)

2.3. Algorithm of EMA

Here, h_{i,BMU(t)} denotes the neighborhood function and D(i,BMU(t)) denotes the distance between the i-th unit and the BMU on the LA layer. α(t) and σ(t) are the learning ratio and the neighboring width at time step t, respectively. These parameters in EMA are determined based on the degree of adaptation to the sensory stimulus, because adaptive learning is important for interactive learning between the human and the robot. The adaptive learning is achieved by the following equations [11].

α(t) = error(t) / error_max    (4)

σ(t) = max(σ_max α(t), σ_min)    (5)

Here, error_max is the maximum error from the initial state up to time step t, and α(t) is the normalized error used as the adaptation degree. σ_max and σ_min are the maximum and minimum neighboring widths, respectively. These equations mean that the learning ratio and the neighboring width increase when the normalized error is large and decrease when the normalized error is small.
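Equations (1)–(5) together form one stimulus-recognition step. The following is a minimal NumPy sketch of that step; the function and variable names are ours and the grid size is an assumption, not the reference implementation.

```python
import numpy as np

def ema_recognize(x, W, grid, err_max, sigma_max=1.0, sigma_min=0.1):
    """One stimulus-recognition step of EMA (Eqs. (1)-(5)).
    x: sensory stimulus, shape (M,); W: reference vectors, shape (N, M);
    grid: 2-D lattice coordinates of the N LA units, shape (N, 2);
    err_max: largest quantization error seen so far."""
    dists = np.linalg.norm(W - x, axis=1)           # ||x - w_i||
    bmu = int(np.argmin(dists))                     # Eq. (1): best matching unit
    err = dists[bmu]
    err_max = max(err_max, err)
    alpha = err / err_max if err_max > 0 else 0.0   # Eq. (4): normalized error
    sigma = max(sigma_max * alpha, sigma_min)       # Eq. (5): adaptive width
    D = np.linalg.norm(grid - grid[bmu], axis=1)    # lattice distance to BMU
    h = np.exp(-D**2 / (2 * sigma**2))              # Eq. (3): neighborhood
    W = W + alpha * h[:, None] * (x - W)            # Eq. (2): update
    return bmu, W, err_max

# Toy run: an 8x8 map of 3-D color stimuli, as in the later simulation.
rng = np.random.default_rng(0)
grid = np.array([(r, c) for r in range(8) for c in range(8)], dtype=float)
W = rng.random((64, 3))
err_max = 0.0
for _ in range(1000):
    bmu, W, err_max = ema_recognize(rng.random(3), W, grid, err_max)
```

Because α(t) ≤ 1 and h ≤ 1, each reference vector stays a convex combination of its old value and the stimulus, so the map remains inside the input range.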

Furthermore, the relationship between the sensory stimulus and the emotional stimulus is acquired using a learning model of classical conditioning [10].

Let E(t) = (E_1(t), …, E_k(t), …, E_K(t)) be an input vector that represents the strength of the emotional stimulus, and let V(t) = (V_1(t), …, V_k(t), …, V_K(t)) be an output vector of the emotional value that represents the strength of the generated emotional responses at time step t, where the suffix k is an index corresponding to a kind of emotion. The emotional value is calculated by the following equations when the sensory stimulus is presented.

V(t) = Σ_i act_i(t) u_i(t)    (6)

act_i(t) = h_{i,BMU(t)} / Σ_j h_{j,BMU(t)}    (7)

Here, act_i(t) is the normalized activity of the i-th unit with respect to



Fig. 1. Architecture of EMA. EMA is inspired by anatomical findings of the amygdala and consists of three layers: the sensory input layer, the LA layer and the CE layer.


The strength of the emotional stimuli is represented as E(t) = (E_p, E_f). In this simulation, the parameters of EMA are as follows: the number of LA units is 64 (8 × 8), σ_max = 1.0, σ_min = 0.1, δ = 0.3.

First, the performance of the stimulus recognition was confirmed. We randomly presented the color ball to the robot. Fig. 3(a) shows the feature map of EMA after 1000 presentations. In the feature map, each color patch shows the features of the sensory stimulus represented by the reference vector of the corresponding LA unit. Neighboring LA units have similar features, and distant LA units have different features; BMU_R, BMU_G and BMU_B denote the best matching units for the red, green and blue stimuli, respectively. Although a uniform feature map was obtained from the random sensory stimuli, we then presented red-biased stimuli an additional 500 times in order to confirm the effectiveness of the adaptive learning rule. The feature map changed and became specialized in red features as a result of the additional stimuli, as shown in Fig. 3(b). The feature map is updated depending on the degree of adaptation to the sensory stimulus. Thus, the recognition in EMA works well even if the environment changes dynamically, which the conventional model [11] cannot accommodate.

Next, the classical conditioning experiment was


3.2. Recognition of the sensory stimulus

3.3. Emotional conditioning

Fig. 3. A feature map in EMA obtained by the stimulus recognition process. Each color patch represents the reference vector of a unit on the LA layer. (a) The map obtained by presenting 1000 random stimuli. (b) The map adapted to additional stimuli (red color).

Fig. 4. Emotional values in the basic classical conditioning experiment. (a) The emotional value of the fear emotion. (b) The emotional value of the pleasure emotion. Emotional values represent the strength of the emotional responses to the sensory stimulus. EMA achieves acquisition and extinction for more than one emotion.

the sensory stimulus, and u_i(t) = (u_{1,i}(t), …, u_{k,i}(t), …, u_{K,i}(t)) is the emotional weight vector of the i-th unit. The emotional weight is updated by the following equation.

tional weight is updated by the following equation.

u_i(t+1) = u_i(t) + δ act_i(t) (E(t) − V(t))    (8)

Here, δ is the conditioning ratio. The algorithm of EMA is summarized in two computational processes: (1) the stimulus recognition process and (2) the conditioning process. Recognition of the sensory stimulus is achieved in the stimulus recognition process. In parallel, prediction and update of the emotional value are achieved in the conditioning process.

In applications such as human-robot interaction, the advantage of EMA over other classical conditioning models, for example the TD model [11], is the self-organizing and adaptive recognition of the sensory stimulus (see [4]). We confirm the effectiveness of EMA by software simulations and by experiments with the developed EMA hardware in the following sections.
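The conditioning process of Eqs. (6)–(8) can be sketched in the same style. The toy loop below shows the predicted emotional value V(t) converging toward a repeatedly paired fear stimulus; the sizes and the neighborhood vector are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def ema_condition(h, u, E, delta=0.3):
    """One conditioning step of EMA (Eqs. (6)-(8)).
    h: neighborhood activities h_{i,BMU(t)} of all N units, shape (N,)
    u: emotional weights u_i(t), shape (N, K)
    E: emotional stimulus E(t), shape (K,)"""
    act = h / h.sum()                        # Eq. (7): normalized activity
    V = act @ u                              # Eq. (6): emotional value V(t)
    u = u + delta * act[:, None] * (E - V)   # Eq. (8): delta-rule update
    return V, u

# Repeatedly pairing one stimulus (fixed h) with a fear US drives the
# predicted value V toward E, i.e. the response is acquired.
h = np.exp(-np.arange(64) / 4.0)   # assumed activity profile around the BMU
u = np.zeros((64, 2))              # two emotions: (fear, pleasure)
E = np.array([1.0, 0.0])           # fear US
for _ in range(200):
    V, u = ema_condition(h, u, E)
```

Since Eq. (8) is a delta rule, V approaches E geometrically; the unpaired pleasure component stays at zero.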


3. Computational simulations

An experiment on the expression of emotion by a dog-like robot was performed by computational simulation. The simulation environment was created with the robotics simulator “Webots” [12] and is shown in Fig. 2. The simulated robot (a SONY AIBO ERS-210) has a vision sensor and a touch sensor. The robot detects the color intensity of the ball in front of it as the sensory stimulus using the vision sensor. Furthermore, the robot sometimes receives tactile stimuli, such as hitting and gentle stroking, from the environment when the sensory stimulus is presented. Here, we assume that the tactile stimulus induces a corresponding emotional response in the robot as the emotional stimulus. EMA implemented in the robot performs the recognition and the conditioning for emotional learning like the amygdala.

The sensory stimulus is represented as a three-dimensional vector x(t) = (I_R, I_G, I_B), where I_R, I_G and I_B are the color intensities of red, green and blue, respectively.

3.1. Simulation environment

Fig. 2. The simulation environment. The robot has a vision sensor to detect the sensory stimulus and a touch sensor to detect the emotional stimulus. The sensory stimulus is the color of the ball objects and the emotional stimulus is a tactile sense of the robot.



performed by simultaneously presenting both the sensory stimulus and the emotional stimulus. The emotional stimulus E = (1,0) was associated with the sensory stimulus x = (1,0,0) every 20 steps during the first 500 steps. In the next 500 steps, the emotional stimulus E = (0,1) was associated with the same sensory stimulus every 20 steps. Fig. 4 shows the emotional values over the learning steps, where V_1 and V_2 correspond to the fear and pleasure emotions, respectively. Emotional values were acquired by the classical conditioning. As a result, the emotional responses to the sensory stimulus were generated without the emotional stimulus. At the end of the conditioning, the emotional value V_1 was eventually lost because the corresponding emotional stimulus was no longer presented. Thus, the “acquisition” and the “extinction”, which are basic principles of classical conditioning, can be achieved in EMA.
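This acquisition/extinction schedule is easy to reproduce in miniature. With a single LA unit, act_i(t) = 1 and Eq. (8) reduces to the Rescorla-Wagner rule u ← u + δ(E − u); the sketch below (ours, not the authors' code) replays the 500+500-step schedule.

```python
# Single-unit reduction of Eq. (8): u <- u + delta*(E - u), so V = u.
# CS paired with E=(1,0) for the first 500 steps, then with E=(0,1).
delta = 0.3
u = [0.0, 0.0]  # [fear, pleasure] weights of the lone unit
history = []
for t in range(1000):
    E = (1.0, 0.0) if t < 500 else (0.0, 1.0)
    if t % 20 == 0:  # the US is presented every 20 steps, as in the experiment
        u = [u[k] + delta * (E[k] - u[k]) for k in range(2)]
    history.append(tuple(u))
# After phase 1 the fear value is acquired; during phase 2 it is
# extinguished while the pleasure value is acquired.
```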

In this simulation, it was confirmed that the robot with EMA was able to generate the emotions by a combination of the recognition and the conditioning. This contribution of EMA makes the behaviors of robots more intelligent, and the human-robot interaction becomes more natural and interactive.

4. FPGA implementation of EMA

Fig. 5. The massively parallel architecture of the EMA hardware. The EMA hardware consists of several local circuits and one global circuit.

Fig. 6. A block diagram of the emotional expression system including the digital hardware of EMA, the emotional sensors and the emotional expression devices.

4.1. Architecture of EMA hardware

To realize real-time processing of EMA in robotics applications, we propose a specific hardware architecture of EMA that employs a hardware-oriented algorithm. The proposed EMA hardware (EMAHW) was developed with a massively parallel architecture, like conventional SOM hardware [14], [15], because the algorithm of EMA is based on the SOM. Fig. 5 shows the massively parallel architecture of EMAHW, which includes 81 units. The architecture consists of several local circuits and one global circuit. Each local circuit corresponds to one LA unit and holds a memory for the reference vector and the emotional weight vector. The global circuit performs the following processes: finding the BMU, adapting the learning parameters, calculating the emotional value, and conditioning. Furthermore, the learning parameters (Eq. (3) and Eq. (4)) were modified in EMAHW as follows:

h_{i,BMU(t)} = 2⁻ⁿ  (n = 0, 1, …, N)    (9)

α(t) = 2⁻ⁿ  (n = 0, 1, …, N)    (10)

These learning parameters allow EMAHW to use shift registers as substitutes for multipliers, which drastically reduces the circuit area and calculation cost.
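Eqs. (9) and (10) restrict h and α to powers of two, so the product α·h·(x − w) in Eq. (2) reduces to a right shift of the difference. The following is a fixed-point illustration of the idea only, not the actual HDL.

```python
def shift_update(w, x, n_alpha, n_h):
    """Fixed-point update w += alpha*h*(x - w) with alpha = 2**-n_alpha
    and h = 2**-n_h (Eqs. (9)-(10)): since 2**-n_alpha * 2**-n_h is
    2**-(n_alpha + n_h), the product is a single right shift and no
    multiplier is needed."""
    return w + ((x - w) >> (n_alpha + n_h))

# 8-bit example: the BMU itself (n_h = 0) learning with alpha = 1/2.
w, x = 64, 192
for _ in range(8):
    w = shift_update(w, x, n_alpha=1, n_h=0)
# w converges toward x, halving the remaining error each step.
```

One shift stage per local circuit is exactly the saving the paragraph above describes.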

EMAHW was implemented on an FPGA (Xilinx Spartan-3E, xc3s1600e) with 81 LA units and 8-bit accuracy. EMAHW operated at clock frequencies up to 50 MHz. The performance of EMAHW was estimated at 632.8 MCUPS (Million Connection Updates Per Second) when the reference vectors were three-dimensional and the emotional weights were two-dimensional. For comparison, the performance of a general-purpose PC (Intel Core 2 Duo, 2.66 GHz) is 43.2 MCUPS with the original algorithm.

Furthermore, the performance of EMAHW was confirmed by comparison with embedded processors available for an autonomous mobile robot. A benchmark test using 1000 three-dimensional sensory stimuli was performed on each processor. Table 1 shows the comparative results. The calculation speed of EMAHW was twenty times or more that of the portable PC in spite of EMAHW having the lowest clock frequency.

To verify the effectiveness of EMA in human-robot interaction, the emotional expression system was implemented in the autonomous mobile robot “WITH” [16]. A block diagram of the emotional expression system is shown in Fig. 6, and the robot including the sensors and the devices is shown in Fig. 7.

The emotional sensors include a CMOS camera to capture images in front of the robot and a capacitance sensor array to detect tactile contact from the human. The main controller receives the captured image from the CMOS camera and sends the averaged color value as the sensory stimulus to EMAHW. Furthermore, the main controller estimates the tactile information, hitting or gentle stroking, from the number of responding sensors and sends this information as the emotional stimulus to EMAHW. EMAHW calculates the emotional value using the provided sensory and emotional stimuli. The emotional-expression devices were developed based on the ear and tail of a dog in order to communicate the robot's emotions to humans. The dog is the

4.2. Performance of EMA hardware

5. Human robot interaction with EMA

5.1. Robot with emotional expression system

most familiar pet, and its emotional expressions have been investigated in detail. The robot's emotions, generated by EMAHW, are expressed by simple movements of the emotional-expression devices. For example, the ears are laid back and the tail is wagged in small motions when the robot feels the fear emotion. In addition, pleasure, confusion and attention emotions have been elaborated.
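The sensing path described above (averaging the camera image into a color stimulus, and classifying touch by the number of responding capacitance sensors) might be sketched as follows; the threshold and the function names are illustrative assumptions, not the robot's actual firmware.

```python
import numpy as np

def sensory_stimulus(frame):
    """Average an RGB frame of shape (H, W, 3) into the 3-D color stimulus."""
    return frame.reshape(-1, 3).mean(axis=0)

def emotional_stimulus(active_sensors, stroke_max=4):
    """Classify touch: few responding sensors -> gentle stroking
    (pleasure US), many -> hitting (fear US). Threshold is assumed."""
    if active_sensors == 0:
        return (0.0, 0.0)            # no US presented
    if active_sensors <= stroke_max:
        return (0.0, 1.0)            # (fear, pleasure): stroking
    return (1.0, 0.0)                # hitting

frame = np.zeros((4, 4, 3)); frame[..., 0] = 200.0  # a red test image
x = sensory_stimulus(frame)
```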

Fig. 7. The robot “WITH” equipped with the emotional expression system.

Fig. 8. The experimental results of the human-robot interaction. Here, the colored marker is the sensory stimulus (CS) and the touch to the robot is the emotional stimulus (US). (a) A scene in which the human is training the robot with the red marker and gentle stroking. (b) A scene in which the robot is expressing the pleasure emotion to the red marker only. (c) The feature map obtained by the interaction. (d) Emotional values acquired by the interaction.



Table 1. A comparison of calculation times in the benchmark test between EMAHW and other embedded processors.


5.2. Experiment in human robot interaction


The emotional expression system including EMAHW was implemented in the robot as the robot's amygdala system. As a result, the robot came to recognize the sensory stimulus from the environment and to generate the emotional value and the expression of the emotion. The human-robot interaction was performed in the real environment. In the interaction, the human as a trainer presented a colored marker as the sensory stimulus and touched the robot as the emotional stimulus. Here, the sensory stimulus (CS) is the presentation of the colored marker, and the emotional stimulus (US) is a touch of the robot with gentle stroking or hitting. At the beginning, the robot responded unconditionally with the specific emotion to the touch, but the colored marker did not induce any emotions.

Fig. 8(a) shows a scene in which the human was training the robot with the red marker and gentle stroking. As the interaction was repeated, the robot came to express the emotion to the red marker only. Fig. 8(b) shows a scene in which the robot was expressing the pleasure emotion by wagging its tail at the red marker, without the touches, after the training. Fig. 8(c) shows the feature map obtained in EMAHW. The feature map preserves the topology of the sensory stimulus, as in the computational simulation. Fig. 8(d) shows the emotional value at each interaction step. The red and blue lines represent the emotional values of the fear and pleasure emotions, respectively. In the real environment, the acquisition and the extinction were successfully achieved.

In the human-robot interaction experiment, the robot recognized the sensory stimulus and predicted the emotional value in the same way animals do. The emotional expression of the robot makes the interaction with the human more intelligent and human-friendly.

6. Conclusion

In this paper, we implemented EMA in simulated and real robots in order to realize the expression of emotions learned from interaction. The expression of the emotions is achieved by the recognition of the sensory stimulus and the classical conditioning in EMA. Furthermore, we proposed the emotional expression system including the accelerator of EMA. The EMA hardware has a computational speed 20 times or more that of other embedded processors. Human-robot interaction with the expression of emotion was achieved in both simulated and real environments. In future work, a system that estimates a stimulus involving the expression of emotion by considering the internal and contextual status of the robot is needed as an extension of EMA. We believe that such a brain-inspired system will achieve a breakthrough in human-robot interaction.

ACKNOWLEDGMENTS

This work was supported by the 21st Century Center of Excellence Program, ”World of Brain Computing Interwoven out of Animals and Robots (PI: T. Yamakawa),” granted in 2003 to the Department of Brain Science and Engineering (Graduate School of Life Science and Systems Engineering), Kyushu Institute of Technology, by the Japan Ministry of Education, Culture, Sports, Science and Technology. This work was also supported by a Grant-in-Aid for Scientific Research (A), 2006, 18200015.

References
[1] Breazeal C.L., Designing Sociable Robots, Cambridge, MA: The MIT Press, 2002.
[2] Fujita M., Kuroki Y., Ishida T., Doi T., “Autonomous Behavior Control Architecture of Entertainment Humanoid Robot SDR-4X”. In: Proc. of 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2003, pp. 960-967.
[3] Sawada T., Takagi T., Fujita M., “Behavior Selection and Motion Modulation in Emotionally Grounded Architecture for QRIO SDR-4X II”. In: Proc. of 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004, pp. 2514-2519.
[4] Sonoh S., Horio K., Aou S., Yamakawa T., “An Emotional Expression Model Inspired by the Amygdala”, International Journal of Innovative Computing, Information and Control, vol. 5, no. 5, 2009, pp. 1147-1160.
[5] Aggleton J.P., The Amygdala: A Functional Analysis, New York: Oxford University Press, 2000.
[6] Armony J.L., Servan-Schreiber D., Cohen J.D., LeDoux J.E., “An anatomically constrained neural network model of fear conditioning”, Behavioral Neuroscience, vol. 109, no. 2, 1995, pp. 246-257.
[7] Morén J., Balkenius C., “A computational model of emotional learning in the amygdala”. In: Proc. of the 6th International Conference on the Simulation of Adaptive Behavior, 2000, pp. 383-391.
[8] Phelps E.A., LeDoux J.E., “Contributions of the amygdala to emotion processing: from animal models to human behavior”, Neuron, vol. 48, 2005, pp. 175-187.
[9] Kohonen T., Self-Organizing Maps, Berlin: Springer-Verlag, 1997.
[10] Rescorla R.A., Wagner A.R., “A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement”. In: Classical Conditioning II: Current Research and Theory, New York: Appleton-Century-Crofts, 1972, pp. 64-99.

AUTHORS
Shuji Aou, Keiichi Horio, Takeshi Yamakawa - Department of Brain Science and Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu-shi, 808-0196 Japan. E-mails: [email protected], [email protected], [email protected].
Satoshi Sonoh* - Corporate Research & Development Center, Toshiba Corporation, 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki-shi, Japan. E-mail: [email protected].
Hakaru Tamukoh - Institute of Symbiotic Science and Technology, Tokyo University of Agriculture and Technology, 2-24-16 Nakamachi, Koganei-shi, 183-8509 Japan. E-mail: [email protected].
Takanori Koga - Fuzzy Logic Systems Institute, 680-41 Kawazu, Iizuka-shi, 820-0067, Japan. E-mail: [email protected].
* Corresponding author



[11] Sutton R.S., “Learning to predict by the methods of temporal differences”, Machine Learning, vol. 3, no. 1, 1988, pp. 9-44.
[12] Cyberbotics, Webots, http://www.cyberbotics.com/
[13] Berglund E., Sitte J., “The parameterless self-organizing map algorithm”, IEEE Transactions on Neural Networks, vol. 17, 2006, pp. 305-316.
[14] Tamukoh H., Horio K., Yamakawa T., “Fast Learning Algorithm for Self-Organizing Map Employing Rough Comparison WTA and its Digital Hardware Implementation”, IEICE Trans. on Electronics, vol. E87-C, no. 11, 2004, pp. 1787-1794.
[15] Hikawa H., “FPGA implementation of self organizing map with digital phase locked loops”, Neural Networks, vol. 18, 2005, pp. 514-522.
[16] Takemura Y., Sato M., Ishii K., “Toward Realization of Swarm Intelligence Mobile Robots”, Brain-Inspired IT II, Amsterdam: Elsevier B.V., 2006, pp. 273-276.


Abstract:

1. Introduction

Robots and robotics technologies are expected to pro-

vide new tools for inspection and manipulation. The dynamics of a robot are changed by the environment and by the robot's own state: weight, attitude and so on. Operation across several environments and robot conditions requires a switching system that can change according to the circumstances. Such a system is called a “Hybrid Dynamical System (HDS)”. An HDS is able to manage many controlled objects, and its theory is therefore very complex. In nature, however, animals are able to move in and adapt between different environments. The adaptation system of animals is realized by the Central Pattern Generator (CPG). The CPG exists in the nervous system of animals and generates rhythmical patterns. Several motion patterns of animals, such as swimming, walking, flapping and so on, are generated by the CPG. The CPG has an “entrainment feature”, which synchronizes with a wave that matches its resonance frequency. The wave of the CPG can be adjusted using sensory feedback. Taga [1] realized a simulation of a biped walking robot based on the idea that a motion pattern is generated by the interaction between the CPG, body dynamics and the environment. Williamson [2] applied a CPG to a humanoid robot that performs cranking, sawing, and drumming. Matsuoka et al. [3] developed a control system for a giant-swing robot which switches between swing mode and rotation mode using a CPG. The goal of our study is the development of a bio-inspired robot control system using CPGs and its application to HDS. In this paper, we report the development of an amphibious multi-link mobile robot as a test bed for HDS and analyze its dynamic properties. Additionally, a distributed motion control system using the entrainment feature of the CPG without sensory feedback was developed.

Robots and robotics technologies are expected as newtools for inspection and manipulation. The dynamics of ro-bot always are changed by environment and robot of statein mission. Therefore, an adaptation system, which is ableto switch controller due to environment and robot of state,is needed. Meanwhile, animals are able to go through seve-ral environments and adapt several own states. The adap-tation system is realized Central Pattern Generator (CPG).CPG exists in nervous system of animals and generates rhy-thmical motion pattern. In this paper, a robot motion con-trol system using CPG is proposed and applied to an amphi-bious multi-link mobile robot.

et al.

Keywords: CPG, snake-like robot, biomimetics.

2. Development of Amphibious Multi-Link Mobile Robot "AMMR"

In previous works, Hirose et al. developed a multi-link mobile robot [4] and an underwater multi-link mobile robot [5]. These robots move forward using reaction forces: frictional forces on the ground or fluid drag forces under water. The robot dynamics therefore differ between ground motion and underwater motion. In particular, an amphibious multi-link mobile robot [6] is a very interesting HDS test case for applying a CPG. In this research, we applied a CPG-based motion control system to an amphibious multi-link mobile robot.

2.1. Motion of snake-like animals

The motion mechanisms of snake-like animals have already been studied by Hirose [7] and Azuma [8]. A snake's body is covered with special scales that have low friction in the tangential direction and high friction in the normal direction. This feature enables thrust to be produced from a wriggling motion. An eel swims under water by generating a propulsive force from hydrodynamic forces. Both snakes and eels generate propulsion by actuating each joint with a certain phase difference, and during turning motion the joint trajectories of snakes and eels have a bias, i.e., an offset of the joint oscillation.

2.2. Mechanism

In our previous work [9], we developed a multi-link mobile robot as a test bed for the evaluation of a CPG-based motion control system. We realized wriggling forward and turning motions using the periodic output signals of the CPG control system. However, that robot could not be used to evaluate an HDS in which the dynamics of the robot transfer from one mode to another, because it could not move under water. Moreover, its electrical circuit had no feedback connection, because the servomotors used did not provide joint-angle data. We therefore developed an amphibious multi-link mobile robot (see Fig. 1) as a new test bed for motion control and adaptation in two environments with different dynamics: land and water. Table 1 gives the specifications of the amphibious multi-link mobile robot. The robot moves both over land and under water; therefore, waterproofness is an important design consideration, and O-rings are employed on the shaft of each joint and under the cylinder lids. The robot comprises eight cylinders joined so that each cylinder can rotate around a yaw axis via a DC motor, a gearbox and a control circuit, as shown in Fig. 2. The range of joint movement is ±π/3 [rad]. Hydrodynamic forces produced

THE STUDY OF BIO-INSPIRED ROBOT MOTION CONTROL SYSTEM

Takayuki Matsuo, Takeshi Yokoyama, Daishi Ueno, Kazuo Ishii

Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 4, N° 2 2010


by fins and the body produce thrust forces under water, and passive wheels are used on the ground (Fig. 3).

Fig. 1. Overview of AMMR.

Fig. 2. Internal architecture of a motor module.

Fig. 3. Passive Wheels and Fins.

2.3. Electrical system

The amphibious multi-link mobile robot consists of two kinds of cylinders: a cylinder for the head (head module) and seven cylinders for the body (motor modules). The main voltage for each module is 8 V, stepped down to 5 V for the MPU and sensors by a DC/DC converter. Because the power line is a cascade connection, the voltage would drop from the tail to the head module, and the torque of each joint could not be evaluated in that condition; therefore, 24 V is used as the main power and is stepped down to 8 V in each module using a switching regulator capable of supplying a high current. A motor module has a motor to control the joint angle and a circuit board. The circuit has a microprocessor unit (MPU, PIC18F452), a potentiometer to measure the joint angle, an RS485 transceiver (MAX1487) for communication, and a current sensor to measure the joint torque. The MPU calculates the target trajectory using the neuron potential of the CPG or a sinusoidal function, controls the motor with a PID controller, manages sensor information (e.g., current and angle data) and communicates with the circuits of the other modules over the RS485 bus. The head module transfers the target behavior to the other modules and controls the communication.
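The target trajectory computed in each module's MPU can be illustrated with a short sketch. This is not the actual firmware: the sinusoidal form a·sin(2πft + kφ) + bias, the function name and the default values are our assumptions, chosen to match the gait parameters (amplitude, frequency, phase difference, bias) used later in the paper.

```python
import math

def target_angle(k, t, a_deg=40.0, f_hz=0.1, phi_deg=40.0, bias_deg=0.0):
    """Illustrative target angle [deg] of joint k at time t [s]:
    a sinusoid with a constant phase lag phi between neighbouring
    joints (the travelling body wave of the wriggle gait) plus a
    turning bias that shifts the neutral position of every joint."""
    phi = math.radians(phi_deg)
    return a_deg * math.sin(2.0 * math.pi * f_hz * t + k * phi) + bias_deg

# With zero bias the robot goes straight; a non-zero bias makes it turn.
straight = [target_angle(k, 0.0) for k in range(7)]
turning = [target_angle(k, 0.0, bias_deg=15.0) for k in range(7)]
```

Each of the seven motor modules would evaluate such a function for its own joint index k, so only the gait parameters need to be broadcast by the head module.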

3. The experimental analysis of dynamics properties of "AMMR"

3.1. Method of dynamics property analysis

In this paper, the dynamics property analyses are carried out for 120 motion patterns, expressed by the combinations of amplitude a (between 10 deg and 60 deg with a step of 10 deg), frequency f (between 0.1 Hz and 0.4 Hz with a step of 0.1 Hz) and phase difference φ (between 20 deg and 100 deg with a step of 20 deg), both on the ground and under water. For the analyses, the travel velocity v [m/s] and the dissipation power per meter W [J/m] of the robot were evaluated as the average of 3 trials for each motion pattern. A motion capture system was used to measure the robot motion: 8 markers were placed on the head, the tail and the joints of the robot and were tracked by the system. The straight-line travel distances of the head and tail markers, d_h and d_t [m], were measured, and their average was defined as the travel distance d [m] of the robot. The travel velocity v [m/s] is evaluated from the travel distance d and the operation time t = 20 s.

Table 1. Specification of robot.
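The 120-pattern sweep described above is simply the Cartesian product of the three parameter ranges; a few lines are enough to enumerate it (Python used here purely for illustration):

```python
from itertools import product

amplitudes = range(10, 61, 10)        # a: 10..60 deg, step 10 (6 values)
frequencies = [0.1, 0.2, 0.3, 0.4]    # f: 0.1..0.4 Hz, step 0.1 (4 values)
phase_diffs = range(20, 101, 20)      # phi: 20..100 deg, step 20 (5 values)

# 6 x 4 x 5 = 120 motion patterns (a, f, phi), as used in the experiments.
patterns = list(product(amplitudes, frequencies, phase_diffs))
```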

The dissipation power per meter of travel distance, W [J/m], was evaluated from the total dissipation power W_t [J] over the operation time t = 20 s and the travel distance d [m], as shown in Eq. (2), with the total dissipation power defined by Eq. (1), where V(n) [V] and I(n) [A] are the voltage applied to the motors and the current drawn by them at each sampling step of Δt = 0.05 s.

(1)    W_t = Σ_n V(n) · I(n) · Δt

(2)    W = W_t / d

3.2. Velocity analysis on ground and under water

Fig. 4a)-d) and Fig. 5a)-d) show the velocities of the robot on the ground and under water as the amplitude a [deg] and the phase difference φ [deg] are adjusted at four fixed frequencies, f = 0.1, 0.2, 0.3 and 0.4 Hz. Velocities are expressed by brightness: the brighter the grid cell, the faster the velocity. In ground motion, the velocities were high in the white cells, and the fastest velocity was v = 0.54 m/s at f = 0.4 Hz, a = 30 deg, φ = 40 deg. The travel velocities were high at a = 30-40 deg when adjusting the amplitude, and high at φ = 40 deg when adjusting the phase difference.

Fig. 4. Variations of travel velocity in ground motions: (a) f=0.1 Hz, (b) f=0.2 Hz, (c) f=0.3 Hz, (d) f=0.4 Hz.

Fig. 5. Variations of travel velocity in underwater motions: (a) f=0.1 Hz, (b) f=0.2 Hz, (c) f=0.3 Hz, (d) f=0.4 Hz.

Meanwhile, in underwater motion, the fastest velocity, v = 0.10 m/s, again lies in the white cells. The travel velocities were high at a = 50 deg when adjusting the amplitude, and high at φ = 60 deg when adjusting the phase difference. The parameters that generate the maximum velocity thus differ between ground motion and underwater motion. The robot was not able to move for parameter sets that included a = 10 deg or φ = 20 deg.

Fig. 6. Variations of travel velocity on ground.

Next, we focused on the variation of the velocities. Fig. 6a)-c) shows examples of velocity variations in ground motion: Fig. 6a) shows the variation when a [deg] and φ [deg] are adjusted at fixed f = 0.4 Hz; Fig. 6b) shows the variation when f [Hz] and φ [deg] are adjusted at fixed a = 40 deg; and Fig. 6c) shows the variation when a [deg] and f [Hz] are adjusted at fixed φ = 40 deg. Fig. 7a)-c) shows examples of velocity variations in underwater motion for the same parameters. On the ground, the velocities change with the adjustment of a and f; however, they change little when φ is adjusted between 40 deg and 60 deg. The velocity reached its maximum at φ = 40 deg and f = 0.4 Hz. These results show that adjusting a and f is more effective than adjusting φ for controlling the velocity on the ground. Since the robot snakes its way, the definition of the velocity differs depending on whether one defines the straight-line distance that the robot moves

as the travel distance, or defines the distance along the snaking path as such. In this paper, the straight-line distance that the robot moves is defined as the travel distance; therefore, the velocities did not change much under adjustment of φ. Under water, the velocities change with the adjustment of a, f and φ. The velocity reached its maximum at φ = 60 deg and f = 0.3 Hz. Under water, the robot is affected by fluid drag forces, which become large when the velocity and acceleration of the robot are large; therefore, the robot velocity decreased at the highest frequency (f = 0.4 Hz).

3.3. Dissipation power analysis on ground and under water

Fig. 8a)-d) and Fig. 9a)-d) show the analysis of the dissipation power per meter of travel distance, W [J/m]. In this analysis, W is shown by brightness, as in the velocity analysis: the brighter the grid cell, the lower the value. On the ground, the lowest value was 23.49 J/m at a = 30 deg, f = 0.4 Hz, φ = 80 deg. Meanwhile, under water, the lowest value was 103.74 J/m at a = 60 deg, f = 0.2 Hz, φ = 60 deg. Fig. 10a)-c) and Fig. 11a)-c) show examples of the variation of the dissipation power per meter of travel distance; they do not include data at a = 10 deg because the robot could hardly move forward. Overall, underwater motions need more power than ground motions. As shown in Fig. 10a) and Fig. 11a), when the phase difference φ is adjusted from 20 to 100 deg, W becomes minimal at φ = 60 deg on the ground and at φ = 80 deg under water. As shown in Fig. 10b) and Fig. 11b), when the frequency f is adjusted from 0.1 to 0.4 Hz, W becomes minimal at f = 0.3 Hz on the ground and at f = 0.4 Hz under water.
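Equations (1) and (2) of Section 3.1 reduce to a sum over the logged motor samples. A minimal sketch (the function names and the logging format are our assumptions, not the authors' code):

```python
def travel_velocity(d_head, d_tail, t=20.0):
    """v [m/s]: the travel distance d is the average of the straight-line
    distances of the head and tail markers, divided by the operation time t."""
    d = 0.5 * (d_head + d_tail)
    return d / t

def dissipation_per_meter(voltages, currents, d, dt=0.05):
    """W [J/m], Eqs. (1)-(2): total electrical energy sum(V(n)*I(n)*dt)
    over the run, divided by the travel distance d."""
    w_total = sum(v * i * dt for v, i in zip(voltages, currents))
    return w_total / d

# Example: a constant 8 V / 1 A draw for 20 s (400 samples at 0.05 s)
# dissipates 160 J; over 2 m of travel that is 80 J/m.
```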

Fig. 7. Variations of travel velocity in underwater motions.

Fig. 8. Variations of dissipation power in ground motions: (a) f=0.1 Hz, (b) f=0.2 Hz, (c) f=0.3 Hz, (d) f=0.4 Hz.

Fig. 9. Variations of dissipation power in underwater motions: (a) f=0.1 Hz, (b) f=0.2 Hz, (c) f=0.3 Hz, (d) f=0.4 Hz.

4. Motion control system using CPG

4.1. CPG Models

Several CPG models have been proposed, such as the Van der Pol model [10], the Matsuoka model [11], [12], the Terman-Wang model [13] and the Wilson-Cowan model [14]. We employed the Matsuoka model. The Matsuoka model is expressed in Eqs. (3)-(5), where u_i is the membrane potential of the i-th neuron, v_i represents the degree of adaptation, u_0 is the external input with a constant rate, f_i is the feedback signal from a sensory input, τ_u and τ_v are parameters that specify the time constants for the adaptation, β is the adaptation gain, w_ij is the neuron connection weight, y_i is the output of the neuron and n is the number of neurons. The CPG wave in the Matsuoka model can entrain an external oscillatory input through the sensor input f_i: the neural oscillator is able to entrain a wave that matches its resonance frequency. Additionally, the wave in the Matsuoka model can change its amplitude and bias by adjusting u_0.

In this paper, we realize the generation of waves that have a given phase difference and a shifted neutral position of the wave oscillation by adjusting the CPG parameters; the parameters τ_u, τ_v, u_{0e} and u_{0f} are used for motion control.

(3)    τ_u · du_i/dt = −u_i − Σ_j w_ij · y_j − β · v_i + u_0 + f_i

(4)    τ_v · dv_i/dt = −v_i + y_i

(5)    y_i = max(0, u_i)

4.2. Design of Motion Control System Using CPG

The CPG network for the multi-link mobile robot is shown in Fig. 12. A neural oscillator consists of an extensor neuron (EN_k) and a flexor neuron (FN_k), connected to each other through the weights w_ef and w_fe, where k is the neural oscillator number. Extensor neurons are connected to the flexor neurons of the neighboring neural oscillators, and flexor neurons are connected to the extensor neurons of the neighboring neural oscillators. We define the weights between neural oscillators as w_1 and w_2, and the external inputs of the extensor and flexor neurons as u_{0e} and u_{0f}. A neural oscillator is assigned to each of the robot's seven joints. Taga [15], [16] simulated a biped walking robot by connecting neural oscillators to a musculoskeletal model; we use the same neural oscillator model here. The output O_k of a neural oscillator follows Eq. (6). The network architecture is designed as a closed loop to generate periodically successive signals with a certain phase. After CPG simulations, we employed sets of CPG parameters that generate waves with a certain phase difference, and the CPG waves are converted into target joint angles following Eq. (7).

(6)

(7)

Table 2 gives the parameters for forward motion, and Fig. 13 shows the target joint angles generated from the CPG waves; the amplitude of the target joint angles is 40 deg and the frequency is 0.1 Hz. We determined the parameters through trial and error using Matlab: we simulated the Matsuoka model equations while adjusting the parameters to find parameter sets that generate waves, and then selected the sets that realize a given amplitude, phase difference and frequency. The amplitude and frequency of the target joint angles are linked to the robot's speed: if they are set to large values, the robot moves fast; conversely, if they are set to small values, the robot moves slowly. The motion control system changes the frequency by adjusting τ_u and τ_v, and changes the amplitude by adjusting u_{0e} and u_{0f}. Table 3 shows the relationship between τ_u, τ_v and the frequency, with the other parameters as in Table 2. Table 4 shows the relationship between u_{0e}, u_{0f} and the amplitude, with the other parameters as in Table 2. The amphibious multi-link mobile robot can change direction through a change in the parameters u_{0e} and u_{0f}, which shifts the neutral position of the oscillation. If the parameters are set to u_{0e} = 0.99 and u_{0f} = 0.94, the robot turns and moves toward the right. Figure 14 shows the output of the CPG network with a bias of 15 deg for the right-turn motion. Adjusting u_{0e} and u_{0f} changes the bias; Table 5 shows the variations in bias.

Fig. 10. Variation of Dissipation Power on Ground.

Fig. 11. Variation of Dissipation Power in underwater.



Fig. 12. A CPG architecture of robot.

Fig. 13. Output of CPG for forward motion.

Fig. 14. Output of CPG for right turn.

Table 2. Parameters of CPG in forward motion.

Table 3. Parameters for adjustment of frequency.

Table 4. Parameters for adjustment of amplitude.

Table 5. Parameters for adjustment of bias.
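The Matsuoka equations (3)-(5) can be integrated directly. The sketch below simulates a single extensor/flexor pair with explicit Euler integration; the parameter values are illustrative ones known to produce stable oscillation, not the values of Table 2 (which did not survive extraction), and the function name is ours.

```python
def simulate_matsuoka(steps=6000, dt=0.005, tau_u=0.25, tau_v=0.5,
                      beta=2.5, w=2.5, u0=1.0):
    """Euler integration of a two-neuron Matsuoka oscillator, Eqs. (3)-(5):
      tau_u * du/dt = -u - w*y_other - beta*v + u0
      tau_v * dv/dt = -v + y
      y = max(0, u)
    Returns the oscillator output y_e - y_f at every step."""
    ue, uf, ve, vf = 0.1, 0.0, 0.0, 0.0   # small asymmetry starts the cycle
    out = []
    for _ in range(steps):
        ye, yf = max(0.0, ue), max(0.0, uf)
        due = (-ue - w * yf - beta * ve + u0) / tau_u
        duf = (-uf - w * ye - beta * vf + u0) / tau_u
        dve = (-ve + ye) / tau_v
        dvf = (-vf + yf) / tau_v
        ue, uf = ue + dt * due, uf + dt * duf
        ve, vf = ve + dt * dve, vf + dt * dvf
        out.append(max(0.0, ue) - max(0.0, uf))
    return out

# Increasing tau_u and tau_v slows the rhythm (lower frequency), while
# scaling u0 scales the amplitude, matching the parameter roles in the text.
```

Mutual inhibition (w) makes the symmetric rest state unstable and the adaptation term (beta, tau_v) fatigues whichever neuron is active, so the two neurons alternate and the output oscillates without any external drive.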


5. Conclusions

In this paper, an amphibious multi-link mobile robot that has seven joints and moves on the ground and under water was developed, and its dynamics properties were analyzed on the ground and under water. The velocity reached its maximum at a = 30 deg, f = 0.4 Hz, φ = 40 deg on the ground and at a = 40 deg, f = 0.3 Hz, φ = 60 deg under water. Additionally, a distributed robot motion control system using a CPG was developed, and forward and turning motions were realized. In future work, we will include sensor feedback in the bio-inspired robot motion control system.

ACKNOWLEDGMENTS

This work was supported by a 21st Century Center of Excellence Program, "World of Brain Computing Interwoven out of Animals and Robots (PI: T. Yamakawa)" (center #J19), granted to the Kyushu Institute of Technology by the Ministry of Education, Culture, Sports, Science and Technology of Japan.

AUTHORS

Takayuki Matsuo*, Daishi Ueno, Kazuo Ishii - Department of Brain Science and Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu, Kitakyushu, Fukuoka, 808-0196, Japan. Tel&Fax: +81-93-695-6102. E-mail: [email protected].

Takeshi Yokoyama - YASUKAWA Electric Co., 2-1, Kurosaki-shiroishi, Kitakyushu, Fukuoka, 806-0004, Japan.

* Corresponding author

References

[1] Taga G., "Emergence of Locomotion", J. of R.S.J., vol. 15, no. 5, 1997, pp. 680-683 (in Japanese).

[2] Williamson M.M., Robot Arm Control Exploiting Natural Dynamics, PhD thesis, Massachusetts Institute of Technology, 1999.

[3] Matsuoka K., Ohyama N., Watanabe A., Ooshima M., "Control of a giant swing robot using a neural oscillator", Advances in Natural Computation (Proc. ICNC 2005, Part II), 2005, pp. 274-282.

[4] Mori M., Yamada H., Hirose S., "Design and Development of Active Cord Mechanism "ACM-R3" and its 3-dimensional Locomotion Control", J. of R.S.J., vol. 23, no. 7, 2005, pp. 886-897 (in Japanese).

[5] Takayama T., Hirose S., "Study on 3D Active Cord Mechanism with Helical Rotational Motion", J. of R.S.J., vol. 22, no. 5, 2004, pp. 625-635 (in Japanese).

[6] Chigasaki S., Mori M., Yamada H., Hirose S., "Design and Control of Amphibious Snake-Like Robot "ACM-R5"". In: Proceedings of the 2005 JSME Conference on Robotics and Mechatronics, ALL-N-020(1)-(3).

[7] Hirose S., Bionic machine engineering, Kougyo Chosakai, 1987 (in Japanese).

[8] Azuma A., The subject-book of Animal's Motion, Asakura, 1997.

[9] Matsuo T., Yokoyama T., Ishii K., "Development of Neural Oscillator Based Motion Control System and Applied to Snake-like Robot". In: IROS'07, 2007, pp. 3697-3702.

[10] Van der Pol B., "On relaxation oscillations", Phil. Mag., vol. 2, no. 11, 1926, pp. 978-992.

[11] Matsuoka K., "Sustained oscillations generated by mutually inhibiting neurons with adaptation", Biological Cybernetics, vol. 52, 1985, pp. 367-376.

[12] Matsuoka K., "Mechanisms of frequency and pattern control in the neural rhythm generators", Biological Cybernetics, vol. 56, 1987, pp. 345-353.

[13] Terman D., Wang D.L., "Global competition and local cooperation in a network of neural oscillators", Physica D, vol. 81, 1995, pp. 148-176.

[14] Wilson H.R., Cowan J.D., "Excitatory and inhibitory interactions in localized populations of model neurons", Biophysical Journal, vol. 12, 1972, pp. 1-24.

[15] Taga G., Yamaguchi Y., Shimizu H., "Self-organized control of bipedal locomotion by neural oscillators in unpredictable environment", Biological Cybernetics, vol. 65, 1991, pp. 147-159.

[16] Taga G., "A model of the neuro-musculo-skeletal system for human locomotion II. Real-time adaptability under various constraints", Biological Cybernetics, vol. 73, 1995, pp. 113-121.


Abstract:

Antagonistic mechanisms attract attention as joint actuators of linkage mechanisms, because they control output torque, joint stiffness and position simultaneously. As the actuators or components of antagonistically driven joints, special devices with nonlinear elasticity, such as pneumatic actuators and nonlinear springs, are often utilized to satisfy the requirements of antagonistic mechanisms. However, these devices bring difficulties in control caused by complex nonlinear properties, actuator downsizing, and the response time of the articular compliance. In order to solve these problems, we propose a new antagonistic joint mechanism using a kinematic transmission mechanism (KTM), which is composed of links and cams with a dedicated design. The performance of the KTM is evaluated through stiffness and position control simulations and experiments.

Keywords: antagonistic-driven joints, equilibrium point hypothesis, controllable stiffness mechanisms.

1. Introduction

The main applications of robotics research in the 1960s and 1970s targeted the automation of industrial processes using industrial robots and manipulators [1], and manipulators resembling human arms were deployed for various automation tasks in factories. In the 1980s, robots began to spread to manufacturing floors in the form of wheeled or legged mobile mechatronic systems called AGVs, and to extreme environments such as space, underwater and nuclear plants. The roles of robots are no longer limited to tasks in automated factories and have expanded into the exploration of hazardous, human-unfriendly, extreme environments. Recently, various kinds of robots have been applied to surveillance, security and cleaning tasks [2]. Ingenious autonomous robotic systems emerging in the late 1990s have artificial intelligence, e.g. Sony's Aibo robotic dog [3]-[5] and Honda's humanoid robots from P2 and P3 to Asimo [6], [7], and have made robots familiar in our surroundings and daily life. One of the recent research topics in robotics is the co-habitation of humans and robots. It is expected that robots with different degrees of autonomy and mobility will play an increasingly important role in all aspects of human life. To enable the coexistence of humans and robots, robots will become much more complex in their hardware, software and mechanical structures, and biomimetic technology and brain-inspired processing could be a breakthrough for future robotics.

One of the important technical issues in robotics is the motion control problem. In most applications, robots are required to work quickly, precisely and safely; however, these requirements sometimes conflict and trade off against each other. In order to realize precise, stable and low-vibration motion control, the robot joints should be rigid. On the other hand, the more rigid the robot joints become, the less safe and flexible they are with respect to unexpected contact. To overcome this problem, the compliance control of robot joints should be considered from the viewpoints of both hardware and software development. Hardware approaches are, for example, to use soft materials as mechanical structure components, to cover the body with shock absorbers, or to implement soft actuators such as pneumatic actuators. In software approaches, force control and impedance control are often used, where virtual spring and damper models are supposed to exist in the working space. Morita et al. showed the possibility of absorbing shocks not only by protection using shock-absorbing materials but also by softness of the joints. In tasks that need skillful motion control, such as peg-in-hole insertion, the inaccuracy of the position control of a robot arm should be compensated by compliance methods. As one such method, the remote center compliance (RCC) device with elastic elements has been proposed. However, position and orientation control are not enough for the peg-in-hole problem, because information about the forces around the joints is not employed in the control.

Although industrial robots are preferably made as rigid as possible, service robots should have soft joints to avoid damage to their environments and to humans. As a solution to this problem, bio-inspired approaches attract attention. Animals, including humans, can control their articular impedances during motion [8], and humans can realize both quick motion and supple behavior. Systems that can change the articular impedance of robots are one of the research trends in robot joints, and software impedance controls have been developed [9].

One of the most important technical problems in compliance control is the time delay caused by electrical factors and computing time. In a collision, the time delay in the response becomes a critical problem. Therefore, robot joints with mechanical softness have been researched and proposed, for example: the programmable passive impedance (PPI) mechanism, which controls its compliance by using nonlinear springs placed directly between the actuator and the wire [10]; the nonlinear spring tensioner (NST), which controls its compliance by using movable pulleys and linear springs [11]; the mechanism with an L-type link added to the NST [12]; the mechanical impedance adjuster (MIA), which controls its compliance by changing the moment arm using a flat spring and a slider connected to the spring [13]; the actuator with nonlinear elastic

DEVELOPMENT OF ANTAGONISTIC WIRE-DRIVEN JOINT

EMPLOYING KINEMATIC TRANSMISSION MECHANISM

Takashi Sonoda, Yuya Nishida, Amir Ali Forough Nassiraei, Kazuo Ishii


system (ANLES), which produces nonlinear elasticity by winding a linear spring around a guide shaft [14]; and the mechanism using pneumatic rubber muscles [15].

The above mechanisms respond quickly to sudden external forces because they can set the stiffness of the joints beforehand and thereby avoid delays mechanically. Koganezawa et al. assert that the above mechanisms have the following two problems:

A) Design of the nonlinear elastic elements is difficult.

B) Mechanisms employing nonlinear elastic elements become large.

In addition to these problems, we also consider a new item:

C) Slow responsiveness in dynamic compliance control.

In the conventional mechanisms, elastic materials must be deformed to obtain a target stiffness, and delays occur during deformation. Animals change their articular impedance during a sequence of motion from one moment to the next; for example, the joints are adjusted to high or low stiffness at each instant of a hopping motion. That is, in a series of motions, the joints of robots should have good responsiveness regarding articular compliance. The conventional joint mechanisms with a stiffness adjustment function show better performance than software-based methods in the responsiveness of static compliance control, but have difficulty in the responsiveness of dynamic compliance control.

In our research, we especially address controlling compliance by variable mechanical impedance. We propose an antagonistically wire-driven mechanism which has good control performance regarding dynamic compliance. We describe the viscoelasticity property of the musculoskeletal system, propose a model describing its compliance control system, and then show an antagonistically wire-driven mechanism realizing the model mechanically.

2. Impedance control mechanisms of musculoskeletal system

2.1. Kinetic features of muscles

In general, regarding mechanical impedance, three elements are controllable parameters: mass, viscosity and elasticity. Musculoskeletal systems control only the viscosity and the elasticity, because changing the mass is not easy. Although the mechanism of the impedance control system in musculoskeletal systems is not yet fully understood, a few studies report the following kinetic features of muscles [16]:

1) A muscle can work in the contracting direction only.

2) The tension of a muscle is larger when the frequency of the input pulses to the muscle is higher.

3) The tension force of a muscle depends on the muscle length.

A mechanism that winds a wire using an actuator is generally used to realize kinetic feature 1). However, this mechanism alone cannot reproduce the other features of muscles. The input pulses in 2) are neural pulses, which transmit contraction signals to the muscles; the contracting force of the muscles becomes large when the pulses arrive more frequently, and this pulse input is the manipulated variable in the control. Item 3) means that the muscle force is a function of the muscle length (or the joint's position; see Fig. 1). Muscles change their output depending on their length (that is, the joint angles), whereas conventional actuators do not change their output according to their position, which is the main difference between muscles and general actuators.

2.2. Neural system of muscles and servo system

In muscles, there is an organ called the muscle spindle, which senses changes in the position and velocity of the muscle and sends afferent signals; the neurotendinous spindle in the tendons responds to muscle tension and also sends afferent signals. Those signals are transmitted via the central nervous system to the alpha motor neuron. The alpha motor neuron then sends the contraction signal to the muscles, so that the muscle spindles, the neurotendinous spindles and the alpha motor neuron form a servo system with a feedback loop on the motion (see Fig. 2). A response called the stretch reflex is generated in this servo system: the muscle contracts in the direction resisting its own stretch. However, whether the impedance control of the musculoskeletal system is also realized by the servo structure mentioned above is still disputable.


Fig. 1. Length vs. tensions of muscles.


Regarding the position control of joints in motion, such as reaching, position control remains possible even if the neural pathways of the muscle spindles and neurotendinous spindles are interrupted [16]; however, the accuracy of the position control decreases. This fact means that position control is possible in open loop, and the result suggests that, in a musculoskeletal system, position control is realized by balancing the muscle tensions, which is called the equilibrium point (EP) hypothesis [17]. An articular EP is a point of balanced forces between the extensors and flexors (see Fig. 3). At the EP, no external forces are generated. The muscles generate only restoring forces toward the EP if the articular position moves away from it; the articular position therefore changes when the EP changes. This is the principle of position control under the EP hypothesis. As no external forces are generated around the EP, the joint does not move. The important point is that internal forces are generated, and these forces realize the articular stiffness. For example, in isometric contractions, which change the muscle tensions without changing the position of the joint, the articular stiffness is proportional to the muscle tensions [8]. Moreover, in closed-link mechanisms and antagonistically wire-driven mechanisms, it is known that the internal forces define the mechanical stiffness [18], [19].

Summarizing the above:

- Controlling the position without a neural servo system is possible.

- Internal forces are proportional to the muscle tensions.

- Internal forces relate to articular stiffness.

Fig. 2. Neural system of the muscle.

3. Modeling of musculoskeletal system

3.1. Antagonistic wire-driven models

Two wire-driven mechanism models with 2 inputs (torques τ1 and τ2 in Fig. 4) and 2 outputs (joint angle θ and stiffness) are shown in Fig. 4. Figure 4a) shows a conventional antagonistically wire-driven mechanism, and Fig. 4b) shows the EP hypothesis-based antagonistically wire-driven mechanism. The main difference between Fig. 4a) and Fig. 4b) is the torque around the equilibrium point. In the wire-driven system of Fig. 4a), no reaction torque occurs and the joint does not return to its original position against external forces, because the joint can take any angle if the input torques τ1 and τ2 are equal. In the case of Fig. 4b), however, when an external force is applied to the joint, a reaction force is generated that returns the joint to the equilibrium point, caused by the nonlinear property of the muscle-like actuators. That is, the

Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 4, N° 2 2010

Articles64

(a) The mechanism using general actuators. (b) The mechanism using actuators like muscles under theequilibrium point hypothesis.

Fig. 4. Wire-driven mechanisms.

Fig. 3. Equilibrium point hypothesis.

Page 66: JAMRIS 2010 Vol 4 No 2

wire-driven mechanism in Fig. 4b) can keep the positionof the joint without feedback control regarding position.

Another important aspect of the joint is the stiffness.The gradient of output torque, , means the stiffness ofthe joint, which is proportional to the output of muscle-like actuator in the mechanism in Fig. 4b). The higher thegradient, , becomes, the larger the stiffness. This factshows that the actuator mechanism, which changes theoutput according to its position, is needed to realize mus-culoskeletal system. In the following sections, we des-cribe a mathematical model considering these charac-teristics.

3.2. Proposed mathematical model

We use a sigmoid function as the output model of the muscle-like actuator for the joint, as shown in Fig. 3. The musculoskeletal system is an antagonistic system, so this system is expressed using two sigmoid functions as:

τ_o = τ_c1 / (1 + k_12 e^{k_11(θ − k_10)}) − τ_c2 / (1 + k_22 e^{−k_21(θ − k_20)}),    (1)

where the constant parameters k_ij (i = 1,2; j = 0,1,2) are arbitrary design parameters, the constants k_i0 are the offset values that decide the initial position, the variable θ is the joint angle, the variables τ_c1 and τ_c2 are the torques of the actuators, and the variable τ_o is the torque around the rotation axis of the joint. Equation (1) expresses the curve shown in Fig. 1. Next, the elastic coefficient ε is obtained as the partial derivative of τ_o with respect to θ:

ε = ∂τ_o/∂θ = −(k_11 k_12 e^{k_11(θ − k_10)} / (1 + k_12 e^{k_11(θ − k_10)})²) τ_c1 − (k_21 k_22 e^{−k_21(θ − k_20)} / (1 + k_22 e^{−k_21(θ − k_20)})²) τ_c2.    (2)

On the right side of the above equation, all values are constants except τ_c1 and τ_c2. So we can set the elastic coefficient by changing only the values τ_c1 and τ_c2. This means that the joint stiffness of a mechanism obeying the relation of eq. (1) is controllable without elastic materials, by the principle of eq. (2). Moreover, at τ_o = 0, the joint converges to the EP, which is decided by the ratio of τ_c1 and τ_c2. Within the range satisfying τ_o = 0, the position of the EP can be changed.
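As an illustration of the antagonistic two-sigmoid model, the sketch below evaluates a difference of two opposed sigmoids and checks the EP and stiffness behaviour numerically. All parameter values and the helper names (`joint_torque`, `stiffness`) are illustrative assumptions, not the paper's Table 1 values:

```python
import numpy as np

# Sketch of an antagonistic two-sigmoid joint model in the spirit of
# eqs. (1)-(2). The parameter values k_ij are illustrative assumptions.
k10, k11, k12 = 0.0, 4.0, 1.0   # offset, slope, scale of actuator 1
k20, k21, k22 = 0.0, 4.0, 1.0   # offset, slope, scale of actuator 2

def joint_torque(theta, tau_c1, tau_c2):
    """Net joint torque tau_o(theta): difference of two opposed sigmoids."""
    s1 = tau_c1 / (1.0 + k12 * np.exp( k11 * (theta - k10)))
    s2 = tau_c2 / (1.0 + k22 * np.exp(-k21 * (theta - k20)))
    return s1 - s2

def stiffness(theta, tau_c1, tau_c2, h=1e-6):
    """Elastic coefficient eps = d tau_o / d theta (central difference)."""
    return (joint_torque(theta + h, tau_c1, tau_c2)
            - joint_torque(theta - h, tau_c1, tau_c2)) / (2.0 * h)

# With equal input torques the EP sits at the symmetric angle theta = 0:
assert abs(joint_torque(0.0, 1.0, 1.0)) < 1e-12
# Raising both torques steepens the restoring slope (a stiffer joint)
# without moving the EP:
assert stiffness(0.0, 2.0, 2.0) < stiffness(0.0, 1.0, 1.0) < 0.0
```

Scaling both input torques together changes only the slope at the EP, while changing their ratio shifts the EP itself, which mirrors the behaviour described for Fig. 6.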

4. Kinematic transmission mechanism (KTM)

In order to realize the mathematical model of eq. (1), we propose the wire-driven mechanism shown in Fig. 5. In the mechanism, the rotation of the cam (the input axis) moves the passive link, to which the wire is connected, and the passive link pulls the pulley (the output axis). Figure 5a) shows the geometric design of the proposed model. Because it is an antagonistic joint mechanism, the whole mechanism is symmetric. The cam shape can be determined from the kinematic input-output relation of the mechanism. We call this mechanism the Kinematic Transmission Mechanism (KTM), as kinematic elements such as the cam and the link are used instead of elastic materials. Whereas analyzing the behaviour of elastic materials is not easy, the design of the KTM is clear, because we can design the cam to satisfy the kinematic constraints of the mechanism. Moreover, regarding miniaturization of the KTM, a geometrically similar shape keeps the same input-output relation, because the kinematic relationship depends only on the ratio of lengths; it will therefore be possible to realize a small-size KTM.

4.1. Analysis of KTM

The relation between the output torque τ_oi (around the output axis of the joint mechanism) and the input torque τ_ci (around the rotation axis of the cam, i = 1, 2, ...) is expressed, following eqs. (1) and (2), as:

τ_oi = τ_ci / (1 + k_i2 e^{−k_i1(θ_o − k_i0)}),    (3)

dθ_ci/dθ_o = τ_oi / τ_ci.    (4)

Equation (4) expresses the kinematic transmission property of the cam, which corresponds to the gear ratio between the output and the input of the KTM. Here, the relation between θ_o and θ_ci can be obtained by the principle of virtual work as:

τ_oi δθ_o = τ_ci δθ_ci,    (5)

where δθ_o and δθ_ci are the virtual displacements. Solving the above equation with regard to δθ_ci, eq. (6) is obtained.



(a) Structure of KTM. (b) Analysis of kinematics.

Fig. 5. Kinematic transmission mechanism (KTM). (This image shows only one side; practically, a symmetrical mechanism is added.)


δθ_ci = (τ_oi / τ_ci) δθ_o.    (6)

Here, δθ_o and δθ_ci are nearly zero, so the above equation is transformed into the following form:

Δθ_ci = (τ_oi / τ_ci) Δθ_o,    (7)

where Δθ_o and Δθ_ci are the small displacements. Integrating the above equation with respect to θ_o, we obtain:

θ_ci = ∫ (τ_oi / τ_ci) dθ_o.    (8)

The θ_o and θ_ci obtained from eq. (8) are the displacements of the input and the output which satisfy eq. (3). From here on, we discuss only a one-side KTM except where noted, so the subscript i denoting the KTM number is omitted. Using the obtained θ_o and θ_ci, we analyze the KTM. The final purpose is to obtain the cam shape satisfying the kinematic constraints on the KTM. Figure 5b) shows the vectors and the angles of the KTM.
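A minimal numerical sketch of eqs. (5)-(8): assuming, for illustration only, that the transmission ratio τ_o/τ_c is a sigmoid of the output angle, the cam input angle θ_c is obtained by accumulating the small displacements of eq. (7):

```python
import math

def transmission_ratio(theta_o):
    """Illustrative C(theta_o) = tau_o / tau_c (a sigmoid; parameters assumed)."""
    return 1.0 / (1.0 + math.exp(-4.0 * theta_o))

def cam_angle(theta_o, steps=1000):
    """theta_c(theta_o) = integral of the ratio over [0, theta_o], i.e. the
    accumulated Delta-theta_c of eq. (7), via the trapezoidal rule."""
    h = theta_o / steps
    return sum(0.5 * h * (transmission_ratio(i * h)
                          + transmission_ratio((i + 1) * h))
               for i in range(steps))

# The ratio stays below 1, so the cam angle lags the output angle.
assert 0.0 < cam_angle(1.0) < 1.0
```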

The meaning of each symbol is:
A, ..., D, O, ..., Q : names of each point
l : position vector from a point to a point; the subscripts express the points, e.g., l_CB is the position vector from point C to point B
r : vector that expresses the diameter of a circle
n : normal vector of a follower
t : tangent vector of a follower
τ : torque around a joint
θ : joint angle

The wire is rolled round point P when the input displacement increases, and the wire length l_qp is displaced by Δl_qp. The displaced wire length is then:

(9)

From the law of cosines, we obtain the increment of θ_a when the side length l_qp of the triangle AQP is increased by Δl_qp. From θ_a, we obtain l_ab after the rotation, where

(10)

The cam shape vector l_cd can be transformed to the coordinate frame of the cam as:

C = R l_cd,    (11)

where R is the rotation matrix. The cam shape is obtained by calculating C using eq. (11).
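The frame change of eq. (11) is a plane rotation. A small sketch, with purely illustrative contact points and cam angles (not values from the paper), rotates each wire-contact point into the cam's own frame to trace the cam profile:

```python
import math

def rotate(point, angle):
    """Apply the 2-D rotation matrix R(angle) to a point (x, y)."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

# Express each contact point in the rotating cam frame by undoing the
# cam's rotation (rotate by -angle). Points and angles are illustrative.
contact_points = [(1.00, 0.00), (0.98, 0.10), (0.95, 0.20)]
cam_angles = [0.00, 0.05, 0.10]
cam_profile = [rotate(p, -a) for p, a in zip(contact_points, cam_angles)]

assert cam_profile[0] == (1.0, 0.0)  # zero rotation leaves the point unchanged
```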

4.2. Design of KTM

In this section, we discuss the design of the KTM. The dimensional parameters, such as the link lengths, the size of the bearing and the positioning of each rotary shaft, are decided according to the designer's purpose.

The design steps are:
1. Decide the design parameters regarding the dimensions, such as the lengths of the links.
2. Decide the design parameters regarding the sigmoid function from eq. (1) or (3).


Fig. 6. Joint angle θ_o vs. input-output torque ratio, and transitions of the equilibrium points.

(a) Designed cam shape. (b) CAD drawing of cam.

Fig. 7. Cam for KTM.


3. Obtain θ_o and θ_ci from eq. (8).
4. Derive the cam shape coordinates from eqs. (9) to (11).

Figure 6 shows the graphs of the output torque of the joint vs. the joint angle when the ratio of the input torques of the two actuators is changed. These graphs are obtained from eq. (1), and Table 1 shows the corresponding parameters. In step 2 of the design procedure, we decide some of the parameters by checking curves such as those shown in Fig. 6.

The designed cam is shown in Fig. 7a), and Fig. 7b) shows the CAD drawing of the produced cam. Figures 8a) and 8b) show the kinematic transmission feature, expressed as the gear ratio with respect to the output joint angle θ_o and the input cam angle θ_c. We then verified whether the cam works according to the design.

The verification compares the curves of the theoretically designed cam and the manufactured cam, with the input angle on the abscissa and the output angle on the ordinate. The result is shown in Fig. 9. The actual values match the theoretical values well, so we conclude that the cam manufactured according to our design behaves as expected.

5. Experimental Prototype

Using the designed KTM, we developed a 1-DOF joint mechanism (see Fig. 10). The specifications of the joint mechanism are shown in Table 2.

Table 2. Specifications of the developed joint mechanism.

The EP moves when the ratio of the input torques is changed, as shown in Fig. 6. Changing this ratio, we examined the position control capability of the joint mechanism employing the KTM without position feedback. Only the motor currents are controlled, by a PI controller in the equipment. The joint angle and the motor angles are measured using a potentiometer and two rotary encoders. In the experiment, the responses of the current and the joint angle are measured: first, the joint angle is set to a position of 40 [deg]; then the current values are changed and the joint angle target is set to 0 [deg]. The currents are controlled to constant values during the measurement, and the same current values are set for the two motors while targeting the joint angle of 0 [deg]. In order to investigate the difference in joint stiffness, we compared the responses of the joint angle for the cases of 0.04 [A] and 0.08 [A]. The results are shown in Fig. 11, where (a) is the response of the joint angle and (b) is the response of the motor currents. The joint angle gets close to the target angle of 0 [deg] by controlling only the current inputs. However, there are steady-state errors caused by the effect of friction and the loss of back drivability; the main factor in the loss of back drivability is the counter electromotive force of the motors. The steady-state error in the case of 0.08 [A] is smaller than in the case of 0.04 [A], and the responsiveness in the case of 0.08 [A] is better as well. As the next step, we evaluated the responsiveness of dynamic stiffness control when the joint mechanism is loaded by an external force. The currents are changed at 3 seconds and 6 seconds after the experiment starts, so that the joint stiffness changes. The loaded external force is cyclic with constant amplitude. The result is shown in Fig. 12: the amplitude of the joint angle becomes large when the current values are set low, due to the decrease of the joint stiffness. The amplitude changes immediately after the change of currents, so it is verified that the joint mechanism's stiffness can be controlled dynamically.
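The current regulation used in the experiment can be sketched as a discrete PI loop driving a toy first-order motor-current model; the gains, sampling period and plant constants below are illustrative assumptions, not the prototype's values:

```python
# Discrete PI current controller on a toy motor model (all values assumed).
class PIController:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Hypothetical first-order current dynamics: di/dt = (v - R*i) / L.
R, L, dt = 1.0, 0.01, 1e-4
pi = PIController(kp=5.0, ki=200.0, dt=dt)
i = 0.0
for _ in range(2000):            # 0.2 s of simulated time
    v = pi.update(0.08, i)       # regulate toward 0.08 A, as in the experiment
    i += dt * (v - R * i) / L

assert abs(i - 0.08) < 0.005     # the current settles near the setpoint
```

Holding the regulated current constant corresponds to holding a muscle-like tension, so the joint position is maintained without any position feedback, as in the experiment above.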

Table 1. Parameters.

6. Joint Mechanism and Biogenic Motion Control

Here, we discuss the relationship between motion control and the brain. Marr and Albus proposed a theory that the cerebellum functions as a perceptron, based on their detailed observations of its structure [20], [21]. That is, like a perceptron, the cerebellum outputs a pattern when a pattern is input to it. Biogenic motion control involves a cerebro-cerebellar loop, and it has been shown that the cerebellum participates in motion control. A movement disorder occurs when the cerebro-cerebellar loop is broken, but the intention to move (voluntary motor control) is not inhibited. It has therefore been suggested that the cerebellum participates in motion skill. If the theory of the cerebellar perceptron is true, motion should be convertible to patterns as the input or the output of the central nervous system. The proposed joint mechanism is able to control the position and the compliance of the joint by changing the motor output (which equals the tension of a muscle). By controlling the joint position, it is able to control the joint directions. Therefore, the joint mechanism can control motion by static inputs rather than dynamic inputs. This means that the position, the compliance, the direction and the outputs of the joint are regarded as static patterns. We suggest that a mechanism which converts movement elements to static patterns, such as the one proposed here, is able to connect biogenic motion control to neural system models.


(a) The input (cam) angle θ_c. (b) The joint angle θ_o.

Fig. 8. Gear ratio of the designed cam (the case of cam 1).

Fig. 9. Cam curves: theoretical and actual values.

Fig. 10. Antagonized wire-driven mechanism using KTM (1 DOF).

(a) The joint angle θ_o. (b) The currents of the motors of the joint (M1 and M2 are the motors of the left and right sides).

Fig. 11. Experimental results of angle and stiffness control of the joint (the cases of low and high stiffness).


7. Conclusions

In this paper, we proposed the robot joint "KTM", which is based on the EP hypothesis, a leading hypothesis on the position control of the musculoskeletal system. One of the important points of the EP hypothesis is that the nonlinear kinetic features of muscles are essential to controlling the articular position and stiffness. In conventional mechanically soft joints, elastic materials are employed to obtain nonlinear performance; however, the deformation of elastic materials causes delays. The KTM instead has nonlinear kinematic transmission features, as linkage mechanisms do. We proposed an EP hypothesis-based kinetic model, and we discussed the kinematic transmission feature expressed by the model together with the design steps. We showed experimentally that position and stiffness control are possible: the joint angle moves close to the target angle, but steady-state errors remain. The steady-state errors become smaller as the articular stiffness is made higher; further improvement of position controllability is expected by dealing with the counter electromotive force of the motors.

We also observed the responsiveness of the stiffness control by loading a constant cyclic external force on the joint, and verified that the oscillation of the joint angle changes immediately with the change of the articular stiffness. Future work includes verifying the response under high-frequency external loading. Other open problems are the stretch of the wire, optimization of the KTM, multi-DOF extension and closed-loop control. The KTM is a simpler mechanism than conventional mechanically soft joints; moreover, its freedom of design is higher and miniaturization is possible. We expect broad applications of the KTM, such as active shock absorbers for vehicles and applications to manufacturing and household robots.

AUTHORS
Takashi Sonoda* - Fukuoka Industry, Science & Technology Foundation, The University of Kitakyushu, 1-1 Hibikino, Wakamatsu-ku, Kitakyushu, 808-0135, Japan, +81-93-695-6102 (ext. 2838). E-mail: [email protected].
Yuya Nishida, Amir Ali Forough Nassiraei, Kazuo Ishii - Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan, +81-93-695-6102. E-mails: {nishida-yuya@, nassiraei@, ishii@}brain.kyutech.ac.jp.
* Corresponding author

(a) The joint angle. (b) The currents of the motors of the joint.

Fig. 12. Experimental results of the response to changing stiffness.

References
[1] Crisman J.D., Bekey G., "The Grand challenge for robotics and automation", IEEE Robotics & Automation Magazine, vol. 3, 1996, pp. 10-16.
[2] Engelberger J.F., Robotics in Service, The MIT Press, Cambridge, MA, 1989.
[3] Friedman B., et al., "Hardware companions?: What online AIBO discussion forums reveal about the human-robotic relationship". In: Proc. of CHI 2003, ACM Press, 2003, pp. 273-280.
[4] Melson G.F., et al., "Robots as Dogs?: Children's Interactions with the Robotic Dog AIBO and a Live Australian Shepherd". In: Proc. of the Conference on Human Factors in Computing Systems, 2005, pp. 1642-1659.
[5] Bartneck C., Forlizzi J., "Shaping human-robot interaction: Understanding the social aspects of intelligent robot products". In: Ext. Abstracts CHI 2004, ACM Press, 2004, pp. 1731-1732.

[6] Hirai K., Hirose M., Haikawa Y., Takenaka T., "The development of Honda Humanoid Robot". In: Proc. of the 1998 IEEE International Conference on Robotics & Automation, Leuven, Belgium, 1998, pp. 1321-1326.
[7] Honda Motor Co., Ltd., Asimo year 2000 model, http://world.honda.com/ASIMO/technology/spec.html.
[8] Akazawa K., Aldridge J.W., Steeves J.D., Stein R.B., "Modulation of Stretch Reflexes During Locomotion in the Mesencephalic Cat", Journal of Physiology, vol. 329, 1982, pp. 553-567.
[9] Siciliano B., Khatib O. (Eds.), Springer Handbook of Robotics, Springer Berlin Heidelberg, 2008, pp. 161-185.
[10] Laurin-Kovitz K.F., Colgate J.E., Carnes S.D.R., "Design of Components for Programmable Passive Impedance". In: Proc. of the 1991 International Conference on Robotics & Automation, 1991, pp. 1476-1481.
[11] Noborisaka H., Kobayashi H., "Design of a Tendon-Driven Articulated Finger-Hand Mechanism and Its Stiffness Adjustability", JSME International Journal, Series C: Mechanical Systems, Machine Elements and Manufacturing, vol. 43, 2000, no. 3, pp. 638-644.
[12] Yamaguchi J., Takanishi A., "Development of a Leg Part of a Humanoid Robot: Design of a Biped Walking Robot Having Antagonistic Driven Joints Using a Nonlinear Spring Mechanism", Advanced Robotics: the International Journal of the Robotics Society of Japan, vol. 11, 1997, no. 6, pp. 633-652.
[13] Morita T., Sugano S., "Development and Evaluation of Seven-D.O.F. MIA ARM". In: Proc. of the 1997 IEEE International Conference on Robotics & Automation, USA, 1997, pp. 462-467.
[14] Koganezawa K., Nakazawa T., Inaba T., "Antagonistic Control of Multi-DOF Joint by Using the Actuator with Non-Linear Elasticity". In: Proc. of the 2006 IEEE International Conference on Robotics & Automation, USA, 2006, pp. 2201-2207.
[15] Schulte H.F., "The Characteristics of the McKibben Artificial Muscle". In: Application of External Power in Prosthetics and Orthotics, National Academy of Science, 1961, pp. 94-115.
[16] Akazawa K., Biomechanism Library: Biological Information Engineering, Tokyo Denki University Press (in Japanese), 2001, pp. 81-103.
[17] Feldman A.G., "Once more on the Equilibrium-Point hypothesis (λ model) for motor control", Journal of Motor Behavior, vol. 18, 1986, no. 1, pp. 17-54.
[18] Hanafusa H., Adli M.A., "Effect of Internal Forces on Stiffness of Closed Mechanisms". In: Proc. of the 5th International Conference on Advanced Robotics, Italy, 1991, pp. 845-850.
[19] Li Z., Kubo K., Kawamura S., "Effect of internal force on rotational stiffness of a bicycle handle". In: Proc. of the 1996 IEEE International Conference on Systems, Man, and Cybernetics, 1996, pp. 2839-2844.
[20] Marr D., "A theory of cerebellar cortex", Journal of Physiology, vol. 202, 1969, pp. 437-470.
[21] Albus J.S., "A theory of cerebellar function", Mathematical Biosciences, vol. 10, 1971, pp. 25-61.


New chapter in the cooperation of NASA with GM

Engineers and scientists from NASA and General Motors, with the help of engineers from Oceaneering Space Systems of Houston, developed and built the next-generation Robonaut 2 (R2): a faster, more dexterous and more technologically advanced robot. R2 can use the same tools as humans, which allows it to work safely side by side with humans both on Earth and in space. Going wherever the risks are too great for people, Robonaut 2 will expand our capability for construction and discovery.

Using leading-edge control, sensor and vision technologies, future robots could assist astronauts during hazardous space missions and help GM build safer cars and plants. The idea of using dexterous, human-like robots capable of using their hands to do intricate work is not new to the aerospace industry.

NASA and GM have cooperated since the 1960s (through a Space Act Agreement at the agency's Johnson Space Center in Houston), starting with the development of the navigation systems for the Apollo missions. The original Robonaut, a humanoid robot designed for space travel, was built by the software, robotics and simulation division at Johnson in a collaborative effort with the Defense Advanced Research Projects Agency 10 years ago.

More information on Robonaut and video at: http://robonaut.jsc.nasa.gov
Based on http://www.sciencedaily.com

New simulator for da Vinci Robotic Surgical System

RoSS, the Robotic Surgical Simulator, developed by Thenkurussi ("Kesh") Kesavadas (University at Buffalo's School of Engineering and Applied Sciences) and Khurshid A. Guru (Roswell Park Cancer Institute), allows surgeons to practice the skills needed to perform robot-assisted surgery without risk to patients. It is one of the world's first simulators that closely approximates the "touch and feel" of the da Vinci robotic surgical system.

Khurshid A. Guru, MD, director of the Center for Robotic Surgery and attending surgeon in RPCI's Department of Urology, and Thenkurussi Kesavadas, PhD, professor of mechanical and aerospace engineering at UB and head of its Virtual Reality Lab, founded the Western New York-based spin-off company, Simulated Surgical Systems, LLC, to commercialize the simulators. Both stress how important proficiency training is for surgeons, especially in robot-assisted operations, just as in aviation, they say. "Think of the RoSS as a flight simulator for surgeons", said Mr Kesavadas to RobotWorldNews' reporter. The RoSS will play an educational role for RPCI and other similar institutions involved in robot-assisted surgical systems.

Already, at least 70 percent of all prostate surgeries in the U.S. are performed using robotic surgical systems; robotic surgeries are generally less invasive, cause less pain, require shorter hospital stays and allow faster recoveries than conventional surgery. Robotic surgical systems are increasingly being used for gynecologic, gastrointestinal, cardiothoracic, pediatric and other urologic surgeries. RoSS is expected to be commercially available by the end of 2010.

More information: http://www.buffalo.edu/news/10997
Based on: http://www.robotworldnews.com

IN THE SPOTLIGHT

Precise security technology based on voice recognition

According to Dr. Robert Rodman, professor of computer science at North Carolina State University, and his fellow researchers, their new research will help improve the speed of speech authentication while keeping its accuracy. No two voices are identical, just like fingerprints or faces. Current technology is still too slow to gain widespread acceptance: recognizing a person's voice may take several seconds or more. The response time needs to improve without increasing the error rate.

The scientists extended the SGMM method (sorted GMM, a novel Gaussian selection method) by using 2-dimensional indexing, which leads to simultaneous frame and Gaussian selection. They modified existing computer models to streamline the authentication process so that it operates more efficiently.

Potential users of this technology are governments and the financial, insurance, health-care and telecommunications industries: everywhere a high level of data protection is needed.

The other co-authors of the research are Rahim Saeidi, Tomi Kinnunen and Pasi Franti of the University of Joensuu in Finland, and Hamid Reza Sadegh Mohammadi of the Iranian Academic Center for Education, Culture & Research.

The research, "Joint Frame and Gaussian Selection for Text Independent Speaker Verification", was presented in March at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) in Dallas.

Source: http://www.sciencedaily.com, http://www.ncsu.edu

New feature of robots: a runny nose

At Tsukuba University in Japan, Yotaro, a robot which emulates a real baby, has been constructed. Its full-cheeked face, made of soft translucent silicone with a rosy hue, looks a little weird with its luminous blue eyes. It also sports a pair of teddy-bear ears.

The robot's face is backlit by a projector connected to a computer to simulate crying, sneezing, sleeping and smiling, while a speaker can let out bursts of baby giggles. Mood changes are based on the frequency of touches, and it moves its arms and legs when different parts of its face and body are touched. Yotaro also simulates a runny nose, with the help of a water pump that releases body-temperature droplets of water through the nostrils.

The inventors hope that cute Yotaro will encourage young Japanese people toward parenting by showing its pleasures, as Japan faces a demographic crisis. Japan has the world's longest average life expectancy and one of the lowest birth rates: a fifth of the population is aged 65 or older, and by 2050 that figure is expected to rise to 40 percent.

Source: http://www.physorg.com



EVENTS SPRING-SUMMER 2010

April
16 – 18  ICMAE 2010 – IEEE International Conference on Mechanical and Aerospace Engineering, Chengdu, Sichuan, China. http://www.iacsit.org/cmae
23 – 25  ICIII 2010 – International Conference on Industrial and Intelligent Information, Bangkok, Thailand. http://www.iciii.org
23 – 25  ICTLE 2010 – International Conference on Traffic and Logistic Engineering, Bangkok, Thailand. http://www.ictle.org

June
11 – 13  ICNIT 2010 – International Conference on Networking and Information Technology, Manila, Philippines. http://www.icnit.org
13 – 17  ICAISC 2010 – 10th International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland. http://icaisc.org
16 – 18  SEDE 2010 – 19th International Conference on Software Engineering and Data Engineering, San Francisco Fisherman's Wharf, San Francisco, California, USA. http://www.users.csbsju.edu/~irahal/sede2010
22 – 24  ICINT 2010 – 2nd International Conference on Information and Network Technology. http://www.icint.org
23 – 24  3rd International Conference on Human-Robot Personal Relationships, Leiden, Netherlands. http://hrpr.liacs.nl
24 – 26  RoEduNet 2010 – 9th RoEduNet International Conference, Sibiu, Romania. http://roedu2010.ulbsibiu.ro
28 – 30  Design and Nature 2010 – 5th International Conference on Comparing Design in Nature with Science and Engineering, Pisa, Italy. http://www.wessex.ac.uk/10-conferences/design-and-nature-2010.html

July
9 – 11  ICIAE 2010 – IEEE International Conference on Information and Applied Electronics, Chengdu, Sichuan, China. ICIAE 2010 will be held in conjunction with the 3rd IEEE ICCSIT 2010. http://www.iciae.org
9 – 11  ICCSIT 2010 – 3rd IEEE International Conference on Computer Science and Information Technology, Chengdu, Sichuan, China. http://www.iccsit.org

August
1 – 3  ICMEE 2010 – 2nd IEEE International Conference on Mechanical and Electronics Engineering, Tokyo, Japan. http://www.icmee.org
20 – 22  ICACTE 2010 – 3rd IEEE International Conference on Advanced Computer Theory and Engineering, Chengdu, Sichuan, China. http://www.icacte.org

September
8 – 10  ADM 2010 – 3rd International Conference on Advanced Design and Manufacture, Nottingham, United Kingdom. http://www.admec.ntu.ac.uk/adm2010/home.html