ROBOTICS AND MACHINE PERCEPTION NEWSLETTER NOW AVAILABLE ON-LINE

Technical Group members are being offered the option of receiving the Robotics and Machine Perception Newsletter electronically. An e-mail is being sent to all group members with advice of the web location for this issue, and asking members to choose between the electronic and printed version for future issues. If you are a member and have not yet received this message, then SPIE does not have your correct e-mail address. To receive future issues electronically, please send your e-mail address to: [email protected] with the word Robotics in the subject line of the message and the words electronic version in the body of the message. If you prefer to receive the newsletter in the printed format, but want to send your correct e-mail address for our database, include the words print version preferred in the body of your message.

Calendar—See page 10
Technical Group Registration Form—See page 11

OCTOBER 2004 VOL. 13, NO. 2
SPIE International Technical Group Newsletter

Biorobotic study of Norway rats in individuals and groups

Norway rat pups provide an excellent system for the biorobotic study of embodied, environmentally-embedded, sensorimotor-based intelligence. They are sufficiently simple, have a protracted developmental period, and exhibit interesting behavior at both individual and group levels. As altricial mammals, Norway rat pups have underdeveloped senses and poorly-coordinated sensorimotor systems. They are born deaf and blind, remaining so for the first two weeks of life, and have very limited use of olfactory cues, relying primarily on tactile sensors.

Individually, rat pups embody a host of taxes, or environmental orientations. The most salient of these is thigmotaxis, an orientation toward objects, which causes rat pups to follow walls, 'snoop' in corners, and aggregate in huddles with other pups. Rat pups also exhibit a negative phototaxis, locomoting away from light sources visible behind their eyelids. This taxis is observed for a period prior to the opening of their eyes. Rat pups' extended sensory development allows us to sequentially implement touch sensors, primitive light sensors, and full-blown visual sensors, building the corresponding control systems step-by-step.

Although pups aggregate into groups from birth, data modeling has shown that seven-day-old pup activity is independent of other pups, while ten-day-old pups' behaviors are coupled.1 One question, then, is what gives rise to these inter-pup dependencies. We are currently entertaining two hypotheses. Continues on page 9.

Figure 1. Comparison of rat and robot highlighting similar morphology.

ROBOTICS AND MACHINE PERCEPTION 13.2 OCTOBER 2004




Editorial

Welcome to this, the second issue of the R&MP newsletter for 2004.

The articles in this issue cover a range of topics and applications in robotics and machine perception. Learning, for example, is a continuing theme in robotics and artificial intelligence. It is represented here in the article by Santos, who describes efforts aimed at formulating theories relating utterances to perceptual (visual) observations. The overall goal is to build machines capable of learning tasks by observation of humans.

Cooperation is another continuing theme in robotics. It is represented here in the articles by Bouloubasis et al. and May et al. The two articles offer contrasting themes in cooperative robotics; the first takes an engineering perspective, focusing on designing robot systems for cooperation, whereas the second takes a more science-oriented perspective, focusing on mimicking cooperative behaviour observed in rats. Both draw strongly on sets of low-level behaviours to formulate models of cooperation. Bouloubasis et al. also emphasise the concept of modularity as a basis for systems design and reconfigurability, and its realisation in the emerging area of networked robotics.

Human-robot interaction is a third theme in this issue, represented in the articles by De Gersem et al., Carignan et al., and McElligott et al. The first two focus on very different applications in medicine. De Gersem et al. describe methods for enhancing haptic feedback in order to afford the discrimination of tissues based on stiffness. Carignan et al., on the other hand, describe methods for assessing the physical condition of a patient and for performing rehabilitation tasks remotely. Both applications offer clear benefits to the doctor and to the patient. The interaction in both cases is also via the Internet, which is also the medium for the online robot arena described by McElligott et al. In the latter case, however, the focus is on education and invoking the participation of school children and general members of the public, whereas the first two articles focus on the experienced professional.

There are many issues that cross these different articles and underpin important research areas in robotics and machine perception. We encourage you to read the articles and pursue the references for further details.

Gerard McKee
Technical Group Chair
The University of Reading, UK
E-mail: [email protected]

Contributing to the Robotics and Machine Perception Newsletter

News, events, and articles
If there are things you'd like to see us cover in the newsletter, please let our Technical Editor, Sunny Bains {[email protected]}, know by the deadline indicated below. Before submitting an article, please check out our full submission guidelines at:
http://www.sunnybains.com/newslet.html

Special issues
Proposals for topical issues are welcomed and should include:
• A brief summary of the proposed topic;
• The reason why it is of current interest;
• A brief resumé of the proposed guest editor.
Special issue proposals should be submitted (by the deadline) to Sunny Bains and will be reviewed by the Technical Group Chair.

Upcoming deadlines
24 November 2004: Special issue proposals.
10 December 2004: Ideas for articles you'd like to write (or read).
4 March 2005: Calendar items for the 12 months starting April 2005.


Robotic rehabilitation over the internet

Robots have been explored as possible rehabilitation aids in laboratory settings for well over a decade. These investigations have recently expanded into the field of 'teletherapy' whereby a clinician can interact with patients in remote locations. However, time-delays encountered in force feedback between remotely-located rehabilitation devices can cause instability in the system, and have so far prevented widespread application over the internet.

The ISIS Center at Georgetown University Medical Center has recently assembled a robot rehabilitation testbed consisting of a pair of InMotion2 robots1 connected over the internet (see Figure 1). The specific aims of our research are twofold: to enable a clinician to assess the physical condition of a patient's arm using metrics such as strength, dexterity, range of motion, and spasticity; and to help a clinician perform cooperative rehabilitative tasks with a patient using a virtual environment intended to simulate active daily living (ADL) tasks.

An example of a cooperative rehabilitation task is depicted in Figure 2. The patient and therapist 'pick up' opposite ends of a virtual beam by grasping the handle at the end of the robot or 'master' arm.2 The parameters of the object, such as mass, length, and inertia, can be adjusted to correspond to real-life objects using a graphical user interface (GUI) on the robot's computer. As the object is being 'lifted', the side that is lower will begin to feel more of the weight, thus stimulating the participants to maintain the beam in a horizontal position. Also, if one side tugs on the object, the other side feels it, thus encouraging a cooperative strategy to lift.

Time-delay compensation is key to avoiding instabilities in force-feedback interaction over a distance. In our research, the wave-variable method emerged as the most natural approach for manipulating virtual objects over the internet.3 Instead of sending force commands directly, an impedance 'wave' command is issued by the master and then subsequently reflected by the virtual object or 'slave'. How much of the wave is reflected depends upon the impedance of the virtual object: a compliant object will not reflect the incoming wave as greatly as a rigid wall.

The robots are controlled by computers running RT-Linux and connected to the internet as illustrated in Figure 3. When controlling a patient's robot directly, a client application on that patient's computer connects to a server on their therapist's computer. User Datagram Protocol (UDP) enabled communication speeds of up to 100 16-byte datasets per second. A virtual object generator (VOG) calculates the motion of the object resulting from the force inputs from the patient and the therapist.4 An artificial time delay (T1 in Figure 3) is introduced between the VOG process and the local controller to maintain symmetry in communication between the VOG and the two robots.

To date, we have successfully conducted the virtual beam task between Georgetown University and Interactive Motion Technology in Cambridge, Massachusetts. Internet delays of up to 110ms were observed, which caused borderline stability without compensation. The wave-variable controller was found to be robust to variations in time delay, and performance degraded only gradually with increasing delay. The degradation manifests itself as a higher perceived inertia at the handle, which was still quite manageable even with artificially-induced delays of over 1s.

We are currently testing even larger time delays and examining the incorporation of sensed force into the wave-variable methodology to produce higher-bandwidth feedback to the clinician. In addition, a head-mounted display and tracker are being integrated into the system to allow for more realistic simulations using three-dimensional visualization. Coordination of the haptic and visual feedback in the simulator is an area of ongoing research.

Figure 1. A researcher engages in a game of air hockey on an InMotion2 robot.

Figure 2. A virtual beam task realized with two robot handles coincident with the ends of the beam.

Figure 3. System architecture for implementing a cooperative internet task.
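To make the wave-variable idea concrete, here is a minimal sketch of the standard transformation, not the testbed's actual controller: velocity and force are encoded into forward and backward wave variables before transmission. The wave impedance b and the sample values are assumptions for illustration.

```python
import math

def to_waves(x_dot, force, b=1.0):
    """Encode velocity and force into (forward, backward) wave variables.
    b is the wave impedance, a tuning parameter (value assumed here)."""
    u = (b * x_dot + force) / math.sqrt(2.0 * b)
    v = (b * x_dot - force) / math.sqrt(2.0 * b)
    return u, v

def from_waves(u, v, b=1.0):
    """Decode wave variables back into velocity and force."""
    x_dot = (u + v) / math.sqrt(2.0 * b)
    force = (u - v) * math.sqrt(b / 2.0)
    return x_dot, force
```

The identity x_dot * F = (u^2 - v^2) / 2 is what makes the scheme passive, and hence stable, regardless of how long the waves take to cross the network: delaying u and v does not inject energy, which is why performance degrades only gradually (as extra perceived inertia) rather than becoming unstable.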


Systems design for working rovers

A current theme in the exploration of Mars using robotics systems is the construction of habitats for maintaining a robotic and eventual human presence.1 A key aspect of this task is the transport of extended payloads using a set of cooperating robots. The work presented in this article is part of ongoing research in the Active Robotics Laboratory (ARL) at the University of Reading to explore networked robotics concepts for the coordination and control of multi-robot systems1,2 for such tasks.

The networked robotics approach assumes the presence of a set of resources distributed across the robot platforms. These resources can be recruited in different architectural configurations to create, across the platforms, a set of networked robotics agents. A single robotic agent may draw from a number of physically-decoupled robot platforms. This provides the possibility of configuring the multiple robot systems into a single robotic entity when close inter-robot coupling and coordination is required. For example, the dual-robot transportation task incorporates requirements for various levels of coupling and coordination:3 the approach to the pick-up point can be relatively autonomous, whereas grasping requires coordination and traversal requires both coupling and coordination.

The design of the ARL Rovers shown in Figure 1 has been customized to integrate the requirements for the complete task and hence has a number of constraints it needs to satisfy. It is strongly influenced by the traversal constraints, specifically the requirement to traverse over uneven terrain, as well as by the fact that the rovers' trajectory cannot be perfect. As a result, the load has six degrees of freedom (DOF) with respect to the grippers. The manipulator systems and the two mobile bases must be able to accommodate this 'deflection' and also achieve the desired motion.

At present, only two agents have been constructed. Each comprises a mobile base, a manipulator, and a stereo camera system. The low-level controller is built around the PIC16F873 microcontroller, and the on-board high-level controller will be a MINI-ITX board equipped with a wireless modem. The mechanical design of the rovers incorporates a rocker-bogie mobility system with a passive suspension mechanism. The motorized six-wheeled drive uses its two rear wheels, via servomechanisms, for steering. The manipulators consist of three major components: a two-DOF arm, a passive but compliant one-DOF wrist, and an active gripper mechanism (see Figure 2). Apart from the motors and the gearboxes, all the components were hand-crafted.

Figure 1. The Active Robotics Laboratory rovers.

Figure 2. The rover manipulator.

There are a total of 17 sensor signals available on each unit, giving information about the status of the load (i.e. the amount of slide of the load within the gripper mechanism as well as the angle of the load with respect to the rover body), the position of the joints and the wrist, the stress imposed upon the fingers by the load, the relative motion of the base, and the relative position of each of the rovers with respect to each other. Many of the switches had to be custom made in order to fit the application.

The information obtained from the sensory systems is fed to a small network of three microcontrollers. A design goal is modularity, so if there are cases where certain sensory information is needed by, say, two controllers, they are both connected to the particular sensor. Each of the modules runs almost independently and there is minimal communication between them. This facilitates the addition of modules. The communication between the modules is accomplished through a serial interface, with the high-level controller acting as a bridge. The controllers will use the sensory signals not only to perform the basic control but also to form behaviors and respond in the way that each situation demands. A number of reflexive behaviors have been developed to support the dual transportation task. For instance, the Grasp Behavior (GB), once enabled, will automatically open the fingers if excessive stress is detected. The behaviors can suppress basic control functions: for example, if the command 'close gripper' is given whilst the GB is enabled, and the fingers open because there is excessive stress in the gripper, the fingers will continue to open until the stress levels become acceptable or the behavior is disabled.
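The suppression logic of the Grasp Behavior could be sketched as follows. This is a hypothetical reconstruction from the description above, not the rovers' actual firmware: the function name, command strings, and stress threshold are all assumptions for illustration.

```python
STRESS_LIMIT = 100  # hypothetical threshold on the finger stress signal

def gripper_command(requested, finger_stress, gb_enabled):
    """Resolve a requested gripper command against the Grasp Behavior (GB).

    While GB is enabled, excessive finger stress forces the fingers open,
    suppressing even an explicit 'close' request; the basic command is
    obeyed again once stress falls back to an acceptable level (or GB is
    disabled)."""
    if gb_enabled and finger_stress > STRESS_LIMIT:
        return "open"       # behavior suppresses the basic control function
    return requested        # otherwise obey the basic command
```

For example, a 'close' request issued while stress is excessive and GB is enabled resolves to 'open'; the same request with GB disabled is passed through unchanged.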

First in the hierarchy of the low-level controller are 'sequences', and each sequence is divided into 'states'. Once it is in a particular sequence, the robot will execute actions in series in order to achieve a goal. An example is the approach sequence, in which the rover will approach and align itself with the load. When the approach sequence is initiated, each of the modules will enter its own version of that sequence. During the approach, the modules will collectively use the information obtained by the sensors to judge whether a particular state in their respective sequence has been reached and, if so, to move to the next state until the approach sequence is completed. In some cases behaviors will be utilized by a sequence. There is minimal communication between the modules and no guidance from the high-level controller. The system is set up in a way such that each module will recognize its current state and act accordingly.

The low-level controller has been tested successfully and the next stage of the work is to integrate the sequences under the guidance of the high-level controller. This stage will include the introduction of visual guidance for the task.
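The sequence-of-states pattern described above can be sketched as a small state machine that each module runs on its own copy of the sequence. The state names, sensor fields, and thresholds below are illustrative assumptions, not the actual approach sequence.

```python
class ModuleSequence:
    """Each module advances to its next state only when its own sensors
    indicate the current state's goal has been reached; no guidance from
    the high-level controller is needed."""

    def __init__(self, states):
        # states: ordered list of (state_name, completion_test(sensors) -> bool)
        self.states = states
        self.index = 0

    def current(self):
        return self.states[self.index][0] if self.index < len(self.states) else "done"

    def step(self, sensors):
        """Check the current state's completion test; advance if it passes."""
        if self.index < len(self.states) and self.states[self.index][1](sensors):
            self.index += 1
        return self.current()

# A toy 'approach' sequence for one module (names and thresholds assumed).
approach = ModuleSequence([
    ("drive_to_load", lambda s: s["range_mm"] < 50),
    ("align",         lambda s: abs(s["angle_deg"]) < 2),
])
```

Each call to `step` feeds in the latest sensor readings; the module stays in `drive_to_load` until the range threshold is met, then aligns, then reports `done`, mirroring how the real modules move through their respective sequences.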

Antonios Bouloubasis and Gerard McKee
The University of Reading, UK
E-mail: [email protected]

References
1. P. S. Schenker, T. L. Huntsberger, P. Pirjanian, and G. T. McKee, Robotic autonomy for space: closely coordinated control of networked robots, Proc. i-SAIRAS '01, Montreal, Canada, June 18-21, 2001.
2. A. K. Bouloubasis, G. T. McKee, and P. S. Schenker, A Behavior-Based Manipulator for Multi-Robot Transport Tasks, Proc. IEEE Int'l Conf. on Robotics and Automation (ICRA), pp. 2287-2292, 2003.
3. A. Trebi-Ollennu, H. Das Nayer, H. Aghazarian, A. Ganino, P. Pirjanian, B. Kennedy, T. L. Huntsberger, and P. Schenker, Mars Rover Pair Cooperatively Transporting a Long Payload, Proc. IEEE Int'l Conf. on Robotics and Automation (ICRA), pp. 3136-3141, 2002.


Looking for logic in perceptual observations

In his seminal work1 Bertrand Russell suggests that the discovery of what is really given in perceptual data is a process full of difficulty. We have taken this issue seriously in AI by applying a knowledge discovery tool to build theories from perceptual observation. In particular, this article introduces some of the research we are conducting on the autonomous learning of rules of behavior from audio-visual observation.

In our framework, classification models of perceptual objects are learned (in an unsupervised way) using statistical methods that enable a symbolic description of observed scenes to be created. Sequences of such descriptions are fed into an off-the-shelf inductive logic programming (ILP) system, called Progol,2 whose task is, in this context, to construct theories about the perceptual observations. The theories thus constructed are further used by an automatic agent to interpret its environment and to act according to the protocols learned.

In its current implementation, our system is capable of learning the rules of simple table-top games (and how to use them) from the observation of people engaged in playing them. Our basic motivating hypothesis for assuming game-playing scenarios is that these can provide both rich domains, allowing multiple concepts to be learned, and domains with gradually-increasing complexity, so that concepts can be learned incrementally. Moreover, it has been widely argued that games provide interesting ways of modelling any social interaction.3

The experimental setting used in this research is composed of two video cameras, one observing a table-top where a game is taking place, and another pointed at one of the players. This player also has a microphone recording the utterances that he produces when playing the game. The purpose of the second camera is to capture the facial movements of the player whose voice is being recorded, so that a synthetic agent can reproduce them in similar situations. A schematic of the system is shown in Figure 1.

The vision system consists of a spatio-temporal attention mechanism and an object classifier. Classes obtained from vision data are used to provide a symbolic description of the states of the objects observed on the table top. This is

used as input data for Progol.

In particular, our interests in this research are twofold. The first aim of this project was the autonomous learning of perceptual categories from continuous data, grounding them into meaningful (symbolic) theories. The second (but no less important) aim is the autonomous discovery of simple mathematical rules from the perceptual observation of games. We shall consider these goals in turn.

Grounding symbols to the world

In the current guise of this project, symbolic data provided by the audio and vision systems are input to the knowledge discovery engine as atomic formulae. Within these formulae, symbols for utterances are arguments of predicates representing actions in the world, whereas symbols for visual objects compose atomic statements representing the state of the world. Both statements are time-stamped with the time point relative to when they were recorded. A relation successor connects two subsequent time-points. The task here is to use Progol to discover the relationship between the utterances produced by one of the players and the objects played on the table. Therefore, we are interested in the autonomous learning of the connection of audio and visual objects within a particular context.
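The input encoding just described might look roughly like the following sketch, which generates Prolog-style, time-stamped atoms of the kind an ILP system such as Progol consumes. The predicate names and example symbols here are illustrative assumptions, not the project's actual vocabulary.

```python
def state_atom(visual_class, t):
    """State of the world at time t: a symbol produced by the vision system."""
    return f"state({visual_class}, t{t})."

def action_atom(utterance, t):
    """An utterance appears as the argument of an action predicate."""
    return f"action(utters({utterance}), t{t})."

def successor_atom(t):
    """The successor relation links two subsequent time points."""
    return f"successor(t{t}, t{t + 1})."

# A toy observation stream: (visual class, utterance) per time point (assumed).
observations = [("obj_a", "pick"), ("obj_b", "same")]

facts = []
for t, (seen, said) in enumerate(observations):
    facts.append(state_atom(seen, t))
    facts.append(action_atom(said, t))
    if t + 1 < len(observations):
        facts.append(successor_atom(t))

print("\n".join(facts))
```

Fed a sequence of such ground facts, the ILP engine's job is to induce general clauses relating the `action` atoms to the `state` atoms across successive time points.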

Some preliminary results of this research, presented in Reference 4, show that the system could learn accurate descriptions of simple table-top games.

Learning simple axioms from observation

We next submitted the system to the challenge of finding the transitivity, reflexivity, and symmetry axioms from the observation of games, without assuming any preconceived notion of number or any pseudo-definition of ordering. This research is described in References 5 and 6, where these axioms are obtained from noisy data with the help of a new ensemble algorithm that combines the results of multiple Progol processes by means of a ranking method.

It is worth pointing out that the discovery of simple mathematical axioms from observation, without assuming any explicit background knowledge, was not set for pure intellectual pleasure. In fact, I believe that the capacity of abstracting general truth from simple data is essential if any system is to solve new problems and to overcome obstacles not seen before.

Conclusion

The final aim of the project summarized above is to build a machine capable of learning (and further executing) human tasks by observing people accomplishing them. The preliminary results of this research suggest that the combination of statistical methods with inductive logic programming, applied to the task of learning behavior by observation, is a promising route towards our final goal.

Chris Needham, Derek Magee, Anthony Cohn, and David Hogg are all active participants in the development of the project described above. Thanks to Brandon Bennett and Aphrodite Galata for discussions and suggestions.

Paulo Santos
CogVis Group
School of Computing
University of Leeds, UK
http://www.computingleeds.ac.uk/
E-mail: [email protected]

References
1. B. Russell, Our Knowledge of the External World, The Open Court Publishing Company, 1914. Reprinted in 2000 by Routledge.
2. S. Muggleton, Inverse entailment and Progol, New Generation Computing, Special issue on Inductive Logic Programming, 13 (3-4), pp. 245-286, 1995.
3. S. P. H. Heap and Y. Varoufakis, Game Theory: a critical introduction, Routledge, 1995.
4. D. Magee, C. J. Needham, P. Santos, A. G. Cohn, and D. C. Hogg, Autonomous learning for a cognitive agent using continuous models and inductive logic programming from audio-visual input, Proc. AAAI Workshop on Anchoring Symbols to Sensor Data, 2004.
5. P. Santos, D. Magee, and A. Cohn, Looking for logic in vision, Proc. 11th Workshop on Automated Reasoning, pp. 61-62, Leeds, 2004.
6. P. Santos, D. Magee, A. Cohn, and D. Hogg, Combining multiple answers for learning mathematical structures from visual observation, Proc. European Conf. on Artificial Intelligence (ECAI-04), Valencia, Spain, 2004, to appear.

Figure 1. A schematic of the experimental set-up that allows machine-learning of game rules through observation.


A Mars Station online robot arena

An online robot system offers visitors to a website the opportunity to control a real robot that is normally located in a university laboratory, perhaps set up as part of a student project. A video stream delivered to the user's browser is normally the means by which the user can watch what is happening in the remote environment. Over recent years, online robot systems have either moved out of the laboratory, into museums for example, or they have remained in the laboratory but encapsulated within specially-constructed arenas. It is also now possible to distinguish two types of online robot system: open systems that are primarily for public consumption, and closed systems that are used for recurring educational instruction. Open systems are particularly challenging since the designers often want to encourage continuing and repeated participation on the part of visitors. These systems may also have an educational value, seeking the public understanding of science.

A notable case in point is the ongoing USA and European missions to Mars.1,2 What better use of an online robot system than to give the general public some sense of the wonder and awe that the professional scientists participating on these missions are experiencing? This has been the goal of The Planetary Society Mars Station concept, comprising a robotic rover that school children and general members of the public can drive about an arena that models a location on the surface of Mars.3 We recently built such an arena in the Active Robotics Laboratory at the University of Reading as part of a project to set up a number of Mars Stations in the UK. The project was funded by the UK's Particle Physics and Astronomy Research Council (PPARC) under its Public Understanding of Science Small Awards Scheme. This article describes the construction of the arena and some novel techniques we used to maintain a tether to the rover as it moved about the terrain. The construction is in one sense simple, but nevertheless very effective in creating a visually-appealing terrain for visitors to explore.

The Mars Station arena, or 'diorama', models a specific geographic location on Mars. We chose to model the Mars location known as 'Eos Chaos'. This is a 'chaotic' terrain, meaning that it comprises a rugged unstructured landscape with steep escarpments and gullies. The basic frame for the arena was constructed from the metal underframes of two laboratory benches that were put together sideways: the top surfaces were lifted away, and the adjacent upper-longitudinal sections of the two benches were then removed to create a rectangular bay for the terrain. The bay structure is clearly illustrated in Figure 1.

The actual Mars terrain was constructed on a removable wooden base to allow easy maintenance and the option of swapping in alternative terrain models. Sections of wire mesh were bent into shape and pinned to the base to create the undulations used to reflect the chaotic terrain of Eos Chaos. The wire mesh surface thus created was then covered with about five layers of papier mâché and painted with four layers of thick 'rustic red' house paint. Sand was added prior to the final coat in order to generate a more realistic surface. Rocks were later added to the landscape. The walls of the arena were made out of 3mm plywood and were painted by an art student with a Martian backdrop in order to provide a horizon and an impression of depth (Figure 2).

One of the main problems when setting up an online robot arena is supplying power to the robot so that it can be permanently live. A wire tether is normally used for mobile robots. The presence of the tether, however, creates an interesting challenge; specifically, how to make sure that the rover can traverse the entire arena without getting tangled up. A cable running from a point well above the arena normally suffices; however, our arena used an alternative method based on a simple passive-control mechanism. The tether is mounted from a hoist that can move in the X-Y plane above the arena. It passes through a simple ball mechanism that rests on a set of four switches. These are arranged such

Figure 1. Students learning about the design of the University of Reading Mars Station arena during the launch of the UK Mars Stations Network.

Figure 2. The painted surrounds add depth to the arena landscape.

Figure 3. The hoist mechanism allows the tether to track the rover.

as to control two dc motors that drive the hoist in the X-Y plane. Figure 3 shows the hoist mechanism in profile. Whenever the rover movement causes one or more of the switches to close, the hoist moves to maintain a position approximately above it. Additional stop switches mounted to the arena frame prevent the hoist hitting either end of the arena. An additional weight, specifically a spare battery pack, was mounted on the LEGO rovers (designed and constructed by co-author Ashley Green) in order to compensate for the weight of the tether. Full construction plans for this rover design are available online.4

The University of Reading Mars Station was the focal point for the launch of the UK Mars Station network during the UK's National Science Week. Since its launch, the arena has required only minor maintenance and receives a regular stream of visitors each week.5
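The passive tether-following logic could be sketched as follows. This is a hedged reconstruction of the mechanism described above, not the arena's actual wiring: the switch names, the mapping to motor commands, and the stop-switch keys are assumptions for illustration.

```python
def hoist_drive(tether_switches, stop_switches):
    """Map closed tether switches to (x, y) motor commands in {-1, 0, +1}.

    The tether ball rests on four switches; whichever switch the rover's
    pull closes tells the hoist which way to move. Stop switches on the
    arena frame veto motion toward either end of the arena (only the X
    axis is shown; Y would be handled the same way)."""
    x = (-1 if tether_switches.get("west") else 0) + \
        (1 if tether_switches.get("east") else 0)
    y = (-1 if tether_switches.get("south") else 0) + \
        (1 if tether_switches.get("north") else 0)
    if x < 0 and stop_switches.get("stop_x_min"):
        x = 0
    if x > 0 and stop_switches.get("stop_x_max"):
        x = 0
    return x, y
```

Run in a loop, this keeps the hoist approximately above the rover: each motor drives only while its switch stays closed, so the hoist stops as soon as the tether hangs vertically again.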

potheses: that they are a function of differ-ential sensorimotor integration abilities, orthat they are rooted in the thermoregulatorybenefits provided by huddling. We believethat these dependencies mark the emergenceof social behavior, which is further devel-oped later in life. Thus, Norway rat pupsmay also provide important insights intogroup robotics.

Robots were designed to capture the essential properties of the rat pups: they are long, with a somewhat-pointed head that rounds out at the nose, and have the same length-to-width ratio as rat pups (approximately 3:1), where the head constitutes one third of the length. An aluminum skirt with 14 micro/limit (touch) switches epoxied to brass strips allows for a 360° sensory range. Sensor cluster density is substantially higher at the nose, mimicking a rat's sensory montage. Since rat pups' front legs are underdeveloped and aid only in steering, the pups primarily use their back legs for locomotion. Accordingly, the robots are equipped with rear-driven wheels in a differential-drive arrangement on a single chassis. Four identical robots have been completed to date, and four more are under construction. A subsequent generation is currently in the planning stages, which would have more sophisticated touch sensors, thermal detectors, and more mechanical degrees of freedom.

We are currently developing three classes of control system, with a number of variants of each. All attempt to derive the scope of behaviors above using sensorimotor contingencies alone. The first class is a reactive architecture that employs condition-action rules linking each sensor to a particular movement. The angle of movement is roughly commensurate with the sensor's angle to the prime meridian, thereby encouraging thigmotaxis. Multiple sensor activations are handled differently in separate architectures, including behavioral averaging, priority queuing, and random selection. These separate routines provide for comparisons of sensorimotor integration strategies. In the absence of an active sensor, the robot randomly chooses an action from its behavioral repertoire. This turns out to have a tremendous advantage over the standard approach, in which some default movement is selected.

Biorobotic study of Norway rats in individuals and groups
Continued from cover.
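One variant of the reactive class can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the eight-action repertoire, the priority-queue tie-breaking rule, and all names are assumptions.

```python
import math
import random

# Illustrative sketch of a reactive condition-action controller: each of the
# 14 touch sensors is bound to a movement whose heading is commensurate with
# the sensor's angle around the body, and with no active sensor the robot
# picks a random action from its repertoire instead of a fixed default.
# The repertoire size and tie-breaking scheme are hypothetical.

N_SENSORS = 14
REPERTOIRE = [i * 2 * math.pi / 8 for i in range(8)]  # eight candidate headings

def reactive_step(active_sensors, rng=random):
    """Map the set of active sensor indices to a movement heading (radians)."""
    if not active_sensors:
        # No contact: random choice from the behavioral repertoire.
        return rng.choice(REPERTOIRE)
    # Priority-queue variant: the lowest-numbered sensor wins. The averaging
    # and random-selection variants would combine the active set differently.
    s = min(active_sensors)
    return s * 2 * math.pi / N_SENSORS  # heading proportional to sensor angle
```

Running this loop against the skirt switches yields wall-following directly from the sensor-to-heading mapping, with no explicit thigmotaxis rule.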

A second class comprises a suite of probabilistic architectures. These are much like the reactive ones, except that each sensor is more weakly tied to its associated action. That is, a given sensor activation will trigger a given behavior with some probability less than one, with the remaining probability dynamically allocated across the rest of the behavioral options. This routine takes a step toward decoupling artificially-imposed action selection, and yields striking pup-like behavior.

Finally, a third class consists of a set of neural-network architectures. As an initial step, we've developed a two-layer, fully-distributed neural network employing a reinforcement-learning algorithm. The input layer represents the 14 sensors, and the output layer the entire behavioral repertoire. The learning algorithm adjusts input-output weightings based on the time between sensor contacts, again encouraging thigmotaxis. With these architectures, action selection is left wholly unspecified, and sensorimotor contingencies develop over time rather than being predetermined. We anticipate that all of the above architectures will yield novel results and provoke interesting new ideas, making important contributions to robotics and biology alike. For more information on our study, please refer to References 2 and 3.
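The second, probabilistic class admits a very small sketch. The coupling probability p, the action names, and the uniform spread of the remaining probability mass over the other options are illustrative assumptions, not details from the published architectures.

```python
import random

# Minimal sketch of probabilistic action selection: a sensor triggers its
# associated action with probability p < 1, and the remaining mass (1 - p)
# is spread uniformly over the other actions, weakening the artificially-
# imposed sensor-action coupling. Names and p are hypothetical.

def probabilistic_select(sensor, actions, p=0.8, rng=random):
    associated = actions[sensor % len(actions)]   # assumed sensor-action binding
    if rng.random() < p:
        return associated
    others = [a for a in actions if a != associated]
    return rng.choice(others)
```

Lowering p interpolates smoothly between the fully reactive controller (p = 1) and uniformly random behavior, which is one way to compare coupling strengths across variants.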

This work is supported by the United States National Science Foundation under Grant Number ITR-0218927.

Christopher May, Jonathan Tran, Jeffrey Schank, and Sanjay Joshi
Department of Mechanical and Aeronautical Engineering
University of California at Davis, CA
E-mail: [email protected]

References
1. J. C. Schank and J. R. Alberts, The developmental emergence of coupled activity as cooperative aggregation in rat pups, Proc. of the Royal Society of London B 267, p. 2307, 2000.
2. S. Joshi and J. Schank, Of rats and robots: a new biorobotics study of Norway rat pups, Proc. of the 2nd Int'l Workshop on the Mathematics and Algorithms of Social Insects, p. 57, 2003.
3. S. Joshi, J. Schank, N. Giannini, L. Hargreaves, and R. Bish, Development of autonomous robotics technology for the study of rat pups, IEEE Int'l Conf. on Robotics and Automation (ICRA), p. 2860, 2004.

Figure 2. Lateral view of robot.

Figure 3. Similar configurations of rats and probabilistically-controlled robots.

Richard T. McElligott, Gerard T. McKee, and Ashley Green*
The University of Reading, UK
*The Open University, UK
E-mail: [email protected]

References
1. The NASA Mars Exploration Rovers: http://www.jpl.nasa.gov/missions/mer/
2. The European Space Agency Mars Express Mission: http://www.esa.int/SPECIALS/Mars_Express/
3. The Planetary Society: http://www.planetary.org
4. Rover design used in the University of Reading Mars Station: http://www.redrovergoestomars.org/rovers/green/rockyA.html
5. The University of Reading Mars Station: http://www.redrover.reading.ac.uk




This project is being supported by the U.S. Medical Research and Materiel Command under Grant #DAMD17-99-1-9022.

Craig Carignan, A. Pontus Olsson*, and Jonathan Tang†
Imaging Science and Information Systems (ISIS) Center
Department of Radiology
Georgetown University Medical Center
Washington, DC, USA
E-mail: {crc32, †jt96}@georgetown.edu, *[email protected]
http://www.visualization.georgetown.edu/inmotion2.htm


Robotic rehabilitation over the internet
Continued from page 3.

Enhanced haptic sensitivity in surgical telemanipulation
Continued from page 12.

implementation of enhanced sensitivity in the desired stiffness range.

Figure 2 shows some experimental results, in which an elastic rubber band was stretched using a one-dimensional teleoperation system. The position-force curve shows a changing slope, corresponding to a changing local stiffness, so two major regions can be identified. At first, when the band is only slightly stretched, a stiffness of about 220 N/m is detected. Stretched further, the stiffness drops to about 170 N/m. The haptic device used shows clear Coulomb friction, which adds damping but does not alter the local stiffness characteristics. The fact that the slopes of the curves in the two regions differ more at the master side than at the environment shows how the stiffness difference between the two regions is 'blown up'. Indeed, the difference is such that the regions can easily be distinguished. This enhancement of discriminability between tissues could enable surgeons to define tumor boundaries more securely, and to judge with higher confidence whether crucial structures in the neighborhood are clean
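The idea of local stiffness as the slope of the position-force curve can be made concrete with a short sketch. The sample data below are synthetic, invented only to echo the two stiffness regions (about 220 N/m and 170 N/m) reported above; the helper name is hypothetical.

```python
# Sketch: local stiffness read off a position-force curve as its slope,
# as in the rubber-band experiment. The data points are synthetic.

def local_stiffness(positions, forces):
    """Least-squares slope of force vs. position, in N/m."""
    n = len(positions)
    mx = sum(positions) / n
    mf = sum(forces) / n
    num = sum((x - mx) * (f - mf) for x, f in zip(positions, forces))
    den = sum((x - mx) ** 2 for x in positions)
    return num / den

# Two synthetic stretch regions, mimicking the reported 220 and 170 N/m:
x1 = [0.000, 0.005, 0.010, 0.015]
f1 = [220 * x for x in x1]                       # slightly stretched
x2 = [0.020, 0.025, 0.030, 0.035]
f2 = [f1[-1] + 170 * (x - x1[-1]) for x in x2]   # stretched further
```

Fitting slopes over a sliding window of such samples is one plausible way to detect where the local stiffness changes along the curve.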

2004 Calendar

2004
Carnegie Mellon University Robotics Institute 25th Anniversary Celebration: Grand Challenges of Robotics Symposium
9-11 October
Pittsburgh PA, USA
http://www.ri25.org

The 2004 AAAI Fall Symposium Series
21-24 October
Washington DC, USA
Sponsors: American Association for Artificial Intelligence
http://www.aaai.org/Symposia/symposia.html

Optics East 2004
Robotics Technologies and Architectures
25-28 October
Philadelphia PA, USA
Including: Intelligent Robots and Computer Vision XXII: Algorithms, Techniques, and Active Vision

Robosphere 2004
9-10 November
Mountain View CA, USA
Sponsors: NASA Ames Research Center, Carnegie-Mellon University (West), QSS Inc; AIAA
Abstracts Due September 1, 2004
http://robosphere.arc.nasa.gov/workshop2004

For More Information Contact: SPIE • PO Box 10, Bellingham, WA 98227-0010 • Tel: +1 360 676 3290 • Fax: +1 360 647 1445 • Email: [email protected] • Web: www.spie.org

Figure 2. Experimental results on the interaction with a rubber band: the environment (grey), and the positions and forces at the interface to the human operator (black). The slope of the position-force curves denotes the local stiffness.

or in an infected zone. And as surgeons are able to feel better, safety will increase.

Gudrun De Gersem, Jos Vander Sloten, and Hendrik Van Brussel
Department of Mechanical Engineering
Divisions PMA and BMGO
Katholieke Universiteit Leuven, Belgium
E-mail: [email protected]

References
1. R. Kumar, T. M. Goradia, A. C. Barnes, P. Jensen, L. L. Whitcomb, D. Stoianovici, L. M. Auer, and R. H. Taylor, Performance of Robotic Augmentation in Microsurgery-Scale Motions, Springer-Verlag Lecture Notes in Computer Science 1679, pp. 1108-1115, 1999.
2. S. E. Salcudean, S. Ku, and G. Bell, Performance Measurement in Scaled Teleoperation for Microsurgery, Springer-Verlag Lecture Notes in Computer Science 1205, pp. 789-798, 1997.
3. G. De Gersem, H. Van Brussel, and F. Tendick, A new optimisation function for force feedback in teleoperation, Proc. Int'l Conf. on Computer Assisted Radiology and Surgery (CARS), p. 1354, 2003.

MBR04 Model-Based Reasoning in Science and Engineering: Abduction, Visualization, Simulation
16-18 December
Pavia, Italy
Sponsors: University of Pavia, University of Siena, CARIPLO
http://www.unipv.it/webphilos_lab/courses/progra1.html

2005
IASTED International Conference on Artificial Intelligence and Applications
14-16 February
Innsbruck, Austria
Sponsors: IASTED, WMSF
Abstracts Due September 15, 2004
http://www.iasted.org/conferences/2005/innsbruck/aia.htm




Robotics & Machine Perception
This newsletter is published semi-annually by SPIE—The International Society for Optical Engineering, for its International Technical Group on Robotics & Machine Perception.

Editor and Technical Group Chair: Gerard McKee
Managing Editor/Graphics: Linda DeLano
Technical Editor: Sunny Bains

Articles in this newsletter do not necessarily constitute endorsement or the opinions of the editors or SPIE. Advertising and copy are subject to acceptance by the editors.

SPIE is an international technical society dedicated to advancing engineering, scientific, and commercial applications of optical, photonic, imaging, electronic, and optoelectronic technologies. Its members are engineers, scientists, and users interested in the development and reduction to practice of these technologies. SPIE provides the means for communicating new developments and applications information to the engineering, scientific, and user communities through its publications, symposia, education programs, and online electronic information services.

Copyright ©2004 Society of Photo-Optical Instrumentation Engineers. All rights reserved.

SPIE—The International Society for Optical Engineering, P.O. Box 10, Bellingham, WA 98227-0010 USA. Tel: +1 360 676 3290. Fax: +1 360 647 1445.

European Office: Karin Burger, Manager, SPIE Europe • [email protected] • Tel: +44 (0)29 2056 9169 • Fax: +44 29 2040 4873 • Web: www.spie.org

In Russia/FSU: 12, Mokhovaja str., 119019, Moscow, Russia • Tel/Fax: +7 095 202 1079 • E-mail: [email protected]





Enhanced haptic sensitivity in surgical telemanipulation

Robot-assisted minimally-invasive surgery is a recent driving field for research on haptic feedback within telemanipulation. In minimally-invasive surgery (MIS), long-shafted robot tools are inserted into the patient through small incisions, thereby reducing trauma and recovery time. The use of a robot helps the surgeon to overcome traditional MIS drawbacks such as reduced dexterity and the non-trivial hand-eye coordination problem. In bilateral telemanipulation, the human operator manipulates a haptic interface through which the movements of the surgical robot holding the tools can be controlled. The interface offers the operator visual and haptic information about the remote surgical scene.

Traditionally, the performance of a telemanipulation system is judged by its stability, its tracking behavior, and the obtained transparency. In addition to these system capabilities, we are interested in enhancing the haptic perceptual capabilities of the surgeon through the medical telerobot. This would allow the surgeon to discriminate between tissues better than by bare-hand or direct manipulation.

Absolute and relative sensitivity
The benefits of scaled telemanipulation for surgical applications are clear. The robot can extend human capabilities by allowing greater spatial resolution in microsurgical tasks, reducing tremor, and significantly increasing pick-and-place accuracy.1 A study by Salcudean et al.2 shows that using scaled-force feedback yields better force-tracking behavior combined with significantly less fatigue and a higher confidence level.

Scaling techniques allow absolute human thresholds to be overcome: smaller forces can be perceived than with the bare hand, and micrometer-scale motions can be performed accurately. However, humans also have differential thresholds. Even within the range where the human proprioceptive system is very sensitive, a certain difference in stimuli needs to be present for the human to be able to distinguish between the two.

Within surgical procedures, surgeons specifically use their haptic senses to decide on a variety of problems. An artery or vein that is hidden beneath fat, for example, can be located by probing the tissue, because the vein or artery constitutes a stiffer object embedded in the soft fat. Stiffness discrimination is also most important for defining the boundaries of, e.g., tumorous tissue within healthy tissue.

These examples, given by experienced surgeons, can all be related to one requirement for a telesurgical system: the human operator should be able to discriminate between the stiffnesses of different (soft) tissues at least as well as in open surgery.

According to Weber, stiffness discrimination follows a relative law.3 Only a slight difference is necessary for differentiating soft tissues, while stiffer objects need to differ by a much greater amount in order to be distinguishable by a human. This relative law suggests that, in order to ameliorate discriminative abilities through the system by a factor σ, the environment stiffness k_e should be transmitted to the operator in an exponential relationship, k_t = K·k_e^σ, with K being a constant scaling factor. Naturally, this exponential relationship cannot be obtained using standard linear techniques.

Telemanipulation control for enhanced sensitivity
An extended Kalman filter estimates the encountered stiffness in real time. The control gains of the impedance controller of the haptic interface are adapted to the online estimate of the environment stiffness (see Figure 1). Mathematical manipulation allows for the

Figure 1. Block schematic for a one-dimensional teleoperation system with enhanced haptic sensitivity. The slave is under position control and follows the master, with a motion-scaling factor α if desired. At the haptic interface, adaptive impedance control is implemented, using information about the environment stiffness k_e and an offset force f_0 obtained by real-time estimation.

Continues on page 10.
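The exponential mapping k_t = K·k_e^σ described in this article can be illustrated with a short sketch showing how, for σ > 1, the relative difference between two environment stiffnesses grows at the operator side, which is what Weber's relative law says matters for discrimination. The values of K and σ are arbitrary here, and the 170/220 N/m pair simply echoes the rubber-band experiment.

```python
# Sketch of the exponential stiffness mapping k_t = K * k_e**sigma.
# With sigma > 1, the relative difference between two environment
# stiffnesses is amplified at the operator side. K and sigma are
# illustrative choices, not values from the published controller.

def transmitted_stiffness(k_e, K=1.0, sigma=2.0):
    return K * k_e ** sigma

def relative_difference(a, b):
    return abs(a - b) / min(a, b)

k_soft, k_stiff = 170.0, 220.0   # N/m, echoing the rubber-band regions
r_env = relative_difference(k_soft, k_stiff)
r_op = relative_difference(transmitted_stiffness(k_soft),
                           transmitted_stiffness(k_stiff))
# r_op > r_env: the stiffness contrast is 'blown up' for the operator.
```

Because the mapping is nonlinear in k_e, it cannot be realized with a fixed linear gain, which is why the controller adapts its impedance gains to the stiffness estimate.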