

Robotics and Machine Perception
SPIE's International Technical Group Newsletter
VOL. 10, NO. 1 MARCH 2001

Newsletter now available on-line

Technical Group members are being offered the option of receiving the Robotics and Machine Perception Newsletter in an electronic format. An e-mail notice is being sent to all group members advising you of the web site location for this issue and asking you to choose between the electronic or printed version for future issues. If you have not yet received this e-mail message, then SPIE does not have your correct e-mail address in our database. To receive future issues of this newsletter in the electronic format, please send your e-mail address to [email protected] with the word ROBOTICS in the subject line of the message and the words "Electronic version" in the body of the message.

If you prefer to continue to receive the newsletter in the printed format, but want to send your correct e-mail address for our database, include the words "Print version preferred" in the body of your message.

Special Issue: Advances in telemanipulator and telepresence technologies for Space, Industry, and the Internet

Audio-feedback experiments on Engineering Test Satellite VII and their assessment using an eye-mark recorder

We get a lot of information from sound, not only from verbal utterances but also from non-verbal sounds and noise. Consider the case of turning the ignition key in a car. How do you know the engine has started normally? Perhaps by the sound it makes: an engine emitting an unusual sound or noise is usually not functioning normally. Moreover, you probably received information about the engine cycle aurally before you looked at the tachometer.

Audio information has fascinating features that make it potentially useful in man-machine interfaces. In particular, it can be acquired without attention, which is not true of visual information (at least the sort of information that is intentionally displayed on an interface). Humans very easily notice changes, and trends of change, in the tone of a continuous sound. Sound also increases the sense of reality and presence we get from an interface system (if we use realistic noises or sounds).

Our research is on the effect of adding meaningful sound to the operator interface of space robots operated from the ground. From 1998 to 1999, we performed more than 50 teleoperation experiments on Engineering Test Satellite VII, which was launched in 1997 and is the first telerobotic satellite developed by the National Space Development Agency of Japan. The operation of space robots is a stressful task that is dominated by concerns about safety and reliability. Collisions are the biggest worry; they may damage the robot and/or other equipment, and damaged equipment is inevitably very expensive to repair in space (and often cannot be repaired).

In particular, for safe operation of a space robot, operators must quickly analyze large amounts of information about the targets and rapidly make accurate decisions about how to proceed. The majority of this information is presented by visual cues, such as digital values, status displays, 3D computer simulations, and camera images. Complex recognition tasks that carry either high risk or huge stakes make the operator's job even more stressful and increase the likelihood of misrecognition and misoperation.

Based on these considerations, we thought that audio feedback could reduce operator workload and improve the reliability of space-robot teleoperation procedures. We developed an audio feedback system (AFS) to present telemetry information. It uses three computers. Computer A analyzes the telemetry data and detects changes in the status information of the robot and other equipment (for example, the robot starting to move or the AAM latch opening). These changes are announced by pre-recorded voices to verify the commands.

Computer B analyzes the telemetry data and detects the magnitudes of the force and torque

continued on p. 10

Figure 1. Downlink image during experiments on Engineering Test Satellite VII.

Figure 2. Experiment using Eye Mark Recorder and its readout image.


Guest Editorial


Remote control of physical systems by humans through the mediation of computers has been a fascinating topic for scientists and engineers for almost four decades. Depending on the field of application and the technology involved, different terms were coined to describe the process of controlling a system at a remote site: teleoperation, telerobotics, teleservice, or telepresence are just the most often repeated terms. In the early years the development was mainly driven by space, underwater, and automation applications for hazardous areas, but especially in the past five years, different factors have led to an increasing number of applications. The first important factor was the exponentially growing computational power, enhanced control algorithms, and new mechatronic sensors and actuators, which made it possible to actively enhance the operator's visual, acoustical, and haptic perception of the remote location; new qualities of man–machine interfaces originated from these developments. The second important factor for the increasing number of applications is the broad availability of communication networks like the Internet. The Internet makes it a snap to access remote computers. It actually takes very little to deploy a remote system and make it available to many users over the Internet—not just for "fun and fame", but also for industrial applications such as teleservice.

All developments have the aim of introducing human perception, planning, and control into a technical process. Some people tend to consider this as just an intermediate step on the way to fully autonomous systems, but the contributions in this issue clearly show that the described technologies provide a new quality of cooperation and coexistence of humans and machines—where, of course, the human always keeps control. As we are now introducing the latest insights from the fields of human perception, sensing, and cognition into telepresence systems, we are not just making the automation part of such a system smarter but we are also laying the groundwork for a broad range of intuitively comprehensible man–machine interfaces. The articles in this issue cover this aspect from different viewpoints.

Multisensory feedback
Increasing the sense of reality and presence in teleoperation systems is a major issue of current developments. Besides the stimulation of the visual sense by realistic computer-graphical representations of virtual worlds, the stimulation of additional senses becomes very important—and the solutions become more and more effective. The paper by Schmidt/Kron/Kammermeier outlines the developments related to providing tactile feedback for the user's arm, hand, and fingers. Different devices are being used, and versatile control strategies are being developed and implemented to allow users to "feel" virtual objects. Whereas this work aims at providing a realistic sensation of physical objects, Fong/Thorpe/Baur use haptic feedback to teleoperate a vehicle. In this case, the forces felt are artificial force fields that support the precise driving of a vehicle; virtual forces here enhance the user's intuition. In a third paper on this topic, Kimura proposes to enhance teleoperation systems with a further sensory stimulus: audio feedback. Find out about the psychophysical background of his suggestion in this issue.

Projective Virtual Reality
Virtual reality used to be only about immersion and interaction; the papers by Hirzinger/Landzettel/Brunner and by Freund/Rossmann add the aspect of "projection." Allowing users to handle objects in the virtual world like they would do in the real world is just the basis; the key idea then is to automatically project these actions onto robots and other means of automation. This implies that robots make exactly the changes to the physical world that correspond to the changes the user made to the virtual world. The approaches presented in both papers are different, but the aims are the same: make the teleoperation of robots safe and easy!

The idea of controlling a real-world device via a graphical user interface is also pursued by Tomatis/Moreau. They describe a comprehensive web interface used to control a mobile robot. The ideas presented are made complete by the work of Lane, who summarizes results on the effect of time delay on the control of robot manipulators over long distances.

Space Exploration
The exploration of space has always been a driving force for the development of new ideas related to teleoperation. In their paper, Pirjanian/Huntsberger/Kennedy/Schenker look into the more distant future of planetary exploration, where multiple rovers are to cooperate to explore planetary surfaces. Their ambitious plans are an inspiration and a driving force for the work in this field.

Teleoperated systems pioneered space, the planets, and the deep seas. The contributions to this newsletter show how the methods and tools are now evolving to make further pioneering more intuitive, more cost-effective, and also more fun. Please read and enjoy the authors' thoughts about the present and the future in this field.

Dr.-Ing. Jürgen Rossmann
Institute of Robotics Research
Otto-Hahn-Str. 8
44227 Dortmund, Germany
E-mail: [email protected]
http://www.irf.de/~rossmann

Editorial

Welcome to this special issue of the Robotics and Machine Perception newsletter.

Our guest editor, Dr. Jürgen Rossmann of the Institute of Robotics Research in Dortmund, Germany, has assembled a focused perspective on recent advances in telemanipulation and telepresence technologies in the areas of Space, Industry, and the Internet. This issue illustrates the application of robotics and machine perception techniques in developing advanced cooperative man–machine systems. I would like to thank Jürgen for his efforts in the preparation of this special issue, and my thanks also to all of the authors for their contributions.

Dr. Gerard McKee
Robotics and Machine Perception Technical Group Chair
The University of Reading, UK
E-mail: [email protected]


continued on p. 12

A new telepresence approach through the combination of virtual reality and robot control techniques

Since 1990, the application of virtual reality (VR) techniques has been investigated at the Institute of Robotics Research, leading to the development of COSIMIR/VR, IRF's 3D simulation and virtual reality system. It has been established that the potential for new virtual-reality-based approaches to man-machine interfaces is extremely promising. Such techniques offer a means of conveying information about an automation system in an intuitive manner and can combine supervisory capabilities with new, intuitive approaches to the commanding of complex technical systems over long and short distances.

In this context, the new paradigm of Projective Virtual Reality has been realized as a sophisticated new approach to teleoperate and supervise robotic and automation systems. The idea behind Projective Virtual Reality is to "project" actions that are carried out by users in the virtual world into the real world, primarily by means of robots but also by other forms of automation. Robot control technology thus provides the user in the virtual reality with a "prolonged arm" into the physical environment, paving the way for a new quality of user-friendly man-machine interface for automation applications. Projective Virtual Reality is based on the latest results in the fields of task planning, world modelling, cooperative robot control, and sensor-based robot control.

Figure 1 depicts the main idea of Projective Virtual Reality. A user is immersed into the virtual world by means of a data helmet and interacts with the virtual objects by means of his data glove. All changes the user makes to objects in the virtual world are "evaluated" by means of a petri-net-based tracking technique that returns a high-level task description1 like "open heater" or "move sample from A to B" after the user has completed a task. This task description is then fed to a planning component that decomposes it into elementary actions and programs for the robots and automation devices involved. Using this task-oriented approach gets the user out of the real-time control loop, so that the stability problems typically encountered when controlling systems over long distances do not arise. Figure 1 depicts a scenario where a user on the ground commands a robotic system, e.g. on board the Columbus module, the European contribution to the International Space Station (ISS). In such an application, the task-oriented approach has the further appeal that we can have multiple users in different virtual reality systems who all carry out tasks simultaneously. Each user thus generates task descriptions that are then sent to the planning system on board the module. The planning system can then let the robots work in a time-sharing mode for all users.
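The newsletter does not describe the planner's internals. As a purely illustrative sketch in Python (the names, the fixed action vocabulary, and the parsing are all hypothetical), a recovered task description such as "move sample from A to B" might be expanded into elementary robot actions roughly like this:

```python
# Hypothetical sketch of task-oriented decomposition: a high-level task
# description recovered from the user's actions in the virtual world is
# expanded into elementary robot actions before being sent to the robot.
# Names and the action vocabulary are illustrative, not the IRF system.

from dataclasses import dataclass

@dataclass
class Action:
    name: str            # elementary operation, e.g. "approach", "grasp"
    target: str          # object or location it refers to

def decompose(task: str) -> list[Action]:
    """Expand a high-level task description into elementary actions."""
    verb, _, rest = task.partition(" ")
    if verb == "open":                      # e.g. "open heater"
        obj = rest
        return [Action("approach", obj), Action("grasp", obj),
                Action("turn", obj), Action("release", obj)]
    if verb == "move":                      # e.g. "move sample from A to B"
        obj, _, places = rest.partition(" from ")
        src, _, dst = places.partition(" to ")
        return [Action("approach", src), Action("grasp", obj),
                Action("transfer", dst), Action("release", obj)]
    raise ValueError(f"unknown task: {task}")

if __name__ == "__main__":
    for action in decompose("move sample from A to B"):
        print(action)
```

Because only such compact task descriptions, rather than real-time joystick motions, would cross the space link, this is one way to picture why the operator stays out of the real-time control loop.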

Figure 2 demonstrates that the development of Projective Virtual Reality has already left the stage of laboratory experiments. In April 1999 it was used to realize the ground control station for the robot ERA on board the Japanese satellite ETS VII. This mission to control the first free-flying robot in space, together with our colleagues from Japan, was a great success. It is described in greater detail elsewhere.1

In addition to being useful for commanding complex automation systems, Projective Virtual Reality also provides new features and ideas to intuitively supervise such systems. In order to make system information available "at a glance" to the user, different metaphors1 were introduced to provide the user with important information. In Figure 1, the spheres inside the virtual robot are metaphors that visualize the robot's motor currents; at the robot's tip, a coordinate frame depicts the exerted forces and torques. In the ERA world shown in Figure 2, the numeric figures are displayed in a "visor metaphor" (remember Geordi from Star Trek: The Next Generation), and the translucent copy of the robot is a "look-into-the-future" metaphor—it always shows where the physical robot will be five seconds later.
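How the preview pose is computed is not stated here. One plausible reading, given only as an assumed sketch, is that the ground station integrates the rate commands that have already been sent but not yet executed on board over the look-ahead horizon; the 5-second horizon and the constant-rate model below are assumptions:

```python
# Illustrative sketch of a "look-into-the-future" preview: predict where the
# remote robot will be a few seconds from now by integrating the rate
# commands that have already been sent but not yet executed on board.
# The 5 s horizon and the pure rate-command model are assumptions.

import numpy as np

def predict_pose(q_now: np.ndarray,
                 pending_rates: list[tuple[np.ndarray, float]],
                 horizon: float = 5.0) -> np.ndarray:
    """q_now: current joint angles [rad]; pending_rates: (joint rate [rad/s],
    duration [s]) pairs still in the uplink/execution queue."""
    q = q_now.copy()
    t_left = horizon
    for rate, duration in pending_rates:
        dt = min(duration, t_left)
        q += rate * dt              # constant-rate segment
        t_left -= dt
        if t_left <= 0.0:
            break
    return q

if __name__ == "__main__":
    q0 = np.zeros(6)
    queue = [(np.full(6, 0.02), 3.0), (np.full(6, -0.01), 4.0)]
    print(predict_pose(q0, queue))   # pose shown by the translucent robot
```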

Besides the two applications mentioned, Projective Virtual Reality is currently being used for different kinds of telepresence applications in space, and recently also for teleservice applications in manufacturing. Example applications and more detailed descriptions can be found at our web sites.

Eckhard Freund and Juergen Rossmann
Institute of Robotics Research
Dortmund, Germany
http://www.irf.de
http://www.cosimir.com

References (see p. 12)

Figure 1. Telepresence by means of Projective Virtual Reality.

Figure 2. Teleoperation of the Japanese robot ERA on board the ETS VII satellite.


Intuitive haptic perception in virtual prototyping tasks

The Institute of Automatic Control Engineering (IACE) in the Department of Electrical Engineering and Information Technology at the Technische Universität München performs research in human-oriented robotics with a focus on haptic feedback technologies for multi-modal telepresence and telerobotic applications.

Haptics for virtual prototyping
The need to lower time-to-market and reduce the development cost for physical prototypes has stimulated the demand for virtual prototyping techniques in the automobile and other industries. Today, highly developed 3D visualization systems are available for graphical display and animation of CAD objects or workpieces. Since such displays are constrained to the visual modality, spectators or human operators play a more or less passive role.

Because of this unsatisfactory situation, a major goal of current research in telepresence and telerobotics is to provide multi-modal sensory feedback to human operators. Technical setups in the human system interface have been developed for advanced haptic feedback and multi-fingered object manipulation. Applying this technology to virtual prototyping allows intuitive exploratory procedures to be performed: procedures that consist of more than just passively viewing an object.

Current haptic interfaces
Haptic displays, such as SensAble's PHANToM, have proven their usefulness for an improved understanding of object shapes and properties compared to vision-only systems. However, most available force-feedback systems only provide kinesthetic feedback with a single contact point for the whole hand. Devices with wider mechanical bandwidth, such as the PHANToM or the Pantograph, are also capable of displaying high-frequency vibration in order to represent texture as one component of tactile information. This shift from kinesthetic to tactile display only refers to the corresponding frequency range of feedback signals in the spectrum of human haptic perception. Even if we assume ideal sensors and displays and an absolutely transparent overall system behavior, this kind of interaction only allows for an exploration "through" a rigid probe that is directly fixed to the end-effector of a kinesthetic display. Six-DoF force/moment reflection and line-based haptic rendering techniques take into account the shape of this probe.

Being restricted to touching objects through a single rigid probe can still provide a certain degree of haptic information to the human operator. However, he or she is forced to explore and manipulate objects in a not-entirely-intuitive manner. The objective of this research project is to overcome these restrictions by two major improvements: multi-finger kinesthetic feedback and distributed tactile feedback.1

Multi-finger kinesthetic feedback
Typical exploration and manipulation procedures carried out intuitively by humans imply the use of multiple fingers. Sampling of finger position/posture and independent force reflection to each finger enable the operator in an advanced virtual prototyping system to execute his/her intuitive motion patterns.

Besides allowing the operator intuitive finger/hand postures, a corresponding haptic feedback device must also display a multitude of forces such as varying contact forces, dynamic friction forces, or weight. This type of feedback helps to prevent the operator from penetrating objects with his/her fingers in the virtual prototyping environment.

The IACE has developed a combined Wrist Finger Kinesthetic Display (WFKD)2 capable of displaying finger forces as well as wrist/arm forces. A commercial 5-DoF haptic glove is mounted as the end-effector on a desktop kinesthetic feedback device generating 3-DoF forces in 3D space (see Figure 1). The fixation of the lower part of the operator's arm behind the wrist allows for execution of intuitive motion patterns. Through this device, the operator can perceive separated force stimuli on fingers and wrist as a single high-fidelity force sensation. This could be demonstrated by an experiment simulating the insertion of a radio into a virtual car dashboard. In addition to visual information, the WFKD provides detailed feedback of all forces that occur during the operator's exploratory and manipulatory operations. The ability to perceive physical interactions leads to an increased degree of immersion and improved task execution.
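The force computation itself is not detailed in the article. A common, minimal stand-in, offered here only as an assumed sketch rather than the WFKD controller, is a penalty (spring-damper) contact model: once a fingertip penetrates a virtual surface, a restoring force proportional to the penetration depth is commanded to that finger:

```python
# Minimal penalty-based contact model for per-finger force feedback
# (an assumed textbook sketch, not the IACE WFKD controller).
# A virtual plane stands in for the object surface.

import numpy as np

K_SPRING = 800.0     # N/m, virtual surface stiffness (assumed)
B_DAMP   = 2.0       # N·s/m, damping along the surface normal (assumed)

def finger_force(p: np.ndarray, v: np.ndarray,
                 plane_point: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
    """Return the 3D force to display at one fingertip.

    p, v: fingertip position [m] and velocity [m/s]
    plane_point, plane_normal: the virtual surface (normal points outward)
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(plane_point - p, n)          # >0 when fingertip is inside
    if depth <= 0.0:
        return np.zeros(3)                      # no contact, no force
    f = K_SPRING * depth * n                    # push the fingertip back out
    f -= B_DAMP * np.dot(v, n) * n              # damp motion along the normal
    return f

if __name__ == "__main__":
    print(finger_force(p=np.array([0.0, 0.0, -0.002]),     # 2 mm inside
                       v=np.array([0.0, 0.0, -0.05]),
                       plane_point=np.zeros(3),
                       plane_normal=np.array([0.0, 0.0, 1.0])))
```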

Distributed tactile feedback
Haptic exploration means the simultaneous and consistent evocation and processing of kinesthetic and tactile information. Beyond vibration-based representation of texture, tactile perception comprises further components, such as small-scale shape information, that are considered crucial in haptic exploration tasks.

Actuator pin arrays have been proposed as a reasonable approach for representing a discrete approximation of a contact situation at

Figure 1. Multi-modal, multi-fingered installation of a virtual car radio.

Figure 2. Tactile exploration of parts of a virtual car engine prototype.

continued on p. 11


Space exploration and the expanding domain of robotic telepresence

The recent development of robotics technology for space has significantly elevated both the human and robotic roles.1 Early teleoperative manipulation has progressed to high-dexterity telerobotics wherein a remote human operator shares/trades control with robot autonomy, often guided by virtual/augmented reality displays and sensor feedback. Complementing this, planetary/lunar surface vehicular capabilities are rapidly progressing from low-level programmed automation to longer-ranging semi-autonomous navigation, exploiting on-board machine perception, reactive control, and simple deliberative planning for hazard detection, obstacle avoidance/management, target tracking, and science sample acquisition. These technology advances will migrate into near-future flight systems such as the planned Mars Exploration Rovers of 2003 and possible later highly anthropomorphic telerobotic work stations for shuttle/space-station servicing.

Looking into the more distant future, the National Aeronautics and Space Administration (NASA) is considering missions that involve not one robot, but rather an extended telepresence based on multiple cooperating robotic agents. One example is planetary outposts, wherein robot work crews act to prepare and maintain a habitat for a future shared human/robot presence. Thus, there is potential for a wide-ranging telerobotic functionality, spanning from autonomous cooperation of remotely operating, closely interacting robots to a later human-robot synergy and community.

Within the Planetary Robotics Laboratory at the Jet Propulsion Laboratory, Pasadena, we are developing related autonomy technologies that can enable, and significantly reduce the sustaining cost of, deploying, operating, and commanding such complex missions. One such development is the Control Architecture for Multi-robot Planetary OUTposts (CAMPOUT), which is a distributed, hybrid, behavior-based system in that it couples reactive and local deliberative behaviors without the need for a centralized planner.2 It provides facilities for behavior representation, behavior generation, inter- and intra-robot behavior coordination, and a communications infrastructure for distributed robot coordination, to support not only cooperative but also tightly coordinated tasks. CAMPOUT constructs a methodological framework that builds on behavior-based synthesis, multiple-objective decision theory, approximate reasoning, and symbolic planning.
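CAMPOUT's components are only named above. Purely to illustrate what multi-objective behavior coordination can look like, here is an assumed toy arbiter in which each behavior scores candidate steering commands and the command with the best weighted worst-case score wins; none of the names or numbers come from CAMPOUT:

```python
# Toy illustration of multi-objective behavior coordination: each behavior
# assigns a preference in [0, 1] to every candidate action, and the arbiter
# picks the action with the best weighted minimum preference.  This is a
# generic scheme for illustration only, not the CAMPOUT implementation.

from typing import Callable, Dict, List

Behavior = Callable[[float], float]   # maps steering angle [rad] -> preference

def avoid_obstacle(obstacle_bearing: float) -> Behavior:
    # prefer steering away from the obstacle bearing
    return lambda steer: min(1.0, abs(steer - obstacle_bearing) / 0.8)

def keep_formation(partner_bearing: float) -> Behavior:
    # prefer steering that keeps the partner rover roughly ahead
    return lambda steer: max(0.0, 1.0 - abs(steer - partner_bearing) / 0.8)

def arbitrate(behaviors: Dict[str, Behavior],
              weights: Dict[str, float],
              candidates: List[float]) -> float:
    """Pick the candidate whose worst weighted preference is highest."""
    def score(steer: float) -> float:
        return min(weights[name] * beh(steer) for name, beh in behaviors.items())
    return max(candidates, key=score)

if __name__ == "__main__":
    behaviors = {"avoid": avoid_obstacle(obstacle_bearing=0.3),
                 "formation": keep_formation(partner_bearing=0.0)}
    weights = {"avoid": 1.0, "formation": 0.6}
    candidates = [-0.4, -0.2, 0.0, 0.2, 0.4]
    print("chosen steering [rad]:", arbitrate(behaviors, weights, candidates))
```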

During 2000, we successfully demonstrated a tight, kinematics-and-force-constrained cooperation between two prototype planetary rovers (see Figure 1), SRR (Sample Return Rover) and SRR2K, for transport of an extended payload 2.5 meters long over natural terrain. Neither of the rovers is capable of transporting this simulated PV (photovoltaic tent container) without assistance. The constraints imposed by the physical link between the rovers and the nonholonomic constraints of each rover make this task a real challenge, especially on natural terrain. However, we demonstrated that

Figure 1. Transport of an extended container by two rovers in the arroyo at JPL. Left: Rovers in a column (offset) transport formation. Right: Rovers in a row (side-by-side) formation.

Figure 2. The All Terrain Explorer system concept for cliff descent.

continued on p. 13


continued on p. 10

MARCO—DLR's task-directed sensor-based teleprogramming system

DLR's telerobotics concepts have been verified in two real space-robot projects, namely in our own project ROTEX—the first remotely controlled space robot, flying inside shuttle COLUMBIA1 in April 1993—and in ETS VII, the first free-flying space robot, developed by Japan's space agency NASDA.

Based on the ROTEX experience, we have focused our work in telerobotics on the design of a high-level, task-directed robot programming system, MARCO, that may be characterized as learning by showing in a virtual environment.2 This system is applicable to the programming of terrestrial robots as well. The goal (see Figure 1) was to develop a unified concept for:
• a flexible, highly interactive, on-line teleoperation station based on predictive ground simulation (including sensor-based local autonomy), and
• an off-line programming environment that includes all the sensor-based control and local autonomy features already tested in ROTEX and provides the additional possibility of programming a robotic system on an implicit, task-oriented level.

A non-specialist user—e.g. a payload expert—should be able to remotely control the robot system in the case of internal servicing in a space station (i.e. in a well-defined environment). However, for external servicing (e.g. the repair of a defective satellite), high levels of interactivity between man and machine are desirable. To fulfil the requirements of both applications, we have developed a 2-in-2-layer model that represents the programming hierarchy from the executive to the planning level (see Figure 2).

Based on this four-level hierarchy,3 an operator working on the (implicit) task level no longer needs real robotic expertise. With a 3D cursor (controlled by a space mouse) or with a human-hand simulator (controlled by a data glove), he picks up any desired object in the virtual world, releases it, moves it to a new location, and fixes it there. Sequences of these kinds of operations are easily tied together as complex tasks. Before they are executed remotely, the simulated robot, engaging its path planner, demonstrates how it intends to perform the task (implying automatic collision avoidance). See Figure 3.

Nevertheless, in the explicit layer (the learning phase) the robot expert has to show and demonstrate the elementary operations (these include the relevant sensory patterns) and, if necessary, train the mapping between non-nominal sensory patterns and motion commands that servo into the nominal patterns later on, in the real world. He performs these demonstrations by moving the robot's simulated gripper or hand (preferably without the arm) into the proximity of the objects to be handled (e.g. drawers, bayonet closures, doors in a lab environment), so that all sensory patterns are simulated correspondingly. The robot expert, at this stage, must have knowledge of position- and sensor-controlled subspaces, and must be able to define them, massively supported by MARCO functions. He also has to define how operations (e.g. remove bayonet closure) are composed of elementary operations (approach, touch, grasp, turn, etc.).
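Reference 3 describes how this mapping is actually learned. As a deliberately simplified, assumed stand-in, one can picture it as storing (sensory pattern, corrective motion) pairs from the expert's demonstration and, during execution, applying the correction attached to the nearest stored pattern:

```python
# Grossly simplified stand-in for the learned mapping from non-nominal
# sensory patterns to corrective motion commands: a nearest-neighbour
# lookup over demonstrated pairs.  (The real system uses learning
# techniques described in Reference 3; this is only an illustration.)

import numpy as np

class PatternServo:
    def __init__(self):
        self._patterns: list[np.ndarray] = []   # demonstrated sensor patterns
        self._motions:  list[np.ndarray] = []   # corrective Cartesian motions

    def demonstrate(self, pattern, motion) -> None:
        """Record one (sensory pattern, corrective motion) pair."""
        self._patterns.append(np.asarray(pattern, float))
        self._motions.append(np.asarray(motion, float))

    def correct(self, pattern) -> np.ndarray:
        """Return the correction attached to the closest demonstrated pattern."""
        pattern = np.asarray(pattern, float)
        distances = [np.linalg.norm(p - pattern) for p in self._patterns]
        return self._motions[int(np.argmin(distances))]

if __name__ == "__main__":
    servo = PatternServo()
    # e.g. distance-sensor pattern "too far left" -> move right a little
    servo.demonstrate([0.8, 0.2], [+0.005, 0.0])
    servo.demonstrate([0.2, 0.8], [-0.005, 0.0])
    print(servo.correct([0.7, 0.3]))   # -> move right
```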

MARCO's two-handed VR interface concept
Thus, on the implicit as well as the explicit layer, we have to move 3D pointers or grippers/hands around in the virtual lab environment. Using classic "immersive" cyberspace techniques with data glove and helmet was not adequate for our approach, as the human arm's reaching space is fairly small (e.g. in a lab environment). Also, with head motions, only very limited translational shifts of the simulated world are feasible. An alternative to the position-control devices, the data glove and helmet, is the velocity-control device: the space mouse. This is particularly true if the robot system to be programmed has no articulated hand. Velocity control here means we may easily steer around an object in VR, over arbitrary distances and rotations, via small deflections (which command velocities) of an elastic sensorized cap. Another important point (confirmed by extensive tests of car manufacturers in the context of 3D CAD design) is that, just as in real life, two-handed operations—when interacting with 3D graphics—are the optimum. Indeed, whenever humans can make use of both hands, they will (e.g. when carving, modelling, cutting). In the northern hemisphere, the right hand is the working hand for around 90% of the population. The left hand is the guidance and observation hand, which holds the object to be worked (vice versa for left-handers).
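The velocity-control principle just described (small cap deflections command velocities, so arbitrary distances can be covered) fits in a few lines. The sketch below is a generic rate controller, not DLR's implementation; the dead-band and gain values are made up:

```python
# Generic rate-control sketch for a space-mouse style device: the elastic
# cap's small deflections are interpreted as translational/rotational
# velocities and integrated into the pose of the 3D cursor (or of the
# whole virtual world, for the guidance hand).  Gains are assumptions.

import numpy as np

DEADBAND = 0.05          # ignore tiny deflections (fraction of full scale)
V_MAX    = 0.25          # m/s at full translational deflection (assumed)
W_MAX    = 0.8           # rad/s at full rotational deflection (assumed)

def integrate_pose(position: np.ndarray, yaw: float,
                   deflection: np.ndarray, dt: float) -> tuple[np.ndarray, float]:
    """deflection: 4-vector (dx, dy, dz, dyaw), each in [-1, 1]."""
    d = np.where(np.abs(deflection) < DEADBAND, 0.0, deflection)
    v = V_MAX * d[:3]                 # commanded linear velocity
    w = W_MAX * d[3]                  # commanded yaw rate
    return position + v * dt, yaw + w * dt

if __name__ == "__main__":
    pos, yaw = np.zeros(3), 0.0
    for _ in range(200):              # hold a small, constant deflection
        pos, yaw = integrate_pose(pos, yaw, np.array([0.2, 0.0, 0.0, 0.1]), dt=0.02)
    print(pos, yaw)                   # cursor has travelled far despite tiny input
```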

This ideal situation for a human is easily transferred to the VR interface scenario. A right-hander preferably moves around the whole virtual world in 6 DoF with a space mouse in his left hand (the guidance hand). At the same time, his right hand moves around the 3D cursor with a second space mouse (velocity control) or a simulated hand with a data glove (position control). One should note that now, even for the glove, the problem of limited workspace disappears. With the left hand, the operator is always able to move the virtual lab world around such that the objects to be grasped are very close. Thus, even in position-control mode with a data glove, only small, convenient motions of the operator's

Figure 1. ROTEX telerobotic control concept.

Figure 2. The 2-in-2-layer-model.


continued on p. 11

Active interfaces for vehicle teleoperation

Since 1997, the Robotics Institute of Carnegie Mellon University and the Virtual Reality and Active Interfaces (VRAI) Group1 of the Swiss Federal Institute of Technology have been developing tools and technology for vehicle teleoperation. The goal of our collaboration is to make such teleoperation easier and more productive for all users, novices and experts alike. Thus, we have developed a variety of active interfaces incorporating sensor-fusion displays, gesture and haptic input, personal digital assistants, and the Web.2,3

Sensor fusion displays
Perhaps the most difficult aspect of vehicle teleoperation is that the operator is unable to directly perceive the remote environment. Instead, he is forced to rely on sensors, bandwidth-limited communications links, and an interface to provide him with information. As a result, the operator often fails to understand the remote environment and makes judgement errors. Thus, we need to make it easier for the operator to understand the remote environment, to assess the situation, and to make decisions.4

Our approach is to develop sensor fusion displays that combine information from multiple sensors or data sources to present a single, integrated view. These displays are important for applications in which the operator must rapidly interpret multispectral or dynamic data. Figure 1 shows an example in which lidar, stereo, and ultrasonic sonar range data are fused.5 This display is designed to direct the operator's attention to close obstacles and to improve situational awareness in cluttered environments.

Gesture and haptic input
Almost all teleoperation interfaces rely on input devices such as joysticks or two-dimensional computer pointers (mouse, pen, etc.). One problem with this approach is that the human-machine interaction is essentially static: the form and range of input is limited to the physical device properties. With computer vision, however, we can create gesture-based interfaces that provide flexible, user-adaptive interaction. Moreover, since the interpretation is software-based, it is possible to customize input processing to minimize sensorimotor workload, to accommodate operator preferences, and to adapt to the task/operator in real time.3

GestureDriver is a remote driving interface based on visual gesturing (see Figure 2). Hand motions are tracked with a color and stereo vision system and classified into gestures using a simple geometric model. The gestures are then mapped into motion commands that are transmitted to the remote vehicle. In our testing, we found that GestureDriver works well almost anywhere within the vision system's field of view. Thus, users were free to move about when they were not directly commanding the robot. Additionally, GestureDriver was able to easily accommodate users of different sizes and with different control preferences.
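The "simple geometric model" is not spelled out in the article. One assumed way to picture it is a "virtual joystick": the tracked hand's displacement from a user-chosen reference point is mapped directly to translation and turn rates, roughly as follows (thresholds and gains are illustrative only):

```python
# Assumed "virtual joystick" interpretation of visual gesturing: the hand's
# displacement from a reference point, as reported by the stereo tracker,
# is mapped to drive commands.  Thresholds and gains are illustrative only.

import numpy as np

NEUTRAL_RADIUS = 0.05     # m; inside this radius the vehicle stops (assumed)
SPEED_GAIN     = 2.0      # (m/s) per metre of forward hand displacement
TURN_GAIN      = 3.0      # (rad/s) per metre of sideways hand displacement

def hand_to_command(hand_pos: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Map tracked hand position to (translational speed, turn rate)."""
    offset = hand_pos - reference
    if np.linalg.norm(offset[:2]) < NEUTRAL_RADIUS:
        return 0.0, 0.0                       # hand near neutral: stop
    speed = SPEED_GAIN * offset[0]            # push forward to drive forward
    turn  = TURN_GAIN  * offset[1]            # move sideways to turn
    return speed, turn

if __name__ == "__main__":
    ref = np.array([0.0, 0.0, 0.0])
    print(hand_to_command(np.array([0.15, -0.04, 0.0]), ref))   # forward, slight turn
```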

The most difficult aspect of remote driving, as with all teleoperation, is that the operator is separated from the point of action. As a result, he must rely on information from sensors (mediated by communication links and displays) to perceive the remote environment. Consequently, the operator often fails to understand the remote environment and makes judgement errors. This problem is most acute when precise motion is required, such as maneuvering in cluttered spaces or approaching a target.3

HapticDriver addresses this problem by providing force feedback to the operator (see Figure 3). Range-sensor information is transformed into spatial forces using a linear model and then displayed to the operator using a large-workspace haptic hand controller (the Delta Haptic Device). Thus, HapticDriver enables the operator to feel the remote environment and to achieve better performance in precise driving tasks.
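As an assumed reading of the linear model mentioned above, each range reading closer than some threshold could contribute a repulsive force along its sensing direction, growing linearly as the obstacle gets nearer; the vector sum is what the haptic device would render. A sketch:

```python
# Assumed sketch of turning range data into a repulsive force for haptic
# display: each reading closer than D_MAX contributes a force along its
# beam direction that grows linearly as distance shrinks.  Constants are
# illustrative, not the HapticDriver calibration.

import math

D_MAX = 2.0      # m; readings farther than this produce no force
F_MAX = 5.0      # N; force of a contact at zero distance

def repulsive_force(ranges: list[float], bearings: list[float]) -> tuple[float, float]:
    """ranges[i]: distance [m] measured at bearing bearings[i] [rad]."""
    fx = fy = 0.0
    for r, b in zip(ranges, bearings):
        if r >= D_MAX:
            continue
        magnitude = F_MAX * (1.0 - r / D_MAX)      # linear in distance
        fx -= magnitude * math.cos(b)              # push away from the obstacle
        fy -= magnitude * math.sin(b)
    return fx, fy

if __name__ == "__main__":
    # obstacle 0.5 m ahead and another 1.5 m to the left
    print(repulsive_force([0.5, 1.5], [0.0, math.pi / 2]))
```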

Personal interfaces
For some remote driving applications, installing operator stations with multiple displays, bulky control devices, and high-bandwidth/low-latency communication links is infeasible (or even impossible) due to environmental, monetary, or technical constraints. For other applications, a range of operators having diverse backgrounds and skills drive the vehicle. In these situations, extensive training is impractical, and we need interfaces that require minimal infrastructure, can function with poor communications, and do not tightly couple performance to training.

PdaDriver is a PDA-based interface for vehicle teleoperation and is shown in Figure 4. PdaDriver uses multiple control modes, sensor-fusion displays, and safeguarded teleoperation to make remote driving fast and efficient. We designed the PdaDriver user interface to minimize the need for training, to enable rapid command generation, and to improve situational awareness. The current interface has four interaction modes: video, map, command, and sensors (see Figure 4). We have conducted a number of field tests with the PdaDriver and found the interface to have high usability, robustness, and performance.3

To date, we have created two Web-based systems: WebPioneer and WebDriver. The WebPioneer (developed in collaboration with ActivMedia Inc.) enables novices to explore an indoor environment. The WebPioneer, however, requires significant network resources and only provides a limited command set. We designed our second system, WebDriver, to address these problems. In particular, WebDriver minimizes network bandwidth usage, provides a dynamic user interface, and uses waypoint-

Figure 1. Sensor fusion display incorporating lidar, stereo vision, and sonar range data.

Figure 2. GestureDriver: a visual, gesture-based interface for vehicle teleoperation.

Figure 3. HapticDriver provides force feedback for precision driving tasks.


Time delay and communication-bandwidth limitation on telerobotic control

Remote teleoperation allows humans to extend their capabilities to environments too dangerous for biological organisms. The communication between the operator and the robot needs to be fast in order to accommodate quick interaction with the environment. However, certain remote environments, such as deep-sea and space operations, impose restrictions on the communication link that limit the available bandwidth. In addition, as the distance increases, the communication between the operator and robot is further handicapped by significant time delays.

The experiment
A simulation was developed (see Figure 1) allowing an operator to control a 7-DoF manipulator to perform a manipulation positioning task. The task was to pop a target sphere with the tip of the manipulator, which was commanded with a Cartesian rate controller via a pair of 3-DoF hand controllers. The operator switched between three fixed views to perform the task. The default view was an overall shot of the arm and work site, useful for coarse positioning of the manipulator. As the tip neared the target sphere, one of the two orthogonal views could be used (a side or top view) to help perform the final movements.

Four independent variables were altered during testing: time delay, the use of predictive displays, the communication bandwidth, and the manipulator velocity. Four different round-trip time delays were chosen that the operators could feasibly handle: 0s, 1.5s, 3s, and 6s. During preliminary testing, it was determined that, at 3s and 6s time delay, some form of predictive display was required to assist the operator. Three display methods were compared: an unmitigated display showing only the telemetry of the actual position, one with an added predictive display that estimated where the manipulator would move after the delay, and one that commanded the joints to move to the position shown using an additional commanded display. Since the predictive and commanded displays were used to reduce the effects of time delay, they were only used during time-delayed treatments. The communication bandwidth between the ground and space is limited; it is, therefore, advantageous to limit the number of commands sent to the robot. Four sampling rates were used as treatments: the baseline rate (20Hz), half rate (10Hz), quarter rate (5Hz), and one-eighth rate (2.5Hz). The maximum velocity of the manipulator tip was also varied, with settings of 6in/s and 1.3in/s.
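The predictive display estimates where the manipulator will be once the delayed telemetry catches up with the commands already sent. A minimal assumed sketch (not the experiment's implementation): buffer the outgoing Cartesian rate commands and integrate those not yet reflected in telemetry on top of the last reported tip position:

```python
# Minimal sketch of a predictive display for rate-controlled teleoperation
# under round-trip delay: integrate the commanded Cartesian rates that are
# still "in flight" on top of the most recent (delayed) telemetry.
# This is an assumed illustration, not the experiment's implementation.

from collections import deque
import numpy as np

class Predictor:
    def __init__(self, round_trip_delay: float, dt: float):
        self.delay, self.dt = round_trip_delay, dt
        self.sent = deque()                 # (timestamp, rate command) pairs

    def command(self, t: float, rate: np.ndarray) -> None:
        self.sent.append((t, np.asarray(rate, float)))

    def predicted_tip(self, t: float, telemetry_tip: np.ndarray) -> np.ndarray:
        """telemetry_tip is the tip position reported for time t - delay."""
        while self.sent and self.sent[0][0] < t - self.delay:
            self.sent.popleft()             # already reflected in telemetry
        pred = telemetry_tip.copy()
        for _, rate in self.sent:           # commands not yet seen in telemetry
            pred += rate * self.dt
        return pred

if __name__ == "__main__":
    p = Predictor(round_trip_delay=3.0, dt=0.05)
    for k in range(100):                    # 5 s of a constant 1 in/s command
        p.command(k * 0.05, np.array([1.0, 0.0, 0.0]))
    # telemetry shows where the tip was 3 s ago; prediction adds pending motion
    print(p.predicted_tip(t=5.0, telemetry_tip=np.array([2.0, 0.0, 0.0])))
```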

Results
The strongest effect on task completion time was time delay, causing up to a 480% increase at 6s. Figure 2 distinctly shows that completion time increases linearly with time delay. Each display method is affected by time delay differently, but all show this linear correlation. The slope for the commanded display, in Figure 2, is less than one: the commanded display therefore made up for the additional time delay. While still helpful for alleviating the time-delay effect, the predictive display was only half as effective as the commanded display. Calibration errors between the predictive display's estimated position and the actual position made the predictive display less helpful. It would be expected that any error causing a deviation between the commanded/predictive and actual output would reduce the effectiveness of the extra display. However, even with moderate errors, the predictive display still reduced the completion time considerably.

Command sampling rate was a less important factor in influencing performance. As can be seen in Figure 3, only a minor difference was found by cutting the rate in half from 20Hz to 10Hz. However, at around 5Hz, a break point was reached and performance becomes more affected by a reduction of sampling rate. More testing would likely show that further reduction of the sampling rate below 2.5Hz would significantly increase completion time.

The least influential factor was the change in manipulator velocity. It can be seen from Figure 3 that the 1.3in/s sampling-rate curve is flatter than that for the faster manipulator speed; the flatter shape indicates that sampling rate has less effect at the slower manipulator speed. On average, the slower manipulator speed increased the completion time by about 27%.

Overall, very few impacts were made on the work site. On average, an impact would occur

Figure 1. Modified Fitts' law1 task screenshot with an exaggerated-size sphere.

Figure 2. Linear correlation of time delay and completion time.

Figure 4. Number of impacts at 6in/s manipulator speed.

continued on p. 12

Figure 3. Comparing 1.3in/s and 6in/s manipulator speeds for changing sampling-rate curves.


SPIE International Symposium on
Intelligent Systems for Advanced Manufacturing

28-31 October 2001
Boston Marriott Newton
Newton, Massachusetts USA

Featuring Conferences on:

Industrial Inspection and Sensors

Optomechatronics

Environmentally Conscious Manufacturing

Embedded Intelligence

Vertical Integration

Robots and Autonomous Systems

Symposium Chairs:
Paul S. Schenker, Jet Propulsion Lab.
Gerfried Zeichen, Technische Univ. Wien (Austria)

Call For Papers

Submit your abstract today!

Abstract due date: 1 April 2001

Contact Information
Web: www.spie.org/info/pb
E-mail: [email protected]
Phone: (1) 360/676-3290


Audio-feedback experiments
continued from cover

on the end-effector. These values are converted into MIDI tone-signal commands and transmitted to the audio sampler. A sound source of motor noise is stored in the audio sampler, and the sampler generates the motor noise according to the magnitudes of the force and torque. We examined several kinds of sound sources and found motor noise to be the most feasible and realistic.
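The exact mapping from forces to MIDI is not given in the article. An assumed sketch of the idea, scaling the measured force magnitude into the MIDI velocity (loudness) of a motor-noise note held in the sampler, might look like the following; the note number, channel, scaling, and the send_midi() output hook are placeholders:

```python
# Assumed sketch of force/torque sonification over MIDI: the end-effector
# force magnitude is rescaled into the MIDI velocity (loudness) of a
# motor-noise sample held in an external audio sampler.  Note number,
# channel, scaling, and the send_midi() output hook are all placeholders.

import math

F_FULL_SCALE = 40.0        # N; force that maps to maximum loudness (assumed)
MOTOR_NOTE   = 60          # sampler key holding the motor-noise sample (assumed)

def send_midi(status: int, data1: int, data2: int) -> None:
    """Placeholder for a real MIDI output call (e.g. via a MIDI library)."""
    print(f"MIDI: {status:#04x} {data1} {data2}")

def sonify_wrench(fx: float, fy: float, fz: float) -> None:
    """Convert the measured force vector into a Note On with scaled velocity."""
    magnitude = math.sqrt(fx * fx + fy * fy + fz * fz)
    velocity = min(127, int(127 * magnitude / F_FULL_SCALE))
    send_midi(0x90, MOTOR_NOTE, velocity)      # Note On, channel 1

if __name__ == "__main__":
    sonify_wrench(3.0, 0.0, 4.0)     # 5 N  -> quiet motor noise
    sonify_wrench(20.0, 20.0, 20.0)  # ~35 N -> loud motor noise
```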

Computer C analyzes the command data and detects the transmission of the command to the robot. Mac 5 announces the type of the command. This information can be used to check that the proper command was transmitted.

We chose an objective and psychophysical assessment of the AFS, the eye mark recorder (EMR), which can record an operator's eye movements in response to very small stimuli. As far as we know, this is the first experiment to deal with psychophysical phenomena during the teleoperation of space robots on board a spacecraft. It is important to assess the psychophysical phenomena during actual satellite operation, since the psychophysical state during actual operation is different from that of simulation-based training, even if the experimenter and operator try to maintain the same conditions.

From the experiments we found the following results:
• Operation time can be significantly reduced, and the operator can easily avoid collisions.
• The operator didn't have to concentrate on the telemetry console and thus was free to monitor other items.
• The spectrum patterns of eye-movement velocity reflect the expertise level of the operators.
• The spectrum of a well-trained operator changes from a naive operator's spectrum to an astronaut's spectrum when using the AFS.

These results suggest that the audio feedback system aids in the teleoperation of space robots and that the eye mark recorder is a unique and effective tool for assessing interface systems.
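As background to the spectrum-pattern result above (no formulas are given in the article), the analysis can be pictured as taking the power spectrum of the eye-movement speed signal recorded by the EMR. An assumed sketch with NumPy, using a synthetic gaze trace:

```python
# Assumed sketch of the kind of analysis behind the "spectrum of eye
# movement velocity" result: differentiate the recorded gaze trace to get
# speed, then take its power spectrum.  Sampling rate and the synthetic
# gaze signal are illustrative only.

import numpy as np

def velocity_spectrum(gaze_xy: np.ndarray, fs: float):
    """gaze_xy: (N, 2) gaze positions sampled at fs Hz.
    Returns (frequencies [Hz], power spectrum of gaze speed)."""
    velocity = np.diff(gaze_xy, axis=0) * fs          # deg/s (or px/s)
    speed = np.linalg.norm(velocity, axis=1)
    speed = speed - speed.mean()                      # remove the DC component
    power = np.abs(np.fft.rfft(speed)) ** 2
    freqs = np.fft.rfftfreq(speed.size, d=1.0 / fs)
    return freqs, power

if __name__ == "__main__":
    fs = 60.0                                          # assumed EMR sampling rate
    t = np.arange(0, 30, 1.0 / fs)
    # synthetic trace: slow oscillation plus small random drift
    gaze = np.column_stack([np.sin(0.3 * t),
                            np.cumsum(np.random.randn(t.size)) * 0.01])
    freqs, power = velocity_spectrum(gaze, fs)
    print("dominant frequency [Hz]:", freqs[np.argmax(power[1:]) + 1])
```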

Shinichi Kimura
Space Communications Division
Communications Research Laboratory
4-2-1 Nukui-Kitamachi, Koganei
Tokyo 184-8795, Japan
Phone: +81 423 27-7514
Fax: +81 423 27-6699
E-mail: [email protected]


MARCO—DLR's task-directed sensor-based teleprogramming system
continued from p. 6

hand are required (see Figures 4 and 5). More details on MARCO's high-level user interface and Java/VRML client techniques are given in Reference 4 (see Figure 6).

The sensor-based, task-level teleprogramming system MARCO has reached a high level of universality. It was not only used as a ground-control station for the ETS VII experiment, but is also being studied for Germany's Experimental Service Satellite (ESS) project, for remote ground control of a new, climbing, space-station robot, and for mobile terrestrial and planetary robot projects.

Figure 3. DLR's universal telerobotic station MARCO (Modular A&R Controller).

Figure 4. Two-handed VR interface using space mouse and data glove (space-station scenario as example).

Figure 5. Two-handed VR interface using two space mice (ETS VII scenario as example).


G. Hirzinger, K. Landzettel, and B. Brunner
DLR Oberpfaffenhofen
German Aerospace Center
Institute of Robotics and System Dynamics
D-82234 Wessling, Germany
E-mail: [email protected]

References
1. G. Hirzinger, Sensor-based space robotics—ROTEX and its telerobotic features, IEEE Trans. on Robotics and Automation 9 (5), October 1993.
2. K. Landzettel, B. Brunner, G. Hirzinger, I. Schaefer, M. Fischer, M. Grebenstein, N. Sporer, and J. Schott, Space robotics—recent advances in DLR's Robotics Lab, ASTRA '98.
3. M. Fischer, P. van der Smagt, and G. Hirzinger, Learning techniques in a dataglove based telemanipulation system for the DLR hand, IEEE Int'l Conf. on Robotics and Automation (ICRA), 1998.
4. B. Brunner, K. Landzettel, G. Schreiber, B. M. Steinmetz, and G. Hirzinger, A universal task-level ground control and programming system for space robot applications, iSAIRAS 5th Int'l Symp. on Artificial Intelligence, Robotics and Automation in Space, ESTEC, Noordwijk, 1-3 June 1999.

Figure 6. Internet programming with VRML/3D Java.


Intuitive haptic perception in virtual prototyping tasks
continued from p. 4

the human fingertip. However, the ideal specification profile for such an array (which can be derived from physiological and biomechanical as well as task-oriented data) poses severe challenges in system design. The IACE has developed an actuator array with extraordinarily high pin forces and mechanical bandwidth3 that is used for tactile representation of the interaction between the operator's fingertip and stiff objects in virtual prototyping tasks. Experiments have been performed with respect to detailed haptic exploration of an automobile engine (see Figure 2). In this scenario, the operator can examine the head of a screw to be fixed, judge the quality of workmanship at the edge of a workpiece, or localize a visually occluded marker that indicates the assembly rotation of a pulley.
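The article does not specify how the array is driven. One assumed way to picture it is to sample the virtual surface's small-scale height under the fingertip at each pin location and command the corresponding pin displacement; the array geometry and ranges below are made up:

```python
# Assumed sketch of driving an actuator pin array: sample the virtual
# surface's small-scale height under the fingertip at each pin location
# and convert it to a pin displacement command.  Array geometry and
# ranges are illustrative, not the IACE hardware's specification.

import numpy as np

ROWS, COLS = 4, 4            # pins under the fingertip (assumed)
PITCH      = 2.5e-3          # m between pins (assumed)
PIN_RANGE  = 1.0e-3          # m of usable pin travel (assumed)

def surface_height(x: float, y: float) -> float:
    """Toy virtual surface: a small ridge, e.g. a visually occluded marker."""
    return 0.5e-3 * np.exp(-((x - 1.0e-3) ** 2) / (0.5e-3) ** 2)

def pin_commands(finger_x: float, finger_y: float) -> np.ndarray:
    """Return a (ROWS, COLS) array of pin displacements in metres."""
    cmds = np.zeros((ROWS, COLS))
    for i in range(ROWS):
        for j in range(COLS):
            x = finger_x + (j - (COLS - 1) / 2) * PITCH
            y = finger_y + (i - (ROWS - 1) / 2) * PITCH
            cmds[i, j] = np.clip(surface_height(x, y), 0.0, PIN_RANGE)
    return cmds

if __name__ == "__main__":
    np.set_printoptions(precision=4)
    print(pin_commands(0.0, 0.0) * 1e3)   # pin heights in millimetres
```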

Perspective
The benefits of including the proposed advanced haptic feedback technologies in virtual prototyping environments have been demonstrated in various laboratory experiments. Beyond the automobile industry, potential applications of this technique can be found in other industrial areas, as well as in medical simulation and education.

This work is supported by the German Research Council (DFG) within the Collaborative Research Center "High-Fidelity Telepresence and Teleaction" (SFB 453).

A. Kron, P. Kammermeier, and G. Schmidt
Institute of Automatic Control Engineering
Technische Universität München
80290 München, Germany
Phone: +49 89 289-23415
Fax: +49 89 289-28340
E-mail: [email protected]

References
1. P. Kammermeier, A. Kron, and G. Schmidt, Towards Intuitive Multi-fingered Haptic Exploration and Manipulation, Proc. 2001 Workshop on Advances in Interactive Multimodal Telepresence Systems, Munich, March 29-30, 2001.
2. A. Kron, M. Buss, and G. Schmidt, Exploration and Manipulation of Virtual Environments using a Combined Hand- and Finger-Force Feedback System, Proc. 2000 IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, pp. 1328-1333, Takamatsu, Japan, October 31-November 5, 2000.
3. P. Kammermeier, M. Buss, and G. Schmidt, Actuator Array for Display of Distributed Tactile Information—Design and Preliminary Experiments, Proc. 9th Annual Symp. on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pp. 1315-1321, 5-10 November 2000, Orlando, Florida.

Active interfaces for vehicle teleoperation
continued from p. 7

based safeguarded teleoperation. WebDriver differs from other web-based driving systems because it is highly effective in unknown, unstructured, and dynamic environments.6

Future work
We believe that our tools and technologies are well-suited for tasks such as remote exploration. Thus, we are planning to apply our research to the exploration of planetary surfaces. In the next year, we intend to develop user interfaces that enable EVA crew members (e.g., suited geologists) and mobile robots to collaborate and jointly perform tasks such as surveying, sampling, and in-situ site characterization.

This work is partially supported by the DARPA ITO Mobile Autonomous Robot Software program (SAIC/CMU task) and the Swiss Commission for Technology and Innovation (CTI project 4327.1). Contributors include Gilbert Bouzeid, Francois Conti, Sebastien Grange, Roger Meier, and Gregoire Terrien.

Terry Fong and Charles Thorpe
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
E-mail: {terry, cet}@cs.cmu.edu
http://www.cs.cmu.edu/~{terry, cet}

Charles Baur
Institut de Systèmes Robotiques
Swiss Federal Institute of Technology
CH-1015 Lausanne, Switzerland
E-mail: [email protected]
http://imtsg7.epfl.ch/~baur



References
1. http://imtsg7.epfl.ch (Virtual Reality and Active Interfaces Group).
2. T. Fong, C. Thorpe, and C. Baur, Advanced interfaces for vehicle teleoperation: collaborative control, sensor fusion displays, and remote driving tools, Autonomous Robots, to be published July 2001.
3. T. Fong, F. Conti, S. Grange, and C. Baur, Novel interfaces for remote driving: gesture, haptic and PDA, SPIE Telemanipulator and Telepresence VII, Boston, MA, November 2000.
4. R. Meier, T. Fong, C. Thorpe, and C. Baur, A Sensor Fusion Based User Interface for Vehicle Teleoperation, Field and Service Robotics (FSR'99), Pittsburgh, PA, August 1999.
5. G. Terrien, T. Fong, C. Thorpe, and C. Baur, Remote Driving with a Multisensor User Interface, SAE 30th Int'l Conf. on Environmental Systems, Toulouse, France, July 2000.
6. S. Grange, T. Fong, and C. Baur, Effective vehicle teleoperation on the World Wide Web, IEEE Int'l Conf. on Robotics and Automation, San Francisco, CA, 2000.

Figure 4. PdaDriver is a PDA-based interface for remote driving.

Figure 5. WebDriver enables safeguarded vehicle teleoperation via the Web.


Web interfacing for task supervision and specification
continued from back cover

and correct.
• Crashes of one or more processes can cause a loss of information that would be important for the off-line analysis.

On-line supervision (see Figure 1) is therefore an important tool for speeding up progress in research on applications such as mobile robotics by allowing on-line detection of characteristics of the tested approaches. Having access to the machine's perception permits us to identify the correspondence between the perception and behavior of the robot. This is done by visualizing sensory information on several levels of abstraction, using state-of-the-art web technology that yields a plug-in-free interface viewable with a standard browser. It provides multi-modal information in several representations: off- and on-board vision, laser data, and odometry (see Figure 2).

This tool proved to be indispensable in the development phase of navigation algorithms for localization, obstacle avoidance, and path planning.3,4
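As a rough illustration of the kind of multi-modal telemetry such a browser-based supervision tool has to expose, the short Python sketch below (hypothetical code under our own naming, not the lab’s actual implementation) packages one snapshot of odometry, raw laser data, and extracted features into JSON that a plug-in-free web page could poll and render.

import json, math, time

def robot_snapshot(pose, laser_ranges, features):
    """Package one multi-modal robot state for a browser-based supervisor.

    pose          -- (x, y, theta) odometry estimate in metres / radians
    laser_ranges  -- list of ranges from the 360-degree scanner (metres)
    features      -- extracted features, e.g. line segments, as plain dicts
    """
    return json.dumps({
        "stamp": time.time(),                      # acquisition time
        "odometry": {"x": pose[0], "y": pose[1], "theta": pose[2]},
        "laser": laser_ranges,                     # raw data, lowest abstraction level
        "features": features,                      # extracted lines, higher abstraction
    })

if __name__ == "__main__":
    # Fake data standing in for the robot's perception of a corridor.
    ranges = [2.0 + 0.1 * math.sin(i / 10.0) for i in range(360)]
    lines = [{"rho": 2.0, "alpha": 1.57}]          # one infinite line in polar form
    print(robot_snapshot((1.2, 3.4, 0.7), ranges, lines)[:120], "...")

Serving such snapshots at the different abstraction levels (raw scans, features, pose) is one simple way to let a standard browser show what the robot perceives while it acts.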

Specification
Through public presentations and joint experiments with distant labs, some limitations of our system became evident. Obscure in-line commands were used to control the robot, and the only feedback was the robot’s behavior and some text output. This was not satisfying for people who were not familiar with this development and operating system. Including task specification in a graphical feedback interface, and thereby making the results of the robotics research accessible to potential end-users, became a major aspect of the development.

For this, a means of controlling the robot (which was not a basic aim of the project) has been developed. This was achieved by introducing modern guidelines for ergonomic interface design, such as context-sensitive pop-up menus and clickable goal specification. Defining a navigation goal on a graphical interface by clicking on an image of the known environment makes the interface very intuitive for the end-user. In the same way, local goals near the current robot position can be defined on the image showing the neighborhood of the robot and the raw data from the laser scanner. Furthermore, the robot’s behavior can even be observed by distant users by means of external cameras (see Figure 2).
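To make clickable goal specification concrete, here is a small hypothetical sketch (our illustration, not the EPFL interface code) that converts a mouse click on the global map image into a metric navigation goal, assuming a known map resolution and map origin.

def click_to_goal(pixel, image_height, resolution=0.05, origin=(0.0, 0.0)):
    """Convert a click on a map image into a world-frame goal.

    pixel        -- (u, v) click position in image coordinates (v grows downward)
    image_height -- image height in pixels
    resolution   -- metres per pixel (assumed value)
    origin       -- world coordinates of the map's lower-left corner (assumed)
    """
    u, v = pixel
    x = origin[0] + u * resolution
    y = origin[1] + (image_height - v) * resolution   # flip the image's y axis
    return (x, y)

# A click near the middle of a 400-pixel-high map:
print(click_to_goal((150, 220), image_height=400))    # -> (7.5, 9.0)

The same mapping, applied to the local laser-scan image instead of the global map, would yield local goals in the robot’s neighborhood.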

Its practicality has been extensively demonstrated at the ‘Computer2000’ exhibition, where the robot was remote-controlled using this interface for four days, in a fully autonomous mode, by visitors to the tradeshow.2 The visitors defined 724 missions for the robot, which had to travel a total of 5.013 km in order to fulfill them.
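For scale, that works out to an average of roughly 5013 m / 724 ≈ 7 m of robot travel per visitor-defined mission.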

Nicola Tomatis and Benoit Moreau
Autonomous Systems Lab, Prof. Roland Siegwart
Swiss Federal Institute of Technology Lausanne (EPFL)
CH-1015 Lausanne, Switzerland
E-mail: {nicola.tomatis, benoit.moreau}@epfl.ch
http://dmtwww.epfl.ch/isr/asl

References
1. K. Arras, N. Tomatis, and R. Siegwart, Multisensor On-the-Fly Localization Using Laser and Vision, Proc. IEEE/RSJ Int’l Conf. on Intelligent Robots and Systems (IROS 2000), p. 462, Takamatsu, Japan, October-November 2000.
2. K. Arras, N. Tomatis, and R. Siegwart, Multisensor On-the-Fly Localization: Precision and Reliability for Applications, to appear in J. Robotics and Autonomous Systems.
3. R. Brega, N. Tomatis, and K. Arras, The Need of Autonomy and Real-Time in Mobile Robotics: A Case Study of XO/2 and Pygmalion, Proc. IEEE/RSJ Int’l Conf. on Intelligent Robots and Systems (IROS 2000), p. 1422, Takamatsu, Japan, October-November 2000.
4. B. Moreau, N. Tomatis, K. Arras, B. Jensen, and R. Siegwart, Multimodal Web Interface for Task Supervision and Specification, Proc. SPIE 4195, 2000.


A new telepresence approach
continued from p. 3

References
1. E. Freund and J. Rossmann, Projective Virtual Reality: Bridging the Gap between Virtual Reality and Robotics, IEEE Trans. on Robotics and Automation 15 (3), pp. 411-422, June 1999.
2. E. Freund and J. Rossmann, Space Robot Commanding and Supervision by means of Projective Virtual Reality: The ERA Experiences, Proc. 7th Conf. on Telemanipulator and Telepresence Technologies, Boston, November 2000.


Time delay and communication-bandwidth limitation on telerobotic control
continued from p. 8

0.04 times per trial, or about once in every 25 attempts at popping a sphere. More evidence showing the usefulness of the commanded display was that the number of impacts was significantly lower: when using the commanded display, the number of impacts was reduced by 95%. Operations using the commanded display were practically flawless. In all 1,440 sphere-popping trials conducted in the 6 in/s manipulator-speed study, impacts occurred only three times when using a commanded display (roughly 0.002 impacts per trial, consistent with the 95% reduction from the 0.04 baseline). No significant difference in the number of impacts was found for the predictive display or due to changes in time delay. When the velocity was slowed from 6 in/s to 1.3 in/s, the probability of errors decreased by a third.

Conclusion
Each of the individual effects was ranked to determine which factors were most influential on performance. For completion times, the ranking of importance from most to least influential was: time delay and use of the commanded display, sampling rate, and (finally) manipulator speed. A study covering a wider range of speeds might find that manipulator speed is more important. For number of impacts, the only conclusive effect was the use of a commanded display: all other factors had very little significance.

Continued research into determining which factors most influence user operation can be used to develop better interface applications for controlling robots under adverse conditions. Creating an interface that is both easy to use and helpful in ameliorating time-delay and communication-bandwidth limitations will allow humans to effectively extend their capabilities to remote regions with the use of robotics.

J. Corde Lane
Engineering Research Building 806, Suite 4105
College Park, MD 20742
E-mail: [email protected]

References
1. J. Corde Lane, Craig R. Carignan, and David L. Akin, Time Delay and Communication Bandwidth Limitation on Telerobotic Control, Proc. SPIE 4195, 2000.
2. J. Corde Lane, Human Factors Optimization of Virtual Environment: Attributes for a Space Telerobotic Control Station, Diss. Univ. of Maryland, College Park, 2000.



Space exploration and the expanding domain of robotic telepresence
continued from p. 5

CAMPOUT provides the necessary autonomy functions to operate and command the two rovers as one single rover with capabilities for transportation of a PV tent.3

We are currently extending this work, scaling the concept, to develop techniques that enable command and distributed coordination of larger teams (three to tens) of agents for various space applications. As one example, the All Terrain Explorer3 is a distributed/modular robotic system for access to high-risk, high-value locations such as cliffs and fissures. This aggressive mobility platform will support operational functionality such as rappelling down a cliff, moving to a designated waypoint, and safe obstacle avoidance, all on a cliff wall (see Figure 2). The system concept consists of a tethered ensemble of three robotic entities: the rappeller and two anchoring assistants (anchors), which cooperatively direct and safely guide the rappeller as it descends to waypoints on the cliff side that lie within the workspace defined by the tether lengths and the anchoring points. In this work we are using CAMPOUT to demonstrate elements including collective fused mapping, state estimation, and distributed controls.
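As a simple way to picture the workspace defined by the tether lengths and anchoring points, the sketch below (our own illustration, not JPL’s CAMPOUT code) flattens the cliff face to a plane and checks whether a candidate waypoint is reachable, i.e. lies within the intersection of the circles around the two anchors.

import math

def within_tether_workspace(waypoint, anchors, tether_lengths):
    """Return True if the waypoint is reachable by the rappeller.

    waypoint       -- (x, y) position on the (flattened) cliff face, metres
    anchors        -- list of (x, y) anchoring points at the cliff top
    tether_lengths -- maximum usable length of each tether, metres
    """
    return all(
        math.dist(waypoint, anchor) <= length
        for anchor, length in zip(anchors, tether_lengths)
    )

anchors = [(0.0, 0.0), (6.0, 0.0)]        # two anchor robots at the cliff edge (assumed layout)
lengths = [15.0, 15.0]                    # assumed tether lengths
print(within_tether_workspace((3.0, -10.0), anchors, lengths))   # True
print(within_tether_workspace((3.0, -20.0), anchors, lengths))   # False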

Second, we have recently begun applying the same techniques to the control and coordination of hundreds of satellites for tasks involving formation flying and collective data acquisition. CAMPOUT provides the core technology for a high degree of autonomy that is key to a cost-efficient deployment of such distributed satellite systems.

Finally, we note in passing an effort aimed at both planetary and orbital platforms: the LEMUR (Legged Excursion Mechanical Utility Robot)5 (see Figure 3) is being developed by us as a highly dexterous system (22 independent degrees of freedom) for assembly, inspection, and maintenance operations. Sporting six limbs that can be used for both mobility and manipulation, much like a primate, LEMUR can be reconfigured using easily exchanged end effectors to perform different tasks. To date LEMUR has been equipped with gripper, hex-key driver, and visual inspection end effectors. Eventually, we expect to show cooperative behavior between LEMUR and heterogeneous platforms, in a manner related to the robot work crew activities described above.

P. Pirjanian, T. L. Huntsberger, B. A. Kennedy, and P. S. Schenker
Jet Propulsion Laboratory, California Institute of Technology
4800 Oak Grove Drive/MS 125-106
Pasadena, CA
[email protected]

References
1. P. S. Schenker and G. T. McKee, “Man-machine interaction in telerobotic systems and issues in the architecture of intelligent and cooperative control,” in Proc. IEEE Int’l Symp. Intelligent Control Workshop (Architectures for Semiotic Modeling and Situation Analysis in Large Complex Systems; Orgs.: J. Albus, A. Meystel, D. Pospelov, T. Reader), Monterey, CA, August 1995.
2. P. Pirjanian, T. L. Huntsberger, A. Trebi-Ollennu, H. Aghazarian, H. Das, S. Joshi, and P. S. Schenker, “CAMPOUT: A control architecture for multi-robot planetary outposts,” in Proc. SPIE Symposium on Sensor Fusion and Decentralized Control in Robotic Systems III, Vol. 4196, Boston, MA, Nov. 2000.
3. P. S. Schenker, T. L. Huntsberger, P. Pirjanian, A. Trebi-Ollennu, H. Das, S. Joshi, H. Aghazarian, A. J. Ganino, B. A. Kennedy, and M. S. Garrett, “Robot work crews for planetary outposts: Close cooperation and coordination of multiple robots,” in Proc. SPIE Symposium on Sensor Fusion and Decentralized Control in Robotic Systems III, Vol. 4196, Boston, MA, Nov. 2000.
4. P. S. Schenker, P. Pirjanian, B. Balaram, K. S. Ali, A. Trebi-Ollennu, T. L. Huntsberger, H. Aghazarian, B. A. Kennedy, and E. T. Baumgartner, Jet Propulsion Laboratory; K. Iagnemma, A. Rzepniewski, and S. Dubowsky, Massachusetts Institute of Technology; P. C. Leger and D. Apostolopoulos, Carnegie Mellon University; G. T. McKee, University of Reading (UK), “Reconfigurable robots for all terrain exploration,” Proc. SPIE Vol. 4196, Sensor Fusion and Decentralized Control in Robotic Systems III (Eds. G. T. McKee and P. S. Schenker), Boston, MA, Nov. 2000.
5. G. Hickey and B. Kennedy, “Six legged experimental robot,” NASA Tech Brief NPO-20897; and G. Hickey, B. Kennedy, and A. Ganino, Jr., “Intelligent mobile systems for assembly, maintenance, and operations for space solar power,” Proceedings of the Space 2000 Conference, Albuquerque, NM, 2000.

Figure 3. LEMUR with the right limb in manipulation configuration.

Calendar

2001
AeroSense: Aerospace/Defense Sensing and Controls
16-20 April, Orlando, FL
Technical Exhibit: 17-19 April

2nd International Symposium on Multispectral Image Processing and Pattern Recognition
22-24 October, Wuhan, China
Sponsored by Huazhong Univ. of Science & Technology and SPIE.

ISAM/EIS
28-31 October, Boston, MA

Integrated Computational Imaging Systems (ICIS)
4-8 November, Albuquerque, NM
Sponsored by OSA. SPIE is a cooperating organization. Contact OSA’s Customer Service: Optical Society of America, 2010 Massachusetts Ave. NW, Washington, DC 20036-1023. Fax: 202-416-6140. E-mail: [email protected]. Web: www.osa.org/mtg_conf.


Robotics & Machine Perception
The Robotics & Machine Perception newsletter is published semi-annually by SPIE—The International Society for Optical Engineering for its International Technical Group on Robotics & Machine Perception.

Editor and Technical Group Chair: Gerard McKee
Managing Editor: Linda DeLano
Technical Editor: Sunny Bains
Advertising Sales: Roy Overstreet

Articles in this newsletter do not necessarily constitute endorsement by, or the opinions of, the editors or SPIE. Advertising and copy are subject to acceptance by the editors.

SPIE is an international technical society dedicated to advancing engineering, scientific, and commercial applications of optical, photonic, imaging, electronic, and optoelectronic technologies. Its members are engineers, scientists, and users interested in the development and reduction to practice of these technologies. SPIE provides the means for communicating new developments and applications information to the engineering, scientific, and user communities through its publications, symposia, education programs, and online electronic information services.

Copyright ©2001 Society of Photo-Optical Instrumentation Engineers. All rights reserved.

SPIE—The International Society for Optical Engineering, P.O. Box 10, Bellingham, WA 98227-0010 USA. Phone: (1) 360/676-3290. Fax: (1) 360/647-1445.

European Office: Contact SPIE International Headquarters.

In Japan: c/o O.T.O. Research Corp., Takeuchi Bldg. 1-34-12 Takatanobaba, Shinjuku-ku, Tokyo 160, Japan. Phone: (81 3) 3208-7821. Fax: (81 3) 3200-2889. E-mail: [email protected]

In Russia/FSU: 12, Mokhovaja str., 121019, Moscow, Russia. Phone/Fax: (095) 202-1079. E-mail: [email protected].

Want it published? Submit work, information, or announcements for publication in future issues of this newsletter to Sunny Bains at [email protected] or at the address above. Please consult http://www.sunnybains.com/newslet.html for submission guidelines. All materials are subject to approval and may be edited. Please include the name and number of someone who can answer technical questions. Calendar listings should be sent at least eight months prior to the event.

This newsletter is printed as a benefit of membership in the Robotics & Machine Perception Technical Group. Technical group membership allows you to communicate and network with colleagues worldwide.

Technical group member benefits include a semi-annual copy of the Robotics & Machine Perception newsletter, SPIE’s monthly publication OE Magazine, a membership directory, and discounts on SPIE conferences and short courses, books, and other selected publications related to robotics.

SPIE members are invited to join for the reduced fee of $15. If you are not a member of SPIE, the annual membership fee of $30 will cover all technical group membership services. For complete information and an application form, contact SPIE.

Send this form (or photocopy) to:
SPIE • P.O. Box 10
Bellingham, WA 98227-0010 USA
Phone: (1) 360/676-3290
Fax: (1) 360/647-1445
E-mail: [email protected]

Web: www.spie.org/info/robotics

Please send me
❏ Information about full SPIE membership
❏ Information about other SPIE technical groups
❏ FREE technical publications catalog

Join the Technical Group...and receive this newsletter

Membership Application

Please Print   ❏ Prof.   ❏ Dr.   ❏ Mr.   ❏ Miss   ❏ Mrs.   ❏ Ms.

First Name, Middle Initial, Last Name ______________________________________________________________________________

Position _____________________________________________ SPIE Member Number ____________________________________

Business Affiliation ____________________________________________________________________________________________

Dept./Bldg./Mail Stop/etc. _______________________________________________________________________________________

Street Address or P.O. Box _____________________________________________________________________________________

City/State ___________________________________________ Zip/Postal Code ________________ Country __________________

Telephone ___________________________________________ Telefax _________________________________________________

E-mail Address _______________________________________________________________________________________________

Technical Group Membership fee is $30/year, or $15/year for full SPIE members.

❏ Robotics & Machine Perception
Total amount enclosed for Technical Group membership $ __________________

❏ Check enclosed. Payment in U.S. dollars (by draft on a U.S. bank, or international money order) is required. Do not send currency. Transfers from banks must include a copy of the transfer order.

❏ Charge to my: ❏ VISA ❏ MasterCard ❏ American Express ❏ Diners Club ❏ Discover

Account # ___________________________________________ Expiration date ___________________________________________

Signature ___________________________________________________________ (required for credit card orders)

ROBOTICS ONLINE
New Robotics Web Discussion Forum launched

You are invited to participate in SPIE’s new online discussion forum on Robotics. The INFO-ROBO mailing list is being “retired” as we move the discussions to the more full-featured web forums. We hope you will participate. To post a message, log in to create a user account. For options see “subscribe to this forum.”

You’ll find our forums well-designed and easy to use, with many helpful features such as automated email notifications, easy-to-follow “threads,” and searchability. There is a full FAQ for more details on how to use the forums.

Main link to the new Robotics forum: http://spie.org/app/forums/tech/

Related questions or suggestions can be sent to [email protected].


• Taught by Experts!
• Make the Most of Your Time!
• Increase Your Earning Potential!
• Training for One or the Whole Team!

Continuing Education for the way YOU learn.
Video • CD-ROM • Online

Order today!
Call customer service: (1) 360/676-3290
E-mail: [email protected]
Order online at http://www.spie.org/info/education
Shipping and handling fees apply. SPIE members, take 10% off the list price.

Can’t take the time to attend live short courses? SPIE has over 90 courses on video and CD-ROM, and new for 2001, the most popular courses are available online for immediate download. Taught by world-renowned instructors, new releases are scheduled throughout the year!

• Optics and Photonics  • Digital Imaging
• Optoelectronics  • Fiber Optics

Update your skills, discover a new technology, or chart a new career at your own pace in the comfort of your own home or office. SPIE Distance Education courses make it easy to begin to achieve your goals today.

With short courses on video or CD-ROM, and now online, you can learn about optics, optoelectronics, imaging, and communications at your own pace. Learn new approaches and problem-solving skills; enhance your in-house training and improve your overall performance; and make the most of your valuable time. Gain the competitive advantage you need in the ever-changing world of optics!

Now available!

SPIE courses online




Web interfacing for task supervision and specification

The Autonomous Systems Lab at the Swiss Federal Institute of Technology Lausanne (EPFL) is engaged in mobile robotics research. The lab’s research focuses mainly on indoor localization and map building, outdoor locomotion and navigation, and micro mobile robotics. In the framework of a research project on mobile robot localization, a graphical web interface for our indoor robots has been developed.1 The purpose of this interface is twofold: it serves as a tool for task supervision for the researcher, and for task specification for the end-user. Our indoor robots are fully autonomous systems based on the VME-bus standard with a six-axis robot controller carrying a PowerPC processor at 300 MHz. Among the various peripheral devices, they have three main sensors: wheel encoders, a 360° laser range finder, and a CCD camera. The localization approach is feature based and uses an Extended Kalman Filter to integrate measurements from the encoders, the laser scanner, and the CCD camera. Features are infinite lines for the laser scanner and vertical edges for the vision system.
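For readers unfamiliar with this kind of filter, the following Python sketch (a simplified illustration under our own assumptions, not the actual code running on the EPFL robots) shows the basic EKF cycle behind such feature-based localization: an odometry increment drives the prediction of the pose (x, y, theta), and each matched feature, here a single wall line observed as a perpendicular distance, contributes an update.

import numpy as np

def ekf_predict(x, P, u, Q):
    """Propagate the pose (x, y, theta) with an odometry increment u = (d, dtheta)."""
    d, dtheta = u
    theta = x[2]
    x_pred = x + np.array([d * np.cos(theta), d * np.sin(theta), dtheta])
    F = np.array([[1.0, 0.0, -d * np.sin(theta)],
                  [0.0, 1.0,  d * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Fuse one matched feature: z is the measurement, h its prediction, H the Jacobian."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - h)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# One cycle: drive 0.1 m straight, then observe a wall whose map line lies
# 5.0 m from the origin along the x axis, so the predicted range is 5.0 - x.
x, P = np.zeros(3), np.eye(3) * 0.01
x, P = ekf_predict(x, P, u=(0.1, 0.0), Q=np.diag([1e-4, 1e-4, 1e-5]))
z = np.array([4.88])                       # measured range to the wall (laser line feature)
h = np.array([5.0 - x[0]])                 # predicted range
H = np.array([[-1.0, 0.0, 0.0]])           # Jacobian of h with respect to (x, y, theta)
x, P = ekf_update(x, P, z, h, H, R=np.array([[1e-3]]))
print(np.round(x, 3))                      # pose nudged toward the measurement

In the real system the measurement models for laser lines and vertical vision edges are more involved, but the predict/update structure is the same.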

Supervision
Testing algorithms like localization and obstacle avoidance on an autonomous, self-contained robot2 requires a means for the researcher to check the algorithmic reactions to the machine’s perception of the world. However, the perceived data and the processing results remain embedded in the mobile vehicle until they are explicitly transferred to a local PC for analysis. This can be done by tracing the robot position (odometry), and by saving all the raw data from the sensors along with the extracted features and the results of each algorithm, then transferring this information when an experiment is finished. The analysis can then be performed off-board. Nevertheless, this procedure has several disadvantages:
• The correspondence between the behavior of the robot, and the data that caused this behavior, is difficult to identify.
• Critical states of algorithms that may cause a failure cannot be detected immediately before and are therefore difficult to isolate

continued on p. 12

Figure 1. For research and development in mobile robotics, the correspondence between robot behavior (3) and robot perception (2) is very important. This correspondence is easier to understand by using on-line supervision, where robot perception is visualized using a graphical interface (4-5). 1: Human perception of the real world. 2: Machine perception of the real world. 3: Human perception of the robot behavior. 4: On-line transfer of the machine perception and machine states. 5: On-line visualization of the machine perception and machine states. 6: Human commands via in-line text commands. 7: Visual task specification.

Figure 2. The web interface. A: Multi-sensor localization monitor. B: On-board video. C: External video. D: Local robot position (x, y, τ). E: Global robot position (x, y). F: Message window. G: Pop-up menu on each window to access corresponding functionality. H: Office-information pop-up menu on global map. The numbered mouse actions are possible ways to control the robot. Mouse 1: Set a room as new global goal. Mouse 2: Set (x, y) as new local goal. Mouse 3: Assign new orientation (τ).