



LUNARES : Lunar Crater Exploration

with Heterogeneous Multi Robot Systems

Florian Cordes1 Ingo Ahrns2 Sebastian Bartsch1 Timo Birnschein1

Alexander Dettmann1 Stéphane Estable2 Stefan Haase1 Jens Hilljegerdes1

David Koebel3 Steffen Planthaber1 Thomas M. Roehr1 Marc Scheper3

Frank Kirchner1

1DFKI Robotics Innovation Center 2EADS Astrium GmbH 3OHB System AG

July 14, 2010

Abstract

The LUNARES project emulates the retrieval of a scientific sample from a lunar crater in a robotic mission. The reference for this demonstration scenario is the Shackleton crater at the lunar south pole, where samples of scientific interest are expected in permanently shadowed regions.

To accomplish such a mission, a heterogeneous robotic team was chosen, consisting of a wheeled rover, a legged scout, and a robotic arm mounted on the landing unit. All robots act as a team to reach the mission goal. To prove the feasibility of the chosen approach, an artificial lunar crater environment has been established to test and demonstrate the capabilities of the robotic systems. Figure 1 depicts the systems in the artificial crater environment. For LUNARES, preexisting robots were used and integrated into a common system control.

A ground control station has been developed considering the conditions of a real mission, requiring information from autonomous task execution and remote-controlled operations to be displayed for human operators.

1 Introduction

Water found on the Moon would be a crucial requirement for an extended human presence there. By splitting water into hydrogen and oxygen, fuel for spacecraft as well as oxygen for human habitats might be obtained. Currently, evidence is growing for water ice on the lunar surface. In addition, multiple theories try to explain a possible existence of water on the Moon:

J.R. Arnold [1], for example, names four main

Figure 1. LUNARES systems: Landing unit (A), wheeled rover (B) and legged scout (C) in the artificial crater environment

mechanisms building up deposits of water on the lunar surface: (1) water was brought down to the lunar surface as part of the meteoritic bombardment the Moon is exposed to, (2) hydrogen from the solar wind reacts with lunar oxides and leaves water molecules, (3) cometary impacts, and (4) outgassing of water from the lunar interior are also considered as sources. The last is considered to be the least probable source. At the same time, Arnold identifies effects that counteract the build-up of significant deposits of water ice on the Moon, for example the dissociation of H2O molecules through photons. This leads to the assumption that deposits of water ice reside, if at all, in permanently shadowed areas of the lunar surface, as already described by Watson, Murray, and Brown in 1961 [23].

Additionally, several satellite missions indicate that regolith-bound water ice can be found in such



permanently shadowed craters at both lunar poles: In 1996 the Clementine bistatic radar experiment suggested the presence of water ice at the lunar south pole [17]. Differences in the polarization of radar echoes from permanently shadowed and sunlit areas led to this conclusion. In 1998, measurements from Lunar Prospector implied the presence of hydrogen, possibly in the form of water ice, at both lunar poles [8].

In November 2009, the LCROSS mission [16] detected water ice in the Cabeus crater at the lunar south pole. In this mission a part of the satellite hit the ground of the permanently shadowed crater, ejecting a plume of material that was analyzed by the instruments of LCROSS. Additionally, by the end of 2009, NASA announced the detection of several hundred tons of water ice at both lunar poles within the MINI-RF experiment that flew on India's Chandrayaan-1 mission [15].

Current satellite missions can detect water only indirectly. For direct detection of water, an in-situ sample has to be taken and analyzed. It is clear that such missions will be executed by robotic systems, since robotic missions for the exploration of lunar environments operate with lower risk and higher efficiency compared to missions with direct human involvement [3]. Such a robotic mission could include taking core samples and might give insight into the history of volatiles brought to the Moon by asteroids and meteorites.

Various approaches have been proposed to explore lunar craters. These approaches employ different means such as "classical" single-rover systems, cooperative robotic systems, tethered systems, and new hybrid approaches. Some of them are presented in the following paragraphs.

Bartlett et al. [2] intend to use a four-wheeled rover with a drilling tower to explore the bottom of dark lunar polar craters. The Scarab rover is designed to be deployed directly in the interior of the crater. The rover will operate solely in dark regions, thus a Radioisotope Thermoelectric Generator (RTG) will be used for power generation in the flight system. In various experiments Scarab proved to be able to safely overcome terrain with slopes of up to 20° on different soils [24]. The instrumentation is capable of taking 1 m drill cores and of demonstrating the extraction of water from the taken soil sample.

In October 2008 the robot CESAR was the only robot to fulfill all mission objectives of the Lunar Robotic Challenge (LRC) hosted by the European Space Agency (ESA). The robot uses a hybrid legged-wheel approach for locomotion in steep crater environments [19]. The objective of the LRC was to send a robotic device into a crater at the Teide volcano

on Tenerife to find and pick up 100 g of colored sand. The sample had to be delivered back to a designated site outside the crater. The challenge simulated missions where the scientific equipment has to reside outside of the crater itself, requiring robots to collect the samples. The robots in the challenge were remotely operated from a simulated ground control station.

With the TRESSA system, Huntsberger et al. [12] employ a team of three rovers to explore hard-to-access terrain. The system comprises a Cliff-bot that is tethered down a slope by two so-called Tether-bots. The Tether-bots act as anchors with winches for the tethering system. The Cliff-bot is equipped with an instrumented arm for scientific experiments to be conducted directly on the slope. The system proved to be able to successfully operate on steep canyon walls with slopes of up to 85°.

In comparison, the LUNARES project aims at a reconfigurable cooperative system of heterogeneous robots to accomplish the task of analyzing material from a permanently shaded region at the lunar south pole [5].

The system consists of three main elements: (1) a landing unit that has to transport all subsystems safely to the lunar surface and provide manipulation support using its robotic arm, (2) a wheeled rover to allow locomotion in moderate terrain and to provide an energy-efficient vehicle for long distances, and (3) a legged scout robot that can advance into the dark areas of a lunar crater and return a sample from the crater bottom. The aim of the project was to provide a proof of concept based on existing technology, i.e. reusing existing robotic systems.

The subsystems involved are described in detail in Section 2. Section 3 gives an overview of the lunar crater testbed that has been built up as a simulation environment to demonstrate and test the feasibility of the approach within the project LUNARES. The demonstration mission conducted in the testbed is presented in Section 4. The readiness level compared to a real mission is discussed in Section 5, and finally the conclusion of the project as well as an outlook on following activities is given in Section 6.

2 Systems of the LUNARES Scenario

The following section describes the systems that are part of the LUNARES mission. The section is subdivided into a description of the lander mock-up's superstructural parts, namely the sensor tower and the manipulator arm. Section 2.2 describes the wheeled rover system, which is used for longer distances in moderate terrain. Section 2.3 presents the details of the legged


scout system (aka Scorpion). Finally, system and mission control elements of the LUNARES system are discussed in Section 2.4 and Section 2.5, highlighting the required functionality of a ground segment for a real exploration mission.

2.1 Landing Unit and Superstructural Parts

The landing unit is represented by a non-functional mock-up and serves as a mounting platform for the robotic arm and the sensor tower; these components are described in the following paragraphs. The design of the lander is inspired by a real landing unit, though at a 1:1.6 scale.

2.1.1 Lander Manipulator

The lander manipulator is a 6 DOF robotic manipulator (Figure 2(a)). The main tasks of the manipulator in the LUNARES reference mission are the following:

• reconfiguration of the rover by grasping a payload element from the lander and installing it at a special payload bay on the back of the rover,

• recollection of a payload by grasping the payload element from the back of the rover and placing it on the lander,

• grasping the sample container, which contains the collected lunar sample, from the back of the scout and transporting it to the lander.

(a) The 6 DOF lander manipulator in its space-saving configuration on top of the lander mock-up

(b) The sensor system including a stereo camera, a 3D laser scanner and two spotlights

Figure 2. The lander's superstructural components

2.1.2 Lander Sensor System

The sensor system of the lander (Figure 2(b)) includes a pan-tilt unit which contains a stereo camera system, a laser scanner and a spotlight for illumination. By swiveling the laser line over the scene, a reconstruction of static scenes of the environment is performed. The sensor system fulfills several tasks:

1. The stereo camera system provides camera images for the ground control station, enabling visual monitoring of the robotic systems when the robots are in the vicinity of the lander¹.

2. The pan-tilt unit allows the ground operator to control the viewing direction in order to focus on parts of the scene that are of greater importance for the current task execution.

3. The 2D laser scanner works together with the tilt unit as an imaging 3D LIDAR (Light Detection And Ranging), thus providing dense 3D images of the vicinity of the lander (Figure 3). By using the remission values, the 3D laser scanner can also be used as a convenient 3D feature tracker. For this purpose, retro-reflective markers are attached to the rover, enabling accurate tracking of the rover pose in the neighborhood of the lander (see also Figure 23 in Section 4.4).

4. The spotlight can be used to illuminate the close vicinity of the lander for improved imagery.

2.2 Rover

One of the preconditions of the LUNARES project was to reuse existing robotic systems. Therefore the LUNARES rover is based on an industrial mobile platform which was not designed as a lunar rover. The existing platform was utilized to demonstrate the basic idea of combining two different mobility systems, i.e. a wheeled rover for the coverage of larger distances in moderate terrain and a legged scout for the exploration of shorter distances in more difficult terrain (with more obstacles and greater terrain inclination). Nevertheless, the existing mobile platform had to be adapted to fit the demonstration requirements: (1) carry an additional payload (i.e. a lifting mechanism for the scout and the scout itself) of approximately 20 kg mass (14 kg for the scout and 6 kg for the lifting mechanism), (2) operate in very dusty terrain and (3) negotiate inclinations of about 15°.

¹ Up to now, the operator at ground control only receives monoscopic camera images. However, for a future enhancement of the system, visual control through a stereoscopic display is possible.


Figure 3. Scan data provided by the 3D laser scanner showing parts of the lander (red, lower left corner), the rover in front of the lander, and parts of the crater rim (right half of the picture).

This resulted mainly in modifications to the traction systems, which are still not applicable for a real lunar mission. Figure 4 shows the modified LUNARES rover.

For intelligent and cooperative behaviour, the rover was equipped with an additional processing unit and additional sensors. The sensor system of the rover comprises a sensor head including a stereoscopic camera system and a laser scanner. The sensor head is mounted on a pan-tilt unit. Additionally, an inclination sensor allows the detection of hazards resulting from too large ground inclinations. To support the self-localization of the rover, an Inertial Measurement Unit (IMU) including an accelerometer and a gyroscope based on Micro-Electro-Mechanical-System (MEMS) technology was added.

The pan-tilt unit enables control of the viewing direction of the camera system as well as the usage of the 2D laser scanner as an imaging 3D LIDAR. The data of the laser scanner are mainly used for reactive obstacle avoidance and sensor-data-driven emergency stops.

The camera serves two purposes: (1) It allows the monitoring of the operation via the ground control station. (2) It is required for the autonomous docking behaviour between rover and scout. This is discussed in more detail in Section 4.4.

Rover Control

The rover is one of four robotic subsystems in the LUNARES system architecture and is embedded into the system control, which will be described in Section 2.4.

Figure 4. Image of the LUNARES rover reaching the crater rim and looking into the crater where the scout is working.

Figure 5. Control system of the rover hosted on the rover on-board computer.

The structure of the rover-specific control is depicted in Figure 5. The rover is controlled by six components:

1. The rover device controller (level A controller) controls the hardware of the rover: the movements, reactive emergency behaviors, spline interpolation, trajectory generation, the pan-tilt unit and the data acquisition of the 3D imaging LIDAR.

2. The rover skill controller is a scheduler that controls the execution of skills, such as the autonomous scout-docking behavior, depending on the specific tasks.

3. The rover vision controller acquires image data from the stereo camera system and distributes these data via WLAN.


Figure 6. Cubic splines and circle segments are used to generate more complex trajectories for the rover.

4. The rover process controller starts and stops all processes of the rover's on-board computer and performs a health check of all components of the rover.

5. The logging controller of the rover records all incoming and outgoing messages and generates an asynchronous error signal to the higher levels of the system control in case of severe error signals from one of the components of the rover control.

6. The rover communication interface provides a central interface program to the higher levels of the system control.

Rover Motion Planning

The level A controller of the rover provides different modes to control its motion: velocity control (left and right wheel control, jog rate, etc.), control of certain distances at defined velocities, as well as more complex trajectories which can be assembled from simpler trajectory parts. Possible parts can be segments of circles as well as cubic splines which reach a certain point with a given heading direction (Figure 6).

The general concept of motion planning for the rover consists of three main steps:

• Depending on the task, a trajectory is planned, either automatically, as is the case during the automatic docking to the lander mock-up, or controlled by the operator from ground.

• The trajectory is transferred to the controller, which commands the motion system and the specific velocities for the left and right wheel system (differential drive).

• During execution, the motion is monitored via the odometry system, which provides a rough estimate of the position and heading direction of the rover.
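The cubic-spline trajectory parts mentioned above can be sketched as follows. This is an illustrative reconstruction, not the project's controller code; the function name, the Hermite formulation and the tangent scaling are assumptions.

```python
import math

def cubic_spline_segment(start, goal, tangent_scale=1.0, steps=20):
    """Cubic Hermite spline from a start pose to a goal pose (x, y, heading).
    Tangents point along each pose's heading, so the path leaves the start
    and reaches the goal with the requested heading directions."""
    x0, y0, th0 = start
    x1, y1, th1 = goal
    d = math.hypot(x1 - x0, y1 - y0) * tangent_scale  # tangent magnitude
    m0 = (d * math.cos(th0), d * math.sin(th0))
    m1 = (d * math.cos(th1), d * math.sin(th1))
    pts = []
    for i in range(steps + 1):
        t = i / steps
        # Hermite basis functions
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        pts.append((h00 * x0 + h10 * m0[0] + h01 * x1 + h11 * m1[0],
                    h00 * y0 + h10 * m0[1] + h01 * y1 + h11 * m1[1]))
    return pts
```

A sampled path of this kind could then be converted into left/right wheel velocities for the differential drive.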

Figure 7. Collecting several laser scans while driving and combining these scans into a 3D reconstruction using the rover's odometry provides enough information to assess the terrain.

In the vicinity of the lander, the estimation of the rover position and heading direction is supported by the 3D laser scanner of the lander system, which estimates the rover pose from retro-reflective markers. Sensor data is processed and monitored to apply a reactive emergency behavior that stops the rover. Potential reasons for an emergency stop are:

• Loss of the WLAN connection, checked by continuously pinging the system control at the lander.

• Obstacles in front of the rover, detected by the laser scanner.

• Too large an inclination, indicating the start of a crater rim or any other obstacle which has not been detected by the laser scanner.

The reactive emergency-stop behavior gathers range information from the 3D laser scanner and assembles a rough estimate of the terrain in front of the rover using the self-localization resulting from the rover odometry. The 3D data allows extracting information on whether there is an obstacle, a hill or the beginning of a hole (e.g. a crater). This principle is shown in Figure 7.
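The three stop conditions above amount to a simple monitoring check. A minimal sketch, assuming hypothetical threshold values (the actual limits used in LUNARES are not stated here):

```python
def emergency_stop(wlan_ok, min_obstacle_range_m, inclination_deg,
                   range_limit_m=0.5, incl_limit_deg=15.0):
    """Return (stop, reason) mirroring the three stop conditions:
    WLAN loss, obstacle in front, and excessive inclination."""
    if not wlan_ok:
        return True, "wlan-loss"
    if min_obstacle_range_m < range_limit_m:
        return True, "obstacle"
    if abs(inclination_deg) > incl_limit_deg:
        return True, "inclination"
    return False, "nominal"
```

In the real system such a check would run continuously on the rover's on-board computer, independently of the commanded trajectory.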

The ground control station (Section 2.5) provides different methods for monitoring and controlling the rover. A mission involving the rover contains automatic tasks and interactive tasks which are monitored or even solved by the operators. The LUNARES system does not include autonomous path planning and obstacle avoidance with trajectory replanning.

Figure 8. MMI at the ground control station dedicated to the rover. The MMI provides monitoring (e.g. artificial horizon, lower right) and trajectory planning (white area, top left). The window in the top right area shows the acquired 3D data from a scan taken while not driving. The planned trajectory can be applied to the simulated rover in the 3D scan image.

Instead, for safety reasons, every movement of the rover is preceded by the acquisition of a 3D range image.

This image supplies the ground operators with a good impression of the near terrain and is used to plan a small trajectory which does not exceed 1.5 m (Figure 8). The movement can be simulated on ground and finally sent to the flight system, which executes the planned trajectory. Unforeseen hazards can still be avoided by the emergency system of the rover.

2.3 Scout

The Scorpion robot [13, 14] serves as the legged scout in the LUNARES mission. Scorpion is an eight-legged biomimetic walking robot. Each of its legs has three active DOF and one passive DOF in the lower leg. The locomotion control employs a biologically inspired pattern generator and reflexes for efficient locomotion. The following paragraphs describe the modifications that were carried out in order to adapt the existing robot Scorpion to the LUNARES scenario. Figure 9 shows the Scorpion as the fully equipped LUNARES scout.

Scout's Basic Locomotion Principle

The scout's locomotion is controlled by a microprocessor running a micro-kernel for behavior-based control of robots [20]. The higher behavioral levels are

executed separately on an embedded PC. This principle of separation is biologically inspired: human and animal locomotion and reflexes are produced in the spinal cord, whereas higher-level understanding is located in the brain. In the case of the Scorpion, the microprocessor replaces the spinal cord and the higher-level behaviors are executed on the PC system.

Besides real-time capabilities and reflexes, the micro-kernel also offers an inverse kinematics layer which is used to describe the scout's rhythmic movement patterns in Cartesian coordinates.

This inverse kinematics layer is important for the climbing task of the robot, because it prevents the feet from slipping by reducing the tension between the legs while walking.

The micro-kernel allows multiple inputs to be written to single hardware drivers using different merging functions; e.g. the normal posture of the robot is written to the joints, while the walking itself is defined as offsets to this posture. The micro-kernel merges both values by addition and relays the result to the inverse kinematics, resulting in the final control values for the joints.

Due to this approach an automatic merging of walking patterns is possible. Thus, forward and sideward walking can be combined into a diagonal walking pattern. The posture (e.g. body height, forward lean, tilt angle, etc.) can be set independently of the walking patterns.

The representation of the movements allows the walking speed to be modified by changing the frequency of the curve and the step height by changing the amplitude.
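The merging of a static posture with rhythmic offsets can be sketched as follows. This is a simplified illustration of the principle, not the micro-kernel itself; the sinusoidal pattern, the shared phase and all parameter names are assumptions.

```python
import math

def foot_target(t, posture, step_len, step_height, frequency,
                fwd=1.0, side=0.0):
    """Cartesian foot target: nominal posture plus rhythmic walking offsets,
    merged by addition as described for the micro-kernel. Forward and
    sideward patterns share one phase, so enabling both (fwd = side = 1)
    automatically yields a diagonal gait. Frequency scales walking speed,
    step_height scales the lift amplitude."""
    phase = 2.0 * math.pi * frequency * t
    dx = fwd * step_len * math.cos(phase)           # stride offset
    dy = side * step_len * math.cos(phase)
    dz = step_height * max(0.0, math.sin(phase))    # lift only during swing
    px, py, pz = posture
    return (px + dx, py + dy, pz + dz)              # merged Cartesian target
```

The merged target would then pass through the inverse kinematics layer to obtain joint angles.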

Reflexes for Secure Locomotion

In order to enable secure locomotion on crater slopes of up to 35°, several reflexes had to be implemented or adapted. Already existing reflexes were the hole reflex and the stumbling correction; additionally, a ridge reflex and a balance reflex were introduced specifically for locomotion in steep environments with obstacles.

The hole reflex is triggered when, due to the state of the walking behavior, ground contact of a foot is expected but not measured by the linear potentiometer in the spring-damped lower leg (passive DOF). As a result the reflex stretches the leg until ground contact is measured. The "opposite" reflex is the ridge reflex: this reflex is triggered when, in the touchdown phase of the leg, ground contact is sensed before it is expected. In this case the reflex inhibits further stretching of the leg in order to keep the body of the robot level.

The stumbling correction reflex is triggered when a leg is stuck in the swing phase. To detect this, the


Figure 9. Scorpion robot: scout system for LUNARES. One of the eight legs is equipped with a gripper for sample pick-up. The sample container with a grapple fixture for the lander's arm is mounted on the scout's back. Main sensors are an operator camera, also used for the automatic sample approach, and a tiltable laser scanner.

currents of the thorax joint are measured and compared with a threshold. If the current rises above the threshold, the basal and distal joints are moved reflex-like in order to lift the foot over the obstacle, i.e. the foot swings over the obstacle with a much higher amplitude than in the regular gait pattern. A second indication used to trigger this reflex is the difference between the actual and the desired position of the foot.

The balance reflex shifts the body according to the slope of the terrain in order to keep the load distributed as equally as possible on all legs.
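The trigger conditions of the hole, ridge and stumbling-correction reflexes can be summarized in one arbitration function. A sketch under assumptions: the priority order, the current threshold and the function name are not taken from the paper.

```python
def select_reflex(expect_contact, contact, swing, thorax_current,
                  current_limit=2.0):
    """Arbitrate the leg reflexes described in the text
    (priority order is an assumption)."""
    if swing and thorax_current > current_limit:
        return "stumbling-correction"   # leg stuck during swing phase
    if expect_contact and not contact:
        return "hole"                   # stretch leg until touchdown
    if contact and not expect_contact:
        return "ridge"                  # inhibit stretching, keep body level
    return "none"
```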

Reusing a Leg as Manipulator

A walking robot can extend its capabilities through the addition of a functional element at the foot. With such an element, e.g. a gripper or a sensor, a leg of the system can act as a manipulator or sensor arm. Furthermore, a robot's gripping device can support locomotion when used to increase the footprint of the robot.

For the LUNARES scenario, the Scorpion is equipped with a gripper. The main function of this device is to collect samples from the ground and to place them in a container on the robot. More information on the gripping device is provided in [5].

Autonomy Framework for the Scout

The Scorpion's task requirements within the scenario exceeded the existing online processing capability of

the Scorpion. To handle this issue, a software framework was introduced. This framework provided the communication infrastructure to remotely control the Scorpion and eventually allowed the introduction of high-level action commands such as "collect sample", a command which is executed after the position of the sample has been determined (described in Section 4.5).

This framework also allowed the distribution of software over different processing units such as workstations or embedded PCs, and also allowed for conducting simulation experiments with "hardware in the loop", e.g. using the real sensors of the robot for an obstacle detection task while controlling the robot in simulation.

Additional Sensors

In order to fulfill the task of detecting a sample autonomously as described in Section 4.5, the scout has been equipped with a laser scanner. Since the laser scanner was added to the existing robot during the project, a compact additional module has been designed, consisting of controller electronics, a power converter, a digital servo motor and the laser scanner itself.

The controller is executed on a microcomputer with a Linux operating system and uses wireless communication to connect to the mission control station. This enables the controller to be embedded into the control station and to communicate within the autonomy framework (see Section 2.3).

The pan-tilt laser scanner unit is controlled by a high-level software module, which collects the information of the laser scanner and constructs a height map. This map is then used for the localization of samples (see Section 4.5). Similarly, the camera output is processed by another high-level software module, which controls the approach of the scout (see Section 4.5).
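Constructing a height map from a tilting 2D scanner essentially means projecting each beam into the world frame and keeping a per-cell height. A geometric sketch, assuming the scanner sits at the origin and the tilt servo rotates the scan plane downward; the grid representation and cell size are assumptions, not the project's data structure.

```python
import math

def build_height_map(scans, cell=0.05):
    """Fuse tilted 2D laser scans into a sparse height map mapping a grid
    cell to the highest measured z. Each scan is (tilt, [(bearing, range)]);
    positive tilt rotates the scan plane downward about the sideways axis."""
    grid = {}
    for tilt, beams in scans:
        for bearing, rng in beams:
            fwd = rng * math.cos(bearing)      # in-plane coordinates
            side = rng * math.sin(bearing)
            x = fwd * math.cos(tilt)           # project into world frame
            z = -fwd * math.sin(tilt)          # looking down gives negative z
            key = (round(x / cell), round(side / cell))
            grid[key] = max(grid.get(key, -math.inf), z)
    return grid
```

A sample then appears in such a map as a small local elevation relative to the surrounding cells.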

2.4 System Control

The control of the systems is based on the Functional Reference Model (FRM) defined by the European Space Agency (ESA) [22, 9]. The FRM divides autonomous robot control into three layers:

• Mission Layer (Level C)

• Task Layer (Level B)

• Action Layer (Level A)

In the case of LUNARES, the Action Layer defines basic actions like movements or gripping. The Task Layer is responsible for sequences of actions and defines tasks like "go to the next position while avoiding obstacles" or "deliver sample". The Mission Layer is responsible for the overall mission execution. It defines which task is executed when and how to handle task dependencies between all robots. Due to this approach the Level B and C controllers can make use of the existing Level A controllers.

Figure 10. Overview of the LUNARES control architecture. Bottom: subsystem control with all level A controllers of rover, scout, lander, and manipulator. Middle: system control with levels B and C, and other support functions like monitoring, telemetry, vision server, world model, and skill controller. Top: LUNARES ground control, distributed over several workstations.
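The layered expansion of commands from Level C down to Level A can be sketched as follows. The class, the command names and the expansion tables are hypothetical examples of the principle, not the LUNARES implementation.

```python
class ControlLayer:
    """One FRM layer: a command it knows expands into sub-commands for the
    layer below; the bottom layer (Level A) executes primitives directly."""
    def __init__(self, name, expansions, lower=None):
        self.name = name
        self.expansions = expansions   # command -> list of sub-commands
        self.lower = lower

    def execute(self, command):
        if self.lower is None:
            return ("executed", self.name, command)   # hardware primitive
        return [self.lower.execute(c) for c in self.expansions[command]]

# Hypothetical command hierarchy illustrating levels C -> B -> A
level_a = ControlLayer("A", {})
level_b = ControlLayer("B",
                       {"deliver sample": ["move to lander", "open gripper"]},
                       lower=level_a)
level_c = ControlLayer("C", {"sample mission": ["deliver sample"]},
                       lower=level_b)
```

Each layer only needs to know the interface of the layer directly below it, which is why the existing Level A controllers could be reused unchanged.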

The three main columns of information flow typical for the FRM (forward control, nominal feedback, non-nominal feedback) [22] have also been implemented for the LUNARES control system. The first column is the feed-forward control, which sends commands from the top (mission level) via level B to the bottom (action level, close to the robotic hardware).

The second column of the FRM is the nominal feedback channel, which is implemented as a synchronous communication channel where nominal responses to the commands of the feed-forward command channel are sent and evaluated. Normally, this channel serves to acknowledge simple commands or to report errors. In case of an error generated by a lower level, the next higher level has to react to that error or, if no error recovery is possible, report the error to the next higher level.

In addition, the FRM foresees a third column, called the non-nominal feedback channel. On this channel, asynchronous error messages can be generated, which are reported from a lower level to the next higher level. Again, the next higher level can try to recover from the problem if possible. This channel is the most complicated due to its asynchronous character.

For the LUNARES system control, this has so far been solved by a simple error-handling mechanism and should be enhanced in the future. Besides the three control levels, the LUNARES system control also contains a global state machine which controls the different operating modes and error states of the system. The overall mode controller comprises vector states which contain component states for every subsystem. This allows error handling for single subsystems while other subsystems operate in nominal automatic or manual operating mode.

Furthermore, the system control contains a world model database, a skill controller which provides a library of higher skills (such as cognitive skills for grasping or docking), and a monitoring controller which gathers telemetry information from the system itself and all connected subsystems, i.e. the different robots. The telemetry data is permanently analyzed and is used to raise asynchronous error messages. A vision server collects all video and image related data and provides a central service for all other skills and functions that need this type of data. Figure 10 depicts the main components of the system architecture.

The Level B controller (i.e., the task controller) applies a PHP script interpreter, so every task is coded as a small PHP snippet. Via network communication these scripts call actions of the Level A controller and skills of the skill controller. On this control level, most of the data representations are symbolic, like “move to payload-bay”. By accessing the world model database from the task script, the Level B controller resolves the symbolic data and obtains the corresponding numeric data. This type of data is then sent to Level A, which normally understands numeric data and parameters only.
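The symbolic-to-numeric resolution described above might be sketched as follows (in Python rather than PHP, for brevity); the world model entries, coordinate values, and command format are illustrative assumptions:

```python
# Sketch: a Level B task refers to symbolic locations such as "payload-bay";
# the world model database resolves them into the numeric coordinates that
# the Level A controller understands. All entries below are made up.
world_model = {
    "payload-bay": (0.42, -0.10, 0.25),      # x, y, z, hypothetical values
    "sample-container": (0.30, 0.05, 0.18),
}

def resolve(symbol):
    """Resolve a symbolic target into numeric coordinates."""
    return world_model[symbol]

def move_to(symbol):
    # A Level B task script would send these numbers down to Level A.
    x, y, z = resolve(symbol)
    return f"LEVEL_A_MOVE {x} {y} {z}"

print(move_to("payload-bay"))  # LEVEL_A_MOVE 0.42 -0.1 0.25
```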

2.5 Ground Control Station

The benefit of robotic missions is the ability to achieve various mission objectives within the same mission, based on a mission-specific set of programmable robots and payloads. The LUNARES mission foresees, for instance, retrieving samples from a crater relying on a heterogeneous robotic team including the remote system control and a ground control station. The Ground Control Station (GCS) has to support the LUNARES mission activities at different levels: (1) generate the mission database and the associated mission tasks, (2) edit and verify the mission timeline, (3) execute the mission timeline in a supervised autonomy mode, (4) directly control all the subsystems. These features are not specific to the LUNARES mission but can be found in a wide variety of robotic missions. The LUNARES GCS integrates these requirements and has been used within the experiments to prepare the missions and to execute the mission timelines automatically, including manual recovery actions.

2.6 Operational Concept

Using state-of-the-art robotic systems for mobility and manipulation, only a low level of autonomy can be expected, especially in an unstructured environment like the Moon. The knowledge about the mission and its subsystems then has to be shared among the elements of the whole system, including the operators:

1. Complex operations which cannot be performed autonomously by the robots are coded in the Level B tasks of the system control

2. Complex operations which can be performed autonomously by the robots, as well as simple operations, are coded in the Level A actions of the system control

3. The generation of the mission structure leading to the timeline (Level C timeline), which necessitates reasoning capabilities, is performed by the operator

4. Mission execution, which also requires analysis capabilities, is supervised by the operator.

The operational concept is based on supervised autonomy, which shall involve the operator in all critical phases of the mission.

In order to improve the safety and reliability of the mission, the operator shall be able to manage the system from fully automatic (automatic execution of the timeline) to fully manual (direct commanding of all subsystems, including the sensors). In case of errors and contingencies, or for complex operations beyond routine activities, the operator shall be able to stop the mission, operate the faulty subsystem manually, and resume the mission.

The GCS shall also be user-friendly in order to reduce the training effort and to increase operational safety. The primary operation of the Man-Machine Interface (MMI) consists of selecting an operational mode or context (i.e., Standby, Automatic, Manual) with the mode controller, in which the modes and their transitions are represented as a finite state machine. The accessibility of the commanding tools (i.e., manual controller or vision server), or parts of them, is set according to the operational mode.

As the LUNARES mission involves several robots that may operate in parallel, the mission responsibilities are shared among several operators: (1) the mission director, who supervises and monitors the mission state and delegates activities to the mission operators, (2) the mission operators, who directly command the robots in specific mission phases. The ground control station thus has the capability to involve several operators in a coordinated way for monitoring and commanding a mission.

2.7 Functions

The operational functionality consists of standard system status telemetry (TM) monitoring and remote manual commanding of the subsystems (TC). To interact with the environment, the operator can command the subsystems from the GUIs.

Each robotic and sensor subsystem has a dedicated GUI to configure and command that subsystem. The remote operator can obtain situational awareness of the worksite environment by viewing images from the head cameras of the rover, the scout, and the lander. The mission preparation steps as well as the mission configuration, planning, and execution are performed via dedicated GUIs.

2.8 Architecture

The control station architecture applies the Thin Client Three Tier (TCTT) architecture model (Figure 11). In this architecture, the three levels User Interface, Domain Application, and Data Access are strictly separated (three tiers). According to the selected TCTT architecture model, the implementation of each function is split into a kernel application and a GUI unit.


Figure 11. Basic Architecture of the robotic Ground Control Station

The kernel application runs the main processing related to the function, while the GUI unit enables the GUI-based commanding of the kernel application. The GUI units are not linked to each other but only to their dedicated kernel application. In addition, the kernel and GUI applications are divided between the core applications, which manage the basic functionalities, and the robot applications, which directly depend on the robot. While the core applications are robot-independent or fully configurable, the robot applications are either partially configurable or need a re-design according to the robot's needs.

The communication between the kernel applications, as well as between the kernel and GUI applications, relies on design patterns (i.e., the observer and mediator patterns). In order to achieve the multi-operator capability, the GUIs of the subsystems to be commanded by additional operators are started on a parallel control station and linked to their kernel applications, while the kernel applications remain on the central control station of the mission director.
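As a minimal illustration of the observer pattern in this kernel/GUI setting (class names and the telemetry key are invented for the example, not taken from the GCS code):

```python
# Minimal sketch of the observer pattern between a kernel application and
# its GUI units: the GUI units subscribe to the kernel and are notified
# whenever new telemetry is published.
class KernelApplication:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def publish(self, telemetry):
        # Notify every attached GUI unit of the new telemetry.
        for obs in self._observers:
            obs.update(telemetry)

class GuiUnit:
    def __init__(self):
        self.last = None

    def update(self, telemetry):
        self.last = telemetry  # a real GUI would redraw its widgets here

kernel = KernelApplication()
gui_a, gui_b = GuiUnit(), GuiUnit()   # e.g. director and operator stations
kernel.attach(gui_a)
kernel.attach(gui_b)
kernel.publish({"rover/battery": 87})
print(gui_a.last == gui_b.last)  # True
```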

2.9 Configurability

The control station provides a variety of configuration files for the definition of the environment, mode control and mode transitions, telemetry, and commanding. Robot commands are specified as macros in an Excel sheet. The assembly of binary telemetry streams coming from the system control is defined via another Excel sheet, which holds information about subsystem, name, position, type, length, and monitoring values (warning and error minimum and maximum limits). The mission database (an Access file) contains the path definitions of the manipulator

Figure 12. MMI environment with the mission controller (left) and the camera controllers of the head and the manipulator (right). Error and warning messages are displayed in the upper part of the MMI, status messages at the bottom; the upper right provides commanding tools that can be opened in the central workspace.

and the rover, the intermediate points used in the Level B tasks, as well as the positions of objects (i.e., payloads) on the subsystems (i.e., lander, rover).

2.10 MMI environment

The main MMI consists of fixed areas arranged around the screen (Figure 12): (1) error and warning messages, (2) status messages, (3) commanding tools, (4) monitoring, (5) a workspace in the center where the commanding tools can be opened and used. The commanding tools are selectable according to the current


Figure 13. Ground Control Station in a multi-operator configuration. Above the monitors, the related subsystems are pasted into the photograph.

mode of the control station, so the operator can only select context-specific operations. This eases the utilization of the MMI and increases operational safety. Thus, different workspaces can be configured for the different mission phases like planning, monitoring, or commanding of a subsystem.

In multi-operator mode, the ground control station is started on two computers, allowing parallel monitoring and commanding of the LUNARES application scenario (see Figure 13). The MMI environment is the same for both control stations. The director MMI has full functionality over the robotic system and can distribute rights to the operators of the subsystems. The operator MMI is configured according to the role assigned by the director, so each operator has access only to the commanding tools of the subsystem he is in charge of.

3 Test Environment and Test Equipment

The scenery for the experiments conducted in LUNARES is an artificial lunar crater environment called Space Testbed (STB). The STB simulates the conditions at lunar polar regions. The surface of the STB consists of hard rocks with gray basalt chips as regolith substitute, including stones and small craters. The STB provides slopes between 30° and 45° (Figure 14).

In addition, a lighting system is installed which is able to create very bright areas (14500 lux at 10 m / 12° per spotlight) at the crater rim as well as complete darkness in the interior of the crater, in order to simulate the lighting conditions at the lunar polar regions.

A visitor platform is installed, allowing spectators to watch the experiments and demonstrations. The control center is located below the crater. Thus, the operators have to depend on sensor data from the systems for control and supervision, resulting in a situation similar to real mission scenarios.

The STB is equipped with supervision tools to acquire experiment data for evaluation of the systems under test. To collect, archive, and synchronize the experiment data and to control the cameras, a software called STB-Control is implemented. The test equipment is controlled automatically in order to support the operators. The equipment comprises

• a Motion Tracking System (MTS),
• two Pan-Tilt-Zoom (PTZ) cameras,
• a fixed observation camera,
• and a gantry crane equipped with an observation camera and a tracking camera.

The following paragraphs provide a rough overview of the automatic surveillance system; [7] offers more detailed information.

3.1 Automatic Supervision Using Pan-Tilt-Zoom Cameras and a Motion Tracking System

Besides position determination of the scout in the slope, the MTS is used to automatically focus the PTZ cameras on the robot under test. Therefore, the position of a reflective marker fitted to the robot is tracked by the MTS, and its position is used to align the cameras. The continuous video material from different points of view, including a constant bird's-eye view (Section 3.2), combined with the recorded trajectory, helps to improve locomotion in the slope by discovering malfunctions and improving walking behaviors.

The camera alignment is realized by a camera calibration using the known positions of the reflective markers and their positions in the camera image. Thus, the extrinsic parameters describing the position and orientation of the camera in the coordinate system of the MTS (WCS) can be calculated and used to transfer the positions of the markers into the camera coordinate system (CCS). The following sections describe the algorithm used to detect the markers within the camera image, the CMA-ES optimization [11] of the extrinsic parameters, and the final camera alignment on the markers in the CCS, which has been realized by a spherical coordinate transformation.


(a) CAD model of STB, including landing unit, rover, and legged scout.

(b) Panoramic view from within the crater. The picture is stitched from several single shots.

Figure 14. Space Testbed (STB): CAD model and view from the interior. The main slope is 30° to 35°; the environment additionally provides a slope of 45° (special area in the lower left corner of the CAD picture)


Detection Algorithm

The camera calibration requires a reliable detection of the markers used by the MTS. Therefore, the cameras are equipped with infrared emitters and the infrared cut filters of the cameras are switched off. The detection algorithm uses a run-length encoding algorithm to build horizontal intervals of pixels whose brightness reaches a certain threshold, and a union-find algorithm to connect the intervals into regions [10]. The detection algorithm allows the collection of passpoints, which consist of a marker position in the WCS and its corresponding pixel in the camera image. These passpoints are used during the optimization to rate the different camera positions.
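The described two-stage detection, run-length encoding of bright pixels followed by union-find to merge the intervals into regions, can be sketched as follows (the threshold and test image are illustrative; the real detector additionally decodes the ring codes of the markers):

```python
# Sketch: run-length encode pixels above a brightness threshold into
# horizontal intervals, then merge vertically adjacent, horizontally
# overlapping intervals into regions with union-find.
def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def detect_regions(image, threshold):
    intervals = []  # (row, col_start, col_end)
    for r, row in enumerate(image):
        c = 0
        while c < len(row):
            if row[c] >= threshold:
                start = c
                while c < len(row) and row[c] >= threshold:
                    c += 1
                intervals.append((r, start, c - 1))
            else:
                c += 1
    parent = list(range(len(intervals)))
    for i, (r1, s1, e1) in enumerate(intervals):
        for j, (r2, s2, e2) in enumerate(intervals[:i]):
            # Union intervals on adjacent rows that overlap horizontally.
            if abs(r1 - r2) == 1 and s1 <= e2 and s2 <= e1:
                parent[find(parent, i)] = find(parent, j)
    regions = {}
    for i in range(len(intervals)):
        regions.setdefault(find(parent, i), []).append(intervals[i])
    return list(regions.values())

img = [[0, 200, 210, 0],
       [0, 220, 0,   0],
       [0, 0,   0, 250]]
print(len(detect_regions(img, 128)))  # two separate bright regions
```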

Optimization of the Extrinsic Parameters

The CMA-ES optimization uses a camera calibration function containing the intrinsic parameters of the camera, which have been extracted from the data sheet, and the six extrinsic parameters describing the position and orientation of the camera in the WCS.

Each camera pose is rated by transferring the 3D points of the collected passpoints into the camera image and comparing the calculated image coordinates with the desired ones. Thus, a set of extrinsic parameters is rated by the average deviation over all passpoints in pixel-related units. If rough start parameters are supplied, a deviation of 6.79 to 7.84 pixels is achieved (Figures 15(a) and 15(b)). This fitness value can be further improved by optimizing the algorithm which is responsible for the passpoint collection.
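The rating of a candidate pose can be sketched as an average reprojection error. The simplified pinhole projection below (translation only, made-up intrinsics) stands in for the actual calibration function with the data-sheet intrinsics and full six-parameter pose:

```python
import math

# Sketch of the fitness function: project each passpoint's 3D world
# position with a candidate camera pose and average the pixel deviation
# from the detected image position. Lower is better.
def project(point_w, pose, focal_px=1000.0, cx=640.0, cy=360.0):
    # pose: camera position (tx, ty, tz); rotation omitted for brevity.
    tx, ty, tz = pose
    x, y, z = point_w[0] - tx, point_w[1] - ty, point_w[2] - tz
    return (focal_px * x / z + cx, focal_px * y / z + cy)

def fitness(pose, passpoints):
    """Average pixel deviation over all passpoints."""
    total = 0.0
    for point_w, pixel in passpoints:
        u, v = project(point_w, pose)
        total += math.hypot(u - pixel[0], v - pixel[1])
    return total / len(passpoints)

# CMA-ES (or any black-box optimizer) would minimize this fitness over
# the extrinsic parameters; here we only evaluate one candidate pose.
passpoints = [((0.0, 0.0, 5.0), (640.0, 360.0)),
              ((1.0, 0.0, 5.0), (840.0, 360.0))]
print(fitness((0.0, 0.0, 0.0), passpoints))  # 0.0 for the true pose
```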

Camera Alignment

To focus each camera on a 3D point within its coordinate system, modified spherical coordinates are used. Mapping function (1) restricts the tilt angle ϑ to [−90°, 90°] and the pan angle ϕ to (−180°, 180°], using the right-handed coordinate system shown in Figure 16. Together with the calculated camera pose, this mapping function allows a fast, computationally inexpensive, and accurate alignment on each marker-equipped robotic system within the STB.

ϑ = −arctan( z / √(x² + y²) ),        −90° ≤ ϑ ≤ 90°

ϕ =  arccos( x / √(x² + y²) )    for y ≥ 0
ϕ = −arccos( x / √(x² + y²) )    for y < 0,        −180° < ϕ ≤ 180°        (1)

with x² + y² ≠ 0
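Mapping function (1) translates directly into code; a sketch, applied to the example point (1, 1, 1) from Figure 16:

```python
import math

# Sketch of mapping function (1): convert a 3D point in the camera
# coordinate system into pan/tilt angles (in degrees) for the PTZ camera.
def pan_tilt(x, y, z):
    r = math.hypot(x, y)
    if r == 0.0:
        raise ValueError("x^2 + y^2 must not be zero")
    tilt = -math.degrees(math.atan2(z, r))   # -90 <= tilt <= 90
    pan = math.degrees(math.acos(x / r))     # 0 <= pan <= 180 for y >= 0
    if y < 0:
        pan = -pan                           # -180 < pan <= 180 overall
    return pan, tilt

# Example mapping on the point (1, 1, 1):
pan, tilt = pan_tilt(1.0, 1.0, 1.0)
print(round(pan, 1), round(tilt, 1))  # 45.0 -35.3
```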

3.2 Automatic Robot Tracing Using a Gantry Crane

The gantry crane’s purpose is to autonomously tracethe scout during movements in the slope in order to


(a) Optimization of the extrinsic parameters (b) Fitness optimization from a distance of 1509.54 to 6.79 pixels (4446 evaluations in 0.08 seconds)

Figure 15. Optimization of the extrinsic parameters using CMA-ES and camera coordinate systems

Figure 16. Coordinate system of the PTZ cameras in standing and hanging orientation, direction of rotation, and an example mapping on the point (1, 1, 1)

maintain a constant top view. In LUNARES, the crane is used only for documentation, but in later projects it can be used to provide a simple simulation of lower gravity for the robot by using a counterweight and deflection rollers. Therefore, it is crucial for the gantry crane to reside over the robot at all times, even when the robot is moving. A tracking camera is utilized to capture the infrared light reflected from a retro-reflective marker on the robot and emitted from an IR source mounted next to the camera.

The detection algorithm (Section 3.1) is used to determine the marker position in the tracking camera image. The center position of the marker has to be kept at a constant reference position in the image in order to automatically trace the robot. Since the gantry crane needs absolute desired positions as control input, a transformation from image to world coordinates is necessary.

If the actual position of the robot were used directly as the new desired position of the gantry crane, an error would remain which is proportional to the speed of the robot. This is due to the time the gantry crane needs to reach the desired position while the traced robot is still moving. For this reason, the velocity of the robot has to be taken into account. The velocity is calculated using the actual position and the last known position as well as the time between measurements, which is defined by the control frequency of the gantry crane. A constant speed between two measurements is assumed.
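The velocity-compensated target computation can be sketched as follows; the numeric values are illustrative (the control period matches the crane's 4 Hz limit, and the lookahead its command-to-action delay):

```python
# Sketch: predict where the robot will be when the crane arrives, assuming
# constant speed between measurements, and command the crane there instead
# of commanding the robot's current position.
def desired_crane_position(pos, last_pos, dt, lookahead):
    """pos, last_pos: (x, y) robot positions in world coordinates;
    dt: time between measurements; lookahead: crane reaction time."""
    vx = (pos[0] - last_pos[0]) / dt
    vy = (pos[1] - last_pos[1]) / dt
    # Lead the robot by its speed times the crane's reaction time.
    return (pos[0] + vx * lookahead, pos[1] + vy * lookahead)

# Robot moved 0.05 m in x during a 0.25 s control period (4 Hz):
target = desired_crane_position((1.05, 2.0), (1.0, 2.0), dt=0.25, lookahead=0.5)
print(round(target[0], 3), target[1])  # 1.15 2.0
```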

However, the gantry crane is not designed for realtime interactions. The control frequency is limited to 4 Hz, and a delay between command and action of around 0.5 s can be observed. In order to cope with uncertainties and to realize smooth tracing, a particle filter similar to [21] is used.

The algorithm uses a linear perceptual model, which improves the dynamic behavior compared to a Gaussian weighting. Afterwards, a resampling step filters out poor predictions and draws new ones. The motion model moves each particle using its position and the actual velocity with added Gaussian noise. The new desired position results from the average of all particle positions.
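A minimal sketch of such a particle filter with a linear perceptual model; all parameter values, the noise level, and the weighting cutoff are illustrative assumptions:

```python
import random, math

# Sketch of the tracing particle filter: particles predict the robot's 2D
# position; weights fall off linearly with distance to the measurement
# (instead of a Gaussian), then resampling keeps the good predictions.
def step(particles, velocity, measurement, dt, noise=0.02, max_dist=0.5):
    # Motion model: move each particle by the estimated velocity plus noise.
    moved = [(x + velocity[0] * dt + random.gauss(0, noise),
              y + velocity[1] * dt + random.gauss(0, noise))
             for x, y in particles]
    # Linear perceptual model: weight = 1 - distance/max_dist, clamped at 0.
    weights = [max(0.0, 1.0 - math.dist(p, measurement) / max_dist)
               for p in moved]
    total = sum(weights)
    if total == 0.0:
        return moved, measurement  # lost track: fall back to the measurement
    # Resampling: draw particles proportionally to their weights.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    # New desired crane position: average of all particle positions.
    mean = (sum(x for x, _ in resampled) / len(resampled),
            sum(y for _, y in resampled) / len(resampled))
    return resampled, mean

random.seed(0)
particles = [(0.0, 0.0)] * 100
particles, estimate = step(particles, velocity=(0.2, 0.0),
                           measurement=(0.05, 0.0), dt=0.25)
print(estimate)  # close to the measured position
```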

4 Sample Retrieval from a Permanently Shadowed Lunar Crater

In this section, a reference mission for the LUNARES scenario is presented; then the steps of the LUNARES demonstration mission are described. Additionally, three key elements of the mission are described in detail: Section 4.3 presents the exchange of the rover's payloads and the handling of the sample container by the lander's manipulator arm. Section 4.4 demonstrates the autonomous docking of rover to lander and scout to rover. Finally, Section 4.5 shows how the sample pick-up at the crater bottom is accomplished.

4.1 Reference Mission

The choice of a lunar mission, which is used as reference for the system of LUNARES and its successors, is established with the help of the following selection criteria (ordered by importance):

1. Maximum scientific payload possible
2. Realisation of the mission in a realistic time frame
3. Visionary character, specifically the inventiveness level
4. Public outreach value

Some robotic missions which are part of long-term planning (e.g., those necessary for building up a lunar infrastructure for a permanent lunar base) are seen as unrealistic in the near future and, from the standpoint of science, as unattractive. Because of this, missions which are oriented around "hot" scientific topics are given priority. The mission objectives of this narrowed choice of missions are:

1. Deployment of a Geophysical Environment Package (GEP) with seismometers and a thermal flux sensor
2. In-situ analysis of soil samples or sample return to a central measurement station (e.g., geochronology, interstellar particles in the lunar regolith)
3. Sample-return mission for samples from the landing zone (measurement station on the lander or with a sample-return module)
4. Radio telescope, more specifically an antenna array for radio science
5. Measurements in the Earth's magnetic field tail
6. Astro-habitat with biological experiments in a radiation environment

From this list, mission 2 was chosen. In addition, the possibility of a sample-return mission was investigated. As the destination for the mission, the rim of the Shackleton Crater has been chosen (Figure 17).

The mission profile foresees that a wheeled rover drives to the rim and deploys a walking robot. After this, the walking robot descends into the crater and takes a sample in the shadowed area, which it carries back to the wheeled rover. After this, the walking

Figure 17. Illumination characteristics at the Shackleton Crater rim; picture taken from [4].

robot is merged with the wheeled rover again, and the combination drives back to the lander. After arrival, the lander's robotic arm takes the soil sample and places it in the central analysis unit or in the sample-return vehicle.

4.2 LUNARES Demonstration Mission

Figure 18 illustrates the demonstration scenario in subsequent steps. The following paragraphs give a more detailed description of each step, and thus provide a detailed summary of the complete LUNARES demonstration scenario.

Starting Position

The landing procedure on the surface of the Moon is not part of the LUNARES demonstration. Also, due to space limitations, the rover egress is not addressed. Instead, the demonstration scenario starts after the rover has been deployed on the lunar surface. For the demonstration, the rover is within view distance of the landing unit, simulating the arrival of the rover from another mission part in order to be reconfigured for the next part. Figure 18(a) shows the start configuration.

Rover Docks to Landing Unit

The rover has to be equipped with payloads from the lander. For that purpose, the docking procedure between lander and rover has to be initiated. Initially, the lander extracts the rover's relative position and generates a trajectory for the rover. Then, the lander leads the rover into the workspace of the lander's manipulator to allow for payload exchange; see Section 4.4 for more details.

Payload Exchange on the Rover

After reaching the workspace of the lander's manipulator, the rover is equipped with a payload (P/L) (Figure 18(b)). The P/L is picked from the lander and placed into a designated payload bay of the rover (Section 4.3). The payloads used in the demonstration scenario are mock-ups representing scientific instruments of a real mission. By equipping the rover with different payloads, it is possible to configure the system for the mission at hand.

Movement of Rover and Scout to the Crater Rim

When the rover is equipped with a new P/L, the rover and the docked scout drive towards the crater rim. In principle, the rover is able to negotiate moderately rough terrain and can travel longer distances in an energy-efficient way compared with the legged scout. However, due to space constraints in the LUNARES mission, the distance to be covered by the rover is limited to several meters. Figure 18(c) illustrates rover and scout collectively driving towards the crater's rim.

Undocking of Scout and Rover

Once the unit consisting of rover and scout has arrived at the crater rim, the scout undocks from the rover (Figure 18(d)). The docking adapter allows the scout's deployment onto the surface. The process of detaching the scout from the rover is described in Section 4.4.

Scout Descends into Crater

After the undocking, the scout has to overcome the crater rim (Figure 18(e)) and enter the dark interior of the crater. To arrive at the crater bottom, the scout has to safely climb down the crater slope, which is covered with small rocks and small impact craters. In the LUNARES mission, the movements of the scout in the crater slope are remotely controlled by an operator using the camera mounted on top of the scout.

(a) Autonomous docking of rover to lander

(b) Equipment of rover with new payload

(c) Rover and scout on their way to the crater's rim

(d) Deployment of scout

(e) Scout is about to climb into the crater

(f) Scout arrives at crater bottom

(g) Sample pick-up in the crater

(h) Scout climbs up the crater slope

(i) Autonomous docking of scout and rover

(j) Sample container is picked up by the lander's manipulator arm

Figure 18. Scenes from the LUNARES sample return demonstration mission


Sample Collection at Crater Bottom

After arriving at the crater bottom (Figure 18(f)), a scientific operator chooses a geological sample using the video image provided by the scout. The scout positions itself in front of the selected sample using a visual servoing approach (Section 4.5). When the coarse positioning is done, the scout executes a fine detection of the sample's coordinates by making use of its laser scanner (Section 4.5). When the coordinates of the sample are determined, a leg is used as manipulator to place the sample into a sample container on the scout's back (Figure 18(g)).

Scout Climbs Back up the Crater

When the sample has been successfully collected and stored in the sample container, the scout starts to climb the crater slope back towards the rover. The scout climbs freely in the crater slope (Figure 18(h)), i.e., no tethering system is applied. However, it remains remotely controlled.

Cooperative Docking of Rover and Scout

After arrival at the rover, the scout turns its back to the rover (Figure 18(i)) to prepare for the autonomous docking procedure described in Section 4.4. For this procedure, the rover detects the four markers on the scout and commands the scout into a predefined docking pose. When the scout is in the correct pose, the hook of the docking adapter is lowered so that the scout is able to hang itself onto the hook.

Return of Rover and Scout to Landing Unit

Similar to the initial procedure, rover and scout collectively drive back to the landing unit using the docking procedure between rover and lander. The docking process ends when the rover, and thus the scout's sample container, is within the workspace of the lander's manipulator arm.

Transfer of Sample Container to Landing Unit

The last step of the LUNARES demonstration mission consists of unloading the sample container from the docked scout (Figure 18(j)) and transferring the sample onto the landing unit. The process is performed autonomously; a visual servoing approach allows determining the exact position of the sample container with respect to the manipulator (see Section 4.3).

4.3 Automatic Payload Exchange

One of the main goals of the LUNARES system and the corresponding reference mission was to demonstrate the cooperation between a team of heterogeneous robotic subsystems, all working together in order to achieve a common goal: collecting a sample from the interior of a lunar crater and returning the sample to the lander for further analysis.

For this purpose, the robotic subsystems had to be reconfigured by the system. One example is the exchange of payload dummies. The most important step concerning the exchange of payloads is the return of the sample container to the lander.

Manipulation is based on a visual servoing approach [18], avoiding the need for a thorough calibration of the camera systems. Instead of a calibration, however, a teaching phase is necessary.

The LUNARES visual servoing approach detects visual markers in monocular camera images. The markers are black filled circles on a white background with a binary ring code around the circle. An adaptive binarisation technique followed by a blob analysis generates a set of marker hypotheses which can be identified very robustly by their ring codes. Every image contains a set of n markers {(m_x^1, m_y^1)^T, ..., (m_x^n, m_y^n)^T}. For any static scene, the locations of these markers only depend on the configuration of the manipulator, as long as the camera is mounted on the end effector. Thus, the marker locations can be regarded as the result of the perceptual kinematic map π (PKM):

π : R^6 → R^2n,    c ↦ (m_x^1, m_y^1, ..., m_x^n, m_y^n)^T        (2)

where

c = (x, y, z, α, β, γ)
x, y, z: robot coordinates (WCS)
α, β, γ: rotations around the x-, y-, z-axes
m_x^n, m_y^n: x/y image coordinates of the n-th feature

The grasping can be solved as a fixed movement starting from a well-known reference position c_0 ∈ R^6. Therefore, the task of grasping reduces to the recovery of the reference position. By linearizing the PKM around that reference configuration c_0, the following direction ∆(c) in the configuration space is obtained. Moving the end effector in that direction minimizes the differences between the current marker locations and the marker locations of the reference image.

∆(c) = ( Dπ(c_0)^T Dπ(c_0) )^−1 Dπ(c_0)^T · ( π(c_0) − π(c) )        (3)

The Jacobian Dπ(c_0) can be determined by applying test movements along all six directions. Figure 19 depicts the result of the visual servoing approach for a payload with four markers attached.
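Equation (3), together with the finite-difference Jacobian obtained from test movements, can be sketched as follows; a toy linear map stands in for the real marker measurements, and all names are illustrative:

```python
import numpy as np

# Sketch of the visual servoing update, Eq. (3): estimate the Jacobian of
# the perceptual kinematic map pi by test movements along all six axes,
# then move in the least-squares direction that aligns the current marker
# locations with those of the reference image.
A = np.random.default_rng(0).normal(size=(8, 6))  # 4 markers -> 8 coords

def pkm(c):
    return A @ c  # toy linear stand-in for the real pi(c)

def jacobian(pi, c0, eps=1e-4):
    # Finite differences: one positive/negative test movement per axis.
    cols = []
    for i in range(6):
        d = np.zeros(6)
        d[i] = eps
        cols.append((pi(c0 + d) - pi(c0 - d)) / (2 * eps))
    return np.stack(cols, axis=1)

def delta(pi, c0, c):
    D = jacobian(pi, c0)
    # Eq. (3): least-squares pseudo-inverse applied to the feature error.
    return np.linalg.solve(D.T @ D, D.T @ (pi(c0) - pi(c)))

c0 = np.zeros(6)                   # taught reference configuration
c = np.array([0.1, -0.05, 0.02, 0.0, 0.01, 0.0])
c_next = c + delta(pkm, c0, c)     # a proportional controller with gain 1
print(np.allclose(c_next, c0))     # for a linear PKM, one step converges
```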


Figure 19. Overlay of the reference image used for the teaching process of the Jacobian and the camera view after successful control of the manipulator. The visual servoing was able to perfectly align the marker locations, whereas the differences in the background clearly show the different situations.

Figure 20. Positions of the markers in the image plane, tracked during the visual servoing process.

Figure 20 shows the movements of the visual markers during the visual servoing process. The process runs on standard PC hardware (Intel Core 2 Duo) at 10 Hz and requires approximately 10 s for convergence. The accuracy is high enough to perform a "blind grasp" along a predefined trajectory afterwards.

4.4 Autonomous Docking Procedures

In the LUNARES reference mission, the global mission goal is achieved by splitting the task into several subtasks which are solved by specialized robotic subsystems. For instance, the required high degree of mobility is achieved by splitting the requirement between two different mobility systems: the rover for larger distances in moderate terrain, and the scout for shorter distances in more difficult terrain.

Figure 21. Typical situation for the docking between the scout and the rover.

This splitting, however, requires new capabilities of the robotic subsystems. In this section, two autonomous and cooperative behaviors of the robotic subsystems are discussed. The first behavior is the docking between the scout and the rover, which is required after the scout has returned from its crater exploration. The second is the autonomous docking of the rover to the lander in order to reach a working position from which the lander's manipulator can reach the sample canister.

Docking of Scout to Rover

The scout and the rover have not been specifically designed for a docking procedure. However, the LUNARES mission required the scout to dock to the rover. Different possibilities for the docking procedure have been evaluated, e.g. the scout walking onto the rover.

Here we present the final choice for the docking procedure, which requires the scout to approach the rover by stepping backward. This eliminates the possibility of using the scout's visual sensors for the docking procedure. Instead, the vision system of the rover acquires images of the docking scout and generates correction manoeuvres for the scout to reach a certain goal position. From there, a mechanical hook is able to lift the scout onto the back of the rover.

The autonomous docking procedure has to perform within given constraints. These constraints depend on the mechanical docking mechanism, i.e., the docking adapter. The adapter's capability to deal with position errors and to compensate for such errors defines the precision with which the scout has to reach the target position. In order to increase the robustness of the given docking scenario, we identified three critical elements: (1) the predefined actions of the docking procedure, (2) the design of the docking adapter and its ability to compensate for alignment errors, (3) a fallback safety range for the scout's pose correction.

While predefining the actions, the docking procedure has to account for errors in the x-alignment as well as in the z-alignment within some limitation², i.e., the final procedure requires security distances for the mechanical docking so that the risk of a collision between the docking adapter and the scout's docking handle is minimized. Eventually, and for a worst-case scenario, the scout has to maintain a fallback range for manual pose correction, leading to the semi-autonomous docking approach.

The overall docking procedure comprises multiple steps. At the start of the docking procedure, the rover's camera must capture the scout and the visual markers attached to the scout. Then, the rover takes control of the scout's movements by applying a visual servoing approach. The control target is to reach a certain target position with the scout such that the positions of the visual markers are identical to the marker positions in a reference image taken during a teaching phase of the visual servoing approach. The visual servoing approach is exactly the same as described in Section 4.3.
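The servoing goal, driving the image-space error between the currently observed marker positions and those of the taught reference image to zero, can be sketched as follows (marker coordinates, gain, and threshold are illustrative, and the names are ours, not from the LUNARES software):

```python
import numpy as np

def servo_step(markers_now, markers_ref, J_pinv, gain=0.5):
    """One visual-servoing correction: stack the per-marker image
    errors w.r.t. the taught reference image and map them through
    the (pseudo-inverse) image Jacobian to a pose correction."""
    e = (np.asarray(markers_ref, float)
         - np.asarray(markers_now, float)).ravel()
    return gain * J_pinv @ e, np.linalg.norm(e)

# Taught reference marker positions (pixels) and a current observation
ref = [(320, 240), (400, 250), (350, 300)]
now = [(312, 236), (393, 247), (343, 297)]
J_pinv = np.eye(6)  # placeholder for the trained Jacobian's pseudo-inverse
correction, err = servo_step(now, ref, J_pinv)
done = err < 2.0  # pixels; the goal position is reached when this holds
```

The loop repeats (capture image, detect markers, apply the correction) until the residual error drops below the threshold.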

A training phase is required to generate the Jacobian matrix of the PKM, which linearises the mapping function around the target configuration. For that purpose, several test movements along six different degrees of freedom in positive and negative directions have been performed to set up the Jacobian of the PKM. As a controller, we utilized a proportional controller.
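The training phase, with its small test motions in the positive and negative direction of each degree of freedom, amounts to a central-difference estimate of the Jacobian, which the proportional controller then inverts. A sketch under that reading (function names and magnitudes are assumptions, not the project code):

```python
import numpy as np

def estimate_jacobian(observe, move, dof=6, delta=0.01):
    """Central-difference Jacobian: perturb each degree of freedom in
    both directions and record the change of the marker feature vector.
    `observe` returns the feature vector, `move` applies a pose step."""
    cols = []
    for i in range(dof):
        step = np.zeros(dof)
        step[i] = delta
        move(step)
        f_plus = observe()
        move(-2 * step)          # go to the negative test position
        f_minus = observe()
        move(step)               # return to the starting configuration
        cols.append((f_plus - f_minus) / (2 * delta))
    return np.column_stack(cols)

def p_control(J, error, gain=0.5):
    """Proportional step: image-space error -> pose correction."""
    return gain * np.linalg.pinv(J) @ error
```

For a mapping that is close to linear around the target configuration, the estimated Jacobian is accurate, which is why teaching near the docking pose suffices.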

Once the position has been successfully reached, a number of predefined actions will be executed³:

1. the scout shifts its body forward (approx. 2 cm away from the rover) to increase the clearance between its handle and a lowered hook,

2. the docking adapter is moved down, including a clearance distance (approx. 3 cm),

3. the scout shifts its body backward (approx. 5.5 cm) to guarantee that the scout's handle has contact with the lowered docking adapter's hook,

4. the docking adapter lifts the scout (12°) just so that the scout's legs no longer touch the ground,

5. the scout folds its legs into the docking position so it can be loaded onto the rover,

6. the docking adapter lifts the scout into its final docking position.

² The coordinate system is depicted in Figure 9.
³ Since the scout has play in its joints, the distances of the pose changes are only approximate.

Figure 22. The scout as seen from the rover camera. The coded ring markers are clearly visible in the image and are extracted by the rover's image processing system.

The docking procedure will be performed with disabled reflexes to allow for the predefined posture setting.

Docking of Rover to Lander

The second autonomous docking manoeuvre concerns the placement of the rover in front of the lander in order to achieve a sufficiently precise starting point for the manipulation and the sensor-based grasping of the payloads or the sample canister. For that purpose, a certain accuracy of the rover position in front of the lander has to be reached.

The docking between rover and lander is solved by utilization of the lander's sensor system. Here, the lander acquires a range image from the 3D laser scanner, providing two types of information: (1) 3D information for every pixel, and (2) the intensity of the reflected laser pulse. Especially the latter enables the detection of special retro-reflective targets in the scene. Four of these visual markers have been placed on the rover and can easily be detected by the rover-to-lander docking skill. Using the 3D positions of the detected markers, a graph matching method [6] has been implemented to ensure an unambiguous identification of the individual markers (Figure 23). Finally, a pose estimate of the rover with respect to the lander system is obtained.
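The marker identification can be illustrated with a brute-force variant of such a graph matching: the pairwise distances between the four retro-reflective targets are known from the rover model, so the detected 3D blobs are assigned to model markers by the permutation whose distance graph matches best (a simplified stand-in for the method of [6]; the coordinates are made up):

```python
import numpy as np
from itertools import permutations

def match_markers(detected, model):
    """Assign detected 3D points to model markers by comparing the
    pairwise-distance matrices over all permutations (fine for 4)."""
    def dists(pts):
        p = np.asarray(pts, float)
        return np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    d_model = dists(model)
    best, best_cost = None, np.inf
    for perm in permutations(range(len(detected))):
        cost = np.abs(dists([detected[i] for i in perm]) - d_model).sum()
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[k] = index of the detection matching model marker k

# Hypothetical, deliberately asymmetric marker layout (metres)
model = [(0.0, 0.0, 0.0), (0.7, 0.0, 0.0), (0.5, 0.4, 0.0), (0.1, 0.3, 0.0)]
detected = [model[2], model[0], model[3], model[1]]  # shuffled observation
assignment = match_markers(detected, model)
```

With the markers identified, the rover pose follows from a rigid-body fit of the matched point pairs. Real detections are noisy, so the best permutation's cost would be small but nonzero; an asymmetric marker layout is what makes the assignment unambiguous.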

Figure 23. Laser scan image with detected marker configuration for graph matching

Using this pose information, a spline trajectory is planned to reach the desired target position from the current pose of the rover. However, due to drift errors in the rover's odometry, the trajectory is executed only iteratively: in each step, the rover drives along only half of the pre-planned trajectory. After that distance, a new measurement with the 3D laser scanner is taken and a new trajectory is planned.

This process is repeated until the distance between the target position and the current position falls below a certain threshold. Once the position has been reached, the orientation is corrected in a similar manner. After 3-4 iterations, the rover reaches the target with a precision of about 2 cm, which is sufficient for the further reconfiguration steps and the unloading of the sample canister.
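The measure, replan, drive-half cycle can be written as a simple loop (the scanner and drive interfaces are placeholders, and the 2 cm threshold mirrors the accuracy reported above):

```python
import numpy as np

def iterative_dock(measure_pose, drive_towards, target,
                   threshold=0.02, max_iter=10):
    """Iteratively approach `target` (x, y in metres): after every
    fresh pose measurement, drive only half of the remaining path so
    that odometry drift over long trajectories stays bounded."""
    for i in range(max_iter):
        pose = np.asarray(measure_pose(), float)  # new laser-scan pose
        error = np.asarray(target, float) - pose
        if np.linalg.norm(error) < threshold:
            return i                              # position reached
        drive_towards(pose + 0.5 * error)         # half the planned path
    return max_iter
```

Driving only half the remaining path each time trades speed for robustness: any drift accumulated during one leg is observed and corrected at the next scan. The final orientation can then be corrected by the same scheme.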

4.5 Sample Pick Up by a Legged Scout System

Before collecting samples, the robot has to position itself in front of a sample of interest. This sample is selected by a human operator using a graphical user interface. Afterwards, the robot starts an automatic approach towards the sample until the scout is close enough to grab it.

The approach is vision based and utilizes a single camera with an analog transmitter and receiver. Due to limited processing power on the scout robot itself, the computer vision algorithms are executed on an external processing system; a flight system would have to embed such processing into the deployed robots themselves. Wireless transmission is used for the camera images. The transmission can easily be affected by other wireless systems, resulting in image distortions. To cope with these distortions, a particle filter is used to track the sample's position in the camera image.

The sample detection process can be separatedinto three steps:

1. Detect the object

2. Update the particle filter with the position in theimage (if detected)

3. Control the scout's movements

Step one is currently accomplished by a threshold on the image brightness; it is assumed that samples of interest have a different color than the "regular" stones in the scenario. After the sample of interest has been selected, its characteristics are saved and used to find the same object in subsequent images. After the threshold has been applied to the camera image, fitting objects are searched for, beginning at the expected position of the object. The starting point is extracted from the particle filter, which includes a movement model of the object in the camera images.

Only if an object fitting the characteristics of the selected sample is found are the particle filter and its movement model updated (step two).

The final step is the control of the scout. Depending on the object's distance to the target area (the area where the gripper can reach the sample), values for the forward speed and the turn rate are set appropriately.
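Steps 1-3 can be condensed into a schematic tracking loop: update a small particle filter with a constant-velocity movement model only when a matching detection exists, and derive forward/turn commands from its estimate (image size, noise levels, and the target box are assumed values; this is not the project code):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200
# Particle state: (x, y, vx, vy) in pixel coordinates of a 640x480 image
particles = np.zeros((N, 4))
particles[:, :2] = rng.uniform(0, [640, 480], size=(N, 2))

def predict():
    """Constant-velocity motion model with process noise."""
    particles[:, :2] += particles[:, 2:]
    particles[:, :] += rng.normal(0, [2.0, 2.0, 0.5, 0.5], size=(N, 4))

def update(detection):
    """Resample particles around a confirmed detection (x, y).
    Called only when the brightness threshold yields a match."""
    w = np.exp(-0.5 * np.sum((particles[:, :2] - detection) ** 2, axis=1)
               / 15.0 ** 2)
    w /= w.sum()
    particles[:] = particles[rng.choice(N, size=N, p=w)]

def estimate():
    return particles[:, :2].mean(axis=0)  # expected sample position

def control(est, target_box=((280, 360), (300, 420))):
    """Turn towards the sample; stop once it sits in the target box
    (x-range, y-range in pixels)."""
    (x0, x1), (y0, y1) = target_box
    turn = np.clip((est[0] - 320) / 320, -1, 1)
    forward = 0.0 if (x0 <= est[0] <= x1 and y0 <= est[1] <= y1) else 0.3
    return forward, turn
```

Skipping `update()` on frames where detection fails is what makes the tracker robust to transmission dropouts: the motion model keeps predicting until the sample is found again.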

Figure 24 shows a labeled camera image of the approach: extracted objects are highlighted in the image (green areas resulting from the brightness threshold), the points are the individual particles of the particle filter, the cross marks the expected position of the sample as extracted from the particle filter, and the box marks the target area; the robot stops moving when the sample resides in that box for some time. The only light comes from infrared diodes attached to the camera on the scout's back (the bright spot visible in the image).

When the robot reaches its final position, it notifies the Ground Control Station, which can then initiate the actual collecting procedure. After finishing the approach, the location of the target sample has to be determined with high precision to allow for the sample pick up. It can be assumed that the manipulation takes place in a planar environment. Hence, the target sample can easily be located after generating a height map of the environment, given a certain region of interest (ROI). One of the factors influencing the ROI is the accuracy of the approach. Experiments showed that manipulation works best for sample distances of 22 cm in a straight line from the scout's right thorax joint ('shoulder' joint). This knowledge allowed the definition and extraction of the ROI. Currently, a target area of 121 cm² (11 cm × 11 cm) applies.

The scout uses the mounted laser scanner, driven by a servo motor, to acquire a distance image of the environment, which is subsequently transformed into a height map. The essential procedure to extract a sample's position consists of the following steps:


Figure 24. Labeled camera image of the approach action (post-processed in brightness and contrast). Around the green highlighted object to be approached, the particles of the filter are displayed as white dots. The approach ends when the object is completely inside the goal region (white rectangle).

1. Extraction of a laser scan of the direct environment within a horizontal range of ±30°

2. Transformation of the scan data from the world coordinate system into the robot coordinate system

3. Generation of the height map in the robot coordinate system

4. Extraction of the region of interest, defining the allowed manipulation area of the scout

5. Extraction of the local minimum within the ROI

6. Extraction of the region around the local minimum to extract the likely target centre

The height map is then transformed into a grey-scale image to allow further processing steps such as median filtering (Figure 25). During the sample pick-up process, the operator can access these images and validate the extracted final target position. If no object can be extracted, the operator receives an error message. Additionally, if the operator identifies a false positive, he can initiate a second scan.
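Steps 4-6 of the procedure above can be sketched on a grid version of the height map (the grid resolution, the convention that the sample appears as the lowest value of the scanner-derived map, and all numbers are assumptions):

```python
import numpy as np

def extract_sample_centre(height_map, roi_origin, roi_size=11):
    """Locate the likely sample centre inside an 11 x 11 cell region
    of interest (1 cell = 1 cm, matching the 11 cm x 11 cm ROI)."""
    r0, c0 = roi_origin
    roi = height_map[r0:r0 + roi_size, c0:c0 + roi_size]
    # Step 5: local minimum within the ROI
    r_min, c_min = np.unravel_index(np.argmin(roi), roi.shape)
    # Step 6: region around the minimum (here: within 5 mm of it)
    rows, cols = np.nonzero(roi <= roi[r_min, c_min] + 0.005)
    return (r0 + rows.mean(), c0 + cols.mean())  # grid coordinates
```

Taking the centroid of the region around the minimum, rather than the single minimum cell, makes the extracted centre less sensitive to scanner noise.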

5 Potentials for a Real Lunar Mission

The realisation of a robotic Moon mission necessitates the timely development and qualification of a series of technologies which do not yet possess the required maturity. More precisely, for the start of Phase C (actual mission planning), a technology readiness level (TRL) of 5-6 is required, representing a technology demonstrator or prototype that has already been tested in a representative environment. The identification of technologies which do not have a sufficient TRL is necessary to minimize the technical and programmatic risks associated with a system development.

Figure 25. Resulting grey-scale image. The centre of the detected sample is marked by a diamond-shaped marker.

In the LUNARES study, critical technologies and requirements for the realisation of a multiple-configuration robotic concept were investigated, and a stepwise development plan was elaborated.

For the LUNARES project, three mission classes have been investigated:

1. Single Moon exploratory missions, e.g. for spectral analysis of surface samples from a Moon crater, or even the return of these samples

2. The construction of a scientific Moon infrastructure

3. Cargo transportation in support of human missions

For the handling of surface samples by the walking robot and the lander, an appropriate gas-tight, movable sample container has to be developed. This ensures that no volatile gases are lost during transport from the crater bottom to the landing unit.

For the lander, the rover, and the walking robot, a higher-level system control of the on-board computer via an appropriate ground station for automation and robotics has to be developed.

For technology verification, the DLR program On-Orbit-Verifikation (OOV)⁴ is very well suited. For this program, the German Small Satellites Platform TET⁵ can be used. On the ESA side, there is the Technology Research Programme (TRP), which specifically addresses early technology development, and the General Support Technology Programme (GSTP), which targets the maturation of more developed technologies into market-ready products. The following demonstration methods for components, subsystems, or the complete LUNARES follow-on system are applicable:

• Software-based simulations
• Earth-based component tests
• Piggy-back technology flight opportunities for component in-orbit tests
• Test campaigns for system prototypes on Earth

For the qualification and testing philosophy, the classical qualification with structural and thermal models, an engineering model, a qualification model (QM) as well as a flight model (FM) is recommended.

⁴ http://www.dlr.de/rd/en/desktopdefault.aspx/tabid-2265/3376_read-9781/
⁵ http://www.dlr.de/rd/en/desktopdefault.aspx/tabid-2274/3396_read-5085/

6 Conclusion and Outlook

The project LUNARES provides a terrestrial demonstrator to evaluate the feasibility of a heterogeneous robotic team for lunar crater exploration. Existing robots not specifically designed for the chosen mission scenario have been employed.

The systems came from different project partners. Despite the differences in hardware as well as in the control approaches of the systems, an overall system integrating the different subsystems was successfully implemented in the presented project. The interplay of the various subsystems was demonstrated successfully in numerous demonstrations.

For the purpose of experiments and demonstrations, an artificial lunar crater environment comprising realistic slopes and illumination has been established in the project. The testbed is equipped with various surveillance sensors such as video cameras and a motion tracking system. Automated experiment documentation has been implemented in the testbed.

A docking procedure for a walking machine and a wheeled rover was developed. It is based on visual information from the rover's camera system, which is used to control the legged scout. Furthermore, a docking procedure allowing the precise placement of a rover in front of a landing unit was developed using the lander's sensor system. For exchanging payloads and sample containers between rover, scout, and landing unit, visual servoing methods were implemented.

During the project, important experience with the locomotion of walking machines in crater environments was gained, and the locomotion principle was significantly improved. With appropriate control mechanisms, even the Scorpion robot, not explicitly designed for this terrain, was able to climb in the artificial crater with slopes of up to 35°. The locomotion was safe and reliable: even with a leg failure, the robot could negotiate the slope with the remaining seven legs.

Overall, LUNARES successfully demonstrated the feasibility of the chosen approach. In the project RIMRES, we want to further pursue the idea of heterogeneous robotic systems. There, the mobile systems will be newly developed in a co-design process, which allows for a closer coupling between rover and scout. A standardized mechatronic interface and a connection providing interfaces for the exchange of data and energy will be developed. An additional focus will be the modularity of the system: several payload modules, each equipped with the mechatronic interface, will be developed.

References

[1] James R. Arnold. Ice in the lunar polar regions. Journal of Geophysical Research (JGR), 84(B10):5659–5668, September 1979.

[2] P. Bartlett, D. Wettergreen, and W. L. Whittaker. Design of the Scarab rover for mobility and drilling in the lunar cold traps. In International Symposium on Artificial Intelligence, Robotics and Automation in Space, February 2008.

[3] Andrew J. Coates. Limited by cost: The case against humans in the scientific exploration of space. Earth, Moon, and Planets, 87(3):213–219, 1999.

[4] Doug Cooke. Exploration lunar architecture. In Proceedings of the NASA Advisory Council Workshop on Science Associated with the Lunar Exploration Architecture, February 27–March 2, 2007.

[5] Florian Cordes, Steffen Planthaber, Ingo Ahrns, Timo Birnschein, Sebastian Bartsch, and Frank Kirchner. Cooperating reconfigurable robots for autonomous planetary sample return missions. In ASME/IFToMM International Conference on Reconfigurable Mechanisms and Robots (ReMAR-2009), London, United Kingdom, June 22–24, 2009.

[6] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. McGraw-Hill Higher Education, 2001.

[7] Alexander Dettmann, Stefan Haase, and Frank Kirchner. Automatic robot supervision within a lunar crater environment. In Joint 41st International Symposium on Robotics and 6th German Conference on Robotics (ISR/Robotik 2010), June 7–9, Munich, Germany, 2010.

[8] W. C. Feldman, S. Maurice, A. B. Binder, B. L. Barraclough, R. C. Elphic, and D. J. Lawrence. Fluxes of fast and epithermal neutrons from Lunar Prospector: Evidence for water ice at the lunar poles. Science, 281:1496–1500, September 1998.


[9] L. Ferrarini and E. Carpanzano. Reference models for the supervision and control of advanced industrial manipulators. In American Control Conference, volume 4, pages 2440–2444, 1999.

[10] C. Fiorio and J. Gustedt. Two linear time union-find strategies for image processing. Theoretical Computer Science, 154(2):165–181, 1996.

[11] N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159–195, 2001.

[12] T. Huntsberger, A. Stroupe, H. Aghazarian, M. Garrett, P. Younse, and M. Powell. TRESSA: Teamed robots for exploration and science on steep areas: Field reports. Journal of Field Robotics, 24(11–12):1015–1031, 2007.

[13] Bernhard Klaassen, Ralf Linnemann, Dirk Spenneberg, and Frank Kirchner. Biomimetic walking robot Scorpion: Control and modeling. Robotics and Autonomous Systems, 2002.

[14] Ralf Linnemann, Bernhard Klaassen, and Frank Kirchner. Walking robot Scorpion - experiences with a full parametric model. In E.J.H. Kerckhoffs, editor, 15th European Simulation Multiconference: Modelling and Simulation, pages 1012–1018, Prague, Czech Republic, June 6–9, 2001. International Society for Computer Simulation (SCS).

[15] NASA.gov – Mini-RF. Exploring the lunar poles,March 2010.

[16] NASA.gov – Mission Update. LCROSS impact data indicates water on Moon. NASA homepage, November 2009.

[17] S. Nozette, C. L. Lichtenberg, P. Spudis, R. Bonner, W. Ort, E. Malaret, M. Robinson, and E. M. Shoemaker. The Clementine bistatic radar experiment. Science, 274:1495–1498, November 1996.

[18] A. C. Sanderson and L. E. Weiss. Adaptive visualservo control of robots. Robot Vision, pages 107–116,1983.

[19] J. Schwendner, F. Grimminger, S. Bartsch, T. Kaupisch, M. Yuksel, A. Bresser, J. Bessekon Akpo, A. Dieterle, S. Schmidt, M. Seydel, and F. Kirchner. CESAR: A lunar crater exploration and sample return robot. In Intelligent Robots and Systems (IROS 2009), St. Louis, October 2009.

[20] D. Spenneberg, M. Albrecht, and T. Backhaus. M.O.N.S.T.E.R.: A new behavior-based microkernel for mobile robots. In ECMR 2005, 2005.

[21] S. Thrun, D. Fox, W. Burgard, and F. Dellaert. Robust Monte Carlo localization for mobile robots. Artificial Intelligence, 128(1–2):99–141, 2001.

[22] Gianfranco Visentin. Autonomy in ESA PlanetaryRobotics Missions.

[23] Kenneth Watson, Bruce Murray, and Harrison Brown. On the possible presence of ice on the moon. Journal of Geophysical Research (JGR), 66(5):1588–1600, 1961.

[24] David Wettergreen, Dominic Jonak, David Kohanbash, Scott Jared Moreland, Spencer Spiker, James Teza, and William (Red) L. Whittaker. Design and experimentation of a rover concept for lunar crater resource survey. In 47th AIAA Aerospace Sciences Meeting Including The New Horizons Forum and Aerospace Exposition, January 2009.