TOWARD SCALABLE INTUITIVE TELEOPERATION OF ROBOTS FOR SPACE DEPLOYMENT WITH THE METERON SUPVIS JUSTIN EXPERIMENT

Neal Y. Lii1, Daniel Leidner1, Peter Birkenkampf1, Benedikt Pleintinger1, Ralph Bayer1, and Thomas Krueger2

1 German Aerospace Center (DLR), Oberpfaffenhofen, Germany
2 European Space Agency, ESTEC, 2200 AG Noordwijk, The Netherlands

ABSTRACT

This paper presents the complete preparation and preliminary findings, through user studies and astronaut training, for METERON SUPVIS Justin, a supervised autonomy space telerobotics experiment. Astronauts on board the International Space Station shall command a complex humanoid robot as a coworker to perform navigation, service, and repair tasks in a simulated Martian solar farm experimental environment on Earth through a tablet PC graphical user interface (GUI). The GUI is designed to be powerful yet intuitive to use, with features such as dynamically updated relevant information and robot commands, as well as intuitive user guidance functionality. The robot coworker, in turn, can utilize its local intelligence to reason about and execute the commanded tasks. Preliminary findings from the user studies and astronaut training are presented and discussed. Finally, a series of SUPVIS Justin experiment sessions between the ISS and the ground is planned to begin during ISS Expedition 52/53 in 2017, extending through to 2018.

Key words: Telerobotics; Telerobotics Technology; Intuitive User Interface; Supervised Autonomy; Robot Coworker; Robot Intelligence.

1. INTRODUCTION

Space robotic and telerobotic solutions have proven their success in extending our ability to explore farther into our solar system, exemplified by the numerous Lunar and Mars rovers designed and launched by several national and international space agencies. They have also enabled critical space construction and repair tasks, as shown by the Canadarm series [1] on board the Space Shuttle and the International Space Station (ISS).

As robots grow increasingly capable and consequently more complex, the ability to command these robots effectively, so as to extract their full capability, becomes more urgent. This is particularly true in space telerobotics, where the user (often an astronaut) must command a complex robot to perform challenging mission-critical tasks. This paper presents the work being carried out by the German Aerospace Center (DLR) and the European Space Agency (ESA) toward enabling a robust, scalable telerobotics solution for space deployment.

Figure 1. From telepresence to supervised autonomy: the different modes of telerobotics in METERON [2]. One of the goals of METERON is to explore different modalities of teleoperation. In the Haptics series, a one degree-of-freedom (DOF) joystick is used for telepresence with haptic force feedback. In the SUPVIS series, the robot is controlled through a notebook or tablet PC interface at the task level. A mixture of both modalities is also explored in the Interact experiment.

In this paper we present the current status of the planned SUPVIS Justin experiment to teleoperate a humanoid robot on Earth from the ISS. SUPVIS Justin is part of the METERON (Multi-purpose End-To-End Robotic Operation Network) experiment suite initiated by ESA to study viable telerobotic solutions. Figure 1 shows the different modalities of teleoperation being explored in the METERON experiment suite, from real-time haptic telepresence in the Haptics series [3][4], to task-level command using the supervisory control of the SUPVIS series [5][6][7][8]. Hybrid teleoperation, mixing real-time haptic telepresence and task-level commands to optimize teleoperation effectiveness, as explored in the METERON Interact experiment [9], further offers great potential for highly complex and unexpected mission scenarios, where low-level human-in-the-loop control may only be called upon sporadically.

For the SUPVIS Justin experiment, we are developing robot supervisory control in the form of supervised autonomy. The robot is treated as a coworker for the astronaut on the planetary surface, as opposed to a physical tool extension, which can be more effectively realized with telepresence. The robot is designed to have more local intelligence, with the ability to perform reasoning to plan and execute a given task [10]. The tablet PC user interface (UI) incorporates dynamically updated elements to provide the astronaut with the relevant information and commands [11]. This UI serves as the communication gateway between the astronaut and the robot, through which the astronaut can command the robot and receive task execution feedback from it.

In the SUPVIS Justin experiment, the astronaut is tasked with commanding a humanoid robot to perform a catalog of tasks including survey, navigation, and repairs in a simulated Martian solar farm environment designed and implemented at DLR as part of the ground segment. In particular, hardware and software repairs and updates can be carried out by the robot through mechanical manipulation as well as electrical and data interfacing. The aim is to explore the ability to teleoperate a humanoid robot through an intuitive GUI on a tablet PC using supervised autonomy. A high-level concept illustration is shown in Figure 2. As identified by the International Space Exploration Coordination Group (ISECG), a teleoperation modality gap is still present between full telepresence and full autonomy in space robotics. The supervised autonomy concept is a versatile possibility to bridge this gap. Commands are given to the robot at the abstract (sub)task package level for the robot to execute with its own algorithms. With this approach, time delays in communication can be effectively handled, as the robot carries out the commanded task and remains in a safe state throughout. The success (or failure) of the task execution is communicated to the astronaut at the end of each execution.
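The command flow described above — abstract task packages executed by the robot's own algorithms, with only the final outcome reported back — can be sketched as follows. The class names and the toy skill catalog are illustrative assumptions, not the actual DLR flight software.

```python
from dataclasses import dataclass, field
from enum import Enum


class TaskResult(Enum):
    SUCCESS = "success"
    FAILURE = "failure"


@dataclass
class TaskPackage:
    """An abstract (sub)task command, e.g. 'shut down device'."""
    name: str
    parameters: dict = field(default_factory=dict)


class RobotCoworker:
    """Executes task packages with its own algorithms and reports only
    the final outcome, as in the supervised autonomy concept."""

    def __init__(self, skills):
        # skills: task name -> callable returning True on success
        self.skills = skills

    def execute(self, task):
        skill = self.skills.get(task.name)
        if skill is None:
            # Unknown command: refuse rather than act unsafely.
            return TaskResult.FAILURE
        ok = skill(**task.parameters)
        return TaskResult.SUCCESS if ok else TaskResult.FAILURE


# Hypothetical skill catalog for illustration only.
skills = {
    "navigate_to_target": lambda target: target in {"SPU-1", "SPU-2"},
    "shut_down_device": lambda device: device.startswith("SPU"),
}
robot = RobotCoworker(skills)
print(robot.execute(TaskPackage("shut_down_device", {"device": "SPU-1"})))
# -> TaskResult.SUCCESS
```

Because the astronaut only sees the final result, the round-trip delay affects when feedback arrives, not whether the task can be executed safely.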

The remainder of this paper is organized into five sections. Section 2 gives an overview of past related work in space robotics, with special attention paid to teleoperation. Section 3 details the development of the key components of SUPVIS Justin: the humanoid robot, the GUI, and the experimental environment. These are integrated into the SUPVIS Justin experiment protocol, which is briefly described in Section 4. Section 5 describes the tests conducted so far with ground users as well as astronaut training sessions, and discusses our preliminary findings. Finally, Section 6 draws the relevant conclusions of this paper, looks toward the upcoming ISS-to-ground experiment sessions, and provides an outlook for SUPVIS Justin and beyond.

Figure 2. Illustration of the basic METERON SUPVIS Justin command concept [7]. The astronaut on board the ISS is tasked with commanding a humanoid robot on the planetary surface with a tablet PC in a supervised autonomy fashion. The astronaut and the robot communicate on an abstract task level, which relies on the robot's own intelligence to carry out commanded task packages. The robot in turn communicates the available robot tasks dynamically to the astronaut on the tablet PC. The SUPVIS Justin concept should reduce the astronaut's workload in commanding the robot, as it does not require the full attention of the user throughout the entire operation. Task-level robot cooperation is also robust against communication delays and jitter caused by the distance and disturbances between the astronaut and the robot.

2. RELATED WORK

Telerobotics in space applications has seen great progress in the past few decades. As mentioned in Section 1, the Canadarm series has been deployed on the Space Shuttle, and continues to be a vital part of the ISS [1]. The first humanoid robot in space, Robonaut 2, developed by NASA and General Motors, has also been deployed on the ISS [12]. The humanoid form is of particular interest for telerobotics, as it offers the most intuitive form factor as the avatar for the human operator, which can in turn reduce training time and improve teleoperation effectiveness.

Other telerobotic systems, such as the ROKVISS experiment, have also shown great promise in providing long-distance (Earth to ISS) real-time haptic telepresence. In this five-year-long experiment jointly conducted by DLR, Roscosmos, RTC, and RSC Energia, a 2-DOF robot with force-torque sensors at each joint was mounted on the outside of the Zvezda module of the Russian segment of the ISS, and controlled from the ground using a 2-DOF force reflection joystick [13]. The same concept has been put in reverse with the ongoing KONTUR-2 experiment, in which the joystick is deployed on board the ISS to control several dexterous robots on the ground. With a four-channel controller architecture whose stability is guaranteed [14] through passivity and the Time Delay Power Network, haptic telemanipulation has been successfully achieved in several experiment sessions [15].

These systems represent the current state-of-the-art in real-time haptic force feedback telerobotics. They provide the user with the best immersion performance, with the robot acting as an extension of the user at a distance. However, such systems require short time delays, with a communication round trip well under 1 sec. This could prove unfeasible in more stringent communication conditions due to distance (for example, the Earth-Moon communication round trip is around 3 sec.), or due to disturbances in the communication infrastructure that can cause undesirable jitter. In addition, full immersion telepresence imposes a high mental, and sometimes physical, workload, which reduces the effective astronaut teleoperation time. Research has been underway in recent years to reduce the teleoperation workload by delegating some of the tasks to the robot. With the TaskMan concept, in which the user commands the robot through gestures rather than joint-level commands, the robot is effectively commanded in task space. Furthermore, by delegating completed tasks for the robot to maintain, the user is relieved of significant workload, while achieving greatly improved telemanipulation performance [16]. With supervised autonomy, the task space command concept advances further by using higher levels of autonomy of the robot, with more advanced reasoning and task execution capabilities, in combination with an interactive UI [10]. The supervised autonomy concept also serves as the basis for SUPVIS Justin's mode of teleoperation. The UI can be implemented on a tablet PC, or in the newer form factor of the smartwatch. Given the ease of carrying, particularly of the smartwatch, the astronaut can have a mode of communicating with the robot(s) available at all times [17].

3. KEY TECHNOLOGY DEVELOPMENTS FOR SUPVIS JUSTIN

To realize the SUPVIS Justin experiments, several key technology components have to be in place and integrated: the deployment of a humanoid robot capable of reasoning and abstract task execution; a powerful, yet intuitive, tablet GUI to command Justin; an experimental environment that simulates realistic planetary conditions and is able to measure robot task performance; and the ISS-to-ground Ku-band communication link.

3.1. Rollin’ Justin and robot reasoning

Rollin' Justin, as shown in Figure 3, is DLR's high-dexterity humanoid robot [18], to be deployed for SUPVIS Justin. It is equipped with two 7-DOF DLR Light Weight Robot (LWR) III arms [19] and DLR II hands [20], a 1-DOF torso, a 2-DOF head equipped with an assortment of camera options, and an 8-DOF mobile platform capable of navigating Justin through uneven terrain. Commanding such a complex robot can cause a significant workload for the astronaut.

Figure 3. DLR humanoid robot Rollin' Justin [7]. With 51 total active DOF in the head, torso, arms, hands, and mobile platform [18], as well as a data interface probe (DIP), Rollin' Justin is a high-capability, high-complexity humanoid robot. Its local intelligence is capable of reasoning to help plan and execute a commanded task.

To ease this workload, the robot functions as a coworker by taking over some of the cognitive workload with its own intelligence, utilizing hybrid reasoning [10] to help plan and execute the commanded task. As such, the astronaut is not required to provide every detail of the robot's trajectory to complete a simple motion. Instead, the command takes the simple, intuitive form of task packages with easy-to-understand terms such as "shut down device" or "navigate to target". Furthermore, Justin can be updated with additional relevant functionality for the tasks at hand. This capability enables great potential for scalability of functionality. Finally, a context-aware mission control concept ensures that only safe and relevant actions are provided to the astronaut [21]. These are sent to the tablet PC GUI described in the next section.
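Such context-aware filtering of the available actions can be sketched as a simple precondition check per command; the command names and world-state keys below are hypothetical, chosen only to mirror the examples in the text.

```python
def available_commands(commands, state):
    """Return only the commands whose preconditions hold in the
    current world state, so unsafe or irrelevant actions are never
    offered to the astronaut."""
    return [name for name, precondition in commands.items()
            if precondition(state)]


# Hypothetical command catalog with per-command safety preconditions.
commands = {
    "shut down device": lambda s: s["device_powered"],
    "power up device": lambda s: not s["device_powered"],
    "navigate to target": lambda s: not s["docked"],
}

state = {"device_powered": True, "docked": True}
print(available_commands(commands, state))  # -> ['shut down device']
```

As the world state changes after each task execution, re-evaluating the preconditions dynamically updates the command set shown in the GUI.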

3.2. Designing a user interface for supervised autonomy in space

Given the limited space on the ISS, the size of any item is at a premium. On the other hand, commanding complex robots to perform complex tasks, even with the aid of supervised autonomy, requires the exchange of a significant amount of information through the GUI. The key to a successful UI design for the ISS is to present relevant information in a timely manner, while maintaining usability by not flooding the user with unnecessary information.

The SUPVIS Justin tablet PC application is developed based on the ESA Telerobotics lab's GUI design for the METERON Haptics and Interact experiments. Therefore, they share the same look and feel. For SUPVIS Justin, we further developed context-specific and safe robot functions for the current task, such that only the relevant commands and information are displayed at all times [11]. Figure 4 gives an example of the GUI layout. The large amount of information made available can be seen in the figure and its caption.

Figure 4. An example of the SUPVIS Justin GUI layout, maximizing information while keeping it manageable for the user through dynamic status and available command updates. 1: live status of the connectivity to the robot on the ground. 2: progress of the protocol and the instructions for the current tasks. 3: main viewing area of the GUI. In this figure, the camera view is shown, which displays the view as seen by the robot's camera. This is the sole source of visual information on the location and objects the astronaut can work with through the robot. An orange overlay of the object in view can be seen, which represents the robot's estimation of the location of the actual object. The user may click on the overlay to interact with the object through the robot. 4: view selector buttons to switch between different views, such as the map view or camera view. 5: dynamic robot command area. Relevant and safe robot commands are populated here by mission control, to ensure that the astronaut cannot command dangerous or faulty tasks. 6: buttons to access the manual of the experiment or to log out of the application. 7: stop button to halt the experiment protocol at any time.

Furthermore, the UI is designed with intuitive user guidance functionality. With the camera view in the UI, the operator sees through the robot's vision, which is augmented with the objects known to the robot. To interact with an object, the user simply has to click on the object in the camera view. A set of available and relevant robot commands is then displayed to the operator. This constitutes a vital step toward a powerful, yet intuitive, UI for telerobotics. By keeping the UI simple to use, it is foreseeable to scale up from commanding one robot with one GUI device to commanding a team of robots. In the space environment, where astronauts are scarce, this can be a great advantage.

3.3. The Solex environment and SPU

The SOLar Farm EXperimental (Solex) environment, shown in Figure 5, has been developed to provide a realistic Martian solar farm setting for the purpose of space robotics performance testing and validation [8]. It is designed with Martian scenery and terrain to give a close approximation of the Martian environment for the robot to navigate through. A fleet of Solar Panel Units (SPUs) is placed in the Solex environment for the robot to perform mechanical manipulation tasks on different forms of switches and mechanisms, as well as electronic data interfacing using the DIP. With mechanical and electronic interfaces available, the robot can perform a wide range of maintenance and repair tasks.

Figure 5. Solex concept and realization. Top: the Solex environment concept. It is envisioned to simulate a solar farm on the Martian surface. It is equipped with solar panel units (SPUs) to be serviced and maintained by the robot Rollin' Justin. Simulated Martian terrain is to be implemented. Life-size Martian surface photographs printed on large surfaces surround the Solex environment to give the astronaut a more realistic feel of commanding a robot on Mars. Bottom: the Solex environment under construction. In the photo shown, the Solex environment is near completion, with the SPUs in place and the Martian surface images surrounding the environment. The simulated Martian terrain is not yet installed in this photo.

The SPU, as shown in Figure 6, uses a modular design to enable quick and easy exchange of components, facilitating a wide array of robot task experiments. Each SPU is equipped with a Smart Base to monitor and control each component. This enables the SPU to record the actions performed by the robot from the perspective of the manipulated object, which helps obtain a fuller picture and understanding of the conducted experiment, thus eliminating faulty interpretation of the task execution on the environment or on the serviced and repaired device.
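The Smart Base's object-side action recording might be sketched as follows; the class and method names are assumptions for illustration, not the actual SPU software.

```python
import time


class SmartBaseLogger:
    """Records actions from the manipulated object's perspective, so
    task execution can be verified independently of the robot's own
    report of success or failure."""

    def __init__(self):
        self.events = []

    def record(self, component, action, value):
        # Log every mechanical or electrical event with a timestamp.
        self.events.append({
            "t": time.time(),
            "component": component,
            "action": action,
            "value": value,
        })

    def verify(self, component, action, value):
        """Check whether the expected effect was actually observed
        on the device, regardless of what the robot reported."""
        return any(e["component"] == component
                   and e["action"] == action
                   and e["value"] == value
                   for e in self.events)


log = SmartBaseLogger()
log.record("main_switch", "toggled", "off")
print(log.verify("main_switch", "toggled", "off"))  # -> True
```

Cross-checking the robot's reported outcome against this object-side log is what rules out faulty interpretation of a task execution.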

3.4. ISS-Earth communication

For the communication link between the tablet PC on the ISS and the robot on the ground, SUPVIS Justin employs the Multi Protocol Communication Controller (MPCC) over the Ku-band, which was also used in METERON Interact [9]. A team of Tracking and Data Relay Satellites (TDRS) in orbit relays the data between the Earth and the ISS, as shown in Figure 7. With full satellite coverage around the Earth, loss of signal (LOS) can theoretically be completely avoided throughout the ISS's entire orbit. In practice, however, short LOS periods (about 5 minutes) do occur during most ISS orbits. Due to the multiple gateways the data must pass through, the round trip time observed between DLR Oberpfaffenhofen and the ISS is 800-850 msec, which is significantly higher than a direct ISS-to-ground link, such as the S-band link used in KONTUR-2, at 20-30 msec [15]. However, the S-band link suffers from much longer LOS, with nominal communication availability of only around 10 minutes during orbits in which the ISS flies by the ground station. Since supervised autonomy does not depend on a real-time link, the high availability and low LOS make the Ku-band the more desirable communication architecture for SUPVIS Justin. A bandwidth of 1 Mbit up/down has been requested, although more bandwidth is available.

Figure 6. The Solar Panel Unit (SPU). Left: the front of the SPU. A variety of modular mechanical switches located on the Smart Base can be quickly exchanged to help examine the robot's manipulation performance. Center: a view of the electronic components in the Smart Base of the SPU. An industrial PC (shown in the pulled-out drawer) enables it to record all actions, mechanical and electrical, as performed by the robot, to help better evaluate the real performance of the robot's actions. The Smart Base can be equipped with different electronics to support different accessories attached via a quick-change adapter on its top side, such as the solar panel used for the SUPVIS Justin experiment. Right: the touch screen display on the Smart Base. A touch screen on the side of the Smart Base allows the operator to make easy changes to system settings and to read information on the state of the SPU.
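The trade-off between the two links can be illustrated with a rough back-of-the-envelope calculation using the round-trip and availability figures quoted above; the contact-window lengths and the 60 s task duration are assumptions for illustration only.

```python
def cycles_per_contact(contact_s, round_trip_s, task_s):
    """How many command/feedback cycles of a task of duration task_s
    fit into one contact window, given the link's round-trip time."""
    return int(contact_s // (task_s + round_trip_s))


# Assumed Ku-band window: an ISS orbit minus a ~5 min LOS gap (~85 min).
# A 60 s autonomous task package is barely affected by the ~0.825 s
# round trip, while at that latency no stable real-time haptic loop
# would be possible at all.
ku_cycles = cycles_per_contact(85 * 60, 0.825, 60.0)

# Assumed S-band window: a single ~10 min ground-station pass at ~25 ms.
sband_cycles = cycles_per_contact(10 * 60, 0.025, 60.0)

print(ku_cycles, sband_cycles)  # -> 83 9
```

The point is not the exact numbers but the structure: for task-level commands the round-trip delay is negligible next to the task duration, so link availability, not latency, dominates the comparison.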

4. EXPERIMENT SCENARIO

With the key technologies discussed in the previous section in place, we are able to fully implement the SUPVIS Justin experiment. One of the main goals is to study the capability of telerobotics to serve in a planetary colony environment. This differs from exploration missions in that manufactured mechanical and electronic devices would be present in such a scenario. As such, we envision using the tablet PC GUI, as shown in Figure 4, on board the ISS to command Justin on the ground to perform navigation, service, and repair duties on devices in the environment, as shown in Figure 5. Our early findings are presented in the following section.

Figure 7. The communication architecture between the ISS and ground for SUPVIS Justin [22]. The team of TDRS satellites serves as the gateway between the ISS and the ground. The tablet PC in the Columbus module communicates through this array of satellites on the Ku-band using MPCC. A bandwidth of 1 Mbit up/down is assigned for the experiment, with a communication round trip of about 800-850 msec.

5. PRELIMINARY FINDINGS

Over 150 test runs of on-ground user studies have been carried out to test the SUPVIS Justin experiment concept. Part of the test campaign was conducted during the Automatica exhibition in Munich in 2016, as shown in Figure 8. During these protocol runs, the tablet PC was placed in a full-size mockup of the ISS European segment Columbus module, 35 km away from the Solex environment. To further approach real ISS-to-ground conditions, an artificial time delay of 800 msec was injected into the communication. In addition, astronaut training and ground segment simulation have also been performed between DLR Oberpfaffenhofen and the European Astronaut Center (EAC) in Cologne, Germany.
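The artificial delay injection used in these tests can be emulated with a small harness like the following; the timer-based queue and the 400 ms one-way figure are a sketch under assumptions, not the actual test infrastructure.

```python
import queue
import threading
import time


class DelayedLink:
    """Injects a fixed one-way delay into a message stream, emulating
    the ISS-to-ground round trip (here 400 ms each way, i.e. an 800 ms
    round trip; figures assumed from the test setup in the text)."""

    def __init__(self, one_way_delay_s):
        self.delay = one_way_delay_s
        self._q = queue.Queue()

    def send(self, msg):
        # Deliver the message only after the simulated transit time.
        threading.Timer(self.delay, self._q.put, args=(msg,)).start()

    def receive(self, timeout=5.0):
        return self._q.get(timeout=timeout)


link = DelayedLink(0.4)
t0 = time.time()
link.send("shut down device")
msg = link.receive()
elapsed = time.time() - t0
print(msg)  # arrives only after roughly the injected 0.4 s delay
```

Routing both commands and feedback through such a link lets ground tests exercise the same timing conditions the astronaut will experience from orbit.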

Through the tests we conducted, we noticed a learning curve for users regarding the supervised autonomy form of teleoperating a robot. This is particularly true for users who are not familiar with the robot's performance limits. We observed users readjusting their workload expectations between full robot autonomy ("I am not sure why I am needed here?") and full user immersion, e.g. telepresence with real-time haptic and position feedback ("I have/want to control everything on the robot!"), until reaching the optimal level of human-robot collaboration afforded by currently available technology. Regarding scalability, users have reflected that the system could feasibly be used for commanding multiple robots.

As technology improves, the level of abstraction can also be increased, yielding larger autonomous work packages. The robot can also provide more detailed and nuanced feedback to the astronaut, becoming a more effective member of the human-robot collaboration, albeit with the human maintaining the lead role in the working relationship.


Figure 8. A preliminary test campaign in 2016. In this test campaign, the robot Justin was located 35 km away from the user, which provided the first simulation of true telecommand from a distance. Top: a user commanding Justin from an ISS Columbus module mockup with the supervised autonomy tablet GUI. Bottom: Justin carrying out an SPU maintenance task in the Solex environment located at DLR Oberpfaffenhofen.

One interesting observation made during the training and simulation sessions was the users' realization that the robot is intelligent and knows how to perform a task. This is a crossover point at which we see the user recognizing the robot as a coworker, rather than just a complex tool. One consequence is that detailed descriptions of each task become redundant, allowing the protocol description to be kept short. This should be a welcome feature for astronauts used to working with bulky documentation for each experiment. Equally importantly, it can greatly improve work efficiency on board the ISS.

We find that the supervised autonomy concept deployed in SUPVIS Justin shows great potential for scalability. By delegating (sub)task packages to the robot on site, thus relieving the astronaut of every minute detail of each robot task, he/she can command a team of robots in future missions to perform larger, more complex tasks. Through continuing advancements in robots' physical capabilities (e.g. payload, dexterity) and processing capabilities (e.g. perception, reasoning), the level of task abstraction can be increased, which would allow each astronaut to be in charge of an increasing number of robots. This would make large-scale planetary projects such as colonization, large-scale exploration, and mining more feasible.

The final validation of METERON SUPVIS Justin is planned in the form of multiple successive ISS-to-ground experiments, with iterative experiment design changes to help improve human-robot collaboration performance. The first experiment of SUPVIS Justin is slated for ISS Expedition 52/53 in 2017, and the experiments shall continue into 2018. Our aim is for the outcome of this experiment to serve as a benchmark for future space missions with telerobotics deployment.

6. CONCLUSION AND MISSION OUTLOOK

In this paper, we presented the complete preparation of METERON SUPVIS Justin toward the ISS-to-ground experiment. The experiment design and the key technologies developed and incorporated were described and discussed. Through extensive preliminary testing we learned of the performance potential of supervised autonomy based telerobotics. It was particularly interesting to observe the users' learning curve in becoming familiar with the concept of using the robot as an interactive coworker, rather than a tool extension. It was also noted through the user studies that the current design lends itself to scaling up to commanding a team of robots to perform more complex tasks. Finally, the SUPVIS Justin mission is planned to start in ISS Expedition 52/53 in 2017 and run through to 2018.

ACKNOWLEDGMENTS

We would like to thank all our colleagues and partners at ESA, EAC, GSOC/Columbus Control Centre, Danish Aerospace Company, and DLR for their hard work in making SUPVIS Justin possible. We also want to thank Mr. Paolo Nespoli for taking an interest and the time to go through our experiment design on multiple occasions. A big note of gratitude also goes to Dr. Andre Schiele, the co-creator of SUPVIS Justin.

REFERENCES

[1] S. Sachdev, W. Harvey, G. Gibbs, B. Marcotte, N. Buckley, T. Braithwaite, and D. Rey. Canada and the International Space Station program: Overview and status since IAC 2005. In The 57th International Astronautical Congress (IAC), Valencia, Spain, October 2006.

[2] D. Leidner. Cognitive Reasoning for Compliant Robot Manipulation. PhD thesis, University of Bremen, 2017.

[3] A. Schiele, M. Aiple, T. Krueger, F. van der Hulst, S. Kimmer, J. Smisek, and E. den Exter. Haptics-1: Preliminary results from the first stiffness JND identification experiment in space. In 10th International Conference on Haptics: Perception, Devices, Control, and Applications, EuroHaptics 2016, pages 13-22, London, UK, July 2016.

[4] T. Krueger and A. Schiele. Preparations for the Haptics-2 space experiment on-board the International Space Station. In The 13th Symposium on Advanced Space Technologies for Robotics and Automation (ASTRA), Noordwijk, The Netherlands, 2015.

[5] National Aeronautics and Space Administration. Supervisory control of EUROBOT (SUPVIS-E). http://www.nasa.gov/mission_pages/station/research/experiments/1979.html, November 2016.

[6] C. Jacobs, S. Klai, and K. Struyven. METERON SUPVIS-M. In The 39th Payload Operation Integration Working Group (POIWG) Meeting, Huntsville, AL, USA, January 2016.

[7] N. Y. Lii, D. Leidner, A. Schiele, P. Birkenkampf, B. Pleintinger, and R. Bayer. Command robots from orbit with supervised autonomy: An introduction to the METERON SUPVIS-Justin experiment. In The ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2015.

[8] N. Y. Lii, A. Schiele, D. Leidner, P. Birkenkampf, R. Bayer, B. Pleintinger, A. Meissner, and A. Balzer. Simulating an extraterrestrial environment for robotic space exploration: The METERON SUPVIS-Justin telerobotic experiment and the SOLEX proving ground. In Proc. of the 13th Symposium on Advanced Space Technologies for Robotics and Automation (ASTRA), Noordwijk, The Netherlands, 2015.

[9] A. Schiele. Towards the Interact space experiment: Controlling an outdoor robot on Earth's surface from space. In The 13th Symposium on Advanced Space Technologies for Robotics and Automation (ASTRA), Noordwijk, The Netherlands, 2015.

[10] D. Leidner, P. Birkenkampf, N. Y. Lii, and C. Borst. Enhancing supervised autonomy for extraterrestrial applications by sharing knowledge between humans and robots. In Workshop on How to Make Best Use of a Human Supervisor for Semi-Autonomous Humanoid Operation, IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2014.

[11] P. Birkenkampf, D. Leidner, and C. Borst. A knowledge-driven shared autonomy human-robot interface for tablet computers. In IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids), 2014.

[12] M. A. Diftler, J. S. Mehling, M. E. Abdallah, N. A. Radford, L. B. Bridgwater, A. M. Sanders, R. S. Askew, D. M. Linn, J. D. Yamokoski, F. A. Permenter, et al. Robonaut 2 - the first humanoid robot in space. In Proceedings of IEEE International Conference on Robotics and Automation, pages 2178–2183, Shanghai, China, 2011. IEEE.

[13] G. Hirzinger, K. Landzettel, D. Reintsema, C. Preusche, A. Albu-Schaeffer, B. Rebele, and M. Turk. ROKVISS - robotics component verification on ISS. In International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS), 2005.

[14] M. Stelzer, P. Birkenkampf, B. Brunner, B.-M. Steinmetz, J. Vogel, and S. Kuehne. Software architecture and design of the KONTUR-2 mission. In IEEE Aerospace Conference, Montana, USA, March 2017. IEEE.

[15] J. Artigas, R. Balachandran, C. Riecke, M. Stelzer, B. Weber, J.-H. Ryu, and A. Albu-Schaeffer. KONTUR-2: Force-feedback teleoperation from the International Space Station. In Robotics and Automation (ICRA), 2016 IEEE International Conference on, Stockholm, Sweden, 2016.

[16] N. Y. Lii, Z. Chen, M. A. Roa, A. Maier, B. Pleintinger, and C. Borst. Toward a task space framework for gesture commanded telemanipulation. In IEEE International Workshop on Robot-Human Interactive Communication (Ro-Man), 2012.

[17] P. Birkenkampf, D. Leidner, and N. Y. Lii. Ubiquitous User Interface Design for Space Robotic Operation. In Proc. of the 14th Symposium on Advanced Space Technologies for Robotics and Automation (ASTRA), 2017.

[18] C. Borst, T. Wimboeck, F. Schmidt, M. Fuchs, B. Brunner, F. Zacharias, P. R. Giordano, R. Konietschke, W. Sepp, S. Fuchs, C. Rink, A. Albu-Schaeffer, and G. Hirzinger. Rollin' Justin - mobile platform with variable base. In IEEE International Conference on Robotics and Automation (ICRA), 2009.

[19] G. Hirzinger, N. Sporer, A. Albu-Schäffer, M. Hähnle, R. Krenn, A. Pascucci, and M. Schedl. DLR's torque-controlled light weight robot III - are we reaching the technological limits now? In The IEEE International Conference on Robotics and Automation (ICRA), Washington D.C., USA, May 2002.

[20] S. Haidacher, J. Butterfass, M. Fischer, M. Grebenstein, K. Joehl, K. Kunze, M. Nickl, N. Seitz, and G. Hirzinger. DLR Hand II: Hard- and software architecture for information processing. In Proceedings of IEEE International Conference on Robotics and Automation, volume 1, pages 684–689, Taipei, Taiwan, 2003. IEEE.

[21] D. Leidner, P. Birkenkampf, and N. Y. Lii. Context-aware Mission Control for Astronaut-Robot Collaboration. In Proc. of the 14th Symposium on Advanced Space Technologies for Robotics and Automation (ASTRA), 2017.

[22] Massachusetts Institute of Technology System Architecture Lab. Research: Space communications. http://systemarchitect.mit.edu/spaceCommunications.php.