
SPHERES Interact - Human-Machine Interaction aboard the International Space Station

Enrico Stoll, Steffen Jaekel, Jacob Katz, Alvar Saenz-Otero
Space Systems Laboratory
Massachusetts Institute of Technology
77 Massachusetts Avenue, Cambridge, Massachusetts, 02139-4307, USA
[email protected], [email protected], [email protected], [email protected]

Renuganth Varatharajoo
Department of Aerospace Engineering
University Putra Malaysia
43400 Selangor, Malaysia
[email protected]

Abstract

The deployment of space robots for servicing and maintenance operations that are teleoperated from the ground is a valuable addition to existing autonomous systems, since it provides flexibility and robustness in mission operations. In this connection, not only robotic manipulators are of great use, but also free-flying inspector satellites supporting the operations through additional feedback to the ground operator. The manual control of such an inspector satellite at a remote location is challenging, since navigation in three-dimensional space is unfamiliar and large time delays can occur in the communication channel. This paper presents a series of robotic experiments in which satellites are controlled by astronauts aboard the International Space Station (ISS). The Synchronized Position Hold Engage Reorient Experimental Satellites (SPHERES) were utilized to study several aspects of a remotely controlled inspector satellite. The focus of this case study is to investigate different approaches to human-spacecraft interaction with varying levels of autonomy under zero-gravity conditions.


1 Introduction

Human-machine interaction is a widespread research topic on Earth, since there are many terrestrial applications, such as industrial assembly or rescue robots. Analogously, there are a number of possible applications in space, such as maintenance, inspection, and assembly, amongst others. Satellites are the only complex engineering system without an infrastructure for routine maintenance and repair. The Space Shuttle based satellite servicing missions executed in the past, such as the Hubble, Solar Maximum, SYNCOM IV-3, and INTELSAT VI (F-3) missions, are not applicable to arbitrary spacecraft. The reasons are, on the one hand, the costs of a Space Shuttle deployment, which exceed the expenses for re-constructing and re-launching the specific satellite, and, on the other hand, the risk to the crew that the use of manned spacecraft like the Space Shuttle always constitutes.

Currently, the size and weight of spacecraft are limited by the launch vehicle: they have to fit into the payload envelope of the respective launcher. However, most scientific satellites would benefit from larger payload volumes. Space telescopes, for example, could be significantly improved if more room were available for larger apertures. Likewise, if the size of solar panels can be increased, the additional available energy allows more complex payloads to be integrated. In-space robotic assembly (ISRA) is an approach to overcoming launcher limitations. Robotic assembly can be used to construct spacecraft in orbit after their parts have been brought to space with multiple launch vehicles. This principle has already been employed for the construction of the ISS: different modules were separately brought to space by either the Space Shuttle or the Russian Proton launch vehicle and were subsequently assembled in space by astronauts. In contrast, ISRA can also be performed by controlling the procedures from the ground.

Similar to ISRA, ground-controlled robotic spacecraft can be utilized for on-orbit repair and maintenance operations. Robotic spacecraft can also be utilized for inspecting a target satellite or monitoring such on-orbit servicing (OOS) operations. Furthermore, it is planned to use ground-controlled spacecraft for orbit transfer: malfunctioning spacecraft can be de-orbited from low Earth orbit (LEO), or geostationary (GEO) satellites that have exceeded their operating life can be relocated to the graveyard orbit.

1.1 State of the Art

In general, the deployment of regularly used robotic systems in space is currently limited to the Shuttle Remote Manipulator System (SRMS), the Japanese Experiment Module Remote Manipulator System (JEM-RMS), and the Mobile Servicing System (MSS) aboard the ISS. In addition to the 17 m long Canadarm2 (Space Station Remote Manipulator System, SSRMS), the MSS also features a Special Purpose Dexterous Manipulator (SPDM) (Mukherji et al., 2001). The three systems can be teleoperated by the crew and are being used for Extra-Vehicular Activity (EVA) support, Space Station assembly, and satellite operations (retrieve, repair, deploy). In combination with the SRMS, the Orbiter Boom Sensor System (OBSS) (Greaves et al., 2005) is utilized for the inspection of the Shuttle's heat protection tiles.

In addition to the described robotic servicing capabilities, which are strictly bound to the Shuttle or the ISS, several satellite-based demonstrators have been brought to orbit (or controlled on Earth via a satellite in orbit (Stoll et al., 2009a)) in order to demonstrate the possibility of more flexible and cost-effective future robotic on-orbit servicing systems. This section gives a brief overview of existing systems, with special emphasis on missions that involve free-flyers for proximity operations and inspection.

The Robot Technology Experiment (ROTEX) (Hirzinger et al., 1993) was developed by the German Aerospace Center (DLR). It was flown by the National Aeronautics and Space Administration (NASA) aboard Space Shuttle Columbia in 1993 and was the first remotely controlled robot in space. Besides autonomous (pre-programmed) and tele-sensor-programmed (learning by showing) operations, the operator on the ground could control the robot using predictive, three-dimensional (3D) computer graphics in teleoperation mode with a delay of approximately seven seconds.

The Ranger robotics program started in 1992 as the Ranger telerobotic flight experiment (RTFX) at the University of Maryland (Roderick et al., 2004). The goal was to develop a dexterous extravehicular space telerobot with four robot manipulators and a free-flight capability in space. In 1996 the program was redirected as a Shuttle launch payload but never advanced beyond an engineering model.

The Japanese Engineering Test Satellite VII (ETS-VII) (Imaida et al., 2001), launched by the Japan


Aerospace Exploration Agency (JAXA) in 1997, was composed of a pair of satellites and successfully demonstrated bilateral teleoperation in space. The smaller, cooperative target satellite was autonomously inspected and captured by the servicer satellite, which featured a six degrees of freedom (DoF) robotic manipulator with haptic feedback.

Installed in 2005 outside the Russian Service Module, the German Robotic Component Verification aboard the ISS (Rokviss) (Albu-Schaffer et al., 2006) experiment featured a two-joint robotic arm. It was controlled by operators on the ground utilizing a haptic-visual display for telepresent manipulation via a direct S-band link with a total communication delay below 30 ms. In addition, the robot could be operated automatically in order to allow continuous experimentation without the need for a constant ground link to the experimentation platform.

The Demonstration of Autonomous Rendezvous Technology (DART) (Rumford, 2003), developed by NASA and launched in 2005, was the first mission to rendezvous with a satellite completely autonomously. However, DART showed problems with its navigation system and suffered from excessive fuel usage. When DART approached its target for the execution of close proximity and formation flight operations, it overshot an important waypoint and collided with the communication satellite MUBLCOM, with which it was supposed to rendezvous. Consequently, the mission was retired prematurely.

Developed by the US Air Force Research Laboratory, the Experimental Small Satellites 10 (Davis and Melanson, 2004) and 11 (Madison, 2000) (XSS-10/11) were launched in 2003 and 2005, respectively, and were intended to demonstrate key technologies for future on-orbit servicing missions. The micro-satellites demonstrated line-of-sight guidance, rendezvous, and close-proximity maneuvering around an orbiting satellite. Both missions utilized the upper stage of the launch vehicle as a simulated target spacecraft to be serviced.

Brought into geostationary orbit in 2006, the Micro-Satellite Technology Experiment (MiTEx) (Boeing-Company, 2006) was intended to execute a variety of autonomous operations, maneuvering, and station-keeping. In 2008 and 2009, both satellites conducted the first deep space inspection of the malfunctioning defense support satellite (DSP-23).

The goal of the Orbital Express (Shoemaker and Wright, 2003) mission, developed by the Defense Advanced Research Projects Agency (DARPA) and launched in 2007, was to validate the technical feasibility of robotic on-orbit servicing, including autonomous rendezvous, proximity operations, capture, docking, and fuel transfer. The experiment was composed of two satellites: the servicer ASTRO, which featured a robotic manipulator, and a surrogate target satellite called NextSat. After completing a successful docking maneuver, refueling and the substitution of orbital replacement units could be demonstrated at a low level of autonomy.

Funded by DARPA and implemented by the Naval Center for Space Technology, the Spacecraft for the Universal Modification of Orbits (SUMO) (Bosse et al., 2004) was initially planned to be launched in 2010. The spacecraft is supposed to demonstrate the integration of machine vision and multiple robotic manipulators with autonomous control in order to perform rendezvous and grapple maneuvers for future spacecraft servicing operations in geostationary orbit.

The Orbital Life Extension Vehicle (OLEV) (Krenn et al., 2008) and the German Orbital Servicing Mission (DEOS) (Sellmaier et al., 2010) are currently under development at DLR. DEOS investigates technologies to autonomously and manually perform rendezvous and proximity operations as well as to capture a tumbling and uncooperative target satellite with a manipulator based on DLR's 7-DoF Lightweight Robot III (LWR-III). The implemented torque-controlled joints have previously been space-qualified during the Rokviss mission and allow the execution of a soft and reflexive grapple maneuver while anticipating the movement of the coupled satellite platform (Abiko et al., 2006). OLEV's purpose is to approach and dock with a depleted geostationary satellite utilizing a special capture tool for apogee motor insertion. Consequently, active attitude and position control can be performed for the coupled system, along with orbit transfer maneuvers such as relocation to a graveyard orbit for OLEV and controlled de-orbiting for DEOS.

Figure 1 shows a classification of all mentioned free-flying space-robotic systems and outlines the research focus within the framework of this paper. The composition is based on a breakdown into different possible OOS tasks; additionally, the respective grade of autonomy is considered. Missions including multiple aspects may be depicted more than once in Figure 1. Considering the results of the aforementioned complex OOS demonstrator missions that involve free-flyers, it becomes obvious that they are mostly applied in operations where the target is known in detail. Most of them were successful due to an autonomous approach, which had been monitored from the ground. Especially when the approached target's exact composition and state are


uncertain or not known at all, the remote environment cannot be precisely modeled for the predictive, dynamic computations used during close-proximity and grapple operations. Therefore, additional teleoperated inspector satellites with real-time visual feedback become very beneficial for gaining crucial target information. In addition to OOS missions, such inspectors were intended to be used for Space Transportation System (STS) and ISS maintenance and EVA support operations. Currently, there are no operational, free-flying inspection or maintenance support systems deployed in space. However, there have been several space- and ground-based demonstrations.

The Autonomous Extra-vehicular Robotic Camera (AERCam) (Choset et al., 1999), featuring a stereo vision camera system, was designed to provide astronauts and ground control with visual feedback of the Space Shuttle's and International Space Station's exterior. The prototype (AERCam Sprint) was first deployed aboard Columbia during STS-87 in 1997. While being teleoperated by an astronaut inside the Shuttle, it flew freely inside the forward cargo bay. Currently, the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) (Fredrickson et al., 2003) is being developed at NASA as the nano-satellite class successor of AERCam. It is supposed to deliver the capability for next-generation external inspection and remote viewing of human spaceflight activities, including ISS operations. Besides a teleoperation mode, the system is supposed to feature supervisory autonomous control with collision avoidance.

The Space Systems Laboratory at the University of Maryland is developing the Supplemental Camera and Maneuvering Platform Space Simulation Vehicle (SCAMP SSV) (SCAMP SSV, 2006), which demonstrates free-flying camera applications within a neutral buoyancy test bed. It provides a stereo video interface for teleoperated 3-DoF navigation up to full 6-DoF autonomous control.

Similar in size to SPHERES (cp. Chapter 2), the Personal Satellite Assistant (PSA) (Dorais and Gawdiak, 2003) was developed by NASA and is intended to act as a free-floating and autonomous intra-vehicular spacecraft. It is supposed to propel itself autonomously with eight small impellers. In contrast to the aforementioned OOS-related experiments, PSA would interact with the crew in a short-sleeve environment, with the main purpose of providing a remote-sensing and diagnosis platform for astronauts and ground control.

1.2 Work Overview

The possibilities for performing representative research on human-spacecraft interaction are limited on Earth, since almost all interactions are affected by gravity. Thus, full 6-DoF experiments cannot be representatively performed, and experiments are often reduced to 3-DoF. As described in the previous section, several ambitious missions have demonstrated key technologies and concepts for robotic operations in space. However, this often required development in the context of complex, space-proven1 systems, the challenges of which often restrict the scope of demonstration missions. Once the spacecraft is in orbit, the possibility of extending its functionality is limited.

In contrast, the MIT SPHERES test bed aboard the ISS utilizes three experimental satellites, designed to provide researchers with a long-term, replenishable, and upgradeable test bed for the validation of high-risk metrology, control, and autonomy technologies. It features easy abort-improve-repeat cycles, i.e. all experiments are observed by astronauts and can easily be aborted. After evaluating the results on the ground, the algorithms can be improved and the tests can be repeated in a subsequent test session. Moreover, re-programming the satellites allows the control algorithms to be changed with respect to varying test objectives.

This paper shows a case study of a teleoperated inspector satellite, as emphasized in Figure 1, which is supposed to support ISRA or OOS operations. Several tests have been performed aboard the ISS that together form a mission scenario for a satellite inspection robot and shed light on important aspects of such inspector operations.

• After initial proximity operations for approaching a target satellite, the inspector satellite will start its operations by orbiting around the target satellite to build or update a map (geometric model)

1Space is a harsh environment. Besides the constantly changing thermal conditions, spacecraft have, for example, to be designed and tested in consideration of the impacts of radiation and vacuum.


[Figure 1 classifies the missions ROTEX, Ranger, ETS-VII, DART, XSS-10/XSS-11, MiTEx, Orbital Express, Rokviss, SUMO, OLEV, DEOS, SCAMP, and Mini AERCam by OOS task (Assembly, Maintenance, Inspection, EVA support) and by grade of autonomy (automated, supervised autonomy, teleoperated), distinguishing systems flown on orbit from those in development or ground tests; the research focus of this paper is highlighted.]

Figure 1: A classification of OOS demonstrator missions

of the remote environment. The Circumnavigation experiment in Section 3 investigates interactions between the human operator and the controlled spacecraft.

• Such close proximity operations are always critical, and the likelihood of collisions has to be reduced. In addition to autonomous abort mechanisms, the effectiveness of a Manual Abort command is therefore evaluated in the following experiment.

• Once sufficient (map) data of the remote location and its physical properties is available, the inspector will move on to a target location. This can either be a certain instrument on the target satellite that has to be inspected, or an advantageous location for monitoring the docking approach of a servicer spacecraft to the target spacecraft. The resulting precise motion towards a commanded location is considered in the Human Navigation experiment.

• The inspector satellite will not only stay at a certain location but may have to change its position to optimize its view of the events in space. Thus, subsequent tests investigate Collision Avoidance techniques with varying levels of onboard autonomy.

• Since all operations in space will be subject to a time delay between operator command and spacecraft reaction (and the corresponding telemetry to the operator), the concluding experiments incorporate time delays for performing Delayed Human Navigation and Delayed Avoidance.
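The effect of such a communication delay can be emulated on the ground or in orbit by buffering operator commands and releasing them only after the simulated latency has elapsed. The following is a minimal, hypothetical sketch of this idea (class and command names are illustrative, not taken from the SPHERES flight software):

```python
from collections import deque

class DelayedCommandChannel:
    """Illustrative model of a constant-latency uplink: operator
    commands become visible to the spacecraft only `delay` seconds
    after they were issued."""

    def __init__(self, delay):
        self.delay = delay      # one-way command latency in seconds
        self.queue = deque()    # FIFO of (release_time, command) pairs

    def send(self, t, command):
        # Stamp each command with the time it will reach the spacecraft.
        self.queue.append((t + self.delay, command))

    def receive(self, t):
        # Deliver every command whose release time has passed.
        delivered = []
        while self.queue and self.queue[0][0] <= t:
            delivered.append(self.queue.popleft()[1])
        return delivered

channel = DelayedCommandChannel(delay=2.0)
channel.send(0.0, "+X pulse")
channel.send(0.5, "+Y pulse")
assert channel.receive(1.0) == []              # nothing has arrived yet
assert channel.receive(2.0) == ["+X pulse"]    # released at t = 2.0
assert channel.receive(3.0) == ["+Y pulse"]    # released at t = 2.5
```

Under such a delay the operator must anticipate the spacecraft's motion, which is exactly what the Delayed Human Navigation and Delayed Avoidance experiments probe.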


2 SPHERES

The Synchronized Position Hold Engage Reorient Experimental Satellites (SPHERES) facility is a high-fidelity test bed designed for developing and maturing algorithms for distributed satellite system concepts. The SPHERES program began as a design course in the Space Systems Laboratory (SSL) at MIT and developed over the years into a permanent robotic experiment aboard the ISS.

2.1 The SPHERES Test Environment

[Figure 2 (left) labels the main satellite components: pressure regulator, thrusters, hardware buttons, ultrasound sensors, pressure gauge, battery, and gas tank.]

Property                          Value
Diameter                          0.22 m
Mass (with tank and batteries)    4.3 kg
Max linear acceleration           0.17 m/s^2
Max angular acceleration          3.5 rad/s^2
Power consumption                 13 W
Battery lifetime                  2 h

Figure 2: Main components (left) and basic properties (right) of a SPHERES satellite
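The thrusters listed above are on/off valves driven by pulse-width modulation, as described below. As a rough, hypothetical sketch (function name and numbers are illustrative, not from the SPHERES flight code), a requested force along one axis can be converted into a valve on-time within one control period:

```python
def pwm_on_time(force_request, max_force, period):
    """Convert a requested force along one thruster axis into a valve
    on-time within one control period (simple PWM sketch). Requests
    beyond the achievable force saturate at a 100 % duty cycle."""
    duty = min(abs(force_request) / max_force, 1.0)
    return duty * period

# Illustrative numbers only: a 0.35 N request with 0.7 N available
# thrust yields a 50 % duty cycle, i.e. 0.1 s of a 0.2 s period.
assert pwm_on_time(0.35, max_force=0.7, period=0.2) == 0.1
assert pwm_on_time(1.4, max_force=0.7, period=0.2) == 0.2   # saturated
```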

SPHERES are football-sized nano-satellites; currently three are aboard the ISS and three on the ground. The SPHERES spacecraft control their position and attitude using a cold-gas system. Multi-phase CO2 is stored in a tank located inside each satellite. It is regulated to 25 psi and fed through an expansion capacitor to the 12 thruster valves, which are distributed over the surface and controlled via pulse-width modulation. Hardware buttons on a control panel are used to power and reset the satellite, initiate the bootloading of the test software, and enable the satellite for the tests. The basic properties of the satellites are summarized in Figure 2 (right).

The experimental satellites feature full 6-DoF control authority. The built-in navigation system consists of a custom pseudo-GPS based on ultrasound beacons and sensors. The beacons are located at the borders of the test volume, such as the walls of the Japanese Experiment Module (JEM). This enables the SPHERES to perform absolute state measurements. Given a priori knowledge of the beacon configuration, the onboard computer performs time-of-flight measurements and additionally uses three accelerometers and three gyroscopes to estimate its state. The sensor fusion is done by an Extended Kalman Filter (EKF), reaching a precision in the vicinity of 10^-3 m (Nolet, 2007).

SPHERES are currently used to mature space technology. Scientists on the ground and astronauts aboard the ISS initiate, monitor, and occasionally restart the experiments, as well as change consumables such as propellant tanks and batteries. In this way, SPHERES is a risk-tolerant test bed that can be used for robotic control in space. Complex tests can be performed in a representative environment under zero-gravity conditions and full 6-DoF control, without the danger of losing hardware in case the test conditions prove too challenging (Saenz-Otero, 2005). Since its commissioning in 2006, about 28 SPHERES test sessions, each featuring approximately 10-15 tests, have been executed aboard the ISS. The test sessions have included research on Formation Flight (Chung and Miller, 2008), Docking and Rendezvous (Nolet and Miller, 2007), Fluid Slosh, Fault Detection, Isolation, and Recovery (FDIR) (Pong, 2010), and general distributed satellite systems control and autonomy. Before being uploaded to the ISS, the flight experiment software is integrated and verified with the SPHERES test bed on a 3-DoF (two degrees of translational and one degree of rotational freedom) air-bearing table at the MIT SSL. The satellites are put on floating devices that are equipped with additional CO2 tanks, cp. Figure 3 (left). A research-oriented Graphical User Interface (GUI) provides detailed text-based state information about all involved satellites as well as custom telemetry, e.g. the activation of an integrated collision avoidance system. Figure 3 (right) shows the typical course of development for a flight experiment. After the initial scenario is implemented using a C++ based guest scientist program (GSP) (Enright and Hilstad, 2004), which uses a defined programming interface, the code can be tested and


[Figure 3 (left) shows a SPHERE on a floating device on the flat table. Figure 3 (right) outlines four development stages of increasing fidelity and decreasing accessibility:
1. Computer simulation: implementation in a simulated (ideal) environment; source code validation and error management; simulated human-machine interaction.
2. Hardware-in-the-loop ground testing in 3-DoF: implementation for real-time computation on actual SPHERES hardware; testing on the air-bearing table at MIT SSL with real human-machine interaction (repeatable); consideration of environment imperfections.
3. Flight testing on the ISS in 6-DoF: testing in a relevant environment on the International Space Station under zero-gravity conditions with full 6-DoF and minor imperfections (air flow); restricted experiment time, limited repeatability.
4. Real space hardware implementation.]

Figure 3: SPHERES put on floating devices on the SSL air-bearing table (left) and typical course of development for a SPHERES experiment implementation (right)

debugged within a MATLAB simulation environment (Radcliffe, 2002). Subsequently, hardware-in-the-loop experiments are performed in 3-DoF on the air-bearing table. This iterative approach inherently reduces risk and allows repeatability to be assessed while improving the reliability of the implemented scenario. The SPHERES are also equipped with an expansion port for adding new hardware for ground and, eventually, ISS testing. Thus, the test bed can be extended to incorporate tests and validations of new ISS hardware, such as computer vision based navigation equipment (Tweddle, 2010), or ground tests of other novel space subsystems (Varatharajoo and Kahle, 2005), (Varatharajoo et al., 2003). As Figure 3 (right) depicts, the accessibility decreases with increasing fidelity from computer simulation to space hardware.
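The pseudo-GPS metrology described in this section derives ranges from ultrasound time-of-flight measurements to beacons at known positions. As a simplified, hypothetical sketch of how a position fix can be obtained from such ranges, the following solves a planar (2-D) trilateration problem by Gauss-Newton least squares; the actual system estimates the full 6-DoF state with an EKF that also fuses accelerometer and gyroscope data, and all numbers here are illustrative:

```python
import math

SPEED_OF_SOUND = 340.0  # m/s, approximate value for cabin air

def tof_to_range(tof):
    """Convert an ultrasound time-of-flight measurement to a range."""
    return SPEED_OF_SOUND * tof

def trilaterate_2d(beacons, ranges, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton position fix from ranges to known beacons (2-D sketch)."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the 2x2 normal equations J^T J d = J^T r by hand.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by)       # predicted range
            if d == 0.0:
                continue
            jx, jy = (x - bx) / d, (y - by) / d  # Jacobian row (unit vector)
            res = r - d                          # range residual
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * res; b2 += jy * res
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det         # Cramer's rule solve
        y += (a11 * b2 - a12 * b1) / det
    return x, y

# Beacons at the corners of a 2 m square test area (cp. the JEM test volume).
beacons = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
true_pos = (0.5, 1.2)
ranges = [math.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in beacons]
x, y = trilaterate_2d(beacons, ranges, guess=(1.0, 1.0))
assert abs(x - 0.5) < 1e-6 and abs(y - 1.2) < 1e-6
```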

2.2 SPHERES aboard the International Space Station


Figure 4: A SPHERES experiment in the JEM under micro-gravity conditions (left) and the ISS-fixed coordinate frame as used by the astronauts (right): instead of signed axes ±{x, y, z}, the directions FWD/AFT (±X), STBD/PORT (±Y), and DECK/OVHD (±Z) are used.

Analogously to the SPHERES testing environment on the ground, the equipment aboard the ISS is composed of three nano-satellites, communications hardware, replenishable consumables (tanks and batteries), and an


astronaut interface. Figure 4 (left) shows the complete SPHERES setup in the JEM, where SPHERES experiments are currently conducted. Previously, the U.S. Destiny Laboratory (USLab) was used for SPHERES test sessions. The current test volume is approximately a 2 m cube.

A complete test session (TS) usually takes between two and three hours of crew time2. After a first introductory crew conference, the astronaut sets up and configures the hardware. An ISS-supplied standard laptop is used as a control station to upload new programs to the satellites, collect telemetry, and interact with the experiment. The provided simplified flight GUI is used for detailed crew instructions on the current experiment, for test initiation, and for the display of test results, which are also communicated down to the MIT SPHERES team. Besides the direct visual feedback of the scene right in front of the astronaut, i.e. the movement of the satellites, the current crew-satellite interaction during a test run is limited to keyboard commands and text-based feedback on the laptop screen. However, current developments aim at including an upgraded GUI with augmented virtual reality (VR) elements in order to allow for more advanced human interaction experiments (as depicted in the following section). In addition, crew feedback may be entered and, if required for experiment evaluation, customized follow-up questions have to be answered within the GUI. Usually the crew has to complete the following tasks during a test session:

i) First, the crew unstows all SPHERES hardware, sets up the ultrasound beacons, connects the commu-nication equipment to the laptop and runs the GUI. Depending on the status of the consumables thatare installed in the satellites, and the number of satellites that are used for the following experiments,it is sometimes required to exchange some batteries and tanks. All such actions are logged in the userinterface in order for the ground team to have an accurate overview about the quantity and quality ofall resources aboard ISS.

ii) After uploading the current experiment file to the satellites, the crew carefully reads the HTML-based experiment instructions embedded into the user interface. This includes a graphical description of the satellites' initial positioning in the test volume as well as an experiment synopsis including the demanded astronaut-satellite interaction and tasks. The astronaut also gets a basic idea of the scientific background of each experiment.

iii) After approximately positioning all involved SPHERES according to the instructions and enabling them by pressing a dedicated button on the satellites' control panel, the crew starts a test run by pressing the respective test number key in the GUI. Subsequently, the satellites initialize, locate themselves in the test volume, and move into their proper initial positions.

iv) Usually SPHERES experiments run completely autonomously with no crew interaction required. For the human-satellite interaction scenarios described in this publication, however, the crew had to issue keyboard commands to actively control the position and velocity of one SPHERE. After completing all steps listed in the crew instructions, test runs are usually terminated automatically if a set of certain parameters is met, e.g. a predefined position of the controlled SPHERE is reached. Alternatively, tests may also be stopped using a dedicated button in the GUI. The action chosen depends on whether or not all satellites react nominally. Numerical test results are displayed in the GUI and are communicated down to the ground team. Subsequently, it is decided whether to move on or to repeat the current test run in case of an obvious or potential failure.

v) After each test run, the crew has the chance to enter personal feedback into the GUI in order for the experiment scientist to better understand what happened, especially in case of off-nominal satellite behavior, which might be software, hardware, or human-machine interaction related. In addition, mandatory crew feedback can be implemented within each test, i.e. radio buttons or a verbal description of some detail of the experiment.

vi) Subsequently, the remaining tests are executed, either until all experiments within the test session are finished or the end of the scheduled crew time is reached. The SPHERES hardware is finally stowed away again.

2 The time in which astronauts are available for a specific research purpose.


Figure 4 (right) shows the ISS-fixed coordinate system used for orientation and crew-satellite interaction. Instead of ±X, Y, Z, the directions AFT, FWD (forward), PORT, STBD (starboard), OVHD (overhead), and DECK are used. During operations, the MIT SPHERES team monitors the experiments in real time by means of a high-bandwidth video and audio downlink. Instructions are communicated, upon request, to the astronaut performing the experiments. Downloaded telemetry and recorded video of the test sessions serve as a source for experiment evaluation and, if needed, subsequent algorithm improvement for follow-up experiments.

2.3 SPHERES Interact

A wide variety of applications exists for terrestrial telerobotics. The spectrum comprises underwater telerobotics (Ridao et al., 2007), search and rescue operations (Ruangpayoongsak et al., 2005), minimally invasive surgery (Ortmaier, 2007), and unmanned aerial vehicles (Chandler et al., 2002), amongst others. The control architectures can usually be distinguished by whether a direct control, shared control, or supervisory control strategy is applied (Niemeyer et al., 2008). Direct control allows the operator to define the robot's motion, whereas the precision of a rate-controlled robot is higher than under acceleration control (Massimino et al., 1989). A shared control architecture utilizes local sensory feedback loops with varying levels of operator assistance. The use of virtual elements in this context has proven to increase the accuracy and consolidate the task performance (Abbott et al., 2007). Utilizing supervisory control (Sheridan, 1992), the operator usually issues high-level commands and receives a summary of the actions in terms of sensory information. If the control architectures are subject to a time delay in the communication channel, they can suffer from serious instabilities (Hirche and Buss, 2007) (Munir and Book, 2003), for which several compensation techniques such as wave variables (Tanner and Niemeyer, 2006) or time domain passivity (Ryu et al., 2005) were developed.

Most of the previous SPHERES test sessions on the ISS matured autonomous algorithms. Future servicing missions and the assembly of complex space structures will depend on increased autonomy. However, the ability of humans to provide high-level oversight and task scheduling will always be critical. In addition, elements of shared and supervised control techniques have to be evaluated to support free-flyer operations. Thus, a complex test environment for demonstrating advanced concepts for human-spacecraft interaction was set up on the ground in this context (Stoll and Kwon, 2009).
The SPHERES hardware is synchronized with a virtual reality entity of the test environment. The human operator is capable of controlling the experimental satellites in 3-DoF by means of a force feedback joystick (the Novint Falcon), which features three translational degrees of freedom. That way, the satellites can be enveloped by virtual obstacles (cp. Figure 5) and collision avoidance or fuel-optimal paths can be made perceptible by utilizing force feedback that is easy for the operator to understand (Stoll et al., 2010). The actual state as well as the commanded state are visible in the virtual environment, and time delays can be introduced in the communication channel. However, high-fidelity feedback is not yet available to the human operator aboard ISS, and fundamental research had to be conducted using the ISS laptops only as a means of human-machine interaction. Experiments were designed to be executable with keyboard commands and visual feedback (motion of the satellites) only. That way, experiments were conducted to develop and advance algorithms for adjustable autonomy and human-spacecraft interaction. The research began with basic tests during Test Session 11, where the crew was asked to move a satellite to multiple corners of a predefined volume. The satellite autonomously prevented collisions with the walls of the ISS. The test demonstrated the ability of the crew to use the ISS laptop to control SPHERES. It provided baseline results for future tests. An ongoing sequence of ISS tests is being conducted in the framework of the SPHERES Interact program, including upgrading the SPHERES GUI to provide visual feedback about the satellites' state. The goal of the program is to conceive new algorithms that utilize both human interaction and machine autonomy to complete complex tasks in a 6-DoF environment. The research area comprises human orientation, navigation, collision avoidance, and interaction with autonomy under the influence of a time delay in the communication channel.
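As a brief illustration of the wave-variable technique cited above (Tanner and Niemeyer, 2006): force and velocity signals are encoded into wave variables before transmission over the delayed channel, so that the channel transports only bounded wave power and remains passive regardless of the delay. The sketch below shows just the encoding identity; the numeric values and the wave impedance b are illustrative.

```python
import math

def wave_encode(force, velocity, b=1.0):
    """Encode a (force, velocity) pair into forward/backward wave variables.
    The identity force*velocity == (u**2 - w**2)/2 means the channel only
    ever carries bounded wave power, which keeps a delayed link passive."""
    u = (b * velocity + force) / math.sqrt(2.0 * b)
    w = (b * velocity - force) / math.sqrt(2.0 * b)
    return u, w

F, v = 0.4, 0.05  # illustrative commanded force [N] and velocity [m/s]
u, w = wave_encode(F, v)
print(abs(F * v - (u ** 2 - w ** 2) / 2) < 1e-12)  # → True
```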


Figure 5: SPHERES ground test environment with haptic-visual feedback: the synchronized virtual entity (left) and the hardware entity (right)

3 Experiments aboard the International Space Station

This section presents eight experiments, which were performed in the framework of three different SPHERES test sessions aboard the ISS. All of the tests required astronauts to control SPHERES satellites under zero-gravity conditions.

• TS19 took place on August 27, 2009 and lasted approximately 3.75 hours. Amongst other tests, twoSPHERES Interact experiments were performed.

• TS20 took place on December 05, 2009. A total of 28 tests were run in 2.5 hours. Four testsconcerned the SPHERES Interact program.

• TS24 was executed on October 07, 2010. The session contained a total of 18 tests, of which 8 were SPHERES Interact tests.

Table 1 summarizes the research objectives of the tests performed in the framework of SPHERES Interact. The results of the tests are presented in this section. The experiments evaluated the potential benefits of a human-controlled inspector satellite. Hence, each of the eight tests in this case study tries to shed light on a partial aspect of an inspector satellite mission supporting ISRA or OOS. The availability of crew time is very limited, and so are the consumables (battery packs, CO2 tanks) available for SPHERES, which cannot be recharged or refilled on ISS but have to be brought to orbit. Thus, each experiment is usually only performed once. The small sample size can influence the statistical significance of the results. Therefore, the SPHERES tests are usually designed in such a way that the main objective evaluated in one test is re-verified as a secondary objective or sub-goal in a later test. Table 1 emphasizes this approach for the experiments presented here. Each test has a different objective. However, most of the tests share a common intersecting set of objectives, which permits drawing and re-verifying the conclusions.

Table 1: Executed experiments aboard ISS and their respective research objectives

Test               TS   Research objectives
                        (Orientation, Dist. est., Performance, Coll. avoid., Delayed, Autonomy)
Circumnav.         19   x x x
Manual Abort       19   x x x x x
Human Nav.         20   x x x
Manual Avoid.      20   x x x x
Shared Avoid.      20   x x x x
Supervised Avoid.  20   x x x x
Delayed Nav.       24   x x x x
Delayed Avoid.     24   x x x x x

3.1 Human Task Performance Evaluation

The strategy for evaluating the human task performance is directly connected to the requirements derived from teleoperated satellite scenarios for OOS and ISRA missions. Since propellant is in general the main limiting factor for space missions, fuel usage was chosen as one of the metrics to evaluate the human task performance. Especially for OOS and ISRA missions, the time window3, i.e. the time available for communicating with an orbiting spacecraft from the ground, is another major constraint for interactive on-orbit operations. The time until a specific positioning task is accomplished will therefore also contribute to the test metrics. Time and fuel consumption are opposing factors, since both cannot be minimized at the same time. The crew therefore has to find a trade-off between both factors in order to maximize the task performance. Further, the accuracy in positioning the inspector satellite, i.e. the distance to the target position as well as its residual velocity, is incorporated into the human task performance considerations. The latter is of importance for collision avoidance purposes. Therefore, the overall performance P for an inspector satellite positioning task is composed of the four performances for time Pt, fuel Pf, distance Pd, and velocity Pv.

P = Pt + Pf + Pd + Pv (1)

They are individually evaluated using a calibration value cali and a reference value iref, where i ∈ (t, f, d, v).

P = (1 − calt(ttask − tref)) + (1 − calf(ftask − fref)) + (1 − cald|dfinal − dtarget|) + (1 − calv|vfinal|) (2)

The individual performances are thus standardized to a magnitude of 1 if there is no deviation from the respective reference value. The reference values are constant throughout each test setup; for time and fuel they derive from the average data within similar tests. The reference values for distance and final velocity are predefined and will be stated at the bottom of each evaluation table. The calibration factors cali represent the performances' derivative with respect to the measured values and therefore consider their deviation from the respective reference values.

cali = αi / iref, i ∈ (t, f, d, v) (3)

The factors αi allow for applying different weights to the individual performance criteria. Figure 6 shows a graphical representation of the individual performance functions in Eq. 2 depending on the values obtained in the tests. The time performance Pt approaches the value one if the time used for the task is equal to the reference time. If it exceeds the reference time, Pt will decrease. Likewise, if the task time is better than the reference, Pt will increase. The fuel performance Pf is evaluated analogously and can exceed the value one if the fuel consumption is better than the reference. In contrast, the task of locating the inspector at a given distance to the target cannot be overperformed. Deviations from the target distance will, no matter if positive or negative, always result in a decreased distance performance Pd, as Figure 6 shows. Furthermore, the performance Pv, which takes into account the velocity, will be at its maximum if there is no residual velocity.

According to Eq. 2, an overall task performance of 4 means that the performance values for time and fuel coincide with their reference values and that the satellite is positioned with no deviation from its intended target position and no residual velocity. This implies a strong task performance in the respective experiment. Values below 4 reflect a decreased task performance, whereas values above 4 show an overperformance.

3 This matter will be explained in more detail in section 3.9.

Figure 6: Performance criteria evaluation for human-satellite interaction with SPHERES
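The scoring of Eqs. 1–3 can be written out as a short sketch. The function below is a minimal reimplementation of the metric; the sample call reproduces the Beacon 1 row of Table 2 (section 3.4), which the text reports as an overall performance of 4.1.

```python
def task_performance(t, f, d, v, t_ref, f_ref, d_target, d_ref, v_ref,
                     a_t=1.0, a_f=2.0, a_d=1.0, a_v=1.0):
    """Overall performance P = Pt + Pf + Pd + Pv per Eqs. (1)-(3):
    each criterion equals 1 at its reference value, and the calibration
    factor cal_i = alpha_i / i_ref sets the slope of the penalty."""
    p_t = 1 - (a_t / t_ref) * (t - t_ref)
    p_f = 1 - (a_f / f_ref) * (f - f_ref)
    p_d = 1 - (a_d / d_ref) * abs(d - d_target)
    p_v = 1 - (a_v / v_ref) * abs(v)
    return p_t + p_f + p_d + p_v, p_d + p_v

# Beacon 1 approach (Table 2): 116.6 s, 2.57 % fuel, 39.2 cm, 1.1 cm/s
p, p_dv = task_performance(116.6, 2.57, 39.2, 1.1,
                           t_ref=126, f_ref=4.5, d_target=30, d_ref=30, v_ref=2)
print(round(p, 1), round(p_dv, 1))  # → 4.1 1.1
```

The same call with the Beacon 4 and Beacon 2 rows yields the tabulated 1.3 and 2.6 overall performances, so the metric is fully determined by Eqs. 1–3 and the reference values at the bottom of Table 2.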

3.2 Circumnavigation

Within this test, elements of human supervision were implemented into the SPHERES test bed for the first time. This so-called supervised autonomy aims at providing the human operator (here the astronaut) with the possibility of direct interaction with the SPHERES. High-level commands are given by the operator and executed by the satellites autonomously.

A map building or updating task of an inspector satellite was simulated in this experiment, in which the astronaut controlled an inspector that orbited around a target satellite. Since no feedback on the satellite states was available from the flight GUI, the operator's direct observation of the satellite motion was the only way to receive state feedback. No augmented or virtual reality techniques supported the astronaut, and the human decision-making process was based purely on the motion of the inspector satellite. This experiment aimed in particular at testing the astronauts' spatial orientation and judgment of motion in three-dimensional space. It was evaluated whether a human operator is capable of recognizing motion patterns (orbits) and how precisely they can be controlled. Note that since there was no manual control of the satellite position in this experiment, the metrics of Eq. 1 will not be applied.

The test setup was as follows. After an initial positioning period of the two SPHERES, the primary satellite (SPH1), the inspector, flew around the secondary (target) satellite (SPH2) in a plane parallel to DECK, constantly pointing with its Velcro side4 (-X) as an imaginary camera at SPH2. The task of the astronaut was to align this camera with the Z-axis of the target, i.e. with either the pressure regulator or the tank of the target (cp. Figure 2), which served as imaginary target instruments. The test involved two phases,

i) an inclination change and

4 Velcro is used as a simple docking mechanism.


ii) an alignment adjustment of the inspector.

While the inspector was permanently orbiting the target satellite, keyboard commands had to be used to change the inclination of the inspector's orbit plane to 90◦. The DECK served as the reference plane in this connection. After reaching this final orbit, the goal of the first phase was to stop the inspector at a position roughly aligned with either the regulator or the tank of the target. In the second phase, this rough positioning was to be adjusted to a precise alignment.

Figure 7 (left) depicts the motion of SPH1 in 3D and points out the start of the inclination change after an initial convergence of the EKF and the positioning of both SPHERES. The target satellite autonomously stayed at the origin of the coordinate system, i.e. the center of the experiment volume. The inclination increments for the inspector were not constant but depended on the actual inclination. The closer the actual inclination i was to the 90◦ inclination (final orbit), the smaller the increments of i were, which should allow the astronaut to change the inclination precisely. This was supposed to yield data on the accuracy of the human-controlled orbit inclinations. Figure 7 (right) shows the inclination of the inspector with respect to DECK. The inclination was changed to a maximum of 75◦, but the target inclination of about 90◦ was never reached. It was difficult for the astronauts to recognize the orbiting plane of SPH1 and to reference its inclination to DECK. The crew feedback stated that "it was a little difficult to understand the use of [...] commands to control inclination and alignment. Initially, the movement of the primary satellite seemed to move in a plane parallel to the deck, then moved more vertical. It was difficult to predict with the two keys and the continual motion of the primary satellite when the plane was perpendicular to deck."

On the one hand, it seemed to be complicated to recognize a 3D path since the motion of the SPHERES is comparably slow (approximately 2 cm/s). On the other hand, the change of inclination did not happen as a step; instead there was a transition phase of the inspector from one orbit inclination to the other. This motion superimposed the actual circumnavigation and made it difficult for the astronaut to judge the actual inclination. Therefore, this maneuver was not stopped with a rough alignment of SPH1 with the Z-axis of SPH2 (inspector "above" or "under" the target), but instead to the side. This can be seen in Figure 7 (left) as the region in which the alignment adjustment of SPH1 took place. The following alignment adjustment of SPH1 confused the astronauts since they did not see a distinct motion of the inspector in reaction to the keyboard commands. The crew stated that "For the alignment adjustment, it was not clear what the satellite response was to the [...] command[s]." This is due to the fact that the alignment adjustment incremented only 1◦ per keyboard command. This command was intended to allow precise repositioning of the satellite in 3D but instead made the control appear unresponsive.

Even though this test was not operated as expected, it yielded very valuable information on the human interaction with SPHERES. It is evident that motion patterns are hard to recognize for a human supervisor, since the SPHERES velocity is comparably slow. The orientation of a SPHERES circumnavigation orbit is difficult to judge in three dimensions. Therefore, re-orientations of orbits or motions to change an orbit should be executed and evaluated by low-level autonomous processes instead of high-level human commands. Further, precise positioning (within the magnitude of 1◦) of a SPHERES satellite is too complicated for the human supervisor to judge, since reference points are not easily utilizable in three-dimensional space and the commanded SPHERES motion is superimposed by random noisy motion.

3.3 Manual Abort

While the inspector satellite is orbiting the target satellite to support the ground operator's decision-making process, it is crucial to monitor the distance between the two satellites. Collision risk is usually minimized by autonomous algorithms, which monitor the relative positions. Since these algorithms are not always fail-safe, as the DART collision showed (cp. section 1.1), the efficiency of a human-controlled abort command is evaluated in the following experiment. The incorporation of a manual abort capability into mission operations can yield an additional safety layer, but false-positive abort commands within an inspection task can have a significant time and fuel penalty associated with moving the satellite to a safe position and restarting proximity operations. Accordingly, the test metric to be evaluated in this test is the number of false abort commands.


Figure 7: 3D motion of the inspector satellite around the target at the center of the test volume (left) and the commanded inclinations of the inspector over time (right)

Human perception of a potential collision was evaluated in this experiment. After the two satellites underwent an initial positioning period, the inspector (SPH1) orbited around the target (SPH2) in a plane parallel to DECK. It was constantly pointing with its Velcro side (again the imaginary camera system) at the target satellite. A series of simulated errors occurred, which caused the inspector to decrease its orbiting radius and thus its absolute distance to the target satellite. It was the astronaut's task to observe the motion and detect possible collisions. In case a potential threat to the satellites was detected, a keyboard command was used to initiate an abort maneuver. Following an abort command, SPH1 increased its distance by returning to the original formation.

Six possible collision maneuvers were implemented into the test, each with a different relative position of SPH1 to SPH2, to test how the spatial positions of both satellites influence the astronaut's judgment of distance. In order not to provoke an actual collision between the satellites, an autonomous abort procedure was also implemented into the test, which was executed whenever the distance between both satellites fell below a certain threshold.

Figure 8 (left) shows the maneuvers which caused SPH1 to decrease its orbiting radius. Out of those 6 maneuvers, 2 were aborted manually. The other 4 were terminated by the autonomous abort maneuver. Figure 8 (right) depicts the absolute distance between the SPH1 and SPH2 (center) positions over time. Keeping in mind that the SPHERES feature a span between 21 cm and 23 cm, it can be deduced that the astronauts were able to judge relative distances between both satellites very well and that distances of 25 cm to 26 cm were not considered a potential threat of a collision. There was no manual abort initiated until the distance amounted to less than 24 cm.

This demonstrated that human judgment of distances in the three-dimensional environment is much better than the judgment of attitude. Further, the abort command was only used in the two described incidents, meaning that no false positives were initiated. The crew stated that "The satellites separated on their own except for two times when the [...][abort] command was selected. The satellites were very responsive to the command." The test suggests that a human supervisor with a third-person perspective has a good understanding of when a proximity maneuver of a SPHERE constitutes a possible threat to another spacecraft. With this configuration, an operator abort capability could add safety and robustness to an autonomous mission.
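The two-layer abort logic of this test (a manual keypress plus an autonomous distance threshold) can be sketched as follows. The 24 cm threshold is illustrative, taken from the observed crew behavior rather than from the actual onboard value, which is not stated in the text.

```python
import math

def abort_decision(r_inspector, r_target, manual_abort_pressed, d_threshold=0.24):
    """Returns 'abort' if the crew pressed the abort key, or if the
    inter-satellite distance (meters) fell below the safety threshold;
    an abort sends SPH1 back to its original formation."""
    if manual_abort_pressed or math.dist(r_inspector, r_target) < d_threshold:
        return "abort"
    return "continue"

print(abort_decision((0.0, 0.0, 0.0), (0.30, 0.0, 0.0), False))  # → continue
print(abort_decision((0.0, 0.0, 0.0), (0.20, 0.0, 0.0), False))  # → abort
```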


Figure 8: Absolute distance between the inspector and the target satellite (left) and the motion of the inspector in the x-y plane (right)

3.4 Human Navigation

After sufficient data of the proximity environment is available from the initial scanning procedures, the observer satellite can approach special areas of interest under manual control. For an inspection, the satellite may examine fine details of an area of interest; for docking, an observer satellite can closely monitor the final berthing phase to provide more accurate distance measurements.

The human navigation experiment simulated a close approach maneuver of an inspector to a target location using manual keyboard commands. The five global metrology beacons served as distinct locations to be inspected and were supposed to be approached to a distance of about 30 cm in an arbitrary sequence until the satellite exhausted its propellant.

Figure 9 (left) shows the motion of the inspector in the test volume and the actual beacon positions (right). The inspector started in the middle of the test volume and was first controlled by the crew to approach beacon number 1 in the upper back right corner. Here, the minimum distance between the inspector and the approached beacon was 39.2 cm. Beacon 4 was the second target of the crew, and the minimum distance amounted to 31.0 cm. The trend of distances indicates that the crew appeared to get more confident in approaching the targets and came closer towards the demanded 30 cm distance. The final target, before the tank depleted, was beacon 2. Here the crew was able to reach a distance of 28.0 cm. This test again showed the crew's ability to judge distances in 3D space very well, which seconds the test results of the Manual Abort tests in TS 19.

The crew did not directly maneuver from one beacon to the other (e.g. directly from beacon 1 to beacon 4). Before approaching the new target, the inspector was steered back to the middle of the test volume. These re-centering maneuvers can also be seen in Figure 10, which shows the background telemetry5 of the inspector satellite. The position coordinates as well as the velocities are depicted.
The x, y, and z position of SPH1 were close to zero in the maneuvers at t = 210 s (after beacon 1), at t = 340 s (after beacon 4), and again at t = 420 s (after beacon 2, shortly before the tank depleted). Here the absolute velocity of the inspector was close to zero as well.

Table 2 summarizes the performance criteria and the performance of the three beacon approaches conducted in this experiment. The re-orientation phase for beacons 4 and 2 in the middle of the test volume increases time and fuel6. Hence, the overall performance (1.3 and 2.6) decreases compared to beacon 1, which with Poverall = 4.1 constitutes a slight overperformance. The reference values for evaluating the respective performances are listed at the bottom of Table 2; tref and fref are the means of time and fuel during the tests. dtarget = 30 cm is the target distance to SPH2, whereas dref = 30 cm was chosen for this and all following positioning tasks to allow comparability between tests with different target distances. Further, vref = 2 cm/s is a heuristic value for an acceptable residual velocity. The weighting factors αi are constant for all tests and were chosen to put more emphasis on the fuel consumption since this is the most crucial aspect in space. The performances Pd,v = Pd + Pv are listed separately to allow comparability with subsequent tests, for which fuel and time values will differ since the task has changed.

5 Background telemetry is available for all SPHERES during the test sessions. Not only the state vectors, but also the attitude in the form of quaternions and angular rates are logged.

6 The fuel consumption is stated as a fraction of the overall tank volume.

Figure 9: 3D motion of SPH1 in the test volume when approaching the beacons (closest approaches: beacon 1 at 39.2 cm, beacon 4 at 31.0 cm, beacon 2 at 28.0 cm)

Figure 10: Background telemetry of the human navigation test
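The minimum approach distances quoted above (39.2 cm, 31.0 cm, 28.0 cm) can be extracted from the logged background telemetry as the closest point of the position trace to each beacon. A minimal sketch; the trajectory samples and beacon coordinates below are purely illustrative, not the ISS values.

```python
import math

def min_approach(trajectory, beacon):
    """Closest distance (same units as the inputs) between a logged
    position trajectory and a fixed beacon location."""
    return min(math.dist(p, beacon) for p in trajectory)

# illustrative telemetry samples [m] and a hypothetical beacon position
traj = [(0.0, 0.0, 0.0), (0.30, 0.25, 0.20), (0.55, 0.45, 0.38)]
print(round(min_approach(traj, (0.80, 0.65, 0.55)), 3))  # → 0.362
```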

3.5 Collision Avoidance Steering Law

The following tests evaluated the interaction between a human-controlled spacecraft and autonomous collision avoidance mechanisms. This section briefly describes the collision avoidance steering law (Katz et al., 2011). It operates on the closest point of approach (CPA), defined as the point in space and time in a relative trajectory at which two objects are closest. For two satellites, starting at time t = t0, the motion of satellites 1 and 2 is assumed to continue along the current velocity direction. Defining the relative position from satellite 1 to satellite 2, (x2 − x1), as r12 and the relative velocity, (ẋ2 − ẋ1), as u12, the time evolution of the relative


Table 2: Human Navigation Performance

Target     time      fuel     distance   final velocity   Pd,v   Poverall
Beacon 1   116.6 s   2.57 %   39.2 cm    1.1 cm/s         1.1    4.1
Beacon 4   169.4 s   6.23 %   31.0 cm    3.04 cm/s        0.4    1.3
Beacon 2   91.8 s    4.64 %   28.0 cm    3.06 cm/s        0.4    2.6

tref = 126 s, fref = 4.5 %, dref = 30 cm, dtarget = 30 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2

position is

r12(t) = r12(t0) + u12(t0) t. (4)

For clarity, the time index and subscripts will be omitted from this point forward. All values can be assumed to be from the perspective of satellite 1 at t = t0 unless otherwise specified. Taking the squared magnitude of the relative position and minimizing with respect to time gives the time at closest point of approach, tCPA.

d² = r(tCPA)ᵀ r(tCPA) (5)

(d/dt) d² = 2(rᵀu) + 2 tCPA (uᵀu) = 0 (6)

tCPA = −(rᵀu) / (uᵀu) (7)

The distance at closest point of approach, dCPA, can be calculated by evaluating Eq. 5 at tCPA and taking the square root.

dCPA = √(r(tCPA)ᵀ r(tCPA)) = √(rᵀr + (rᵀu) tCPA) (8)

Potential collisions can be identified by examining the pair (dCPA, tCPA). If tCPA > 0 and dCPA < da, where da is a critical distance threshold, then the avoidance controller should be activated to avoid the collision. The collision avoidance controller steers a pair of satellites away from a potential collision by commanding a change in velocity that increases the magnitude of dCPA. To minimize the required velocity correction, the thrust is directed along the gradient of dCPA with respect to the satellite's current velocity, ẋ1.

∂dCPA/∂ẋ1 = (∂/∂u) √(rᵀr + (rᵀu) tCPA) · (∂u/∂ẋ1)
           = −(1 / (2 dCPA)) (tCPA rᵀ + (rᵀu) ∂tCPA/∂u) (9)

∂tCPA/∂u = (1 / (uᵀu)²) (2(rᵀu) uᵀ − (uᵀu) rᵀ) (10)

The gradient of dCPA can also be used in a linear approximation to select the thrust magnitude. Assuming that the satellite provides an impulsive change in velocity along gᵀ = ∂dCPA/∂ẋ1 with some magnitude, k, the approximation for dCPA is given by Eq. 11. After specifying the desired dCPA target, dt, the resulting thrust magnitude is calculated from Eq. 12.

dCPA = dCPA,0 + (gᵀg / ‖g‖) k (11)

k = (dt − dCPA,0) / ‖g‖ (12)
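The steering law of Eqs. 4–12 can be condensed into a short sketch. This is a simplified reimplementation, not the SPHERES flight code; note that the velocity correction is applied to satellite 1, so the relative velocity u = ẋ2 − ẋ1 changes by the negative of the correction.

```python
import math
import numpy as np

def cpa(r, u):
    """Time and distance at closest point of approach (Eqs. 7 and 8)
    for relative position r and relative velocity u."""
    t_cpa = -float(r @ u) / float(u @ u)
    d_cpa = math.sqrt(float(r @ r) + float(r @ u) * t_cpa)
    return t_cpa, d_cpa

def avoidance_dv(r, u, d_target):
    """Impulsive velocity change for satellite 1 along the gradient of
    d_CPA with respect to its own velocity (Eqs. 9-12)."""
    t_cpa, d_cpa0 = cpa(r, u)
    uu = float(u @ u)
    dtcpa_du = (2.0 * float(r @ u) * u - uu * r) / uu**2          # Eq. 10
    g = -(t_cpa * r + float(r @ u) * dtcpa_du) / (2.0 * d_cpa0)   # Eq. 9
    k = (d_target - d_cpa0) / np.linalg.norm(g)                   # Eqs. 11-12
    return k * g / np.linalg.norm(g)

# near head-on geometry: predicted miss distance ~0.1 m, target 0.5 m
r = np.array([1.0, 0.0, 0.0])
u = np.array([-1.0, 0.1, 0.0])
dv = avoidance_dv(r, u, d_target=0.5)
_, d_new = cpa(r, u - dv)  # satellite 1 thrusts, so u decreases by dv
print(round(cpa(r, u)[1], 2), round(d_new, 2))  # → 0.1 0.46
```

Since Eq. 11 is only a linear approximation, the corrected miss distance (0.46 m) lands slightly short of the 0.5 m target; a real controller would re-evaluate the CPA pair on the next control cycle.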


3.6 Manual Avoidance

The three following tests of Manual, Shared, and Supervised Avoidance consider an inspector satellite whichhas to be moved from one observation point to another with an obstacle – the target satellite – in the path.The goal of the tests was to evaluate collision avoidance techniques with varying levels of onboard autonomy.A pure autonomous collision avoidance controller can be counter-productive for a human operator. It takesaway the user’s control capability and might steer the inspector into another direction than the operatoranticipated. This counteraction is hypothesized to have a noticeable impact on the task performance of theoperator. Thus, the three tests were designed to benchmark different collision avoidance techniques withincreasing autonomy of the onboard controller.The crew’s general task was identical in the three tests. The inspector (SPH1) was guided by use of thekeyboard commands, to a position 50 cm FWD of the target (approx. 0.8 m FWD of the center of the testvolume) and stop with as little velocity as possible behind the target satellite (SPH2). Figure 11 shows thegeneral test setup and the anticipated motion of the satellites. There was no autonomous collision avoidance(CA) involved in the Manual Avoidance test, in which the crew had to take care of the CA manually,while guiding the inspector with 3 translational degrees of freedom. Autonomous CA was introduced in theShared Avoidance test, in which the crew had to guide the inspector in 3 degrees of freedom again. Thethird Supervised Avoidance test utilized the autonomous CA but the crew was only enabled to control theinspector in 1 degree of freedom, towards and away from the target satellite.Figure 12 (left) shows the projected motion of the inspector satellite into the x-y and the x-z plane for theManual Avoidance test. 
It can be seen that the crew maneuvered the inspector with sufficient distance to SPH2 into the target area with a 30 cm radius around the target position (50 cm FWD of SPH2). The test was automatically terminated as soon as SPH1 was positioned in the target area with an absolute velocity of less than 2 cm/s. The inspector was at first controlled to a position in close vicinity (in x direction) of the target area's center. Afterwards, however, the position was changed to closer proximity to SPH2. The distance between the SPHERES satellites in x direction was the most difficult to judge since SPH2 was in the line of sight at that point in time.

Table 3 shows the results of this test with purely manual CA. The crew needed 79 seconds to accomplish the task. The final distance between the inspector and the target position was 22 cm, whereas the final velocity amounted to 1.41 cm/s. 3.32% of the propellant tank was used in this test. There was no collision between the inspector and the target. The test showed that the crew was capable of judging relative distances (continuing to support the results of test session 19), and the crew successfully avoided collisions at all points in the test.

Figure 11: Initial setup of the collision avoidance tests, including an inspector satellite (SPH1) and a target satellite (SPH2) as obstacle; the axes are labeled FWD/AFT and PORT/STBD


3.7 Shared Avoidance

For this test, the autonomous CA was activated whenever the relative distance between the two satellites was smaller than 40 cm, moving the satellite away from a potential collision. Figure 12 (middle) shows the motion projected into the x-y and x-z planes. The CA area, in which the autonomous CA algorithm controlled SPH1, is made visible next to the target area. Figure 12 (middle) shows that a direct path from the starting to the target position was executed. The motion was efficient, considering the length of the path. There was almost no change in y direction. Furthermore, there was no overshooting in x direction, i.e., the x coordinate was monotonically increasing until SPH1 was stopped in the target area. Another interesting fact, considering human-spacecraft interaction, is that for manual avoidance the crew chose a downward motion (change in z position) and not a sideways motion, as is common on Earth.

Table 3 summarizes the results in comparison to the Manual Avoidance test. It shows that the crew was confident with the shared CA algorithms. The task execution time amounted to 24 seconds, which is less than one third of the time required when the crew alone had to take care of CA (Manual Avoidance). The final distance from the target position amounted to 21 cm, which can be considered equal to the value of the first test. Further, the implementation of the shared CA enabled the crew to concentrate on the final velocity. At 0.27 cm/s, the final velocity was almost zero. The fuel consumption was halved in comparison to the first test, and no collisions between SPH1 and SPH2 took place. The crew utilized and trusted the shared CA, which resulted in a faster task performance and less fuel consumption. As can be seen in Figure 12 (middle), the path of SPH1 intersected the CA area and the autonomous control overruled the manual control in that area, but this did not result in a distinct change of the crew control.
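The shared mode can be sketched as a simple arbitration rule: manual commands pass through unless the inspector is inside the 40 cm CA radius around the target, in which case an avoidance controller takes over. The repulsive law and gain below are illustrative assumptions; the actual onboard CA algorithm is not reproduced in this excerpt:

```python
import numpy as np

CA_RADIUS = 0.40  # m, activation distance of the autonomous CA

def shared_control(manual_cmd, r_inspector, r_target, gain=0.5):
    """Force command under shared collision avoidance (illustrative).

    Inside the CA sphere, a repulsive controller pushes the inspector
    away from the target; outside, the operator has full authority.
    """
    rel = np.asarray(r_inspector) - np.asarray(r_target)
    dist = np.linalg.norm(rel)
    if dist < CA_RADIUS:
        # Autonomous CA overrules the manual command, moving the
        # satellite away from the potential collision.
        return gain * (CA_RADIUS - dist) * rel / dist
    return np.asarray(manual_cmd, dtype=float)
```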

3.8 Supervised Avoidance

Figure 12: Motion of the SPHERES around the obstacle during the Manual Avoidance (left), the Shared Avoidance (middle), and the Supervised Avoidance (right) experiments; the plots mark the target area, the collision avoidance area, the path, and the final position

The collision avoidance responsibilities were shared between the crew and the machine autonomy in this test. Again, the crew was asked to position SPH1 approx. 50 cm FWD of SPH2, but the crew was only allowed to control SPH1 in the x (FWD/AFT) direction. An autonomous position controller for the y- and z-axis controlled both coordinates to zero, and the CA technique could interrupt all three control channels.
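The axis partition described above (manual x, autonomous y/z centering, CA able to interrupt all channels) can be sketched as follows; the gains and the CA interface are hypothetical:

```python
import numpy as np

def supervised_control(manual_x, state, ca_active, ca_cmd, kp=1.0, kd=2.0):
    """Axis-wise command allocation for the supervised avoidance mode.

    state: dict with position 'r' and velocity 'v' (3-vectors).
    manual_x: operator input along FWD/AFT only.
    ca_active/ca_cmd: output of a separate CA algorithm (assumed given).
    """
    if ca_active:
        return np.asarray(ca_cmd, dtype=float)  # CA interrupts all axes
    r, v = np.asarray(state["r"]), np.asarray(state["v"])
    cmd = np.zeros(3)
    cmd[0] = manual_x                # crew controls x only
    cmd[1] = -kp * r[1] - kd * v[1]  # autonomous centering in y
    cmd[2] = -kp * r[2] - kd * v[2]  # autonomous centering in z
    return cmd
```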


The activation of the CA algorithm can be derived from the characteristics of the velocities in Figure 13. Between 80 s and 100 s, frequent changes in velocity can be seen, which means that the crew was steering the inspector towards the target satellite (in the x direction) while the autonomous CA became active and controlled position and velocity whenever a possible collision was detected. This can also be seen in Figure 12 (right), where the projected motion is depicted. Inside the CA area the motion is jagged, as the collision avoidance controller alternates with the y and z centering controllers. In contrast, outside the CA area the approach of the inspector towards the target is straight, since only the x direction was controlled by the crew.

Table 3 summarizes the results of all three CA tests with human interaction. The supervised avoidance helped the crew to finish the task in less than half of the time needed for manual avoidance. The propellant consumption of the supervised avoidance is higher than that of the manual avoidance. This is due to the fact that in the CA area the crew control, in connection with the centering controllers, was counteracting the autonomous CA algorithm. The final positioning error was approximately one third less than in the other two tests, whereas the final velocity was nearly as high as in the manual avoidance test. The table also summarizes the values for the individual performance criteria and the performance of the three avoidance experiments. The fuel and time reference values are an average over all avoidance tests, i.e., the three tests presented here and the four delayed avoidance tests in Section 3.10. Manual and Supervised Avoidance are comparable in their performance due to similar values for fuel and velocity.
The Shared Avoidance test shows a strong overperformance (Poverall > 4), which underlines the confidence of the crew in using this controller. The performance suggests that this grade of autonomy should be given preference over manual and supervised avoidance when considering a human-controlled inspector.

Furthermore, since the CA experiments constitute in general a positioning or re-locating task, they can be compared to the human navigation experiments in Section 3.4. The performances Pd,v are therefore of interest, and the performance of shared avoidance (Pd,v = 1.2) is similar to that of approaching the first beacon (Pd,v = 1.1). It seems reasonable to assume that, using shared CA, the crew could concentrate on the task of re-locating the inspector without having to take care of collision issues. This was similar to approaching beacon 1 from the middle of the test volume without an obstacle or having to re-orient.

Table 3: Collision Avoidance Performance

Grade of Autonomy    time    fuel     distance   final velocity   Pd,v   Poverall

Manual               79 s    3.32 %   35.8 cm    1.41 cm/s        0.8    2.4
Shared               24 s    1.76 %   68.8 cm    0.27 cm/s        1.2    4.6
Supervised           34 s    3.57 %   64.5 cm    1.84 cm/s        0.6    2.4

tref = 100 s, fref = 2.5 %, dref = 30 cm, dtarget = 50 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2
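The exact definition of the performance scores is given earlier in the paper and is not reproduced in this excerpt. Purely as an illustration of how reference values and weights of the kind listed in the table footnote (tref, fref, dref, vref, αt, αf, αd, αv) can be combined into a normalized score, one possible ratio-based form is sketched below; this is an assumption, not the paper's metric:

```python
def performance_score(t, f, d_err, v, t_ref=100.0, f_ref=2.5,
                      d_ref=0.30, v_ref=0.02,
                      alpha_t=1, alpha_f=2, alpha_d=1, alpha_v=1):
    """Illustrative weighted ratio score (NOT the paper's exact metric).

    Each criterion contributes its weight times the ratio of reference
    to achieved value, so results better (smaller) than the reference
    raise the score above 1.
    """
    terms = [alpha_t * t_ref / t,     # task completion time
             alpha_f * f_ref / f,     # fuel usage
             alpha_d * d_ref / d_err, # final distance error
             alpha_v * v_ref / v]     # residual velocity
    return sum(terms) / (alpha_t + alpha_f + alpha_d + alpha_v)
```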

Figure 13: Background telemetry of the supervised avoidance test


Figure 14: Beacon positions in the JEM (left) and 3D motion of the satellite towards the beacons in the JEM test volume (right; annotations mark uncontrolled floating, position initialization, and the target approach towards Beacon 5 at D = 37.1 cm)

3.9 Delayed Human Navigation

As an extension to the tests in the previous sections, this section investigates the effects of communication delay on human-spacecraft interaction. In general, there are two approaches to communicating with robotic components in low Earth orbit. A direct communication link between an operating ground station and the teleoperated satellite can limit the time delay to approximately 30 ms, as ROKVISS (Albu-Schaffer et al., 2006) showed. However, this approach limits the operational time to 5-13 minutes (depending on the orbit height) for 4-6 of approximately 15 orbits per day. Using a single geostationary communication satellite for data relay, on the other hand, can expand the time windows to more than 40 minutes on each orbit. The disadvantage of this approach is the increased time delay, since the signal has to travel round trip through the geostationary orbit at 36,000 km altitude. The time delay can amount to 600 ms or more, as experiments with the Advanced Relay Technology Mission (ARTEMIS) have shown (Stoll et al., 2009b). This number multiplies if more relay satellites are utilized for global coverage or if deep space robotics is considered.

The purpose of the Delayed Human Navigation tests was to evaluate the crew's performance in navigating the inspector to a certain observation position. Manual keyboard commands were translated into thrust impulses under the influence of communication delays of 1 s and 3 s. Further, the human ability to anticipate delay and effectively apply keyboard commands with respect to the satellite's movement was examined. The crew chose a working position in the PORT-AFT corner of the experiment module with the laptop workstation pointing towards STBD. As a result, there was a spatial rotation of approximately 90° around the z-axis between the crew's personal reference frame, i.e., the crew's line of sight (PORT-STBD), and the ISS-fixed coordinate frame, which also had to be anticipated when applying keyboard commands. After a period of initial positioning, the crew was asked to steer the satellite near one of the beacons on the STBD side of the module (cp. Figure 14, left), at a distance of 30 cm and with minimal residual velocity. Both beacons were at the same distance from the satellite's initial position; the difference between them, however, was the crew's perspective from the controlling laptop.
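The propagation share of the GEO relay delay quoted above can be cross-checked with a quick calculation, and a delayed command channel of the kind used in these tests can be emulated with a simple time-stamped FIFO. The class below is an illustrative sketch; the flight implementation is not shown in the paper:

```python
from collections import deque

C = 299_792.458  # speed of light, km/s

# One-way signal path via a geostationary relay: ~36,000 km up and down;
# a command-plus-telemetry round trip traverses that path twice, so pure
# propagation already approaches half a second before processing overhead.
geo_round_trip = 4 * 36_000 / C

class DelayedChannel:
    """FIFO emulating a constant communication delay on commands."""
    def __init__(self, delay):
        self.delay = delay
        self.queue = deque()  # entries: (release_time, command)

    def send(self, t, cmd):
        self.queue.append((t + self.delay, cmd))

    def receive(self, t):
        """Return all commands whose delay has elapsed at time t."""
        out = []
        while self.queue and self.queue[0][0] <= t:
            out.append(self.queue.popleft()[1])
        return out
```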

The criteria for evaluating the crew's task performance were again the time to relocate the satellite, the fuel consumption, as well as the final position and velocity. Figure 14 (right) shows the satellite's motion in the test volume for the first test, without communication delay. The green dots represent the initialization and finish points of the target approach. Table 4 shows the compilation of all performance values gained from the three test runs. According to crew feedback, there were no difficulties in appropriately issuing keyboard commands. As can be seen in Figure 14 (right), the satellite's movement is straightforward and aims precisely at the approached target. The tests substantiate results from previous tests, concluding that the crew is able to judge the spatial distance between the satellite and a fixed object, such as a beacon, very well from different perspectives. Furthermore, throughout all tests a low residual velocity was achieved for the final positioning.

Considering the performance Pd,v, the precision of the final position and velocity in this test appears to be independent of the communication delay. The time and fuel consumption for the target approach, on the other hand, increased significantly with higher communication delays. Thus, the crew required more time and an increased number of translational force commands to attain comparable final positioning values under the influence of communication delay. The overall performance decreased with increasing time delay, as Table 4 shows. The beacon approach in the undelayed case is comparable to the first beacon approach of the human navigation experiment in Section 3.4. Both show a better performance than their respective reference values and suggest a strong confidence of the crew in commanding the inspector.

Previous ground experiments on the air-bearing table indicated that most test subjects, when confronted with a comparable 3-DoF scenario for the first time, show a significant increase of 1.4 points on average in overall task performance during the first two repetitions of the same test. After the second iteration, however, only a minor increase could be observed, leaving the values at a constant level for all following tests. In this case, and within the given 6-DoF zero-g environment, the crew did not show such behavior. In contrast, the first executed non-delayed test run resulted in a significantly higher overall task performance, by 1.3 points, compared to the second executed test run with 1 s of communication delay.

3.10 Delayed Avoidance

The purpose of this test was to investigate different collision avoidance techniques, with focus on manual and shared control, under the influence of communication delay. The test setup was identical to the previously executed non-delayed collision avoidance tests. The position of the crew was identical to that of the Human Navigation tests (cp. Section 3.4). Since the direction of motion was from PORT to STBD (cp. Figure 14), the crew had an undisturbed view of the scene in the first part of each test. In the second part, however, the view was partly obstructed by the blocking satellite. Figure 15 shows the motion of the satellites in the test volume for the manual control 1 s-delay test without CA support. As can be seen in the figure, a downdrift was observed by the astronaut throughout all tests. This may have been caused by the air flow in the JEM.

Except for the first run, the Delayed Avoidance tests once more proved the crew's good judgment of distance and the ability to manually position a SPHERES satellite with a low final velocity. However, the performance values that were reached are slightly worse compared with the previous tests. This was likely due to the fact that the direct view of the scene was partly blocked by the obstacle satellite during the final positioning phase. This can especially be observed when considering the position and velocity values of the 1 s-delay test with CA. The crew always chose a relatively wide path around the obstacle compared to the previous, undelayed avoidance tests. The CA algorithm therefore did not influence the operator's performance in this case. A decrease of the time performance can be observed with increasing time delay and active CA. This trend has to be confirmed in upcoming test sessions that incorporate higher communication delays.

Table 4: Delayed Human Navigation Performance

Communication Delay   Target     time      fuel     distance   final velocity   Pd,v   Poverall

none                  Beacon 5   67.0 s    1.56 %   37.1 cm    0.14 cm/s        1.7    4.3
1 s                   Beacon 5   70.6 s    2.60 %   41.8 cm    0.24 cm/s        1.5    3.0
3 s                   Beacon 4   133.4 s   2.25 %   34.4 cm    0.02 cm/s        1.8    2.8

tref = 70 s, fref = 2.1 %, dref = 30 cm, dtarget = 30 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2

Page 23: SPHERES Interact - Human-Machine Interaction aboard the

Figure 15: 3D motion of the satellites in the JEM test volume (annotations mark the SPH2 position hold, the position initialization, the circumnavigation with downdrift, and the final position)

Table 5: Delayed Avoidance Performance

Communication Delay   time      fuel     distance   final velocity   Pd,v   Poverall

none                  144.8 s   2.51 %   96.5 cm    0.41 cm/s        0.2    1.8
1 s                   120.4 s   2.08 %   35.6 cm    0.37 cm/s        1.3    3.5
none, with CA         135.1 s   2.21 %   55.9 cm    0.34 cm/s        1.6    3.5
1 s, with CA          160.4 s   2.17 %   50.3 cm    0.73 cm/s        1.6    3.3

tref = 100 s, fref = 2.5 %, dref = 30 cm, dtarget = 50 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2

4 Conclusions and Future Outlook

The research presented here focused on human-machine interaction in space. In particular, experiments were performed in which astronauts controlled experimental satellites aboard the ISS. Different concepts and levels of autonomy were evaluated. A summary of the results and approaches for future work is presented.

4.1 Conclusions

A case study of a teleoperated inspector satellite was performed, and partial aspects of such a mission were simulated in the framework of a series of experiments. The inspection started with a fly-around of the inspector around the servicer. The tests showed that motion patterns are hard to recognize for a human operator. The orientation of a SPHERES orbit is difficult to judge in three dimensions. This was seconded by the crew feedback, which summarized that it was complicated to use commands to control inclination and alignment.

The subsequent Manual Abort experiment investigated the potential benefit of a manual abort command, which can overrule the control algorithms and stop the autonomous operations. During the test, no false positive abort command was issued, which suggests the crew had a good understanding of when a proximity maneuver constitutes a possible threat. It was concluded that a manual abort command could add another safety layer to mission operations.

While the performance of the two initial tests was evaluated with the help of the inclination accuracy and the number of false positives, all subsequent tests used the distinct performance metrics of task completion


Table 6: Cross Evaluation of the Experiment Performances

test                     orientation &    distance     collision   delayed   interaction     mean
                         re-orientation   estimation   avoidance   control   with autonomy

Poverall (TS20 & TS24)   3.1              3.0          3.1         3.2       3.4             3.2
Pd,v (TS20 & TS24)       0.9              1.1          1.0         1.6       1.1             1.1

time, fuel usage, distance to target, and residual velocity. These were applied since the experiments required the astronaut to command the inspector from a starting position to a goal position near the target satellite. During this motion, different levels of autonomy, collision avoidance techniques, and the influence of time delay on the task performance were evaluated.

The human navigation test showed that relocating the inspector to a target location at another spacecraft can be executed precisely and fuel-efficiently. However, when re-locating the spacecraft to a subsequent target, a re-orientation phase might be necessary for the human operator. This indirect motion from the start to the goal via a reference position is time and fuel intensive and decreased the overall performance. Thus, means have to be devised to provide the operator with additional (artificial) reference points or planes in order to increase the performance.

The subsequent experiments considered the influence of different collision avoidance techniques on the task performance. For these, the inspector had to be relocated to a position behind the target spacecraft without provoking a collision. Of the different levels of autonomy that were applied in the scenario, shared autonomy showed a high performance and therefore might enhance the operation of a teleoperated inspector. This assumption is affirmed by the performance values reached for time, fuel, and residual velocity, which are superior to those of the other control levels.

Since a ground controlled spacecraft will have an inherent time delay in the communication channel, the

final experiments considered operations which were subject to a communication delay. According to crew feedback, the Delayed Human Navigation tasks were well executable under time delays of up to 3 seconds. The combined performance value for distance and velocity affirms this statement, with an approximately constant value throughout the three tests (0 s, 1 s, and 3 s delay). The overall performance decreased distinctly as the time delay increased. This is due to the fact that the feedback to the operator commands is delayed, which decreased the performance value for time. The Delayed Avoidance tests showed that a CA algorithm does not negatively influence the overall performance of the operator and reconfirmed that precise distances and residual velocities can be achieved under the influence of time delay.

Since every experiment is usually only run once, the tests were designed in such a way that partial aspects of one experiment are re-verified in a follow-up experiment. It was thus shown in multiple tests that relative distances between two spacecraft can be achieved precisely by manual control. No collisions took place in any of the tests, which emphasizes the ability of a human operator to steer spacecraft in three-dimensional space reliably and precisely, given a third-person perspective of the scene. This principle of redundant (primary or secondary) test objectives was emphasized in Table 1, which can be used to cross-evaluate all experiments with positioning tasks7 under the considered aspects. Table 6 shows a cross evaluation over all experiments for which a common criterion, such as delayed control, applies. The corresponding means of Pd,v and Poverall were computed.

Considering the overall performances Poverall, it can be concluded from Table 6 that all task performances are approximately equal and none of the individual tasks was performed distinctly worse. However, it can also be seen that the interaction with autonomy adds benefit to the operations.
The crew proved that tasks with a delay in the communication channel can be fulfilled with a high degree of efficiency. Assisting elements such as autonomous collision avoidance can improve the performance of the human operator when controlling an inspector satellite. They give more confidence to the operator and draw the attention to the actual task. That way, the operator does not primarily have to take care of CA and works more efficiently, which is reflected in the high overall performance reached.

Moreover, the values for Pd,v in Table 6 confirm that the corresponding performance for orientation tasks is slightly worse than the mean, whereas the performances reached in distance estimation, collision avoidance, and

7Only experiments from TS20 and TS24 were taken into consideration, since identical performance metrics were available.


interaction with autonomy are comparable. A remarkable result of this evaluation is that the delayed control also resulted in a strong average distance and velocity performance. An explanation for this circumstance can be that the human operator tried to act more carefully in an environment in which the feedback to issued commands is not instantaneous and in which, therefore, the likelihood of failing is considered higher compared with non-delayed manual operations.

4.2 Future Outlook

Future research plans aim at simulating a complete inspection scenario including three experimental satellites. This reflects a more complex OOS or ISRA scenario with an observer, servicer, and target satellite. The human operator will control two of the satellites (servicer and inspector) and simulate procedures such as proximity operations, inspection tasks, and docking. Enhanced virtual reality tools shall be embedded in the SPHERES GUI, not only in the ground GUI but also in the flight GUI. The performance of control tasks can be limited by line-of-sight conditions and satellites blocking the path. As already tested on Earth, a virtual reality representation would provide the possibility to change the perspective and may also increase the task performance in space.

As seen in the tests, manual re-location tasks may demand a re-orientation phase. Creating artificial reference points by means of virtual or augmented reality may make this re-orientation phase obsolete. Further, it was evident that tasks are executed more slowly under the influence of time delay. Virtual reality techniques can therefore be used to present a pre-calculated state feedback to the operator in order to improve the task performance. This approach is already used in the MIT ground test environment (cp. Section 2.1). Such augmentation approaches can further be helpful to display distances, attitude data, and CA zones. Additionally, a stereo-vision camera system is scheduled to extend the SPHERES hardware system, providing the option of realistically controlling the satellites from the perspective of an observer on Earth with limited accessibility to the remote environment.

Acknowledgments

This work was supported in part by the post-doctoral fellowship program of the German Academic Exchange Service (DAAD). The authors acknowledge the valuable input and hints by Alessandra Babuscia, Thomas Dirlich, and Sebastian J. I. Herzig.

References

Abbott, J., Marayong, P., and Okamura, A. (2007). Haptic virtual fixtures for robot-assisted manipulation. In Springer Tracts in Advanced Robotics, volume 28, pages 49–64. Springer-Verlag, Heidelberg.

Abiko, S., Lampariello, R., and Hirzinger, G. (2006). Impedance control for a free-floating robot in the grasping of a tumbling target with parameter uncertainty. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1020–1025.

Albu-Schaffer, A., Bertleff, W., Rebele, B., Schafer, B., Landzettel, K., and Hirzinger, G. (2006). ROKVISS - robotics component verification on ISS: current experimental results on parameter identification. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 3879–3885.

Boeing-Company (2006). Microsatellite launch. Crosslink – The Aerospace Corporation magazine of advances in aerospace technology, 7:1–2. Retrieved August 1, 2011, from http://www.aero.org/publications/crosslink/fall2006/headlines.html.

Bosse, A. B., Barnds, W. J., Brown, M. A., Creamer, N. G., Feerst, A., Henshaw, C. G., Hope, A. S., Kelm, B. E., Klein, P. A., Pipitone, F., Plourde, B. E., and Whalen, B. P. (2004). SUMO: spacecraft for the universal modification of orbits. In Tchoryk, P., Jr., and Wright, M., editors, Spacecraft Platforms and Infrastructure, volume 5419, pages 36–46.

Chandler, P. R., Pachter, M., Swaroop, D., Howlett, J. M. F. J. K., Rasmussen, S., Schumacher, C., and Nygard, K. (2002). American control conference. In IEEE International Safety, Security and Rescue Robotics Workshop, Anchorage, USA.

Choset, H., Knepper, R., Flasher, J., Walker, S., Alford, A., Jackson, D., Kortenkamp, D., Burridge, R., and Fernandez, J. (1999). Path planning and control for AERCam, a free-flying inspection robot in space. In IEEE International Conference on Robotics and Automation (ICRA), volume 2, pages 1396–1403.

Chung, S.-J. and Miller, D. W. (2008). Propellant-free control of tethered formation flight, part 1: Linear control and experimentation. Journal of Guidance, Control, and Dynamics, 31:571–584.

Davis, T. M. and Melanson, D. (2004). XSS-10 microsatellite flight demonstration program results. In Tchoryk, P., Jr., and Wright, M., editors, Spacecraft Platforms and Infrastructure, volume 5419, pages 16–25. SPIE.

Dorais, G. and Gawdiak, Y. (2003). The personal satellite assistant: an internal spacecraft autonomous mobile monitor. In Proceedings of the 2003 IEEE Aerospace Conference, volume 1, pages 1–348.

Enright, J. and Hilstad, M. (2004). The SPHERES guest scientist program: Collaborative science on the ISS. In Proceedings of the IEEE Aerospace Conference, Big Sky, Montana, USA.

Fredrickson, S. E., Abbott, L. W., Duran, S., Jochim, J. D., Studak, J. W., Wagenknecht, J. D., and Williams, N. M. (2003). Mini AERCam: development of a free-flying nanosatellite inspection robot. In Tchoryk, P., Jr., and Shoemaker, J., editors, SPIE Space Systems Technology and Operations, volume 5088, pages 97–111. SPIE.

Greaves, S., Boyle, K., and Doshewnek, N. (2005). Orbiter boom sensor system and shuttle return to flight: Operations analyses. In AIAA Guidance, Navigation, and Control Conference and Exhibit.

Hirche, S. and Buss, M. (2007). Transparent data reduction in networked telepresence and teleaction systems. Part II: Time-delayed communication, volume 16, pages 532–542. MIT Press Journals.

Hirzinger, G., Brunner, B., Dietrich, J., and Heindl, J. (1993). Sensor-based space robotics - ROTEX and its telerobotic features. IEEE Transactions on Robotics and Automation, 9(5):649–663.

Imaida, T., Yokokohji, Y., Doi, T., Oda, M., and Yoshikawa, T. (2001). Ground-space bilateral teleoperation experiment using ETS-VII robot arm with direct kinesthetic coupling. In IEEE International Conference on Robotics and Automation (ICRA), volume 1, pages 1031–1038.

Katz, J. G., Saenz-Otero, A., and Miller, D. W. (2011). Development and demonstration of an autonomous collision avoidance algorithm aboard the ISS. In IEEE Aerospace Conference, Big Sky, USA.

Krenn, R., Landzettel, K., Kaiser, C., and Rank, P. (2008). Simulation of the docking phase for the SMART-OLEV satellite servicing mission. In 9th International Symposium on Artificial Intelligence, Robotics and Automation in Space. JAXA.

Madison, R. (2000). Micro-satellite based, on-orbit servicing work at the Air Force Research Laboratory. In IEEE Aerospace Conference Proceedings, volume 4, pages 215–226.

Massimino, M., Sheridan, T., and Roseborough, J. (1989). One handed tracking in six degrees of freedom. In IEEE International Conference on Systems, Man, and Cybernetics, Cambridge, USA.

Mukherji, R., Rey, D. A., Stieber, M., and Lymer, J. (2001). Special purpose dexterous manipulator (SPDM) advanced control features and development test results. In Proceedings of the 6th International Symposium on Artificial Intelligence and Robotics and Automation in Space: i-SAIRAS 2001.

Munir, S. and Book, W. (2003). Control techniques and programming issues for time delayed internet based teleoperation. Journal of Dynamic Systems, Measurement, and Control, 125:205–214.

Niemeyer, G., Preusche, C., and Hirzinger, G. (2008). Telerobotics. In Springer Handbook of Robotics. Springer-Verlag, Heidelberg.

Nolet, S. (2007). Development of a Guidance, Navigation and Control Architecture and Validation Process Enabling Autonomous Docking to a Tumbling Satellite. PhD thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Nolet, S. and Miller, D. W. (2007). Autonomous docking experiments using the SPHERES testbed inside the ISS. Sensors and Systems for Space Applications, 6555(1):65550P.

Ortmaier, T. (2007). Robot assisted force feedback surgery. In Applications in Advances in Telerobotics. Springer-Verlag, Heidelberg.

Pong, C. (2010). Autonomous thruster failure recovery for underactuated spacecraft. PhD thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Radcliffe, A. (2002). A Real-Time Simulator for the SPHERES Formation Flying Satellites Testbed. PhD thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Ridao, R., Carreras, M., Hernandez, E., and Palomeras, N. (2007). Underwater telerobotics for collaborative research. In Applications in Advances in Telerobotics. Springer-Verlag, Heidelberg.

Roderick, S., Roberts, B., Atkins, E., and Akin, D. (2004). The Ranger robotic satellite servicer and its autonomous software-based safety system. IEEE Intelligent Systems, 19.

Ruangpayoongsak, N., Roth, H., and Chudoba, J. (2005). Mobile robots for search and rescue. In IEEE International Safety, Security and Rescue Robotics Workshop.

Rumford, T. (2003). Demonstration of autonomous rendezvous technology (DART) project summary. In Space Systems Technology and Operations Conference, Orlando, USA.

Ryu, J.-H., Preusche, C., Hannaford, B., and Hirzinger, G. (2005). Time domain passivity control with reference energy following. IEEE Transactions on Control Systems Technology, 13:737–742.

Saenz-Otero, A. (2005). Design Principles for the Development of Space Technology Maturation Laboratories Aboard the International Space Station. PhD thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

SCAMP SSV (2006). Supplemental camera platform space simulation vehicle (SCAMP SSV). Retrieved December 1, 2010, from http://www.ssl.umd.edu/projects/SCAMP/SSV/index.shtml.

Sellmaier, F., Spurmann, J., and Boge, T. (2010). On-orbit servicing missions at DLR/GSOC. In International Astronautical Congress (IAF).

Sheridan, T. (1992). Automation and Human Supervisory Control. MIT Press, Cambridge.

Shoemaker, J. and Wright, M. (2003). Orbital Express space operations architecture program. In Tchoryk, P., Jr., and Shoemaker, J., editors, SPIE Space Systems Technology and Operations, volume 5088, pages 1–9. SPIE.

Stoll, E., Artigas, J., Letschnik, J., Walter, U., Pongrac, H., Preusche, C., Kremer, P., and Hirzinger, G. (2009a). Ground verification of the feasibility of telepresent on-orbit servicing. Journal of Field Robotics, 26(3):287–307.

Stoll, E. and Kwon, D. (2009). The benefit of multimodal telepresence for in-space robotic assembly. In Proceedings of the IASTED International Conference on Robotics and Applications.

Stoll, E., Letschnik, J., Walter, U., Artigas, J., Kremer, P., Preusche, C., and Hirzinger, G. (2009b). On-orbit servicing - exploration and manipulation capabilities of robots in space. IEEE Robotics & Automation Magazine, 16(4):29–33.

Stoll, E., Saenz-Otero, A., and Tweddle, B. (2010). Multimodal human spacecraft interaction in remote environments - a new concept for free flyer control. In Machine Learning and Systems Engineering, pages 1–14. Springer-Verlag, Heidelberg.

Tanner, N. and Niemeyer, G. (2006). High-frequency acceleration feedback in wave variable telerobotics. IEEE/ASME Transactions on Mechatronics, 11:119–127.

Tweddle, B. E. (2010). Computer vision based navigation for spacecraft proximity operations. Master's thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Varatharajoo, R. and Kahle, R. (2005). A review of spacecraft conventional and synergistic systems. Aircraft Engineering and Aerospace Technology, 77:131–141.

Varatharajoo, R., Kahle, R., and Fasoulas, F. (2003). Approach for combining attitude and thermal control systems. Journal of Spacecraft and Rockets, 40:657–664.