
ARTICLE IN PRESS

Ocean Engineering 36 (2009) 15–23


Underwater autonomous manipulation for intervention missions AUVs$

Giacomo Marani a,*, Song K. Choi a,b, Junku Yuh c

a Autonomous Systems Laboratory, University of Hawaii, USA
b Marine Autonomous Systems Engineering, Inc., Honolulu, HI, USA
c Korea Aerospace University, Gyeonggi-do, Republic of Korea

Article info

Article history:

Received 8 March 2008

Accepted 3 August 2008
Available online 17 August 2008

Keywords:

Underwater intervention

Autonomous manipulation

Localization

Tracking

Teleprogramming

AUV

ROV

0029-8018/$ - see front matter © 2008 Elsevier Ltd. All rights reserved.
doi:10.1016/j.oceaneng.2008.08.007

$ This work has been developed at the Autonomous Systems Laboratory, University of Hawaii.
* Corresponding author. Tel.: +1 808 956 2863.
E-mail address: [email protected] (G. Marani).

Abstract

Many underwater intervention tasks are today performed using manned submersibles or remotely operated vehicles in teleoperation mode. Autonomous underwater vehicles are mostly employed in survey applications. In fact, the low bandwidth and significant time delay inherent in acoustic subsea communications represent a considerable obstacle to remotely operating a manipulation system, making it impossible for remote controllers to react to problems in a timely manner.

Nevertheless, vehicles with no physical link and no human occupants permit intervention in dangerous areas, such as the deep ocean, under ice, in missions to retrieve hazardous objects, or in classified areas. The key element in underwater intervention performed with autonomous vehicles is autonomous manipulation. This is a challenging technology milestone, which refers to the capability of a robot system to perform intervention tasks requiring physical contact with unstructured environments without continuous human supervision.

Today, only a few AUVs are equipped with manipulators. SAUVIM (Semi-Autonomous Underwater Vehicle for Intervention Missions, University of Hawaii) is one of the first underwater vehicles capable of autonomous manipulation.

This paper presents the solutions chosen during the development of the system to address the problems intrinsic to autonomous underwater manipulation. In the proposed approach, the most noticeable aspect is the increase in the level of information transferred between the system and the human supervisor.

We describe one of the first trials of autonomous intervention performed by SAUVIM in the oceanic environment. To the best knowledge of the authors, no sea trials in underwater autonomous manipulation have been presented in the literature. The presented operation is an underwater recovery mission, which consists of a sequence of autonomous tasks aimed at searching for the target and securely hooking a cable to it in order to bring the target to the surface.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Today’s underwater intervention tasks are mostly performed with extensive human supervision, requiring a high-bandwidth communication link, or in structured environments, which results in limited applications. Autonomous manipulation systems will make it possible to sense and perform mechanical work in areas that are hazardous to humans or where humans cannot go, such as natural or man-made disaster regions, the deep ocean, and under ice. Autonomous manipulation systems, unlike teleoperated manipulation systems, which are controlled by human operators with the aid of visual and other sensory feedback, must be capable of assessing a situation, including self-calibration based on sensory information, and executing or revising a course of manipulating action without continuous human intervention. It is sensible to consider the development of autonomous manipulation as a gradual passage from human teleoperated manipulation. Within this passage, the most noticeable aspect is the increase in the level of information exchanged between the system and the human supervisor.

In teleoperation with ROVs, the user sends and receives low-level information in order to directly set the position of the manipulator with the aid of visual feedback. As the system becomes more autonomous, the user may provide only a few higher-level decisional commands, interacting with the task description layer. The management of lower-level functions (i.e., driving the motors to achieve a particular task) is left to the onboard system. The level of autonomy is related to the level of information needed by the system in performing the particular intervention. At the task execution level, the system must be capable of acting and reacting to the environment with the extensive use of sensor data processing.

The user may provide, instead of directly operating the manipulator, higher-level commands during a particular mission, such as ‘‘unplug the connector’’. In this approach, the function of the operator is to decide, after an analysis of the data, which particular task the vehicle is ready to execute, and then to send the decision command. The low-level control commands are provided by a pre-programmed onboard subsystem, while the virtual reality model in the local zone uses only the limited symbolic information received through the low-bandwidth channel in order to reproduce the actual behavior of the system.

The main approach is layered into different levels, where different behaviors take place: a low-level layer which interacts with the robot hardware, a medium-level layer describing the control algorithms and, finally, a high-level layer where the task description is performed. Within this configuration, the control system for the manipulator (medium layer) must ensure a reliable behavior within the workspace, avoiding collisions, system instabilities and unwanted motions while completing the required task, when it is theoretically executable. The control system must also address other general manipulation issues, such as being task-space oriented, with task priority assignment and dynamic priority changes.

AUV development is still mostly directed toward survey-oriented vehicles. In the literature there are only a few examples of intervention AUVs. These examples include the OTTER I-AUV by the Stanford Aerospace Robotics Lab. OTTER, developed back in 1996, is a hover-capable underwater vehicle which operates in a test tank at the Monterey Bay Aquarium Research Institute (MBARI). Current and past research includes texture-based vision processing for feedback control and real-time mosaicking, autonomous intervention missions, and hydrodynamic modeling of underwater manipulators. A study on automatic object retrieval was done in Wang et al. (1995).

Another intervention AUV, ALIVE, was developed in 2003 by Cybernetix. The aim of the EU-funded ALIVE project was to develop an intervention AUV capable of docking to a subsea structure that has not been specifically modified for AUV use. A description of the ALIVE vehicle was given in Evans et al. (2003).

Fig. 1. The SAUVIM autonomous underwater vehicle.

This paper presents the solutions chosen to address the above issues for autonomous manipulation, developed during the course of the SAUVIM research project. SAUVIM (Fig. 1) has been jointly developed by the Autonomous Systems Laboratory (ASL) of the University of Hawaii, Marine Autonomous Systems Engineering (MASE), Inc. in Hawaii, and the Naval Undersea Warfare Center Division Newport (NUWC) in Rhode Island. The experimental results of the first attempt at underwater autonomous manipulation, also presented here, are promising. The recovery operation consists of a sequence of autonomous tasks aimed at searching for the target and securely hooking a cable to it, in order to bring the target to the surface. To the best knowledge of the authors, no sea trials in underwater autonomous manipulation of this nature have been presented in the literature. The successful trial of the presented recovery experiment represents a positive test of the above solutions.

2. The SAUVIM autonomous underwater vehicle

SAUVIM (Semi-Autonomous Underwater Vehicle for Intervention Missions, Yuh et al., 1998; Yuh and Choi, 1999; Fig. 1) involves the design and fabrication of an underwater vehicle that is capable of autonomous interventions on subsea installations, a task usually carried out by ROVs or human divers. The vehicle is built around an open-framed structure enclosed by a flooded composite fairing. Its movement is controlled by eight thrusters located around the center of mass. The four vertical thrusters move the vehicle along the Z-axis (heave); the two internally mounted horizontal thrusters move the vehicle along the Y-axis (sway); and the two externally mounted horizontal thrusters move the vehicle along the X-axis (surge). The lower frame houses only the Ni-MH battery pack, while the upper frame hosts all the essential electronics, visual hardware, and navigation and mission sensors in six cylindrical pressure vessels.
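The thruster layout described above can be summarized as a static allocation matrix mapping a 6-DOF body-force command onto the eight thruster forces. The sketch below illustrates the idea; the lever arms, signs and thruster positions are assumptions made for the example, not SAUVIM's actual geometry.

```python
import numpy as np

# Assumed thruster geometry (illustrative only; not from the paper).
# Columns: one thruster each -- 4 vertical (heave), 2 sway, 2 surge.
# Rows: generalized body force [X, Y, Z, K(roll), M(pitch), N(yaw)].
B = np.zeros((6, 8))

# Four vertical thrusters at the frame corners (+/-a, +/-b from center).
a, b = 0.8, 0.5
for i, (x, y) in enumerate([(a, b), (a, -b), (-a, b), (-a, -b)]):
    B[2, i] = 1.0          # heave force
    B[3, i] = y            # roll moment  K =  y * Fz
    B[4, i] = -x           # pitch moment M = -x * Fz

# Two internal sway thrusters, offset fore/aft -> some yaw authority.
for i, x in zip((4, 5), (0.6, -0.6)):
    B[1, i] = 1.0          # sway force
    B[5, i] = x            # yaw moment N = x * Fy

# Two external surge thrusters, offset port/starboard -> yaw authority.
for i, y in zip((6, 7), (0.7, -0.7)):
    B[0, i] = 1.0          # surge force
    B[5, i] = -y           # yaw moment N = -y * Fx

def allocate(tau):
    """Minimum-norm thruster forces realizing a 6-DOF command tau."""
    return np.linalg.pinv(B) @ np.asarray(tau, dtype=float)

u = allocate([0.0, 0.0, 100.0, 0.0, 0.0, 0.0])   # pure heave request
```

With this symmetric layout, the pseudoinverse (minimum-norm) solution splits a pure heave request equally among the four vertical thrusters.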

The proper subsea navigation and positioning accuracy is achieved with a Photonic Inertial Navigation System (PHINS) unit from IXSEA. This INS outputs position, heading, roll, pitch, depth, velocity and heave. Its high-accuracy inertial measurement capability is based on fiber-optic gyroscope technology coupled with an embedded digital signal processor that runs an advanced Kalman filter. The INS is aided by a differential GPS and a Doppler Velocity Log (DVL), in addition to the classical depth sensor, in order to improve the absolute measurement of the vehicle position.

This navigation sensor system is capable of providing stable and precise feedback of the vehicle position, velocity and acceleration on all 6 degrees of freedom (DOF). In one of our experiments, we tested the stability and precision of the vehicle during station keeping with a generalized PID controller active on all 6 axes. The INS was aided only by the DVL, since the GPS was, in this case, submerged. The vehicle was able to maintain the target position with sub-inch accuracy for the translational part. This was confirmed by the manipulator camera output, which was looking toward an earth-fixed target. This position was maintained for over 15 min without any relevant change in the x- and y-axis positions. We only noticed a slow change in the z-position, which was due to the tide activity. As a matter of fact, while the target was fixed with respect to the earth, the INS uses a depth sensor to correct the z-coordinate. A video recording of the above experiment can be downloaded from the ASL web sites.1

Fig. 2. MARIS 7080 underwater manipulator.
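Station keeping of the kind described above can be sketched as six independent PID loops acting on the vehicle's generalized position error. The gains, sample time and interfaces below are illustrative placeholders, not SAUVIM's tuned controller.

```python
import numpy as np

class PID6:
    """Independent PID loops on all 6 vehicle axes (x, y, z, roll, pitch, yaw).

    Gains here are illustrative placeholders, not SAUVIM's tuned values.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = (np.asarray(g, float) for g in (kp, ki, kd))
        self.dt = dt
        self.integral = np.zeros(6)
        self.prev_err = np.zeros(6)

    def step(self, setpoint, measured):
        """One control cycle: return the 6-DOF generalized force command."""
        err = np.asarray(setpoint, float) - np.asarray(measured, float)
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID6(kp=[50.0] * 6, ki=[2.0] * 6, kd=[20.0] * 6, dt=0.1)
tau = pid.step(setpoint=[10, 5, 30, 0, 0, 1.57],
               measured=[9.8, 5.1, 29.5, 0, 0, 1.5])
```

In practice each loop would feed the thruster allocation, and the derivative term would typically be filtered or taken from the INS velocity output rather than differenced numerically.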

To achieve the intervention capabilities, SAUVIM is equipped with a 7 DOF robotic manipulator (MARIS 7080, Fig. 2). The arm, unlike the classical hydraulic technology in use for ROVs, is actuated by electromechanical components, in order to meet the low-power requirements and the higher accuracy needed in manipulation tasks. Each DOF is actuated by a brushless motor with a reduction unit (harmonic drive). The accuracy of the angular measurement is guaranteed by the combination of two resolvers, mounted, respectively, before and after the reduction unit. This configuration allows sub-millimeter positioning of the manipulator's end-effector, an important requirement when dealing with a large class of underwater interventions. A force/torque sensor, installed between the wrist DOF and the gripper, senses the amount of force and torque acting on the gripper. Designed for underwater applications at extreme depths, it is internally compensated with appropriate oil.

The sensor devices of SAUVIM are the most critical components of a generic intervention mission, since at the task execution level the system must be capable of acting and reacting to the environment with the extensive use of sensor data processing. SAUVIM is equipped with a dual-frequency identification sonar (DIDSON),2 a digital multi-frequency scanning sonar, several video cameras with an image-processing computational unit, and a special ultrasonic device for tracking the position of a generic target in 6 DOF.

Finally, the hardware architecture is composed of several computers and peripherals for sensor data acquisition. Two VME-based single-board computers host the distributed control system for the vehicle and manipulator, and three PC-104 units provide the necessary computational power for sensor data processing, including a dedicated system for optical vision.

3. The real-time architecture of SAUVIM

The architecture plan for the SAUVIM platform has been developed with a heavy emphasis on autonomy and global information sharing. It has several similarities to the ‘‘back-seat-driver’’ paradigm (Benjamin, 2007), which has been implemented on a number of platforms (e.g., Bluefin, Hydroid, and OceanServer). The paradigm refers to a division between ‘‘low-level’’ control and ‘‘high-level’’ control on the vehicle, with most likely the former residing on the vehicle's main computer and the latter residing on a computer in a payload section that can be physically swapped out of the vehicle. The low-level control is also referred to as ‘‘vehicle control’’ and the high-level control as ‘‘mission control’’. Here, the architecture that coordinates the set of software modules collectively comprising the ‘‘back-seat-driver’’ system running in the payload has been implemented using MOOS.3

SAUVIM uses a similar configuration, with a precise role separation between high-level or mission control (in the ‘‘back-seat’’) and low-level or vehicle control (in the ‘‘front-seat’’). This separation has been implemented with a dedicated software environment for autonomous systems. The mission control system (‘‘back-seat’’) is basically a software-emulated CPU that runs a custom programming language specially created to simplify high-level operations and algebraic manipulations at the same time. Since it is a software-emulated CPU, it can be compiled within the main vehicle computer while still maintaining the virtual separation between the mission control and the vehicle control (‘‘front-seat’’). The hardware resides within an abstraction layer, and the entire language can easily be re-adapted to a different hardware layer, given a precise and standard specification for the interface procedures. Fig. 3 shows this concept implemented for the vehicle navigation system.

1 http://auv2.eng.hawaii.edu/sauvim/public or http://www.eng.hawaii.edu/~asl.
2 http://www.soundmetrics.com.
3 Mission Oriented Operating Suite, developed by Paul Newman at the MIT Department of Ocean Engineering, http://www.robots.ox.ac.uk/~pnewman/TheMOOS/.

Within the mission control layer, another very important issue that the programming language for autonomous systems must address is the interaction with time. A generic control system is usually hosted by a real-time operating system, with at least one periodic task running at a fixed sample time in order to correctly compute the discrete-time blocks (e.g., integrators, differentiators, etc.). The mid-layer of the language, where part of the control algorithm may reside, must be capable of synchronizing with the above sample time while monitoring the execution length to avoid exceeding the time-line. This is easily achieved since, in our approach, the local back-seat resides within the same vehicle control (main vehicle computer, MVC) and the software-emulated CPU can be looped directly within the main control loop. This has the immediate advantage of allowing additional high-level operations such as real-time tracking of time-dependent trajectories.

Fig. 3. Control architecture for the navigation system.

This distributed programming environment for autonomous systems is completely written in ANSI-compliant C and C++, and can be cross-compiled for different platforms (VxWorks, Windows, Unix). This makes it possible to break the environment into separate parts, the software-emulated CPU and the code generator (the ‘‘compiler’’): the execution CPU can run inside the real-time controller (for instance, running a VxWorks operating system) while the compiler may reside on a remote platform such as Windows or Unix, linked via the communication system. Fig. 3 shows the case where the remote client is a personal computer residing externally (at least when the communication link with the vehicle is available).
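The idea of a software-emulated CPU looped within the main control cycle can be illustrated with a toy mission interpreter that executes a slice of the mission script once per fixed-sample-time tick, checking its execution budget so it never overruns the time-line. All names and the budget policy below are assumptions made for illustration.

```python
import time

class MissionVM:
    """Toy software-emulated CPU: runs a few mission 'instructions' per
    control tick, so the high-level script stays synchronized with the
    fixed-sample-time vehicle loop (names are illustrative)."""
    def __init__(self, program):
        self.program = program      # list of callables (mission steps)
        self.pc = 0                 # program counter

    def step(self, budget_s):
        """Run instructions until the per-tick time budget is exhausted."""
        start = time.perf_counter()
        while self.pc < len(self.program):
            if time.perf_counter() - start > budget_s:
                break               # yield: resume on the next control cycle
            self.program[self.pc]()
            self.pc += 1
        return self.pc >= len(self.program)   # mission complete?

log = []
vm = MissionVM([lambda i=i: log.append(i) for i in range(5)])

DT = 0.01                           # control sample time (assumed 100 Hz)
done = False
while not done:
    # ... low-level vehicle control would run here, once per tick ...
    done = vm.step(budget_s=0.2 * DT)   # mission layer gets a slice of the tick
```

Because the interpreter yields whenever its budget expires, the real-time loop keeps its period regardless of how long the mission script is, which is what makes real-time tracking of time-dependent trajectories possible from the high level.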

This configuration is duplicated for the manipulator and linked together with the SAUVIM navigation system through the main communication layer, xBus.

3.1. Distributed control: the data exchange bus

SAUVIM uses a client–server approach for delivering information from and to each distributed module. Each subsystem (such as a back-seat module or a generic sensor) embeds a custom TCP/IP client–server communication system (‘‘xBus’’, see Marani et al., 2005). Within this architecture, every server can deliver the requested information on demand to any number of clients. This configuration allows a different utilization of the bandwidth, since data are broadcast only on demand. This approach is similar to the publish-subscribe middleware paradigm (Benjamin, 2007), where the term ‘‘middleware’’ refers to the architecture software that coordinates the set of software modules collectively comprising the ‘‘back-seat-driver’’ system running in the payload. Publish-subscribe middleware implements a community of modules communicating through a shared database process that accepts information voluntarily published by any connected process and distributes particular information to any process that subscribes for updates to that information. In the SAUVIM approach the information is not published by a central database; instead, every source acts as a server that sends only the requested information to the requesting client. The distributed client–server architecture also provides a security hand-shaking mechanism, which gives direct feedback on the execution of any instance of data exchange. This is particularly desirable when issuing safety commands (such as for aborting the mission).
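A minimal request/reply exchange of this kind — each subsystem serves its own variables, transmits a value only when a client asks for it, and the reply doubles as an acknowledgement — might look like the following sketch. This illustrates the pattern only; it is not the actual xBus protocol or wire format.

```python
import json
import socket
import threading

def serve(store, host="127.0.0.1", port=0):
    """Minimal request/reply data server: send a value only on request.

    (Illustrative sketch of on-demand delivery -- not the xBus format.)
    """
    srv = socket.create_server((host, port))
    def loop():
        while True:
            conn, _ = srv.accept()
            with conn:
                name = conn.recv(1024).decode().strip()
                reply = {"name": name,
                         "value": store.get(name),
                         "ack": name in store}      # hand-shake feedback
                conn.sendall(json.dumps(reply).encode())
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]     # actual port chosen by the OS

def request(port, name, host="127.0.0.1"):
    """Client side: ask one server for one named variable."""
    with socket.create_connection((host, port)) as c:
        c.sendall(name.encode())
        return json.loads(c.recv(4096).decode())

port = serve({"nav.depth": 41.7, "nav.heading": 132.0})
msg = request(port, "nav.depth")
```

Unlike a central publish-subscribe database, nothing is transmitted until a client asks, and the explicit `ack` field gives the requester direct confirmation that the exchange took place.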

3.2. The programming language

The software-emulated CPU, where the mission control resides, hosts a dedicated programming language developed in order to address the above issues (Marani et al., 2005). This language, suitable for real-time embedded control systems, offers at the same time flexibility, good performance and simplicity in describing a generic complex task. Its layer-abstraction approach allows an easy adaptation to the hardware-specific requirements of different platforms. For example, the same module can be found in the manipulator platform for describing a generic manipulation task and in the main navigation controller for driving the vehicle to the target area. The client–server approach allows the necessary communication between the arm and the navigation module. The language is completely math-oriented and capable of symbolic manipulation of mathematical expressions. The latter is an important distinction from most currently available robot programming languages. The procedural approach has been chosen in order to enhance performance while maintaining the flexibility required for executing complex tasks. It is particularly suitable for real-time embedded systems, where the interaction of a generic algorithm with time is critical.

3.3. The SAUVIM simulator

An immediate advantage of the client–server approach is that each module can be transparently substituted by a simulator, without affecting the structure of each back-seat. This is done by selecting, on each client side, the appropriate IP address of the server. In our system, the vehicle model has been implemented via Simulink (The MathWorks, Inc.). The communication server, the programming language server, the task-space controller and the navigation controller have been compiled and embedded in a custom Simulink block. Since the source code is essentially the same for the simulator and the actual system, this process allows testing and simulating every aspect of the control system before running it on the actual vehicle.

3.4. The manipulator control system

The primary purpose of an autonomous manipulation system is to perform intervention tasks with a limited exchange of information between the manipulator and the human supervisor. The information passed to the main control system is often only a high-level decision command, and the controller must be capable of following the decision command by providing reliable control references to the actuators. The main issue in designing and implementing a control system for autonomous manipulation is ensuring a reliable behavior within the workspace. A reliable behavior also means avoiding singularities, collisions, system instabilities and unwanted motions while performing the required task, when it is theoretically executable. The control system must also address some general manipulation issues, such as being task-space oriented, with task priority assignments and dynamic priority changes.

We chose a ‘‘task reconstruction’’ approach (Marani et al., 2002, 2003, 2006; Kim et al., 2006) in order to automatically correct the required task according to the priority of the situation. For example, when approaching a singularity, in order to prevent unwanted motions and system instabilities, the priority of the control system becomes maintaining the distance from the singularity above a predefined threshold, rather than following the input task (Kim et al., 2002). The task reconstruction is extendable to several other issues, such as collision avoidance, and ensures a reliable execution of the input task when it is theoretically feasible. This approach is suitable for autonomous systems because it ensures avoidance of every kind of singularity regardless of the input task. If, for example, the task planner (in the ‘‘back-seat’’) requires the arm to reach a particular configuration close to or in a singular point, the manipulator will execute the task with an error as small as possible, avoiding the singular point. The path planner can be informed of this error and then take an appropriate action, e.g., ordering the navigation controller to move the vehicle to a different position.
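One standard way to obtain this kind of behavior near singularities is damped least-squares inverse kinematics, where damping is blended in as a manipulability measure falls below a threshold, bounding joint velocities at the cost of a small, controlled task error. The sketch below illustrates the principle on a planar 2-link arm; it is not the authors' exact task-reconstruction algorithm.

```python
import numpy as np

def dls_velocity(J, xdot, w_threshold=0.05, lam_max=0.1):
    """Damped least-squares inverse kinematics step.

    Near a singularity (manipulability w below w_threshold) damping is
    blended in, so joint velocities stay bounded at the cost of a small,
    controlled task error. (Illustrative; not the authors' exact
    priority-switching scheme.)
    """
    w = np.sqrt(max(np.linalg.det(J @ J.T), 0.0))   # manipulability measure
    lam = 0.0 if w >= w_threshold else lam_max * (1.0 - w / w_threshold) ** 2
    return J.T @ np.linalg.solve(J @ J.T + lam * np.eye(J.shape[0]), xdot)

def jacobian(q, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm (singular when q[1] = 0)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Nearly stretched-out (singular) configuration: damping keeps qdot bounded.
qdot = dls_velocity(jacobian([0.3, 0.01]), np.array([0.1, 0.0]))
```

Away from the singularity the damping vanishes and the task is tracked exactly; close to it, joint velocities stay bounded instead of diverging, which mirrors the priority switch from task tracking to singularity avoidance described above.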

4. Autonomous manipulation

The principal obstacle in communicating with a generic AUV during a mission is the low bandwidth and significant time delay inherent in acoustic subsea communication. This aspect makes operating the manipulator remotely very difficult. Robot teleprogramming was proposed as an intermediate solution between supervised control systems and direct teleoperation when a significant time delay appears in the communication (Paul et al., 1993; Sayers et al., 1998; Funda et al., 1992). The main idea is to make the operator ‘‘feel’’ the system as a common teleoperation in which the communication delay has disappeared. The concept consists in decoupling the local and remote zones by limiting the data exchange to a small set of symbolic information, as opposed to a common robot teleoperation system, which uses low-level information (e.g., the joint position, motor velocity reference, etc.).

Usually, in teleprogramming systems, a partial copy of the remote information is used to create a virtual reality model of the remote environment. In this way, the main component of the virtual model is a predictive simulator. The user–system interaction is performed over a simulated environment, and its stability is not affected by the delay of the communication channel. The remote system, if not able to cope with an error, can transmit this information to the local system, which may assume a safety state. It is the task of the remote system to avoid as many of these critical errors as possible. However, this solution was introduced in order to teleoperate the manipulator and requires the constant presence of the communication channel.

SAUVIM, on the other hand, introduces a different concept in order to remove the need for the constant presence of the communication link. Within our approach, the user provides higher-level information for a particular mission. An example of high-level information could be a sentence like ‘‘unplug the connector’’, which transfers to the system the responsibility for all the proper control issues and interaction with the environment needed to properly execute the ‘‘unplugging’’ task. In our implementation we use this concept in order to perform a generic intervention task. It is an alternative between the still challenging and difficult problem of a fully autonomous control system and the classic teleoperation mode. In this approach, the function of the operator is to decide, after an analysis of the data, which particular task the vehicle is ready to execute, and then send the decision command.

In our experiment, when SAUVIM reaches the target site, a simple command directs the robot to start the ‘‘search-hook’’ sequence. In this case, the function of the operator is just to confirm that the target is within a locally searchable area. At this point the arm begins scanning the immediate area in search of the known target using video feedback. Once the target has been detected, the robot attempts to hook the cable to a known point of the object, in order to bring the target to the surface. The low-level operation commands are generated and provided by a vehicle-resident intelligent module, while the virtual reality model in the local zone (Fig. 4) uses only the limited symbolic information received through the low-bandwidth channel in order to reproduce the actual behavior of the system whenever possible.
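The division of labor described above — a short symbolic command crossing the acoustic link, expanded onboard into a pre-programmed task sequence — can be sketched as a simple command dispatcher. The task names and structure are hypothetical, for illustration only.

```python
# A decision command such as "search-hook" expands onboard into a
# pre-programmed sequence of tasks; only the short symbolic string
# crosses the low-bandwidth acoustic link. (Task names are illustrative.)

TASK_LIBRARY = {
    "search-hook": ["scan_area", "detect_target", "approach_target",
                    "hook_cable", "confirm_hook"],
    "abort": ["retract_arm", "surface"],
}

def execute(command, run_task):
    """Expand a symbolic command and run each low-level task onboard."""
    if command not in TASK_LIBRARY:
        return {"command": command, "ack": False, "done": []}
    done = []
    for task in TASK_LIBRARY[command]:
        run_task(task)          # low-level control happens here, onboard
        done.append(task)
    return {"command": command, "ack": True, "done": done}

status = execute("search-hook", run_task=lambda t: None)
```

Only the command string and a compact status summary would need to cross the acoustic channel; everything between the two stays onboard.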

The virtual interface is the ‘‘entry point’’ of the overall system. Whenever the communication channel is available, it may act as the main gateway between the AUV and the human supervisor.

Fig. 4. The SAUVIM unified user interface.

From here, it is possible to load new missions, download data, and even teleoperate the vehicle or the manipulator if the bandwidth allows it. Similarly, it can display the output of every sensor, such as the images of the DIDSON sonar while the vehicle is scanning the area. Fig. 4 shows a snapshot taken during a survey mission at the bottom of Snug Harbor. Here, the graphical engine of the interface reconstructs the bottom profile in real time using the data from the DVL and then overlays the DIDSON imagery on the terrain.

The virtual interface may also be connected to the simulation server instead of the real system, serving as a preliminary test bed for the mission. As explained above, the only technical difference between the simulator and the real system is the server address.

5. Target identification and tracking

One of the most difficult aspects of an intervention mission is the identification and localization of the target. The SAUVIM AUV switches among three main sensing methods in order to acquire reliable data. As shown in Fig. 5, the sensor technology changes according to the combination of range and accuracy needed.

At long range (over 25 m), 375 kHz imaging sonars are used for the initial object search. The accuracy in this range is necessary only to direct the vehicle toward the target zone.

At mid-range (2–25 m), the DIDSON sonar is used for object recognition and vehicle positioning. This is the phase where the vehicle has to position itself in order to have the target confined within the manipulation workspace.

Finally, when the target is within the manipulator workspace, short-range, high-accuracy sensors are used in order to perform the actual intervention task. This goal is achieved with the combined use of underwater video cameras and an ultrasonic motion tracker, used to retrieve the real-time 6-DOF position of the target during the manipulation tasks. The device utilizes high-frequency sound waves to track a target array of ultrasonic receivers. The use of four transmitters at stationary positions with four receivers on the target makes it possible to determine the 6-DOF generalized position (rotation and translation) of the object.
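Assuming the tracker geometry described above (four fixed transmitters, four receivers rigidly attached to the target), the 6-DOF estimate can be sketched as a two-step computation: each receiver is first located by trilateration from its transmitter ranges, and the rigid rotation and translation are then fitted to the known receiver layout (a Kabsch fit). All coordinates below are hypothetical and the ranges are simulated; this is an illustration of the geometry, not the actual tracker firmware.

```python
import numpy as np

def trilaterate(tx, ranges):
    """Locate one receiver from four transmitter positions and measured
    ranges, by subtracting the first sphere equation from the others."""
    A = 2.0 * (tx[1:] - tx[0])
    b = (ranges[0]**2 - ranges[1:]**2
         + np.sum(tx[1:]**2, axis=1) - np.sum(tx[0]**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def fit_pose(body_pts, world_pts):
    """Kabsch fit: rotation R and translation t with world = R @ body + t."""
    cb, cw = body_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (body_pts - cb).T @ (world_pts - cw)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cw - R @ cb

# Hypothetical geometry: 4 fixed transmitters, 4 receivers on the target.
tx = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]], float)
rx_body = np.array([[0.1, 0, 0], [0, 0.1, 0], [-0.1, 0, 0], [0, 0, 0.1]])

# Simulate a true pose, generate the ranges, then recover the pose.
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t_true = np.array([1.0, 0.5, 1.5])
rx_world = rx_body @ R_true.T + t_true
ranges = np.linalg.norm(rx_world[:, None, :] - tx[None, :, :], axis=2)

rx_est = np.array([trilaterate(tx, r) for r in ranges])
R_est, t_est = fit_pose(rx_body, rx_est)
```

With noise-free ranges the recovered pose matches the simulated one; in practice the least-squares structure of both steps also averages out the measurement noise of the real device.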



Fig. 5. The phases involved in a search for the target.

G. Marani et al. / Ocean Engineering 36 (2009) 15–23 21

A common scenario for a generic autonomous manipulation intervention is a situation where the vehicle is station-keeping while the arm performs the required task. In this configuration, the vehicle's position and orientation are maintained with the aid of several different sensors, which may have considerable measurement noise as well as different accuracies. For example, LBL (or similar sensors) is often used to measure the position in x, y, and z (or altitude) in Earth-fixed coordinates, and its output has an accuracy of about 1 m. However, the z-position can also be measured by a depth sensor that gives a much more accurate output. Orientation about x, y, and z (vehicle coordinates) is measured by the sensor server unit (with the INS and DVL). Since all sensors experience a certain level of random noise in their measurements, the absolute position and orientation measurements of the vehicle, especially in x and y, have insufficient accuracy for a precision manipulation task. However, as long as the error in the measurement of the vehicle position/orientation is confined within the magnitude of the arm workspace, the manipulator can compensate for this inaccuracy using the precise measurement given by the target position sensor.
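The compensation argument above can be illustrated with a small numerical sketch (all numbers hypothetical): as long as the gripper goal is expressed relative to the vehicle using the precise target sensor, the meter-level error of the absolute fix never enters the manipulation loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Earth-fixed ground truth (unknown to the controller).
vehicle_true = np.array([100.0, 200.0, -30.0])
target_true = np.array([101.2, 200.5, -30.4])

# LBL-style absolute fix: roughly 1 m error on the vehicle position.
vehicle_est = vehicle_true + rng.normal(0.0, 1.0, 3)

# Short-range target sensor: millimeter-level vehicle-relative reading.
rel_meas = (target_true - vehicle_true) + rng.normal(0.0, 0.001, 3)

# A gripper goal built in absolute coordinates inherits the LBL error,
abs_err = np.linalg.norm((vehicle_est + rel_meas) - target_true)
# while a goal built in the vehicle frame keeps only the tracker noise.
rel_err = np.linalg.norm((vehicle_true + rel_meas) - target_true)
```

The vehicle-relative error stays at the millimeter level of the tracker, while the absolute-coordinate error is dominated by the LBL fix, which is exactly why the arm, not the vehicle, absorbs the residual motion.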

5.1. DIDSON sonar

The medium-range target localization using the DIDSON sonar is still under development, and preliminary research work has been presented (Yu et al., 2006, 2007). Here, the most noticeable aspect is that the images from the sonar have a different transformation (projection) compared to the optical case, because of the acoustic nature of the device.

5.2. Short range target localization

Some sensors, for example video processing and laser or ultrasonic 3D scanners, provide an absolute measurement, even if usually with a low sample rate and high cost. Video processing, however, may present some drawbacks in the ocean depths. The need for a constant light source during the manipulation task may considerably degrade the autonomy of the vehicle. Moreover, the poor visibility in some environments may introduce difficulties in the target detection/recognition process. On the other hand, motion trackers can provide reliable and high-sample-rate information, but the measurement is relative to the position of the tracking sensor. This means that the system must know exactly the relative, generalized position (rotation and translation) of the sensor with respect to the target. However, the sensor application/localization has to be done only once and can be achieved substantially in one of the following ways:

• Operator assisted. In the case of a sufficiently reliable link, the application of the sensor to the target can be executed by the operator using teleoperation and/or teleprogramming mode (Paul et al., 1993; Funda et al., 1992; Sayers et al., 1998). This is sometimes referred to as a semi-autonomous modality of execution of the task.

• Autonomous mode. The target localization and sensor application are executed in fully autonomous mode with the aid of the above-mentioned absolute-measurement 3D sensors (cameras, scanners, etc.).

After this phase, the manipulation task can be executed using only the information from the motion tracker. The motion-tracker-aided manipulation is conceptually similar to the use of a passive arm measurement device. The main advantage is the absence of a mechanical link between the target and the AUV, which is reduced to a simple wire, or is even absent in the case of wireless sensors.

The commercially available motion trackers are mainly developed for virtual reality purposes (for instance in capturing body movements) or medical use (i.e., for tracking the position of probes).

In order to validate the feasibility of the ultrasonic tracking during autonomous manipulation, a commercial in-air unit was modified to work with the robotic manipulator of SAUVIM. This experiment consists of pouring the content of a test tube into a container. The system (test-tube seat and container, Fig. 6) was prepared with a moving base, whose position and orientation were tracked by the ultrasonic sensor.

In Fig. 6, the little white triangle on the back of the moving base is the ultrasonic tracking sensor. The control software knows the relative generalized (6-DOF) position of the various objects with respect to the application coordinates, which is sufficient information for executing the task.

An underwater version of the 6-DOF tracker is currently under development. In order to cope with the precision requirements of a generic autonomous underwater manipulation task, it is expected to have the same accuracy as the one used in our dry experiment. The underwater tracking technology can also be used in different situations, for example in precision vehicle docking/undocking procedures (Evans et al., 2003).

6. The recovery operation

The presented system organization for autonomous underwater manipulation has been validated within an intervention mission of SAUVIM. The mission consists of a recovery operation of the submerged target in Fig. 7, using the arm in order to securely hook to the target a cable connected to the vehicle. In this experiment, we used optical feedback to detect the target and locate its generalized position (translation and orientation with respect to the vehicle).

This goal is achieved using a video camera located on the wrist of the manipulator (visible in Fig. 8a) and a dedicated video-processing system, based on an Intel processor in a PC104 form factor. The two spheres composing the dipole are of known diameter and, because of this geometrical configuration, it is possible to determine the exact position and orientation with only one camera.

Fig. 6. Sensing a moving target using the ultrasonic tracker.

Fig. 7. The target in our recovery trial.

Fig. 8. Underwater scenes of the target recovery task.
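The single-camera range estimate rests on the pinhole projection of a sphere of known diameter: its apparent image diameter scales inversely with distance. A minimal sketch, with hypothetical focal length and sphere size (not the actual SAUVIM calibration values):

```python
# Pinhole-camera range from the apparent size of a sphere of known
# diameter: z = f * D / d, with f in pixels, D in meters, d in pixels.
def range_from_diameter(f_px: float, diam_m: float, diam_px: float) -> float:
    return f_px * diam_m / diam_px

# Hypothetical numbers: 800 px focal length, a 0.20 m sphere seen as 40 px.
z = range_from_diameter(800.0, 0.20, 40.0)   # -> 4.0 m
```

With two such spheres, the line through their centers additionally fixes the orientation of the dipole, which is why one camera suffices for the full generalized position.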

The localization of the circle within the image is done using the following sequence of steps:

• Image filtering.

• Edge extraction using a Canny filter applied to the color image and using the color contrast gradient.

• Circle extraction using the line segments found in digital images (Kim and Kitajima, 2005).

This combination of algorithms had the best performance and robustness with respect to false or missed detections in an underwater environment. The processing software uses OpenCV and the Intel Performance Primitives optimization libraries (Bradski et al., 2005) in order to decrease the overall computational time and speed up the frame rate. In our configuration we achieved a performance of 10 samples per second, making it possible to link in real time the position of the target to the manipulator controller.
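The deployed system extracts circles from line segments and arcs following Kim and Kitajima (2005); as a simplified stand-in for that final estimation step, the sketch below fits a circle to edge points with an algebraic least-squares (Kasa) fit. The synthetic edge points are hypothetical, replacing the Canny output of the real pipeline.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2a*x + 2b*y + c for the center (a, b)
    and radius sqrt(c + a^2 + b^2)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

# Synthetic edge points on a circle of center (30, 40), radius 12 px.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
pts = np.column_stack([30 + 12 * np.cos(t), 40 + 12 * np.sin(t)])
(cx, cy), r = fit_circle(pts)   # -> approximately (30, 40), 12
```

A closed-form fit like this is cheap enough to run on every frame, which is consistent with the 10 Hz throughput reported above for the embedded PC104 processor.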

A detailed discussion of the computer vision algorithms adopted here has been given in Marani et al. (2007).

The entire sequence of operations involved in this experiment has been coded within the high-level controller layer (see Fig. 3) and consists of the following subtasks:

• Extract the arm and perform a visual scan (in 3D) of the surrounding space, using the attached camera (Fig. 8a). The trajectory of the scan was a sweep along the x, y, and z directions of the arm workspace.

• During the scan, locate the target.

• Once the target has been detected, the arm enters a tracking state (visual servoing), in order to place the gripper at a constant relative position with respect to it. If the target moves, the arm follows it while maintaining the same relative position.


• Once the tracking system detects no movement of the target for a sufficient amount of time, the arm proceeds with the short sequence of movements required to hook the snap link (Fig. 8b). During the time frame of the hooking operation, any movement of the target with respect to the arm may still be corrected using the video feedback.
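The subtask sequence above, including the abort-to-park behavior described later in the text, is essentially a small state machine. A minimal sketch, with the perception calls (`target_found`, `target_still`) stubbed out as hypothetical placeholders for the video feedback:

```python
from enum import Enum, auto

class State(Enum):
    SCAN = auto()    # 3D visual sweep of the workspace
    TRACK = auto()   # visual servoing at constant relative position
    HOOK = auto()    # short hooking sequence on the snap link
    PARK = auto()    # initial/abort position
    DONE = auto()

def recovery_mission(target_found, target_still):
    """Run the scan-track-hook sequence; any failure parks the arm.

    `target_found(step)` and `target_still()` stand in for the
    video-processing feedback used on SAUVIM."""
    state, log = State.SCAN, []
    for step in range(100):                  # bounded scan sweep
        log.append(state)
        if state is State.SCAN:
            if target_found(step):
                state = State.TRACK          # enter visual servoing
        elif state is State.TRACK:
            if target_still():               # stationary long enough
                state = State.HOOK
        elif state is State.HOOK:
            return State.DONE, log           # snap link hooked
    return State.PARK, log                   # no target: park and abort

# Example run: the target appears at scan step 3 and then holds still.
final, trace = recovery_mission(lambda s: s == 3, lambda: True)
```

Keeping the sequence explicit like this also makes the mission log described below trivial to produce: the `trace` list records which state the arm was in when a failure occurred.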

The detection accuracy with this configuration was under a centimeter for the translational part, sufficient to allow the hook to center the target. A slightly higher error was measured along the direction orthogonal to the camera view.

The vehicle, during the whole duration of the experiment, was in a parked position, in a shallow-water area (about 7 m). The target was placed within the arm workspace on a suspended platform, hence with a certain degree of mobility. The relative target–vehicle motion was therefore compensated only by the manipulator.

During this experiment, the only required human intervention was to confirm the decision on when to start the initial (short-range) search. Once the vehicle had reached the target site, the final decision on whether the target was the correct one was left to the supervisor, who was responsible for issuing the command to start the autonomous sequence of operations. From this point on, the arm performed autonomously all the subsequent operations in order to recover the object. If any error arose during any of the above autonomous steps (for example, no target was detected during the scan), the arm returned to its initial parking position and the mission was aborted. A mission log allows the operator to verify later the cause of the failure.

This experiment was intended to test the feasibility of short-range target detection. Addressing the issues involved in medium- and long-range searches, for navigating toward the target, will be a matter of future work. A video recording of the above experiment is downloadable from the ASL web site (see footnote 1).

7. Conclusion

Autonomous manipulation for underwater intervention is a challenging technology milestone, and today there are still very few examples worldwide of applications using such technology. In this paper, we presented one of the first approaches, implemented with SAUVIM, which involved the development of a robust autonomous manipulation framework over several years of research. The solution has been tested with an experimental demo where the vehicle-manipulator system was used to recover a submerged target. During this experiment, the human intervention was limited to only the initial decision of starting the operations, hence to confirming that the vehicle was at the target site. Results show that the presented approach is promising for autonomous manipulation, and represents an important step toward the development of a higher level of autonomy for intervention AUVs.

Acknowledgments

The SAUVIM project has been jointly developed by the Autonomous Systems Laboratory (ASL) of the University of Hawaii, Marine Autonomous Systems Engineering, Inc. in Hawaii, and the Naval Undersea Warfare Center (NUWC) in Rhode Island. This research was sponsored by ONR (N00014-97-1-0961, N00014-00-1-0629, N00014-02-1-0840, N00014-03-1-0969, N00014-04-1-0751).

References

Benjamin, M.R., 2007. Software architecture and strategic plans for undersea cooperative cueing and intervention. White paper, NAVSEA-DIVNPT, Code 2501.

Bradski, G., Kaehler, A., Pisarevsky, V., 2005. Learning-based computer vision with Intel's open source computer vision library. Intel Technology Journal 9 (2).

Evans, J., Redmond, P., Plakas, C., Hamilton, K., Lane, D., 2003. Autonomous docking for Intervention-AUVs using sonar and video-based real-time 3D pose estimation. In: Oceans 2003. Proceedings, 22–26 September, vol. 4, pp. 2201–2210.

Funda, J., Lindsay, T.S., Paul, R.P., 1992. Teleprogramming: toward delay-invariant remote manipulation. Presence, vol. 1, Winter, pp. 29–44.

Kim, M.H.E., Kitajima, H., 2005. The extraction of circles from arcs represented by extended digital lines. IEICE Transactions on Information and Systems E88-D (2).

Kim, J., Marani, G., Chung, W.K., Yuh, J., Oh, S.R., 2002. Dynamic task priority approach to avoid kinematic singularity for autonomous manipulation. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1942–1947.

Kim, J., Marani, G., Chung, W.K., Yuh, J., 2006. Task reconstruction method for real-time singularity avoidance for robotic manipulators. Advanced Robotics 20 (4), 391–498.

Marani, G., Kim, J., Yuh, J., Chung, W.K., 2002. A real-time approach for singularity avoidance in resolved motion rate control of robotic manipulators. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1973–1978.

Marani, G., Kim, J., Chung, W.K., Yuh, J., 2003. Algorithmic singularities avoidance in task-priority based controller for redundant manipulators. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, October 27–31, pp. 1942–1947.

Marani, G., Medrano, I., Choi, S.K., Yuh, J., 2005. A client–server oriented programming language for autonomous underwater manipulation. In: Proceedings of the Fifteenth (2005) International Offshore and Polar Engineering Conference, Seoul, Korea, June 19–24.

Marani, G., Yuh, J., Choi, S.K., 2006. Autonomous manipulation for an intervention AUV. In: Roberts, G., Sutton, B. (Eds.), Guidance and Control of Unmanned Marine Vehicles, IEE Control Engineering Series.

Marani, G., Choi, S.K., Yuh, J., 2007. Experimental study on autonomous manipulation for underwater intervention vehicles. In: Proceedings of the Seventeenth (2007) International Offshore and Polar Engineering Conference, Lisbon, Portugal, July 1–6.

Paul, R.P., Sayers, C.P., Stein, M.R., 1993. The theory of teleprogramming. Journal of the Robotics Society of Japan 11 (6), 1419.

Sayers, C.P., Paul, R.P., Catipovic, J., Whitcomb, L., Yoerger, D., 1998. Teleprogramming for subsea teleoperation using acoustic communication. IEEE Journal of Oceanic Engineering 23 (1), 60–71.

Wang, H.H., Rock, S.M., Lees, M.J., 1995. Experiments in automatic retrieval of underwater objects with an AUV. In: Oceans '95. MTS/IEEE. Challenges of Our Changing Global Environment. Conference Proceedings, vol. 1, October 9–12, pp. 366–373.

Yu, S.C., Kim, T.W., Weatherwax, S., Collins, B., Yuh, J., 2006. Development of high-resolution acoustic camera based real-time object recognition system by using autonomous underwater vehicles. In: Oceans 2006, September 2006, pp. 1–6.

Yu, S.C., Kim, T.W., Marani, G., Choi, S.K., 2007. Real-time 3D sonar image recognition for underwater vehicles. In: Symposium on Underwater Technology and Workshop on Scientific Use of Submarine Cables and Related Technologies, 17–20 April, pp. 142–146.

Yuh, J., Choi, S.K., 1999. Semi-autonomous underwater vehicle for intervention mission: an AUV that does more than just swim. Sea Technology 40 (10), 37–42.

Yuh, J., Choi, S.K., Ikehara, C., Kim, G.H., McMurty, G., Ghasemi-Nejhad, M., Sarkar, N., Sugihara, K., 1998. Design of a semi-autonomous underwater vehicle for intervention missions (SAUVIM). In: Proceedings of the 1998 International Symposium on Underwater Technology, 15–17 April, pp. 63–68.