THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY
Int J Med Robotics Comput Assist Surg (2011). REVIEW ARTICLE
Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/rcs.408

Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature

G. P. Moustris1*
S. C. Hiridis2

K. M. Deliparaschos1

K. M. Konstantinidis2

1Department of Signals, Control and Robotics, School of Electrical and Computer Engineering, National Technical University of Athens, Greece
2General, Laparoendoscopic and Robotic Surgical Clinic, Athens Medical Centre, Greece

*Correspondence to: G. P. Moustris, Department of Signals, Control and Robotics, School of Electrical and Computer Engineering, National Technical University of Athens, 15773 Zographou Campus, Athens, Greece. E-mail: [email protected]

Accepted: 12 May 2011

Abstract

Background Autonomous control of surgical robotic platforms may offer enhancements such as higher precision, intelligent manoeuvres, tissue-damage avoidance, etc. Autonomous robotic systems in surgery are largely at the experimental level. However, they have also reached clinical application.

Methods A literature review pertaining to commercial medical systems which incorporate autonomous and semi-autonomous features, as well as experimental work involving automation of various surgical procedures, is presented. Results are drawn from major databases, excluding papers not experimentally implemented on real robots.

Results Our search yielded several experimental and clinical applications, describing progress in autonomous surgical manoeuvres, ultrasound guidance, optical coherence tomography guidance, cochlear implantation, motion compensation, orthopaedic, neurological and radiosurgery robots.

Conclusion Autonomous and semi-autonomous systems are beginning to emerge in various interventions, automating important steps of the operation. These systems are expected to become a standard modality and to revolutionize the face of surgery. Copyright © 2011 John Wiley & Sons, Ltd.

Keywords minimally invasive surgery (MIS); robotic surgery; autonomous robots

Introduction

The future of robotic surgical systems depends upon improvements in the present technology and the development of radically different new enhancements (1). Such innovations, some still at the experimental stage, include miniaturization of robotic arms, proprioception and haptic feedback, new methods for tissue approximation and haemostasis, flexible shafts of robotic instruments, implementation of the natural orifice transluminal endoscopic surgery (NOTES) concept, integration of navigation systems through augmented-reality applications and, finally, autonomous robotic actuation.

Definitions and classifications

The classification of robotic systems depends on the point of view one takes. There are multiple classifications of robotic systems applied in medicine, some more widely adopted than others. A first high-level classification was proposed by Taylor and Stoianovici (2), in which they divided surgical robots into two broad categories, surgical computer-aided design/manufacturing (CAD/CAM) systems and surgical assistants. Surgical CAD/CAM systems are designed to assist in planning and intraoperative navigation through reconstruction of preoperative images and formation of three-dimensional (3D) models, registration of these data to the patient in the operating room, and use of robots and image overlay displays to assist in the accurate execution of the planned interventions.

Surgical assistant systems are further divided into two classes: surgical extenders, which are operated directly by the surgeon and essentially extend human capabilities in carrying out a variety of surgical tasks, with emphasis on intraoperative decision support and skill enhancement; and auxiliary surgical supports, which work side-by-side with the surgeon and provide support functions, such as holding an endoscope.

Complementary to the previous classification, Wolf and Shoham (3) summarize a division according to autonomous function. They present four categories of medical robots: passive robots, semiactive robots, active robots and remote manipulators. Loosely correlating the two classifications, one could say that passive, semiactive and active robots fall under the surgical CAD/CAM and auxiliary surgical support categories, while remote manipulators are identified with the surgical extender class. Passive robots provide support actions in surgery and do not perform any autonomous or active actions. Typical examples include the Acrobot (4), the Arthrobot (5) and the MAKO system (6). Semiactive robots are closely related to the surgical assistant class and perform similar operations, viz. support tasks such as holding a tool or automated stereotaxy, e.g. the NeuroMate stereotactic robot. On the contrary, active robots exhibit autonomous behaviour and operate without direct interaction with the surgeon. Prominent examples include the CyberKnife (Accuray Inc., Sunnyvale, CA, USA) and RoboDoc (Curexo Technology Corp., Fremont, CA, USA) (7); multiple publications have assessed the latter's efficacy, and it is discussed further below (8,9). Probot also represents one of the first applications of an autonomous robot in the clinical setting, initially used in 1991 for a transurethral resection of the prostate (10). For the first time in history, a robotic device was used for removal of human tissue.

Remote manipulators, or surgical extenders, are probably the most common surgical robots in use today. One of the most successful commercial robots in this class is the da Vinci robot (Intuitive Surgical, Sunnyvale, CA, USA), which was originally implemented for heart surgery (11). In this master–slave telemanipulator system the surgeon sits at a master console next to the patient, who is operated on by the slave arms (Figure 1). The surgeon views the internal organs through an endoscope and, by moving the master manipulator, can adjust the position of the slave robot. The surgeon compensates for any soft-tissue motion, thus closing the servo-control loop by visual feedback. The high-definition 3D images and micromanipulation ability of the robot make it ideal for transpubic radical prostatectomy, with reduced risk of incontinence and impotence (12).

Figure 1. The da Vinci SI telesurgical robot. Reproduced by permission of Intuitive Surgical Inc

Figure 2. A view of the MiroSurge telesurgical system. Two MIRO surgical manipulators are clearly visible. Reproduced by permission of the German Aerospace Centre

A more recent telesurgery robot is the MiroSurge system (13) (Figure 2), developed by the German Aerospace Centre (DLR). The system consists of a master–slave platform, with the slave platform involving three robotic manipulators (MIRO surgical robots; see Figure 3), two carrying surgical tools and one carrying an endoscope.

Figure 3. A close-up of the MIRO robotic manipulator used in the MiroSurge surgical robot. Reproduced by permission of the German Aerospace Centre

Remote manipulators belong to a broad field of robotics called telerobotics. Niemeyer et al. (14) present a more engineering-orientated classification of telerobots with respect to control architecture and user interaction. However, this classification holds true for surgical telemanipulators as well. Depending on the degree of user interaction, three categories are defined: direct or manual control, shared control and supervisory control robotic systems. In direct control the surgeon operates the slave robot directly through the master console. This involves no autonomy on the slave end and the robot mirrors the surgeon's movements (although some filtering may take place, e.g. tremor reduction and movement scaling). Clearly, this mode involves the surgeon the most. At the other end, in supervisory control the procedure is executed solely by the robot, which acts according to a computer program that the surgeon inputs into it prior to the procedure. The surgeon (supervisor) gives high-level directives and the robot has to operate autonomously in order to carry them out, closing the loop locally. The surgeon is still indispensable in planning the procedure and overseeing the operation, but does not partake directly. Because the robot performs the entire procedure, it must be individually programmed for the surgery.

Finally, in shared control the surgeon and the controller share command of the manipulator and work together to carry out a task. This means that the human and the robot share the same resources, e.g. the manipulators. A prominent technique in shared control is the use of virtual fixtures (15). Virtual fixtures are invisible 'rulers' that constrain the surgeon's movements. When the surgeon drives the manipulator towards the fixture, the controller starts to apply a deterrent force on the master console (force feedback), pushing the surgeon away from this location (passive assistance). Conversely, the controller can apply an assistive force to help the surgeon move towards a correct path (active assistance). Other applications of shared control include motion compensation (e.g. compensation of the motion of a beating heart) and obstacle avoidance. Obviously, shared control combines the intelligence of the surgeon and the robot, so the robot presents limited autonomy.
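
To make the passive-assistance idea concrete, the following is a minimal sketch of a virtual-fixture force law, assuming a planar forbidden region and a linear stiffness model; the geometry, gain and function names are illustrative and are not drawn from the cited work.

```python
import numpy as np

def fixture_force(tip_pos, plane_point, plane_normal, stiffness=200.0):
    """Repulsive force pushing the master handle away from a forbidden
    half-space (a 'passive' virtual fixture). The force grows linearly
    with penetration depth and is zero outside the fixture."""
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(plane_point - tip_pos, n)  # >0 once the tip crosses the plane
    if depth <= 0.0:
        return np.zeros(3)          # outside the fixture: no deterrent force
    return stiffness * depth * n    # push back along the plane normal

# Example: a plane at z = 0 guarding tissue below; the tip is 2 mm past it
f = fixture_force(np.array([0.0, 0.0, -0.002]),
                  np.array([0.0, 0.0, 0.0]),
                  np.array([0.0, 0.0, 1.0]))
print(f)  # force rendered on the master console, here [0, 0, 0.4] N
```

An active fixture would simply flip the sign convention, applying the force towards a desired path rather than away from a forbidden region.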

From the previous classifications, it is clear that robots may have varying levels of autonomy. Autonomous or semi-autonomous modes have already been incorporated into medical robotics; however, most of them belong to the surgical CAD/CAM class. Autonomous performance of specific tasks can lessen the workload for the surgeon and potentially shorten the operation time. 'Smart tools' with a degree of intelligence seem to find more favour with modern surgeons than fully automated systems, which replace the surgeon's role by using imaging and robotics technology (16). An example of the technological opportunities and the clinical dismay is the Minerva system for neurosurgery (17). However, research on robotics in surgery has also focused on developing autonomous robotic systems for biopsy (e.g. PAKY-RCM) (18), orthopaedic surgery (e.g. RoboDoc (7) and Arthrobot (5)), neurosurgery (e.g. NeuroMate), etc.

Before analysing the autonomy of the various robotic systems involved in medical practice, one must first define what constitutes 'autonomy', along with the parameters that affect it. In the next section a formal analysis of the problem is presented, orientated towards surgical application.

The problem of autonomy

The word 'autonomy' may have different meanings, depending on the system and application. Different robots present different degrees of autonomy. Generally, by the word 'autonomy' one means that the robot operates on its own so as to perform a specific task. However, autonomy is not solely a number of preprogrammed movements (in this case the 'robot' would be classified as an automaton); it involves perception of the environment by the robot and a corresponding adaptation of its behaviour to the new parameters that come up. Thus, the robot seems to present agency, i.e. purposeful actuation in the environment. Of course, different tasks require different actions, with varying complexity. For example, it is easier to instruct the robot to move from point A to point B than to tell it to cut along a prescribed path or perform a suture. Thus, a first parameter affecting the level of autonomy is the complexity of the mission which the robot has to carry out. A second significant parameter is the actual environment in which the robot operates. For example, tasks in a static environment are easier to handle than those in a slowly changing environment, which in turn are easier than those in a dynamic one. This observation accounts for the vast majority of autonomous medical robots, which operate in a largely static environment, using solid bony structures and fiduciary markers for registration. Thus, environmental difficulty is the second parameter affecting robot autonomy. A third parameter is human independence, e.g. does the robot depend on surgeon feedback in order to complete the task or does it operate completely without human input? Is human interaction needed throughout the mission or is it confined to small time windows? Is the robot teleoperated or does it move autonomously?

The previous three parameters combined can define the level of autonomy of a robot, and can be depicted in a diagram called the Autonomy Levels for Unmanned Systems (ALFUS) model (Figure 4), which is described in the ALFUS framework (19), a collaborative effort involving several US organizations which formed the ALFUS Ad Hoc Work Group to address the issue of autonomy in robotic systems. The framework also specifies metrics in order to quantify each axis of the diagram.

Figure 4. A depiction of the ALFUS diagram, used in describing the level of autonomy of a robotic system

In the field of robotics, the notion of autonomy is heavily dependent on the principle of feedback. As an example, consider a human and a robot performing a simple mundane task. Even though it is difficult to imagine a human completely cut off from his environment, this is easy when it comes to robots. Sensors, e.g. encoders, cameras, etc., provide the necessary information about the actual state of the system. This information synthesizes the feedback signal, which is used by the controller in order to exhibit autonomous behaviour. The environment is perceived through sensor information and, by processing this information, the robot creates a structured image of the environment (external state) and itself (internal state). This constitutes the 'sense' phase. Both perceptions are essential for carrying out a task successfully. Although a human can easily perceive and process the environment, the robot must formalize it in a very accurate way in order to 'understand' it. Having reconstructed these images, the problem is then transferred to the planning task. Planning is the process of computing the future internal states the system must acquire, e.g. move a joint along a path, in order to complete the task. Each action can be characterized by preconditions and postconditions. Preconditions indicate what is required to perform an action, while postconditions describe possible situations after the action. The 'planning' process involves parameters that express quantities in the actual environment, e.g. the position and torque of the joint along a path, and as such both internal and external states (self and environment) must be previously known through sensing. Having computed the plan, the problem shifts to the acting phase. Acting is the actual movement of the system in the environment. This can be achieved through actuators (electrical motors, pneumatic motors, etc.). Note that the actuators impose their own limits on the actual movement; hence, these limitations must be taken into account during the planning phase.

The above steps constitute three important operations in robot control: sense–plan–act. Robot control architectures use these three phases in various ways in order to achieve the desired behaviour. The older, but largely abandoned, architecture places these phases in a sequential pattern, i.e. the sense–plan–act cycle. This architecture is also called 'deliberative control' (20). At the other end, there is the 'reactive control' paradigm, which does away with planning altogether. Deliberative control is slow and depends heavily on internal models and accurate information, while reactive control is fast and computationally 'light' but cannot exhibit high-level behaviour. Hybrid architectures also exist, leveraging the advantages of both paradigms. It is doubtful, however, that planning can be avoided in surgical robots, since surgical skills and manoeuvres are very complex in nature.
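
A minimal sketch of one pass through the sequential sense–plan–act cycle may help fix the idea; the sensor names, the single-joint 'plant' and the gains below are invented placeholders, not an actual surgical control architecture.

```python
import numpy as np

def sense(readings):
    """Sense phase: fuse raw readings into internal and external state."""
    internal = readings["encoders"]   # joint position (internal state)
    external = readings["camera"]     # tracked target position (external state)
    return internal, external

def plan(internal, external, gain=0.1):
    """Plan phase: compute the next set-point, stepping toward the target
    (a stand-in for a full deliberative path planner)."""
    return internal + gain * (external - internal)

def act(setpoint, max_cmd=0.05):
    """Act phase: command the actuator while respecting its limits, which
    the planner must ultimately account for (as noted in the text)."""
    return np.clip(setpoint, -max_cmd, max_cmd)

# One sequential sense-plan-act pass ('deliberative control')
readings = {"encoders": np.array([0.0]), "camera": np.array([0.3])}
internal, external = sense(readings)
print(act(plan(internal, external)))   # -> [0.03]
```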

The surgical field is a special environment for a robot and should be managed according to the previous framework. The laparoscopic environment consists mainly of soft tissue, bony tissue, air and fluid. It is obviously a dynamic environment, with constant alteration of the shape of its constituents during the operation. Perception of this environment must result in a digitized image of the operating field. Preoperative imaging examinations are not of great help, because the deformation of tissues (with insufflation of CO2, respiratory movements and instrument manipulations) may preclude correct registration to the real anatomy. Also, planning of the surgical manoeuvres is a very complex task that the system must take into special account. The control algorithms must possess knowledge of the appropriate techniques for each phase of an operation. These techniques comprise a set of complex movements that can be learned from an expert, i.e. a surgeon using a manipulator which records his movements, or be mathematically planned and described in a suitable manner. Having a database of these movements, the robot, by selectively filtering the appropriate ones, should robustly fit them to an actual operating scenario, under the direction of the surgeon when necessary (surgeon-supervised robotic surgery). The system could also learn from its own operations and acquire new field knowledge that will be incorporated into the existing corpus. In complex tasks, many hierarchical levels of planning can coexist. Depending on the level of autonomy required, there can be several planning algorithms operating in parallel. In the case of laparoscopic surgery, autonomy should probably be introduced in the context of task execution, i.e. as an intelligent tool obeying the instructions of the supervising surgeon [an idea also mentioned by Baena and Davies (10)]. In such a setting, the surgeon should instruct the robot what to do, e.g. grab, suture, etc., and the robot will have to figure out how to do it. Decision making, i.e. what to do, is probably best left to the surgeon, since humans, given the correct training and experience, are better at making decisions in unstructured or chaotic situations than robots.

Planning and skill modelling

With a supervisor-controlled surgical robot, the surgeon is able to instruct the robot to perform certain tasks under his supervision, much as happens with the training of young surgeons early in their internships. The system is supposed to keep a database of different sets of possible surgical manoeuvres (drawn from recordings of actual human movements), encoded in a suitable manner. This is known as surgical skill modelling. Work towards this goal has already been performed by several researchers. Rosen et al. (21) have used a discrete Markov model to decompose minimally invasive surgery (MIS) tasks and have applied it to tying an intracorporeal knot on an animal model. Kragic et al. (22) have deployed a hidden Markov model (HMM), using primitive 'gestemes', and have modelled two simple surgical tasks in vitreoretinal eye surgery. More abstractly, Kang and Wen (23) have mathematically analysed knot tying and have developed the conditions for knot placement and tension control. An interesting approach to skill modelling is the Language of Surgery project at Johns Hopkins University (24,25). The main idea behind it is that surgical skill can be learned, much like a language. Thus, one can identify elementary motions, juxtaposed to 'phonemes', and by combining them new 'words' can be constructed. Again, using these 'words', one can produce surgical 'phrases', and so on. The surgical procedure is decomposed in a hierarchical manner (Figure 5), consisting of a sequence of tasks, e.g. suturing (26). Each task is in turn decomposed into a sequence of more elementary motions called surgemes, which themselves comprise sequences of low-level motion primitives called dexemes.

Figure 5. Hierarchical decomposition of a surgical task according to the Language of Surgery project. Each level is decomposed into simpler motion gestures, ranging from the entire procedure (high-level) to elementary surgical motion primitives called 'dexemes' (low-level)

Under this framework, Lin et al. (24) have used linear discriminant analysis along with a Bayesian classifier in order to model a suturing task. They created a motion vocabulary consisting of eight elementary suturing gestures (reach for needle, position needle, insert needle/push needle through tissue, etc.) by collecting motion data from the da Vinci system under the command of an expert surgeon. The system was able to classify the surgical gestures with 90% accuracy. Reiley et al. (27) have extended the previous work using more advanced statistical modelling, replacing the Bayes classifier with a three-state HMM and increasing the number of surgemes to 11; this system performed with an accuracy of 92%. Even though these results are promising, more work is needed in order to model enough surgical tasks. These tasks can then be combined in the planning phase so as to produce a meaningful outcome in autonomous robotic surgery.
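
As an illustration of this classification pipeline, here is a minimal sketch combining linear discriminant analysis with a Gaussian (naive Bayes) classifier, in the spirit of Lin et al.; the kinematic features and surgeme labels are synthetic stand-ins, not the actual da Vinci data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for kinematic windows: 3 surgemes (e.g. 'reach for
# needle', 'position needle', 'insert needle'), 200 windows each,
# 12 kinematic features per window.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(200, 12))
               for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LDA projects the windows to a discriminative subspace; the Gaussian
# classifier then assigns each projected window to a surgeme.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2), GaussianNB())
clf.fit(X_tr, y_tr)
print(f"surgeme recognition accuracy: {clf.score(X_te, y_te):.2f}")
```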

Planning is the process of fitting the specified manoeuvre to the actual operating conditions in the most appropriate way. The planning algorithm should also compensate for changes in the environment, e.g. soft-tissue deformations, in the immediate future. The output of this algorithm is primitives of motion, much like the surgemes described above. However, these primitives must be translated into a more accurate description of robot movements. This task should be performed by a low-level planner, which receives the output of the high-level planning algorithm. The primitives of motion are then translated into actual trajectories that the robot must follow in order to complete the specified task. This algorithm must also take into account various constraints, e.g. distance from the surgical field, quickest route, etc. Due to the dynamic nature of the environment, the high-level plan might prove to be infeasible in some instances, e.g. respiratory motion may cause unmodelled tissue deformation, or the surgeon could move organs that obstruct his/her line of sight. In such a case, the high-level plan can be recomputed to produce new feasible primitives of motion that are then transferred to the low-level planner. This loop must therefore include a feasibility check that runs constantly while the robot moves along the executed trajectory.
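
The two-level loop can be summarized schematically as below; every function body is an illustrative placeholder for the planners and feasibility check described above, not an implementation from the literature.

```python
def high_level_plan(task, field_state):
    """Fit the specified manoeuvre to the current field; returns motion
    primitives (surgeme-like symbols), here just labelled steps."""
    return [f"{task}-primitive-{i}" for i in range(3)]

def low_level_plan(primitive, field_state):
    """Translate a motion primitive into a concrete trajectory, subject to
    constraints (distance from the field, quickest route, ...)."""
    return [f"{primitive}/waypoint-{i}" for i in range(2)]

def feasible(primitive, field_state):
    """Constant feasibility check: unmodelled deformation or an occluded
    line of sight can invalidate the current plan mid-execution."""
    return field_state.get("occluded") is not True

def execute(task, field_state):
    primitives = high_level_plan(task, field_state)
    i = 0
    while i < len(primitives):
        if not feasible(primitives[i], field_state):
            primitives = high_level_plan(task, field_state)  # replan
            i = 0
            continue
        for waypoint in low_level_plan(primitives[i], field_state):
            print("tracking", waypoint)   # robot follows the trajectory
        i += 1

execute("suture", {"occluded": False})
```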

Based on the results of the Language of Surgery project, Reiley et al. developed a prototype system that generates surgical motions based on expert demonstration (28). This system produces surgemes for three common surgical tasks (suturing, knot tying and needle passing) and combines them using dynamic time warping and Gaussian mixture models. The actual motion paths are produced using Gaussian mixture regression. The results were validated against HMM models of surgemes (26), and were classified as belonging to an expert surgeon. Although this work is a significant first step towards automating surgical gestures, the system is open-loop, without any experimental validation on a real robot.

Intelligent control

'Intelligent control' refers to the use of various techniques and algorithms that solve problems using artificial intelligence. Known intelligent algorithms, usually referred to as 'intelligent control systems' or 'expert systems', include neural networks, fuzzy logic, genetic algorithms and particle swarm optimization (PSO) techniques (29,30), to name a few. The above methods are often used to provide a more efficient solution (i.e. convergence to the problem solution). Neural networks and fuzzy logic are more suitable for real-time control problems, whereas genetic algorithms and PSO are classified as heuristic methods, better suited to offline preprocessing. These intelligent algorithms can cope with imprecise data (fuzzy logic), highly non-linear models (neural networks) and large search spaces (genetic algorithms, PSO). A useful feature of these intelligent techniques is adaptive learning, i.e. the ability to learn from previous experience. Thus, they can incorporate field knowledge acquired during actual surgical operations and improve their performance over time (31–33).
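
As a concrete illustration of the heuristic, offline character of PSO, the following is a minimal textbook implementation minimizing a toy cost function; the parameters are conventional defaults and are unrelated to the cited medical applications.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    """Toy objective (e.g. a calibration residual); minimum at (1, -2)."""
    return (x[..., 0] - 1.0) ** 2 + (x[..., 1] + 2.0) ** 2

n, dim, iters = 30, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and attraction weights

pos = rng.uniform(-5, 5, (n, dim))        # particle positions
vel = np.zeros((n, dim))
pbest = pos.copy()                        # each particle's best position
gbest = pbest[np.argmin(cost(pbest))].copy()   # swarm-wide best position

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    improved = cost(pos) < cost(pbest)
    pbest[improved] = pos[improved]
    gbest = pbest[np.argmin(cost(pbest))].copy()

print(gbest)   # converges near [1, -2]
```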

Methods

We present results from a literature review pertaining to commercial medical systems which incorporate autonomous and semi-autonomous features, as well as experimental work involving automation of various surgical procedures. The results are drawn from major bibliographic databases (IEEE, ScienceDirect, PubMed, SAGE, Springer, Wiley and Google Scholar). More focus has been put on newer published work (mainly from the last decade). A selection process was also used, excluding papers whose contribution was not experimentally implemented on real robots, except in cases where the results were deemed significant enough for inclusion.

Results

Experimental work

There have been many efforts to develop surgical robots capable of performing some tasks autonomously. Much of this research involves visual servoing, which combines visual tracking and control theory, although different modalities are also widely in use; e.g. ultrasound imaging has been investigated by several researchers, due to its low cost and real-time feedback. The target operations vary from laparoscopic surgery to cochlear implantation to heart surgery, and so on. Depending on the type of intervention, automation is inserted into various steps of the procedure. An analysis of the experimental research is presented in the following sections.

Autonomous suturing

Knot tying is a common procedure during surgery. Automating this task would greatly reduce surgeon fatigue and total surgery time. Building a good knot-tying controller is difficult because the spatial orientations and manoeuvring of multiple instruments must be precisely controlled. The first to investigate robotic knot tying in MIS were Kang and Wen. They developed a custom robotic system called Endobot (34,35), comprising two manipulators which can be controlled in three modes: manually, in shared control and autonomously. In manual mode the surgeon operates the manipulators directly (not in a telesurgical sense), while the controller offers gravity compensation. In shared control, some axes are controlled by the robot while the remaining axes are left to the surgeon. Of course, the most interesting mode is the autonomous mode, in which the robot operates in a supervisory fashion, performing tasks on its own. Kang and Wen describe the process of tying a square knot, having the robot follow a reference trajectory using a simple proportional–integral–derivative (PID) controller. Although they report positive experiments, it seems that the robot operates using a hard-wired policy, meaning that it always repeats the same motion and excludes any possibility of performing the same task with unfamiliar instrument positions.
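
A minimal sketch of such a PID tracking loop is given below, assuming a trivial double-integrator stand-in for one manipulator axis; the gains and trajectory are illustrative, not Endobot's actual values.

```python
import numpy as np

# Illustrative reference trajectory for one axis of the instrument tip
t = np.linspace(0.0, 2.0, 201)
reference = 0.01 * np.sin(np.pi * t)           # metres

kp, ki, kd, dt = 80.0, 40.0, 2.0, t[1] - t[0]
pos, vel, integ, prev_err = 0.0, 0.0, 0.0, 0.0

for r in reference:
    err = r - pos
    integ += err * dt                          # integral term
    deriv = (err - prev_err) / dt              # derivative term
    u = kp * err + ki * integ + kd * deriv     # PID control effort
    prev_err = err
    # Trivial double-integrator stand-in for the manipulator axis
    vel += u * dt
    pos += vel * dt

print(f"final tracking error: {abs(reference[-1] - pos) * 1000:.3f} mm")
```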

In a similar fashion, Bauernschmitt et al. have also developed a system for heart surgery, able to reproduce prerecorded knot-tying manoeuvres (36). The system consists of two KUKA KR 6/2 robotic arms, equipped with two surgical instruments from Intuitive Surgical Inc. A third arm provides 3D vision through a suitable endoscopic camera. The surgeon controls the robot at the master end via two PHANToM haptic devices (Sensable Inc., MA, USA). The surgical instruments have been fitted with force/strain gauges in order to capture forces at the grip. Knot-tying experiments provided positive results, reproducing the manoeuvres even at twice the speed. However, the 'blind' re-execution of prerecorded surgical gestures does not leave room for any practical implementation in a clinical situation.

A more robust control would be provided if the user could 'teach' a series of correct examples to the controller. An interesting study on automating suture knot winding was published by Mayer et al. (37), using the EndoPAR robot and involving a class of recurrent artificial neural networks called long short-term memory (LSTM) networks (38). LSTM can perform tasks such as knot tying, where the previous states (instrument positions) need to be remembered for long periods of time in order to select future actions appropriately. The EndoPAR robot comprises four Mitsubishi RV-6SL robotic arms that are mounted upside-down on an aluminium gantry. Three of the arms hold laparoscopic grippers fitted with force sensors, while the fourth holds a laparoscopic camera. The arms are controlled through PHANToM devices. The authors considered a knot-tying task, breaking it into six consecutive steps; note that all three gripper arms were used. The authors used LSTMs for their experiments and trained them to control the movement of a surgical manipulator so as to successfully tie a knot. The training algorithm used was the Evolino supervisory evolutionary training framework (39). The Evolino-trained LSTM networks in these experiments were able to learn from surgeons and outperform them on the real robot. The current approach only deals with the winding portion of the knot-tying task; therefore, its contribution is limited by the efficiency of the other subtasks required to complete the full knot. Initial results using this framework are promising; the networks were able to perform the task on the real robot without access to teaching examples. These results constitute the first successful application of supervised learning to MIS knot tying.

Mayer et al. have also recently presented a different approach to automated knot tying, developing a system able to learn to tie knots after just one demonstration from the surgeon (40). It calculates motion primitives for the skill, along with a fluid dynamics planning algorithm that generalizes the demonstrated knot-tying motion. However, this system acts more as a proof of concept, since its success rate in knot-tying experiments is approximately 50%. Learning by demonstration has also been investigated by van den Berg et al. (41), using two Berkeley surgical robots (Figure 6).

The authors used a Kalman smoother to infer a reference trajectory for knot tying from multiple human demonstrations. A linear quadratic regulator (LQR) then guided the robot towards the reference, alongside an iterative learning algorithm that improved the quality of convergence. In the experiments, a thread was passed through two rings, with a weight tied to one end to keep the thread in tension. The goal was to tie a knot around one ring. The system was able to perform the knot with increasing speed, going up to seven times faster than the original demonstration (see Figure 7 for a graphical description of the knot-tying motion decomposition).
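
For illustration, the following is a minimal finite-horizon discrete-time LQR sketch of the kind used to track such a reference; the double-integrator model and weights are assumptions for the example, not the Berkeley system's identified dynamics, and the iterative learning component is omitted.

```python
import numpy as np

# Discrete double integrator for one axis of the tool tip (illustrative)
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])          # penalize position error most
R = np.array([[0.01]])             # control effort weight

# Backward Riccati recursion for the finite-horizon LQR gains
N = 200
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                    # gains ordered forward in time

# Regulate the deviation from the reference trajectory to zero
x = np.array([[0.005], [0.0]])     # 5 mm initial position error
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
print(f"remaining error: {abs(x[0, 0]) * 1000:.3f} mm")
```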

Figure 6. The Berkeley surgical robot, used in automatic knot-tying experiments by van den Berg et al. (41). Image © 2010 IEEE

Figure 7. Knot-tying decomposition according to van den Berg et al. (41). The gesture consists of three stages: in the first (1), robot A loops the thread around the gripper of robot B; in the second stage (2, 3), robot B grasps the thread and closes its grippers; in the third stage (4), both robot arms are moved away from each other to tighten the knot. Image © 2010 IEEE

In all three approaches above, only the knot-tying task was considered, requiring manual help in several preparatory stages, e.g. grasping the needle. Tissue piercing in suturing has also been investigated using the EndoPAR robot (42). In this setting, one robotic arm holds a circular needle, while a second one employs a stereo camera. The surgeon uses a laser pointer to pinpoint the place of entry and the robot autonomously performs the stitch. The system uses visual servoing to position the needle on the right spot. Experiments on phantoms and actual tissue provided encouraging results, albeit the tissue presented difficulties in the experiments, such as diffraction of the laser and variable stiffness.

Visual servoing has also been deployed by other researchers for the automation of robotic MIS suturing. Hynes et al. have used two seven-degree-of-freedom (DOF) PA-10 (Mitsubishi Heavy Industries Ltd, Tokyo, Japan) robotic manipulators to perform knot tying, using image feedback from a stereo camera (43). The robots were mounted with laparoscopic graspers which were marked with an optical pattern. This pattern was based on Gray encoding and was used to infer the position and orientation of the tools. The system was used to replicate prerecorded knot-tying movements, although some initial steps were done manually, e.g. passing the needle through a test foam surface. User input was also required at the beginning in order to indicate points of interest (position of the needle and tail). In the experiments the robot was able to tie a knot in approximately 80 s. Failures were also reported, mainly due to incorrect grasping of the needle, slipping, etc. Suturing in robotic microsurgical keratoplasty has been reported by Zong et al. (44), using a custom suturing end-effector mounted on a six-DOF robotic manipulator. The end-effector includes a one-axis force microsensor and performs the motion of tissue piercing and subsequently pulling the thread out. Vision feedback was provided through two CCD cameras mounted on a stereo surgical microscope. The visual servo controller autonomously guided the needle tip, with great precision, to a point specified by the user (the point of needle entry). However, no complete suturing experiments were reported in the study.

Cochlear implantation

Cochlear implantation has become widespread for patients with severe hearing impairment in the last 20 years. Surgery in the middle ear requires delicate movements, since the space is confined and involves sensitive structures. Cochleostomy is a basic step in the procedure, where a hole is drilled in the outer wall of the cochlea, through which the electrode implant is inserted. Perforation of the endosteal membrane by the drill may result in contamination of the endolymph and perilymph with bone dust, increase the risk of postoperative infection and reduce residual hearing. To address this problem, Brett et al. have developed an autonomous micro-drilling robot performing the cochleostomy (45,46). The robot consists of a micro-drill mounted on a linear guide, attached to a passive robotic arm (Figure 8).

Figure 8. The micro-drilling surgical robotic system used by Taylor et al. (46) for robotic cochleostomy. Reproduced by permission of SAGE Publications Ltd

During the operation, the surgeon moves the arm, placing it at the correct pose with the drill facing along the desired trajectory. Following that, the arm is locked and the drill autonomously creates the hole, leaving the endosteal membrane intact; the membrane is then opened with a knife. The controller monitors the force and torque transients exerted on the tool tip and, by analysing them, detects when breakthrough is about to occur, thus stopping the drilling (Figure 9). Clinical experiments (47,48) showed promising results.

A different approach was put forth by Majdani et al., aiming at minimally invasive robotic cochleostomy (49,50). The main purpose here was to create an access canal to the inner ear and perform the cochleostomy using an autonomous robot, without performing a mastoidectomy and exposing critical anatomical structures. To this end, a specially designed robot was constructed, comprising a KKR3 (KUKA GmbH, Augsburg, Germany) six-DOF robot with a surgical drill serving as its end effector (Figure 10).

Figure 9. View of a cochleostomy with the drill bit retracted and the endosteal membrane intact, using the micro-drilling surgical robot (46). Reproduced by permission of SAGE Publications Ltd

Figure 10. View of the robotic set-up used by Majdani et al. (49,50) for minimally invasive robotic cochleostomy. Reproduced by permission of Springer Science+Business Media

Figure 11. Experiment in minimally invasive robotic cochleostomy using a temporal bone (49,50). Fiducial markers placed on the bone are used for localization and registration. Optical markers are also placed on the robot tip and the temporal bone holder. Reproduced by permission of Springer Science+Business Media

The system used a camera along with special markers in order to perform localization and pose estimation for the robot as well as the surgical field (patient). Preoperative planning using patient CT images was also used for the calculation of the optimal drilling trajectory, taking into consideration the distance from critical structures such as the facial nerve, the corda, etc. The system performed in a closed-loop fashion, using image feedback to calculate the error signals of the robot relative to the reference trajectory. Thereafter, the robot autonomously drilled the canal and the cochlea according to the preoperative plan (Figure 11). Tests were performed in 10 cadaveric specimens with positive results. Even though the canal was opened in all experiments, in one the cochleostomy was not completely performed, which can be attributed to noise error in the visual tracking system. Another drawback of image registration is that the fiducial markers must always be visible from the camera, something which cannot be guaranteed in the operating room. Nevertheless, the results show great promise and, combined with the work of the same team in automating cochlear implant insertion, as described in (51), fully autonomous robotic cochlear implantation might be just around the corner.

Ultrasound guidance for percutaneous interventions

Ultrasonography is a popular imaging modality for visualizing percutaneous body structures, since it is cheap, non-invasive, real-time and has no known long-term side-effects. Megali et al. (52) describe one of the earliest attempts to guide a robot using two-dimensional (2D) ultrasound guidance for biopsy procedures. Their system consisted of a manipulator mounted with a biopsy needle at its end-effector, an ultrasound probe and a 3D localizer. These components were integrated into a workstation, fusing the data and providing a graphical interface to the user. The surgeon selected the biopsy target and the position of needle insertion into the body by clicking on the ultrasound image on the computer. The robot automatically acquired the correct pose so as to provide linear access for the needle to the target point, although the actual bioptic sampling was performed manually. Tests in a water tank showed an average accuracy of 2.05 mm, with a maximum error of 2.49 mm. A similar approach to ultrasound-guided robotic transperineal prostate biopsy was presented by Phee et al. (53), where a transrectal ultrasound probe was used to scan the prostate and create a 3D model with the help of a urologist. Subsequently, the entry trajectory was planned and the robotic biopsy system would configure itself to the correct position. The actual needle insertion was performed manually. In vivo experiments were demonstrated, with a placement error reaching approximately 2.5 mm.

Given their ability to reconstruct interesting structures, such as cysts, in three dimensions, 3D ultrasound (3DUS) devices have also been used to provide guidance to biopsy robots. Since 2006, the Ultrasound Transducer Group at Duke University has performed several feasibility studies regarding the use of real-time 3D ultrasound for the autonomous guidance of surgical robots, involving breast biopsy (54), shrapnel detection (54–56) and prostate biopsy (57). The first study investigated the guidance of a three-DOF Gantry III (Techno Inc., New Hyde Park, NY, USA) Cartesian robot, using real-time 3D ultrasound (58). Three experiments were performed in order to assess the positional accuracy. In the first two, the targets were submerged in a water tank while the ultrasound probe performed scanning (the targets consisted of wire models). After the coordinates of the targets had been manually extracted, they were sent to the robot, which moved a probe needle towards them. In the third experiment, a hypo-echoic lesion inside a tissue-mimicking slurry was used. An in vivo experiment, using a canine cadaver, was also performed. The goal was to puncture a desired position on the distal wall of the gall bladder. The accuracy error of the system was approximately 1.30 mm. However, the system did not operate in a closed loop and relied on user input for target acquisition.

The latter was investigated by Fronheiser et al. (59), using the same in vitro experimental set-up, but with a streamlined process. The 3DUS data were captured by the probe and subsequently transferred to a MATLAB program, which analysed them and automatically extracted the goal position. The appropriate movement command was then passed to the robot without any human intervention. Breast cyst biopsy was successfully demonstrated using a 2 cm spherical anechoic lesion (a water-filled balloon) in a tissue-mimicking phantom, as well as in excised boneless turkey breast tissue (55,60) (see Figure 12).
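
The automatic target-extraction step can be illustrated with a minimal sketch: threshold the hypo-echoic (anechoic) voxels of a 3D volume and take their centroid as the goal position. The volume below is synthetic, and the threshold, voxel size and registration transform are assumptions; the cited MATLAB program's actual algorithm is not detailed here.

```python
import numpy as np

# Synthetic 3DUS volume: bright speckle background with a dark
# (anechoic) spherical lesion, mimicking a fluid-filled cyst.
rng = np.random.default_rng(2)
vol = rng.uniform(0.5, 1.0, size=(64, 64, 64))
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
lesion = (zz - 40) ** 2 + (yy - 25) ** 2 + (xx - 30) ** 2 < 8 ** 2
vol[lesion] *= 0.05                              # anechoic interior

# Threshold for hypo-echoic voxels; use their centroid as the target
mask = vol < 0.2
target_voxel = np.array(np.nonzero(mask)).mean(axis=1)

# Convert voxel indices to workspace coordinates (assumed 0.5 mm
# isotropic voxels and a known, pre-registered probe-robot transform)
voxel_size_mm = 0.5
target_mm = target_voxel * voxel_size_mm
print("target (z, y, x) in mm:", np.round(target_mm, 2))  # ~ (20, 12.5, 15)
```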

Automatic guidance using real-time 3D catheter transducer probes for intravascular and intracardiac applications was also investigated in (59) and further analysed in (61). Two experiments were performed using a water tank, while a third one involved a bifurcated abdominal aortic graft. In all three, the goal was to drive the robot probe to touch a needle tip at a specified position, using the catheter transducer for 3D imaging. Error measurements for the first two experiments gave errors of 3.41 and 2.36 mm, respectively. No measurements were taken for the third. Note that the MATLAB position-extraction algorithm was not used and the needle position was extracted manually from the 3D data.

Prostate biopsy using a forward-viewing endoscopic matrix array and a six-DOF robot has also been demonstrated (55,57). The robot's gripper held the transducer, which was equipped with an echogenic biopsy needle and targeted a turkey breast acting as the prostate phantom. The 3DUS produced a volumetric representation of the phantom, which was then passed to a program to automatically calculate its coordinates. The voxels of the prostate phantom were divided into eight equal sectors, and the robot was expected to sample each one of them (Figure 13). The robot autonomously performed the biopsy with a success rate of 92.5%.

Figure 12. Experiment in US-guided robotic biopsy, described by Ling et al. (55). A simulated cyst is placed inside a boneless turkey breast, with the biopsy robot using real-time US guidance. (a) 3D-rendered image of the cyst in turkey breast; (b) B-scan of the cyst; (c–e) simultaneous B- and C-scans, respectively, of the needle tip penetrating the cyst. Image © 2009 IEEE

Figure 13. A trial in robotic prostate multiple-core biopsy using 3D US guidance (57). The tissue phantom is a turkey breast divided into eight sectors; the robot has to stick each one successfully. Arrows indicate placement of the needle tip. In (h), the needle is placed in the correct zone but the needle tip has failed to penetrate the prostate surface. Image © 2009 IEEE

Since ultrasound can provide real-time vision feedback, visual servoing has been investigated by several researchers as a means of guiding a robot. Vitrani et al. (62) describe the use of 2D ultrasound for the guidance of a MIS robot in heart mitral valve repair surgery. The robot, holding surgical forceps, is introduced through a trocar in the patient's torso, while an ultrasound probe, placed in the oesophagus, provides 2D imaging. The forceps intercept the echographic plane at two points. Keeping the probe still, the surgeon designates new coordinates for the forceps on the ultrasound image, and the visual servo controller is expected to carry out the command. Simulation and in vitro results show an exponential convergence and robustness of the control, which is further exemplified by in vivo experiments on porcine models (63). Similar experiments are presented by Stoll et al. (64), using ultrasound images to get a robotic manipulator to touch a target (a grape) submerged in a compound of oil and water. The reported success rate was 88%. Percutaneous cholecystostomy via robotic needle insertion is described by Hong et al. (65), where the authors used a five-DOF robot to reach the gallbladder. A notable feature of this work was the compensation of movement and deformation of the target caused by involuntary motions, such as respiration. The visual controller analysed the ultrasound images and updated the correct insertion path in real time. However, during the actual insertion the subject had to hold his/her breath to stop the deformation. The problem of tumour mobility in breast biopsy was also considered by Mallapragada et al. (66), describing an interesting system which manipulates the ultrasound probe as well as the breast, in order to compensate for out-of-plane motions and keep the tumour visible.

Visual servoing using 3DUS was first demonstrated by Stoll et al. (67). The authors used a PHANToM robot, mounted with a hollow steel cannula at its end-effector, and a 3DUS scan head in order to localize the instrument and provide pose information. The instrument featured a passive marker at its end, which enabled the estimation of position and orientation. The US data were fed to a PC, which calculated the error of the instrument's tip relative to a goal position and issued movement commands through a linear PD controller. Experiments showed a position error <1 mm but, due to the slow update rate of 2 Hz, instrument velocities >3 mm/s could destabilize the system. A faster visual servo controller utilizing 3DUS was presented by Novotny et al. (68), by means of performing much of the image processing on a graphics-processing unit (GPU). GPUs are specially designed to perform very fast computations in image manipulation, and thus the control loop was able to attain a speed of up to 25 Hz. A different approach to ultrasound visual servo control has been described by Sauvee et al. (69), deploying a non-linear model predictive controller (NMPC). The NMPC was used to control a Mitsubishi PA10 robot, respecting system constraints such as actuator saturation, joint limits and ultrasound range.
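
A minimal sketch of such a position-based servo loop at a slow imaging rate follows; it uses a proportional-only law with a speed clamp echoing the 3 mm/s stability limit noted above (the derivative term and all numerical values are simplifications, not the cited controller).

```python
import numpy as np

goal = np.array([0.010, -0.004, 0.020])        # target position (m)
tip = np.zeros(3)                              # instrument tip estimate

rate_hz = 2.0                                  # slow 3DUS update rate
dt = 1.0 / rate_hz
kp = 0.8                                       # proportional gain per update

for _ in range(20):
    error = goal - tip                         # from the 3DUS pose estimate
    velocity = kp * error / dt                 # commanded Cartesian velocity
    # Keep the commanded speed below the rate at which the slow tracker
    # can re-acquire the marker (the instability noted in the text)
    speed = np.linalg.norm(velocity)
    if speed > 0.003:                          # 3 mm/s limit
        velocity *= 0.003 / speed
    tip = tip + velocity * dt

print(f"residual error: {np.linalg.norm(goal - tip) * 1000:.3f} mm")
```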

Motion compensation

Motion compensation refers to the apparent cancellation of organ motion in the surgical field through image processing and robot control algorithms. Typically, the motion of the field (e.g. heart beat, respiratory motion, etc.) is captured by an imaging device in real time, rectified and presented to the surgeon as still. Concurrently, the robot maintains a steady pose with respect to the field, essentially tracking its motion and moving along with it. This function, however, is transparent to the surgeon on the master end of the robotic telesurgery system, who effectively operates on a static image without perceiving the motion of the robot on the slave end. This approach is particularly interesting in off-pump coronary artery bypass graft (CABG) surgery, because it can obviate the need for mechanical and vacuum stabilizers. The control scheme falls under the shared control paradigm, since both the controller (software) and the surgeon use the robot at the same time. Motion compensation presents challenges on two ends. The first is the image capture and rectification of the motion itself (although different modalities, such as ultrasound, have also been used), as it can present very fast dynamics (e.g. a beating heart). This mandates the use of high-speed cameras (in the range of 500–1000 fps) and increased processing power. At the other end, the control of the robot is also demanding, because it has to track very fast-moving targets.

Among the first attempts to develop a motion compensator for beating heart surgery was the work presented by Nakamura et al. (70), who introduced the notion of heartbeat synchronization. The authors used a six-DOF robot and a high-speed camera at 995 fps in order to track a point on the image, created using a laser pointer. The image was moved in the image buffer so as to keep the point at the same position; thus, no rectification was performed. In vitro and in vivo experiments on a porcine beating heart were positive, giving a maximum tracking error of approximately 0.5 mm. Tracking of a beating heart was also investigated by Ginhoux et al. (71). The authors placed four light-emitting diodes (LEDs) on the heart surface in order to capture the motion with a 500 fps camera. Model predictive control (MPC) algorithms were also used, employing a heart-beat model for reference. In vivo tests on a pig heart produced encouraging results of low-variance tracking error, with median values of 0.09 and −0.25 px on the x and y axes, respectively. MPC was further developed in (72,73). Motion prediction was also investigated in (74), using a least squares approach and an artificial neural network implementation. An interesting feature of this study was the ability to predict the motion of visually occluded parts of the heart and the fusion of biological signals (ECG and RSP) into the estimation algorithms. The algorithms, however, were not tested on a real robot.
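
One simple least-squares predictor of quasi-periodic heart motion fits a truncated Fourier series at the (assumed known) heart rate and extrapolates one control interval ahead; the signal below is synthetic, and the scheme is only a sketch of the prediction idea, not the cited algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f_heart = 500.0, 1.2                 # camera rate (Hz), heart rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
# Synthetic heart-surface displacement: two harmonics plus tracking noise
motion = (2.0 * np.sin(2 * np.pi * f_heart * t)
          + 0.5 * np.sin(2 * np.pi * 2 * f_heart * t + 0.7)
          + 0.05 * rng.standard_normal(t.size))

# Least-squares fit of a truncated Fourier series at the known heart rate
harmonics = 3
cols = [np.ones_like(t)]
for k in range(1, harmonics + 1):
    cols += [np.sin(2 * np.pi * k * f_heart * t),
             np.cos(2 * np.pi * k * f_heart * t)]
Phi = np.stack(cols, axis=1)
coef, *_ = np.linalg.lstsq(Phi, motion, rcond=None)

# Predict the displacement one control interval (20 ms) ahead
t_next = t[-1] + 0.02
basis = [1.0] + [f(2 * np.pi * k * f_heart * t_next)
                 for k in range(1, harmonics + 1) for f in (np.sin, np.cos)]
print(f"predicted displacement: {np.dot(coef, basis):.3f} mm")
```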

The use of biological signals for reference estimation in MPC was also treated by Bebek et al. (75–77). However, the authors did not employ vision tracking but instead used a sonomicrometry system to collect motion data from the heart. This bypassed the problem of visual occlusion of the surgical field by the robotic manipulators or other surgical tools. Experiments with a PHANToM robot produced an RMS error of approximately 0.6 mm in the three axes. 3DUS-guided motion compensation for beating heart mitral repair was presented by Yuen et al. (78–81). The authors were able to control a one-DOF linear guide for an anchoring task, using feedback from a 3DUS system. Due to the latency of the capturing process, predictive filters were also employed. Experiments showed an RMS synchronization error of 1.8 mm. Based on these results, Kesner and Howe have also presented an ultrasound-guided cardiac catheter utilizing motion compensation (82).

A different approach was presented by Cagneau et al. (83), using feedback from a force sensor mounted on an MC2E robot for motion compensation. Under the assumption that the motion is periodic, the authors used an iterative learning controller, along with a low-pass filter, to cancel the motion. In vitro results showed the potential of this approach; however, the assumption of periodicity was an oversimplification of the actual motion of the heart. A more robust approach to motion estimation was discussed by Duindam and Sastry (84), using ECG and respiratory signals in order to model and estimate the full 3D motion of the heart's surface.

Optical coherence tomography guidance for vitreoretinal surgery

Optical coherence tomography (OCT) is a relatively new optical tomographic technology which uses light in order to capture 2D and 3D images of optical scattering media at µm-level resolution (85). OCT has achieved real-time 3D modes and is mostly used in retinal surgery, as well as in optical biopsies. Due to its unique features, OCT has recently been integrated into a vitreoretinal robotic surgery system, providing real-time guidance. This work was described by Balicki et al. (86), using the peeling of epiretinal membranes as a reference application. The authors modified a vitreoretinal pick (25 gauge), passing a single optical fibre through the cannula to act as the OCT probe. The fibre was connected to an OCT system, and the instrument was mounted onto a high-precision Cartesian robot. A force/torque-sensing handle was also attached to the robot for 'hands-on' control.

The system accommodated three tasks: a 'safety barrier' task, much like a hard virtual fixture, where the robot constrained the probe from approaching the retinal surface closer than an imposed limit; a 'surface tracking' task, where the robot tracked the motion of the surface, keeping a steady distance of 150 µm; and a 'targeting' task, where the robot would insert the pick at a user-designated location. In vitro experiments produced encouraging results, although further research is needed to overcome the limitations of this study. For example, the probe was always perpendicular to the surface, while in actual surgery oblique angles are common. Better controller design is also important, in order to minimize overshoot and tracking errors.
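The first two tasks are essentially distance-based control laws on the OCT-measured probe-to-surface range. A minimal sketch of their logic follows; the 150 µm standoff comes from the study itself, while the barrier limit and gain are hypothetical.

```python
# Sketch of the distance-based behaviours described above, driven by the
# OCT-measured probe-to-retina distance. Illustrative values only.

SAFETY_LIMIT_UM = 250.0    # hypothetical hard virtual-fixture limit
STANDOFF_UM = 150.0        # surface-tracking set point reported in the study
GAIN = 0.3                 # proportional gain, per control cycle

def axial_command(distance_um, surgeon_axial_um, mode):
    """Return the axial (toward-retina) motion command in µm for one cycle."""
    if mode == "safety_barrier":
        # Pass the surgeon's motion through, but clamp any advance that would
        # bring the probe inside the barrier.
        allowed = max(distance_um - SAFETY_LIMIT_UM, 0.0)
        return min(surgeon_axial_um, allowed)
    if mode == "surface_tracking":
        # Servo the probe to hold a constant standoff from the moving surface.
        return GAIN * (distance_um - STANDOFF_UM)
    raise ValueError(f"unknown mode: {mode}")

# Example: probe 400 µm from the surface, surgeon commands a 200 µm advance
print(axial_command(400.0, 200.0, "safety_barrier"))   # -> 150.0 (clamped)
print(axial_command(400.0, 0.0, "surface_tracking"))   # -> 75.0 toward surface
```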

Clinical applications

Autonomous and semi-autonomous systems have already been used in neurosurgery and orthopaedics, mainly because the bony framework in these operations offers a good reference for stereotactic orientation of the instruments. At the same time, many projects are still in the experimental phase for thoracoscopic and laparoscopic surgery because, as mentioned previously, tissues in these settings are deformable and the preoperative images may differ from the intraoperative conditions.

Examples of orthopaedic robots

Replacement of hip joints that have failed as a result of disease or trauma is very common. In the current manual procedure, the cavity is cut by the surgeon with handheld broaches and reamers forced into the femur, which leaves a rough and uneven surface. In order to obtain higher precision, research led to a robotic approach for sculpting the femoral cavity (87). The Robodoc system was developed in the mid-1980s and is now widely commercially available (88). Clinical trials have confirmed that the femoral pocket is more accurately formed using the Robodoc. Also, because of the need to provide precise numerical instructions to the robot, preoperative CT images are used to plan the bone-milling procedure. This gives the surgeon an opportunity to optimize the implant size and placement for each patient. Titanium pins are used in the femoral condyles and greater trochanter for registration purposes. The control of Robodoc is essentially autonomous: the robot follows the planned cutting paths without the surgeon's guidance. After the pocket is milled, the surgeon continues as in the manual procedure (87).
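The pin-based registration step amounts to computing the rigid transform that maps fiducial coordinates in the preoperative CT to the same fiducials as probed in robot coordinates. The sketch below uses the standard SVD-based (Kabsch) least-squares solution as a generic illustration of this step; it is not the vendor's algorithm.

```python
import numpy as np

def register_points(ct_points, robot_points):
    """Return rotation R and translation t mapping CT coordinates to robot
    coordinates, minimizing sum ||R @ p_ct + t - p_robot||^2 (Kabsch method)."""
    c_ct = ct_points.mean(axis=0)
    c_rb = robot_points.mean(axis=0)
    H = (ct_points - c_ct).T @ (robot_points - c_rb)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = c_rb - R @ c_ct
    return R, t

# Example: three pin locations in CT, and the same pins probed by the robot
ct = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 40.0, 0.0]])
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
robot = ct @ true_R.T + np.array([10.0, -5.0, 3.0])

R, t = register_points(ct, robot)
fre = np.linalg.norm(ct @ R.T + t - robot, axis=1)      # fiducial registration error
print(f"max FRE: {fre.max():.2e} mm")
```

With noise-free synthetic pins the fiducial registration error is essentially zero; in practice the slow part criticized in the text is physically locating each pin with the robot, not this computation.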

Recent reports on approximately 130 hip replacements from an ongoing clinical study in the USA used radiographs to compare Robodoc-treated patients with a control group (89). The Robodoc cases showed significantly less space between the prosthesis and the bone, and placement of the implant was also improved. Furthermore, no intraoperative femoral fractures occurred in the Robodoc group, whereas three were observed in the control group. The results also showed improved prosthetic fit, and the overall complication rate was reduced to 11.6%, from the reported manual procedure rates of 16.6–33.7%. In addition, the surgical time decreased dramatically as surgeons gained experience with the system and modified the procedure: the first 10 cases averaged 220 min, whereas the current level is 90–100 min. Robodoc has thus succeeded in improving fit. However, a number of disadvantages remain to be overcome: the traumatic procedure of pin placement and a slow pin-finding registration process. Efforts aim toward reducing the number of pins, and even eliminating them altogether using other registration techniques. Other issues arise from the process of fixing the femur to the base of the robot, which is time-consuming and may also be a cause of postoperative pain. In relation to this, motion of the bone within the fixator during cutting can be a major problem, and several incidents of femur motion can extend the operation significantly. Better fixation or continuous monitoring and registration should be further developed (87). Finally, although prosthetic fit and positioning appear to be improved, it is crucial to address the question of whether this improves treatment in the long term. More studies showing a significant correlation between implant fit and long-term outcome are expected in the future (88).

In a large consecutive series of 143 total hip replacements (128 patients) using the Robodoc system, the authors concluded that the system achieves results equal to those of the manual technique; however, there was a high number of technical complications directly or indirectly related to the robot (90). Another recent study compared a non-fiducial, surface-based registration technique (DigiMatch) with the conventional locator pin-based registration technique in performing cementless total hip arthroplasty (THA) using the Robodoc system. The authors concluded that the advantages of the DigiMatch technique were the lack of need for prior pin-implantation surgery and the absence of concern over pin-related knee pain. Short-term follow-up clinical results showed that DigiMatch Robodoc THA was safe and effective (91).

Total hip arthroplasty

The HipNav system, for accurate placement of the acetabular cup implant, is under development (92). The system consists of a preoperative planner, a range-of-motion simulator and an intraoperative tracking and guidance system. The range-of-motion simulator helps surgeons to determine the orientation of the implants at which impingement would occur. Used in conjunction with the planning system and preoperative CT scans, the range-of-motion simulator permits surgeons to find the patient-specific optimal orientation of the acetabular cup (88). A 2003 study aimed at non-invasive registration of the bone surface for computer-assisted surgery (CAS), developing an intraoperative registration system based on 2D ultrasound images. The approach employs automatic segmentation of the bone surface reflection from ultrasound images tagged with their 3D position, enabling the application of CAS to minimally invasive procedures. The authors concluded that ultrasound-based registration eliminates the need for physical contact with the bone surface, as required in point-based registration (93).

Navigational systems are under development for various knee-related procedures, such as anterior cruciate ligament replacement. Most robotic assistant systems for the knee, however, are aimed at total knee replacement (TKR) surgery. This procedure replaces all of the articulating surfaces with prosthetic components. Several robotic TKR assistant systems have been developed to increase the accuracy of the prosthetic alignment. Many of these systems include an image-based preoperative planner and a robot to perform the bone cutting (94).

Spine surgery

Spinal fusion procedures attach mechanical support elements to the spine to prevent relative motion of adjacent vertebrae. Current research in spinal surgery focuses on image-guided passive assistance in aligning the hand-held surgical drill. Preoperative CT images are integrated with tracking devices during the procedure, and targets may be attached to each vertebra to permit constant optical motion tracking. Using these techniques, Merloz et al. reported a far lower rate of cortical penetration for computer-assisted techniques compared with the manual procedure (95). Work is under way on the use of intraoperative ultrasound or radiograph images to register the CT data to the patient (96). The screws may then be inserted percutaneously, eliminating the need to expose the spine.

Examples of neurosurgical robots

Image-guided techniques were applied for the first time in the field of neurosurgery. Just prior to surgery, stereotactic frames were attached to the patient's head before the imaging process and remained in place throughout the operation. The instruments were guided by calculating the relationship between the frame and the lesion observed in the image (87). 'Frameless stereotaxy' is a newer image-guided approach, using optical trackers for navigation and less invasive fiducial markers or video images for registration of the instruments (97,98). In the past 15 years, a number of robotic systems have been developed to enhance stability, accuracy and ease of use in neurosurgical procedures (99–101). In spite of the rigid cranium, which serves as a good reference for image-guided surgery, brain tissue itself is soft and prone to unwanted shifting during the procedure. In effect, this alters the spatial relationship between the preoperative imaging examination and the actual patient anatomy. Deformable templates for non-rigid registration, often based on biomechanical models of soft tissue, have been proposed to overcome this limitation (102). Alternatively, intraoperative imaging would permit continuous monitoring of brain anatomy and instruments; this requires compatible machinery that accommodates both the imaging data and the space constraints, i.e. robotic manipulators (103). The StealthStation (Medtronic, MN, USA) visualizes both instruments and anatomy in real time, so that surgical actions can be guided accordingly. Intraoperative navigation allows for less invasive surgery and more precise localization, without the need for continuous intraoperative imaging. Another prominent neurosurgical robot is the Neuromate (Renishaw plc, Gloucestershire, UK). The Neuromate (Figure 14) is a stereotactic robot used in various functional neurosurgical procedures, such as deep brain stimulation (DBS) and stereotactic electroencephalography (SEEG). It can also provide stereotaxy in neuro-endoscopy, radiosurgery, biopsy and transcranial magnetic stimulation (TMS), supporting both frame-based and frameless stereotaxy.

Figure 14. The Neuromate stereotactic neurosurgical robot. Reproduced by permission of Renishaw plc

Stereotactic radiosurgery

Radiosurgery aims to administer high doses of radiation in a single session to a small, critically located intracranial volume without opening the skull. The goal is the destruction of cells in order to halt the growth or reduce the volume of tumours. Radiosurgery has become an important treatment alternative to surgery for a variety of intracranial lesions (104). Stereotactic radiosurgery (SRS) in selected patients with pituitary adenoma delivers favourable tumour growth control while preserving functional status. Thus, it has become an attractive treatment modality and is often used instead of external beam radiotherapy (104–107). Current radiosurgery systems include the Gamma Knife, manufactured by Elekta (Sweden); Novalis, manufactured by Brainlab (Germany); and the CyberKnife, manufactured by Accuray (USA). The CyberKnife is a frameless robotic radiosurgery system invented by John R. Adler, Stanford University Professor of Neurosurgery and Radiation Oncology (108,109).

CyberKnife

The CyberKnife (Figure 15) uses a miniature linear accelerator (LINAC) mounted on a robotic arm to deliver radiation to the selected target. A real-time targeting system eliminates the need for the previously used head frame. The position of the patient is located by image-guidance cameras, and the robotic arm is guided to deliver small beams of radiation precisely, converging on the tumour from multiple angles. The cumulative dose is high enough to destroy the cancer cells, while radiation exposure to surrounding healthy tissue is minimized. The level of accuracy achievable by this system allows higher doses of radiation to be used, resulting in a greater tumour-killing effect and a higher likelihood of radiosurgical success. During the actual treatment, patient movement is monitored by the system's low-dose X-ray cameras. The CyberKnife's computer-controlled robotic arm compensates for any changes in tumour position during treatment, using the Synchrony respiratory tracking system (110).

Figure 15. The CyberKnife system. Reproduced by permission of Accuray Inc

Radiosurgery achieves equivalent growth control, hormonal remission and neurological complication rates when compared to conventional radiotherapy, but the damage to surrounding tissues is less.
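The respiratory tracking described above rests on a correlation model: external optical markers are measured continuously, internal target positions are imaged only sparsely, and a fitted model maps one to the other between images. The following Python sketch fits a simple linear correlation model to synthetic data; it is purely illustrative and not Accuray's implementation, which uses richer models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: external marker amplitude (arbitrary units) at the few time
# points where an X-ray image localized the internal target (positions in mm).
marker_at_imaging = np.array([0.1, 0.4, 0.8, 1.0, 0.7, 0.3])
target_at_imaging = 12.0 * marker_at_imaging + 2.0 \
                    + rng.normal(0.0, 0.2, size=6)   # synthetic ground truth

# Least-squares fit of target = a * marker + b
A = np.column_stack([marker_at_imaging, np.ones_like(marker_at_imaging)])
(a, b), *_ = np.linalg.lstsq(A, target_at_imaging, rcond=None)

# Between images, the continuously measured marker drives the beam position.
marker_now = 0.55
print(f"estimated target position: {a * marker_now + b:.2f} mm")
```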

One of the best indications for radiosurgery of pituitary adenomas is residual or recurrent tumour that is not safely removable using microsurgical techniques. In addition, the CyberKnife can easily apply the advantages of multisession radiosurgery to perioptic lesions, owing to the lack of need for stereotactic frame fixation; this is one of the system's greatest advantages. In fractionated radiation, the tumour control rate is in the range 76–97% (111). The tumour control rate for pituitary adenomas following treatment with the Gamma Knife is in the range 93.3–94% (112). Endocrinopathies respond well to the Gamma Knife, at a rate of 77.7–93%, and the normalization rate is in the range 21–52.4% (112,113). In fractionated radiation, endocrinological improvement is 38–70% (114,115). Current results of the CyberKnife (endocrinological improvement 100%, endocrinological normalization 44%) are thus similar to those of the Gamma Knife and slightly superior to those of fractionated radiation.

Complication rates for the Gamma Knife and fractionated radiation (most commonly visual loss) have been 0–12.6% and 12–100%, respectively (116). Complication rates with the CyberKnife (visual disturbance 7.6%) were similar to those of the Gamma Knife and far better than those of fractionated radiation. There were no incidences of pituitary dysfunction, probably due to the multisession radiosurgery.

Indications for spinal radiosurgery

Currently evolving indications for spine radiosurgery using the CyberKnife include lesions of either benign or malignant histology, as well as spinal vascular malformations (117). The most important indication for the treatment of spinal tumours is pain, and spinal radiosurgery is most often used to treat tumour pain. Radiation is well known to be effective as a treatment for pain associated with spinal malignancies, with a 92% improvement in pain after CyberKnife therapy. This beneficial result includes radicular pain caused by tumour compression of adjacent nerve roots (117). Another indication concerns tumours partially resected during open surgery. In that case, fiducials can be left in place to allow postoperative radiosurgical treatment of the residual tumour. Such treatments can be given early in the postoperative period, as opposed to the usual delay before the surgeon permits external beam irradiation (117).

CyberKnife radiosurgery offers the ability to deliver homogeneous radiation doses to non-spherical structures, such as the trigeminal nerve. Preliminary results have been reported by Romanelli et al. for the treatment of patients with trigeminal neuralgia (118). Although a 70% short-term response rate has been described, the long-term safety and efficacy demand further study (119).

Discussion

Clinical implementation and acceptance issues

Safety is an obvious concern in robotic surgery, and regulatory agencies require that it be addressed for every clinical implementation. As with most complex computer-controlled systems, there is no accepted technique that can guarantee safety for all systems in every circumstance (120,121). Some robotics developers have asserted that it is important to keep control of the procedure in the hands of the surgeon, even in image-guided surgery; a system developed by Ho et al. for knee surgery, for example, prevents motion outside the planned workspace (122). In contrast, the Robodoc places the cutting instrument under autonomous control while the surgeon monitors progress. This 'freedom' of the robot has raised concerns, especially in Europe, over acceptance of the autonomous mode. It is therefore important to include user interfaces through which the surgeon can supervise the system's plan of action and status in real time during the operation.

Robots will be successful in surgery only if they prove to be beneficial in terms of patient outcomes and total costs. Unfortunately, in many cases outcome cannot be assessed until many years after the procedure (e.g. robotic vs manual hip replacement). Early acceptance of the technology increases the number of cases, and clinicians often improve the procedure, which results in better outcomes and lower costs. The ability to use the robot for multiple procedures is an important feature not found in certain robotic systems (e.g. knee replacement systems are unable to perform hip replacements). In contrast, telesurgical systems target a variety of conditions, and even specialties, and this is probably the reason for their wider acceptance. People react differently when a failure comes from a robot than when it comes from a human, and the question of responsibility in cases of morbidity or mortality remains open, especially when dealing with autonomous systems. Concerns about the legal framework covering autonomous robotic systems may also bring difficulties with insurance coverage. Technologies in all of these areas should be developed in a way that gives consideration to their potential benefits and shortfalls (123).

Emerging trends

Research in surgical robotics has already produced new designs that break the telemanipulation paradigm. For example, mobile mini-robots for in vivo operation have been described in the literature (124,125), as well as hyper-redundant (snake) robots (126,127), continuum robots (128), NOTES mini-robots (129), fixed-base robots (130) and crawler robots (131). However, the true potential for revolutionizing medicine lies with micro/nanorobotics. Micro/nanorobots represent a paradigm shift from current robotic technology and could bring about breakthroughs in many fields, such as medicine, drug delivery, fabrication and telemetry. However, they also present major challenges regarding fabrication, power supply, actuation and localization techniques.

In MIS, several areas of application have been proposed. A first application could be the circulatory system, where nanorobots could enter the blood flow and reach target sites in order to perform actions such as targeted drug delivery, removal of plaques from vessels and destruction of blood clots, or act as stents to maintain blood flow. Pioneering work towards this goal has been conducted by Martel et al., who have managed to navigate a small magnetic bead through the carotid artery of a living swine by magnetic propulsion utilizing MRI technology (132,133). Another application area is the central nervous system, where a nanorobot could navigate through available space, such as the spinal canal, the subarachnoid space or the brain ventricles, in order to reach neural sites. There, nanorobots could provide services such as targeted drug delivery to cancer cells in brain tumours, act as markers for active neuronavigation in brain surgery in cooperation with stereotaxy, or perform neurostimulation at selected neural sites. The urinary system is a third possible application area, where nanorobots could enter the urinary tract and reach the prostate and kidneys in order to dissolve kidney stones or deposit radioactive seeds on cancerous cells in the prostate. Several other targets have also been proposed, such as the eye, the ear and the fetus [for an up-to-date review, see (134)].
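As background to the MRI propulsion experiments cited above, the driving mechanism is the magnetic gradient force on a magnetized body. For a bead of volume V with magnetization M in a magnetic field B, this force is commonly written as in the textbook relation below (stated here as general background, not as a result of the cited papers):

```latex
% Gradient force on a ferromagnetic bead of volume V and magnetization M
% in the field B produced by the scanner and its gradient coils:
\mathbf{F}_{\mathrm{mag}} = V\,(\mathbf{M}\cdot\nabla)\,\mathbf{B}
```

Because the scanner's strong static field saturates the bead's magnetization, steering reduces to modulating the field gradients produced by the imaging coils.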

As micro- and nanotechnologies evolve, a variety of sensors and actuators operating in the submillimetre range has emerged. As a result, various research groups have recently started to develop microrobotic systems for a wide range of applications: precision tooling, endoscopic surgery, biological cell manipulation (135), AFM microscopy, etc. However, most of these devices are not truly autonomous, with respect to either energy supply or intelligence. Yet autonomy is a major issue for many innovative applications of microrobots in which teleoperation is not possible or not desirable (136). On the way towards these fascinating innovations, one must always identify the key parameters that limit downscaling (137,138). There has also been increased interest in the use of microelectromechanical systems (MEMS) for surgical applications. MEMS technology not only improves the functionality of existing surgical devices but also adds new capabilities, allowing surgeons to develop new techniques and perform totally new procedures (139). MEMS may provide the surgeon with real-time feedback on the operation, thus improving the outcome (140).

Conclusion

As depicted by the progress reviewed here, robotic technology is going to change the face of surgery in the near future. Robots are expected to become the standard modality for many common procedures, including hip replacement, heart bypass, cochlear implantation and abdominal surgery. As a result, surgeons will have to become familiar with the technology, and technology should come closer to the everyday needs of a surgical team. Autonomous and semi-autonomous modes are increasingly being investigated and implemented in surgical procedures, automating various phases of the operation. The complexity of these tasks is also shifting from the low-level automation of early medical robots to high-level autonomous features, such as complex laparoscopic surgical manoeuvres and shared-control approaches in stabilized image-guided beating-heart surgery. Future progress will require continuous interdisciplinary work, with breakthroughs such as nanorobots entering the spotlight. Autonomous robotic surgery is a fascinating field of research involving progress in artificial intelligence technology. However, it should always be approached with caution, and should never exclude human supervision and intervention.

References

1. Gharagozloo F, Najam F. Robotic Surgery: Theory and Operative Technique, 1st edn. McGraw-Hill Medical: New York, 2008.
2. Taylor RH, Stoianovici D. Medical robotics in computer-integrated surgery. IEEE Trans Robotics Autom 2003; 19(5): 765–781.
3. Wolf A, Shoham M. Medical automation and robotics. In Springer Handbook of Automation. Springer: Berlin, 2009; 1397–1407.
4. Jakopec M, Rodriguez y Baena F, Harris SJ, et al. The hands-on orthopaedic robot 'acrobot': early clinical trials of total knee replacement surgery. IEEE Trans Robotics Autom 2003; 19(5): 902–911.
5. Kwon D-S, Lee J-J, Yoon Y-S, et al. The mechanism and registration method of a surgical robot for hip arthroplasty. In Proceedings of IEEE International Conference on Robotics and Automation 2002 (ICRA '02), vol. 2, Washington, DC, 2002; 1889–1894.
6. Lonner JH, John TK, Conditt MA. Robotic arm-assisted UKA improves tibial component alignment: a pilot study. Clin Orthop Relat Res 2010; 468(1): 141–146.
7. Taylor RH, Mittelstadt BD, Paul HA, et al. An image-directed robotic system for precise orthopaedic surgery. IEEE Trans Robotics Autom 1994; 10(3): 261–275.
8. Hagio K, Sugano N, Takashina M, et al. Effectiveness of the ROBODOC system in preventing intraoperative pulmonary embolism. Acta Orthop Scand 2003; 74(3): 264–269.
9. Bauer A, Borner M, Lahmer A. Robodoc – animal experiment and clinical evaluation. In CVRMed–MRCAS '97. Springer: Berlin, 1997; 561–564.
10. Baena FR, Davies B. Robotic surgery: from autonomous systems to intelligent tools. Robotica 2010; 28(2) (special issue): 163–170.
11. Guthart GS, Salisbury JK. The Intuitive telesurgery system: overview and application. In IEEE International Conference on Robotics and Automation (ICRA '00), vol. 1, 2000; 618–621.
12. Dasgupta P, Jones A, Gill IS. Robotic urological surgery: a perspective. BJU Int 2005; 95(1): 20–23.
13. Hagn U, Konietschke R, Tobergte A, et al. DLR MiroSurge: a versatile system for research in endoscopic telesurgery. Int J Comput Assist Radiol Surg 2010; 5(2): 183–193.
14. Niemeyer G, Preusche C, Hirzinger G. Telerobotics. In Springer Handbook of Robotics. Springer: Berlin, 2008; 741–757.
15. O'Malley MK, Gupta A. Passive and active assistance for human performance of a simulated underactuated dynamic task. In 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS 2003); 348–355.
16. Tavakoli M, Patel RV, Moallem M. Haptics for Teleoperated Surgical Robotic Systems. World Scientific: Singapore, 2008.
17. Dario P, Hannaford B, Menciassi A. Smart surgical tools and augmenting devices. IEEE Trans Robotics Autom 2003; 19(5): 782–792.
18. Cleary KR, Stoianovici DS, Glossop ND, et al. CT-directed robotic biopsy testbed: motivation and concept. In Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures, Mun SK (ed.). SPIE: San Diego, CA, 2001; 231–236.
19. Huang H-M. The autonomy levels for unmanned systems (ALFUS) framework: interim results. In Performance Metrics for Intelligent Systems (PerMIS) Workshop, Gaithersburg, MD, 2006.
20. Arkin RC. Behavior-based Robotics. MIT Press: Cambridge, MA, 1998.
21. Rosen J, Brown JD, Chang L, et al. Generalized approach for modeling minimally invasive surgery as a stochastic process using a discrete Markov model. IEEE Trans Biomed Eng 2006; 53(3): 399–413.
22. Kragic D, Marayong P, Li M, et al. Human–machine collaborative systems for microsurgical applications. Int J Robotics Res 2005; 24(9): 731–741.
23. Kang H, Wen JT. Robotic knot tying in minimally invasive surgeries. In IEEE/RSJ International Conference on Intelligent Robots and Systems 2002, vol. 2; 1421–1426.
24. Lin HC, Shafran I, Murphy TE, et al. Automatic detection and segmentation of robot-assisted surgical motions. In Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2005. Springer: Berlin, 2005; 802–810.
25. Reiley CE, Lin HC, Yuh DD, et al. Review of methods for objective surgical skill evaluation. Surg Endosc 2011; 25(2): 356–366.
26. Reiley CE, Hager GD. Task versus subtask surgical skill evaluation of robotic minimally invasive surgery. In Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Part I. Springer: Berlin, 2009; 435–442.
27. Reiley CE, Lin HC, Varadarajan B, et al. Automatic recognition of surgical motions using statistical modeling for capturing variability. Stud Health Technol Inform 2008; 132: 396–401.
28. Reiley CE, Plaku E, Hager GD. Motion generation of robotic surgical tasks: learning from expert demonstrations. In Annual International Conference of the IEEE, Engineering in Medicine and Biology Society (EMBC), 2010; 967–970.
29. Antsaklis PJ, Passino KM, Saridis GN. An Introduction to Intelligent and Autonomous Control. Kluwer Academic: Dordrecht, 1992.
30. Kennedy J, Eberhart RC, Shi Y. The particle swarm. In Swarm Intelligence. Morgan Kaufmann: San Francisco, CA, 2001; 287–325.
31. Haykin S. Neural Networks and Learning Machines, 3rd edn. Prentice Hall: Englewood Cliffs, 2008.
32. Goldberg DE. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional: Boston, 1989.
33. Passino KM, Yurkovich S. Fuzzy Control. Addison Wesley: Menlo Park, 1997.
34. Kang H, Wen JT. EndoBot: a robotic assistant in minimally invasive surgeries. In IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2001; 2031–2036.
35. Kang H, Wen JT. Autonomous suturing using minimally invasive surgical robots. In IEEE International Conference on Control Applications, 2000; 742–747.
36. Bauernschmitt R, Schirmbeck EU, Knoll A, et al. Towards robotic heart surgery: introduction of autonomous procedures into an experimental surgical telemanipulator system. Int J Med Robot 2005; 1(3): 74–79.
37. Mayer H, Nagy I, Knoll A, et al. The Endo[PA]R system for minimally invasive robotic surgery. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 4, 2004; 3637–3642.
38. Mayer H, Gomez F, Wierstra D, et al. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006; 543–548.
39. Schmidhuber J, Wierstra D, Gomez F. Evolino: hybrid neuroevolution/optimal linear search for sequence learning. In 19th International Joint Conference on Artificial Intelligence, Edinburgh, UK. Morgan Kaufmann: San Francisco, CA, 2005; 853–858.
40. Mayer H, Nagy I, Burschka D, et al. Automation of manual tasks for minimally invasive surgery. In Fourth International Conference on Autonomic and Autonomous Systems (ICAS '08), Gosier, Guadeloupe, 2008; 260–265.
41. van den Berg J, Miller S, Duckworth D, et al. Superhuman performance of surgical tasks by robots using iterative learning from human-guided demonstrations. In IEEE International Conference on Robotics and Automation, Anchorage, AK, 2010; 2074–2081.
42. Staub C, Osa T, Knoll A, et al. Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery. In IEEE International Conference on Robotics and Automation (ICRA), 2010; 4585–4590.
43. Hynes P, Dodds GI, Wilkinson AJ. Uncalibrated visual servoing of a dual-arm robot for MIS suturing. In First IEEE/RAS–EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2006; 420–425.
44. Zong G, Hu Y, Li D, et al. Visually servoed suturing for robotic microsurgical keratoplasty. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006; 2358–2363.
45. Brett PN, Taylor RP, Proops D, et al. A surgical robot for cochleostomy. In 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), 2007; 1229–1232.
46. Taylor R, Du X, Proops D, et al. A sensory-guided surgical micro-drill. Proc Inst Mech Eng C J Mech Eng Sci 2010; 224(7): 1531–1537.
47. Coulson CJ, Reid AP, Proops DW. A cochlear implantation robot in surgical practice. In 15th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), 2008; 173–176.
48. Coulson CJ, Taylor RP, Reid AP, et al. An autonomous surgical robot for drilling a cochleostomy: preliminary porcine trial. Clin Otolaryngol 2008; 33(4): 343–347.
49. Majdani O, Rau TS, Baron S, et al. A robot-guided minimally invasive approach for cochlear implant surgery: preliminary results of a temporal bone study. Int J Comput Assist Radiol Surg 2009; 4(5): 475–486.
50. Eilers H, Baron S, Ortmaier T, et al. Navigated, robot assisted drilling of a minimally invasive cochlear access. In IEEE International Conference on Mechatronics (ICM), 2009; 1–6.
51. Hussong A, Rau TS, Ortmaier T, et al. An automated insertion tool for cochlear implants: another step towards atraumatic cochlear implant surgery. Int J Comput Assist Radiol Surg 2009; 5(2): 163–171.
52. Megali G, Tonet O, Stefanini C, et al. A computer-assisted robotic ultrasound-guided biopsy system for video-assisted surgery. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2001, Niessen W, Viergever M (eds). Springer: Berlin, 2001; 343–350.
53. Phee L, Di Xiao, Yuen J, et al. Ultrasound guided robotic system for transperineal biopsy of the prostate. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), 2005; 1315–1320.
54. Rogers AJ, Light ED, von Allmen D, et al. Real-time 3D ultrasound guidance of autonomous surgical robot for shrapnel detection and breast biopsy. In Proceedings of SPIE, Lake Buena Vista, FL, USA, 2009; 72650O.
55. Liang K, Light ED, Rogers AJ, et al. 3D ultrasound guidance of autonomous surgical robotics: feasibility studies. In IEEE International Ultrasonics Symposium (IUS), 2009; 582–585.
56. Rogers AJ, Light ED, Smith SW. 3D ultrasound guidance of autonomous robot for location of ferrous shrapnel. IEEE Trans Ultrason Ferroelect Frequ Control 2009; 56(7): 1301–1303.
57. Liang K, Rogers AJ, Light ED, et al. Simulation of autonomous robotic multiple-core biopsy by 3D ultrasound guidance. Ultrason Imaging 2010; 32(2): 118–127.
58. Pua EC, Fronheiser MP, Noble JR, et al. 3D ultrasound guidance of surgical robotics: a feasibility study. IEEE Trans Ultrason Ferroelect Frequ Control 2006; 53(11): 1999–2008.
59. Fronheiser MP, Whitman J, Ivancevich NM, et al. 3D ultrasound guidance of surgical robotics: autonomous guidance and catheter transducers. In IEEE Ultrasonics Symposium, 2007; 2527–2530.
60. Liang K, Rogers AJ, Light ED, et al. Three-dimensional ultrasound guidance of autonomous robotic breast biopsy: feasibility study. Ultrasound Med Biol 2010; 36(1): 173–177.
61. Whitman J, Fronheiser MP, Smith SW. 3D ultrasound guidance of surgical robotics using catheter transducers: feasibility study. IEEE Trans Ultrason Ferroelect Frequ Control 2008; 55(5): 1143–1145.
62. Vitrani M-A, Morel G, Ortmaier T. Automatic guidance of a surgical instrument with ultrasound based visual servoing. In IEEE International Conference on Robotics and Automation (ICRA), 2005; 508–513.
63. Vitrani M-A, Morel G, Bonnet N, et al. A robust ultrasound-based visual servoing approach for automatic guidance of a surgical instrument with in vivo experiments. In First IEEE/RAS–EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2006; 35–40.
64. Stoll J, Dupont P, Howe R. Ultrasound-based servoing of manipulators for telesurgery. In Proceedings of SPIE Telemanipulator and Telepresence Technologies VIII, Boston, MA, USA, 2002; 78–85.
65. Hong J, Dohi T, Hashizume M, et al. An ultrasound-driven needle-insertion robot for percutaneous cholecystostomy. Phys Med Biol 2004; 49(3): 441–455.
66. Mallapragada VG, Sarkar N, Podder TK. Robotic system for tumor manipulation and ultrasound image guidance during breast biopsy. In 30th Annual International Conference of the IEEE, Engineering in Medicine and Biology Society (EMBS), 2008; 5589–5592.
67. Stoll J, Novotny P, Howe R, et al. Real-time 3D ultrasound-based servoing of a surgical instrument. In IEEE International Conference on Robotics and Automation (ICRA), 2006; 613–618.
68. Novotny PM, Stoll JA, Dupont PE, et al. Real-time visual servoing of a robot using three-dimensional ultrasound. In IEEE International Conference on Robotics and Automation, 2007; 2655–2660.
69. Sauvee M, Poignet P, Dombre E. Ultrasound image-based visual servoing of a surgical instrument through nonlinear model predictive control. Int J Robotics Res 2008; 27(1): 25–40.
70. Nakamura Y, Kishi K, Kawakami H. Heartbeat synchronization for robotic cardiac surgery. In IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2001; 2014–2019.
71. Ginhoux R, Gangloff JA, de Mathelin MF, et al. Beating heart tracking in robotic surgery using 500 Hz visual servoing, model predictive control and an adaptive observer. In IEEE International Conference on Robotics and Automation (ICRA), vol. 1, 2004; 274–279.
72. Ginhoux R, Gangloff J, de Mathelin M, et al. Active filtering of physiological motion in robotized surgery using predictive control. IEEE Trans Robotics 2005; 21(1): 67–79.
73. Gangloff J, Ginhoux R, de Mathelin M, et al. Model predictive control for compensation of cyclic organ motions in teleoperated laparoscopic surgery. IEEE Trans Control Syst Technol 2006; 14(2): 235–246.
74. Ortmaier T, Groger M, Boehm DH, et al. Motion estimation in beating heart surgery. IEEE Trans Biomed Eng 2005; 52(10): 1729–1740.
75. Bebek O, Cavusoglu MC. Predictive control algorithms using biological signals for active relative motion canceling in robotic assisted heart surgery. In IEEE International Conference on Robotics and Automation (ICRA), 2006; 237–244.
76. Bebek O, Cavusoglu MC. Model based control algorithms for robotic assisted beating heart surgery. In 28th Annual International Conference of the IEEE, Engineering in Medicine and Biology Society (EMBS), 2006; 823–828.
77. Bebek O, Cavusoglu MC. Intelligent control algorithms for robotic-assisted beating heart surgery. IEEE Trans Robotics 2007; 23(3): 468–480.
78. Yuen S, Kesner S, Vasilyev N, et al. 3D ultrasound-guided motion compensation system for beating heart mitral valve repair. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2008, New York. Springer: Berlin, 2008; 711–719.
79. Yuen SG, Novotny PM, Howe RD. Quasiperiodic predictive filtering for robot-assisted beating heart surgery. In IEEE International Conference on Robotics and Automation (ICRA), 2008; 3875–3880.
80. Yuen SG, Kettler DT, Howe RD. Robotic motion compensation for beating intracardiac surgery. In 10th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2008; 617–622.
81. Yuen SG, Kettler DT, Novotny PM, et al. Robotic motion compensation for beating heart intracardiac surgery. Int J Robotics Res 2009; 28(10): 1355–1372.
82. Kesner SB, Howe RD. Design and control of motion compensation cardiac catheters. In IEEE International Conference on Robotics and Automation (ICRA), 2010; 1059–1065.
83. Cagneau B, Zemiti N, Bellot D, et al. Physiological motion compensation in robotized surgery using force feedback control. In IEEE International Conference on Robotics and Automation, 2007; 1881–1886.
84. Duindam V, Sastry S. Geometric motion estimation and control for robotic-assisted beating-heart surgery. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007; 871–876.
85. Zysk AM, Nguyen FT, Oldenburg AL, et al. Optical coherence tomography: a review of clinical development from bench to bedside. J Biomed Opt 2007; 12(5): 051403.
86. Balicki M, Han J-H, Iordachita I, et al. Single fiber optical coherence tomography microsurgical instruments for computer and robot-assisted retinal surgery. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009, London, UK. Springer: Berlin, 2009; 108–115.
87. Howe RD, Matsuoka Y. Robotics for surgery. Annu Rev Biomed Eng 1999; 1(1): 211–240.
88. Kazanzides P, Mittelstadt BD, Musits BL, et al. An integrated system for cementless hip replacement. IEEE Eng Med Biol 1995; 14(3): 307–313.
89. Bargar WL, Bauer A, Borner M. Primary and revision total hip replacement using the Robodoc system. Clin Orthop Relat Res 1998; 354: 82–91.
90. Schulz AP, Seide K, Queitsch C, et al. Results of total hip replacement using the Robodoc surgical assistant system: clinical outcome and evaluation of complications for 97 procedures. Int J Med Robot 2007; 3(4): 301–306.
91. Nakamura N, Sugano N, Nishii T, et al. Robot-assisted primary cementless total hip arthroplasty using surface registration techniques: a short-term clinical report. Int J Comput Assist Radiol Surg 2009; 4(2): 157–162.
92. Moody JE Jr, DiGioia AM, Jaramaz B, et al. Gauging clinical practice: surgical navigation for total hip replacement. In Medical Image Computing and Computer-Assisted Intervention – MICCAI '98. Springer: Berlin, 1998; 421.
93. Amin DV, Kanade T, DiGioia AM, et al. Ultrasound registration of the bone surface for surgical navigation. Comput Aided Surg 2003; 8(1): 1–16.
94. Fadda M, Bertelli D, Martelli S, et al. Computer assisted planning for total knee arthroplasty. In CVRMed–MRCAS '97. Springer: Berlin, 1997; 617–628.
95. Merloz P, Tonetti J, Eid A, et al. Computer-assisted versus manual spine surgery: clinical report. In CVRMed–MRCAS '97. Springer: Berlin, 1997; 541–544.
96. Lavallee S, Troccaz J, Sautot P, et al. Computer-assisted spinal surgery using anatomy-based registration. In Computer-integrated Surgery: Technology and Clinical Applications, Taylor R, Lavallee S, Burdea G, et al. (eds). MIT Press: Reading, 1996; 425–449.
97. Grimson WEL, Lozano-Perez T, White SJ, et al. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1994; 430–436.
98. Grimson E, Leventon M, Ettinger G, et al. Clinical experience with a high precision image-guided neurosurgery system. In Medical Image Computing and Computer-Assisted Intervention – MICCAI '98. Springer: Berlin, 1998; 63–73.
99. Glauser D, Fankhauser H, Epitaux M, et al. Neurosurgical robot Minerva: first results and current developments. J Image Guid Surg 1995; 1(5): 266–272.
100. Goradia T, Taylor R, Auer L. Robot-assisted minimally invasive neurosurgical procedures: first experimental experience. In CVRMed–MRCAS '97. Springer: Berlin, 1997; 319–322.
101. Kwoh YS, Hou J, Jonckheere EA, et al. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans Biomed Eng 1988; 35(2): 153–160.
102. Kyriacou S, Davatzikos C. A biomechanical model of soft tissue deformation, with applications to non-rigid registration of brain images with tumor pathology. In Medical Image Computing and Computer-Assisted Intervention – MICCAI '98. Springer: Berlin, 1998; 531–538.
103. Masamune K, Kobayashi E, Masutani Y, et al. Development of an MRI-compatible needle insertion manipulator for stereotactic neurosurgery. J Image Guid Surg 1995; 1(4): 242–248.
104. Cho CB, Park HK, Joo WI, et al. Stereotactic radiosurgery with the CyberKnife for pituitary adenomas. J Korean Neurosurg Soc 2009; 45(3): 157–163.
105. Feigl GC, Bonelli CM, Berghold A, et al. Effects of gamma knife radiosurgery of pituitary adenomas on pituitary function. J Neurosurg 2002; 97(5, suppl): 415–421.
106. Izawa M, Hayashi M, Nakaya K, et al. Gamma knife radiosurgery for pituitary adenomas. J Neurosurg 2000; 93(suppl 3): 19–22.
107. Sheehan JM, Vance ML, Sheehan JP, et al. Radiosurgery for Cushing's disease after failed transsphenoidal surgery. J Neurosurg 2000; 93(5): 738–742.
108. Chang SD, Adler JR. Robotics and radiosurgery – the CyberKnife. Stereotact Funct Neurosurg 2001; 76(3–4): 204–208.
109. Kilby W, Dooley JR, Kuduvalli G, et al. The CyberKnife robotic radiosurgery system in 2010. Technol Cancer Res Treat 2010; 9(5): 433–452.
110. Sayeh S, Wang J, Main W, et al. Respiratory motion tracking for robotic radiosurgery. In Treating Tumors that Move with Respiration, Urschel HC, Kresl JJ, Luketich JD, et al. (eds). Springer: Berlin, 2007; 15–29.
111. Flickinger JC, Nelson PB, Martinez AJ, et al. Radiotherapy of nonfunctional adenomas of the pituitary gland: results with long-term follow-up. Cancer 1989; 63(12): 2409–2414.
112. Ganz JC, Backlund EO, Thorsen FA. The effects of Gamma Knife surgery of pituitary adenomas on tumor growth and endocrinopathies. Stereotact Funct Neurosurg 1993; 61(suppl 1): 30–37.
113. Park YG, Chang JW, Kim EY, et al. Gamma knife surgery in pituitary microadenomas. Yonsei Med J 1996; 37(3): 165–173.
114. Rush S, Cooper PR. Symptom resolution, tumor control, and side effects following postoperative radiotherapy for pituitary macroadenomas. Int J Radiat Oncol Biol Phys 1997; 37(5): 1031–1034.
115. Tsang RW, Brierley JD, Panzarella T, et al. Radiation therapy for pituitary adenoma: treatment outcome and prognostic factors. Int J Radiat Oncol Biol Phys 1994; 30(3): 557–565.
116. Witt T, Kondziolka D, Flickinger J, et al. Gamma Knife radiosurgery for pituitary tumors. In Gamma Knife Brain Surgery, Lunsford L, Kondziolka D, Flickinger J (eds). Karger: Basel, 1998; 114–127.
117. Szeifert GT, Kondziolka D, Levivier M, et al. Radiosurgery and Pathological Fundamentals. Karger: Basel, 2007.
118. Romanelli P, Heit G, Chang SD, et al. CyberKnife radiosurgery for trigeminal neuralgia. Stereotact Funct Neurosurg 2003; 81(1–4): 105–109.
119. Lim M, Villavicencio AT, Burneikiene S, et al. CyberKnife radiosurgery for idiopathic trigeminal neuralgia. Neurosurg Focus 2005; 18(5): 1–7.
120. Davies BL. A discussion of safety issues for medical robots. In Computer-integrated Surgery: Technology and Clinical Applications, Taylor RH, Lavallee S, Burdea GS, et al. (eds). MIT Press: Reading, 1996; 287–296.
121. Fei B, Ng WS, Chauhan S, et al. The safety issues of medical robotics. Reliabil Eng Syst Safety 2001; 73(2): 183–192.
122. Ho SC, Hibberd RD, Davies BL. Robot assisted knee surgery. IEEE Eng Med Biol 1995; 14(3): 292–300.
123. Elliott C, Paterson L, Clarke G, et al. Autonomous Systems: Social, Legal and Ethical Issues. Royal Academy of Engineering: London, 2009.
124. Rentschler ME, Platt SR, Dumpert J, et al. In vivo laparoscopic robotics. Int J Surg 2006; 4(3): 167–171.
125. Shah BC, Buettner SL, Lehman AC, et al. Miniature in vivo robotics and novel robotic surgical platforms. Urol Clin N Am 2009; 36(2): 251–263.
126. Ota T, Degani A, Zubiate B, et al. Epicardial atrial ablation using a novel articulated robotic medical probe via a percutaneous subxiphoid approach. Innovations (Phila) 2006; 1(6): 335–340.
127. Ota T, Degani A, Schwartzman D, et al. A highly articulated robotic surgical system for minimally invasive surgery. Ann Thorac Surg 2009; 87(4): 1253–1256.
128. Aron M, Haber G-P, Desai MM, et al. Flexible robotics: a new paradigm. Curr Opin Urol 2007; 17(3): 151–155.
129. Dumpert J, Lehman AC, Wood NA, et al. Semi-autonomous surgical tasks using a miniature in vivo surgical robot. In Annual International Conference of the IEEE, Engineering in Medicine and Biology Society (EMBC), 2009; 266–269.
130. Oleynikov D, Rentschler M, Hadzialic A, et al. In vivo camera robots provide improved vision for laparoscopic surgery. Int Congr Ser 2004; 1268: 787–792.
131. Patronik NA, Zenati MA, Riviere CN. Preliminary evaluation of a mobile robotic device for navigation and intervention on the beating heart. Comput Aided Surg 2011; 10(4): 225–232.
132. Martel S. Magnetic resonance propulsion, control and tracking at 24 Hz of an untethered device in the carotid artery of a living animal: an important step in the development of medical micro- and nanorobots. Conf Proc IEEE Eng Med Biol Soc 2007; 2007: 1475–1478.
133. Martel S, Mathieu J-B, Felfoul O, et al. Automatic navigation of an untethered device in the artery of a living animal using a conventional clinical magnetic resonance imaging system. Appl Phys Lett 2007; 90(11): 114105.
134. Nelson BJ, Kaliakatsos IK, Abbott JJ. Microrobots for minimally invasive medicine. Annu Rev Biomed Eng 2010; 12(1): 55–85.
135. Chang WC, Keller CG, Hawkes EA, et al. Microdevice components for a cellular microsurgery suite. In 13th International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS '05), Digest of Technical Papers, vol. 1, 2005; 209–212.
136. Caprari G, Estier T, Siegwart R. Fascination of downscaling – Alice the sugar cube robot. J Micromechatron 2001; 1(13): 177–189.
137. Abbott JJ, Nagy Z, Beyeler F, et al. Robotics in the small, Part I: microbotics. IEEE Robotics Automat 2007; 14(2): 92–103.
138. Purcell EM. Life at low Reynolds number. Am J Phys 1977; 45: 3–11.
139. Weldon C, Tian B, Kohane DS. Nanotechnology for surgeons. Wiley Interdis Rev Nanomed Nanobiotechnol 2011; 3(3): 223–228.
140. Rebello KJ. Applications of MEMS in surgery. Proc IEEE 2004; 92(1): 43–55.