

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 60, NO. 11, NOVEMBER 2011 3501

Haptic Interaction for Mobile Assistive Robots

Huanran Wang, Student Member, IEEE, and Xiaoping P. Liu, Senior Member, IEEE

Abstract—This paper presents an assistive robotic system for senior and disabled people, in which the user interacts with a mobile robot using a haptic interface. The system consists of a nonholonomic PeopleBot mobile robot and a Phantom Omni haptic interface. Several technical problems related to the system are addressed and solved. Specifically, the backstepping technique is used, and a new tracking algorithm is developed for the robot to follow the user in a 2-D space. An adaptive fuzzy logic-based windowing method is proposed to solve the velocity estimation problem in the controller. A simple haptic rendering algorithm is designed to generate the haptic feedback based on the position error. Experiments show that the user can guide the movement of the robot easily and smoothly using the haptic interface.

Index Terms—Assistant robot, haptics, nonholonomic mobile robot, tracking algorithm, velocity estimation.

I. INTRODUCTION

THE RAPID aging of the population and the existence of a large number of people with different kinds of disabilities pose a great challenge for engineers and researchers to design and build suitable and practical healthcare robotic systems. Assistive robotic systems are finding many applications in the healthcare domain and showing great potential. In particular, assistant robotic systems are designed for people who have partly or completely lost their mobility or sensory capabilities. These systems enable people who would otherwise need help to live independently, which is a very important index of the quality of modern life [1]. The paper [2] provides an excellent survey of the history of and current research progress on assistive robots.

One problem for assistive robotic systems is that there is no universal or general solution to all the different requirements, particularly from the clinical perspective. This means that the design and development of such a system must be application or task dependent. For instance, many different kinds of assistive robotic systems have been built for a variety of assistive tasks. The kitchen robot [3] was designed to help elderly and disabled users, particularly on tasks related to kitchen appliances. Helpmate [4] is one of the most successful medical robot applications, and it has been used by about 80 hospitals worldwide. As an autonomous transport system,

Manuscript received November 18, 2010; revised April 17, 2011; accepted June 16, 2011. Date of publication August 4, 2011; date of current version November 9, 2011. This work was supported by the Natural Sciences and Engineering Research Council of Canada. The Associate Editor coordinating the review process for this paper was Dr. Atif Alamri.

The authors are with the Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIM.2011.2161141

this robot carries medical data records including X-ray photos, diagnostic samples, medical instruments, etc. The Wheelchair Navigation System [5] is an autonomous wheelchair system providing mobility to persons who cannot power a wheelchair due to motor, sensory, perceptual, or cognitive impairments. The system contains three operating modes: general obstacle avoidance, door passage, and automatic wall following. A walking assistance project [1] began in 1994 to develop a mobility aid robot to satisfy the needs of the frail blind. The system acts like a wheeled walker; it is equipped with different distance sensors (ultrasonic, laser, etc.) to detect obstacles in front of the system, and brakes are used to hold the user back from colliding with those obstacles.

While these robotic systems are very promising in helping people live a more independent life, there are several drawbacks in the current systems:

1) The interaction between the user and the system is not sufficient or not intuitive. Most of the current systems aim to be autonomous [3]–[5]. This reduces the burden on the user, which is of course a very desirable feature, but due to the low level of current artificial intelligence technology, most of the current systems cannot satisfy the varied and unstructured requirements of an independent living style. In addition, if the user lacks physical exercise, his or her physical and mental health will be adversely affected. Moreover, due to the various mobility problems of the user group, the input device should have a high number of degrees of freedom to track any input movement.

2) Human communication channels are not adequately used. In the literature, human–robot interaction is mostly through visual and sound (voice command) channels. However, the aged and people who need healthcare accommodation often partially or completely lose their visual or auditory capabilities; thus, it may be challenging for them to interact with the assistive robot using voice or to locate the robot visually. In addition, the changes in hearing and vision are the most dramatic compared to the other senses [6]. An assistive robot with the capability of haptic interaction may compensate well in this regard.

3) While robotic systems aimed at the above assistive tasks have been quite intensively developed, some other tasks receive relatively little attention, such as moving heavy objects (groceries, furniture, etc.). For people whose upper-limb strength is too weak to manipulate heavy objects, the assistant robot should act as a power amplifier to carry most of the weight. The remaining weight is assigned to the user as an acceptable burden and a form of exercise.

0018-9456/$26.00 © 2011 IEEE


Fig. 1. Overview of the developed assistant system.

In this paper, a new assistive robotic system with haptic feedback is presented. The proposed system is developed to assist senior and disabled people who have partly or completely lost their mobility or sensory capabilities but need to move relatively heavy objects in a home-like environment. Our design objective is that the user can guide the robot to move in the 2-D plane in a very smooth manner without much effort. In the proposed system, the user interacts with a mobile robot locally using a haptic interface to perform these tasks. For this purpose, several technical issues were addressed. Specifically, a nonlinear controller was developed to control the movement of the robot, a new method based on fuzzy logic was developed to estimate the velocity of the haptic device, and a haptic rendering algorithm based on the error information was designed to present force feedback to the user.

The paper is organized as follows: Section II gives an overview of the proposed system. Section III describes the details of the control algorithm. Section IV introduces the new velocity estimation method, and Section V presents the haptic rendering algorithm. The implementation is given in Section VI, and the experimental results and analysis are shown in Section VII. The final part is the conclusion and future work.

II. OVERVIEW OF THE PROPOSED SYSTEM

An overview of the proposed assistive robotic system is shown in Fig. 1. The system has two main hardware components: a mobile robot and a haptic interface. The mobile robot is a PeopleBot robot from MOBILEROBOTS, a very powerful robotic platform suitable for human–robot interaction. It is nonholonomic, with two different types of wheels: two driving wheels and one omnidirectional wheel for balance [7]. The height of the mobile robot is 1115 mm, which is suitable for the user to manipulate the haptic device. The haptic interface is a Phantom Omni arm from SensAble Technologies, which is lightweight and easy to mount on top of the robot. A special light wood panel is used to fix the Phantom Omni on top of the robot.

Fig. 2. Coordinate of the mobile robot and haptic device.

III. CONTROLLER DESIGN FOR THE MOBILE ROBOT

One of the technical problems in the development of the proposed assistive robotic system is to design a stabilizing control algorithm for the mobile robot in the 2-D plane. In this paper, we formulate this control problem as a mobile robot tracking problem. Y. Kanayama conducted pioneering work on the nonholonomic mobile robot tracking problem [8]. The backstepping methodology is adopted to design the controller. This methodology was first adopted for mobile robot control in [9] and then developed by several other researchers [10]–[13]. Because the mobile base is a differential-drive nonholonomic mobile robot, the control signals are translational and rotational velocities rather than motor torques. The controller is designed based on the kinematic model of the mobile robot.

The coordinate arrangement of the system is shown in Fig. 2. The points P_h = [x_h, y_h]^T and P_m = [x_m, y_m]^T are the reference positions of the human (also the position of the Phantom stylus) and the mobile robot in the world coordinate frame XY, respectively.

Consider the following kinematic model for the nonholonomic mobile robot:

\dot{P}_m = \begin{bmatrix} \dot{x}_m \\ \dot{y}_m \\ \dot{\theta}_m \end{bmatrix} = \begin{bmatrix} \cos\theta_m & 0 \\ \sin\theta_m & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_m \\ \omega_m \end{bmatrix} \quad (1)

where P_m is the position and posture vector of the mobile robot, x_m and y_m are the Cartesian coordinates of the robot, \theta_m is the angle between the robot heading direction and the X axis, and v_m and \omega_m are the translational and rotational velocities of the robot, respectively.
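For illustration, the kinematic model (1) can be integrated numerically with a simple Euler step. The following is a minimal Python sketch (an illustration only; the paper's implementation is in C++ with the ARIA library) that propagates the robot pose one sample at a time:

```python
import math

def step_unicycle(x, y, theta, v, omega, dt):
    """One Euler step of the kinematic model (1):
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight along +X for one second at 0.1 m/s, 100-Hz sampling.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = step_unicycle(*pose, v=0.1, omega=0.0, dt=0.01)
```

With zero rotational velocity the heading stays constant and the robot advances along its heading, which is exactly the nonholonomic constraint the controller below has to work around.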

We define the error P_e as

P_e = \begin{bmatrix} x_e \\ y_e \\ \theta_e \end{bmatrix} = \begin{bmatrix} x_h - x_m \\ y_h - y_m \\ \alpha_{VI} - \theta_m \end{bmatrix}. \quad (2)

Here, \alpha_{VI} is a virtual angle compared to the real angle \theta_m; if \theta_e = 0, then \alpha_{VI} = \theta_m. Now, we assume the velocity of the mobile robot is

\begin{bmatrix} \dot{x}_m \\ \dot{y}_m \end{bmatrix} = \begin{bmatrix} v_m \cos\alpha_{VI} \\ v_m \sin\alpha_{VI} \end{bmatrix}. \quad (3)

Consider the following candidate Lyapunov function:

V_1 = \frac{1}{2} x_e^2 + \frac{1}{2} y_e^2. \quad (4)

The derivative of (4) is

\dot{V}_1 = x_e \dot{x}_e + y_e \dot{y}_e = x_e(\dot{x}_h - \dot{x}_m) + y_e(\dot{y}_h - \dot{y}_m) = x_e(\dot{x}_h - v_m \cos\alpha_{VI}) + y_e(\dot{y}_h - v_m \sin\alpha_{VI}). \quad (5)

If we make

\begin{bmatrix} v_m \cos\alpha_{VI} \\ v_m \sin\alpha_{VI} \end{bmatrix} = \begin{bmatrix} \dot{x}_h + c_x x_e + x_{VF} \\ \dot{y}_h + c_y y_e + y_{VF} \end{bmatrix} = \begin{bmatrix} \dot{x}_h + c_x x_e + \mathrm{sign}(x_e)\, c_{VF} \sqrt{x_e^2 + y_e^2} \\ \dot{y}_h + c_y y_e + \mathrm{sign}(y_e)\, c_{VF} \sqrt{x_e^2 + y_e^2} \end{bmatrix}. \quad (6)

Here, we introduce virtual forces x_{VF} (on the X axis) and y_{VF} (on the Y axis), which act like a force pushing the mobile robot in the direction that minimizes the error. This force utilizes the error from both the X and Y axes. c_x, c_y, and c_{VF} are the gains for the error terms x_e, y_e, and the virtual force, respectively.

Then, substituting (6) into (5), we get

\dot{V}_1 = x_e(\dot{x}_h - v_m \cos\alpha_{VI}) + y_e(\dot{y}_h - v_m \sin\alpha_{VI})
= x_e\left(-c_x x_e - \mathrm{sign}(x_e)\, c_{VF} \sqrt{x_e^2 + y_e^2}\right) + y_e\left(-c_y y_e - \mathrm{sign}(y_e)\, c_{VF} \sqrt{x_e^2 + y_e^2}\right)
= -c_x x_e^2 - \mathrm{sign}(x_e)\, x_e c_{VF} \sqrt{x_e^2 + y_e^2} - c_y y_e^2 - \mathrm{sign}(y_e)\, y_e c_{VF} \sqrt{x_e^2 + y_e^2} < 0. \quad (7)

For the rotational error, we have

\theta_e = \alpha_{VI} - \theta_m. \quad (8)

Then, we choose the composite candidate Lyapunov function as

V = V_1 + \frac{1}{2} \theta_e^2. \quad (9)

Taking the derivative of (9) and substituting (8), we get

\dot{V} = \dot{V}_1 + \theta_e \dot{\theta}_e = \dot{V}_1 + \theta_e(\dot{\alpha}_{VI} - \dot{\theta}_m) = \dot{V}_1 + \theta_e(\dot{\alpha}_{VI} - \omega_m). \quad (10)

Then, we can choose the rotational controller as

\omega_m = \dot{\alpha}_{VI} + c_\theta \theta_e. \quad (11)

Substituting (11) into (10), we have

\dot{V} = \dot{V}_1 + \theta_e(\dot{\alpha}_{VI} - \omega_m)
= -c_x x_e^2 - \mathrm{sign}(x_e)\, x_e c_{VF} \sqrt{x_e^2 + y_e^2} - c_y y_e^2 - \mathrm{sign}(y_e)\, y_e c_{VF} \sqrt{x_e^2 + y_e^2} - c_\theta \theta_e^2 < 0. \quad (12)

The derivative of the composite Lyapunov function is smaller than 0. Based on Lyapunov stability theory, the system is stable.

From (6), we can obtain the controller for the translational velocity of the mobile robot. Together with the rotational controller (11), we have the controller for the mobile robot

\begin{cases} v_m = \sqrt{(\dot{x}_h + c_x x_e + x_{VF})^2 + (\dot{y}_h + c_y y_e + y_{VF})^2} \\ \omega_m = \dot{\alpha}_{VI} + c_\theta \theta_e. \end{cases} \quad (13)

Here

x_{VF} = \mathrm{sign}(x_e)\, c_{VF} \sqrt{x_e^2 + y_e^2}
y_{VF} = \mathrm{sign}(y_e)\, c_{VF} \sqrt{x_e^2 + y_e^2}
\alpha_{VI} = \arctan\left(\frac{\dot{y}_h + c_y y_e + \mathrm{sign}(y_e)\, c_{VF} \sqrt{x_e^2 + y_e^2}}{\dot{x}_h + c_x x_e + \mathrm{sign}(x_e)\, c_{VF} \sqrt{x_e^2 + y_e^2}}\right).

In controller (13), the actual controller inputs are the position errors x_e and y_e in Cartesian coordinates and the rotation error \theta_e. The position error signal is the difference between the robot position and the Phantom stylus position, which can be obtained from the robot's and the Phantom's encoders separately. The rotation error is the difference between the virtual angle \alpha_{VI} and the current posture of the mobile robot, which can be read from the mobile robot's gyroscope.
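The complete control law (13) fits in a few lines. The following is a minimal Python sketch (not the authors' C++ code); the gain values are the ones listed in Section VII, and `atan2` is used as a quadrant-safe form of the arctangent defining \alpha_{VI}. The derivative \dot{\alpha}_{VI} needed by the rotational controller is assumed to be supplied externally, e.g., by finite differencing:

```python
import math

def sign(x):
    """sign() as used in eqs. (6) and (13); sign(0) = 0."""
    return (x > 0) - (x < 0)

def translational_control(xh_dot, yh_dot, xe, ye, cx=0.02, cy=0.02, c_vf=0.02):
    """Translational part of controller (13), plus the virtual angle alpha_VI."""
    r = math.hypot(xe, ye)            # sqrt(xe^2 + ye^2)
    x_vf = sign(xe) * c_vf * r        # virtual force along X
    y_vf = sign(ye) * c_vf * r        # virtual force along Y
    ax = xh_dot + cx * xe + x_vf      # = v_m * cos(alpha_VI)
    ay = yh_dot + cy * ye + y_vf      # = v_m * sin(alpha_VI)
    return math.hypot(ax, ay), math.atan2(ay, ax)   # v_m, alpha_VI

def rotational_control(alpha_vi_dot, theta_e, c_theta=0.1):
    """Rotational controller (11): omega_m = d(alpha_VI)/dt + c_theta * theta_e."""
    return alpha_vi_dot + c_theta * theta_e
```

With zero tracking error, the commanded speed reduces to the hand speed itself, so the robot simply mirrors the stylus motion; a pure position error adds the proportional and virtual-force terms on top.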

IV. ADAPTIVE WINDOWING VELOCITY ESTIMATION BASED ON FUZZY LOGIC

One important issue for the designed controller is the estimation of the Phantom velocity in (13). \dot{x}_h and \dot{y}_h are the velocities of the Phantom, which also represent the velocity of the human input in each direction. The Phantom haptic device only provides encoder information, which can be converted to a position in Cartesian space. The velocity information must then be derived from this position information, the so-called "velocity estimation" problem. This problem arises in a very wide range of control problems, such as motor control, robot manipulator control, etc. [14]. There are two general approaches to solving the velocity estimation problem: model-based and nonmodel-based techniques. A detailed literature review can be found in [15].

How to choose a suitable method for the velocity estimation problem depends on the specific system. For the proposed system in this paper, several perspectives are considered as distinguishing features compared to other systems.

• The refresh rate of the mobile robot control loop is 100 Hz, a relatively low refresh rate compared to some real-time systems. Hence, a large amount of computation can be done at this refresh rate.

• The development software is Visual Studio C++ 2005, which is a powerful platform for implementing nonmodel-based techniques, while model-based approaches are more suitable for implementation in the MATLAB/Simulink environment.


Fig. 3. Diagram of fuzzy logic windowing velocity estimation method.

Based on the above considerations, the nonmodel-based method is chosen. The velocity V can be calculated using the first-order difference

V = \frac{P_k - P_{k-n}}{nT}. \quad (14)

Here, P_k is the current position sample, P_{k-n} is the position n samples earlier, n is the size of the window used to estimate the velocity, and T is the sampling time.
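Equation (14) amounts to a backward difference over a buffer of samples. A minimal sketch, assuming `positions` holds the samples with the newest last:

```python
def windowed_velocity(positions, n, T):
    """First-order difference over a window of n samples, eq. (14).

    positions: sequence of position samples, newest last.
    n: window size in samples; T: sampling period in seconds.
    """
    if n < 1 or n >= len(positions):
        raise ValueError("window size must satisfy 1 <= n < len(positions)")
    return (positions[-1] - positions[-1 - n]) / (n * T)
```

A larger n averages out quantization noise at the cost of lag, which is exactly the trade-off the adaptive window below tries to manage.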

One of the nonmodel-based methods for haptic devices is the discrete adaptive window technique [16]. The main idea of this technique is to use an adaptive law based on an "uncertainty band" to change the window size for velocity estimation. However, this adaptive law cannot be implemented in our proposed system: because the refresh rate of the mobile control loop is low, the Phantom velocity changes greatly during the sampling period compared to a high-refresh-rate system. It is very hard to set a reasonable uncertainty band for this type of system.

In this paper, a novel adaptive law for changing the estimation window size is proposed. The current position error and its first-order differentiation value are merged together by fuzzy logic to change the estimation window size adaptively. Due to the distinguishing characteristics of fuzzy logic, such as approximation rather than accuracy, multiple inputs, etc., this method is particularly suitable for merging these two kinds of data [17]–[19]. The algorithm is a combination of the fuzzy logic velocity estimation method [20] and the adaptive window method [16]. Fig. 3 shows the detailed diagram of the proposed method.

A. Fuzzification

The first step of the fuzzy logic algorithm is fuzzification. The two inputs are the position error between the current mobile robot position and the human input, and its first-order differentiation value. The outputs are the degrees to which these two inputs belong to the specific membership functions. A simple fuzzification membership function, shown in Fig. 4, has been chosen for the two inputs; the simple triangle function is used as the membership function.

In Fig. 4, NB, NM, ZE, PM, and PB are the abbreviationsfor negative big, negative medium, zero, positive medium, andpositive big, respectively.
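A triangular fuzzification stage of this kind can be sketched as follows. The breakpoints below are hypothetical placeholders for a normalized input (the exact ranges of Fig. 4 are not reproduced here); only the five-set NB/NM/ZE/PM/PB structure comes from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical breakpoints for an input normalized to [-1, 1]; the outer
# sets NB and PB saturate at the ends of the range (shoulder functions).
MF = {
    "NB": lambda x: 1.0 if x <= -1 else tri(x, -1.5, -1.0, -0.5),
    "NM": lambda x: tri(x, -1.0, -0.5, 0.0),
    "ZE": lambda x: tri(x, -0.5, 0.0, 0.5),
    "PM": lambda x: tri(x, 0.0, 0.5, 1.0),
    "PB": lambda x: 1.0 if x >= 1 else tri(x, 0.5, 1.0, 1.5),
}

def fuzzify(x):
    """Degree of membership of x in each of the five fuzzy sets."""
    return {name: f(x) for name, f in MF.items()}
```

Each input (the position error and its difference) is passed through `fuzzify` separately; the overlapping triangles mean an input between two peaks fires two sets with partial degrees.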

B. Fuzzy Rules

The fuzzy rules are derived from the real-time operation of our proposed system: the main factors for changing the window size in the velocity estimation problem are the position error P_e and its first-order differentiation value (the change of the position error) \dot{P}_e.

Fig. 4. Fuzzification membership function.

TABLE I
FUZZY RULES

The window size is changed according to the different increasing and decreasing trends of these two values. It is worth mentioning that the fuzzy output and the window size have an inverse relationship: when the fuzzy output is PB, the window size is small, and when the fuzzy output is NB, the window size is large. The rules are listed below:

1) When P_e and \dot{P}_e are PB, the distance between the human hand and the mobile robot is large and will increase greatly due to the increasing change of P_e. Hence, the window size should be small enough to catch the rapid change of the velocity, corresponding to PB in Table I.

2) When P_e and \dot{P}_e are NB, the distance between the human hand and the mobile robot is nearly constant and will stay the same in the near future. Thus, the window size should be large enough to cancel the noise while calculating the velocity.

3) When P_e is PB but \dot{P}_e is NB, the distance between the human hand and the mobile robot is large but will stay the same in the near future. Hence, the window size should be medium so that it can both cancel the noise and catch a rapid change of the velocity, and vice versa.

4) When P_e and \dot{P}_e are ZE, the distance is medium and the change trend of the distance is medium. Thus, the window size should be medium to suit both the cancelation of the noise and the catching of the change of the velocity.

The above rules do not cover all possible situations; the remaining scenarios can be derived from the above rules. The summary of the fuzzy logic rules is given in Table I.

C. Defuzzification

The simple weighted-average defuzzification method has been adopted to calculate the output (the window size for the velocity estimation). The equation is

n_{window} = n_{max} - \left\lfloor k_{win} \frac{\sum f_m(o)\, o}{\sum f_m(o)} \right\rfloor. \quad (15)


Fig. 5. Feedback force on the Phantom Omni.

Here, n_{window} is the size of the velocity estimation window, calculated as the integer value of the fuzzy output. n_{max} is the maximum window size, chosen by trial and error. k_{win} is the output gain of the fuzzy logic, f_m is the fuzzy membership function, and o is the average value of the membership function.
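Equation (15) can be sketched directly. The set centers passed in below are hypothetical placeholders for the average values o of the output membership functions; n_max = 220 and k_win = 200 are the experimental choices from Section VII:

```python
import math

def defuzzified_window(memberships, centers, n_max=220, k_win=200):
    """Window size via eq. (15): n = n_max - floor(k_win * sum(mu*o)/sum(mu)).

    memberships: dict set-name -> firing degree of that output set.
    centers: dict set-name -> average value o of that set's membership function.
    """
    num = sum(memberships[s] * centers[s] for s in memberships)
    den = sum(memberships.values())
    if den == 0.0:
        return n_max                 # no rule fired: keep the largest window
    return n_max - math.floor(k_win * num / den)
```

With hypothetical centers running from 0 (NB) to 1 (PB), a fully fired PB output yields a small window and a fully fired NB output yields the maximum window, reproducing the inverse relationship described above.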

D. Velocity Calculation

The size of the velocity estimation window is given by the fuzzy logic algorithm. Then, the velocity estimate can be calculated as

V = \frac{P_k - P_{k-n_{window}}}{n_{window} T}. \quad (16)

V. HAPTIC RENDERING ALGORITHM

From Sections III and IV, we have the controller for the mobile robot, but there is no haptic feedback for the user. It is hard for the human operator to know the position error between the human and the robot or whether he/she should wait for the response of the mobile robot. Therefore, a haptic rendering algorithm based on the error information should be added to the system. In the proposed system, haptic rendering computes the feedback force based on measurements of the operator's motion. References [21] and [22] are comprehensive reviews of haptic rendering algorithms. In previous work, sensing, collision detection, surface point determination, force calculation, etc., have been extensively investigated. For our application, there is no virtual object, so only the force calculation is considered. For this reason, we develop a simple haptic rendering algorithm based on Hooke's law, which utilizes the position error information of the system. This haptic algorithm acts as an indicator: when the error grows, the force feedback increases at the same time. Therefore, the human operator feels the force feedback and waits for the mobile robot to follow the human movement. This process is the same as human beings interacting with each other. Fig. 5 shows the decomposition of the feedback force.

The absolute values of the feedback force along the X axis (f_x) and Y axis (f_y) can be calculated as

\begin{cases} |f_x| = k_x |x_e| \\ |f_y| = k_y |y_e| \end{cases} \quad (17)

where k_x and k_y are the gains for the X and Y axes, respectively.

Fig. 6. Implementation diagram of the assistive robotic system.

Therefore, the absolute value (|f|) and the direction (\theta_f) of the feedback force are

|f| = \sqrt{f_x^2 + f_y^2} = \sqrt{(k_x x_e)^2 + (k_y y_e)^2} \quad (18)

\theta_f = \arccos\frac{x_e}{\sqrt{x_e^2 + y_e^2}}. \quad (19)
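Equations (17)–(19) together give a short force-rendering routine. A minimal Python sketch using the gains k_x = k_y = 0.01 from Section VII (the zero-error guard is an added safety check, not from the paper):

```python
import math

def feedback_force(xe, ye, kx=0.01, ky=0.01):
    """Hooke's-law haptic rendering, eqs. (17)-(19).

    Returns (|f|, theta_f): force magnitude and direction angle.
    """
    r = math.hypot(xe, ye)
    if r == 0.0:
        return 0.0, 0.0                   # no error, no force to render
    fx = kx * abs(xe)                     # |f_x| = k_x |x_e|, eq. (17)
    fy = ky * abs(ye)                     # |f_y| = k_y |y_e|, eq. (17)
    magnitude = math.hypot(fx, fy)        # eq. (18)
    direction = math.acos(xe / r)         # eq. (19)
    return magnitude, direction
```

The magnitude grows linearly with the tracking error, so a user who outruns the robot feels an increasing pull back toward it, as described above.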

VI. SOFTWARE IMPLEMENTATION

The system software implementation is illustrated in Fig. 6. The arrow directions present the flow of the signals. In this implementation diagram, starting from the left block, the human manipulates the Phantom and changes the position of the stylus. The position signal is sent to the computer as the command signal to control the movement of the mobile robot. Then, the computer sends the force feedback command back to the Phantom based on the position error signal. Finally, the human feels the feedback force and makes the corresponding manipulation of the Phantom.

For the software development, we chose Visual Studio C++ 2005 as the development tool. MOBILEROBOTS provides the ARIA API package for the PeopleBot, and OpenHaptics, provided by SensAble, is used for the Phantom Omni. All the header files and dynamic-link libraries are combined in one project to be used by the program. Because the haptic rendering algorithm needs at least a 1000-Hz refresh rate, which is very fast compared to the control algorithm of the robot, two separate threads are created in the program. The control algorithm of the mobile robot is put in the main thread; the second thread contains the haptic rendering algorithm. The communication between the threads is done at the end of the main thread.
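The two-rate thread structure can be sketched as follows. This is an illustrative Python sketch, not the authors' C++/ARIA/OpenHaptics implementation; the `state` dictionary and tick counters are hypothetical stand-ins for the shared control and rendering signals:

```python
import threading
import time

state = {"error": (0.0, 0.0), "force": (0.0, 0.0), "ticks": [0, 0]}
lock = threading.Lock()          # guards the signals shared by the two loops

def run_loop(step, period_s, stop_event):
    """Run step() at a fixed period until stop_event is set."""
    next_t = time.monotonic()
    while not stop_event.is_set():
        step()
        next_t += period_s
        time.sleep(max(0.0, next_t - time.monotonic()))

def control_step():
    """100-Hz robot-control loop (the paper's main thread)."""
    with lock:
        state["ticks"][0] += 1   # here: read encoders, compute eq. (13)

def haptic_step():
    """1-kHz haptic-rendering loop (the paper's second thread)."""
    with lock:
        xe, ye = state["error"]
        state["force"] = (0.01 * abs(xe), 0.01 * abs(ye))  # eq. (17)
        state["ticks"][1] += 1

stop = threading.Event()
haptic = threading.Thread(target=run_loop, args=(haptic_step, 0.001, stop))
control = threading.Thread(target=run_loop, args=(control_step, 0.010, stop))
haptic.start(); control.start()
time.sleep(0.05)                 # let both loops run briefly for illustration
stop.set(); haptic.join(); control.join()
```

Decoupling the two rates this way lets the rendering loop meet its 1-kHz deadline without being stalled by the slower robot I/O, which is the reason the paper gives for using two threads.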

VII. EXPERIMENTAL RESULTS

The system and the corresponding algorithms were developed and designed separately in the previous sections. From the control theory perspective, the experiments are designed to observe the input and output signals of the system. Hence, the experiments focus on the performance of the algorithms proposed in this paper. Two kinds of experiments are carried out:

• A designed trajectory is used to test the tracking performance of the mobile robot under simulated circumstances.


Fig. 7. Designed trajectory experiment for the robot (45◦). (a) Position of the mobile robot in the 2-D plane. (b) The orientation of the mobile robot. (c) The translational velocity of the mobile robot. (d) The absolute value of the position error.

• Real human input from the haptic device is used to test the tracking performance of the mobile robot under real circumstances.

For these experiments, we choose the gains as follows: c_x = 0.02, c_y = 0.02, c_\theta = 0.1, c_{VF} = 0.02, k_x = k_y = 0.01, n_{max} = 220, and k_{win} = 200.

A. Results of Designed Trajectory Experiment

Because the mobile robot in our system is nonholonomic, it differs from an omnidirectional mobile robot, particularly in its rotation behavior. Compared to an omnidirectional mobile robot, a nonholonomic mobile robot cannot follow the desired trajectory precisely if the direction of the trajectory differs from the current posture of the robot, because these robots are subject to the constraint that the translational velocity of the robot is always orthogonal to the driven wheels' axis [23]. We design two experiments to test the rotation behavior of our system. For the first one, we make the robot follow a straight line with a velocity of 141 mm/s and a direction 45◦ from the heading of the robot. For the second experiment, the line is changed to a velocity of 100 mm/s and a direction 90◦ from the heading of the robot, i.e., perpendicular to the initial robot posture. The results of the experiments are shown in Figs. 7 and 8. Because there is no noise in the input signal for the designed trajectories, in order to verify the efficiency of the control algorithm alone, the velocity estimation algorithm is not applied in these two experiments.

From these experimental results, we can conclude that for a given designed trajectory, the mobile robot is able to follow the position [Figs. 7(a) and (b) and 8(a) and (b)] and the velocity of the trajectory [Figs. 7(c) and 8(c)]. Because our mobile robot is nonholonomic, in order to follow a trajectory whose direction differs from the robot's, the robot turns and moves forward at the same time. Therefore, a position error appears in the results at the beginning of each experiment [Figs. 7(d) and 8(d)].

B. Results of Human Input Experiment

The real human input experiment is carried out to evaluate the performance of the controller, the velocity estimation, and the haptic rendering algorithm. The human user holds the Phantom stylus firmly and slowly moves it. First, the user moves the stylus in the translational direction and then slightly turns the stylus to the left. These two actions verify the translational and rotational performance of the system, respectively. Fig. 9 shows the results of the experiment.

From the results, we can conclude that the tracking control algorithm works well with human input. As shown in Fig. 9(a) and (d), the position error eventually converges to zero. The result of the force feedback algorithm is shown in Fig. 10. The feedback force is directly proportional to the position error and points toward the mobile robot. With this feedback force, the human user can sense the error between the robot's position and his/her own. If the user moves too fast for the robot to follow, the feedback force increases; the user then knows that the movement is too fast for the robot and slows down. The effectiveness of the fuzzy logic adaptive velocity estimation method is shown in Fig. 11. When the position error and its rate of change vary rapidly (in 0–10 s and 40–50 s), the window size shifts correspondingly.
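The two mechanisms discussed above can be sketched together as follows. This is a simplified illustration under stated assumptions: the stiffness gain, sampling rate, and window-selection logic are hypothetical, and the paper's fuzzy rules that choose the window size are not reproduced here.

```python
K_FORCE = 0.05  # illustrative stiffness gain; the paper's value is not given

def render_force(robot_xy, stylus_xy):
    """Proportional haptic rendering: the force is proportional to the
    position error and points from the stylus back toward the robot,
    so a large lag is felt as a strong pull."""
    return (K_FORCE * (robot_xy[0] - stylus_xy[0]),
            K_FORCE * (robot_xy[1] - stylus_xy[1]))

def window_velocity(samples, dt, window):
    """Finite-difference velocity over an adjustable window of past
    position samples: a long window smooths quantization noise, a
    short one reacts faster. Trading these off online is what the
    fuzzy adaptation of the window size targets."""
    window = max(2, min(window, len(samples)))
    return (samples[-1] - samples[-window]) / ((window - 1) * dt)

# A user 100 mm ahead of the robot feels a pull back toward it.
fx, fy = render_force(robot_xy=(0.0, 0.0), stylus_xy=(100.0, 0.0))

# Velocity of a position ramp sampled at 100 Hz, estimated over a
# 5-sample window (10 mm per 0.01 s step -> 1000 mm/s).
positions = [10.0 * k for k in range(20)]
v = window_velocity(positions, dt=0.01, window=5)
```

In this sketch `fx` comes out negative, i.e., the force points back toward the lagging robot, which matches the behavior described above: the faster the user outruns the robot, the stronger the restoring pull.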


WANG AND LIU: HAPTIC INTERACTION FOR MOBILE ASSISTIVE ROBOTS 3507

Fig. 8. Designed trajectory experiment for the robot (90°). (a) Position of the mobile robot in the 2-D plane. (b) Orientation of the mobile robot. (c) Translational velocity of the mobile robot. (d) Absolute value of the position error.

Fig. 9. Human input experiment. (a) Position of the mobile robot in the 2-D plane. (b) Orientation of the mobile robot. (c) Translational velocity of the mobile robot. (d) Absolute value of the position error.

VIII. CONCLUSION AND FUTURE WORK

This paper presents a novel assistive robot, which has great potential for healthcare applications. Haptic interaction is applied to improve the universality of the system. The system consists of two parts: a mobile robot, MOBILEROBOTS's PeopleBot, and a haptic device, SensAble's Phantom Omni. The mobile robot serves as the base of the system, carrying the haptic device and loads. The haptic device is used as a medium to communicate the user's intention to the robot. In order to control the movement of the mobile robot in the 2-D plane using the haptic device, we developed a tracking algorithm based on the backstepping technique. A fuzzy logic adaptive windowing method is proposed to solve the velocity estimation problem in the controller. A simple haptic rendering algorithm is designed to calculate the feedback force. The feedback force is proportional to the position error between the robot and the human operator. Hence, the user can "feel" this error rather than observe it. This


Fig. 10. Feedback force in human input experiment.

Fig. 11. Changing window size of the proposed velocity estimation method.

feature enables people suffering from hearing and vision sensory loss to interact with the robot through their haptic channel. By "feeling" the error, the user commands the robot to move in the direction of less feedback force. Both the designed trajectory experiments and the human input experiment show the effectiveness of the designed control and haptic rendering algorithms.

The developed system is, however, only a first prototype toward a real-life assistive robot for seniors or people with disabilities. More research should be done to complete the system. Performance criteria for specific applications should be determined, and system performance should be assessed within the application context, since an assistive robot system depends highly on its specific application. In the future, we will investigate these problems, apply the developed system to clinical cases, and improve the system based on specific application requirements.

REFERENCES

[1] G. Lacey, "User involvement in the design and evaluation of a smart mobility aid," J. Rehabil. Res. Develop., vol. 37, no. 6, pp. 709–723, Nov./Dec. 2000.

[2] M. Kassler, "Robotics for health care: A review of the literature," Robotica, vol. 11, no. 6, pp. 495–516, 1993.

[3] M. Johnson, E. Guglielmelli, G. Di Lauro, C. Laschi, M. Carrozza, and P. Dario, "Giving-a-hand system: The development of a task-specific robot appliance," in Advances in Rehabilitation Robotics, ser. Lecture Notes in Control and Information Sciences, vol. 306, Z. Bien and D. Stefanov, Eds. Berlin, Germany: Springer-Verlag, 2004, pp. 127–141.

[4] J. F. Engelberger, "Health-care robotics goes commercial: The 'Helpmate' experience," Robotica, vol. 11, pt. 6, pp. 517–523, 1993.

[5] S. P. Levine, D. A. Bell, L. A. Jaros, R. C. Simpson, Y. Koren, and J. Borenstein, "The NavChair assistive wheelchair navigation system," IEEE Trans. Rehabil. Eng., vol. 7, no. 4, pp. 443–451, Dec. 1999.

[6] D. C. Dugdale, Aging Changes in the Senses. [Online]. Available: http://www.nlm.nih.gov/medlineplus/ency/article/004013.htm

[7] Performance PeopleBot Operations Manual, MOBILEROBOTS Company, Amherst, NH, 2007.

[8] Y. Kanayama, Y. Kimura, and F. Miyazaki, "A stable tracking control method for an autonomous mobile robot," in Proc. IEEE Int. Conf. Robot. Autom., 1990, pp. 384–389.

[9] W. Dong, W. Huo, S. Tso, and W. Xu, "Tracking control of uncertain dynamic nonholonomic system and its application to wheeled mobile robots," IEEE Trans. Robot. Autom., vol. 16, no. 6, pp. 870–874, Dec. 2000.

[10] R. Fierro and F. Lewis, "Control of a nonholonomic mobile robot using neural networks," IEEE Trans. Neural Netw., vol. 9, no. 4, pp. 589–600, Jul. 1998.

[11] T. Fukao, H. Nakagawa, and N. Adachi, "Adaptive tracking control of a nonholonomic mobile robot," IEEE Trans. Robot. Autom., vol. 16, no. 5, pp. 609–615, Oct. 2000.

[12] J. Liu, Design of the Robotic System and Matlab Simulation. Beijing, China: Tsinghua Press, 2008.

[13] H. Wang and X. P. Liu, "Human-robot interaction via haptic device," in Proc. IEEE Int. Symp. HAVE, Oct. 2010, pp. 1–6.

[14] R. Kavanagh, "Number-theoretic approach to optimum velocity decoding given quantized position information," IEEE Trans. Instrum. Meas., vol. 50, no. 5, pp. 1270–1276, Oct. 2001.

[15] L. Bascetta, G. Magnani, and P. Rocco, "Velocity estimation: Assessing the performance of non-model-based techniques," IEEE Trans. Control Syst. Technol., vol. 17, no. 2, pp. 424–433, Mar. 2009.

[16] F. Janabi-Sharifi, V. Hayward, and C. J. Chen, "Discrete-time adaptive windowing for velocity estimation," IEEE Trans. Control Syst. Technol., vol. 8, no. 6, pp. 1003–1009, Nov. 2000.

[17] S. Stubberud and K. Kramer, "Data association for multiple sensor types using fuzzy logic," IEEE Trans. Instrum. Meas., vol. 55, no. 6, pp. 2292–2303, Dec. 2006.

[18] Y.-B. Byun, Y. Takama, and K. Hirota, "Combined channel estimation and data decoding based on fuzzy logic," IEEE Trans. Instrum. Meas., vol. 51, no. 2, pp. 342–346, Apr. 2002.

[19] J. Stover, D. Hall, and R. Gibson, "A fuzzy-logic architecture for autonomous multisensor data fusion," IEEE Trans. Ind. Electron., vol. 43, no. 3, pp. 403–410, Jun. 1996.

[20] F. Yusivar, D. Hamada, K. Uchida, S. Wakao, and T. Onuki, "A new method of motor speed estimation using fuzzy logic algorithm," in Proc. IEMD, May 1999, pp. 278–280.

[21] K. Salisbury, F. Conti, and F. Barbagli, "Haptic rendering: Introductory concepts," IEEE Comput. Graph. Appl., vol. 24, no. 2, pp. 24–32, Mar./Apr. 2004.

[22] A. El Saddik, "The potential of haptics technologies," IEEE Instrum. Meas. Mag., vol. 10, no. 1, pp. 10–17, Feb. 2007.

[23] R. Siegwart and I. R. Nourbakhsh, Introduction to Autonomous Mobile Robots. Cambridge, MA: MIT Press, 2004.

Huanran Wang (S'06) received the B.Sc. and M.Sc. degrees with a major in electrical engineering and pattern recognition from Harbin Engineering University, Harbin, China, in 2005 and 2008, respectively. He is currently working toward the Ph.D. degree in electrical engineering at Carleton University, Ottawa, ON, Canada.

His research interests include assistive robotics, haptics, and smart materials and their applications.


Xiaoping P. Liu (SM'06) received the B.Sc. and M.Sc. degrees from Northern Jiaotong University, Beijing, China, in 1992 and 1995, respectively, and the Ph.D. degree from the University of Alberta, Edmonton, AB, Canada, in 2002.

Since July 2002, he has been with the Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, Canada, where he is currently a Canada Research Chair Professor. He was an Associate Editor for Intelligent Service Robotics, the International Journal of Robotics and Automation, Control and Intelligent Systems, and the International Journal of Advanced Media and Communication. He has authored or coauthored more than 150 research articles. His research interests include interactive networked systems and teleoperation, haptics, micromanipulation, robotics, intelligent systems, context-aware intelligent networks, and their applications to biomedical engineering.

Dr. Liu is a member of the Professional Engineers of Ontario. He is an Associate Editor for several journals including the IEEE/ASME TRANSACTIONS ON MECHATRONICS and the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING. He was the General Chair of the 2008 IEEE International Workshop on Haptic Audio Visual Environments and Their Applications and the 2005 IEEE International Conference on Mechatronics and Automation. He was the recipient of the 2007 Carleton Research Achievement Award, the 2006 Province of Ontario Early Researcher Award, the 2006 Carty Research Fellowship, the Best Conference Paper Award of the 2006 IEEE International Conference on Mechatronics and Automation, and the 2003 Province of Ontario Distinguished Researcher Award.