2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Design and Performance Analysis of an Indoor Position Tracking Technique for Smart Rollators

Payam Nazemzadeh¹, Daniele Fontanelli², David Macii², Tizar Rizano¹, Luigi Palopoli¹
¹Department of Information Engineering and Computer Science
²Department of Industrial Engineering
University of Trento, Trento, Italy

E-mail: {payam.nazemzadeh,daniele.fontanelli,david.macii}@unitn.it, {rizano,palopoli}@disi.unitn.it

Abstract—This paper presents a position tracking technique based on multisensor data fusion for rollators helping elderly people to move safely in large indoor spaces such as public buildings, shopping malls or airports. The proposed technique has been developed within the FP7 project DALi, and relies on an extended Kalman filter processing data from dead-reckoning sensors (i.e. encoders and gyroscopes), a short-range radio frequency identification (RFID) system and a front Kinect camera. As is well known, position tracking based on dead-reckoning sensors only is intrinsically affected by growing uncertainty. In order to keep such uncertainty within desired bounds, the position values are occasionally updated using a coarse-grained grid of low-cost passive RFID tags with known coordinates in a given map-based reference frame. Unfortunately, RFID tag detection does not provide any information about the orientation of the rollator. Therefore, a front camera detecting some markers on the walls is used to adjust direction. Of course, the data rate from both the RFID reader and the camera is not constant, as it depends on the actual user's trajectory and on the distance between pairs of RFID tags and pairs of markers. Therefore, the average distance between tags and markers should be properly set to achieve a good trade-off between overall deployment costs and accuracy. In the paper, the results of a simulation-based performance analysis are reported in view of implementing the proposed localization and tracking technique in a real environment.

Index Terms—Indoor localization, position tracking, Kalman filter, data fusion, performance evaluation

I. INTRODUCTION

Today's robots are no longer (and not only) the stiff and dangerous machines used in factory automation throughout the last decades. They have evolved into intelligent and flexible agents that interact closely with their users, improve their quality of life and do not pose significant safety threats. A perfect application arena for this technology is the so-called Ambient Assisted Living (AAL). In the portfolio of technological solutions for AAL, robotic assistants already play an important role, and many researchers expect this role to grow further. DALi is a research project aimed at developing a cognitive walker (called c-Walker) that assists elderly people with deambulation problems in navigating complex environments, such as shopping malls, airports or stations, which they could perceive as intimidating [1].

The DALi project has offered a practical motivation for the work presented in this paper. The problem can be briefly summarized in the following terms: in order to assist users in their activities in a large and partially structured environment, a robotic assistant requires real-time position tracking. This problem can hardly be solved with conventional GPS receivers, because GPS signals are generally too weak to be detected indoors and, in any case, GPS positioning accuracy is not sufficient for the considered application. What is needed instead is an accurate local positioning system (LPS) (i.e. with accuracy better than one meter) at an affordable cost.

The importance of indoor localization in several application areas has stimulated considerable research efforts in the last few years. Localization techniques can be roughly classified in two groups, i.e. relative and absolute localization methods. In relative localization methods, the distance of a device from an initial position is measured using dead-reckoning sensors such as encoders, gyroscopes and accelerometers [2], [3]. An inevitable problem of such techniques is the accumulation of measurement uncertainty contributions. As a result, the estimation errors tend to grow indefinitely, thus making measured position values useless after some time. This problem can be avoided if absolute localization methods are used. In fact, these techniques leverage external devices or landmarks deployed in the environment in known positions, which are detected by sensing systems such as cameras [4], ultrasound sensors [5] or laser scanners [6]. Well-known wireless solutions for localization are based on triangulation from fixed anchors and rely on: received signal strength indication (RSSI) measurements [7], [8], [9], time-of-flight (ToF) or time-of-arrival (ToA) measurements of radio frequency (RF) signals [10], [11], or a combination thereof [12]. Another possibility for absolute localization is the use of short-range Radio Frequency Identification (RFID) systems, which have the key advantage of being less affected by the level of crowdedness of the environment. A survey on wireless systems for indoor localization (including networks of passive and active RFIDs) can be found in [13]. Active RFID-based solutions require a smaller number of tags in the environment [14], [15]. However, passive RFIDs are much cheaper and do not need any maintenance (e.g. battery replacement). Therefore, they are preferable for localization in large environments. In [16] passive RFID tags are buried in carpets. Another trigonometric algorithm for indoor localization of mobile robots based on passive RFID

978-1-4673-1954-6/12/$31.00 ©2012 IEEE


tags has been proposed by Park et al. [17]. An important drawback of localization based on RFID technologies is that at least one tag within the range of the reader has to be found at each sampling time. As a result, a very large number of tags can be needed for position tracking, which may not be economically viable in large environments. Kulyoukin et al. [18] propose to reduce the number of tags by using RFID mats. This solution is viable in corridors, but not in large rooms without obstacles or forced paths.

For all the reasons above, position tracking performance can be improved if relative and absolute localization techniques are combined together, e.g. using multi-sensor data fusion algorithms [19]. In [20] a Bayesian estimation algorithm fuses various data received from GPS, range sensors, inertial and magnetic sensors and a barometer. A method for fusing data from inertial sensors and data received from a wireless camera is described in [21]. Choi et al. propose to solve the position tracking problem by fusing data from RFID tags, odometers [22], cameras [23] and ultrasonic sensors [24]. In [25] an extended Kalman filter (EKF) is used to fuse odometry data, measured by two encoders, with data from a gyroscope to localize a smart rollator. Although Kalman filters can strongly reduce the estimation uncertainty, both measurement systems used in [25] are dead reckoning. Hence, the total uncertainty accumulates over time. A different approach is followed in [26], where an automated wheelchair is localized by a camera recognizing landmarks on the ceiling, while two encoders and an azimuth sensor are used when the landmarks are out of view.

A clear conclusion from the discussion of the related work is that a reliable and cost-effective LPS requires the integration of different measurement devices with complementary characteristics in terms of cost, accuracy, range, availability and computational requirements. The approach described in the rest of this paper extends and refines the basic idea presented in [27]. As in other similar solutions, a Kalman filter is used to fuse the data coming from a relative localization system working at a high sampling rate with those collected from an absolute localization system. However, the infrastructure of the absolute system has to be configured so as to minimize its cost, with little impact on accuracy. In the rest of the paper, first, in Section II, our specific problem is formalized. Then the position tracking algorithm is described in Section III and the underlying extended Kalman filter in Section IV. Finally, in Section V the simulation settings are defined and several results are reported and commented.

II. PROBLEM OVERVIEW

The position tracking technique proposed in this paper relies on multi-sensor data fusion based on the data collected from:

• two incremental encoders installed in both rear wheels;
• a triaxial gyroscope located in the central top part of the rollator;
• an RFID reader with a range of some tens of cm, placed below the rollator with the antenna approximately 20 cm above the ground;
• a front camera, particularly a Kinect.

The encoders are used to estimate the relative position and orientation of the c-Walker smart rollator by using a simple kinematic model. Relative orientation estimation can be improved also by integrating the angular velocity values collected from a high-performance gyroscope (i.e., exhibiting a very small drift). Unfortunately, model-related as well as encoder and gyroscope measurement uncertainty contributions accumulate over time and completely disrupt localization performance after some time. Such unbounded uncertainty growth is common to all pure dead-reckoning localization approaches relying on incremental sensors subject to measurement bias and drift [22], [28]. Therefore, absolute position and orientation data are necessary not only to initialize the system, but also to adjust the target location periodically. As far as the position updates are concerned, an RFID reader can be used to detect some low-cost passive tags deployed in the environment at known locations, e.g., stuck on the ground or embedded in the tiles. Short-range RFID tag reading is a quite attractive solution in this context since passive tags are cheap. Also, position measurement uncertainty can be kept smaller than the RFID reading range (i.e., in the order of a few tens of cm).

The front camera is used as a bearing-only sensor, a solution often used in robotics for localization and map construction due to its relatively low computational burden and good accuracy [29], [30]. In this way, just the orientation estimation error is directly adjusted. Since the system can also rely on a map of the chosen environment and an absolute reference is needed for drift and bias removal, we will assume that the camera is able to detect specified markers inside the environment (e.g., wallpapers, signs, elevators, paintings) whose position inside the map is assumed to be known a priori. As a Kinect device will be used in this project for such duties as object recognition, we do not need to add any additional camera to the system. The Kinect is a motion sensing device made by Microsoft, based on an RGB camera and a depth sensor, which can capture 3D video data in various light conditions. Best performance is achieved at distances between 80 cm and 4 m from the camera, with a horizontal field of view of about 60 degrees. The depth sensor is a combination of an infrared laser projector and a CMOS sensor, which gives the Kinect the ability to estimate the distance between different objects. The adequate accuracy of the Kinect as well as its low price have made it quite interesting for indoor robot navigation [31], [32].

In this paper the depth maps collected by the Kinect are used to estimate the orientation of the c-Walker rollator with respect to known markers. In order to limit the computational and communication bandwidth requirements associated with Kinect image processing, marker detection is not repeated at every frame, but only every few seconds. In this way, the Kinect processes its depth map data and returns the rollator orientation (if a marker is detected) at a very low rate, which is expected not to saturate the communication and computational resources available on board the c-Walker.


Fig. 1. The c-Walker rollator of DALi within the reference frame 〈W〉 = {Ow, Xw, Yw, Zw}.

A more detailed description of the context where the c-Walker prototype is supposed to be used is shown in Fig. 1. 〈W〉 = {Ow, Xw, Yw, Zw} represents a fixed, right-handed world reference frame, where Π = Xw × Yw describes the plane of motion, Zw points outwards from plane Π and, finally, Ow is the origin of the reference frame. With the described choice of 〈W〉, the tags can be placed regularly on a grid on plane Π (possibly embedding the tags in the floor). Consider that if DR represents the distance between adjacent tags, the number of RFID tags must be kept quite limited and DR should be considerably larger than the RFID reading range (which typically is in the order of a few tens of cm), both to keep deployment costs acceptable and to avoid simultaneous (ambiguous) readings from multiple tags. Similar considerations hold for the visual markers to be detected by the Kinect: they can be assumed to be placed on the walls at a distance DM from one another, with DM larger than the maximum reading range of the Kinect.
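To make the geometry concrete, the following Python fragment (an illustrative sketch, not part of the c-Walker software; all function names are ours) simulates a short-range reading over a square grid of tags with spacing DR, under the assumption that the reading range R is well below DR/2, so that at most one tag can respond:

```python
import math

def nearest_tag(x, y, d_r):
    """Coordinates of the grid tag closest to (x, y), for tags placed
    on a square grid with spacing d_r on plane Π."""
    return (round(x / d_r) * d_r, round(y / d_r) * d_r)

def read_tag(x, y, d_r, read_range):
    """Simulate a short-range RFID reading: the tag coordinates are
    returned only when the reader is within read_range of a tag;
    otherwise no detection event occurs."""
    tx, ty = nearest_tag(x, y, d_r)
    if math.hypot(x - tx, y - ty) <= read_range:
        return (tx, ty)
    return None
```

With DR = 2 m and R = 15 cm, a rollator at (2.05, 3.98) would detect the tag at (2.0, 4.0), whereas a rollator in the middle of a grid cell reads nothing, which is exactly the event-based behavior exploited later by the filter.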

From a kinematic viewpoint, the rollator consists of two passive front caster wheels and two fixed rear wheels. If d is the wheel axle length, r the radius of the wheels, and ωr and ωl the angular velocities of the right and left wheels, respectively, it follows that

v = (r/2)(ωr + ωl),    ω = (r/d)(ωr − ωl),    (1)

where v is the tangential velocity of the rollator and ω is its angular velocity around the midpoint of the wheel axle. The Cartesian coordinates of this point on plane Π (denoted as (x, y) in Fig. 1) determine the actual position of the rollator. The orientation of the c-Walker with respect to axis Xw is expressed instead by variable θ. Therefore, the system describing the motion of the rollator is given by

\dot{s} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos(\theta) & 0 \\ \sin(\theta) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} \frac{r}{2}\cos(\theta) & \frac{r}{2}\cos(\theta) \\ \frac{r}{2}\sin(\theta) & \frac{r}{2}\sin(\theta) \\ \frac{r}{d} & -\frac{r}{d} \end{bmatrix} \begin{bmatrix} \omega_r \\ \omega_l \end{bmatrix},    (2)

where s = [x, y, θ]^T represents the state of the system. The position tracking algorithm proposed in this paper must be able to keep the difference s − ŝ between the actual and estimated state within a given interval ±εs with 90% probability. It is worth emphasizing that the proposed localization solution relies on a mixture of event-based and time-based measurement results. Indeed, the RFID tags are detected only when the rollator moves close to them (which depends on the trajectory), while the Kinect can detect one of the markers only if it is within the field of view of the camera. Evidently, the larger the values of DR and DM, the lower the average rate of position and orientation adjustments. In order to limit the computational burden associated with marker detection and recognition, markers are not continuously tracked by the camera. In fact, the frame acquisition period can be as long as a few seconds. The data received from different sensing devices are fused by means of an extended Kalman filter (EKF). The behavior of the proposed algorithm is summarized by the block diagram in Fig. 2. The encoders are used to predict position and orientation. The gyroscope is used to update the orientation in the short term. The event-based measures from the RFID reader and the Kinect camera are used, when available, to update the estimated state of the system.

The problem of selecting the best configuration in a given environment can be tackled using a Monte Carlo approach. At first, the state estimation errors over a large set of random trajectories are computed for different values of DR and DM. Then, the values of DR and DM are chosen so as to make the uncertainty smaller than ±εs with a minimum number of tags and markers in the chosen environment.
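For concreteness, the unicycle model (1)-(2) can be sketched in a few lines of Python (an illustrative fragment under the stated kinematic assumptions; the function names are ours, not from the DALi codebase):

```python
import math

def unicycle_velocities(omega_r, omega_l, r, d):
    """Eq. (1): tangential and angular velocity of the rollator from
    the right/left wheel angular velocities, wheel radius r and
    axle length d."""
    v = (r / 2.0) * (omega_r + omega_l)
    omega = (r / d) * (omega_r - omega_l)
    return v, omega

def state_derivative(s, omega_r, omega_l, r, d):
    """Eq. (2): time derivative of the state s = (x, y, theta)."""
    _, _, theta = s
    v, omega = unicycle_velocities(omega_r, omega_l, r, d)
    return (v * math.cos(theta), v * math.sin(theta), omega)
```

For example, with r = 0.1 m and d = 0.5 m, equal wheel speeds of 10 rad/s give v = 1 m/s and ω = 0, i.e. straight-line motion, as expected from (1).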

III. ALGORITHM DESCRIPTION

The sensor data from encoders and gyroscope are collected with a sampling period Ts. Therefore, the continuous-time kinematic model (2) can be discretized with period Ts, yielding

s_{k+1} = s_k + \begin{bmatrix} \frac{r}{2}\cos(\theta_k) & \frac{r}{2}\cos(\theta_k) \\ \frac{r}{2}\sin(\theta_k) & \frac{r}{2}\sin(\theta_k) \\ \frac{r}{d} & -\frac{r}{d} \end{bmatrix} \begin{bmatrix} T_s\,\omega_{r_k} \\ T_s\,\omega_{l_k} \end{bmatrix}.    (3)
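A minimal sketch of the discretized model (3), again in illustrative Python (hypothetical names, not the project code):

```python
import math

def predict_state(s, dphi_r, dphi_l, r, d):
    """One Euler-forward dead-reckoning step of model (3), where
    dphi_r = Ts*omega_r and dphi_l = Ts*omega_l are the wheel angular
    displacements accumulated over one sampling period Ts."""
    x, y, theta = s
    forward = (r / 2.0) * (dphi_r + dphi_l)   # displacement along heading
    return (x + forward * math.cos(theta),
            y + forward * math.sin(theta),
            theta + (r / d) * (dphi_r - dphi_l))
```

Iterating this step with noisy encoder readings reproduces the unbounded dead-reckoning drift discussed above, which is precisely what the absolute RFID and camera updates are meant to bound.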

The integration algorithm adopted to obtain (3) is the Euler forward method and relies on the assumption that the wheel velocities are constant during every sample period. As a consequence, ∆Φ_{r_k} = T_s ω_{r_k} and ∆Φ_{l_k} = T_s ω_{l_k} represent the angular displacements of the right and left wheels, respectively, during the k-th integration period. Note that (3) can be



Fig. 2. Block diagram of the proposed multi-sensor data fusion technique for rollator position tracking.

rewritten as

s_{k+1} = s_k + f(s_k)∆Φ_k,    (4)

where ∆Φ_k = [∆Φ_{r_k}, ∆Φ_{l_k}]^T. Since the values of ∆Φ_k are measured by the encoders, they are affected by the additive independent uncertainty contributions ε_k = [ε_{r_k}, ε_{l_k}]^T. Therefore, (4) can be rewritten as

s_{k+1} = s_k + f(s_k)(∆Φ_k + ε_k),    (5)

where ∆Φ_k now denotes the vector of values actually measured at time kT_s.

The output of (4) is simply a linear function of the state, i.e.

o_k = C_k s_k + ζ_k,    (6)

where the dimension of C_k changes over time as a function of the available measurement data coming from the gyroscope, the RFID reader and/or the camera, and ζ_k is the vector of the corresponding uncertainty contributions at time kT_s. In particular, depending on the available measures, the terms of the system output equation can be expressed as follows:

• in the case of the gyroscope only:

o_k = θ_{g_k},    C_k = C_{g_k},    ζ_k = ζ_{gθ_k};    (7)

where θ_{g_k} denotes the orientation measured by the gyroscope and ζ_{gθ_k} is the corresponding uncertainty;

• in the case of gyroscope + RFID:

o_k = \begin{bmatrix} x_k \\ y_k \\ \theta_{g_k} \end{bmatrix}, \quad C_k = \begin{bmatrix} C_{r_k} \\ C_{g_k} \end{bmatrix}, \quad \zeta_k = \begin{bmatrix} \zeta_{x_k} \\ \zeta_{y_k} \\ \zeta_{g\theta_k} \end{bmatrix};    (8)

where (x_k, y_k) are the coordinates of the detected RFID tag in frame 〈W〉, and ζ_{x_k} and ζ_{y_k} represent the corresponding position offsets along axes Xw and Yw;

• in the case of gyroscope + camera:

o_k = \begin{bmatrix} \theta_{g_k} \\ \theta_{c_k} \end{bmatrix}, \quad C_k = \begin{bmatrix} C_{g_k} \\ C_{c_k} \end{bmatrix}, \quad \zeta_k = \begin{bmatrix} \zeta_{g\theta_k} \\ \zeta_{c\theta_k} \end{bmatrix};    (9)

where the meaning of θ_{g_k} and ζ_{gθ_k} is the same as above, θ_{c_k} is the orientation measured by the Kinect when a marker is detected and ζ_{cθ_k} is the corresponding measurement uncertainty; and finally

• in the case of gyroscope + RFID + camera:

o_k = \begin{bmatrix} x_k \\ y_k \\ \theta_{g_k} \\ \theta_{c_k} \end{bmatrix}, \quad C_k = \begin{bmatrix} C_{r_k} \\ C_{g_k} \\ C_{c_k} \end{bmatrix}, \quad \zeta_k = \begin{bmatrix} \zeta_{x_k} \\ \zeta_{y_k} \\ \zeta_{g\theta_k} \\ \zeta_{c\theta_k} \end{bmatrix}.    (10)

Observe that in (7)-(10) C_k results from the combination of one or more of the following matrices:

C_{r_k} = \begin{bmatrix} I_2 & 0_2 \end{bmatrix} for the RFID reader,
C_{g_k} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} for the gyroscope,
C_{c_k} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} for the camera,    (11)

where 0_2 is a two-element, all-zero column vector and I_2 is the 2 × 2 identity matrix.
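The case analysis in (7)-(11) amounts to stacking rows of C_k according to which sensors reported data at time k. A hedged Python sketch (our own helper, not part of the paper's implementation):

```python
# Building blocks from eq. (11), as plain row lists.
C_RFID = [[1, 0, 0],
          [0, 1, 0]]    # C_r: the tag reading measures (x, y)
C_GYRO = [[0, 0, 1]]    # C_g: the gyroscope measures theta
C_CAM  = [[0, 0, 1]]    # C_c: the camera measures theta

def build_output_matrix(rfid_read, gyro_read, camera_read):
    """Stack the rows of C_k for the measurements available at time k,
    reproducing the four cases (7)-(10)."""
    rows = []
    if rfid_read:
        rows += C_RFID
    if gyro_read:
        rows += C_GYRO
    if camera_read:
        rows += C_CAM
    return rows
```

For instance, gyroscope-only yields the single row [0, 0, 1] of (7), while all three sensors together yield the 4 × 3 matrix of (10).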

IV. EXTENDED KALMAN FILTER (EKF)

As is well known, for a given dynamic system the Kalman filter is the optimal minimum mean-square-error (MMSE) state estimator when process and measurement noises are white and Gaussian. In this paper, (4) represents a nonlinear system. Therefore, an extended Kalman filter (EKF) must be used to combine the data collected from encoders, RFID tags, gyroscope and Kinect. In general, an EKF is able to combine the data collected from various sensors by weighing the variances and covariances associated with different uncertainty contributions. Since external position and orientation data are not always available and some uncertainty contributions are not stationary, the EKF described in the following can hardly be considered the optimal state estimator. Nonetheless, it is still the best linear MSE estimator [33]. Potentially, the system model (4) and the related EKF could be extended to include inputs describing the effect of systematic measurement uncertainty contributions (e.g. gyroscope and encoder offsets). However, in the rest of this paper such terms will be assumed to have been preliminarily estimated and compensated through sensor calibration, as is customary in this kind of application. Thus, the elements of ε_k and ζ_k can be supposed to have a negligible mean. The specific problem of dynamic systematic offset estimation using the same EKF will instead be investigated in future work.

In the following, the right and left wheel encoders will be assumed to be nominally identical, but affected by uncorrelated noise terms. Due to the superimposition of various random uncertainty contributions (e.g. limited resolution, mechanical vibrations), we can reasonably assume that ε_{r_k} ∼ N(0, σ²_{r_k}) and ε_{l_k} ∼ N(0, σ²_{l_k}). Also, the variances σ²_{r_k} and σ²_{l_k} of both encoders change over time, as they depend on the wheel displacements in the time interval between kT_s and (k+1)T_s. As a consequence, the prediction equation of the EKF is given by

ŝ⁺_{k+1} = ŝ_k + f(ŝ_k)∆Φ_k,    (12)

where ŝ_k denotes the state estimated at time kT_s and the superscript ⁺ stands for prediction [33]. Moreover, by linearizing (4), the state prediction covariance matrix results from

P⁺_{k+1} = F_k P_k F_k^T + G_k Q_k G_k^T,    (13)

where F_k is the Jacobian of (12) with respect to s_k computed at [ŝ_k, ∆Φ_k], i.e.,

F_k = I_3 + \frac{\partial [f(s_k)\Delta\Phi_k]}{\partial s_k}\Big|_{[\hat{s}_k,\,\Delta\Phi_k]} = \begin{bmatrix} 1 & 0 & -\frac{r}{2}(\Delta\Phi_{r_k} + \Delta\Phi_{l_k})\sin\hat{\theta}_k \\ 0 & 1 & \frac{r}{2}(\Delta\Phi_{r_k} + \Delta\Phi_{l_k})\cos\hat{\theta}_k \\ 0 & 0 & 1 \end{bmatrix},    (14)

G_k is the Jacobian of (12) with respect to ∆Φ_{r_k} and ∆Φ_{l_k} computed at [ŝ_k, ∆Φ_k], i.e.,

G_k = \frac{\partial [f(s_k)\Delta\Phi_k]}{\partial \Delta\Phi_k}\Big|_{[\hat{s}_k,\,\Delta\Phi_k]} = \begin{bmatrix} \frac{r}{2}\cos\hat{\theta}_k & \frac{r}{2}\cos\hat{\theta}_k \\ \frac{r}{2}\sin\hat{\theta}_k & \frac{r}{2}\sin\hat{\theta}_k \\ \frac{r}{d} & -\frac{r}{d} \end{bmatrix},    (15)

and Q_k is the covariance matrix associated with ε_k. Since the encoder noises are weakly correlated in practice, Q_k is a 2 × 2 diagonal matrix with σ²_{r_k} and σ²_{l_k} on the main diagonal. The Kalman gain is given by

K_{k+1} = P⁺_{k+1} C_{k+1}^T (C_{k+1} P⁺_{k+1} C_{k+1}^T + D_{k+1})^{−1},    (16)

where C_{k+1} is defined in (7)-(10) depending on the set of available measurement data at time (k+1)T_s, as described in Section III. Similarly, the covariance matrix D_{k+1} associated with ζ_{k+1} is defined as follows:

• Gyroscope only:

D_{k+1} = \sigma^2_{\theta g_{k+1}};

• Gyroscope + RFID:

D_{k+1} = \begin{bmatrix} \sigma^2_{x_{k+1}} & \rho_{k+1}\sigma_{x_{k+1}}\sigma_{y_{k+1}} & 0 \\ \rho_{k+1}\sigma_{x_{k+1}}\sigma_{y_{k+1}} & \sigma^2_{y_{k+1}} & 0 \\ 0 & 0 & \sigma^2_{\theta g_{k+1}} \end{bmatrix};

• Gyroscope + camera:

D_{k+1} = \begin{bmatrix} \sigma^2_{\theta g_{k+1}} & 0 \\ 0 & \sigma^2_{\theta c_{k+1}} \end{bmatrix};

• Gyroscope + RFID + camera:

D_{k+1} = \begin{bmatrix} \sigma^2_{x_{k+1}} & \rho_{k+1}\sigma_{x_{k+1}}\sigma_{y_{k+1}} & 0 & 0 \\ \rho_{k+1}\sigma_{x_{k+1}}\sigma_{y_{k+1}} & \sigma^2_{y_{k+1}} & 0 & 0 \\ 0 & 0 & \sigma^2_{\theta g_{k+1}} & 0 \\ 0 & 0 & 0 & \sigma^2_{\theta c_{k+1}} \end{bmatrix}.

In the matrices listed above, ρ_{k+1} is the correlation coefficient between ζ_{x_{k+1}} and ζ_{y_{k+1}}, while σ²_{x_{k+1}} and σ²_{y_{k+1}} are the corresponding variances. Similarly, σ²_{θg_{k+1}} and σ²_{θc_{k+1}} are the variances of ζ_{gθ_{k+1}} and ζ_{cθ_{k+1}}, respectively.
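The four covariance matrices above differ only in which blocks are present, so they can be assembled programmatically. A hedged NumPy sketch (our helper names, not from the paper's software):

```python
import numpy as np

def build_noise_cov(rfid, camera, var_x, var_y, rho, var_gyro, var_cam):
    """Assemble D_{k+1} for the available sensors; the gyroscope is
    assumed always present, as in the four cases listed above."""
    blocks = []
    if rfid:
        # Possibly correlated planar RFID noise (rho = corr. coefficient).
        cov_xy = rho * np.sqrt(var_x * var_y)
        blocks.append(np.array([[var_x, cov_xy],
                                [cov_xy, var_y]]))
    blocks.append(np.array([[var_gyro]]))      # gyroscope heading noise
    if camera:
        blocks.append(np.array([[var_cam]]))   # camera heading noise
    # Block-diagonal assembly: planar and angular noises are uncorrelated.
    n = sum(b.shape[0] for b in blocks)
    D = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        D[i:i + m, i:i + m] = b
        i += m
    return D
```

With ρ_{k+1} = 0 and σ²_x = σ²_y = R²/4, as discussed below for the RFID reading model, the planar block reduces to a scaled identity.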

After the Kalman gain is computed, the estimated system state and the corresponding covariance matrix can be updated using the measurement data record o_{k+1} collected at time (k+1)T_s, i.e.

ŝ_{k+1} = ŝ⁺_{k+1} + K_{k+1}(o_{k+1} − C_{k+1} ŝ⁺_{k+1}),
P_{k+1} = (I_3 − K_{k+1} C_{k+1}) P⁺_{k+1}.    (17)
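Putting (12)-(17) together, one EKF iteration can be sketched in Python with NumPy (an illustrative reimplementation under our own naming, not the authors' code):

```python
import numpy as np

def ekf_predict(s_hat, P, dphi, r, d, Q):
    """Prediction step, eqs. (12)-(15)."""
    x, y, theta = s_hat
    dpr, dpl = dphi
    # Eq. (12): propagate the estimate through the dead-reckoning model.
    s_pred = np.array([x + (r / 2) * np.cos(theta) * (dpr + dpl),
                       y + (r / 2) * np.sin(theta) * (dpr + dpl),
                       theta + (r / d) * (dpr - dpl)])
    # Eq. (14): Jacobian with respect to the state.
    F = np.array([[1.0, 0.0, -(r / 2) * (dpr + dpl) * np.sin(theta)],
                  [0.0, 1.0,  (r / 2) * (dpr + dpl) * np.cos(theta)],
                  [0.0, 0.0, 1.0]])
    # Eq. (15): Jacobian with respect to the wheel displacements.
    G = np.array([[(r / 2) * np.cos(theta), (r / 2) * np.cos(theta)],
                  [(r / 2) * np.sin(theta), (r / 2) * np.sin(theta)],
                  [r / d, -r / d]])
    # Eq. (13): covariance prediction.
    return s_pred, F @ P @ F.T + G @ Q @ G.T

def ekf_update(s_pred, P_pred, o, C, D):
    """Update step, eqs. (16)-(17), for whatever sensors are available."""
    C = np.atleast_2d(np.asarray(C, dtype=float))
    D = np.atleast_2d(np.asarray(D, dtype=float))
    o = np.atleast_1d(np.asarray(o, dtype=float))
    S = C @ P_pred @ C.T + D                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # eq. (16): Kalman gain
    s_new = s_pred + K @ (o - C @ s_pred)     # eq. (17): state update
    P_new = (np.eye(len(s_pred)) - K @ C) @ P_pred
    return s_new, P_new
```

For example, a near-noiseless gyroscope reading via `ekf_update(s_pred, P_pred, [0.3], [[0, 0, 1]], [[1e-6]])` pulls the predicted heading almost entirely onto the measured 0.3 rad while leaving the planar position untouched, since the gain is zero in the rows not observed by C.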

The position measurement uncertainty associated with RFID tag reading can be modeled by a uniform distribution with circular symmetry around every tag and with radius equal to the reading range R. This assumption is reasonable if the short-range RFID reader installed on the rollator is quite isotropic. In this case the probability of reading a tag is approximately the same at every point of the circle centered on the tag, regardless of rollator speed or movement direction. On the contrary, the probability of reading a tag when the rollator is farther than R is zero (since the passive tag is off and cannot be turned on). Under these assumptions, it can be proved that the mean values of the position errors along axes Xw and Yw are zero, while the respective variances are σ²_{x_{k+1}} = σ²_{y_{k+1}} = R²/4, with ρ_{k+1} = 0. Such expressions show that the uncertainties associated with the coordinates measured along axes Xw and Yw are uncorrelated and stationary.

On the other hand, the variance σ²_{θg_{k+1}} of the orientation data measured with the gyroscope is typically non-stationary and monotonically increasing. In fact, it is a function of both angular speed and time because of the dead-reckoning effects. A detailed description of gyroscope measurement uncertainty is outside the scope of this paper. To a first approximation, it is reasonable to assume that ζ_{gθ_k} ∼ N(0, σ²_{θg_k}), with σ_{θg_k} growing linearly as a function of time in the case of constant motion. This assumption has also been verified experimentally, as shown in Section V-B.

The case of the orientation measurements based on the Kinect is instead quite different. In fact, if a marker is within the field of view of the Kinect, the uncertainty affecting the measured angle depends mostly on the relative orientation between the marker and the Kinect sensor, on the distance at which the marker is recognized, and on the resolution of the collected image. Since such factors are independent of each other and the orientation measurement data may differ considerably at different distances from a given marker, we can reasonably assume that ζ_{cθ_k} ∼ N(0, σ²_{θc_k}). It is worth noticing that σ_{θc_k} is independent of time. Therefore, the camera-based orientation values can also be used to reset the uncertainty accumulated by the gyroscope, thus improving short-term performance.

V. PERFORMANCE ANALYSIS AND SETUP OPTIMIZATION

As mentioned in the Introduction, the objective of the paper is to evaluate the performance of the proposed technique with


Fig. 3. Examples of rollator trajectories used for Monte Carlo simulations. The map refers to the Department of Information Engineering and Computer Science of the University of Trento. Symbols 1S, 2S and 3S denote the starting positions, whereas 1F, 2F and 3F represent the finishing points. RFID tags and markers for the camera are represented with * and ◦, respectively.

different configuration settings in order to find a good trade-off between deployment costs and accuracy, while keeping the computational burden due to image processing quite low. To this aim, several multi-parametric simulations have been performed by changing those quantities that most seriously influence system complexity, i.e. DR (which affects the RFID grid granularity and, consequently, the costs of a possible setup) and DM (which has a relevant impact on accuracy, computational load and setup costs). In order to significantly extend the results reported in [27], multiple random but realistic trajectories have been generated. The chosen environment is the Department of Information Engineering and Computer Science of the University of Trento, Trento, Italy. The details of the algorithm used to generate the testing trajectories are reported in Section V-A. The sensor parameter values used in simulations have been estimated by following the procedures described in Section V-B. Finally, the results associated with different possible configurations are reported in Section V-C.

A. Trajectory generation

The trajectories used for Monte Carlo simulations are generated by a path planning algorithm that can be applied to any map of a real indoor environment. The geometric entities (e.g. walls, partitions, furniture) in the map are described as closed polylines. An algorithm based on approximate cell decomposition is applied to divide the map into rectangular cells of different sizes [34]. By intersecting these cells with the closed polylines, every cell can be classified into one of three possible types: fully occupied, partially occupied or empty. The empty cells represent those areas which are accessible to the rollator. In the map of the Department there are 7006 empty cells. A graph is constructed from these empty cells, using the positions of the cell centers as nodes. Two nodes in the graph are connected if the corresponding cells are adjacent, and the weight of the edge between two nodes is the Euclidean distance between the cell centers. Dijkstra's shortest path algorithm is applied to the graph to find a sequence of cells that connects the initial position of the rollator with the finishing point. Using the coordinates of the centers of these cells, a path consisting of line segments linking pairs of subsequent cell centers is constructed. The resulting piecewise linear trajectory provides a good approximation of the path that the c-Walker has to follow. In order to provide a smoother transition between segments, a portion of every pair of subsequent segments is replaced by arcs of a circle. Fig. 3 shows three possible rollator trajectories on the map. Symbols 1S, 2S and 3S denote the starting positions, whereas 1F, 2F and 3F represent the corresponding finishing points. RFID tags and visual markers are highlighted by symbols * and ◦, respectively.
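The cell-graph construction and shortest-path search described above can be sketched as follows. For illustration only, the snippet assumes unit square cells with 4-connectivity (so every edge weight is 1); the paper's implementation uses rectangular cells of different sizes with Euclidean edge weights, and the arc-smoothing step is omitted here:

```python
import heapq

def dijkstra_path(free_cells, start, goal):
    """Dijkstra's algorithm over the graph of empty cells.
    free_cells: set of (x, y) cell centers reachable by the rollator."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if v in free_cells and d + 1.0 < dist.get(v, float("inf")):
                dist[v] = d + 1.0
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    if goal not in dist:
        return None  # goal unreachable
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy 6x4 map with a partial wall at x = 3 (open only at y = 3):
free = {(x, y) for x in range(6) for y in range(4)} - {(3, 0), (3, 1), (3, 2)}
path = dijkstra_path(free, (0, 0), (5, 0))
```

On this toy map the path is forced through the single opening at (3, 3); chaining the cell centers of the returned sequence yields the piecewise linear trajectory that the smoothing step then rounds off.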

B. Sensor characterization

In order to evaluate position tracking accuracy through simulations in realistic conditions, some of the sensors that are expected to be used on the c-Walker rollator have been tested in the field. At first, two E5-series optical incremental encoders have been installed on the rear wheels of a kart emulating the rollator prototype. Several data records have been collected with the kart moving along both straight and circular paths. The distance values read from the encoders have been measured at known positions. The reference points were marked by


Fig. 4. Mean values (after calibration) and standard deviations of the left-wheel and right-wheel encoder errors as a function of the wheels' angular displacement (rad). The markers refer to the experimental results collected in 30 experiments over a straight-line path and in 10 further experiments over a circular path. Dotted and continuous lines result from least-squares linear fitting.

some stickers put on the floor of our laboratory. The encoders' performance is evaluated according to the following procedure:

• In all experiments the differences between the angular displacements measured by both right and left wheel encoders and the corresponding values estimated at every reference point are calculated.
• The resulting error histograms, as well as their mean and standard deviation, are computed.
• The mean and standard deviation patterns are plotted as a function of the total angular displacements of both wheels.
• Given that the mean errors make the measured distances drift away from the actual values, such systematic contributions have been compensated through a preliminary calibration.
• Finally, mean and standard deviation have been interpolated through least-squares linear fitting, as a function of the total angular displacement, as shown in Fig. 4.

In running the Monte Carlo simulations we have assumed that the encoders are calibrated and that the residual offsets are zero. The standard deviation values are used instead to build Qk in (13).
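The least-squares linear fitting applied to the mean and standard deviation patterns amounts to a one-variable ordinary least-squares fit, which can be written in a few lines; the (displacement, standard deviation) pairs below are made-up placeholders, not the measured data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx        # slope
    b = my - a * mx      # intercept
    return a, b

# Hypothetical (total angular displacement, error std) pairs for illustration:
disp = [50.0, 100.0, 200.0, 300.0, 400.0]
std = [0.9, 1.7, 3.4, 5.0, 6.7]
a, b = linear_fit(disp, std)
```

The same routine serves both sensors: fitted against displacement for the encoders (Fig. 4) and against time for the gyroscope (Fig. 5).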

A similar approach has been followed to test the performance of a triaxial Invensense IMU3000 gyroscope. In particular, several sets of data have been collected using a custom embedded platform placed on top of the rollator, pushed repeatedly over straight and rectangular paths. It is worth remembering that the gyroscope measures angular velocity, not orientation; the latter is obtained by integrating the angular velocity data over time. The main steps of the testing procedure are listed below.

• In all experiments the differences between measured and actual orientation angles with respect to Xw have been computed at the same time instants.


Fig. 5. Mean values (after calibration) and standard deviations of the orientation measured through a gyroscope as a function of time. The markers refer to the results extrapolated from the randomization of multiple experimental data over elementary trajectories. Dotted and continuous lines result instead from least-squares linear fitting.

• The measurement errors have been used to build different histograms related to homogeneous trajectories.
• The mean and standard deviation patterns of such histograms have been plotted as a function of time.
• As previously discussed for the encoders, the systematic contributions coming from the mean error have been compensated through a preliminary sensor calibration.
• After calibration, 200 Monte Carlo simulations based on the combination of the results of the elementary paths have been performed over different random trajectories to extrapolate a more trustworthy histogram of gyroscope uncertainty as a function of time in multiple scenarios (which could not be easily determined experimentally).
• Both mean values and standard deviations have been estimated through least-squares linear fitting, as shown in Fig. 5.

Again, in running the Monte Carlo simulations, we have assumed that the gyroscope is calibrated and that the residual offset is zero. The standard deviation values are used instead to compute σ²θgk+1.

The performance of the RFID reader for localization purposes has been analyzed using a high-frequency (HF) ISC.MR101-USB evaluation kit by Feig Electronics operating at 13.56 MHz. The reader is designed to detect passive tags compliant with the ISO 15693 (Vicinity Cards) and ISO 18000-3 (Smart Labels) standards. The main steps of the testing procedure are listed below.

• A tag is placed on the floor and the RFID reader is moved radially towards the tag and backwards in eight directions (i.e. north, north-east, east, south-east, south, south-west, west, north-west), while keeping the antenna parallel to the ground.

• The reading range of the device in every direction is estimated by repeatedly measuring the distances at which


the tag is detected or stops being detected.

• After checking the omnidirectional behavior of the reader, a histogram of the measured range values is built. The resulting mean value of R is used to compute σ²xk+1 and σ²yk+1, as described in Section IV.

Since the algorithm for marker recognition is still under development, the Kinect-based orientation measurement accuracy could not be evaluated in the field. Therefore, it was deduced from information and data available in the literature [35], [36].

C. Simulation results

In order to evaluate the performance of the proposed technique in different configurations and to find a good trade-off between accuracy and cost, the position tracking algorithm has been applied to 200 different paths generated as described in Section V-A. All the trajectories have a duration of 180 s, but the total traveled distance differs because the average velocity of the user varies from case to case. The sensor parameters used in all the simulations result from the characterization procedures described in Section V-B and are shortly summarized in the following:

• Encoders and gyroscope sample time: Ts = 4 ms;
• Encoders standard deviation: σrk = σlk = 0.066 ΔΦrk + 0.006 rad;
• Standard deviation of gyroscope-based orientation measurements: σθgk = 0.0098 kTs rad;
• Standard deviation of camera-based orientation measurements: σθck = π/180 rad, assuming a range between 0.8 m and 3.5 m with a horizontal field of view of 62◦;
• RFID reader range: R = 25 cm with a tolerance of ±5 cm.
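For quick reference, the two displacement- and time-dependent noise models in the list above can be evaluated directly; the helper names below are ours, but the coefficients are the ones reported in the list:

```python
Ts = 0.004  # encoder/gyroscope sample time (s)

def sigma_encoder(dphi):
    """Encoder standard deviation (rad) as a function of the wheel's
    angular displacement dphi (rad), per the fitted model above."""
    return 0.066 * dphi + 0.006

def sigma_gyro(t):
    """Gyroscope-based orientation standard deviation (rad) after t
    seconds (t = k*Ts), growing linearly with time."""
    return 0.0098 * t

print(sigma_encoder(1.0))  # 0.066*1 + 0.006 ≈ 0.072 rad
print(sigma_gyro(60.0))    # 0.0098*60 ≈ 0.588 rad, consistent with Fig. 5
```

The second number illustrates why the camera corrections matter: after only one minute without an absolute orientation update, the gyroscope-based heading uncertainty already exceeds half a radian.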

Fig. 6 shows the estimation errors of state variables x (a), y (b) and θ (c) as a function of time for trajectory number 2 (i.e., between positions 2S and 2F) shown in Fig. 3. The distances between pairs of RFID tags and pairs of visual markers are equal to DR = 3 m and DM = 8 m, respectively. Observe that the state estimation errors lie safely within ±εs, where εs = [1 m, 1 m, 0.2 rad]. Similar results can be derived for all the other trajectories in the set.

Fig. 7 shows the average root mean square (RMS) errors of the Euclidean distances between measured and actual positions as a function of DR ∈ {1 m, 1.5 m, 2 m, 2.5 m, 3 m, 3.5 m, 4 m} and DM ∈ {6 m, 8 m, 10 m, 12 m, 14 m, 16 m, 18 m, 20 m} for all the 200 trajectories. The values of DR and DM are purposely chosen to be larger than the maximum ranges of the RFID reader and the Kinect, respectively, so that multiple tags or markers are never detected at the same time. A tolerance of ±25 cm affects marker and tag placement on the map due to the physical constraints of the environment (e.g. doors or passages). Each point of the surface in Fig. 7 is obtained as follows. At first, the RMS values of the 200 Euclidean distances between measured and actual positions are computed at every simulation time.

Fig. 6. Estimation errors of state variables x (a), y (b) and θ (c) as a function of time for trajectory no. 2 shown in Fig. 3. The distances between pairs of RFID tags and pairs of visual markers are equal to DR = 3 m and DM = 8 m, respectively.

Then the RMS errors related to the last 60 s of simulation are averaged, because the EKF reaches the steady state after 120 s in all conditions. The points of the surface confirm that the positioning uncertainty tends to grow when either DM or DR increases, as expected. Fig. 7 also shows that the positioning standard uncertainty is below 1 m for DR ≤ 1.5 m (regardless of the value of DM) and, dually, for DM ≤ 8 m when DR ≤ 3.5 m. In such conditions the RMS errors lie between 0.5 m and 0.9 m in most cases. However, the total number of markers and tags to be deployed in the chosen environment differs considerably. In fact, it ranges from about 295 items when DR = 3 m and DM = 8 m to more than 3000 items when DR = 1 m and DM = 6 m. Evidently, in terms of trade-off between cost and accuracy the former solution is preferable and easier to deploy. Fig. 8 depicts the box-and-whiskers plots of the estimation errors of state variables x (a), y (b) and θ (c), respectively, over 180 s in different configurations of interest (namely, for various pairs of DR and DM values). Observe that when DR = 3.5 m and DM = 8 m, the errors along Xw and Yw are within ±1 m
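The two-step averaging used to build each point of the surface (per-time-step RMS across trajectories, then a mean over the last 60 s) can be sketched as follows, with toy error sequences in place of the simulated ones:

```python
import math

def steady_state_rms(errors, dt, tail_s):
    """errors: one position-error sequence per trajectory (all the same
    length, sampled every dt seconds). Returns the mean, over the last
    tail_s seconds, of the per-time-step RMS computed across trajectories."""
    n_steps = len(errors[0])
    rms = [
        math.sqrt(sum(e[k] ** 2 for e in errors) / len(errors))
        for k in range(n_steps)
    ]
    tail = int(tail_s / dt)          # number of steady-state samples kept
    return sum(rms[-tail:]) / tail

# Two toy trajectories with constant errors, 180 s sampled at 1 s:
traj_a = [0.3] * 180
traj_b = [0.4] * 180
print(steady_state_rms([traj_a, traj_b], dt=1.0, tail_s=60.0))  # ≈ 0.354 m
```

Discarding the first 120 s matches the observation that the EKF reaches steady state by then, so the average is not biased by the initial transient.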


Fig. 7. Steady-state RMS errors of the Euclidean distances between measured and actual positions computed over 200 trajectories as a function of DR and DM.

with almost 90% probability. Also, the orientation errors are much smaller than in other cases due to quite frequent updates from the Kinect. On the other hand, the configurations in the center of the surface (e.g. when DR = 2.5 m and DM = 12 m) tend to be quite bad in all respects. Indeed, not only are the orientation corrections not frequent enough to keep the x and y position errors below 1 m, but such configurations are also quite expensive in terms of the number of markers and tags.

VI. CONCLUSIONS

The analysis described in this paper provides essential guidelines for the design of a localization and position tracking system to be implemented on a smart rollator called c-Walker. The target positioning uncertainty should be steadily below 1 m. The results reported in the paper can be summarized as follows:

• Encoder accuracy is essential to achieve good performance. As a rule of thumb, the encoder resolution should be at least 0.1 degrees. Also, possible systematic offsets must be properly estimated and compensated (e.g. through a preliminary calibration).

• The proposed technique is able to meet the specified accuracy requirements (to be used also for long-term trajectory planning), even when the distance between the RFID tags embedded in the floor is one order of magnitude larger than the antenna reading range. A distance of about 3.5 m between tags looks adequate. This result is very important because it drastically reduces deployment costs and complexity.

• If no platform orientation measurements are available, localization accuracy tends to degrade quite quickly, even if a good gyroscope is used. A front Kinect collecting frames every 2 s and detecting suitable visual markers on the walls at a distance of about 8 m from each other

Fig. 8. Box-and-whiskers plots representing the estimation errors of state variables x (a), y (b) and θ (c) for different pairs of DR and DM values (from left to right: DR = 1 m/DM = 20 m, 1.5 m/16 m, 2 m/14 m, 2.5 m/12 m, 3 m/10 m, 3.5 m/8 m, 4 m/6 m).

can effectively solve this problem, while keeping the computational burden within reasonable limits.

VII. ACKNOWLEDGMENT

The research leading to these results has received funding from the European Union Seventh Framework Programme


(FP7/2007-2013) under grant agreement n◦ ICT-2011-288917, “DALi - Devices for Assisted Living”.

REFERENCES

[1] “The DALi Project,” http://www.ict-dali.eu/dali, November 2011.
[2] Ching-Hsien Hsu and Chia-Hao Yu, “An accelerometer based approach for indoor localization,” in Proc. Symposia and Workshops on Ubiquitous, Autonomic and Trusted Computing (UIC-ATC), Brisbane, Australia, Jul. 2009, pp. 223–227.
[3] M. D. Cecco, “Sensor fusion of inertial-odometric navigation as a function of the actual manoeuvres of autonomous guided vehicles,” Measurement Science and Technology, vol. 14, no. 5, pp. 643–653, May 2003.
[4] S. Se, D. Lowe, and J. Little, “Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks,” The International Journal of Robotics Research, vol. 21, no. 8, pp. 735–758, Aug. 2002.
[5] W. D. Rencken, “Concurrent localisation and map building for mobile robots using ultrasonic sensors,” in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), vol. 3, Yokohama, Japan, Jul. 1993, pp. 2192–2197.
[6] K. Lingemann, A. Nuchter, J. Hertzberg, and H. Surmann, “High-speed laser localization for mobile robots,” Robotics and Autonomous Systems, vol. 51, no. 4, pp. 275–296, Jun. 2005.
[7] Y. Touati, H. Aoudia, and A. Ali-Cherif, “Intelligent wheelchair localization in wireless sensor network environment: a fuzzy logic approach,” in Proc. IEEE Int. Conf. on Intelligent Systems (IS), London, UK, Jul. 2010, pp. 408–413.
[8] K. Whitehouse, C. Karlof, and D. Culler, “A practical evaluation of radio signal strength for ranging-based localization,” SIGMOBILE Mob. Comput. Commun. Rev., vol. 11, no. 1, pp. 41–52, Jan. 2007.
[9] P. Pivato, L. Palopoli, and D. Petri, “Accuracy of RSS-based centroid localization algorithms in an indoor environment,” IEEE Trans. on Instrumentation and Measurement, vol. 60, no. 10, pp. 3451–3460, Oct. 2011.
[10] I. Guvenc and Chia-Chin Chong, “A survey on TOA based wireless localization and NLOS mitigation techniques,” IEEE Communications Surveys & Tutorials, vol. 11, no. 3, pp. 107–124, Third Quarter 2009.
[11] A. De Angelis, M. Dionigi, A. Moschitta, and P. Carbone, “A low-cost ultra-wideband indoor ranging system,” IEEE Trans. on Instrumentation and Measurement, vol. 58, no. 12, pp. 3935–3942, Dec. 2009.
[12] D. Macii, A. Colombo, P. Pivato, and D. Fontanelli, “A data fusion technique for wireless ranging performance improvement,” IEEE Trans. on Instrumentation and Measurement, vol. 62, no. 1, pp. 27–37, Jan. 2013.
[13] Hui Liu, H. Darabi, P. Banerjee, and Jing Liu, “Survey of wireless indoor positioning techniques and systems,” IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 6, pp. 1067–1080, Nov. 2007.
[14] Guang-Yao Jin, Xiao-Yi Lu, and Myong-Soon Park, “An indoor localization mechanism using active RFID tag,” in Proc. IEEE Int. Conf. on Sensor Networks, Ubiquitous, and Trustworthy Computing, Taichung, Taiwan, Jun. 2006, pp. 40–43.
[15] S. Jain, A. Sabharwal, and S. Chandra, “An improvised localization scheme using active RFID for accurate tracking in smart homes,” in Proc. Int. Conf. on Computer Modelling and Simulation (UKSim), Cambridge, UK, Mar. 2010, pp. 51–56.
[16] A. A. Nazari Shirehjini, A. Yassine, and S. Shirmohammadi, “An RFID-based position and orientation measurement system for mobile objects in intelligent environments,” IEEE Trans. on Instrumentation and Measurement, vol. 61, no. 6, pp. 1664–1675, Jun. 2012.
[17] S. Park and S. Hashimoto, “Autonomous mobile robot navigation using passive RFID in indoor environment,” IEEE Trans. on Industrial Electronics, vol. 56, no. 7, pp. 2366–2373, Jul. 2009.
[18] V. Kulyukin, A. Kutiyanawala, E. LoPresti, J. Matthews, and R. Simpson, “iWalker: Toward a rollator-mounted wayfinding system for the elderly,” in Proc. IEEE Int. Conf. on RFID, Las Vegas, NV, USA, Apr. 2008, pp. 303–311.
[19] D. Macii, A. Boni, M. De Cecco, and D. Petri, “Tutorial 14: multisensor data fusion,” IEEE Instrumentation & Measurement Magazine, vol. 11, no. 3, pp. 24–33, Jun. 2008.
[20] L. Klingbeil, M. Romanovas, P. Schneider, M. Traechtler, and Y. Manoli, “A modular and mobile system for indoor localization,” in Proc. Int. Conf. on Indoor Positioning and Indoor Navigation (IPIN), Zurich, Switzerland, Sep. 2010, pp. 1–10.
[21] A. Colombo, D. Fontanelli, D. Macii, and L. Palopoli, “A wearable embedded inertial platform with wireless connectivity for indoor position tracking,” in Proc. IEEE Int. Instrumentation and Measurement Technology Conference (I2MTC), Hangzhou, China, May 2011, pp. 1–6.
[22] Byoung-Suk Choi, Joon-Woo Lee, and Ju-Jang Lee, “Localization and map-building of mobile robot based on RFID sensor fusion system,” in Proc. Int. Conf. on Industrial Informatics (INDIN), Daejeon, Korea, Jul. 2008, pp. 412–417.
[23] Byoung-Suk Choi and Ju-Jang Lee, “Sensor network based localization algorithm using fusion sensor-agent for indoor service robot,” IEEE Trans. on Consumer Electronics, vol. 56, no. 3, pp. 1457–1465, Aug. 2010.
[24] Byoung-Suk Choi, Joon-Woo Lee, Ju-Jang Lee, and Kyoung-Taik Park, “A hierarchical algorithm for indoor mobile robot localization using RFID sensor fusion,” IEEE Trans. on Industrial Electronics, vol. 58, no. 6, pp. 2226–2235, Jun. 2011.
[25] M. Ibraheem, “Gyroscope-enhanced dead reckoning localization system for an intelligent walker,” in Proc. Int. Conf. on Information Networking and Automation (ICINA), vol. 1, Kunming, China, Oct. 2010, pp. V1-67–V1-72.
[26] H. Wang, C.-U. Kang, T. Ishimatsu, and T. Ochiai, “Auto-navigation of a wheelchair,” Artificial Life and Robotics, vol. 1, no. 3, pp. 141–146, Sep. 1997.
[27] P. Nazemzadeh, D. Fontanelli, and D. Macii, “An indoor position tracking technique based on data fusion for ambient assisted living,” in Proc. IEEE Int. Conf. on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Milan, Italy, Jul. 2013, pp. 7–12.
[28] F. Evennou and F. Marx, “Advanced integration of WiFi and inertial navigation systems for indoor mobile positioning,” EURASIP Journal on Applied Signal Processing, vol. 2006, pp. 164–164, Jan. 2006.
[29] P. Jensfelt, D. Kragic, J. Folkesson, and M. Bjorkman, “A framework for vision based bearing only 3D SLAM,” in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), Orlando, FL, USA, May 2006, pp. 1944–1950.
[30] T. Lemaire, S. Lacroix, and J. Sola, “A practical 3D bearing-only SLAM algorithm,” in Proc. Int. Conf. on Intelligent Robots and Systems (IROS), Edmonton, Alberta, Canada, Aug. 2005, pp. 2449–2454.
[31] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, “RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments,” The International Journal of Robotics Research, vol. 31, no. 5, pp. 647–663, Apr. 2012.
[32] D. Correa, D. Sciotti, M. Prado, D. Sales, D. Wolf, and F. Osorio, “Mobile robots navigation in indoor environments using Kinect sensor,” in Proc. 2nd Brazilian Conference on Critical Embedded Systems (CBSEC), Campinas, Sao Paulo, Brazil, May 2012, pp. 36–41.
[33] Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation – Theory, Algorithms and Software. John Wiley and Sons, 2001.
[34] R. A. Brooks and T. Lozano-Perez, “A subdivision algorithm in configuration space for findpath with rotation,” IEEE Trans. on Systems, Man and Cybernetics, vol. SMC-15, no. 2, pp. 224–233, Mar.–Apr. 1985.
[35] S. Obdrzalek, G. Kurillo, F. Ofli, R. Bajcsy, E. Seto, H. Jimison, and M. Pavel, “Accuracy and robustness of Kinect pose estimation in the context of coaching of elderly population,” in Proc. 2012 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA, Aug.–Sep. 2012, pp. 1188–1193.
[36] J. Han, L. Shao, D. Xu, and J. Shotton, “Enhanced computer vision with Microsoft Kinect sensor: A review,” IEEE Trans. on Cybernetics, vol. 43, no. 5, pp. 1318–1334, Oct. 2013.