
Virtual Testbed for Indoor Localization
Stephan Adler, Simon Schmitt, Heiko Will, Thomas Hillebrandt and Marcel Kyas

Freie Universität Berlin, AG Computer Systems & Telematics

Berlin, Germany
{stephan.adler, simon.schmitt, heiko.will, thomas.hillebrandt, marcel.kyas}@fu-berlin.de

Abstract—We present a novel, easy to use virtual testbed for the evaluation of localization algorithms. Our testbed enables researchers to easily run tests on a huge body of real world range-based indoor localization data. The data consists of a dense grid of reference points belonging to one or multiple maps. Each point consists of a ground truth value and an arbitrary number of ranging values. Each ranging value belongs to a certain anchor node at a fixed position. The reference data is gathered by a robot which carries (arbitrary) localization devices. The robot stores its location as a ground truth value and simultaneously uses the localization device to measure the distance to a set of anchors in range. The ground truth value is provided by an optical reference system mounted on the robot. It is possible to define paths through a map using a web interface. Our system uses the experimentally gathered reference points to deliver a dataset of ranging values for the chosen path. The researcher can thus run a virtual experiment on their own and adjust several parameters. Our system enables other researchers to run reproducible experiments on real world data. The expensive and complex deployment of a dedicated infrastructure and experimental setup can be avoided, as well as the error-prone task of modelling a localization system and running a simulation. Our system will be open to the research community and will help to develop a better understanding of the field of range based indoor localization.

I. INTRODUCTION

In the field of range based indoor localization researchers often deal with a major problem: How do we test the algorithms or systems we have designed? Setting up experiments for an indoor localization scenario requires a lot of effort and a lot of hardware. As radio based localization hardware is rather expensive and not a mass product, only a few working groups were able to carry out large localization experiments. As a workaround it is possible to use standard radio transceivers as localization devices [1] or to simulate the behavior of radio based ranging devices. However, the standard radio transceiver approach has the drawback of huge ranging errors and a slow sample rate. The simulation approach has a big drawback because it is hard to find a channel and error model which represents the realistic behavior of radio based transmissions. These issues cause a problem within the academic community, as it is hard to find reliable and realistic models for simulation and even harder to set up experiments to evaluate and reproduce the results of research.

To solve this problem we would need an easy to use simulation system with realistic assumptions or a cheap and reproducible standard experiment setup. As stated above, such a system does not exist yet. Therefore we designed a system where researchers can run virtual experiments on a big set of experimentally gathered reference data. By using this approach we avoid the construction of inexact error and channel models as well as the hardware and reproducibility problems. The approach of such a virtual testbed is well known from other domains like embedded systems or wireless sensor networks.

A system which offers these possibilities needs several components to work. The heart of such a system is a big database. This database should at least consist of a map which is covered by a dense grid of reference points. Each reference point should consist of a ground truth value as well as the measured distance to a set of anchor nodes. The position of these anchor nodes must be known. We will also need an interface to access the data which should be able to provide a form of graphical feedback to the user.

To run the initial experiment we must be in possession of radio ranging devices which can be used as anchor nodes. We also need a reference system which is able to accurately localize itself on the map to deliver a ground truth value for each measurement. We presented such a reference system at the previous IPIN [2].

With our reference system in place we are able to build a database as described above and can now present a first version of our virtual testbed system. In this paper we describe how we built the system, how we evaluated it, the results of the evaluation, and the use cases for third parties.

II. RELATED WORK

To get an impression of how experiments are carried out when testing indoor localization methods and algorithms, we look at publications in which real world experiments are carried out. To narrow down the amount of literature we focused on radio- and range-based localization techniques. He et al. present a testbed for evaluating the effects of non line of sight (NLOS) conditions on time of arrival (TOA) based indoor localization systems [3]. This paper is of special interest for us, as the authors use the same localization hardware as we do in our lab. The hardware in use is the nanoLOC device, a special transceiver which implements range-measurement features [4], developed by Nanotron Technologies GmbH. He et al. also describe a combined approach of simulation and field testing. The group developed a complex channel model to simulate the radio properties of the Nanotron transceiver and evaluated it by running several field tests. What makes this work outstanding is the evaluation of simulated results as well as the very realistic field testing setup.


The paper also uses a good categorization of several different multipath conditions, which we will also use in this paper. The authors describe four different conditions:

free space: describes the perfect outdoor scenario. The receiver receives only the direct signal of the sender and no reflected signals at all.

LOS (line of sight): sender and receiver have a direct line of sight connection, but the receiver will also receive multipath components of the sender's signal.

NLOS-DP (non line of sight - direct path): the sender has no line of sight to the receiver, but the signal travelling the direct path between sender and receiver is still detectable.

NLOS-UDP (non line of sight - undetectable direct path): a direct line of sight between sender and receiver does not exist. The part of the signal which travels the direct path is not detectable and the receiver only receives multipath components of the signal.

Besides the described paper by He et al., several different field test experiments can be found in the literature. The simplest approach is to use the received signal strength indicator (RSSI) to calculate the distance between two sensor nodes. This technique is simple to implement but considered inaccurate. A common scenario is to set up a square area, place sensor nodes around this area and then use the RSSI to localize nodes within that area. This method is described by Liu et al. [5] and in similar ways by other authors [6], [7], [8]. A drawback of this approach is that almost only free space and LOS conditions are tested, which are rarely seen in real indoor localization scenarios.

Another class of field tests uses time of arrival (TOA) based approaches. As TOA measurement with radio devices can be difficult due to the extremely short time intervals that need to be measured, the behavior can be imitated by using ultrasound based ranging methods. A big advantage of this approach is that the sensor hardware needed for such an experiment is less expensive and the sensors are easily available. Ultrasound based experiments were conducted by Wendeberg et al. [9] as well as by Priyantha et al. [10] and Moore et al. [11].

Bahillo et al. [12] describe one of the most realistic experiment setups we found in the literature. The authors used the results to evaluate the Robust Least-Squared Multilateration (RSLM) localization algorithm. The experiment used IEEE 802.11b devices to measure RSSI and round trip time (RTT) based ranging values. The experiment was carried out on an office floor with 8 anchors placed in NLOS positions. One mobile node was used to walk along a path of approximately 68 m. Unfortunately the paper does not disclose how the ground truth values were gathered or how many individual positions were measured.

III. SYSTEM OVERVIEW

To understand our work it is important to briefly describe the system we built. Parts of the system have already been published and demonstrated in [2] and [13].

As shown in Fig. 1, the virtual testbed consists of a reference system that carries a system under test, a data fusion component, a database back end, and a user front end. The position of the reference system is transferred to the data fusion component, along with the raw range measurements of the system under test. The data fusion component interpolates the positions of the system under test and stores these reference positions and the raw data of the system under test in the database. The user can then perform virtual test runs using the user front end.
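The paper does not prescribe a concrete record layout for the data exchanged between these components; as a rough sketch (all type and field names below are our own assumptions, not the published schema), the fused records stored in the database could be modelled like this:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class GroundTruthPose:
    """Pose reported by the reference system (robot self-localization)."""
    timestamp: float   # Unix time in seconds
    x: float           # metres in the floor-plan coordinate system
    y: float

@dataclass
class FusedSample:
    """One record produced by the data fusion component: a raw range
    measurement of the system under test, tagged with the interpolated
    reference position valid at its timestamp."""
    timestamp: float
    position: GroundTruthPose   # interpolated reference position
    ranges: Dict[int, float]    # anchor id -> measured distance in metres
```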

A. Reference System

We previously presented a reference system for indoor localization testbeds [2]. A low-cost TurtleBot [14] robot carried the system under test while being directed remotely through hallways in our office-like building. It consists of a Roomba cleaning robot [15] and a laptop that is mounted on top of the Roomba. We use the Robot Operating System (ROS) [16] and its existing localization components to estimate the robot's position relative to a coordinate system defined by a pre-drawn floor plan. For localization, odometry data from the Roomba is combined with visual distance measurements taken by a Kinect depth sensor [17] mounted on the robot. An implementation of Adaptive Monte-Carlo Localization [18] is used for localization. The use of a rack allows for mounting an additional mobile component of a system under test. The use of open-source software assures high customizability of hardware and software. We implemented a middleware that allows us to collect data produced by a system under test as well as the ground truth positions from the robot itself. We are able to continuously calculate the robot's position with an average position estimation error of 6.7 cm. The system has been evaluated by driving along a grid of known positions while performing self-localization.

To achieve better odometry data we acquired a Q.bo robot [19]. This is a commercial robot capable of robust indoor localization given a pre-drawn 2D map of the environment. It uses the same software components as the previous reference system. Instead of a Microsoft Kinect camera, it uses a top-mounted Asus Xtion PRO LIVE 3D camera [20] which provides the same data output. It has bigger wheels which allow the traversal of higher door sills. We collect the ground truth data for the virtual testbed using this Q.bo robot as the reference system. Due to its similarities to the TurtleBot, the localization error is comparable. Another benefit of the Q.bo is its robust casing.

The reference system is able to traverse and localize in an environment using an accurate pre-drawn 2D floor plan. The floor plan does not have to contain movable objects like tables or chairs, but it is advisable to remove such objects, since the reference system cannot traverse that space and would thereby produce a lot of gaps in the virtual testbed.


Fig. 1. Software components used in the virtual testbed: the reference system (carrying the system under test) delivers precise positions and raw data to the data fusion component, which stores raw data with interpolated positions in the database back end; the user front end generates virtual test runs from it.

We use a Dijkstra implementation to produce a path fragment of roughly 2 m in our environment, respecting movable objects not present in the pre-drawn floor plan. The algorithm searches for positions not already traversed while considering the current angle of the robot and the space traversed earlier. Space near already traversed space is rated higher, so that the algorithm produces a tight path without criss-crossing the environment. Path fragments are spaced at a distance of approximately 20 cm. When the current target position is reached, another path fragment is computed. While collecting data, the robot traverses its environment with a maximum speed of 10 cm/s when driving forward. While turning, slightly more data is acquired at the turning position, depending on the turning angle. The ground truth position is acquired at 10 Hz, so that reference positions with a spacing of approximately one centimeter can be transferred to the data fusion component.
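The planner itself is not published; the sketch below only illustrates the rating idea described above (untraversed positions are searched, positions near already traversed space are preferred, and the current robot angle is taken into account). All function names and weights are our own assumptions.

```python
import math

def score_candidate(candidate, robot_pos, robot_heading, visited,
                    w_near=1.0, w_turn=0.5):
    """Toy score for the next ~2 m path-fragment target (illustrative only).

    candidate, robot_pos: (x, y) in metres; visited: set of traversed cells;
    robot_heading: current heading in radians. Higher score is better.
    """
    if candidate in visited:
        return float("-inf")              # only search untraversed space
    # Favour cells adjacent to space already covered, so the sweep stays tight
    # and does not criss-cross the environment.
    d_near = min(math.dist(candidate, v) for v in visited) if visited else 0.0
    # Penalise targets that require a sharp turn from the current heading.
    bearing = math.atan2(candidate[1] - robot_pos[1], candidate[0] - robot_pos[0])
    turn = abs(math.atan2(math.sin(bearing - robot_heading),
                          math.cos(bearing - robot_heading)))
    return -w_near * d_near - w_turn * turn
```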

While evaluating the system we found that the path planning approach using the existing navigation components of ROS does not work as well as we expected. Therefore we are currently researching better ways to obtain a dense and consistent traversal. As a workaround we partly controlled the reference system manually when we conducted our evaluation experiments.

B. System Under Test

As the system under test we use our Wireless Sensor Network (WSN) nodes [21]. The nodes consist of a modified version of the Modular Sensor Board (MSB) A2 which is equipped with a Nanotron nanoPAN 5375 [4] transceiver. This hardware enables the sensor nodes to measure inter-node ranges using time-of-flight in the 2.4 GHz frequency band. One mobile node is attached to the reference system. This node tries to perform range measurements using all predefined anchors. To correctly model the real-world behavior, the failure of a range measurement is also stored in the database. When a range measurement was taken, the software running on the Q.bo robot attaches a timestamp to it and transfers it to the data fusion component.

Since we only have a limited number of nodes available, we cannot fully equip an environment with anchors. Therefore we plan to virtually increase the anchor count in the future by letting the reference system repeat a traversal of the environment using a new anchor placement. However, since this strategy is not the same as having all anchors in place for one run, it has to be evaluated in future work.

C. Data Fusion

We merge reference data and data from the system under test by using unique timestamps on the robot. The collected information from both systems is stored in two queues, ordered in time. When a position update from the reference system is received, all range measurements from the mobile sensor node currently stored in the queue are iterated. The reference position of a range measurement is computed using linear interpolation between the two subsequent ground truth positions that define the time interval surrounding the range measurement. Note that the maximum distance between these positions is normally limited to one centimeter by the design of the reference system. One has to be aware that the interpolation is affected by the localization error of the reference system. Since the expected localization error of the WSN is significantly higher than the combined error of the reference system and the interpolation process, linear interpolation suffices and can benefit the integrity of the data if reference positions are delayed for some reason. When two such ground truth positions are available, the range measurement is removed from its queue and sent along with the interpolated reference position to the database back end.
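The actual implementation is not given in the paper; as a minimal sketch of the interpolation step, assuming ground truth poses of the form (timestamp, x, y):

```python
def interpolate_position(t, p0, p1):
    """Linearly interpolate the reference position for a range measurement
    taken at time t, given the two ground truth poses (t0, x0, y0) and
    (t1, x1, y1) that enclose it, i.e. t0 <= t <= t1."""
    t0, x0, y0 = p0
    t1, x1, y1 = p1
    if t1 == t0:
        return x0, y0                    # identical timestamps, nothing to interpolate
    a = (t - t0) / (t1 - t0)             # fraction of the interval already elapsed
    return x0 + a * (x1 - x0), y0 + a * (y1 - y0)
```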

D. Database Back End

A PostgreSQL database installed on a separate server is used to store the data received via an HTTP REST interface from the data fusion component on the reference system. We also store the actual hardware identifier of the sensor node, since single nodes might produce false measurements, e.g. due to defects or wrong calibration. This can later be evaluated using the filled database. We use PostGIS, a spatial database extender, to efficiently run queries that return only those range measurements that were recorded at a certain position.
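The schema is not published; as an illustration only, a radius query in this spirit could be issued with PostGIS's ST_DWithin, here against a hypothetical samples table with assumed column names:

```python
import psycopg2

def ranges_near(conn, x, y, radius_m=0.05):
    """Fetch range measurements recorded within radius_m of map point (x, y).
    Table and column names are assumptions for illustration."""
    sql = """
        SELECT anchor_id, distance_m, recorded_at
        FROM samples
        WHERE ST_DWithin(position, ST_MakePoint(%s, %s), %s);
    """
    with conn.cursor() as cur:
        cur.execute(sql, (x, y, radius_m))
        return cur.fetchall()

# conn = psycopg2.connect("dbname=testbed")   # connection details are site-specific
```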

We designed the database to store data received from a time-of-flight based measurement system like ours, but it can easily be adapted to accommodate the needs of other systems, e.g. storing RSSI values, data from barometers or other fingerprinting data.

E. User Front End

The database back end can be accessed via a website. A scalable map of the building in which the runs were recorded is presented to the user. The user can define a virtual experiment by drawing any path on the map. Then, the user has to choose between the anchor positions that were available during the recording. The ground truth positions on that path and the range measurements from the WSN received while traversing these ground truth positions are then available for download. Due to the low speed of the reference system, multiple range measurements with the same reference position are stored in the database. To get a more realistic virtual experiment, the user can download only a random subset of measurements. The downloadable result is a comma separated list, easily accessible by other applications like Matlab or Excel. A typical sample contains the following values:

• Unix timestamp of the sample time
• Array of distances to each anchor
• Ground truth of the measurement

The file also contains a header with a list of the used anchors and their ground truth values. A minimal sketch of reading such a file is shown below.
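The export format is only described by the fields above; assuming hypothetical column names (a timestamp column, two ground truth columns, and one column per anchor, with a distance of -1 marking a failed ranging, see Section IV), a minimal reader could look like this:

```python
import csv

def read_virtual_run(path):
    """Read a downloaded virtual run. Column names are assumptions:
    'timestamp', 'gt_x', 'gt_y', then one 'anchor_<id>' column per anchor."""
    samples = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            # Keep only successful rangings (distance >= 0).
            distances = {k: float(v) for k, v in row.items()
                         if k.startswith("anchor_") and float(v) >= 0}
            samples.append({
                "timestamp": float(row["timestamp"]),
                "ground_truth": (float(row["gt_x"]), float(row["gt_y"])),
                "ranges": distances,
            })
    return samples
```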

IV. EVALUATION

In this paper we present the first promising results of experiments we were able to run on our testbed. To evaluate our approach we use a real driven path through our office floor. The path is afterwards used to fetch previously gathered data from the database. In our evaluation we compare the ranging data of the real run with the ranging data fetched from the reference database, which we will refer to as the virtual run in the following section.

In our system the term experiment describes a certain setup. A setup consists of a set of anchor nodes and their positions, a location and a map. For each experiment we carry out multiple runs. A run means that our robot travels a certain path within the setup described by the experiment. While traveling its path the robot continuously measures its current ground truth value as well as the ranges to each anchor, where a −1 distance value indicates a failed measurement. The result is written to a measurement database and will be referred to as a sample in this paper.

A. Proof of concept setup

For our proof of concept evaluation we set up a fixed experiment in a conference room in our office building. The room has a size of 6.80 m × 7.50 m and is equipped with 9 anchor nodes which have a constant LOS connection to the robot (Fig. 2). We chose this simple setting to be comparable with similarly set up scenarios; a more complex setup is considered later in this paper.

In the first run, shown in Fig. 2b, our robot drove through our office floor on a manually controlled path, which we will refer to as the real path. This floor had beforehand been mapped by the same robot under the same circumstances. The path of this run, which we denote as the mapping run, is depicted in Fig. 2a and contains 308 700 valid samples. The measurements of the mapping run are stored in a database as described in Section III-D.

We now take the ground truth value of each sample of the real path and query the database for samples of the mapping run within a 5 cm radius around this ground truth value. As a result of the database query we receive a result set of n measurements within our defined radius, with n ≥ 0. If we receive zero results we drop the current sample. Otherwise we randomly select one out of the n measurements and use it to generate a virtual run. The results of this database request are shown in Fig. 2c. It can easily be seen that the virtual path differs from the real path. This can be explained by the fact that the database does not contain a value for every ground truth value of the real path. The slight blurring of the path is explained by the randomization when we pick samples from the database. A better result could be achieved if the reference data were spatially denser, which will be the case as we gather further data in the near future. We are also researching different strategies for generating the virtual run, using different methods to pick reference values from the database in order to generate more realistic paths.
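The following sketch summarizes this procedure; query_db stands for the database request of Section III-D and is an assumption, not the published interface:

```python
import random

def generate_virtual_run(real_path, query_db, radius_m=0.05):
    """For every ground truth position of the real run, fetch all mapping-run
    samples recorded within radius_m of it and randomly pick one. Positions
    without any nearby reference sample are dropped."""
    virtual_run = []
    for ground_truth in real_path:
        candidates = query_db(ground_truth, radius_m)
        if not candidates:
            continue                      # no reference data near this position
        virtual_run.append(random.choice(candidates))
    return virtual_run
```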

The two resulting datasets, one of the real path and one of the virtual path, can now be evaluated.

1) Ranging results: To prove our concept we expect the properties of the virtual run to be close to the properties of the real run. One of the key properties is of course the spatial form of the path, which can be compared in Fig. 2b versus Fig. 2c. We can see that the path of the virtual run is similar to the path of the real run to a certain extent. However, the path which results from the database request looks blurry and has some outliers in it. The main reason for this is missing reference measurements which are close enough to the path we gave as input. Another reason is the parametrization of our database request, which is a topic of future research. Other key figures can be compared in Table I. The table shows that our database did not contain a suitable sample for each point of our real path, which also explains the difference between both runs. This problem has no huge impact on the two other values. The average anchor count shows how many nodes could be reached by the robot for each sample and is slightly lower for the virtual run. The difference can be explained by different environmental conditions as well as a different path constellation for the mapping run. The average ranging error indicates the overall ranging error of the whole run. Both values differ by just 5 cm (2.75%), which is a completely expectable and promising result.

Other key properties are the error distribution of the ranging measurements as well as the distribution of the average error for each anchor. Both distributions should be closely related to each other so that it is hard to distinguish between a real and a virtual path. The error distribution of the measurements can be found in Fig. 3a for the real path and in Fig. 3b for the virtual path. As the error distribution of a measurement system has a big impact on the results of localization algorithms, it is important that the error distribution of the virtual run is strongly related to the error distribution of the real run. To find out whether both distributions belong to the same distribution function we ran a Kolmogorov-Smirnov test (K-S test) on both datasets. As the K-S test becomes extremely sensitive for large sample sets, we randomly chose 100 ranging error values from the virtual and the real run and ran the K-S test on both subsets. We repeated the test multiple times and the test returned a positive result for the H0 hypothesis, indicating the consistency of both error distributions. The K-S test for 100 randomly selected ranging errors resulted in a p-value of 0.682, which confirms our assumption.
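As a sketch, such a check can be run with SciPy's two-sample K-S test (the paper does not state which implementation was used):

```python
import random
from scipy.stats import ks_2samp

def ks_check(real_errors, virtual_errors, n=100, seed=None):
    """Compare the ranging-error distributions of the real and the virtual run
    on a random subsample of n values each, since the K-S test becomes overly
    sensitive for very large sample sets."""
    rng = random.Random(seed)
    result = ks_2samp(rng.sample(list(real_errors), n),
                      rng.sample(list(virtual_errors), n))
    return result.pvalue   # H0 (same distribution) is kept for large p-values
```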


Fig. 4. Average error per anchor (average anchor error in m per anchor ID, real run versus virtual run).

In Fig. 4 we compare the average error per anchor, which is also an indicator for the comparability of the real versus the virtual path. The results show a significant difference for some of the anchors. These differences are hard to explain as calculated effects of our system. We assume that the source of the differences are slight changes in the environment between the mapping run and the real run. We will examine this assumption in future research. Nevertheless, the impact of these differences depends on the problems one wants to solve using the virtual testbed. For the testing of localization algorithms we see only a minor problem here. Research on anchor selection or on the spatial distribution of anchors could be negatively influenced by the different error behavior per anchor.

2) Localization results: In addition to the evaluation of the ranging results of the testbed we also test the localization performance of the real and the virtual run. We compare three different algorithms.

TABLE I
PARAMETERS OF REAL VS. VIRTUAL RUN

                          real path    virtual path
successful rangings       6453         6363
average anchor count      8.21         8.01
average ranging error     1.82 m       1.87 m

The first two use different least squares methods [22], [23]: we use an implementation of the linear least squares (LLS) algorithm as well as of the non-linear least squares (NLLS) algorithm. As a third algorithm we chose the Min-Max algorithm [24].
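Min-Max [24] is the simplest of the three to state: each anchor at (x_i, y_i) with measured distance d_i bounds the node to the box [x_i − d_i, x_i + d_i] × [y_i − d_i, y_i + d_i], and the position estimate is the centre of the intersection of all boxes. A minimal sketch of this bounding-box idea (not the authors' implementation):

```python
def min_max(anchors, distances):
    """Min-Max (bounding box) position estimate.

    anchors: list of (x, y) anchor positions in metres.
    distances: list of measured distances to those anchors in metres.
    Returns the centre of the intersection of the per-anchor boxes."""
    x_low  = max(x - d for (x, _), d in zip(anchors, distances))
    x_high = min(x + d for (x, _), d in zip(anchors, distances))
    y_low  = max(y - d for (_, y), d in zip(anchors, distances))
    y_high = min(y + d for (_, y), d in zip(anchors, distances))
    return (x_low + x_high) / 2.0, (y_low + y_high) / 2.0
```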

Table II shows the key parameters for the tested localization algorithms. It can easily be seen that the average position errors of the real and the virtual run are very similar for all algorithms. The relative difference between the real and the virtual run for each algorithm is

• 3.17% for LLS
• 0.83% for NLLS
• 1.12% for Min-Max

which is actually better than we expected. As already stated before, the different behavior of the average error per anchor seems to have only a negligible effect on the performance of the localization algorithms. The only outlier can be found in the maximum error ratings of the LLS algorithm. This behavior can be explained by a disadvantageous measurement constellation in the real run data which causes the algorithm to fail, as Zekavat and Buehrer describe in [25].

B. Real world setup

As we have shown that our testbed works quite well under simplified conditions, we carried out another experiment in our office building. For this experiment we set up 25 anchors which are distributed over one whole office floor of our building. The building has a width of 70 m and a depth of 30 m. The robot has mostly NLOS-DP or NLOS-UDP connections to some anchors; the exact number of reachable anchors can be looked up in Table III.

Fig. 2. Paths of the proof of concept setup: (a) Mapping run, (b) Real run, (c) Virtual run. All numbers in meter.


Fig. 3. Error distribution of real versus virtual run: (a) Real run, (b) Virtual run (frequency over ranging error in m).

TABLE II
RESULTS OF DIFFERENT LOCALIZATION ALGORITHMS (POSITION ERROR)

                proof of concept setup                     real world setup
            real run            virtual run            real run            virtual run
algorithm   average   maximum   average   maximum      average   maximum   average   maximum
LLS         3.78 m    23.08 m   3.90 m    84.75 m      3.65 m    101.10 m  3.95 m    75.08 m
NLLS        2.41 m    10.78 m   2.43 m    9.99 m       1.98 m    11.87 m   2.34 m    8.70 m
Min-Max     0.89 m    3.71 m    0.90 m    3.72 m       1.52 m    8.54 m    1.75 m    7.11 m


For our experiment the reference system traveled a short straight path through our office floor, as depicted in Fig. 5. We chose this path because our database already contained enough reference data for that area. At the time of our experiment our database contained 1 617 275 reference samples for the whole floor.

As one can see in Fig. 5, we placed a lot of anchors in our building to get as many rangings per sample as possible. In a practical application the number of anchors could be reduced, but this setup allows us to later research the smallest possible setup or how the anchor constellation influences the performance. To do so we would just have to exclude certain anchors from the virtual path generation.

1) Ranging results: As in Section IV-A we generate a virtual run by fetching measurements within a 5 cm radius around each sample of the real run. The resulting path is again very close to the path of the real run. In Table III we compare the most significant parameters of both runs.

TABLE III
PARAMETERS OF REAL VS. VIRTUAL RUN

                          real path    virtual path
successful rangings       4977         3981
average anchor count      8.69         8.01
average ranging error     1.85 m       1.94 m

As in the proof of concept setup (Section IV-A), the ranging errors of both runs are equally distributed. This assumption is again confirmed by a K-S test under the same conditions as the previous test, which accepts the H0 hypothesis with a p-value of 0.799 for 100 samples. These results show that virtual runs can be used to simulate real runs even if we mostly have NLOS links to our anchors. The relative difference of the average ranging error is 4.86%, which is slightly higher than in the proof of concept setup but still a remarkably good result considering the higher dynamics of this setup.

In Fig. 6 we see that the error per anchor statistics in this setup match better than in the proof of concept setup (Fig. 4). The only big differences can be found for anchors which are relatively far away from our real path and therefore only have a few samples in our dataset. We assume the reason for the better behavior is that this experiment took place on a populated floor while the proof of concept experiment was conducted in an empty room. Within a populated setup the environmental conditions change very often, leading to higher noise over the whole experiment. In the empty room the environmental conditions are static to our knowledge, but might have changed between the mapping and the real run, which could lead to a significantly different behavior between both runs.

2) Localization results: To evaluate the localization performance of the virtual run, we use the ranging data delivered by our testbed to run the three localization algorithms used in Section IV-A2.


Fig. 5. Real world setup with real run

Fig. 6. Error per anchor - real world setup (average anchor error in m per anchor ID, real run versus virtual run).

The result can be seen in Table II. The relative difference of the average localization error between the real and the virtual run is

• 8.21% for LLS
• 18.18% for NLLS
• 15.13% for Min-Max

which is significantly higher than in the proof of concept setup. Interestingly, the relative difference in localization performance is larger than the relative difference in average ranging error. We assume that the reason for this observation lies in the higher error of single anchors. Another explanation could be that the algorithms are sensitive to the spatial distribution of error prone measurements, as explained by Hillebrandt et al. [26].

V. CONCLUSION

We have shown that our virtual testbed for indoor localization provides the possibility to carry out extensive reproducible experiments. The results of these experiments will help the indoor localization community to develop a better understanding of the behavior of ranging techniques and the impacts of inevitable measurement errors on localization algorithms. The testbed allows us to run numerous experiments on empirical real world data.

The testbed can be used for
• performance analysis of indoor localization algorithms
• experiments on anchor placement
• testing of cooperative localization
• evaluation of localization hardware
• developing strategies for certain application scenarios

and more research topics regarding indoor localization. We invite the research community to explore the data and use it for their own research, as well as to contribute to our testbed by running experiments and integrating them into our system.

VI. FUTURE WORK

In the near future we will extend our datasets to completely map our whole office building. We also plan to extend the experiments to other buildings like storage areas or older brick buildings. We would also like to test different ranging devices and hope to find partners from industry for this purpose.


The ranging devices are not limited to radio based hardware; RSSI or ultrasound based systems can also be used. We are going to evaluate the low-cost localization system we presented in [1] using our testbed.

Currently we are examining different strategies to fetch measurements from the database for a given path and their impact on the accuracy of the virtual run. We also expect to increase the accuracy by increasing the density of the gathered data.

Another important aspect which needs to be addressed is a closer look at the error distribution of our ranging system. As can be seen in Fig. 3a and Fig. 3b, a significant part of the rangings appear to be too short. Most of the known channel models do not explain the occurrence of such short rangings. We need to research whether this is a unique behavior of the Nanotron ranging system or whether these effects also appear for other ranging systems, which would significantly change the common channel error models we see in the literature.

VII. ACKNOWLEDGMENTS

This work was funded in part by the German Federal Ministry of Education and Research (BMBF, Project “VIVE”).

REFERENCES

[1] H. Will, S. Pfeiffer, S. Adler, T. Hillebrandt, and J. Schiller, “Distance measurement in wireless sensor networks with low cost components,” in Indoor Positioning and Indoor Navigation (IPIN 2011), 2011.

[2] S. Schmitt, H. Will, B. Aschenbrenner, T. Hillebrandt, and M. Kyas, “A reference system for indoor localization testbeds,” in Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on, 2012, pp. 1–8.

[3] J. He, K. Pahlavan, S. Li, and Q. Wang, “A testbed for evaluation of the effects of multipath on performance of TOA-based indoor geolocation,” Instrumentation and Measurement, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, 2013.

[4] Nanotron Technologies GmbH, “nanoPAN 5375 RF module datasheet,” Berlin, Germany, 2009. [Online]. Available: http://www.nanotron.com

[5] D. Liu, P. Ning, and W. K. Du, “Attack-resistant location estimation in sensor networks,” in Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, ser. IPSN ’05. Piscataway, NJ, USA: IEEE Press, 2005. [Online]. Available: http://dl.acm.org/citation.cfm?id=1147685.1147704

[6] N. Bulusu, J. Heidemann, and D. Estrin, “GPS-less low-cost outdoor localization for very small devices,” IEEE Personal Communications, vol. 7, no. 5, pp. 28–34, Oct. 2000. [Online]. Available: http://dx.doi.org/10.1109/98.878533

[7] K. Whitehouse, C. Karlof, A. Woo, F. Jiang, and D. Culler, “The effects of ranging noise on multihop localization: an empirical study,” in Proc. 4th International Symposium on Information Processing in Sensor Networks. IEEE Press, 2005, p. 10.

[8] R. Crepaldi, P. Casari, A. Zanella, and M. Zorzi, “Testbed implementation and refinement of a range-based localization algorithm for wireless sensor networks,” in Mobility ’06: Proceedings of the 3rd International Conference on Mobile Technology, Applications & Systems. New York, NY, USA: ACM, 2006, p. 61.

[9] J. Wendeberg, J. Müller, C. Schindelhauer, and W. Burgard, “Robust tracking of a mobile beacon using time differences of arrival with simultaneous calibration of receiver positions,” in Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on, 2012, pp. 1–10.

[10] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan, “The Cricket location-support system,” in Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’00. New York, NY, USA: ACM, 2000, pp. 32–43. [Online]. Available: http://doi.acm.org/10.1145/345910.345917

[11] D. Moore, J. Leonard, D. Rus, and S. Teller, “Robust distributed network localization with noisy range measurements,” in Proc. 2nd International Conference on Embedded Networked Sensor Systems. ACM, 2004, pp. 50–61.

[12] A. Bahillo, S. Mazuelas, R. M. Lorenzo, P. Fernandez, J. Prieto, R. J. Duran, and E. J. Abril, “Hybrid RSS-RTT localization scheme for indoor wireless networks,” EURASIP J. Adv. Signal Process., vol. 2010, pp. 17:1–17:12, Feb. 2010. [Online]. Available: http://dx.doi.org/10.1155/2010/126082

[13] S. Schmitt, H. Will, T. Hillebrandt, and M. Kyas, “A virtual indoor localization testbed for wireless sensor networks,” in Poster and Demonstration Sessions of IEEE SECON 2013 (SECON 13 Posters-Demos), New Orleans, USA, Jun. 2013, pp. 239–241.

[14] “The TurtleBot concept.” [Online]. Available: http://www.willowgarage.com/turtlebot

[15] “Roomba 531 vacuum cleaner.” [Online]. Available: http://www.irobot.com/de/product.aspx?id=127

[16] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “ROS: an open-source Robot Operating System,” in ICRA Workshop on Open Source Software, 2009.

[17] “Microsoft Kinect.” [Online]. Available: http://www.xbox.com/en-US/kinect

[18] D. Fox, “KLD-sampling: Adaptive particle filters and mobile robot localization,” in Advances in Neural Information Processing Systems (NIPS), 2001.

[19] “Q.bo, TheCorpora robotic company.” [Online]. Available: http://thecorpora.com/

[20] “Asus Xtion PRO LIVE.” [Online]. Available: http://www.asus.com/Multimedia/Xtion_PRO_LIVE/

[21] M. Baar, H. Will, B. Blywis, T. Hillebrandt, A. Liers, G. Wittenburg, and J. Schiller, “The ScatterWeb MSB-A2 platform for wireless sensor networks,” Freie Universität Berlin, Department of Mathematics and Computer Science, Institute for Computer Science, Telematics and Computer Systems group, Takustraße 9, 14195 Berlin, Germany, Tech. Rep. TR-B-08-15, Sep. 2008. [Online]. Available: ftp://ftp.inf.fu-berlin.de/pub/reports/tr-b-08-15.pdf

[22] I. Guvenc, C.-C. Chong, and F. Watanabe, “Analysis of a linear least-squares localization technique in LOS and NLOS environments,” in Vehicular Technology Conference, 2007. VTC2007-Spring. IEEE 65th, 2007, pp. 1886–1890.

[23] S. Venkatesh and R. Buehrer, “A linear programming approach to NLOS error mitigation in sensor networks,” in Proc. 5th International Conference on Information Processing in Sensor Networks. ACM, 2006, pp. 301–308.

[24] S. Simic and S. Sastry, “Distributed localization in wireless ad hoc networks,” UC Berkeley, Tech. Rep. UCB/ERL M02/26, 2001.

[25] R. Zekavat and R. Buehrer, Handbook of Position Location: Theory, Practice and Advances, ser. IEEE Series on Digital & Mobile Communication. Wiley, 2011. [Online]. Available: http://books.google.de/books?id=2kJuNmtDMkgC

[26] T. Hillebrandt, H. Will, and M. Kyas, “Quantitative and spatial evaluation of distance-based localization algorithms,” Lecture Notes in Geoinformation and Cartography, Proc. of LBS 2012, 2012.