DUAL FOOT MOUNTED PEDESTRIAN NAVIGATION
SYSTEM USING OBLU
COLLEGE  : B.M.S. INSTITUTE OF TECHNOLOGY AND MANAGEMENT, YELAHANKA, BENGALURU
GUIDES   : Prof. K.V.S. HARI, Dr. A. SHOBHA RANI
STUDENTS : Ms. TANYA KURUVILLA, Mr. KOTHA AAKASH, Ms. TANYA SINGH, Mr. SANJAY J SHENOY
Chapter 1
INTRODUCTION
There is an exigent need for a navigation system that is robust, accurate, and offers seamless indoor and outdoor coverage, one that can increase safety in emergency response and military operations. The most commonly used navigation system, GPS, provides high accuracy in many situations; the main challenge, however, is to create a navigation system that remains sufficiently accurate in GPS-denied environments. This is a major problem in situations such as military and disaster-relief operations, where the first responders who arrive at the scene must be tracked as they carry out their mission.
Global positioning systems provide a range of navigation accuracies at very low cost and low power consumption. Devices that use GPS are portable and well suited for integration with other sensors, communication links, and databases. However, the need for an alternative positioning system arises because GPS does not work in all environments, especially indoors. Over the past several years, the need for tracking systems in indoor environments has risen sharply. In India, over 100,000 deaths occur annually due to fires in homes and workplaces, and the number of fatalities has not changed significantly over the past 20 years despite smoke alarms, fire extinguishers and large information campaigns.
Fire-fighters who enter burning buildings experience very difficult conditions. The heat generated by the fire, combined with the weight of personal protection equipment and the water hose, may cause exhaustion. Together with high stress levels in smoke-filled or dark environments (which can be expected during power outages caused by the fire), there is an apparent risk of disorientation, and hence a significant risk that fire-fighters cannot correctly describe their movements to their supervisor. Depending on their role, fire-fighters have different information requirements. For the smoke-divers, a localization system will primarily increase their safety; for the sector chief, the technology provides a means to rapidly comprehend the situation and decide how to deploy resources, especially in large incidents with multiple fire stations deployed at the scene.
Automatic mapping capability is an important complement to the localization technology, since fire-fighters cannot be expected to have access to building floor plans. While law-enforcement officers, fire-fighters, and military personnel have varying requirements for localization and tracking systems, all three groups share certain key requirements. An accurate positioning system could enable alarm functionality to prevent life-threatening situations: used by smoke-divers, it increases their safety and provides a means to quickly understand the situation and deploy resources, especially in large incidents. This increases the need for a robust and accurate positioning system that works in both indoor and outdoor environments.
The key to achieving good accuracy during indoor operation is to use appropriate positioning sensors. One such system is OpenShoe, a real-time, embedded implementation of a foot-mounted, zero-velocity aided INS. An accurate positioning system could enable alarm functionality to prevent life-threatening situations in which fire-fighters get lost; according to a survey carried out by the National Fire Protection Association (NFPA), one-third of fire-fighter fatalities occur on the fire ground, and this could be changed dramatically by the use of an INS-equipped shoe. In addition, applications such as locating the elderly in care centers and employees in an organization are scenarios where indoor positioning is required and has seen increasing demand.
Chapter 2
LITERATURE SURVEY
The need for an alternative positioning system arises because GPS does not work in all environments, especially indoor environments. This is a major problem in certain situations, such as military and disaster-relief operations. Over the past several years, the need for tracking systems in indoor environments has risen sharply. Several technologies are in use for indoor positioning, some based on Wi-Fi or Ultra-WideBand (UWB). These technologies assume that an infrastructure, such as a Wi-Fi or UWB network, is available in the area of interest, and the accuracies they provide range from a few centimeters to a few meters. In harsh environments such as a disaster-affected building, no such infrastructure can be assumed; there is therefore a need to develop autonomous positioning systems.
Inertial Navigation System (INS) technology is capable of working in almost all environments where GPS has difficulties. Microelectromechanical systems (MEMS) inertial technology is seen both as a possible complement to GPS technology and as a potential alternative to it. INSs can provide position information whenever GPS signals are unavailable (in tunnels, indoors, in underground facilities), ensuring seamless provision of position information.
A foot-mounted INS with low-quality inertial sensors will not work, unless fused with other information, when the subject is on a moving platform such as a train or vehicle. Fortunately, both systems can provide information, which can be fused in an optimal manner to obtain good accuracy in the position estimates.
A block diagram of a strap-down INS is shown.
Fig 2.1: Block diagram of a strap-down INS
The INS comprises two distinct parts: the Inertial Measurement Unit (IMU) and the computational unit. The former provides information on the accelerations and angular velocities of the navigation platform relative to the inertial coordinate frame of reference. The angular rotation rates observed by the gyroscopes are used to track the relation between the navigation-platform coordinate system and the navigation coordinate frame. This information is then used to transform the specific force observed in navigation-platform coordinates into the navigation frame, where the gravity force is subtracted from the observed specific force. The accelerations in navigation coordinates are integrated twice with respect to time to obtain the position of the navigation platform.
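As an illustration of this mechanization, a minimal planar (2-D) strap-down step can be sketched as follows. The function name and state layout are illustrative only, and gravity cancellation is omitted since the motion is assumed horizontal:

```python
import math

def strapdown_step(state, f_body, omega_z, dt):
    """One planar strap-down update.
    state   = [x, y, vx, vy, heading] in the navigation frame
    f_body  = specific force (ax, ay) in platform coordinates
    omega_z = gyroscope angular rate about the vertical axis"""
    x, y, vx, vy, h = state
    h += omega_z * dt                       # track platform-to-navigation rotation
    c, s = math.cos(h), math.sin(h)
    ax = c * f_body[0] - s * f_body[1]      # transform specific force into the navigation frame
    ay = s * f_body[0] + c * f_body[1]
    vx += ax * dt                           # first integration: acceleration -> velocity
    vy += ay * dt
    x += vx * dt                            # second integration: velocity -> position
    y += vy * dt
    return [x, y, vx, vy, h]
```

Calling this once per IMU sample with a constant forward specific force and zero angular rate reproduces the expected quadratic position growth.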
The navigation calculations in an INS involve integration over time, which provides a low-pass filter characteristic that suppresses high-frequency sensor errors but amplifies low-frequency sensor errors and initialization errors. This results in a position error that grows without bound as a function of operation time, with the growth rate depending on the error characteristics of the sensors and on the initialization error. In general, for a low-cost INS, a bias in the accelerometer measurements causes position-error growth proportional to the square of the operation time, and a bias in the gyroscopes causes position-error growth proportional to the cube of the operation time (due to the extra integration needed to obtain the relative angle between the navigation frame and the body frame). The detrimental effect of gyroscope errors on the navigation solution stems from the direct reflection of these errors in the estimated attitude. The attitude is used to calculate the current gravity force in navigation coordinates and cancel its effect on the accelerometer measurements; errors in this gravity cancellation accumulate in the velocity and position calculations. For a low-cost INS using gyroscopes with a bias on the order of 0.01 [°/s], this means that the position error exceeds 10 [m] after only 10 [s] of operation.
A navigation system with such error growth is basically useless for indoor navigation. However, by exploiting the fact that an INS mounted on the foot of the user regularly becomes stationary, i.e., has zero instantaneous velocity, the errors in the INS can be estimated and partly compensated for, and the devastating cubic error growth can be mitigated.
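These growth rates can be checked against a simplified small-angle model. The sketch below captures only the dominant bias-driven terms; it ignores sensor noise, initialization errors and cross-coupling, so real-world drift (such as the 10 m figure quoted above) is larger than what it predicts:

```python
import math

def bias_drift(accel_bias, gyro_bias_dps, t, g=9.81):
    """Dominant position-error terms of an uncompensated INS after t seconds:
    an accelerometer bias integrates twice        -> 0.5 * b_a * t^2
    a gyro bias tilts the gravity cancellation    -> g * b_g * t^3 / 6"""
    b_g = math.radians(gyro_bias_dps)   # deg/s -> rad/s
    return 0.5 * accel_bias * t**2, g * b_g * t**3 / 6
```

Doubling the operation time multiplies the accelerometer term by 4 and the gyroscope term by 8, matching the quadratic and cubic growth described in the text.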
Fig 2.2: Block Diagram of zero-velocity aided INS
The block diagram of the zero-velocity aided INS is shown. The zero-velocity aided INS comprises three distinct parts: the INS, the zero-velocity detector, and the Kalman filter; the Kalman filter has a state-space model describing how the errors in the INS develop with time. The INS works as the backbone of the system, continuously estimating the navigation state. Whenever the zero-velocity detector detects that the system is stationary (close to zero instantaneous velocity), this information is used as an input to the Kalman filter, which estimates the errors in the estimated navigation state. The estimated errors are then used to correct (calibrate) the internal states of the INS.
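This feedback loop can be illustrated with a deliberately simplified one-dimensional sketch (a hypothetical toy model, not the full error-state filter running on the real system): between stance phases the INS velocity drifts under an unknown accelerometer bias, and at each detected zero-velocity instant the filter uses the pseudo-measurement "velocity = 0" to estimate and feed back both the velocity error and the bias:

```python
import numpy as np

def zupt_aided_bias_estimation(true_bias=0.1, swing_T=0.5, n_steps=20):
    """1-D error-state Kalman filter: state = [velocity error, accel-bias error].
    Returns the estimated accelerometer bias after n_steps ZUPT updates."""
    F = np.array([[1.0, swing_T],   # velocity error accumulates the bias over one swing
                  [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])      # a ZUPT observes the velocity error directly
    Q = np.eye(2) * 1e-9            # tiny process noise keeps P well-conditioned
    R = 1e-6                        # ZUPT pseudo-measurement noise
    P = np.eye(2)
    b_hat, v_ins = 0.0, 0.0
    for _ in range(n_steps):
        # INS propagation over one swing with the still-uncompensated bias
        v_ins += (true_bias - b_hat) * swing_T
        P = F @ P @ F.T + Q
        # ZUPT update: the innovation is the whole INS velocity (true velocity = 0)
        S = (H @ P @ H.T).item() + R
        K = (P @ H.T) / S
        dx = (K * v_ins).ravel()    # estimated [velocity error, bias error]
        v_ins -= dx[0]              # feed the corrections back into the INS
        b_hat += dx[1]
        P = (np.eye(2) - K @ H) @ P
    return b_hat
```

After a handful of simulated steps the bias estimate converges to the true bias, which is exactly the calibration effect described above.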
The detection of zero-velocity events can be done using external force sensors or radar sensors that sense when the shoe is in contact with the ground. However, external force sensors are prone to mechanical fatigue and may fail to detect that the foot is stationary, for example when the user sits down without applying weight to the shoe.
Fig 2.3: IMU sensor in shoe
Radar sensors require costly electronics, and they cannot detect zero-velocity events if the radar (in the sole) is not directed towards the ground, e.g. when the user is crawling. Therefore, zero-velocity events are generally detected directly from the IMU data, using the fact that when the foot-mounted INS is stationary, the angular rate of the system is zero and the specific force vector is constant with a magnitude equal to gravity.
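A minimal per-sample version of such a detector can be sketched as below. The thresholds are illustrative; practical systems such as OpenShoe instead apply a statistical test (e.g. a generalized likelihood-ratio test) over a short window of samples:

```python
import math

def is_stationary(accel, gyro, g=9.81, f_tol=0.3, w_tol=0.1):
    """Declare zero velocity when the specific-force magnitude is close to
    gravity AND the angular-rate magnitude is close to zero.
    accel: (ax, ay, az) in m/s^2; gyro: (wx, wy, wz) in rad/s."""
    f = math.sqrt(sum(a * a for a in accel))   # specific-force magnitude
    w = math.sqrt(sum(x * x for x in gyro))    # angular-rate magnitude
    return abs(f - g) < f_tol and w < w_tol
```

A sample at rest (specific force equal to gravity, no rotation) passes the test; a foot that is accelerating or rotating fails it.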
Chapter 3
HARDWARE REQUIREMENTS
3.1 Oblu
Oblu is an open-source motion-sensing platform for wearables and robots. It comes pre-programmed as a shoe sensor for pedestrian navigation, and finds applications in industrial safety and resource management, assistive robotics, gaming, and geo-surveys of GPS-devoid areas. Oblu is compatible with Arduino, Raspberry Pi and any other generic development board. Under normal operating conditions, Oblu draws roughly 100 mA to 110 mA at an operating voltage of 3.2 V to 4.2 V when connected to a battery, i.e. a minimum power consumption of about 350 mW. Any generic USB data cable can be used to program Oblu, and additional sensors, IMUs, GPS, Wi-Fi, UWB or ZigBee modules can be added to it. Oblu communicates reliably via BLE over a range of about 2 meters.
Fig 3.1: Oblu
3.1.1 Block Diagram of Oblu
Fig 3.2: Oblu Block Diagram
3.1.2 Features of Oblu
The highlighting features are:
1. Four 6-axis IMUs (3-axis accelerometer + 3-axis gyroscope):
a. Accelerometer range: ±2g, ±4g, ±8g, ±16g
b. Gyroscope range: ±250, ±500, ±1000, ±2000 deg/s
c. Max. sampling rate: 1 kHz
2. USB (bootloader) programmable
3. JTAG programming interfaces for main controller & BLE
4. IMUs' placement in 2x2 array form
5. 32-bit floating-point controller with 512 KB internal flash
6. Bluetooth Low Energy (BLE) 4.1 & USB 2.0 interfaces
7. Access to UART, SPI, I2C and GPIO pins
8. Access to ADC input of BLE (for battery indication)
9. Access to 3.3 V, 5 V, GND & RESET
10. Powering with Li-ion battery & USB; battery charging through USB
11. Max current: 100 mA
12. Onboard LED indicators; reset button
13. Dimensions: 35.0 x 25.9 x 11.8 mm
14. Battery connector for easy attachment
15. Backup of 5 hrs 30 mins with a 500 mAh battery
Common Switches and Connectors
Bootloading switch        S1
Reset switch              S2
Test / BT-pairing LED     LED1
Battery-charging status   LED2
Board power-up            LED3
Connector J1 (signal, pin):
Fig 3.3: Oblu Pin Configuration
PD00 1 VBAT 2
PD02 3 P1.6 4
PD03 5 P0.7 6
PD01 7 P0.4 8
5V 9 P0.5 10
PC03 11 P3.4 12
PC02 13 PD12/P1.5 14
IMU_INT 15 PD11/P1.4 16
Connector J2 (signal, pin):
3.3V 1
GND 2
RST_N/ XRES 3
SWDCLK/ SCLK 4
SWDIO 5
Connector J3 (signal, pin):
PB01 1
PA19 2
PA16 3
PA09 4
PA08 5
3.3V 6
RST_N 7
PA00/TCK 8
PA01/TD1 9
PA02/TD0 10
PA03/TMS 11
GND 12
Fig 3.4: Oblu JTAG Interface
3.1.3 Oblu Microcontroller AT32UC3C
The AT32UC3C is a complete System-On-Chip microcontroller based on the AVR32UC
RISC processor running at frequencies up to 66 MHz. AVR32UC is a high-performance 32-
bit RISC microprocessor core, designed for cost-sensitive embedded applications, with
particular emphasis on low power consumption, high code density and high performance.
The processor implements a Memory Protection Unit (MPU) and a fast and flexible interrupt
controller for supporting modern operating systems and real-time operating systems. Using
the Secure Access Unit (SAU) together with the MPU provides the required security and
integrity.
The AT32UC3C incorporates on-chip Flash and SRAM memories for secure and fast access.
For applications requiring additional memory, an external memory interface is provided on
AT32UC3C0 derivatives.
Fig 3.5: AT32UC3C
3.1.4 Features of Microcontroller AT32UC3C
• High Performance, Low Power 32-bit AVR Microcontroller
– Compact Single-cycle RISC Instruction Set Including DSP Instruction Set
– Built-in Floating-Point Processing Unit (FPU)
– Read-Modify-Write Instructions and Atomic Bit Manipulation
– Performing 1.49 DMIPS / MHz
• Up to 91 DMIPS Running at 66 MHz from Flash (1 Wait-State)
• Up to 49 DMIPS Running at 33 MHz from Flash (0 Wait-State)
– Memory Protection Unit
• Multi-hierarchy Bus System
– High-Performance Data Transfers on Separate Buses for Increased Performance
– 16 Peripheral DMA Channels Improves Speed for Peripheral Communication
• Internal High-Speed Flash
– 512 Kbytes, 256 Kbytes, 128 Kbytes, 64 Kbytes Versions
– Single Cycle Access up to 33 MHz
– FlashVault ™ Technology Allows Pre-programmed Secure Library Support for End User
Applications
– Prefetch Buffer Optimizing Instruction Execution at Maximum Speed
– 100,000 Write Cycles, 15-year Data Retention Capability
– Flash Security Locks and User Defined Configuration Area
• System Functions
– Power and Clock Manager
– Internal 115 kHz (RCSYS) and 8 MHz/1 MHz (RC8M) RC Oscillators
– One 32 kHz and Two Multipurpose Oscillators
– Clock Failure Detection
– Two Phase-Locked Loops (PLL) allowing Independent CPU Frequency from USB or CAN Frequency
• Windowed Watchdog Timer (WDT)
• Asynchronous Timer (AST) with Real-Time Clock Capability
– Counter or Calendar Mode Supported
3.2 Inertial Measurement Unit (IMU)
An inertial measurement unit (IMU) is an electronic device that measures and reports a
body's specific force, angular rate, and sometimes the magnetic field surrounding the body,
using a combination of accelerometers and gyroscopes, sometimes also magnetometers.
GYROSCOPE
A gyroscope measures angular velocity. Classic mechanical gyroscopes use a freely rotating disc (rotor) mounted on a spinning axis, whose inertia keeps its orientation fixed; the MEMS gyroscopes used in IMUs such as Oblu's instead sense the Coriolis deflection of a vibrating structure and convert it into an angular-rate reading. Oblu's built-in gyroscope offers selectable ranges of ±250, ±500, ±1000 and ±2000 deg/s.
ACCELEROMETER
An accelerometer is a compact device that measures specific force, i.e. acceleration relative to free fall. When the object it is integrated into moves, the sensing element is deflected; in MEMS accelerometers this deflection of a tiny proof mass is read out (typically as a change in capacitance, or as a voltage in piezoelectric designs) to produce an acceleration reading. Accelerometers are important components of devices that track fitness and other quantified-self measurements. Oblu's built-in accelerometer offers selectable ranges of ±2g, ±4g, ±8g and ±16g.
3.2.1 InvenSense 6-axis IMU
InvenSense‟s 6-axis family of motion sensors with a 3-axis gyroscope and a 3-axis
accelerometer on the same silicon die are the highest performance sensors in the market with
the lowest noise, best temperature stability, and highest sensitivity accuracy. These key
factors are key to create applications such as navigation, imaging, and augmented reality,
with the best user experience.
This portfolio includes sensors with intelligent on-board processing designed for the smart
use of power where the main processor can be kept in low power mode, such as when a
wearable device is keeping track of a user‟s steps to count calorie expenditure. These devices
enable manufacturers to eliminate the costly and complex selection, qualification, and system
level integration of discrete devices, guaranteeing optimal motion performance for
consumers.
The 6-axis family also supports highly configurable multi-interface devices that can, for example, drive external sensors such as magnetometers and barometers. Combined with the InvenSense MotionApps Platform, sensor management is offloaded from the operating system and motion-based complexities are abstracted behind a structured set of APIs for application development.
Fig 3.6: InvenSense 6-axis IMU
3.3 Bluetooth Low Energy (BLE)
Bluetooth Low Energy (Bluetooth LE, colloquially BLE, formerly marketed as Bluetooth
Smart) is a wireless personal area network technology designed and marketed by the
Bluetooth Special Interest Group (Bluetooth SIG) aimed at novel applications in the
healthcare, fitness, beacons, security, and home entertainment industries. Compared to
Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power
consumption and cost while maintaining a similar communication range.
Mobile operating systems including iOS, Android, Windows Phone and BlackBerry, as well
as MacOS, Linux, Windows 8 and Windows 10, natively support Bluetooth Low Energy.
The Bluetooth SIG predicts that by 2018 more than 90 percent of Bluetooth-enabled
smartphones will support Bluetooth Low Energy.
Bluetooth Low Energy is not backward-compatible with the previous (often called "classic")
Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) protocol. The Bluetooth 4.0
specification permits devices to implement either or both of the LE and BR/EDR systems.
Bluetooth Low Energy uses the same 2.4 GHz radio frequencies as classic Bluetooth, which
allows dual-mode devices to share a single radio antenna. LE does, however, use a simpler
modulation system.
Fig 3.7: Bluetooth Smart
3.3.1 How BLE Works
GATT is an acronym for the Generic Attribute Profile, and it defines the way that two
Bluetooth Low Energy devices transfer data back and forth using concepts called Services
and Characteristics. It makes use of a generic data protocol called the Attribute Protocol
(ATT).
The following diagram illustrates the way that Bluetooth Low Energy devices work in a connected environment. A peripheral can only be connected to one central device (such as a mobile phone) at a time, but the central device can be connected to multiple peripherals.
If data needs to be exchanged between two peripherals, a custom mailbox system must be implemented in which all messages pass through the central device.
Once a connection is established between a peripheral and a central device, however, communication can take place in both directions, unlike the one-way broadcasting approach that uses only advertising data and GAP.
Fig 3.8: Connected Network Topology
The peripheral is known as the GATT Server, which holds the ATT lookup data and the service and characteristic definitions; the GATT Client (the phone/tablet) sends requests to this server. All transactions are started by the master device, the GATT Client, which receives responses from the slave device, the GATT Server.
When establishing a connection, the peripheral will suggest a 'Connection Interval' to the
central device, and the central device will try to reconnect every connection interval to see if
any new data is available, etc. It's important to keep in mind that this connection interval is
really just a suggestion, though! Your central device may not be able to honour the request
because it's busy talking to another peripheral or the required system resources just aren't
available.
The following diagram illustrates the data exchange process between a peripheral (the GATT Server) and a central device (the GATT Client), with the master device initiating every transaction.
Fig 3.9: GATT Transactions
3.3.2 Profiles, Services and Characteristics
GATT consists of the following:
Profile: defines a specific use case and consists of a group of Services that satisfy the use case.
Service: a collection of data and associated behaviors to accomplish a specific feature or functionality.
Characteristic: contains a single value and optional information about the value.
Services are therefore collections of characteristics and relationships to other services that encapsulate the behavior of part of a device. GATT transactions in BLE are based on high-level, nested objects called Profiles, Services and Characteristics, which can be seen in the illustration below:
Fig 3.10: Services and Characteristics
Profiles
A Profile doesn't actually exist on the BLE peripheral itself; it's simply a pre-defined collection of Services that has been compiled by either the Bluetooth SIG or the peripheral designers. The Heart Rate Profile, for example, combines the Heart Rate Service and the Device Information Service.
Services
Services are used to break data up into logical entities, and contain specific chunks of data called characteristics. A service can have one or more characteristics, and each service distinguishes itself from other services by means of a unique numeric ID called a UUID, which can be either 16-bit (for officially adopted BLE Services) or 128-bit (for custom services).
Characteristics
The lowest level concept in GATT transactions is the Characteristic, which encapsulates a
single data point (though it may contain an array of related data, such as X/Y/Z values from a
3-axis accelerometer, etc.).
Fig 3.11: Logic flow for BLE set up
Chapter 4
SOFTWARE REQUIREMENTS
4.1 Android Studio
Android Studio is the official integrated development environment (IDE) for Google's Android operating system, built on JetBrains' IntelliJ IDEA software and designed specifically for Android development. It is available for download on Windows, macOS and Linux, and replaces the Eclipse Android Development Tools (ADT) as the primary IDE for native Android application development.
Fig 4.1: Android Studio
Android Studio was announced on May 16, 2013 at the Google I/O conference. It was in early access preview starting from version 0.1 in May 2013, then entered beta starting from version 0.8, released in June 2014. The first stable build was released in December 2014, starting from version 1.0. The current stable version is 3.1, released in March 2018.
4.1.1 Features Of Android Studio
The following features are provided in the current stable version:
Gradle-based build support
Android-specific refactoring and quick fixes
Lint tools to catch performance, usability, version-compatibility and other problems
ProGuard integration and app-signing capabilities
Template-based wizards to create common Android designs and components
A rich layout editor that allows users to drag-and-drop UI components, with the option to preview layouts on multiple screen configurations
Support for building Android Wear apps
Built-in support for Google Cloud Platform, enabling integration with Firebase Cloud Messaging (earlier 'Google Cloud Messaging') and Google App Engine
Android Virtual Device (Emulator) to run and debug apps in Android Studio
4.1.2 Version 3.X
CRITERION          DESCRIPTION
OS version         Microsoft Windows 7/8/10 (32-bit or 64-bit; 64-bit required for native debugging)
                   Mac OS X 10.10 (Yosemite) or higher, up to 10.13 (macOS High Sierra)
                   GNOME or KDE desktop Linux, 64-bit capable of running 32-bit applications (GNU C Library (glibc) 2.19+)
RAM                3 GB minimum, 8 GB recommended; plus 1 GB for the Android Emulator
Disk space         2 GB of available disk space minimum, 4 GB recommended (500 MB for IDE + 1.5 GB for Android SDK and emulator system image)
Java version       Java Development Kit (JDK) 8
Screen resolution  1280 x 800 minimum
Table 4.1: System Specification
4.2 Matlab
MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and proprietary programming language developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran and Python.
Fig 4.2: Matlab
Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems.
4.2.1 Features
Symbolic integration, differentiation, transforms, and linear algebra
Algebraic and ordinary differential equation (ODE) solvers
Simplification and manipulation of symbolic expressions
Unit systems for specifying, converting, and computing using SI, US, and custom unit systems
Plotting of analytical functions in 2D and 3D
Symbolic expression conversion to MATLAB, Simulink, Simscape, C, Fortran, and LaTeX
Variable-precision arithmetic
Using the familiar MATLAB syntax, you can define and perform operations on symbolic numbers, variables, expressions, and equations, with the output rendered in mathematical typesetting. The Symbolic Math Toolbox supports a wide variety of mathematical functions: MATLAB performs arithmetic and calculus analytically, including differentiation, definite and indefinite integration, limits, series, summations and products. You can also compute transforms and their inverses, including Fourier, Laplace, and Z-transforms.
Fig 4.3: Fusion Results on Matlab
Chapter 5
INTERFACING OBLU WITH APPLICATION
PLATFORM
The method used to construct the tracked path profile of the wearer of Oblu, when the Bluetooth interface is used for data transmission, is described in the diagram below.
Fig 5.1: Interfacing Oblu with Application Platform
Command Flow Sequence
Step 1: START - Start receiving data packets from Oblu.
Step 2: ACK - Send an acknowledgement for the last data packet received from Oblu.
Step 3: SWDR - Oblu sends a DATA packet containing displacement and orientation information for each stride, at every step, to the Application. (A step is registered whenever zero motion, i.e. standstill, is detected.)
Step 4: The Application acknowledges the last DATA packet by sending the appropriate ACK to Oblu. (The cycle of steps 3 and 4 repeats until the Application sends STOP; on receiving the STOP command, Oblu executes Step 5.)
Step 5: STOP - (i) Stop processing in Oblu. (ii) Stop all outputs from Oblu.
Description
Step 1: START - Start receiving data packets from Oblu
The command to START transmitting stepwise dead-reckoning data consists of 3 bytes: {0x34, 0x00, 0x34}.
The START command results in the following:
Oblu transmits a 4-byte acknowledgement in response to the START command from the application platform:
"A0 34 00 D4"
A0 = Acknowledgement state
34 = Start command state
00 D4 = Checksum
Following the 4-byte acknowledgement, Oblu sends out 64-byte tracking data packets via wireless communication. The figure below illustrates what the packets look like:
Fig 5.2: Data Packet for Stepwise Dead Reckoning
Fig 5.3: Data Packet Example
Data Packet Description
These packets consist of 64 bytes.
1. Header: The first 4 bytes are the header.
a. The first byte is STATE_OUTPUT_HEADER, which is 170 (0xAA).
b. The 2nd and 3rd bytes jointly give the data packet number sent by the device since the beginning of stepwise dead reckoning; the packet number is thus a 16-bit variable.
c. The 4th byte gives the payload size, i.e. the number of bytes (0x3A = 58) of information this data packet contains.
2. Payload: The payload consists of 14 elements of type 'float', comprising 14 x 4 = 56 bytes.
a. The first 4 elements are the delta vector, i.e. the change in position x, y, z and the angle of rotation around the z-axis (the change in the angle of the x-y plane). As these elements are of type float, each is 32 bits. The change in position is measured between two successive ZUPT instances, i.e. it is the displacement vector from one step to the next.
b. The other 10 elements of the payload form the covariance matrix, a 4x4 symmetric matrix (hence 10 unique elements). These are not used in stepwise dead reckoning; they are used for fusing data from the two Oblu devices in the dual foot-mounted system.
3. Step Counter: The next two bytes hold the step counter, which records the number of ZUPT instances observed, i.e. the number of times the velocity of Oblu falls below a predefined threshold. If Oblu is used for foot-mounted pedestrian tracking, this is the number of steps taken by the wearer.
4. Checksum: The last two bytes of the 64 contain the checksum, the sum of all the preceding bytes. It is used to cross-check the correctness of the transferred data.
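Putting the four parts together, a packet can be decoded as in the following sketch. The byte order of the multi-byte fields is assumed big-endian here (consistent with the packet-number arithmetic in the pseudo code of this chapter, but an assumption as far as the floats are concerned), and the checksum is assumed to be a 16-bit sum:

```python
import struct

def parse_swdr_packet(pkt: bytes):
    """Decode one 64-byte stepwise dead-reckoning packet:
    4-byte header, 56 bytes of floats, 2-byte step counter, 2-byte checksum."""
    if len(pkt) != 64 or pkt[0] != 0xAA:
        raise ValueError("not a valid SWDR packet")
    packet_number = pkt[1] * 256 + pkt[2]        # 16-bit packet counter
    if pkt[3] != 0x3A:                           # payload size: 58 information bytes
        raise ValueError("unexpected payload length")
    floats = struct.unpack(">14f", pkt[4:60])    # 14 single-precision floats
    dx, dy, dz, da = floats[:4]                  # delta vector since the last ZUPT
    cov = floats[4:14]                           # 10 unique covariance elements
    step_count = pkt[60] * 256 + pkt[61]         # ZUPT / step counter
    checksum = pkt[62] * 256 + pkt[63]
    if checksum != sum(pkt[:62]) % 65536:        # 16-bit sum of all preceding bytes
        raise ValueError("checksum mismatch")
    return packet_number, (dx, dy, dz, da), cov, step_count
```

Feeding in a synthetic packet with known delta values returns them unchanged, which makes the decoder easy to verify before connecting real hardware.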
Step 2: ACK - Send acknowledgement for the last data packet received from Oblu
The ACK consists of 5 bytes:
- 1st byte: 01
- 2nd byte: P1 (first byte of the packet number of the last data packet received)
- 3rd byte: P2 (second byte of the packet number of the last data packet received)
- 4th byte: quotient of (1 + P1 + P2) div 256
- 5th byte: remainder of (1 + P1 + P2) div 256
Pseudo code for ACK generation:
i = 0;
for (j = 0; j < 4; j++)
{
    header[j] = buffer[i++] & 0xFF; // 4 header bytes; buffer holds the 64-byte data packet from Oblu
}
packet_number_1 = header[1]; // First byte of data packet number
packet_number_2 = header[2]; // Second byte of data packet number
ack = createAck(ack, packet_number_1, packet_number_2); // Acknowledgement created
<code to send acknowledgement> // ACKNOWLEDGEMENT SENT

// HOW TO CREATE THE 5-BYTE ACKNOWLEDGEMENT
byte[] createAck(byte[] ack, int packet_number_1, int packet_number_2)
{
    ack[0] = 0x01;                    // 1st byte
    ack[1] = (byte) packet_number_1;  // 2nd byte
    ack[2] = (byte) packet_number_2;  // 3rd byte
    ack[3] = (byte) ((1 + packet_number_1 + packet_number_2) / 256); // 4th byte: quotient of (1+P1+P2) div 256
    ack[4] = (byte) ((1 + packet_number_1 + packet_number_2) % 256); // 5th byte: remainder of (1+P1+P2) div 256
    return ack;
}
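For comparison, the same five ACK bytes can be produced by a compact Python helper (illustrative only, not part of the Oblu firmware):

```python
def make_ack(p1: int, p2: int) -> bytes:
    """Build the 5-byte ACK for the packet whose number bytes are (p1, p2)."""
    s = 1 + p1 + p2                              # 0x01 plus the two packet-number bytes
    return bytes([0x01, p1, p2, s // 256, s % 256])
```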
Step 3: SWDR - Perform stepwise dead reckoning (SWDR) on the received data
Oblu transmits tracking data with respect to its own frame, which itself is not fixed with respect to the user's global reference frame. The data should therefore undergo a rotational transformation before being presented to the end user in a global reference frame. (Here 'global' refers to the coordinate axes in which the system is initialized.)
The first four elements of the payload are the position and heading change vector, i.e. dx, dy, dz, da (da is the change in the angle of rotation about the z-axis). These values are output by the device at every ZUPT instance (ZUPT instance = foot at complete standstill = step detection). As mentioned earlier, each set of delta values describes the movement between two successive steps. The delta values before and after each ZUPT instance are independent of each other, because the system is reset at every ZUPT; i.e. the delta values in one data packet are independent of those in every other data packet. The position and heading change vector is expressed with reference to the coordinate frame of the last ZUPT instance.
The 'da' term corresponds to the deviation in the orientation of Oblu. It is a rotation of the
x-y plane about the z-axis, where the z-axis is always aligned with gravity. The da is thus
used for updating the orientation of the system and obtaining the global reference frame. The
two-dimensional displacement vector (dx, dy) is transformed to the global frame by applying
this rotation, giving (x, y) in the global frame. As no change in alignment along the z-axis is
considered, the z coordinate is updated by simply adding the present dz to the z obtained at
the previous ZUPT.
Thus the x, y, z coordinates in the global reference frame are obtained at every ZUPT.
Pseudo code for construction of tracked path:
x_sw[0] = x_sw[1] = x_sw[2] = x_sw[3] = 0.0; // Initialize once
// (x_sw[0], x_sw[1], x_sw[2]) = Final (X, Y, Z) in the user's reference frame
// x_sw[3] = Cumulative change in angle around the z-axis in the user's reference frame
// x_sw[0]..x_sw[3] are float (IEEE 754 single-precision, 32 bits)
packet_number_1 = header[1]; // First byte of data packet number
packet_number_2 = header[2]; // Second byte of data packet number
// DO FOLLOWING AFTER SENDING THE ACKNOWLEDGEMENT
packet_number = packet_number_1 * 256 + packet_number_2; // PACKET NUMBER ASSIGNED
if (packet_number_old != packet_number) // Perform stepwise dead reckoning on receiving a new data packet
{
    for (j = 0; j < 4; j++) // Only the first four 4-byte values are required
        dx[j] = (double) payload[j]; // (dx[0], dx[1], dx[2]) = Displacement in (x, y, z) in Oblu's reference frame
                                     // dx[3] = Change in angle around the z-axis, as indicated by Oblu
    stepwise_dr_tu(); // Perform stepwise dead reckoning on the received data
    packet_number_old = packet_number;
}
// STEPWISE DEAD RECKONING
void stepwise_dr_tu()
{
    sin_phi = (float) Math.sin(x_sw[3]);
    cos_phi = (float) Math.cos(x_sw[3]);
    delta[0] = cos_phi*dx[0] - sin_phi*dx[1]; // Rotate the two-dimensional displacement vector
    delta[1] = sin_phi*dx[0] + cos_phi*dx[1]; // (dx[0], dx[1]) to the user's reference frame
    delta[2] = dx[2];     // Assuming only linear changes along the z-axis
    x_sw[0] += delta[0];  // Final X in the user's reference frame
    x_sw[1] += delta[1];  // Final Y in the user's reference frame
    x_sw[2] += delta[2];  // Final Z in the user's reference frame
    x_sw[3] += dx[3];     // Cumulative change in orientation in the user's reference frame
    distance += Math.sqrt(delta[0]*delta[0] + delta[1]*delta[1] + delta[2]*delta[2]); // Distance along the tracked path
}
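The update in stepwise_dr_tu() can be packaged as a small self-contained class for experimentation; the class and method names here are illustrative, not the app's actual API:

```java
// Minimal, self-contained version of the stepwise dead-reckoning update
// above. Class and method names are illustrative only.
public class StepwiseDR {
    private final double[] x = new double[4]; // X, Y, Z, cumulative heading (rad)

    // Apply one ZUPT-instance delta (dx, dy, dz, da) given in Oblu's frame.
    public void update(double dx, double dy, double dz, double da) {
        double sin = Math.sin(x[3]);
        double cos = Math.cos(x[3]);
        x[0] += cos * dx - sin * dy; // rotate (dx, dy) into the global frame
        x[1] += sin * dx + cos * dy;
        x[2] += dz;                  // z handled by simple accumulation
        x[3] += da;                  // accumulate the heading change
    }

    public double[] position() { return new double[] { x[0], x[1], x[2] }; }
    public double heading()    { return x[3]; }
}
```

For example, moving 1 m forward while turning 90° (da = π/2), then moving 1 m "forward" again in the sensor frame, lands the global position at approximately (1, 1, 0).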
Step 4: Application acknowledges receipt of the last DATA packet by sending the appropriate
ACK to Oblu.
(The cycle of steps 3 and 4 is repeated until the Application sends STOP. On receiving the
STOP command, Oblu executes Step 5.)
Follow the procedure laid down in Step 2 for generating the ACK for the received DATA packet.
Step 5: STOP
Stop processing in Oblu
Stop all outputs in Oblu
(i) Stop processing in Oblu by sending the 3-byte command: {0x32, 0x00, 0x32}
(ii) Stop all outputs in Oblu by sending the 3-byte command: {0x22, 0x00, 0x22}
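Both commands share the same observable shape: an opcode byte, a 0x00 byte, and a final byte equal to the sum of the first two. The sketch below builds them under that assumption (the checksum rule is inferred from these two commands only, not from Oblu documentation):

```java
// Sketch of the 3-byte command format seen above: {opcode, 0x00, checksum},
// where the checksum is the byte-sum of the first two bytes. This pattern is
// inferred from the two STOP commands; treat it as an assumption, not a
// documented rule of the Oblu protocol.
public class ObluCommand {
    public static byte[] build(int opcode) {
        byte[] cmd = new byte[3];
        cmd[0] = (byte) opcode;                  // command opcode
        cmd[1] = 0x00;                           // payload byte
        cmd[2] = (byte) ((opcode + 0x00) & 0xFF); // checksum byte
        return cmd;
    }

    public static final byte[] STOP_PROCESSING = build(0x32);
    public static final byte[] STOP_OUTPUTS    = build(0x22);
}
```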
Chapter 6
ANDROID LIFECYCLE AND LIFECYCLE METHODS
An Android application is required to receive the data from the device, so that the data can
be processed and the motion tracked. The Android application designed for this purpose is
Open Xoblu. Open Xoblu, a PDR tool for Oblu, performs Pedestrian Dead Reckoning (PDR) on
the data transmitted from Oblu, over Bluetooth, to the Android device (smartphone, tablet, etc.).
Its features are:
- Graphical display of the tracked path and present coordinates in real time
- Computation of the current coordinates and saving them in a log file
- Option of using a floor plan or a map in the background of the graphical display
- Option to break the tracking session and continue later (highly useful for land surveys)
- Option to tag a location with text and a photo, in real time
- Direction alignment using the phone's compass
- Zoom in, zoom out, fit, and rotation of the tracked path
- Option to tag the tracked path as Indoor / Outdoor
- Display of average and instantaneous stride length and frequency, step count, and distance
covered
6.1 Android Lifecycle
Fig 6.1: Android lifecycle
6.2 Android’s Core Components
Activity Class
An Activity object is similar to a Windows Form. It usually presents a single
graphical user interface (GUI) which, in addition to displaying/collecting
data, provides some kind of 'code-behind' functionality.
A typical Android application contains one or more Activity objects.
Applications must designate one activity as their main task or entry point. That
activity is the first to be executed when the app is launched.
An activity may transfer control and data to another activity through an interprocess
communication mechanism called intents.
Service Class
Services are a special type of activity that do not have a visual user interface. A
service object may be active without the user noticing its presence.
Services are analogous to secondary threads, usually running some kind of
background 'busy-work' for an indefinite period of time.
Applications start their own services or connect to services that are already active.
Broadcast Receiver Class
A BroadcastReceiver is a dedicated listener that waits for a triggering system-wide
message to do some work. The message could be something like: low battery, Wi-Fi
connection available, earthquakes in California, speed camera nearby.
Broadcast receivers do not display a user interface.
They typically register with the system by means of a filter acting as a key. When the
broadcast message matches the key, the receiver is activated.
A broadcast receiver could respond either by executing a specific activity or by using
the notification mechanism to request the user's attention.
Content Provider Class
A content provider is a data-centric service that makes persistent datasets available to
any number of applications.
Common global datasets include: contacts, pictures, messages, audio files, emails.
The global datasets are usually stored in an SQLite database (however, the developer
does not need to be an SQL expert).
The content provider class offers a standard set of parametric methods to enable
other applications to retrieve, delete, update, and insert data items.
6.3 Lifecycle Methods
onCreate()
Called when the activity is first created.
This is where you should do all of your normal static setup: create views, bind data to
lists, and so on. This method is passed a Bundle object containing the activity's
previous state, if that state was captured.
Always followed by onStart().
onRestart()
Called after the activity has been stopped, just prior to it being started again.
Always followed by onStart().
onStart()
Called just before the activity becomes visible to the user.
Followed by onResume() if the activity comes to the foreground, or onStop() if it
becomes hidden.
onResume()
Called just before the activity starts interacting with the user.
At this point the activity is at the top of the activity stack, with user input going to it.
Always followed by onPause().
onPause()
Called when the system is about to start resuming another activity.
This method is typically used to commit unsaved changes to persistent data, stop
animations and other things that may be consuming CPU, and so on. It should do whatever
it does very quickly, because the next activity will not be resumed until it returns.
Followed either by onResume() if the activity returns to the front, or by onStop()
if it becomes invisible to the user.
The activity in this state is killable by the system.
onStop()
Called when the activity is no longer visible to the user.
This may happen because it is being destroyed, or because another activity (either an
existing one or a new one) has been resumed and is covering it.
Followed either by onRestart() if the activity is coming back to interact with the user,
or by onDestroy() if this activity is going away.
The activity in this state is killable by the system.
onDestroy()
Called before the activity is destroyed.
This is the final call that the activity will receive.
It could be called either because the activity is finishing (someone called finish()
on it), or because the system is temporarily destroying this instance of the activity to
save space.
You can distinguish between these two scenarios with the isFinishing() method.
The activity in this state is killable by the system.
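Since the callback order above is easy to get wrong, here is a plain-Java sketch (no Android dependency; the class name is mine) that encodes the legal successor callbacks exactly as described in this section:

```java
import java.util.List;

// Plain-Java model of the lifecycle callback order described above.
// Each method returns the callbacks that may legally follow it.
public class Lifecycle {
    public static List<String> next(String callback) {
        switch (callback) {
            case "onCreate":  return List.of("onStart");             // always followed by onStart()
            case "onRestart": return List.of("onStart");             // always followed by onStart()
            case "onStart":   return List.of("onResume", "onStop");  // foreground or hidden
            case "onResume":  return List.of("onPause");             // always followed by onPause()
            case "onPause":   return List.of("onResume", "onStop");  // back to front, or invisible
            case "onStop":    return List.of("onRestart", "onDestroy");
            default:          return List.of();                      // onDestroy: final call
        }
    }
}
```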
Chapter 7
EXISTING SYSTEM- SINGLE FOOT SYSTEM
There are several technologies used for indoor positioning. Some are based on WiFi or
Ultra-WideBand (UWB). These technologies assume that an infrastructure, such as a WiFi
network or a UWB network, is available in the area of interest. The accuracies provided by
these technologies range from a few centimetres to a few metres. However, in harsh
environments such as disaster-affected buildings, such assumptions about infrastructure
cannot be made. Therefore, there is a need to develop autonomous positioning systems.
The main challenge in developing an infrastructure-free positioning solution is to create a
technology that is sufficiently accurate in GPS-denied environments. Inertial Navigation
System (INS) technology is capable of working in almost all environments where GPS has
difficulties. INS can provide position information in tunnels, indoors and in underground
facilities, enabling seamless provision of position information.
The existing Pedestrian Navigation System is a single foot-mounted tracking system. The
Oblu is attached to the shoe, and an application platform runs a pre-installed PDR
application. The foot sensor detects steps, computes the movement for every step, and
transmits the computed data via Bluetooth to the application platform at every step to
construct the path.
The moment the Start button is pressed in the app, the global reference frame is defined. The
global reference frame coincides with the local reference frame of the sensor. The
displacement and heading change are transmitted to the application platform at the first
detected step. The platform constructs the path in the global reference frame. This process
continues for the following steps, and the path is thereby traced.
Fig 7.1: Single foot System
7.1 Application to Generate Single Foot Data
Fig 7.2: Menu view Fig 7.3: Dashboard view
Fig 7.4: Footprint view
Fig 7.5: Device connection
7.2 Results of Single Foot Mounted Application
Fig 7.6: Oblu mounted on Left foot
Fig 7.7: Oblu mounted on Right foot
7.3 Limitations of Single Foot System
The navigation calculations in an INS involve integration over time, which provides a
low-pass filter characteristic: high-frequency sensor errors are suppressed, but
low-frequency sensor errors and initialization errors are amplified. This results in a
position error that grows without bound as a function of the operation time, where the error
growth depends on the error characteristics of the sensors and on the initialization error.
In the present system (foot-mounted INS), this unbounded error growth eventually yields
misleading position coordinates.
In general, in an INS,
The error due to accelerometer measurements is proportional to the square of the
operation time.
The error due to gyroscope measurements is proportional to the cube of the operation time.
It is observed that the drawback of the existing foot-mounted ZUPT-aided (zero-velocity
update) INS is that, due to internal noise in the sensors, there is a drift error which grows
with time. These drift errors are random in nature. The estimated trajectories drift away
from the actual path as time progresses. Despite having a calibration phase before walking
starts, systematic heading drifts persist. Another important observation is that the drifts
obtained are symmetrical. These errors are large-scale manifestations of modelling errors in
the system. One possible way to mitigate these errors is to use a foot-mounted INS on both
feet, so that the symmetrical modelling errors cancel out. It is intuitive that the dual
shoe-mounted approach will give better tracking performance than the single shoe-mounted
approach.
Chapter 8
PROPOSED SYSTEM- DUAL FOOT SYSTEM
In the existing system, it is observed that there are errors in the measurement of certain
parameters, such as the coordinates of the responder along the path followed. To overcome
these limitations, a system with an INS mounted on each foot is used, which reduces the
measurement error. Because the human body is non-rigid, the relative positions of the
foot-mounted INSs are not fixed, and one cannot directly relate the navigation solution of
one foot-mounted INS to the other. The data obtained from both feet are fused using an
application program. The open-source Android application is modified as per the current
system requirements. This makes the system better and more reliable than the existing one.
The dual foot-mounted scheme is therefore implemented with two Oblu units, one attached to
each foot. Both units must have the same or similar orientation.
The project involves:
- Simultaneously connecting two Oblu modules to an Android phone
- Performing data fusion
- Modifying the application, including the GUI, for the dual foot-mounted configuration
Chapter 9
APPLICATION TO GENERATE DATA FROM DUAL
FOOT
The application for a single foot allows only one BLE connection. After the
connection is established, the inertial parameters for each step are generated. The inertial
parameters generated for each step are: x-coordinate, y-coordinate, z-coordinate, average
speed, distance, step count, stride length, stride frequency and heading. The application
also generates the trajectory of the path travelled by the individual wearing the inertial
sensor (Oblu). The inertial parameters are generated in a matrix format and stored in the
internal memory, from where they can be accessed at a later stage. These files are also known
as log files. In the single-foot application, in order to establish a new connection the user
has to disconnect from the previously connected device.
The dual-foot application allows us to establish a connection to two inertial sensors (BLE
devices) simultaneously. The moment we select two devices and establish a connection, the
application moves to the next activity. The activity is designed to show the inertial
parameters obtained from both BLE devices. The Graphical User Interface (GUI) is designed as
a scroll view for Device 1 and Device 2. The user can now mount two Oblu units, one on each
foot, and successfully collect data from the left and right foot for each step taken.
The data generated from the right and left foot are stored in a matrix format, separately for
each foot, in the internal memory of the mobile. The stored data form two log files which can
be accessed later to perform data fusion in order to generate an accurate path.
Fig 9.1: GUI for Device 1
Fig 9.2: GUI for Device 2
Chapter 10
DATA FUSION AND GENERATION OF
ACCURATE PATH
The Graphical User Interface (GUI) of this application is designed to give access to the
internal memory of the mobile so that the respective log files for the right and left foot can
be selected. The application has two button views named "SELECT FILE A" and "SELECT
FILE B". On clicking "SELECT FILE A", we get access to the internal memory in order to
select the log file of the left foot; similarly, on clicking "SELECT FILE B" we can select the
log file of the right foot. Hence, the log files for both the left and right foot are selected.
There is another button view named "FUSION". On clicking "FUSION", the range-constraint
algorithm is performed on the data from the two feet, which generates an accurate path in the
next activity. This path is more accurate than the path generated in the
single-foot application, as the drift errors are reduced.
The range-constraint algorithm can be explained as follows: when one foot is in its stance
phase, the position of the foot that is in motion cannot exceed a threshold value, which we
define as the maximum separation between the two feet. The maximum possible separation
between the two feet at any instant of time should not exceed 1 m. Initially, both feet are
at [0, 0, 0]. The initial position of one foot acts as a reference point for the other foot.
This process is continued for further steps. If the separation between the two feet exceeds
1 m, it is adjusted to 1 m. In this way an accurate path is obtained.
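A minimal sketch of this adjustment step, assuming the feet's positions are available as (x, y, z) triples (the class and method names are hypothetical, not the app's actual fusion code):

```java
// Illustrative range-constraint step: if the moving foot's estimated
// position is more than maxSep metres from the stationary foot, project
// it back onto the sphere of radius maxSep around the stationary foot.
public class RangeConstraint {
    public static double[] constrain(double[] moving, double[] fixed, double maxSep) {
        double dx = moving[0] - fixed[0];
        double dy = moving[1] - fixed[1];
        double dz = moving[2] - fixed[2];
        double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
        if (dist <= maxSep || dist == 0.0) {
            return moving; // within range: leave the estimate untouched
        }
        double scale = maxSep / dist; // shrink the separation vector to maxSep
        return new double[] {
            fixed[0] + dx * scale,
            fixed[1] + dy * scale,
            fixed[2] + dz * scale
        };
    }
}
```

For example, with the stationary foot at the origin and maxSep = 1.0, an estimate of (2, 0, 0) for the moving foot is pulled back to (1, 0, 0), while (0.5, 0, 0) is left unchanged.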
Fig 10.1: Activity 1
Fig 10.2: Path for log files from left and right foot
Fig 10.3: Path Traced
10.1 Generated Data Sets
The generated log files from both feet contain parameters such as Time Stamp, Step Count,
X, Y, Z, Distance, Heading, Latitude, Longitude, Indoor /Outdoor.
10.1.1 Right Foot Data Set
Fig 10.4: Right foot Log files
10.1.2 Left Foot Data Set
Fig 10.5: Left foot Log files
Chapter 11
CONCLUSION
The drawback of the existing single foot-mounted ZUPT-aided (zero-velocity update) INS is
that, due to internal noise in the sensors, there is a drift error which grows with time.
These drift errors are random in nature. The estimated trajectories drift away from the
actual path as time progresses.
To overcome these limitations, a system with an INS mounted on each foot (dual foot) is used,
which reduces the measurement error. Because the human body is non-rigid, the relative
positions of the foot-mounted INSs are not fixed, and one cannot directly relate the
navigation solution of one foot-mounted INS to the other. The data obtained from both feet
are fused using an application program. This makes the system better and more reliable than
the existing system. The paths traced by the single-foot system and the dual-foot system are
shown below. This clearly indicates how the dual-foot system provides a much more accurate
trajectory than the single-foot system.
Fig 11.1: Trajectory of Single foot System Fig 11.2: Trajectory of Dual foot System
REFERENCES
[1] "A Prototype of a First-Responder Indoor Localization System", K.V.S. Hari, John-Olof
Nilsson, Isaac Skog, Peter Händel, Jouni Rantakokko and G.V. Prateek, Journal of the
Indian Institute of Science, 2013.
[2] "Long-term Performance Evaluation of a Foot-mounted Pedestrian Navigation Device",
Amit K. Gupta, Isaac Skog and Peter Händel, IEEE INDICON, 2015.
[3] "Data Fusion of Dual Foot-Mounted INS to Reduce the Systematic Heading Drift", Prateek
G.V., Girisha R., Peter Händel and K.V.S. Hari, 4th International Conference on Intelligent
Systems, Modelling and Simulation, 2015.
[4] "Cooperative Localization by Dual Foot-Mounted Inertial Sensors and Inter-Agent
Ranging", John-Olof Nilsson, Zachariah, Isaac Skog and Peter Händel, EURASIP Journal on
Advances in Signal Processing, 2013, 2013:164.