
Sensor Fusion - A Need for Next Generation Automobiles



VOL. 9, ISSUE 1, JANUARY - MARCH 2016

Sensor Fusion - A Need for Next Generation Automobiles

In this issue: Need for Sensor Fusion, Sensor Characteristics, Sensor Fusion Algorithms, Virtual Sensor, Sensor Redundancy and its Applications, Sensor Fusion for Efficient Diagnostics, Issues with Sensor Fusion, Sensing the Future


Colophon

Rahul Uplap, AVP

Reena Kumari Behera, Dr. Somnath Sengupta, Smita Nair, Dr. Nitin Swamy, Narendra Kumar S S, Naresh Adepu, Srinivasa Bugga, Aditya Piratla, Pranjali Modak, Smitha K P

Designed and Published by
Mind'sye Communication, Pune, India
Contact: 9673005089

Suggestions and Feedback
[email protected]


Contents

Editorial
  Editorial, Rahul Uplap ... 3

Articles
  Need for Sensor Fusion, Pallavi Bhure & Pallavi Kalyanasundaram ... 4
  Sensor Characteristics, Smitha K P & Varsha Phatak ... 10
  Sensor Fusion Algorithms, Jiji Gangadharan & Anusha Baskaran ... 16
  Virtual Sensor, Milind Potdar ... 24
  Sensor Redundancy and its Applications, Aditya Piratla ... 28
  Sensor Fusion for Efficient Diagnostics, Dr. Nitin Swamy ... 34
  Issues with Sensor Fusion, Ann Mary Sebastian & Reecha Yadav ... 38
  Sensing the Future, Sushant Hingane ... 44

Scientist Profile
  Rudolf Emil Kalman, Srinivasa Bugga ... 9

Book Review
  Multi-Sensor Fusion: Fundamentals and Applications with Software, Milind Potdar ... 33


Editorial

Rahul Uplap, AVP
KPIT Technologies Limited, Pune, India

Please send your feedback to: [email protected]

This TechTalk edition brings you some exciting articles on 'Sensor Fusion', a much talked about topic in the automotive world lately. Rapid advances in technology have put a plethora of thrilling features into today's cars. Features we grew up watching only in the custom-made Aston Martin that James Bond drove in his movies are now a reality in our everyday cars.

Today, features like a car adjusting its suspension in anticipation of a pothole on the road ahead are no longer a dream but a reality, thanks to a combination of multiple technologies working flawlessly. The feature might sound simple, but one can appreciate the complexity of its implementation: a camera capturing an image of the road identifies the pothole, and its data is fused with that of a radar scanning the area to determine the exact distance of the pothole from the moving vehicle, creating precise intelligence for the suspension control system to adjust the suspension of each wheel independently. "Sensor Fusion" is the backbone that is making today's automobiles more and more intelligent.

As the world talks of self-driving cars as the certain future of automobiles, passenger safety and comfort are the two key drivers demanding computation of massive amounts of data within and around the car, so that specific information reaches the computing systems that take intelligent decisions. This has led to exponential growth in the complexity of handling multiple and different types of sensors. Sensor fusion is the solution: it optimizes the processing capability needed to handle large and diverse data and provides exactly the information required to realize control action in real-time.

Passenger safety and comfort demand that In-Vehicle, Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I) information be captured, processed and acted upon. This has brought in sensors like cameras, RADARs and LiDARs (Light Detection and Ranging) to reproduce the three-dimensional world around the car as it moves, mapping stationary as well as moving objects with utmost accuracy. Combined with information fed from various wireless technologies like DSRC, mobile networks, GPS and location mapping services like Google, cars are seamlessly connecting to other cars as well as to the surrounding infrastructure, providing vital information about hazardous situations like accidents and improving traffic efficiency through congestion reporting, dynamic traffic control, and more.

Processing information from this variety of sensors needs sophisticated data fusion architectures, frameworks and algorithms, which are fast emerging to handle large amounts of information in real-time. The articles in the following pages highlight all these facets of sensor fusion, and I am sure you will enjoy getting acquainted with them.


Need for Sensor Fusion

About the Authors

Pallavi Bhure
Areas of Interest: Embedded System Design and Model Based Design

Pallavi Kalyanasundaram
Areas of Interest: Image Processing, Computer Vision, Algorithm Development and UI Development


I. Introduction

Let us start with a simple example. A single camera (sensor) gives a view of the environment in 2D, equivalent to a single human eye; adding another camera at a specific angle and position allows the same environment to be visualized in 3D. This 3D data can be processed further in numerous applications, such as calculating the depth of an object. Recently, with the advent of the "smart city" concept, the dependency on sensor fusion for controlling home appliances remotely has increased to a great extent.

Based on the above scenarios, sensor fusion can be precisely defined as: "The art of processing data from multiple sensors with an aim to replicate a physical environment or induce intelligence to control a phenomenon with increased precision and reliability."

Sensor fusion, which has its roots buried in military applications, has spread across different horizons including automotive, consumer electronics, medical, industrial control systems, robotics, diagnostics and more. Some interesting applications include augmented reality, video games (motion gaming), wearable devices used for health monitoring, and object detection and tracking (surveillance). According to Semico research, the number of systems incorporating sensor fusion is predicted to grow from 400M units in 2012 to over 2.5B units in 2016, an annual growth rate of almost 60% [1].

This article is organized as follows. Section II describes the current maturity level of sensor fusion, highlighting the latest technological trends. Section III gives a brief description of the usage of sensor fusion across different applications and technologies in various domains of the automotive sector. Section IV gives a glimpse of what is in store for the future of sensor fusion.

II. Technological Trends

In order to get a complete picture of the trends in a particular technology, it is first necessary to get well-versed with its underlying components and techniques. Figure 1 helps us understand the different levels at which multi-sensor data fusion can occur. Every application falls under one or a combination of the following fusion levels.

Figure 1: Levels of multi-sensor data fusion (sensor data from the physical environment passes through data level, feature level and decision level fusion to model the target system)

Data level fusion mainly takes signals (e.g. data from RADAR) and/or images (e.g. data from a camera) as raw data from the sensors. Fusion at this level generally acts as an additional signal/image processing stage, depending on the input data. A few common implementation methods include weighted averages and Kalman filters for signals, and logical filters and image algebra for pixels.

Feature level fusion aims to obtain attributes (features) like shape, depth, speed, etc., from the sensor data or from the data level fusion stage. Implementation methods include geometric transformations.

Decision level fusion is used to reach a decision based on its input. Feature level and decision level fusion play an important role in object recognition. Bayesian estimation is popular at this level of fusion.

A complete, in-depth treatment of the fusion levels can be found in [2]. Additionally, there are other classification methods available for sensor fusion, which are described by Wilfried Elmenreich [3].
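As an aside, a minimal sketch may make the weighted average method named under data level fusion concrete. The Python snippet below fuses two scalar range readings by an inverse-variance weighted average; the sensor pairing and the noise figures are invented purely for illustration.

def fuse_weighted_average(readings, variances):
    """Fuse scalar readings, weighting each by 1/variance so the
    less noisy sensor contributes more to the fused estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * r for w, r in zip(weights, readings)) / total
    fused_variance = 1.0 / total  # lower than any individual variance
    return fused, fused_variance

# Example: two sensors observing the same target distance (metres),
# e.g. a radar (assumed less noisy here) and a camera-derived range.
print(fuse_weighted_average([10.3, 10.1], [0.04, 0.09]))  # ~ (10.24, 0.0277)

Note how the fused variance is smaller than either input variance: combining the two sensors yields an estimate better than the best single sensor, which is the basic promise of sensor fusion.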

Now, creating anything from scratch needs an investment of resources, time and money. To stand tall in the current competitive industry, one can adopt solutions that reduce design and integration effort, lower design risk and accelerate time-to-market. Following this path, the current inclination is towards adopting the following.

1) System on Chip (SoC) Solutions: Incorporating multiple sensors on a single hardware platform urges the need for optimized integration, thereby increasing the burden on System on Chip (SoC) solutions. An SoC is basically an Integrated Circuit (IC) which incorporates all the components of a system into a single chip. In line with this, miniaturization of sensors will play an important role as the number of sensors in a system gradually increases.

2) Sensor Hubs: A sensor hub is a processing unit (e.g. a microcontroller) which integrates and processes data from different sensors to reduce the load on the central processor, thereby improving performance and reducing power consumption.

3) Sensor IP (Intellectual Property) Subsystems: Pre-integrated sensor and actuator-specific IP blocks together with software in a single subsystem. Example: Synopsys's DesignWare Sensor and Control IP Subsystem [4].

4) Intelligent Sensing Frameworks (ISFs): ISFs can be categorized under IP subsystems. Example: the ISF from Freescale allows the designer to focus on designing algorithms and applications rather than on complex sensor integration. It provides open APIs and sensor drivers, simplifying sensor data acquisition [5].


ARM and Sensor Platforms Inc. are extending their collaboration to the Open Sensor Platform (OSP), an open source framework to simplify the development of embedded sensor-based products utilizing the ARM® architecture [6]. Such initiatives set the stage for a standardized way of developing new solutions on a common platform.

Sensor fusion forms a composite understanding from two or more sensors and can take place in either a distributed or a centralized system. Additionally, RADAR and LiDAR can actually diminish the requirement for camera processing because they provide accurate 3D information to detect and classify objects [7].

A current trend combines a Microcontroller Unit (MCU) with three or more MEMS (Micro Electro Mechanical System) sensors in a single package. One example is STMicroelectronics' LIS331EB. It combines a high-precision 3-axis digital accelerometer with a microcontroller in a single package. The LIS331EB can also internally process data sensed by external sensors (for a total of nine), such as gyroscopes, magnetometers and pressure sensors. Functioning as a sensor hub, it fuses together all inputs with the iNEMO Engine software. STMicroelectronics' iNEMO engine sensor fusion software suite applies a set of adaptive prediction and filtering algorithms to make sense of (or fuse) the complex information coming from multiple sensors [8].

Having gained these insights into the various dimensions of sensor fusion, one can now appreciate the place of sensor fusion in different areas of the automotive domain, which is covered in Section III.

III. Sensor Fusion in Automotive Domain

With cars becoming "smarter", the number of sensors integrated into the automobile is bound to keep increasing. These sensors provide improvements from the perspective of performance, safety, comfort, efficiency, environmental protection, driver assistance and other features related to transport. Moreover, these sensors have become indispensable components of modern automobiles. The automotive sector can be broadly classified into the following domains: 1. Engine electronics 2. Transmission electronics 3. Chassis electronics 4. Active safety 5. Driver assistance 6. Passenger comfort 7. Entertainment systems. A brief discussion is provided on how sensor fusion is spread widely across these domains.

In control applications for engine and powertrain subsystems, the quantity and the diversity of sensors used have increased exponentially. Sensors are part of subsystems like engine control, seat control, navigation, etc. This sensor data is available on the different buses as a "pass-through" from different electronic functional blocks. For example, speed information is extracted from the wheel-speed sensor and can be accessed by the ABS system. This speed information is made available to the powertrain bus participants and to all other buses through the gateway, where it is used by the radio for volume adaptation or by the rain sensor for adaptive wiping.

Figure 2 shows a functional view of the data flow in a fully equipped sensing and control system in a vehicle. Input sensors include GPS, Inertial Measurement Units (IMU), cameras, LiDAR, RADAR and ultrasound. Each sensor has a sensor processing unit to process raw data in order to create an object representation that can be used by the next stage in the hierarchy of sensor fusion. Figure 2 shows different types of sensor fusion occurring at various levels. Consider an instance where raw data from cameras is fused to extract depth information, which is then combined with additional information about nearby vehicles coming from dedicated short-range communication (DSRC).

Figure 2: Functional view of data flow in vehicle's sensing and control system [7] (sense: GPS, IMU, cameras, radars, 3D scanning lidars and ultrasound sensors feed sensor processing units; understand: sensor fusion combines object parameters such as time stamp, dimensions and position/velocity with 3D maps, V2V/V2I communication, a priori map information and driver state; act: an action engine decides to do nothing, warn, complement or control, driving vehicle controls such as brake/acceleration and steering and a visualization/display sub-system)

The uncertainty of the driving environment makes driving a very dangerous task. According to a study in European member states, there are more than 1,200,000 traffic accidents a year, with over 40,000 fatalities. This shows the growing demand for automotive safety systems [9]. Therefore there is huge interest in active safety systems in the automotive industry. External sensors are increasingly important; examples are RADAR and camera systems. Today, a sensor is usually connected to a single function. However, all active safety systems need information about the vehicle surroundings, such as lane geometry and the positions of other vehicles. The use of sensor fusion to replace redundant and costly sensors with software has therefore attracted recent attention. The sensors are divided into a number of subgroups: internal sensors measuring the motion of the vehicle, external sensors measuring objects surrounding the vehicle, and sensors communicating with other vehicles and the infrastructure. The communication is made possible by the Controller Area Network (CAN).

As shown in Figure 3, ADAS systems will also use new wireless, self-powered accelerometers and strain gauges in tires, integrated with pressure/temperature sensors. These sensors will be able to calculate tire-road friction and tire forces. Integration with in-vehicle cameras and human body sensors will enable monitoring of driver-vehicle interaction.

Figure 3: Intelligent tire system by Continental [10]

Current infotainment systems offer complete information, entertainment and communications capabilities. They include 3D and augmented navigation, multimedia support and smart apps for device integration, high speed connectivity, good user interfaces and a new generation of automotive cloud services. Sensor fusion is used to collect and aggregate camera, sensor and on-board diagnostic data. This data is then integrated with the navigation system for more accurate and intelligent routing.

IV. Future of Sensor Fusion

One of the most exciting technologies today is the development of autonomous vehicles, and automakers continuously announce their progress in this direction. Looking ahead, semi-autonomous and fully autonomous vehicles will be driven on the roads alongside traditional vehicles; eventually, new vehicles will be able to drive themselves, changing our lives dramatically. Sensor fusion plays a major role in developing an autonomous vehicle. Future research in this area should be devoted to defining a single technology to cover all the specifications required by the applications. Additionally, with the Internet of Things (IoT) creating a lot of buzz in the automotive industry, the near future will witness disruptive innovations from the integration of sensor fusion and IoT.

V. Conclusion

Thanks to simultaneous advances in hardware platforms, signal/image processing algorithms and sensor technologies (optical, thermal, wireless, MEMS, etc.), real-time sensor fusion has come a long way and there is no looking back. As described in Section III, sensor fusion has allowed the automotive domain to enjoy some great applications which were once just concepts in our imagination. Sensor fusion has also played a big role in adding intelligence to a system. Subsequently, sensor fusion has shown immense potential to replicate the ultimate intelligent system, the HUMAN BRAIN.

References

[1] "DesignWare Technical Bulletin", Synopsys, 2013. Available at: https://www.synopsys.com/Company/Publications/DWTB/Pages/dwtb-sensor-subsystem-2013Q3.aspx
[2] Luo, Ren C., and Michael G. Kay, "A tutorial on multisensor integration and fusion", IECON '90, 16th Annual Conference of IEEE Industrial Electronics Society, IEEE, 1990.
[3] Wilfried Elmenreich, "An Introduction to Sensor Fusion", research report, 2001.
[4] "DesignWare Sensor and Control IP Subsystem", Synopsys, 2015. Available at: https://www.synopsys.com/dw/ipdir.php?ds=sensor_subsystem
[5] "Freescale Intelligent Sensing Framework", Freescale Semiconductor. Available at: http://www.freescale.com/products/sensors/intelligent-sensing-framework:INTELLIGENT-SENSING-FRAMEWORK
[6] "ARM and Sensor Platforms Deliver an Open Source Framework for Sensor Devices", 24 June 2014. Available at: https://www.arm.com/about/newsroom/arm-and-sensor-platforms-deliver-an-open-source-framework-for-sensor-devices.php
[7] Fernando Mujica, "Scalable electronics driving autonomous vehicle technologies", Kilby Labs, Texas Instruments, April 2014. Available at: http://www.ti.com/lit/wp/sszy010a/sszy010a.pdf
[8] Morrie Goldman, "Sensor Fusion Comes of Age", Mouser Electronics. Available at: http://www.mouser.in/applications/sensor-fusion-age/
[9] Mahdi Rezaei and Reza Sabzevari, "Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems", Sensor and Data Fusion, In-Teh, Croatia, pp. 141-166, 2009.
[10] "New sensor fusion approach recognizes rain, snow and ice on the road", Continental, Oct 12, 2010. Available at: http://www.continental-corporation.com


Scientist Profile

Rudolf Emil Kalman
Born: May 19, 1930

Rudolf Emil Kalman is an electrical engineer, mathematician, researcher and inventor. He is best known for his co-invention and development of the Kalman filter, a mathematical algorithm widely used in signal processing, control systems, and guidance, navigation and control, in particular in aviation. He is a creator of modern control theory and system theory.

Rudolf was born in Budapest, Hungary on May 19, 1930. As the son of an electrical engineer, Rudolf followed in his father's footsteps and pursued a career in mathematics. In 1943, during World War II, Rudolf immigrated to the United States along with his family. He earned his Bachelor's and Master's degrees in Electrical Engineering at the Massachusetts Institute of Technology (MIT), Cambridge, in 1953 and 1954 respectively, and received a Doctorate of Science in 1957 from Columbia University, New York. During his years at MIT and Columbia, Rudolf showed wide interest in control systems. His study and research involved engineering mathematical applications, such as a controlling device that converts a given stream of data or other input into a desired output. A familiar example of such mathematically engineered control is a device installed on an automobile engine to limit the top speed of the vehicle. In the later part of his career, he began demonstrating an individualistic approach to research.

In 1958, he moved to Maryland, where he was employed as a Research Mathematician at the Research Institute for Advanced Studies (RIAS) in Baltimore. Rudolf stayed at RIAS until 1964, first as a research mathematician and then as associate director of research. His advanced work involved programming robots and machines to respond to continuously changing conditions as well as to maintain self-control. One such application is the automatic pilot system installed in airplanes, which can prevent an unmanned craft from crashing into the ground.

Rudolf conducted innovative research on fundamental system concepts such as controllability and observability, and developed solid theories on the structural aspects of engineering systems. He established the theory and design of linear systems using quadratic criteria, introducing the analytical work of Constantin Caratheodory into optimal control theory. He revealed the interrelations between the Russian mathematician Lev Pontryagin's maximum principle, the Hamilton-Jacobi-Bellman equations and the calculus of variations.

The most important part of Rudolf's research was the development of the Kalman filter at RIAS. The Kalman filter involves a set of algebraic equations used to solve real-time problems. The technique is widely used in the digital computers of control systems, navigation systems and avionics, and in outer space vehicles to trace signals from long sequences of electrical and gyroscopic measurements. In 1960, Rudolf visited the NASA Ames Research Center to present his ideas on the proposed filter during the Apollo program; the Apollo 11 lunar module that landed on the moon in July 1969 used Kalman's filter. The filter is also used to solve non-linear problems in modern military and commercial control systems: in the NASA space shuttle, navy submarines, radar tracking algorithms for anti-ballistic missile applications, satellite orbit determination, data processing, nuclear power plant instrumentation and even Global Positioning System (GPS) navigation.

Rudolf was a professor at Stanford University during 1964-1971. He then served as Graduate Research Professor and Director at the Center for Mathematical Systems Theory (CMST), University of Florida, from 1971 to 1992, where he accomplished outstanding research work. From 1973 he also held the chair for mathematical system theory at ETH (the Swiss Federal Institute of Technology), Zurich.

Awards & Honors
- "Outstanding Young Scientist of the Year" award from the Maryland Academy of Science, 1962
- IEEE Medal of Honor, 1974
- Rufus Oldenburger Medal, 1976
- IEEE Centennial Medal, 1984
- Kyoto Prize in Advanced Technology, 1985
- Steele Prize, 1987
- Richard E. Bellman Control Heritage Award, 1997
- Charles Stark Draper Prize from the National Academy of Engineering, 2008
- National Medal of Science, presented in 2009 by Barack Obama, the President of the United States, for his research work and innovation

Rudolf is a member of the National Academy of Sciences (USA), the National Academy of Engineering (USA) and the American Academy of Arts and Sciences (USA). He is a foreign member of the Hungarian, French and Russian Academies of Science. He has received many honorary doctorates from various institutions and academies, and has influenced many researchers through his numerous significant lectures.

In addition, Rudolf has published more than 50 technical articles. He has contributed significantly to the advances in digital computing through the Kalman filter, and his contribution to modern system theory has led to the development of many mathematical tools in engineering, statistics and econometrics. The Kalman filter opened the door to using computers in control and communications technology.

Author: Srinivasa Bugga
Areas of Interest: Parallel Processing, Cryptography and Model Based Development


Sensor Characteristics

About the Authors

Smitha K P
Areas of Interest: Multicore Programming and Parallel Computing

Varsha Phatak
Areas of Interest: Embedded Firmware Development, Automation and Automotive Embedded Design


I. Introduction

Automation and control systems are more than just data processing units; they interact directly with the surrounding physical world. A control system, being a subset of mechatronics, directs and regulates other devices or subsystems. Measurement is an essential function required by any mechatronic system. It consists of data acquisition from the surrounding environment, from other subsystems or from the system under test itself. The obtained data can be fed as input to a microprocessor or microcontroller for controlling the whole system. Any measurement system comprises sensors, transducers and signal processing devices. Sensors are devices which detect physical signals from the surroundings, such as temperature, light, sound, motion or any other parameter, and convert them into measurable output signals that can be used for further processing. Figure 1 shows the role of the sensor in an embedded application.

II. Classification of Sensors

Depending on the purpose of classification, different criteria can be selected. Some of these criteria are discussed here.

A. Active and Passive Sensors

Sensors can be classified as active or passive based on the energy source they are connected to. Sensors which require external power, also known as an excitation signal, to perform their operation are referred to as active sensors. The sensor modifies this input signal to generate the output signal: its own properties change in response to the external effect, and these properties can subsequently be converted into an electric signal. Hence active sensors are also called parametric sensors. Thermistors and resistive strain gauges are examples.

Sensors which directly generate electrical signals in response to an external stimulus are called passive sensors. Passive sensors do not need an additional source of energy, and are also known as self-generating sensors. Thermocouples, photodiodes and piezoelectric sensors are some examples of passive sensors.

B. Absolute and Relative Sensors

Depending upon the reference selected for the stimulus, sensors can be classified as absolute or relative. Absolute sensors detect input in reference to an absolute physical scale which is independent of measurement conditions. A temperature-sensitive resistor is a good example, since its electrical resistance relates directly to the absolute temperature scale. Another example is the absolute pressure sensor, which measures pressure relative to vacuum, as shown in Figure 2 [1].

Sensors which measure the input relative to a fixed or variable reference are known as relative sensors. A pressure gauge is one example; it produces an output signal with respect to atmospheric pressure, as shown in Figure 2 [1].

C. Stimulus

Sensors can also be classified according to the property they measure, i.e. the phenomenon they sense to generate output. Figure 3 shows different types of sensors classified by stimulus.

D. Transduction Principles

Classification can also be done on the basis of the sensor's working principle and transduction property. On this basis a sensor could be photoelectric, magneto-electric, thermoelectric, photoconductive, photo-magnetic, electrochemical, spectroscopic, biological, or based on chemical or physical transformation, etc.

Figure 1: Role of sensor in embedded application (a stimulus from an object or energy source is picked up by the sensor/detector, passes through signal processing/amplification to a data collector and interpreter, and is displayed or recorded to drive the decision process, response, result and action)

Figure 2: Absolute vs relative pressure sensor (the absolute sensor measures against a vacuum reference; the relative sensor measures against atmospheric pressure)

Figure 3: Different types of sensors according to stimulus
- Acoustic: wave (amplitude, phase, polarization), spectrum, wave velocity, etc.
- Bio-chemical: fluid concentration (liquid, gas), conductance, etc.
- Optical: refractive index, reflectivity, absorption, etc.
- Magnetic: magnetic field (amplitude, phase, polarization), flux, etc.
- Electric: voltage, current, charge, conductivity, etc.
- Thermal: temperature, humidity, pressure, etc.
- Mechanical: position, velocity, acceleration, torque, force, strain, stress, etc.


E. Material Used and Field of Application

Sensors can be classified by the material used in their manufacture: for example, conductors, insulators, biological substances, etc.

Sensors can also be classified according to their field of application, such as agriculture, automotive, distribution, environment, manufacturing, meteorology, energy, telecommunication, health, military, space, power information, scientific measurement, etc.

Apart from the above criteria, sensors are also classified based on specifications, physical attachment (contact and non-contact type), etc.

III. Characteristics of Sensors

Sensors may involve several conversion steps between input and output before the actual output signal is generated. Because of this, the output they produce may not always be perfect or as expected. There are many performance related parameters, called specifications, which give an idea of a sensor's deviation from ideal behavior. The characteristics of sensors can be divided into two broad categories: static and dynamic.

A. Static Characteristics

Static characteristics are the ones which can be measured after all transient effects have stabilized to their final steady state values. The transfer function is the relation between sensor input and output, and the most convenient sensor to use is one with a linear transfer function. Information like saturation, sensitivity, full scale range, hysteresis, etc., can be obtained by analyzing the transfer function of a sensor. Figure 4 shows the transfer function of a sensor, which is almost linear; the region T1-T2 represents the most useful range of the sensor data. The expectation is that the sensor reproduces the exact behavior of the stimulus (or its changes) in the output signal. To achieve this, the sensor should have a linear response within some specified range.

Static characteristics of sensors are as follows.

Accuracy

Accuracy is the deviation of the value indicated by the sensor from the actual value of the stimulus. It is measured in terms of absolute or relative error and expressed either as a percentage of full scale or in absolute terms.

Precision

Precision gives us an idea of how well a sensor can reproduce the same output when the same precise input is applied several times. A term closely related to precision is repeatability; repeatability error is caused by the inability of a sensor to reproduce the same output under identical conditions.

As an example, three load cells are tested for repeatability: the same load (50 kg) is placed on each load cell 10 times, and the resulting data is tabulated as shown in Figure 5 [2]. Analyzing the data, we can conclude that load cell A is accurate on average but not repeatable, load cell B is repeatable but not accurate, and load cell C is both accurate and repeatable.

Response Time

Response time is the time required by the sensor output to change from its previous state to its final settled value in response to a step-wise change in the input.

Linearity

Linearity is the maximum deviation from a linear relation between the input and output of a sensor. The output of the sensor should be linearly proportional to the measured quantity.


Figure 4: Transfer function of a sensor (output R versus temperature input; the region between T1 and T2 is the nearly linear, most useful range of the sensor)

Figure 5: Table of load cell output (mV) for ten trials with the same 50 kg load

Trial No.    A        B        C
1            10.02    11.50    10.00
2            10.96    11.53    10.03
3            11.20    11.52    10.02
4             9.39    11.47     9.93
5            10.50    11.42     9.92
6            10.94    11.51    10.01
7             9.02    11.58    10.08
8             9.47    11.50    10.00
9            10.08    11.43     9.97
10            9.32    11.48     9.98
Maximum      11.20    11.58    10.08
Average      10.09    11.49     9.99
Minimum       9.02    11.42     9.92
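The accuracy/repeatability distinction in this example can be checked numerically. Below is a small Python sketch over the Figure 5 data, taking the spread of readings as the repeatability band and the mean offset as the accuracy error; the 10.00 mV nominal output for the 50 kg load is an assumption made for illustration.

from statistics import mean

cells = {
    "A": [10.02, 10.96, 11.20, 9.39, 10.50, 10.94, 9.02, 9.47, 10.08, 9.32],
    "B": [11.50, 11.53, 11.52, 11.47, 11.42, 11.51, 11.58, 11.50, 11.43, 11.48],
    "C": [10.00, 10.03, 10.02, 9.93, 9.92, 10.01, 10.08, 10.00, 9.97, 9.98],
}
EXPECTED_MV = 10.00  # assumed nominal output at 50 kg

for name, data in cells.items():
    spread = max(data) - min(data)   # repeatability error band
    bias = mean(data) - EXPECTED_MV  # mean offset from nominal (accuracy)
    print(f"cell {name}: spread = {spread:.2f} mV, mean error = {bias:+.2f} mV")

# cell A: spread = 2.18 mV, mean error = +0.09 mV  (accurate, not repeatable)
# cell B: spread = 0.16 mV, mean error = +1.49 mV  (repeatable, not accurate)
# cell C: spread = 0.16 mV, mean error = -0.01 mV  (accurate and repeatable)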


Resolution

Resolution is the smallest change in the stimulus that produces a detectable change in the sensor output. The output of a sensor will not be perfect when the stimulus varies continuously over a range; resolution gives an idea of how well the sensor can produce an output for the smallest change in input.

Range

The range of a sensor indicates the maximum and minimum limits within which the input can vary.

Sensitivity

The ratio of the change in output to the change in input at steady state conditions is referred to as sensitivity. It is given by:

Sensitivity (K) = Change in sensor output / Unit change in measured parameter

Sensitivity can be linear or nonlinear.

Hysteresis

Due to structural changes in the material and friction in the sensor, hysteresis error may be introduced in the generated output. Hysteresis error is the deviation of the output at a specified input point when that point is approached from opposite directions [3]. To illustrate, the output of a load cell is recorded from 0 to maximum input load in increasing order, and then, with the same load cell, in decreasing order, i.e. from maximum back to 0 load. Plotting these results in a graph, as shown in Figure 6, reveals that the maximum hysteresis error is observed at a load of 55 kg [2].

Offset

The output of a sensor that exists when it is expected to be zero is referred to as offset error. In other words, it is the difference between the actual output and the rated output under rated conditions [5]. Offset occurs due to environmental changes, sensor calibration error or sensor decay over a period of time.

B. Dynamic Characteristics

Dynamic characteristics explain the behavior of a sensor when its input changes. Realistically, sensor output does not change instantaneously with a change in stimulus. Due to properties like mass and electrical, fluid or thermal capacitance, a delay is encountered while the sensor waits for some reaction to take place; dynamic characteristics give an idea of this transient behavior of sensors [4]. The dynamic characteristics of a sensor can be evaluated by subjecting it to known, predetermined variations in the measured quantity. Depending upon the response produced, sensors can behave as zero, first or second order systems. A zero order sensor produces an output proportional to the input irrespective of how the input varies; in equation form, output = proportionality constant × input. Sensors do take time to reach the actual output when an input is applied; the terms first order and second order are used because the relationship between input and output is described by first and second order differential equations respectively. A thermometer is an example of a first order system. Figure 7 illustrates zero order and first order sensor responses to a step input [6]. The second order response of a sensor is its response to a periodic signal; an accelerometer is one example. The important dynamic characteristics of sensors are given below.

Speed of Response

Speed of response gives an idea of how fast the sensor responds to changes in the stimulus.

Frequency Response

Frequency response is the ratio of output change to input change, together with the phase difference between input and output, as a function of input frequency.

Rise Time

Rise time is the time required by the sensor output to go from 10% to 90% of its full response when an input is applied.

Settling Time

Settling time is the time taken by the sensor to reach a steady state output, within a specified tolerance band, after the application of a step increase in input.

Figure 6: Load cell output showing hysteresis (maximum hysteresis = 9.75% FSO at 55 kg)

Load (kg)   Increasing (mV)   Decreasing (mV)   Hysteresis (%FSO)
0            0.08              0.06              0.10
5            0.45              0.88              2.15
10           1.02              2.04              5.10
15           1.71              3.10              6.95
20           2.55              4.18              8.15
25           3.43              5.13              8.50
30           4.48              6.04              7.80
35           5.50              7.02              7.60
40           6.53              8.06              7.65
45           7.64              9.35              8.55
50           8.70             10.52              9.10
55           9.85             11.80              9.75
60          11.01             12.94              9.65
65          12.40             13.86              7.30
70          13.32             14.82              7.50
75          14.35             15.71              6.80
80          15.40             16.84              7.20
85          16.48             17.92              7.20
90          17.66             18.70              5.20
95          18.90             19.51              3.05
100         19.93             20.02              0.45

At 55 kg: (11.80 mV - 9.85 mV) / 20 mV × 100% = 9.75% FSO
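The 9.75% figure can be reproduced with a few lines of Python; the sketch below applies the 20 mV full scale output from Figure 6 to a subset of the tabulated readings.

FSO_MV = 20.0  # full scale output of the load cell, per Figure 6

increasing = {50: 8.70, 55: 9.85, 60: 11.01}    # output (mV) on the way up
decreasing = {50: 10.52, 55: 11.80, 60: 12.94}  # output (mV) on the way down

for load in sorted(increasing):
    # hysteresis = output difference as a percentage of full scale output
    hyst = (decreasing[load] - increasing[load]) / FSO_MV * 100.0
    print(f"{load} kg: {hyst:.2f} %FSO")
# 55 kg gives (11.80 - 9.85) / 20 * 100 = 9.75 %FSO, the worst case.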

Figure 7: Zero vs first order sensor response (for a step input, a zero order sensor's output tracks the input instantaneously, while a first order sensor's output rises gradually towards the final value)


Fidelity

Fidelity is the degree of exactness with which a sensor reproduces changes in its input in the output, without dynamic error.

Lag

Lag is the delay in the response of a sensor to a change in the stimulus.
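To tie the rise time and settling time definitions to the first order behavior shown in Figure 7, here is a minimal Python sketch assuming the standard first order step response y(t) = 1 - exp(-t/tau); the time constant value is illustrative only.

import math

def first_order_response(t, tau):
    """Fraction of the final value reached t seconds after a unit step."""
    return 1.0 - math.exp(-t / tau)

tau = 0.5  # assumed sensor time constant, seconds
rise_time = tau * math.log(9.0)   # 10% -> 90% takes tau*ln(9), about 2.2*tau
settling_time = 4.0 * tau         # within ~2% of final value after ~4*tau
print(f"rise time ~ {rise_time:.2f} s, 2% settling time ~ {settling_time:.2f} s")
for t in (0.5, 1.0, 2.0):
    print(f"t = {t:.1f} s -> {100 * first_order_response(t, tau):.1f}% of final value")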

IV. Criteria for Choosing a Sensor for an Application

Sensors are as important to a product as sensory organs such as eyes and ears are to us. Human beings can manage to live without one of these organs, but we cannot design an intelligent product that overcomes a lack of sensory input. Hence, selecting the right sensors for a specific application is one of the important challenges in solution development. Some generic selection criteria are given below.

A. Availability and Cost

When we select a sensor for a particular product, we have to check whether it is universally available and whether the market for that sensor is stable. Once we design and implement a product with a selected sensor, it will be a great loss if we need to re-design the product around another sensor because of a lack of availability.

Different sensors can often serve a particular purpose; the one we select should be reliable and cost effective. Reliability and cost effectiveness are two sides of the same coin when selecting a sensor for a product.

B. Size and Available Space

One important parameter while selecting a sensor is the space available where the sensor is to be mounted. The size of the sensor is therefore an important factor: the sensor should fit in the pre-designed area allotted for it.

C. Ease of Use and Maintenance

The sensor we select should have user friendly interfaces and a proper user manual that helps anyone use it easily. The series 66 fiber optic sensors, shown in Figure 8, enhance ease of use in fiber optic applications. Compared to other fiber optic sensors, this sensor is user-friendly with a neatly arranged interface. Sensing distance and sensitivity can be adjusted easily using three different sensing modes, within which the sensitivity is optimized automatically. An auto-tuning facility learns the threshold, facilitating the teach-in process.

D. Required Signal Processing

A sensor should be selected based on the range of processing required for the particular application. If we select a sensor with insufficient measuring capability, the application will function incorrectly; on the other hand, a sensor with an excessive measurement range can be unnecessarily costly.

V. Conclusion

In the current world, sensors play a vital role in product development; there would be no automation without sensors. A sensor is designed to sense a specific measure and to respond only to that measure. It is important to have complete knowledge of the different classifications and characteristics of a sensor while choosing it for a particular application, and it is often essential to obtain these characteristics in detail during sensor selection for our developments. We can look forward to seeing how sensors, through their importance, conquer our world.


References

[1] http://www.ni.com/white-paper/3639/en/
[2] http://pioneer.netserv.chula.ac.th/~tarporn/487/HandOut/StaticC.pdf
[3] Fraden, Jacob. Handbook of Modern Sensors: Physics, Designs, and Applications. Springer Science & Business Media, 2004.
[4] Kalsi, H. S. Electronic Instrumentation, 3e. Tata McGraw-Hill Education, 2010.
[5] http://www.ni.com/white-paper/14860/en/#toc7
[6] http://eleceng.dit.ie/gavin/Instrument/Dynamic/Dynamic%20Characteristics.pdf
[7] http://www.baumer.com/int-en/latest-news/newsroom/news/details/artikel/new-fiber-optic-sensors-enhanced-ease-of-use/

Figure 8: Series 66 fiber optic sensors [7]


Sensor Fusion Algorithms

About the Authors

Jiji Gangadharan
Areas of Interest: Computer Vision and Image Processing Algorithm Development

Anusha Baskaran
Areas of Interest: Computer Vision and Image Processing Algorithm Development


I. Introduction

A sensor is a device that perceives behavioral changes of an object in the ambient environment and provides the corresponding output. A few examples of sensors are RADAR, SONAR, camera, GPS and infrared. Information from disparate sources, taken collectively, has less uncertainty than when these sources are used individually. Multisensor fusion [3] [4] [5] [7] is the process of fusing observations from various sensors to provide robust information about the environment.

Due to complexity and unknown underlying phenomena, fusion of completely raw data is not always possible. Data integration involves either parallel processing followed by decision-making or sequential processing using multivariate features. The order in which information is selected from the sources has to be chosen so that the modalities can interact and share information.

Most sensor fusion techniques [5] are based on probability theory, particularly Bayes' rule. Fusion is accomplished using methods such as the Kalman filter, sequential Monte Carlo methods, functional density estimates, etc. Non-Bayesian sensor fusion techniques are based on interval methods, fuzzy logic and the theory of evidence. In this article, we deal with a few of the well known fusion techniques.

II. Probabilistic Based Data Fusion

A. Overview of Bayes' Rule

Bayes' theorem [1] [2] [5] is formulated from the basics of probability, starting from the definition of probability as "the frequency of occurrence of an event". Using Bayes' rule, we can describe an event (E) from a number of observations made in experiments (Z). It is given by:

P(E|Z) = P(Z|E) * P(E) / P(Z)    (1)

where P(E) is the probability of the event E (the prior information), P(Z|E) is the probability of the experiment Z given the event E (the likelihood), and P(Z) is a normalizing factor.

Figure 1: Illustration of Bayes' rule. E is organ system damage and Z is excess lead presence in the blood, with prior P(E) = 0.01, P(Z) = 0.103 and likelihood P(Z|E) = 0.8 (among consumers of the staple food, 80% of people having a high content of lead suffer organ damage). Using equation (1), the chance of organ system damage given more lead presence in the blood is P(E|Z) = 7.8%.

Let us discuss the role of Bayesian inference in relating the information from different sensor sources. For m sensor sources, given the available prior information on e and the corresponding likelihood functions P(Z_1|e), ..., P(Z_m|e), Bayes' rule becomes

P(e | Z_1, ..., Z_m) \propto P(e) \prod_{i=1}^{m} P(Z_i | e)    (2)

i.e. the product of the individual likelihoods from the sensor sources gives the posterior probability of e given all observations Z_i.

The recursive form of Bayes' rule at a given instant N for each sensor is given by (3):

P(e_N | Z^N) = P(Z_N | e_N) * P(e_N | Z^{N-1}) / P(Z_N | Z^{N-1})    (3)

where
e_N is the state vector to be predicted at time N,
e^N = {e_1, e_2, ..., e_N} = {e^{N-1}, e_N} is the history of states,
Z_N is the observation of state e_N at time N, and
Z^N = {Z_1, Z_2, ..., Z_N} = {Z^{N-1}, Z_N} is the history of state observations.

Similar to equation (1), equation (3) gives the posterior density: it includes a complete summary of all past information at instant N-1. When new information arrives at time N, the previous posterior acts as the current prior and yields the new posterior density. For a given instant N, the two processes to be followed are the observation update, given by equation (3), and the prediction step, given by

P(e_N | Z^{N-1}) = \int P(e_N | e_{N-1}) * P(e_{N-1} | Z^{N-1}) \, de_{N-1}    (4)

Thus the future state depends only on the current state at time instant N.

Based on equations (3) and (4), the complete family of Bayesian data fusion techniques, such as the Kalman filter, grid-based methods and sequential Monte Carlo methods, has been derived.
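Before moving on, a short Python sketch may help fix ideas. It applies equation (1) to the numbers of Figure 1 and then equation (2) to a two-sensor case; the second sensor's likelihoods are invented for the example.

def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(E|Z) = P(Z|E) * P(E) / P(Z), equation (1)."""
    return likelihood * prior / evidence_prob

# Figure 1: P(E) = 0.01, P(Z|E) = 0.8, P(Z) = 0.103 -> P(E|Z) ~ 7.8%
print(bayes_update(0.01, 0.8, 0.103))

def fuse_likelihoods(prior, lik_e, lik_not_e):
    """Equation (2): the posterior is proportional to the prior times the
    product of per-sensor likelihoods, normalised over E and not-E."""
    p_e, p_not = prior, 1.0 - prior
    for le, ln in zip(lik_e, lik_not_e):
        p_e *= le
        p_not *= ln
    return p_e / (p_e + p_not)

# Two independent sensors both report the event (likelihoods assumed):
print(fuse_likelihoods(0.01, [0.8, 0.7], [0.1, 0.2]))  # ~ 0.22

A second concurring sensor raises the posterior considerably, which is exactly the benefit the product form of equation (2) captures.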

(a) Kalman Filter

The Kalman filter [5] [6] [8] [9] [10] is a statistical recursive linear filter which estimates the process change of an object over time. It is a special case of Bayesian filtering in which the probability density of the states of the object is represented by a Gaussian distribution with mean µ and variance σ². In general, the Kalman filter helps to evaluate and track different features (viz. position, velocity, noise, etc.) from each sensor. The basic model equations of the states and measurements are given by (5).


Page 21: Sensor Fusion - A Need for Next Generation Automobiles

Where,

- state vector of interest at time ,

- control input,

- process noise

- linear combination of the state at instant and measurement noise Here, process and measurement noise are considered to be independent of each other. Statistically it follows the normal distributions that can be described as follows:

Practically, the process covariance Q and noise covariance R matrices change over the period, however they are assumed to be constant.

A, B, H are matrices describing the contribution of state controls and noise to the state transition at time k and observations respectively.

Once we build our model by using equations (5), we need to determine the necessary parameters. With our initial assumptions, the two steps will be performed iteratively. They a re t ime upda te (p red i c t i on ) and measurement update (correction), given by the Kalman filter equations in Table 1.

Consider a simple example to understand the working of Kalman filter. Assume that we have the following velocity measures of the bicycle (m/s) at every k sec interval. The sensor data L1, L2 is presented in Figure 2. The data is assumed to have Gaussian distribution with mean µ and standard deviation σ.

Model the given sensors data say L1 and L2 from the sensor 1 and sensor 2 respectively as shown in Figure 2. Model the data using the

equation (8) that follows the Gaussian distribution with mean µ and standard deviation σ. This modelled information is used to perform the data fusion using the Kalman filter.

From figure 3, P (L1) and P (L2) are modelled information from the sensor 1 and sensor 2 with the variance of and at time k. FI and P (FI) are fused Information and its corresponding modelled graph using the equation (8). As mentioned in the table 1, Kalman filter fuses the information by calculating the measurements ( ) and corresponding kalman gain from ‘m’ different sensor sources to update the state vector and covariance error . Though the given example follows the basic technique of kalman filter, assumptions of each parameter involved are applications based.

Kalman filter is considered to be the most useful and the simplest technique in the process of data fusion. But there are some constraints in using the Kalman filter technique for the fusion. The method is appropriate in approximating the location, altitude, speed using the features of the image, geometric parameters, etc. But it is not suitable for estimating the properties like spatial occupancy, discrete labels or processes whose error properties are not easily parameterized.
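For concreteness, the sketch below runs a scalar Kalman filter (A = 1, B = 0, H = 1) over the two velocity streams of Figure 2, applying one measurement update per sensor at each step as in Table 1. The Q and R values and the initial state are assumptions chosen for illustration.

sensor1 = [0.39, 0.50, 0.29, 0.25, 0.32, 0.34, 0.48, 0.41, 0.45, 0.36]
sensor2 = [0.35, 0.48, 0.27, 0.23, 0.31, 0.30, 0.45, 0.43, 0.42, 0.33]

Q = 1e-3           # assumed process noise covariance
R = (4e-3, 4e-3)   # assumed measurement noise covariance, one per sensor

x, p = 0.0, 1.0    # initial state estimate and covariance error
for z1, z2 in zip(sensor1, sensor2):
    p = p + Q                  # time update: p_k = A p A^T + Q (A = 1)
    for z, r in zip((z1, z2), R):
        k = p / (p + r)        # Kalman gain, per Table 1
        x = x + k * (z - x)    # state update using measurement z
        p = (1.0 - k) * p      # covariance update
    print(f"fused velocity estimate: {x:.3f} m/s (variance {p:.5f})")

Feeding both sensors through the same filter is one simple fusion scheme; the gain automatically trusts whichever source currently has the smaller combined uncertainty.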

(b) Monte Carlo Sequential Technique

Monte Carlo methods [5] [7] [11] [12] are characterized by repeated random sampling of values to obtain numerical results. They can be used to solve any problem given a probabilistic interpretation through Bayes' rule.

Sequential Monte Carlo methods are a set of simulation based methods which deliver a suitable approach to estimating posterior probabilities, especially for non-Gaussian, nonlinear, high dimensional data. The method samples sequentially and assigns weights to the sampled values to describe the probability distribution. The general procedure is as follows.

First, generate N samples e_{k-1}^i, i = 1, ..., N, at time (k-1); these form a set of possible support values in the state space.


Each sample is assigned a probability through its corresponding normalized weight w_{k-1}^i:

p(e_{k-1} = e_{k-1}^i) = w_{k-1}^i,  with  \sum_{i=1}^{N} w_{k-1}^i = 1    (9)

Next, new support values e_k^i are chosen on the basis of the old supports e_{k-1}^i: the prediction at the k-th step draws each e_k^i from the transition density p(e_k | e_{k-1}^i).

After the prediction, the weights and the probability density are updated using equations (10) and (11):

w_k^i = w_{k-1}^i * p(z_k | e_k = e_k^i) / \sum_{i=1}^{N} w_{k-1}^i * p(z_k | e_k = e_k^i)    (10)

p(e_k | z^k) = \sum_{i=1}^{N} w_{k-1}^i * p(z_k | e_k = e_k^i) * p(e_k | e_{k-1} = e_{k-1}^i)    (11)

Outliers can be eliminated by resampling based on the weights. To avoid only the most likely points being resampled, a condition is imposed: resampling is triggered when the effective number of particles in the sample falls to some fraction of the actual number of samples,

N_eff = 1 / \sum_{i} (w_k^i)^2 <= N_threshold    (12)

Consider Figure 4, an example of deciding on the detection of a random object based on multiple inputs from a camera and a radar. The fusion of data from these sensors can be done using the Monte Carlo technique. The individual sensor information from the local predictors can make use of scene understanding for the object detection; an appropriate weight is given to each sensor based on the likelihood of its data, and the result is approximated into a single valid piece of information.

Figure 4: Illustration to explain fusion of data captured from multivariate sensors

Monte Carlo methods are well suited to problems that are highly non-linear. Though Monte Carlo handles multi-modal density functions, the models p(e_k | e_{k-1}) and p(z_k | e_k) must be countable and representable in a simple parametric form. The method is inappropriate in the case of high dimensional state spaces, where defining promising sample points is challenging.
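A compact Python sketch of the sequential Monte Carlo procedure of equations (9)-(12) is given below; the random-walk motion model, Gaussian likelihood and synthetic measurements are assumptions made for the example.

import math, random

N = 500
particles = [random.gauss(0.0, 1.0) for _ in range(N)]  # samples e_{k-1}^i
weights = [1.0 / N] * N                                 # weights w_{k-1}^i

def step(particles, weights, z, motion_std=0.1, meas_std=0.2):
    # prediction: draw e_k^i from p(e_k | e_{k-1}^i), here a random walk
    particles = [e + random.gauss(0.0, motion_std) for e in particles]
    # (10): w_k^i proportional to w_{k-1}^i * p(z_k | e_k^i), then normalise
    weights = [w * math.exp(-0.5 * ((z - e) / meas_std) ** 2)
               for w, e in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # (12): resample when the effective sample size falls below a threshold
    n_eff = 1.0 / sum(w * w for w in weights)
    if n_eff < N / 2:
        particles = random.choices(particles, weights=weights, k=N)
        weights = [1.0 / N] * N
    return particles, weights

for z in (0.2, 0.25, 0.3):  # synthetic measurements
    particles, weights = step(particles, weights, z)
    estimate = sum(w * e for w, e in zip(weights, particles))
    print(f"posterior mean estimate: {estimate:.3f}")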

Sensory data are represented using probabilistic distributions, and this provides the necessary fused information within the Bayesian framework, making it easy to represent data uncertainty. In general, however, the approach fails when there are insufficient data features from the sensors.

We have dealt with a few techniques based on Bayesian theory. Techniques that follow the Bayesian framework work well for representing data uncertainty, but they still have their own limitations. These inherent bottlenecks are described below.

(i) Complexity: it is difficult to specify the large number of probabilities required by probability based techniques.

(ii) Inconsistency: it is hard to define a consistent set of probabilities and to derive a consistent outcome of interest.

(iii) Precision of models: probabilities assigned to events need to be quantitatively precise.

(iv) Uncertainty: when there is insufficient data from a source, assigning a probability to the respective event becomes difficult.

III. Non-Bayesian Based Data Fusion

In data fusion, the representation of data uncertainty is very important. Due to insufficient data in real world applications, the assignment of a probability to each state becomes difficult in Bayesian framework based techniques. In order to overcome these issues, non-Bayesian data fusion techniques have been proposed as an alternative to probabilistic data fusion.

A. Evidential Belief Reasoning

Dempster and Shafer [5] [7] [13] [14] formulated this technique using mathematical concepts related to probabilistic inference. The method analyzes, in a logical way, whether a fact is valid or not based on a set of beliefs; given reasonable evidence, the technique handles the representation of data uncertainty and imprecision. In Bayesian theory, for a given universal set X, probability density is assigned to each element xi ∈ X. In contrast, evidential reasoning assigns a belief mass m not only to all possible states xi ∈ X but also to all possible subsets of states in 2^X, i = 1, ..., N, where N is the number of possible propositions of the system X.

For instance, consider the mutually exclusive set of outcomes of an event, X = {won, lost}, with the mass assignments shown in Table 2.


Page 23: Sensor Fusion - A Need for Next Generation Automobiles

From Table 2, in the evidential reasoning technique the undistributed mass on the set {won, lost} represents a chance of either won or lost - an inability to distinguish between the two states, i.e. partial ignorance, which a single probability assignment (50% to each state) cannot express. Evidential reasoning thus provides a method of representing unawareness, or an incapability to differentiate between alternatives.

Table 2: Probability-based vs. evidential reasoning technique, assigning a probability 'p' and a belief mass 'm' respectively to the event X

    State of X      probability p    belief mass m
    won             0.5              0.6
    lost            0.5              0.1
    {won, lost}     -                0.3

Evidential reasoning methods provide a means of allocating and combining belief masses. The properties of the Dempster-Shafer evidential reasoning technique are defined as follows:

(i)  m(Φ) = 0, where Φ is the null set

(ii) Σ_{E ∈ 2^X} m[E] = 1    (13)

where m[E] is the belief mass for an evidence E, and E ranges over the subsets of X (i.e. E ∈ 2^X).

In addition, a probability interval for P[E] can be obtained from the belief masses as

Bel[E] ≤ P[E] ≤ Pl[E]    (14)

Bel[E] is the support, or belief measure, of E. With B ranging over the subsets contained in E, it is obtained by combining the belief masses of those subsets:

Bel[E] = Σ_{B ⊆ E} m[B]    (15)

Pl[E] is the plausibility, or likelihood, of E, obtained from the subsets B whose intersection with E is not the null set:

Pl[E] = Σ_{B ∩ E ≠ Φ} m[B]    (16)

Also, whereas in probability theory p[B] + p[~B] = 1, for every evidence E the supported beliefs in the true state (Bel[E]) and the false state (Bel[~E]) satisfy only Bel[E] + Bel[~E] ≤ 1, and belief and plausibility are linked through

Pl[E] = 1 - Bel[~E]    (17)

Consider the example from Table 2. Using equation (15), the belief measures are calculated as:

Bel[won] = m[won] = 0.6
Bel[lost] = m[lost] = 0.1
Bel[{won, lost}] = m[won] + m[lost] + m[{won, lost}] = 0.6 + 0.1 + 0.3 = 1.0

As already mentioned, a belief mass of zero means only that there is insufficient evidence, whereas in probability theory a zero probability means impossibility; unlike probability-based methods, evidential reasoning can express partial ignorance about the state of an event even in the absence of evidence. Similarly, using the same example from Table 2, the plausibilities (using equation (16)) are given by:

Pl[won] = m[won] + m[{won, lost}] = 0.6 + 0.3 = 0.9
Pl[lost] = m[lost] + m[{won, lost}] = 0.1 + 0.3 = 0.4

These evidence-based measures are the crucial parameters in judging whether the system can be considered secure. The belief measure indicates how reliable the system is, whereas the plausibility indicates its risk: in the given example, there is a maximum risk of 40% that the variable X is not true, based on the evidence E.

Consider one more body of evidence E' on X, with belief mass function m':

m'(won) = 0.4,  m'(lost) = 0.2,  m'({won, lost}) = 0.4

Fusing the different evidences using Dempster's combination rule is defined as follows:

(i)  m12[E''] = (1 / (1 - k)) Σ_{E ∩ E' = E''} m[E] · m'[E']

(ii) m12(Φ) = 0

where k = Σ_{E ∩ E' = Φ} m[E] · m'[E'] is the amount of conflict between the evidences E and E'. If k = 1, the two evidences are in total conflict, cannot be combined, and the technique does not hold. Otherwise, evidential belief reasoning can be used to fuse the evidences from different sensors.

Mathematically, let us work through the technique when the two evidences E and E' of our example (Table 2) are fused:

k = (m[won] × m'[lost]) + (m[lost] × m'[won]) = (0.6 × 0.2) + (0.1 × 0.4) = 0.16

m12(won) = [(m[won] × m'[won]) + (m[won] × m'[{won, lost}]) + (m[{won, lost}] × m'[won])] / (1 - k)
         = [(0.6 × 0.4) + (0.6 × 0.4) + (0.4 × 0.3)] / 0.84 = 0.6 / 0.84 = 0.71428

m12(lost) = [(m[lost] × m'[lost]) + (m[lost] × m'[{won, lost}]) + (m[{won, lost}] × m'[lost])] / (1 - k)
          = [(0.1 × 0.2) + (0.1 × 0.4) + (0.2 × 0.3)] / 0.84 = 0.12 / 0.84 = 0.14286

m12({won, lost}) = (m[{won, lost}] × m'[{won, lost}]) / (1 - k) = (0.3 × 0.4) / 0.84 = 0.12 / 0.84 = 0.14286

The belief and plausibility measures for the fused evidence can be computed using equations (15) and (17) respectively:

Bel12[won] = m12[won] = 0.71428
Bel12[lost] = m12[lost] = 0.14286
Pl12[won] = 1 - Bel12[lost] = 0.85714
Pl12[lost] = 1 - Bel12[won] = 0.28572

Quantitatively, on fusion of the two known evidences E and E', the reliability factor (i.e. the support measure) remains less than the plausibility measure, which bounds the risk involved.

Hence, the technique builds on the concept of probability, classifies the data by certainty and likelihood, and fuses them using the Dempster-Shafer combination rule. It helps in the fusion of indeterminate data. However, although it is a promising technique, it can produce unexpected results when fusing contradictory data; it is incapable of fusing highly conflicting data and is therefore used only at the lower levels of data fusion.
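The worked numbers above can be reproduced in a few lines of Python. This is only a sketch for the small frame {won, lost} of Table 2; the function and variable names are our own.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's combination rule on a small frame; focal sets are frozensets."""
    # Conflict k: total mass on pairs of focal sets with empty intersection.
    k = sum(m1[a] * m2[b] for a, b in product(m1, m2) if not (a & b))
    fused = {}
    for a, b in product(m1, m2):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + m1[a] * m2[b] / (1.0 - k)
    return fused, k

W, L = frozenset({"won"}), frozenset({"lost"})
WL = W | L
m1 = {W: 0.6, L: 0.1, WL: 0.3}            # evidence E (Table 2)
m2 = {W: 0.4, L: 0.2, WL: 0.4}            # evidence E'

fused, k = combine(m1, m2)
print("conflict k =", k)                   # 0.16
print({tuple(sorted(s)): round(v, 5) for s, v in fused.items()})
# m12(won) ~ 0.714, m12(lost) ~ 0.143, m12({won, lost}) ~ 0.143
```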

B. Interval Calculus

This technique helps to measure the uncertainty of the given data when sufficient probabilistic information is not available. It represents the states of the system by intervals, and is considered an advantageous alternative to probabilistic data fusion. The interval calculus technique [5] [14] deals with uncertain data limited by lower and upper bounds; e.g., for a given state x, the interval is given by x ∈ [a, b]. Within these bounds the data need not follow a uniform distribution. The bounds include the interval error, which can be eliminated by simple techniques.

Mathematically, if the bounds of the events are a, b, c, d ∈ R, then the interval arithmetic used to fuse them is given by:

[a, b] + [c, d] = [a + c, b + d]
[a, b] - [c, d] = [a - d, b - c]
[a, b] × [c, d] = [min(ac, ad, bc, bd), max(ac, ad, bc, bd)]
[a, b] / [c, d] = [a, b] × [1/d, 1/c],  0 ∉ [c, d]

Interval addition and multiplication are associative and commutative.

Matrix arithmetic over intervals is also possible, but matrix inversion is challenging. The method does not require any additional information such as assigned probabilities or belief masses. However, merging data from different sensors is difficult because trustworthy dependencies among the sources are hard to establish, and an appropriate level of data granularity is required. It is therefore not an especially helpful technique for data fusion.
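A minimal Python sketch of the interval operations given above (the example bounds at the end are made up):

```python
class Interval:
    """Closed interval [a, b] with the arithmetic rules given above."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, o):
        return Interval(self.a + o.a, self.b + o.b)

    def __sub__(self, o):
        return Interval(self.a - o.b, self.b - o.a)

    def __mul__(self, o):
        prods = [self.a * o.a, self.a * o.b, self.b * o.a, self.b * o.b]
        return Interval(min(prods), max(prods))

    def __truediv__(self, o):
        assert not (o.a <= 0 <= o.b), "0 must not lie in the divisor interval"
        return self * Interval(1.0 / o.b, 1.0 / o.a)

    def __repr__(self):
        return f"[{self.a}, {self.b}]"

# Two uncertain readings of related quantities, with assumed bounds.
x = Interval(9.8, 10.4)
y = Interval(2.0, 2.5)
print(x + y, x - y, x * y, x / y)
```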

Other non-Bayesian techniques include rough set based fusion, random set theoretic fusion, fuzzy reasoning, etc. Though non-Bayesian data fusion techniques are easy to understand theoretically, they remain limited in addressing data imprecision and imperfection.

IV. Conclusion

In this section, we have dealt with some of the key data fusion techniques. Other techniques commonly used for fusion include the extended Kalman filter, the unscented Kalman filter, hybridization, fuzzy logic, etc. We perform data fusion in our daily-life activities, and we look for equally effective solutions to the engineering problem of fusing information from multiple sources into one. In the real world, however, no single technique provides the optimal solution for every data fusion problem. Although sensor fusion is being used increasingly, the area is still far from mature, and a lot of research needs to be undertaken before it becomes reliably and easily implementable for various practices.

References

[1] Dimitri P. Bertsekas and John N. Tsitsiklis, "Introduction to Probability," Second Edition, Athena Scientific, Belmont, Massachusetts, USA, 2008.

[2] Mario F. Triola, "Bayes' Theorem".

[3] F. Castanedo, "A review of data fusion techniques," The Scientific World Journal, Article ID 704504, 2013.

[4] D. L. Hall and J. Llinas, "An introduction to multisensor data fusion," Proceedings of the IEEE, 1997.

[5] H. Durrant-Whyte and T. C. Henderson, "Multisensor data fusion," in Springer Handbook of Robotics, B. Siciliano and O. Khatib (eds.), pp. 585-610, 2008.

[6] Carola Otto, "Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems," ISBN 9783731500735, 2013.

[7] Bahador Khaleghi, Alaa Khamis and Fakhreddine O. Karray, "Multisensor data fusion: A review of the state-of-the-art," Pattern Analysis and Machine Intelligence Lab, University of Waterloo, 2011.

[8] G. Welch and G. Bishop, "An Introduction to the Kalman Filter," ACM SIGGRAPH Course Notes, 2001.

[9] N. Funk, "A study of the Kalman filter applied to visual tracking," University of Alberta, 2003.

[10] G. Bishop and G. Welch, "An introduction to the Kalman filter," SIGGRAPH Conference Proceedings, 2001.

[11] Arnaud Doucet, Nando de Freitas and Neil Gordon, "An Introduction to Sequential Monte Carlo Methods," 2001.

[12] Olivier Cappé, Simon J. Godsill and Eric Moulines, "An Overview of Existing Methods and Recent Advances in Sequential Monte Carlo," LTCI, TELECOM ParisTech & CNRS, 2008.

[13] Amandine Bellenger, Sylvain Gatepaille, Habib Abdulrab and Jean-Philippe Kotowicz, "An Evidential Approach for Modeling and Reasoning on Uncertainty in Semantic Applications," URSW, CEUR Workshop Proceedings Vol. 778, pp. 27-38, 2011.

[14] David A. Schum, "Evidential foundations of probabilistic reasoning," New York: Wiley, 1994.



[Opening illustration: (A) a physical sensor in the presence of primary and controlled noise; (B) the physical sensor replaced by a virtual sensor.]


About the Author

Virtual Sensor


Milind Potdar

Areas of Interest

Embedded Hardware and Software Development,

Mechatronics, Cryptography,

Real time OS, Communication,

IoT and sensors

Page 28: Sensor Fusion - A Need for Next Generation Automobiles

I. Introduction

Sensors convert a physical quantity into some measurable form, generally electrical signals. Although sensors are of paramount importance, they have limitations in harsh conditions, and making them operable in harsh conditions increases process cost, signal-processing complexity and material cost, all of which lead to a high-cost product. In an industrial automation system, for example, each added sensor increases the cost of system integration and signal processing as well as the overall system cost, and adds to the maintenance time and cost. Dynamic adaptation in large sensor networks is likewise time consuming and expensive. These drawbacks create an opportunity to use virtual sensors.

II. Virtual Sensor

Virtual sensors, also known as software, proxy or surrogate sensors, are among the best examples of sensor fusion. Using data extracted from one or more physical sensors, a virtual sensor is software coupled with data-driven mathematical models and is used in place of a physical sensor. Fig. 1 illustrates a virtual sensor.

Figure 1: Illustration of a virtual sensor - data from several sources (Data 1, Data 2, ..., Data N) feed a virtual sensor algorithm that produces the output.

A mathematical algorithm uses the acquired data to provide an output - e.g., the pressure of a gas from its temperature and gas properties - without a physical pressure sensor.

Virtual sensors are required if:

1. the physical sensor is too expensive to replace,
2. it is not possible to sense the quantity using a physical sensor,
3. it is not possible to install a new sensor,
4. sensors are to be used in a very hostile environment where maintenance of a physical sensor is not possible,
5. the physical sensor needs frequent calibration, or
6. the behaviour of the sensor is inaccurate due to drift.

III. Virtual Sensor Implementation

There are many ways to implement virtual sensors; the most popular methods, however, are based on data validation or correction and on available past data.

A. Data Validation or Correction

Data validation or correction is also known as the "analytical method". The physical relations between the different characterising variables of the system are analysed and modelled to arrive at an expression of their interdependencies. This mathematical model can be realized as a nonlinear system of equations:

F(y) = 0,  y = (y1, y2, ..., yn)    (1)

where y is the vector of measured variables. Output data values are then estimated from their relationships with other sensor data and process parameters. This is a very robust method, and its level of accuracy is usually comparable to that of a physical sensor; as the accuracy of the model increases, the performance of the analytical method is expected to improve. Note that the analytical method depends heavily on the mathematical model and on the sensor data at a given instant of time.

In this method, it is always assumed that the sensed value is never 100% accurate, hence the raw input y is not a solution of the system F(y) = 0. The errors produced by inaccurate raw input are known as:

1. Sensor accuracy error - the measured raw value of a variable whose true value is unknown.

2. Calibration error - the error identified by treating the measurement of y as a random variable with mean ȳ.

The data reconciliation technique is used to remove such measurement-noise errors; its main assumption is that no calibration error exists in the set of measurements. The mathematical expression is as follows:

min Σ_{i=1}^{n} [(y_i - ŷ_i) / σ_i]²  subject to  F(ŷ) = 0    (2)

where ŷ is the reconciled estimate of the measurements and σ_i is the standard deviation of the i-th measurement. Pre- and post-reconciliation, the data require validation and verification.
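As a small illustration of equation (2), the sketch below reconciles three flow measurements against a linear conservation constraint; all flow values, noise levels and the constraint itself are hypothetical. For a linear constraint, the weighted least-squares problem has a closed-form solution, which is what the code uses.

```python
import numpy as np

# Raw flow measurements y = (F1, F2, F3); hypothetical values.
y = np.array([10.3, 6.1, 3.9])
sigma = np.array([0.2, 0.15, 0.15])        # assumed measurement std devs

# Linear conservation constraint A @ y_hat = 0, here F1 - F2 - F3 = 0.
A = np.array([[1.0, -1.0, -1.0]])

# Closed-form weighted least-squares reconciliation of equation (2)
# for linear constraints: y_hat = y - S A^T (A S A^T)^-1 A y, S = diag(sigma^2).
S = np.diag(sigma**2)
correction = S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ y)
y_hat = y - correction

print("reconciled flows:", y_hat)          # now satisfies F1 = F2 + F3
print("residual:", A @ y_hat)              # ~0 up to round-off
```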

The analytical method is used in all areas where the process model is well defined, e.g. the chemical industry, gas production, oil refineries and engine control.

B. Data Based or Empirical Method

This method is also known as data-based modelling. Here, the sensor value is an estimate built from the available past measured values of the physical quantity under consideration, and


a resultant of its correlation with parameters and existing measured data. Where past values are not available, the actual measurements of the installed sensors are used. The empirical method is more complex than the analytical method; its main advantage is that a physical understanding of the process is not required. The process model is implemented using measured and simulated data to generate mathematical models, which can be realized using various combinations of data and machine learning algorithms. The learning process is mainly either active or passive: a process that minimizes an error function through gradient-based parameter adjustment is called active learning, whereas passive learning requires no mathematical iteration and depends entirely on the data, in the form of data vectors. For accurate prediction, all relevant training data must be provided, and the conditions that existed during data measurement need to be replicated while designing the mathematical models. When operating conditions change drastically, the prediction is never accurate. The following models are used to implement the empirical method (a minimal sketch follows the list below):

1. Artificial Neural network (ANN)

2. Ensemble Modelling

3. Empirical Ensemble-based Virtual Sensing
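As a minimal sketch of the data-based approach, the following Python fragment trains a small neural network on entirely made-up past temperature and valve-position measurements so that it can stand in for a physical pressure sensor; scikit-learn's MLPRegressor plays the role of the ANN model from the list above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical past data: temperature [K] and valve opening [%] against
# the pressure [bar] that a real sensor once recorded.
X = rng.uniform([280, 10], [360, 90], size=(500, 2))
p_true = 0.01 * X[:, 0] + 0.02 * X[:, 1]            # made-up process relation
y = p_true + rng.normal(0, 0.02, size=500)          # sensor noise

# "Active learning" in the article's sense: gradient-based adjustment
# of the network weights to minimise the prediction error.
virtual_sensor = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
).fit(X, y)

# The trained model now stands in for the physical pressure sensor.
print(virtual_sensor.predict([[300.0, 50.0]]))      # estimated pressure
```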

IV. Virtual Sensor in Automotive

With the rising safety standards in the automotive industry, there is an increased need for sensor redundancy, often requiring system re-design. Since a lot of sensors are already available in a vehicle, virtual sensors can be built by re-using the existing ones. Some applications are particularly well suited to virtual sensors, e.g. vehicle yaw rate calculation, tire pressure monitoring, crank position sensing, pressure monitoring, climate control and so on.

Indirect TPMS (Tire Pressure Monitoring System) is a good example of a virtual sensor. Maintaining the rated tire pressure results in good fuel economy, good stability and less wear of the tire. Traditionally, a sensor attached to the tire measures the pressure and sends it to the display; if that sensor is damaged, the whole unit needs to be replaced. This problem can be avoided by implementing a virtual sensor. Fig. 2 shows how an iTPMS (intelligent Tire Pressure Monitoring System) is built using a virtual sensor.
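The core idea behind indirect TPMS can be sketched in a few lines: an under-inflated tire has a smaller rolling radius and therefore spins slightly faster than the others. The wheel-speed values and the warning threshold below are illustrative assumptions; a production iTPMS fuses far more information (temperature, braking, travel time), as Fig. 2 indicates.

```python
import numpy as np

# Hypothetical ABS wheel-speed readings [rad/s] for FL, FR, RL, RR,
# averaged over a stretch of straight driving.
wheel_speed = np.array([52.0, 52.1, 52.0, 53.6])

# A wheel on an under-inflated tire spins measurably faster than the
# others at the same vehicle speed.
relative = wheel_speed / np.median(wheel_speed) - 1.0

THRESHOLD = 0.02   # assumed 2% deviation before warning
for name, dev in zip(["FL", "FR", "RL", "RR"], relative):
    if dev > THRESHOLD:
        print(f"{name}: possible pressure loss ({dev:+.1%} wheel speed)")
```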

V. Conclusion

The demand for accurate and reliable sensor systems is rising. To bring down cost, sensors need to be made adaptable to new materials, or made virtual where possible. The computational power required for virtual sensor implementation is available in almost every microcontroller; the challenge, however, is to acquire correct data and to design efficient algorithms. Research has shown that robust algorithms and hardware can make virtual sensors as reliable and accurate as physical sensors. The application area of virtual sensors is not limited to automotive or aerospace; they are also being used in the life sciences and many other fields.


Figure 2: Intelligent Tire Pressure Monitoring System built as a virtual sensor - physical sensors (ABS sensor, ambient temperature sensor) and vehicle data (vehicle speed, travel time, tire parameters, applied-brake details) feed a processing unit that drives the display.




About the Author

Sensor Redundancy and its Applications


Aditya Piratla

Areas of interest

Image Processing and Computer Vision,

Innovation and

Traffic Flow


I. Introduction

Sensors are the information gateways of any system that depends on gathering the values of observable physical quantities, and the values obtained must be reliable to ensure safe and smooth operation. This is generally achieved by collecting data for the quantities under measurement from multiple sensors, through multiple means; the practice is known as sensor redundancy. Sensor redundancy is of two types: hardware redundancy and analytical redundancy. Hardware redundancy is simply the employment of two or more sensors for the measurement of the same set of variables. Analytical redundancy, on the other hand, is based on deducing the values of quantities (including those not directly measured) through mathematical modeling of the system. While hardware redundancy is costly and constrained by the system design, analytical redundancy requires extensive knowledge of the system and is not as reliable as the former. Because of the challenges involved in ensuring redundancy, a need for it must be established as per requirements like compliance with standards, reliability, measuring unmeasured variables, fault diagnosis, etc. The need is established using a method known as reliability estimation of a variable, elaborated in the following section, which is followed by sections on hardware and analytical redundancy.

II. Reliability Estimation of a Variable and Redundancy

Reliability estimation of a variable is defined as the probability with which the variable can be correctly estimated or measured in case of sensor failure. To ensure redundancy of the sensors, and thus of the data measured through them, the reliability estimate of a variable must be greater than the non-failure probability of the sensor. Consider a simple system with one inlet and two outlet streams, as shown in Figure 1; the failure probability of every sensor is assumed to be 0.1.

Figure 1: A simple process unit fitted with flow sensors S1, S2 and S3, measuring flows F1, F2 and F3.

The flow conservation for this system is:

F1 = F2 + F3    (1)

If we assume that all the sensors fail independently and randomly, the reliability R(F1) of estimating F1 is:

R(F1) = P{S1 working, or S2 and S3 both working}
      = 0.9 + 0.81 - 0.9 × 0.81 = 0.981

The reliability of F1 is greater than the non-failure probability of S1 (0.9), thanks to the redundancy available for F1: if S1 fails, F1 can still be computed from S2 and S3 via equation (1). If this reliability estimate is sufficient for the requirements of a particular process, no additional redundancy is required; otherwise, one has to resort to hardware or analytical redundancy techniques [1].

III. Hardware Redundancy Through Sensor Clustering

One of the methods of effecting redundancy is sensor clustering. The popularity of clustering stems from its ease of scalability, which is vital as sensor deployments increase exponentially. In this method, neighboring sensors join to build one cluster, and there can be multiple levels of clusters depending on the complexity of the sensor network design. The minimum size of a cluster is determined by the number of sensors required to make the system completely observable [4]; the maximum is restricted by the amount of redundancy required, the cost and the design constraints.

A. Sensor Cluster Types and Points to be Considered for Effective Redundancy

Sensor clusters are of two types: homogeneous and heterogeneous. In homogeneous clustering, all the sensor nodes are identical; such topologies are widely used in wireless sensor networks, where high redundancy is required in order to transmit reliable data. In heterogeneous clusters, the sensors differ in their battery capacity and/or their functionality. Where the functionality of all the sensors in the network is the same, heterogeneous clusters are cheaper than homogeneous ones, at the price of decreased network fidelity. The reason is that every cluster, whether homogeneous or heterogeneous, chooses a cluster head, which acts both as an aggregator of data from all the sensors and as the gateway for data transmission to the main processing center. As the cluster head handles more load, its chances of failure are higher. In homogeneous clustering, the cluster heads are chosen on a rotational basis, so the load is divided equally, reducing the probability of failure, and the failure of a particular sensor does not cause any disproportionate damage to the system. In the case of


heterogeneous sensors, only a few sensors have the hardware capability to act as cluster heads. Although this allows sensor networks to be cheaper, the failure of any cluster head has a large impact on the functioning of the system [2].

Energy consumption by multiple redundant sensors is an important concern. To tackle this issue, all sensor nodes are activated when the sensor network is started. Each sensor node then evaluates its sphere of measurement; if measurement within this sphere can be reliably estimated by the other sensors, the node shuts itself down. This is done sequentially, so that sensors do not shut down simultaneously. These kinds of design and energy constraints must be addressed in order to achieve effective sensor redundancy.

IV. Analytical Redundancy

In contrast to hardware redundancy, analytical redundancy depends on modeling the system. This approach is used when the cost of sensor deployment is a constraint, when sensor data is delayed or noisy and cannot be trusted, or in cases of sensor failure. System modeling varies from process to process. The following section discusses cylinder air charge estimation using analytical redundancy.

A. Application: Cylinder Air Charge Estimation

The air-fuel ratio is an important parameter for regulating noxious fumes in the exhaust. Typically, the air intake is measured using mass flow rate sensors such as the hot-wire anemometer. The problem with this method is that the anemometer has very slow dynamics, which leads to improper computation of the air-fuel ratio. This shortcoming is overcome through the use of analytical models. Based on the air intake per induction event, the cylinder air charge (CAC) is calculated, a critical quantity needed for feed-forward computation of the fuel injector pulse width [3].

A lumped-model-based method for estimating the air charge is discussed in this section. Let P, V, T and m be the pressure, volume, temperature and mass of the air in the inlet manifold. By the ideal gas law:

P = mRT/V    (2)

where R is the gas constant. Differentiating, with the assumption that the inlet air temperature remains constant, gives us:

dP/dt = (RT/V) dm/dt    (3)

The rate of change of air mass in the intake manifold is the difference between the actual input mass air flow rate, M_AFa, and the air pumped out of the intake manifold by the cylinders. The second term is a function of engine speed (N) and manifold pressure (P). Hence:

dP/dt = (RT/V) [M_AFa - f(N, P)]    (4)

f(N, P) is determined by regressing the coefficients of a polynomial against engine dynamometer data. It is related to the cylinder air charge per induction (CAC) through:

CAC = (120 / (nN)) f(N, P)

where n is the number of cylinders in the engine.

Modeling of M_AFa can be done through the throttle position (α, in degrees), the throttle body inlet pressure (P_TB) and the manifold pressure (P). The problem, however, is that P_TB is generally not measured in most vehicles, and even the throttle position sensor is typically not very accurate; hence the measurement from the anemometer is used for estimating the cylinder air charge. The dynamics of the anemometer can be modeled using a first-order differential equation with a time constant τ of around 20 milliseconds:

τ (dM_AFm/dt) + M_AFm = M_AFa    (5)

where M_AFm is the measured mass air flow. The presence of τ in equation (5) reflects the sensor dynamics. From equations (4) and (5):

dP/dt = (RT/V) [M_AFm + τ (dM_AFm/dt) - f(N, P)]

Let x = P - (RT/V) τ M_AFm; hence:

dx/dt = (RT/V) [M_AFm - f(N, x + (RT/V) τ M_AFm)]    (6)

CAC = (120 / (nN)) f(N, x + (RT/V) τ M_AFm)    from (4)

Equation (6) establishes the relationship of CAC with x (a term related to manifold pressure) and M_AFm. CAC values are critical for determining the fuel injection pulse width, which has to be calculated continuously; this cannot be done accurately if the values from the hot-wire anemometer (M_AFm) are used in isolation. The model developed here exploits analytical redundancy to estimate CAC continuously by incorporating the otherwise unaccounted-for sensor dynamics. In this example, an estimate of the manifold pressure P was used, and hence the x used to calculate CAC is an estimate. However, if a manifold pressure sensor were available, an error correction term could be incorporated using an extended Kalman filter or non-linear observer theory for an even better estimation. In such a case, the relation between mass airflow and

manifold pressure would have further added to the analytical redundancy. The availability of manifold pressure sensors allows one to extend this analytical model for fault diagnosis in the mass flow sensors, another possible area of application for sensor redundancy.
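To make the flow of equations (2)-(6) concrete, here is a minimal simulation sketch. The pumping polynomial f(N, P), the lumped constant RT/V, the airflow trace and every numerical value are made up for illustration; they are not calibrated engine data.

```python
import numpy as np

# Illustrative constants (all assumed): lumped gain RT/V in Pa per gram,
# anemometer time constant, cylinder count and engine speed.
RT_V = 2.87e4    # R*T/V [Pa/g] for an assumed manifold volume
TAU = 0.02       # hot-wire anemometer time constant [s]
N_CYL = 4
N_RPM = 1500.0

def f(N, P):
    """Hypothetical pumping polynomial f(N, P) [g/s], a stand-in for the
    regression against engine dynamometer data."""
    return 1.067e-7 * N * P

# Measured mass air flow M_AFm [g/s] from the anemometer, 1 ms samples.
dt = 1.0e-3
M_AFm = 8.0 + 0.5 * np.sin(2 * np.pi * 5.0 * np.arange(2000) * dt)

# Integrate equation (6): dx/dt = (RT/V)*[M_AFm - f(N, x + (RT/V)*TAU*M_AFm)]
x = 0.0
for m in M_AFm:
    P_est = x + RT_V * TAU * m            # implied manifold pressure [Pa]
    x += dt * RT_V * (m - f(N_RPM, P_est))

# Cylinder air charge per induction, CAC = 120/(n*N) * f(N, P_est) [g]
CAC = (120.0 / (N_CYL * N_RPM)) * f(N_RPM, x + RT_V * TAU * M_AFm[-1])
print(f"estimated CAC: {CAC:.3f} g per induction")
```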

V. Conclusion

In this article, wireless sensor clusters were discussed, which use hardware redundancy to optimize the life of a network and to ensure data fidelity. Analytical redundancy was discussed in the context of cylinder air charge estimation, a crucial quantity for automotive applications. Sensors are by far the most ubiquitous devices, and incorporating redundancy is a de-facto requirement for any system qualifying for reliability standards. Based on the reliability estimate of a physical quantity and the reliability demanded by the system, hardware or analytical redundancy approaches are routinely applied. The selection between them depends on the cost, the design and the ease of analytically modeling the system. In general, both modes of redundancy are used, as some parts of a system are easily modeled, while in other parts it is desirable to have a hardware sensor backup for better accuracy.

References

[1] Yaqoob Ali and Shankar Narasimhan, "Sensor Network Design for Maximizing Reliability of Linear Processes," AIChE Journal, May 1993.

[2] Daniel-Ioan Curiac and Constantin Volosencu, "Redundancy and Its Applications in Wireless Sensor Networks: A Survey," WSEAS Transactions on Computers.

[3] J. W. Grizzle, J. A. Cook and W. P. Milam, "Improved Cylinder Air Charge Estimation for Transient Air Fuel Ratio Control".

[4] "Controllability and Observability," http://www.ece.rutgers.edu/~gajic/psfiles/chap5traCO.pdf


BOOK REVIEW

With the enhanced sophistication of vehicles, the number of vehicle sensors keeps rising. However, information from a single sensor is not always sufficient to achieve the desired functionality. For instance, in collision avoidance, the RADAR sensor, which measures the distance between the vehicle and the obstacle concerned, is not sufficient on its own to avoid a collision; camera input is additionally needed to classify the obstacle and take a decision. Hence, the combined information from various sensors must be used to embed the required functionality, which is known as multi-sensor fusion.

At first glance, one realizes that this book provides basic information on, and various applications of, multi-sensor fusion. The book is organized in three sections: the first on fundamentals, the second on various applications, and the last on research aspects. In the section on fundamentals, the author covers the mathematics required to characterize sensors, tools and data structures. To explain sensor fusion, the author takes the case of an image-processing-based application, covering the Kalman filter, distributed dynamic sensor fusion, optimal sensor fusion design, etc. The applications are described in such a manner that even a person with a limited background in multi-sensor fusion can grasp the concepts easily.

In the first section, basic information is presented, covering topics such as sensor construction, sensor characteristics, how to extract information from sensors, dynamic networks, etc.

Multi-Sensor Fusion: Fundamentals and Applications with Software
Authors - Sundararaja Iyengar and Richard R. Brooks

The mathematics required to implement the fusion algorithms is also explained very well. The mathematical tools used to explain the concepts include linear algebra, probability, rigid body motion and coordinate transformation. Dependability data structures and Markov chains are also used in the explanations.

The second section focuses entirely on practical aspects, in which the author describes the algorithms, techniques and data structures used in multi-sensor fusion systems and their software implementation. The author further addresses fusion-related issues such as sensor selection, image registration and distributed agreement. Several algorithms are provided to illustrate the concepts through examples; most of them are implemented in native C. The accompanying software libraries contain code for the standard Kalman filter, a distributed dynamic sensor algorithm and several other relevant techniques.

In the last section, the author describes research conducted on sensor fusion for naval applications, briefly detailing the practical problems faced and how they were solved.

Overall, this book covers all aspects of multi-sensor fusion very well, along with the required fundamental concepts. The content is rich with sensor fusion concepts and useful mathematical techniques, and more than 50 illustrations help the reader understand the concepts associated with multi-sensor fusion systems. Most topics are explained in plain text with appropriate figures or diagrams for illustration. However, judging by the content, the reader is expected to have experience in sensors and signal processing to grasp the concepts quickly, which makes this book suitable for experienced professionals as well as students of advanced courses.


Milind Potdar

Areas of Interest

Embedded Hardware and

Software Development,

Mechatronics, Cryptography,

Real time OS, Communication,

IoT and sensors


About the Author

Sensor Fusion for Efficient Diagnostics

Dr. Nitin Swamy

Areas of Interest

Control Systems Analysis and Design,
Mathematical Modeling of Systems,
Hardware/Software Based Control System Validation and Verification


I. Introduction

Condition Based Maintenance systems, or diagnostic systems, rely on sensors for relaying information regarding the health of the process under consideration. More often than not, multiple sensors measuring different physical phenomena are employed. Based on operating knowledge of the normal behavior of the process, the sensor outputs are analyzed to glean information about the health of the process (including the tools and machines employed in it). Instead of analyzing the sensor outputs in isolation, it is more relevant to fuse the outputs of different kinds of sensors to infer the health of the process, as a loss of sensitivity in one sensor domain can possibly be offset by information from other sensors.

Typically, most industrial processes display varying levels of nonlinearity in their operation. It is well known that intelligent estimation tools like Artificial Neural Networks (ANNs) are extremely adept at identifying operational nonlinearities via mathematical modeling, in addition to their use for fault diagnosis. The intent of this article is to explore a few scenarios highlighting the use of sensor fusion with intelligent estimators in Condition Based Maintenance systems.

II. Condition Based Maintenance (CBM)

CBM works by using sensor data in real time to facilitate optimized maintenance of system resources. A CBM system is designed to initiate maintenance only when the monitored data necessitates it. A big enabler for this activity is the use of sensors to monitor various critical system parameters. These sensors typically measure tangible physical parameters like vibration, temperature, pressure, speed, voltage/current, stress/strain/shock, position and particulate count/composition. The data from these sensors has to be taken through complex mathematical operations like the Fast Fourier Transform (FFT) to extract the information conveyed by the measurements; this is mostly a non-real-time activity. Based on the mathematical analysis, potential systemic problems are identified, and the diagnosis is narrowed down further using failure conditions identified by sensors of a particular type, e.g., a rail pressure abnormality detected in a common rail diesel EMS using rail pressure sensor information. Finally, prognostic algorithms estimate the remaining useful life based on past and future operational profiles and physics-of-failure models.

The advantages of CBM [1] are centered on increased system availability, increased system reliability, reduced maintenance costs and reduced inventories. However, the complexity of the data acquisition and analysis process poses a real challenge in implementing CBM. In addition, most processes exhibit natural non-linearities, which makes sensing all the more difficult. To estimate system signals correctly, one would need many groups of similar sensors installed in the system, and one would have to sift through the huge volume of data generated by these sensors to determine a failure due to a single parameter malfunction. In recent times, this sifting, analyzing and failure-diagnosis activity has been delegated to ANNs.
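As a small illustration of the FFT-based analysis step mentioned above, the following Python sketch extracts the dominant spectral component from a synthetic vibration signal; the signal itself is made up, whereas in practice the samples would come from an accelerometer.

```python
import numpy as np

# Synthetic vibration signal: a 50 Hz machine tone plus noise,
# sampled at 1 kHz (all values assumed for illustration).
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50.0 * t) + 0.3 * np.random.randn(t.size)

# FFT-based feature extraction as used in CBM: locate the dominant
# spectral line and its (approximate) amplitude.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = np.argmax(spectrum[1:]) + 1          # skip the DC bin

print(f"dominant component: {freqs[peak]:.1f} Hz, level {spectrum[peak]:.2f}")
```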

III. Artificial Neural Networks (ANNs)

ANNs are mathematical models composed of nonlinear computational elements (neurons) operating in parallel and connected by links characterized by different weights. Such a parallel network, called a Multilayer Perceptron (MLP), is "trained" to learn the behavior of a system, using inputs obtained from the system via sensors and a learning algorithm that tunes the weights of the links (analogous to the human brain). The structure of the MLP is shown in Figure 1 [3].

The output of the neural network can be expressed as

y = W^T σ(V^T x)    (1)

where

y = vector of outputs, W = matrix of output weights,
σ = nonlinear activation function,
x = vector of inputs, V = matrix of input weights.

From (1), it is clear that the ANN estimates the output as a linearly weighted function of its inputs operated upon by an activation function. The input vector x can be constructed from the states of the underlying dynamics of the system. By a proper choice of σ, W and V, one can linearly combine the ANN inputs to estimate the system outputs. Typically, V is kept fixed, and W is tuned using ANN training algorithms to obtain the desired output y.

Figure 1: Two-layer MLP with sigmoid activation function.
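A minimal numerical sketch of equation (1) follows: V is generated once and held fixed, while W is tuned by gradient descent on a squared-error cost, as described in the text. The training data and all dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma(a):
    return 1.0 / (1.0 + np.exp(-a))   # sigmoid activation

# Two-layer MLP y = W^T sigma(V^T x), per equation (1).
n_in, n_hidden, n_out = 3, 8, 1
V = rng.normal(size=(n_in, n_hidden))       # input weights, kept fixed
W = np.zeros((n_hidden, n_out))             # output weights, to be tuned

# Made-up training data: x = sensor readings, d = desired output.
X = rng.normal(size=(200, n_in))
d = np.tanh(X @ np.array([0.5, -1.0, 0.25])).reshape(-1, 1)

# Tune W by gradient descent on the mean squared error, V held fixed.
lr = 0.05
for _ in range(500):
    H = sigma(X @ V)                        # hidden activations sigma(V^T x)
    y = H @ W                               # network output
    W -= lr * H.T @ (y - d) / len(X)        # gradient step on the error

print("training MSE:", float(np.mean((sigma(X @ V) @ W - d) ** 2)))
```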


IV. Applying MLP to Sensor Fusion

The application of MLP to sensor fusion is best illustrated by an example.

Advanced Driver Assistance Systems (ADAS) typically alert drivers to the possibility of extraneous danger while driving on the highway. The input data for this application are photographic images obtained from a forward looking camera and environment depth measurement obtained from radars. These two sensors essentially take snapshots of the same scene in front of the vehicle, but the information obtained from them is different. A radar, while allowing for precise ranging, does not allow for object recognition. On the other hand, it takes a lot of computing to discern depth from camera images.

A very elegant solution to this problem is to feed the camera and radar information into an MLP network, whose output is the complete image of the scene in front of the vehicle. The training algorithm, through proper choices of activation functions and training procedures, serves to "fuse" the camera and radar information, so that the fused output carries the combined interpretation of the superimposed images from the two sensors. This is indicated in Figure 2.

V. Sensor Fusion for Diagnostics (Condition Based Maintenance)

As seen in section IV, sensor fusion [2] allows us to combine inputs from various sensors to create comprehensive information about the process under investigation. This fusion can also help in making decisions about the health of the process. This is illustrated with an example of detecting the source of unstable idle-speed faults in an

automotive engine [4], as shown in Figure 3.

In this example, the objective is to diagnose the source of idle-speed instability. Four sensors from systems that influence idle speed are chosen: the oxygen sensor, the ignition sensor, the vacuum sensor and the injector current sensor. The signals obtained from these sensors are pre-processed to extract quantifiable features. These features include:

1. Average output voltage from oxygen sensor

2. Breakdown voltage of ignition system

3. Spark voltage

4. Spark dwell angle

5. Vacuum sensor voltage.

An MLP is designed, which accepts the above features as input, and via proper choice of activation functions and training algorithms, identifies fault conditions at the output as well as the source of the fault. In this manner, information from multiple sensors is fused through an ANN to help diagnose practical problems like idle speed instability in engines.
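As a hedged sketch of such a diagnostic fusion network, the fragment below trains a small MLP classifier on synthetic stand-ins for the five features listed above; real feature values and fault labels would, of course, come from instrumented engines.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Synthetic stand-ins for the five features (O2 average voltage, breakdown
# voltage, spark voltage, dwell angle, vacuum sensor voltage), with
# class-dependent shifts invented purely for illustration.
def make_samples(shift, n=100):
    return rng.normal(0, 1, size=(n, 5)) + shift

X = np.vstack([make_samples([0, 0, 0, 0, 0]),       # normal idle speed
               make_samples([0, 0, -1, 1, 0]),      # jammed injector
               make_samples([-1, 0, 0, 0, 2])])     # vacuum leakage
y = np.repeat(["normal", "jammed injector", "vacuum leakage"], 100)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                    random_state=0).fit(X, y)
print(clf.predict([[-0.9, 0.1, 0.0, 0.2, 2.1]]))    # -> likely 'vacuum leakage'
```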

VI. Conclusion

CBM of systems is very popular for diagnosing system malfunctions, and one of the key techniques employed in CBM is sensor fusion, which combines information from multiple sensors to help identify system performance parameters. In this article, the idea of using ANNs as the fusing elements for sensor information was introduced. The typical use of ANNs in fusing camera and radar information for complete environment awareness was illustrated through an example. ANNs have the capability to learn the inherent characteristics of the system feeding their inputs, and using this data fusion capability, it was shown through an example that complex diagnoses, like detecting the source of idle-speed instability in automotive engines, can be achieved.

Figure 2: Sensor fusion for ADAS - the radar image and the camera image are combined into a fused image.

Figure 3: Sensor fusion in diagnostics - in-engine measurements (oxygen sensor, ignition sensor, vacuum sensor, injector current sensor) yield extracted features (oxygen sensor average output voltage, breakdown voltage of the ignition system, spark voltage, dwell angle, vacuum sensor voltage), which the MLP maps to fault-mode outputs {y1, y2, y3}: (0,0,0) - idle speed normal; (0,1,0) - idle speed fault due to a jammed injector; (0,0,1) - idle speed fault due to vacuum leakage.

References

[1] "Condition Based Maintenance," Southwest Research Institute.

[2] Fabio Pacifici et al., "2007 Remote Sensing Data Fusion Contest: Neural Networks for Data Fusion," IEEE Data Fusion Technical Committee, 2007.

[3] Ognjen Kuljaca et al., "Design of Adaptive Neural Network Controller for Thermal Power System Frequency Control," AUTOMATIKA 52, 2011.

[4] Rungchun Guo et al., "A study about fault diagnosis of automobile engine based on neural network," 2nd International Conference on Electronic & Mechanical Engineering and Information Technology (EMEIT-2012).


About the Authors

Issues with Sensor Fusion


Ann Mary Sebastian

Reecha Yadav

Areas of Interest

Computer Vision and

Image Processing

Areas of interest

Automotive electronics,

Engine Management Systems and

Control Systems


I. Introduction

The first thing that comes to mind when talking about sensors is how convenient our lives have become since sensors came into the picture. But doesn't this idea seem familiar from the sensory organs that humans are endowed with? Have you ever wondered whether all the good things we achieve through our natural sensors can be attributed to each individual sensor alone, or whether it is sensor fusion that is at work? The human brain and the five senses not only represent one of the best multi-sensor fusion systems, but also serve as a motivation to extract the prowess of sensor fusion. Empirical data shows that, in the automotive industry too, there is simply no one perfect sensor. For instance, some sensors provide good object location estimates, while others are better suited to providing identity information. This points to the need for, and utility of, combining data from multiple sensors in next generation automobiles.

Multi-sensor data fusion aims to combine sensory data from different sensors in order to achieve a better interpretation of the world around us than is possible with a single sensor. Combining data from varied sources in this way is aimed at achieving better accuracy and decreased levels of uncertainty, as well as making a system more robust to changes in environmental conditions. For instance, human vision employs information from two eyes that see the same scene, a proof that combining additional, independent and/or redundant data enhances perception. In automotive applications, such an enhancement can be achieved by employing a camera together with a radar for vehicle detection, wherein the camera accounts for the features of the vehicle while the radar accounts for the distance at which the vehicle is detected. Other applications of sensor fusion in automobiles range from vehicle dynamics stabilization systems (e.g. the Anti-lock Braking System (ABS) and the Electronic Stability Program (ESP)) to Advanced Driver Assistance Systems (ADAS) like Lane Departure Warning (LDW), Adaptive Cruise Control (ACC), etc. Sensor fusion also holds promise for the implementation of Intelligent Transportation Systems, involving applications like speed harmonization, intersection safety, active traffic management, etc.

Though the idea of sensor fusion in automotive applications is very appealing, it is a complex one. This article discusses the different stages of the data fusion process model, with an emphasis on the limitations and challenges pertaining to each of its constituent levels. It also explores some of the considerations involved in a multi-sensor system implementation.

II. Joint Directors of Laboratories (JDL) Model - A Data Fusion Model

One of the most widely used models for data fusion is the Joint Directors of Laboratories (JDL) data fusion process model (Fig. 1). It comprises four levels: object refinement, situation refinement, impact assessment and process refinement. Here we use the JDL model to describe the functionality pertaining to each level of the fusion process and the concerns involved.

Figure 1: The JDL data fusion process model - sources feed Level 1 (object refinement), Level 2 (situation refinement), Level 3 (impact assessment) and Level 4 (process refinement), supported by human-computer interaction and a database management system.

Level 1 - Object Refinement

The object refinement level aims at fusing data from various sensors to obtain a target's identity, location, motion, attributes and characteristics with utmost reliability and accuracy [1]. Typical object refinement techniques include estimation techniques such as Kalman filters and Multiple Hypothesis Tracking (MHT), and identification techniques such as artificial neural networks or clustering algorithms [2]. In an Advanced Driver Assistance System (ADAS) scenario, object refinement can be well understood from Fig. 2. In this example, objects (like vehicles or pedestrians) are interpreted based on certain 'observations', which refer to the data coming from the various sensors on a vehicle. These observations are obtained after sensor refinement, wherein data coming from several sensors is represented in a common model before proceeding with object refinement [3].

Figure 2: Object refinement [3] - sensor refinement turns raw sensor data into observations, and object refinement turns observations into interpreted objects.


A major limitation for Level 1 processing is the lack of a sufficient amount of training data to help distinguish between the observed targets. Secondly, it is challenging to track targets that are closely spaced or rapidly moving, as it becomes difficult to associate the sensor measurements with the appropriate identified targets.

There are three basic architectural approaches [1] to design data fusion for the implementation of object refinement, given as:

(a) Centralized Architecture: Here, raw data is transmitted from several sensors to a central fusion process that performs data correlation, tracking and target classification.

(b) Autonomous Architecture: As opposed to the centralized architecture, each sensor performs as much pre-processing as possible and transfers the result to a fusion process that fuses the incoming data. Data correlation, tracking and target classification are thus performed on the pre-processed data rather than on the raw data.

(c) Hybrid Architecture: This architecture is a combination of the above two architectures, where multiplexing, selecting and merging of raw and/or pre-processed data is required.

One of the major difficulties faced by the sensor-fused system designer is deciding which approach is the best choice for a specific data fusion system.

Level 2 – Situation Refinement

Level 2 processing aims to establish a relationship between the objects identified in Level 1, amongst themselves as well as their relationship with their environment. It then develops an interpretation of an evolving situation, based on these assessments [1]. Techniques drawn from artificial intelligence and automated reasoning are employed to achieve the goals of situation refinement [2].

Uncertainties in situation refinement may arise from the incomplete nature of the knowledge employed to reach a contextual description of a situation, or from shortcomings of the information sources themselves. Knowledge engineering (which involves identifying key information and understanding both the inter-relationships between pieces of information and the uncertainty associated with them) plays a very important role in situation refinement; currently, there are no proven techniques for knowledge engineering [2].

Level 3 – Impact Assessment

Level 3 processing attempts to project the current situation into the future to assess its risks or impacts. It utilizes methods from automated reasoning,

artificial intelligence, predictive modelling and statistical estimation. A number of prototype systems exist which enable impact analysis, but very few are deployed in operational systems [2].

A major challenge here is determining driver intention, which makes automating the impact assessment process difficult. It is also very difficult to model rapidly evolving situations.

Level 4 – Process Refinement / Resource Management

This level monitors the overall data fusion process to optimize the real-time performance of the ongoing fusion [1]. It involves functions such as sensor modelling, look-angle generation (to indicate where to point the sensors to track targets), and computation of measures of performance (MOP) and measures of effectiveness (MOE), as well as optimization of resource utilization [1][2].

The process refinement stage is relatively mature for single-sensor environments. The same cannot be said for a multi-sensor framework involving multi-objective optimization. Challenges at Level 4 stem from the fact that system evaluation in a sensor-fused system involves more considerations than in an individual sensor-based system. For instance, in a multi-target problem, evaluation of the data/track association part and of the estimation part of the system must be carried out hand in hand; this is necessary to maintain the reliability and accuracy of the system. The use of a large number of sensors (especially co-dependent sensors) or the use of non-commensurate sensors (e.g., those whose outputs cannot be measured by a common standard, or those measuring very diverse physical phenomena on greatly different time scales) further adds to the challenges of Level 4 processing. A few more issues, particularly pertaining to fusion evaluation in practice, are listed below [1]:

• The ground truth is usually not known in practice, while many of the currently employed performance measures require the ground truth to be known.

• Performance evaluation may be required to measure not only the extent to which the fusion goals are achieved but also the amount of effort/resources to achieve these goals. Such dimensions may be difficult to capture in a unified measure.

• To act as a reliable indicator of fusion performance, factors such as time, situation, context, etc., may also need to be taken into account.


The current state of affairs calls for more research in the Level 4 area. However, improved sensor intelligence and agility hold promise for major improvements in this area with relatively modest effort [2].

Table 1: A summary of the functions and issues for each processing level of the JDL model [2][4].

Level 1 - Estimation of entity attributive states. Issues: lack of sufficient training data; difficulty in associating sensor data to identified targets; choice of architecture for data fusion.

Level 2 - Estimation of entity relational/situational states. Issues: lack of suitable knowledge engineering techniques.

Level 3 - Estimation of the impact of fused states on mission objectives. Issues: difficulty in predicting driver intention; challenges in automating the impact assessment process.

Level 4 - Estimation of MOP/MOE states (performance analysis). Issues: complex system evaluation needs; difficulty in optimally using multiple and/or non-commensurate sensors.

Apart from the challenges faced at each level, there are a few more concerns related to the other key components of the JDL model, namely the Human Computer Interface (HCI) and database management. These are discussed below.

Human Computer Interface

With rapidly evolving HCI technologies like 3-D displays, 3-D sound, haptic interfaces, etc., the techniques for data presentation, access and analysis have been redefined. Using these displays to relay the right information to the user, at the right time and in the right manner (based on how humans access and respond to information displays) is another important task in data fusion [2].

Database Management

Database management is another important area to be addressed in sensor fusion. Issues in database management may emerge owing to the following facts [2]:

• Apart from the data coming from the sensors, data in a sensor-fusion approach may also include information input by the user, environmental data, and data defining the rules of the system. For the database management system to assist in the fusion process, it must be capable not only of accepting sensor data at their contributing frequencies, but also of allowing smooth retrieval of data by the dependent processes/algorithms and users.

• The real-time constraints posed by automotive applications, coupled with the complex data coming from non-commensurate sensors, only add to the woes of the database management system.

It is for this reason that sensor-fusion systems, particularly in the automotive domain, require special database management software to take care of their needs.

III. General Issues in Multi-sensor Data Fusion

After providing an overview of the data fusion process model and the relevant issues, we now move towards the general issues faced in the actual implementation of such a system. Where does one start with the system implementation? What are the issues to be addressed? Here we attempt to find answers to these questions.

The three main considerations involved in the implementation of a multi-sensor fusion system are [1]:

1. Algorithm selection,

2. Choice of when (in the processing flow) to fuse data, and

3. The role of the human in the loop.

The problem of algorithm selection depends on the nature of what the data fusion system intends to infer, the available sensor data and the application at hand. For example, systems using image data are based on algorithms fundamentally different from those employing identity declarations as data inputs. However, given a specific application and a sensor suite, there exist a number of algorithms applicable to processing and fusing the data. Selecting the best possible algorithm can be challenging, and many trade-offs need to be considered, such as throughput constraints, the computing resources required by the algorithm, and operational constraints.

Once an algorithm has been selected for data fusion, another fundamental issue is deciding where, in the processing flow, the data fusion should be performed. The decision regarding the stage at which data should be fused involves factors such as the availability of "smart sensors" capable of pre-processing data, the availability of communication links that can support the large data rates necessary for sending raw data to a central processor, and the computational ability of the processor itself.

The role of the human in a sensor-fused system is one of the fundamental issues in its implementation. To what extent should a sensor-fused system be autonomous or allow for human intervention? This is a very difficult decision, depending on a number of variables.

Apart from the above considerations, there are some more issues worth noting while implementing a data fusion system. These include the following [5]:

Data Imperfection: Sensor data is always subject to some imprecision and uncertainty. A

Page 45: Sensor Fusion - A Need for Next Generation Automobiles

fusion algorithm should incorporate data redundancy to minimize their ill-effects. For e.g. an application employing both radar and LiDAR can benefit by using data from both of these sensors to provide a more reliable estimate of depth as opposed to using any single sensor data.
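To make the redundancy point concrete, here is a minimal, illustrative Python sketch of inverse-variance weighting, one standard way to combine two noisy range readings; the measurement values and sensor variances are assumed placeholders, not figures for any real radar or LiDAR.

    # Minimal sketch: fuse two noisy range estimates by inverse-variance
    # weighting. All numbers are made-up placeholders.
    def fuse_ranges(z_radar, var_radar, z_lidar, var_lidar):
        w_radar = 1.0 / var_radar                 # weight = inverse variance
        w_lidar = 1.0 / var_lidar
        fused = (w_radar * z_radar + w_lidar * z_lidar) / (w_radar + w_lidar)
        fused_var = 1.0 / (w_radar + w_lidar)     # smaller than either input
        return fused, fused_var

    # Radar: 50.2 m with variance 0.25; LiDAR: 49.8 m with variance 0.04.
    print(fuse_ranges(50.2, 0.25, 49.8, 0.04))    # fused estimate nearer LiDAR

The fused variance is smaller than either input variance, which is exactly the benefit that redundancy brings.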

Outliers and Spurious Data: Sensor data can also be corrupted by ambiguities and inconsistencies in the observed environment, or by permanent or slowly developing failures in the system [6]. Fusing such spurious data with correct data (e.g., in a Kalman filter) can lead to wrong estimates. Data redundancy in a fusion environment can be used to safeguard against outliers, for instance by gating measurements before they enter the filter (see the sketch below).
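As an illustration of such a safeguard, the sketch below applies a standard chi-square validation gate to a measurement before it would enter the filter; it assumes NumPy is available, and all numbers are invented.

    import numpy as np

    # Minimal sketch: reject a measurement whose squared Mahalanobis distance
    # from the predicted measurement exceeds a chi-square bound.
    def passes_gate(z, z_pred, S, gate=9.21):     # ~99% gate, 2 degrees of freedom
        nu = z - z_pred                           # innovation (residual)
        d2 = (nu.T @ np.linalg.inv(S) @ nu).item()
        return d2 <= gate

    z = np.array([[50.2], [1.1]])                 # made-up range and bearing
    z_pred = np.array([[49.9], [1.0]])            # prediction from the filter
    S = np.diag([0.5, 0.1])                       # innovation covariance
    print(passes_gate(z, z_pred, S))              # True: within the gate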

Data Modality: Data in a sensor fusion system may represent auditory, visual and tactile measurements of the observed phenomenon. This increases the demands on the data handling capabilities of the fusion algorithm.

Data Correlation: Data dependencies must be accounted for in a fusion environment to avoid biased estimation, e.g., artificially high confidence values or divergence of the fusion algorithm.
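One widely cited remedy when the cross-correlation between two estimates is unknown is covariance intersection; the sketch below shows the basic update, assuming NumPy, with a hypothetical fixed weight omega (in practice omega is usually chosen to minimize, say, the trace of the fused covariance).

    import numpy as np

    # Minimal sketch of covariance intersection: fuse estimates (a, Pa) and
    # (b, Pb) without knowing their cross-correlation, avoiding the
    # over-confidence that naive fusion of correlated data can produce.
    def covariance_intersection(a, Pa, b, Pb, omega=0.5):
        Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
        P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
        x = P @ (omega * Pa_inv @ a + (1.0 - omega) * Pb_inv @ b)
        return x, P

    a = np.array([[1.0], [2.0]]); Pa = np.diag([1.0, 2.0])   # invented data
    b = np.array([[1.2], [1.8]]); Pb = np.diag([2.0, 1.0])
    x, P = covariance_intersection(a, Pa, b, Pb)
    print(x.ravel(), np.diag(P))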

Data Alignment/Registration: Sensor data input to the fusion system must be aligned or brought into a common frame (e.g., a common coordinate system) before information fusion. Also known as sensor registration, data alignment is essential for deriving knowledge from the information available from multiple sensors. For example, a homography can be used to align data obtained from two different images.
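As a small illustration of image registration, the sketch below estimates a homography from matched point pairs and warps one image into the other's frame; it assumes OpenCV (cv2) and NumPy, and the point coordinates and file name are made-up placeholders.

    import cv2
    import numpy as np

    # Minimal sketch: estimate a homography from matched points (placeholders
    # here; real pairs would come from feature matching) and warp one image
    # into the other's coordinate frame.
    pts_src = np.array([[10, 10], [200, 15], [205, 180], [12, 175]], dtype=np.float32)
    pts_dst = np.array([[8, 12], [198, 18], [200, 182], [10, 178]], dtype=np.float32)

    H, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)
    img = cv2.imread("camera_frame.png")                 # hypothetical image file
    aligned = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))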

Data Association: Data association in multi-target tracking systems is more challenging than in single-target tracking, mainly because the former requires the following two forms of association (a small assignment sketch follows the definitions below).

Measurement-to-track association: This refers to the problem of identifying the target from which each measurement originates.

Track-to-track association: This deals with distinguishing and combining tracks that estimate the state of the same target.
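The sketch below illustrates measurement-to-track association as a global nearest-neighbour assignment solved with the Hungarian algorithm; it assumes NumPy and SciPy, and all positions are synthetic.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Minimal sketch: build a cost matrix of distances between predicted track
    # positions and incoming measurements, then solve the assignment problem.
    tracks = np.array([[10.0, 5.0], [40.0, 12.0]])        # predicted positions
    measurements = np.array([[10.4, 5.2], [39.5, 11.8], [70.0, 3.0]])

    cost = np.linalg.norm(tracks[:, None, :] - measurements[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)              # optimal pairing
    for t, m in zip(rows, cols):
        print(f"track {t} <- measurement {m} (distance {cost[t, m]:.2f})")
    # The unassigned third measurement could seed a new track.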

Operational Timing: Each sensor in a sensor-fused system may operate at a different frequency, or may sense different aspects of the environment at varying rates. A good data fusion algorithm should account for such timing variations in the data. This issue is most important in real-time applications, where ignoring it can degrade performance.
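A minimal way to cope with two sensors running at different rates is to resample the slower stream onto the faster stream's timestamps; the sketch below does this with linear interpolation, assuming NumPy and using purely synthetic signals.

    import numpy as np

    # Minimal sketch: align a 10 Hz sensor with a 100 Hz sensor by linearly
    # interpolating the slow stream onto the fast timestamps.
    t_fast = np.arange(0.0, 1.0, 0.01)            # 100 Hz timestamps
    t_slow = np.arange(0.0, 1.0, 0.1)             # 10 Hz timestamps
    slow_signal = np.sin(2 * np.pi * t_slow)      # synthetic slow measurements

    slow_on_fast = np.interp(t_fast, t_slow, slow_signal)
    print(slow_on_fast.shape)                     # (100,) samples, ready to fuse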

Static vs. Dynamic Phenomena: The environment being sensed by the multi-sensor system may be time-invariant (static) or varying in time (dynamic). For time-varying systems in particular, it is of utmost importance to capture changes and update data according to inputs from the various sensors. For example, in a traffic management scenario, real-time data on traffic density and vehicle speeds is ever changing and needs appropriate consideration.

Data Dimensionality: Sensor data can be pre-processed so as to reduce its dimensionality, which in turn can help reduce the computational load on the system. However, care must be taken to avoid dimensionality reduction at the cost of system reliability.
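As one common pre-processing example, the sketch below projects synthetic 6-dimensional sensor vectors onto their two leading principal components using an SVD; it assumes NumPy, and the data is randomly generated.

    import numpy as np

    # Minimal sketch of PCA: centre the data, take the SVD, and keep the
    # projections onto the two leading principal directions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))                 # 500 synthetic 6-D samples
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_reduced = Xc @ Vt[:2].T                     # keep 2 components
    print(X_reduced.shape)                        # (500, 2)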

IV. Conclusion

A plethora of sensor-based systems exists in most automobiles these days. Integrating these sensors, to add more features as well as to bring down the cost of these solutions, is the need of the hour. Hence a paradigm shift from individual sensor systems to a multi-sensor based system is needed. This article attempts to show the behind-the-scenes picture of sensor fusion, with emphasis on the challenges involved. It discusses a generalized fusion paradigm and the challenges to be accounted for at each level of that paradigm. A majority of these issues arise from the imperfections of the sensor data to be fused, the varied sensor technologies and the nature of the application environment. Among the issues that stand out is the lack of a 'golden data fusion algorithm', i.e., the fact that there is no single algorithm that is optimal under all conditions. Another is the lack of a comprehensive and unified measure of performance and effectiveness for such systems. A clear understanding of each of the discussed issues, and accounting for each of them, would go a long way in implementing a successful sensor-fused system.

References

[1] D. L. Hall and S. McMullen, "Mathematical Techniques in Multisensor Data Fusion", Artech House, 2004.

[2] D. L. Hall and A. Steinberg, "Dirty Secrets in Multisensor Data Fusion", Pennsylvania State University Park, Applied Research Lab, 2001.

[3] M. R. Ghahroudi and R. Sabzevari, "Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems", in Sensor and Data Fusion, In-Teh, Croatia, 2009, pp. 141-166.

[4] A. N. Steinberg and C. L. Bowman, "Rethinking the JDL Data Fusion Levels", NSSDF, JHAPL, 38 (2004): 39.

[5] B. Khaleghi, A. Khamis, F. O. Karray and S. N. Razavi, "Multisensor Data Fusion: A Review of the State-of-the-Art", Information Fusion, 14(1), 2013, pp. 28-44.

[6] M. Kumar, D. P. Garg and R. A. Zachery, "A Generalized Approach for Inconsistency Detection in Data Fusion from Multiple Sensors", in Proc. of the American Control Conference, 2006.


Sensing the Future

About the Author

Sushant Hingane

Areas of interest: Systems and control, Modelling and simulation

I. Introduction

A few years ago, early-technology sensors were considered to be the future of humankind. Sensors that sense physical parameters, replacing human senses such as vision, hearing and the perception of speed, have been a part of every technological domain. Innovative sensing devices found their applications in mobility and transportation, smart phones, wearable devices and so on. With the technological advances we encounter in every domain nowadays, the challenge is not just the sensing, but also how various sensors can be fitted into one compact device. This need of the hour definitely calls for sensor fusion techniques. The typical advantage of sensor fusion is that it not only gives us more accurate data but also helps in accommodating multiple features in one compact apparatus. This opens the possibility of creating a completely new market for gadgets as well as a state-of-the-art living experience. In the coming future, this technique will be carried out in more sophisticated ways using algorithms that facilitate the sensor data interface. The algorithms currently in use, such as complementary filters and Kalman filters, have their limitations when it comes to 'non-compatible' sensor types or signal formats. Research tells us that these issues are becoming a part of history now.

II. Future in Automotive Domain

The real need for sensor fusion in the automotive field is in subsystems such as active safety, driver assistance, dynamic stability, ranging and proximity detection. Nowadays these subsystems are equally complex and multifunctional. Moreover, the autonomous vehicles coming to market depend extensively on sensors. Even a simple piece of information, such as another vehicle in proximity, can be detected using several types of on-board sensors, like ultrasonic, camera and RADAR, all at the same time.

For drive safety and collision avoidance, there is a need to fuse the sensor data to improve the quality of detection [1]. Sensor fusion does an excellent job of complementing the abilities of each sensor. For example, within the area of odometry, the estimate could be more accurate in case of bad road conditions if camera information (called visual odometry) is used in addition to the IMU (Inertial Measurement Unit); a minimal filter sketch in this spirit follows the list below.

a) Driver gesture recognition could become the future of user-friendly dashboards. Gestures could be tracked through cameras, touchpads, ultrasonic or optical sensors to perform in-vehicle operations.

b) Upcoming technology-rich cars are set to be equipped with augmented reality head-up displays. Navigation information, road signs, obstacle alerts and improved vision will be displayed on the windshield as well as the side windows. The display system will take the front-looking camera input and process it to identify the surrounding features. The output from multiple cameras and sensors will be fused to project the virtual display onto the windshield.

c) A significant amount of research has gone into improving real-time algorithms for RADAR and LiDAR sensor fusion. The main purpose of fusing these two sensors is to have the benefits of both in one device. LiDAR sensors can cover the area of interest in all scenarios but fail to give accurate speed information, whereas RADAR gives accurate speed data but fails in lanes with multiple curves. This again calls for improved algorithms to solve the out-of-sequence measurement problem [2].

d) The 'always on' systems continuously surveying the surroundings have clusters of sensors for situation-aware control in autonomous vehicles. Situation-aware control is a smart system that collects vehicle information such as speed, location, route, fuel level, etc. After collecting this information from various sensors, it tries to comprehend the data to provide various warning levels about possible hazards. This situation awareness can be integrated with vehicle-to-vehicle communication systems, and better results can be achieved in vehicle safety, power management, route optimization, etc. [3].
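To make the odometry example above concrete, here is a minimal complementary-filter sketch in Python: it blends a drift-prone, high-rate estimate (an integrated gyro rate, standing in for the IMU) with a noisy absolute reference (standing in for a visual-odometry heading). The rates, values and blending factor alpha are all invented placeholders.

    # Minimal sketch of a complementary filter for a single heading angle.
    def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
        # Trust the gyro over short horizons, the absolute reference long-term.
        return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * ref_angle

    angle = 0.0
    for _ in range(100):                          # 100 steps at 10 ms each
        angle = complementary_filter(angle, gyro_rate=0.1, ref_angle=0.12, dt=0.01)
    print(angle)                                  # settles between the two sources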

Figure 1: Left: Sensors in drive stability and safety. Right: Rain, snow and ice detection on the road using sensor fusion. [Image Source: http://www.continental-corporation.com]

Figure 2: Augmented reality head-up display [Image Source: www.autotribute.com]

III. Sensor Hubs

Sensor hubs are commonly used in many devices these days, and they have started to find their way into wearable devices as well. Sensor hubs are microcontroller units that take in data from various sensors and process it, thus reducing the load on the central processing unit; a small sketch of this pattern follows the list below. The most prominent manufacturers of sensor hubs are leaders in techno-innovation such as Apple, Google, Microsoft, Samsung, Bosch, etc. [4].

a) Sensor hubs are extensively used in smart phones for various functionalities such as gesture recognition, motion tracking, activity monitoring, pedestrian navigation, etc. With more advances in sensing technology, more sensors are set to come to life, creating the need to fuse them into one.

b) Numerous hobby toys available in the market, including quadcopter drones, are equipped with IMUs, which fuse an accelerometer, gyroscope, magnetometer, barometer, ultrasonic sensors and so on. The fusion improves the accuracy of the measurements and thus improves the quality of manoeuvre.

c) In the gaming industry, motion tracking is most crucial when it comes to a 'real-life' gaming experience. A motion-controlled gaming system is one that allows players to interact, or 'play', through body movements and gestures. Sony PlayStation, Microsoft Xbox, Nintendo, etc., have been winning the hearts of gamers all over the world with gesture recognition technology. In addition to motion capture, sensors are also stationed for face recognition, voice/speech recognition, etc.

d) Ubiquitous systems, or wearable devices, are the future to look forward to. Wearable devices find applications from medical to educational. The smart watches, fitness bands and smart fabrics available in the market consist of multiple sensors for motion detection, heartbeat monitoring, GPS, etc. Google's smart glasses bring out the epitome of image processing and feature recognition.

Augmented reality might not be restricted to gaming in the future; it will shape methods of designing in the fields of arts, architecture, medicine, navigation, etc.

e) Internet of Things (IoT) is the term coined to describe a network of physical objects used to collect and exchange data. The 'objects' in these networks could be sensors as well, connected via different communication media, possibly wireless. Various industries such as automotive, energy and mining, power and utilities, manufacturing, healthcare, entertainment and financial services use the Internet of Things for their sensory data exchange.
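The sketch below is a toy rendering of the hub pattern described at the start of this section: a hub object batches raw readings from several sensors and forwards only reduced summaries to the host processor. The sensor names and values are invented for the example.

    # Minimal sketch: a sensor hub that batches raw samples and forwards one
    # averaged value per sensor, instead of interrupting the host per sample.
    class SensorHub:
        def __init__(self):
            self.buffer = {"accel": [], "gyro": [], "baro": []}

        def push(self, sensor, value):
            self.buffer[sensor].append(value)

        def summary(self):
            out = {k: sum(v) / len(v) for k, v in self.buffer.items() if v}
            self.buffer = {k: [] for k in self.buffer}   # reset for next batch
            return out

    hub = SensorHub()
    for v in (0.98, 1.02, 1.00):
        hub.push("accel", v)
    print(hub.summary())                                 # {'accel': 1.0}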

IV. Further Research

As technology advances and demands grow, sensor fusion techniques are set to overcome their existing drawbacks. Further research is focused on the following areas [5]:

a) Multi-level sensor fusion: A four-level architecture is designed for decision making across the fusion levels of time-varying data, features and decisions.

b) Fault detection: More research is being carried out to make measurement and detection more fault tolerant. Fault detection using various algorithms plays a crucial role in sensor reliability.

c) Micro-sensors: Reduced-size sensors are compact, portable, lightweight and cheaper to manufacture. Fusing these micro-sensors can thus prove to be very advantageous.

Figure 3: Example of a sensor hub. A sensor hub microcontroller takes inputs from the accelerometer, gyroscope, magnetometer, barometer, ambient light and proximity sensors, and connects to the central processing unit, actuators and peripheral devices.

Figure 4: Left: Drone package delivery [Image Source: www.dronelife.com]. Right: BigDog robotic military dog [Courtesy: www.bostondynamics.com/robot_bigdog.html]

Figure 5: Wearable technology [Image Source: http://www.dreamstime.com/]


d) Adaptive multi-sensor fusion: In the presence of uncertainty in sensor data, adaptive multi-sensor fusion algorithms help ensure robust functioning of the system.

V. Conclusion

As it always has, innovation follows necessity, and going by the application demands of day-to-day life, sensor fusion will be playing a key role. It can be stated with a degree of certainty that the future of artificial intelligence in various domains is very promising, and that it would reduce human effort and bring in assured safety.

References

[1] A. Westenberger, M. Muntzinger, M. Gabb, M. Fritzsche and K. Dietmayer, "Time-to-Collision Estimation in Automotive Multi-Sensor Fusion with Delayed Measurements", Advanced Microsystems for Automotive Applications, 2013.

[2] D. Gohring, M. Wang, M. Schnurmacher and T. Ganjineh, "Radar/Lidar Sensor Fusion for Car-Following on Highways", 5th International Conference on Automation, Robotics and Applications (ICARA), 2011.

[3] C. A. Bolstad, SA Technologies, "The Measurement of Situation Awareness for Automobile Technologies of the Future", presentation to the Driver Metrics Workshop, June 2008.

[4] S. Scheirey (Hillcrest Labs) and D. Soubra (ARM), "Sensor Fusion, Sensor Hubs and the Future of Smartphone Intelligence", presentation at ARM TechCon, 2013.

[5] R. C. Luo and C.-C. Yih, "Multisensor Fusion and Integration: Approaches, Applications and Future Directions", IEEE Sensors Journal, 2002.

World's first earthquake detector was invented 2000 years ago in China

Zhang Heng's famous seismoscope

A modern replica of Zhang Heng's famous seismoscope. Photo: Houfeng Didong

A seismometer is a well-known instrument that detects and measures the intensity of an earthquake by measuring the motion of seismic waves. It is instrumental in many earthquake-prone areas for keeping a regular eye on the situation.

It is astonishing to learn that Zhang Heng, a Chinese astronomer, mathematician, engineer and inventor, created the first seismometer in China in 132 AD. As shown in the figure, the instrument resembled a wine jar, six feet in diameter. Eight dragons were positioned along the outside of the barrel, pointing in the primary compass directions, with a small bronze ball in each dragon's mouth. Eight bronze toads were placed just below the dragons, with their mouths open to catch the balls. An incoming seismic wave would make the relevant ball drop, and the sound would give a rough indication of the direction from which the earthquake originated.

The device detected its first earthquake from somewhere in the east, which was later confirmed to be a correct recording. The instrument is not sensitive to shaking or movements other than seismic waves.

According to experts, a simple or inverted pendulum was used as the sensing mechanism, the details of which are not available and are said to be lost.

Scientists in China tried to recreate the seismoscope with the then-available technology. Waves from simulated earthquakes, based on four different real-life earthquakes in China and Vietnam, were used to test the replica. The detections were very accurate, and the results matched the data gathered from modern-day seismometers.

Reference: http://www.zmescience.com/science/geology/worlds-first-seismoscope-53454/

About KPIT Technologies Limited

KPIT is a trusted global IT consulting & product engineering partner focused on co-innovating domain intensive technology solutions. We help customers globalize their process and systems efficiently through a unique blend of domain-intensive technology and process expertise. As leaders in our space, we are singularly focused on co-creating technology products and solutions to help our customers become efficient, integrated, and innovative manufacturing enterprises. We have filed for 60+ patents in the areas of Automotive Technology, Hybrid Vehicles, High Performance Computing, Driver Safety Systems, Battery Management System, and Semiconductors.

Innovation for customers

About CREST

Center for Research in Engineering Sciences and Technology (CREST) is focused on innovation, technology, research and development in emerging technologies. Our vision is to build KPIT as the global leader in selected technologies of interest, to enable free exchange of ideas, and to create an atmosphere of innovation throughout the company. CREST is a recognized and approved R&D Center by the Dept. of Scientific and Industrial Research, India. This journal is an endeavor to bring you the latest in scientific research and technology.

Invitation to Write Articles

Our forthcoming issue, to be released in April 2016, will be based on "Mechatronics in Automotive". We invite you to share your knowledge by contributing to this journal.

Format of the Articles

Your original articles should be based on the central theme of "Mechatronics in Automotive". The length of the articles should be between 1200 to 1500 words. Appropriate references should be included at the end of the articles. All the pictures should be from the public domain and of high resolution. Please include a brief write-up and a photograph of yourself along with the article. The last date for submission of articles for the next issue is February 26, 2016.

To send in your contributions, please write to [email protected].

To know more about us, log on to www.kpit.com.


For private circulation only.

TechTalk@KPIT January - March 2016

35 & 36, Rajiv Gandhi Infotech Park, Phase - 1, MIDC, Hinjewadi, Pune - 411 057, India.

ISSN 2394-5397

Rudolf Emil Kalman
Born: May 19, 1930

"We do talk about fuzzy things but they are not scientific concepts. Some people in the past have discovered certain interesting things, formulated their findings in a non-fuzzy way, and therefore we have progressed in science."