Bahir Dar University
Institute of Technology
School of Computing & Electrical Engineering

Intelligent System Based Car Collision Prevention

By
Nebiyou Tarekegn
Solomon Genene
Tesfaye Asmera

Advisor
Mr. Tesfamichael Agidie (M.Sc)

A project report submitted in partial fulfillment of the requirements for the award of the degree of
BACHELOR OF SCIENCE
in
ELECTRICAL ENGINEERING

June, 2010

Bonafide certificate

Certified that this project report entitled “INTELLIGENT SYSTEM BASED CAR COLLISION PREVENTION” is a bonafide work carried out by Mr. TESFAYE ASMERA, Mr. SOLOMON GENENE, and Mr. NEBIYOU TAREKEGN under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

Name: Mr. Solomon Lule (M.Sc)
Head of the Department          Signature of HOD

Name: Mr. Tesfamichael Agidie (M.Sc)
Advisor                         Signature of Advisor

Abstract

Intelligent system based car collision prevention is a system that uses different sensors to make intelligent decisions that mitigate or avoid collisions. The collision avoidance system mainly applies image and video processing techniques to make intelligent decisions that can warn the driver under different environments. The system uses a feature-based technique that applies the Hough transform method to detect road lanes. The Hough transform is a feature extraction technique for identifying the locations and orientations of lane features in a digital image. The intelligent system uses safe distance warning as its collision avoidance method. It uses two safe distances to warn the driver before a collision occurs, and these safe distance warnings are issued through visual and audible systems. The visual safe distance warning employs real-time object detection and tracking: objects are detected using the background subtraction method, while tracking is based on the centroid of the detected object. The second safe distance is determined using Honda's algorithm; we use the infrared laser radar and different sensors to issue this warning in audible form.

Acknowledgement

We would like to take this opportunity to express our deepest gratitude to our advisor, Ato Tesfamichael Agidie, for guiding us in the technical aspects of this work, for imparting his knowledge to us, and especially for his invaluable assistance, support and guidance throughout this work. Special thanks are dedicated to our friends who have supported, guided and advised us in parts of the project. Appreciation is also extended to those who have contributed directly or indirectly to the completion of this project. Lastly, our deepest gratitude goes to our beloved GOD, because we have come through this project only with His help.

Table of contents

Abstract
Acknowledgement
Table of contents
List of figures
List of symbols, acronyms and abbreviations
1. Introduction
   1.1 Background of the project
   1.2 Objective of the project
      1.2.1 General objective
      1.2.2 Specific objective
   1.3 Methodology
      1.3.1 Literature review
      1.3.2 MATLAB computing environment
   1.4 Project scope
   1.5 Motivation
   1.6 Project goals
   1.7 Document organization
2. Literature review and basics of image and video processing
   2.1 Literature review
      2.1.1 Introduction
      2.1.2 Lane detection and departure warning methods
         2.1.2.1 Lane detection method
         2.1.2.2 Lane departure warning system methods
      2.1.3 Object detection and tracking methods
   2.2 Basics of image and video processing
      2.2.1 Noise removal
      2.2.2 Image enhancement and intensity adjustment
      2.2.3 Image segmentation
      2.2.4 Morphological operation
      2.2.5 Edge detection
      2.2.6 Hough transform
3. System description
   3.1 System design and algorithm development
      3.1.1 System design
         3.1.1.1 Hardware specification
         3.1.1.2 Principles of operation
   3.2 Lane detection and departure warning system
      3.2.1 Introduction
      3.2.2 System configuration
      3.2.3 Approach to lane detection and departure warning
      3.2.4 Algorithm development
         3.2.4.1 Algorithm description
      3.2.5 Algorithm for lane detection and departure warning system
   3.3 Safe distance warning
      3.3.1 Introduction
      3.3.2 Safe distance determination
      3.3.3 Algorithm development
      3.3.4 Object detection and tracking for visual safe distance warning
         3.3.4.1 Introduction
         3.3.4.2 System configuration
         3.3.4.3 Approach to object detection and tracking
         3.3.4.4 Algorithm for object detection and tracking
4. Results and Discussion
   4.1 Implementation and results
   4.2 Problems encountered
5. Conclusion
Appendixes
References

List of figures

Fig 2.1 Cartesian coordinate and polar coordinates
Fig 3.1 Pin diagram of TLC 548 ADC
Fig 3.2 Functional block diagram of TLC 548 ADC
Fig 3.3 Interfacing circuit between infrared laser radar and CPU
Fig 3.4 Video camera, infrared laser radar and buzzer alignment on the car
Fig 3.5 The position of display unit, processor and buzzer within the car when viewed from the top left
Fig 3.6 General block diagram of the system
Fig 3.7 Block diagram of lane detection and departure warning system
Fig 3.8 Flow chart of lane detection and departure warning
Fig 3.9 Block diagram of safe distance warning system
Fig 3.10 Block diagram of object detection and tracking
Fig 3.11 Flow chart for object detection and tracking
Fig 4.1 Images from video frame sequence
Fig 4.2 Images from consecutive video sequence
Fig 4.3 Output of background subtraction for images from Fig 4.1
Fig 4.4 Output of background subtraction for images from Fig 4.2
Fig 4.5 Output of object detection and tracking from video sequence
Fig 4.6 a) Output of object detection and tracking from video sequence; b) Lorry detected on the road lane
Fig 4.7 Input image of road lane
Fig 4.8 Accumulator array of Hough transform
Fig 4.9 Image result with fake and road lane detected
Fig 4.10 Road lane and fake lane detected by Hough transform
Fig 4.11 Binary image found after noise removal
Fig 4.12 Output image after boundary extraction
Fig 4.13 Lanes detected using Hough transform followed by boundary extraction

List of symbols, acronyms and abbreviations

ADC    analog-to-digital converter
CCD    charge-coupled device
CHEVP  Canny/Hough Estimation of Vanishing Points
CLK    clock
CMOS   complementary metal-oxide semiconductor
CPU    central processing unit
CS     chip select
DSP    digital signal processing
Dout   data output
DTR    data terminal ready
e.g.   for example
FSB    front-side bus
GB     gigabyte
GHz    gigahertz
GND    ground
IC     integrated circuit
I/O    input/output
kHz    kilohertz
LIDAR  light detection and ranging
LPT    line print terminal (parallel port)
MMSE   minimum mean square error
MSB    most significant bit
mW     milliwatt
nF     nanofarad
OS     operating system
PC     personal computer
PT     pan-tilt
PTZ    pan-tilt-zoom
RAM    random access memory
REF    reference
RGB    red-green-blue
rpm    revolutions per minute
SATA   serial advanced technology attachment
SDRAM  synchronous dynamic random-access memory
TH     threshold
TTL    transistor-transistor logic
USB    universal serial bus
Vin    voltage input
Vrel   relative velocity
WT     wavelet transform
µF     microfarad
Ω      ohm

Chapter 1

1 Introduction

1.1 Background of the project

Transportation is one of the most important economic activities of any country, and among its various forms, road transportation is one of the most popular. As the number of vehicles on the road increases, so do the dangers and economic losses from automobile accidents. Transportation carries an element of danger in the form of vehicle crashes, which not only cause death and injury but also bring an immeasurable amount of agony to the people involved. There are different causes of collisions and vehicle crashes:

Road condition

Driver performance

Weather condition

Of these causes, driver performance affects transportation the most, since the driver can compensate for the other collision sources. Collisions that occur due to driver performance are rooted in the following errors:

Recognition error

Decision error

Performance error

In the past, safety systems focused on reducing driver injury in case of an emergency; hence the introduction of seat belts, air bags, and more recently products like OnStar, which can automatically contact emergency services and help locate the scene of an accident.

These days, however, the development of vision-based sensors has facilitated the application of computer vision technology to traffic flow control. This technology can warn drivers of dangerous situations in time to take preventive action. With better sensors and data communication techniques, a driver can be more aware of his or her environment, and can therefore react appropriately to situations such as lane departure and impending collisions.

A collision avoidance system is a system of sensors mounted on a car to warn the driver of any dangers that may lie ahead on the road. Some of the dangers these sensors can pick up include how close the car is to the cars surrounding it, how much its speed needs to be reduced while going around a curve, and how close the car is to going off the road. The system uses sensors that send and receive signals from things like other cars and obstacles in the road.

The lane departure warning system is one part of a collision avoidance system that monitors the lane maneuvering of the car: it warns the driver when the car is moving out of its lane.

The introduction of the charge-coupled device (CCD) video camera enabled modern cars to use vision intelligence in determining and following the road lane. In this project, a lane detection and departure warning system is developed to warn the driver if the car is moving out of its lane. It uses a camera mounted on the vehicle to capture video of the road in front of the car. A video is essentially a series of still images captured at very short intervals. The image processing application then retrieves useful information for issuing the warning; it involves applications such as feature extraction and the Hough transform. Lane departure warning systems have proved very useful on highways and clearly marked lanes. However, lane detection and tracking is a challenging problem when roads have incomplete lane markings or when lane markings are hidden by parked cars or shadows.

Another part of this project is object detection and tracking for collision avoidance, which is important for detecting obstacles and thereby tracking them. It uses video from the video camera to detect and track upcoming objects in front of the car. This project considers simple object detection using feature extraction and image segmentation.

In general, intelligent system based car collision prevention can be understood as a system mounted on the car, or fabricated together with a component of the car, in order to detect obstacles, objects and humans, warn the driver, and avoid damage to any resource in the vehicle's path and to the vehicle itself.

1.2 Objective of the project

1.2.1 General objective

The general objective of this project is to facilitate traffic flow control and

increase transportation safety.

1.2.2 Specific objective

This project has the following specific objectives:

Developing an algorithm for lane detection and departure warning

Developing an algorithm for object detection

Implementing the developed algorithms using image processing software

Implementing real-time object detection

1.3 Methodology

To achieve the above objectives, various methods and tools have been used, as described below.

1.3.1 Literature Review

The literature review focused on the basics of video and image processing along with related works.

1.3.2 MATLAB computing environment

Developing and implementing lane detection and real-time object detection and tracking algorithms on camera-captured video requires suitable software. The MATLAB computing environment is well suited for this, as it is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

Typical uses include:

Math and computation

Algorithm development

Data acquisition

Modeling, simulation, and prototyping

MATLAB is suitable because it contains the image and video processing toolbox, which provides the basic framework for the development of the system. The toolbox is a collection of functions that extend the capability of the MATLAB numeric computing environment and supports a wide range of image processing operations.

1.4 Project scope

Intelligent system based car collision prevention is a wide area that requires contributions from different fields of engineering. It consists of a lane keeping system, lane departure warning system, antilock braking system, auto brake and start system, obstacle detection, and safe distance warning system.

The scope of this project is mainly the development of a software module for lane detection and departure warning, as well as object detection and tracking, using image and video processing.

1.5 Motivation

The motivation behind this project is that, nowadays, car collision accidents damage many valuable human and material resources. To combat these accidents many systems have been proposed and many traffic rules have been applied, but none of them has succeeded in avoiding the accidents, because they are passive in nature: they do not make decisions, but rather give help only after a collision has occurred. Collisions often happen when the car goes out of its lane, when an obstacle suddenly enters the lane, and so on. We believe that if vision technology is applied, accidents can be reduced to a great extent, since a vision-based system can make intelligent decisions with the help of different sensors.

1.6 Project goals

The following goals have been attained in this project:

The system can detect road lane markings and issue a lane departure warning

The system can detect any object in front of the car

The system can track any detected object

The system can issue a safe distance warning

1.7 Document organization

This document has four main parts. The first part discusses the background of the project, the project objectives, motivation and goals. The second part describes previous work on lane detection and object detection, and discusses the image and video processing techniques employed in this work. The third part is the main body of the project: it focuses on the algorithm development for road lane detection and departure warning as well as for safe distance warning, presents the system configurations for both, and discusses the project implementation and results in depth. The last part of the document deals with the conclusion and future recommendations; the appendixes and references are included under this part.

Chapter Two

Literature review and basics of image and video processing

2.1 Literature review

2.1.1 Introduction

With the application of an intelligent system, collisions can be avoided. Such a system mainly involves vision technology in the detection, warning and braking functions of the car, and uses a number of sensors and processors to mitigate collisions efficiently and effectively. The most useful sensor in vision technology is the video sensor, which captures images for detection purposes. In the past, vision systems were not widely used for collision avoidance, but the detection of objects and lane markings has now become an active research area in collision avoidance systems. Many algorithms have been proposed for lane departure warning and for object detection and tracking. Some of these algorithms are based on modeling techniques while others are based on feature-based techniques. Modeling techniques are robust to different noises, but require complex computation and various assumptions. Feature-based techniques use some feature of interest for object detection and tracking as well as for lane marker detection and departure warning; however, they are easily affected by noise and occlusions.

2.1.2 Lane detection and departure warning methods

The primary task of any collision avoidance system is to keep the car within its own lane, which avoids the many collisions that occur when the car moves out of its lane. Different algorithms have been proposed to develop such a system. The lane detection and departure warning methods are classified into:

Lane detection methods

Lane departure warning methods

2.1.2.1 Lane detection method

Basically, there are two groups of approaches for dealing with the lane detection problem: the feature-based technique and the model-based technique. The model-based technique uses a few parameters to represent the lane mapping or structure, but needs complex computation and estimation. The feature-based technique localizes the lines in perspective-view images, or in an inversely projected view, by combining low-level features such as painted lines or line segments detected by simple image segmentation. A summary of various methods used for lane detection follows.

a. Lane detection using Catmull-Rom spline

This method models the road lane by formulating the lane detection problem as determining a set of lane model control points [11]. It uses three sets of control points. The method first determines the ground model of the road and estimates what the lane will look like on the image plane. It then detects the vanishing point of the road lane and takes it as the first control point. Next it searches the edge image for a control point on the left lane model by estimation, and uses this point to get the corresponding point on the right lane model; these points on the left and right lanes form the second group of control points. The third group of control points lies at the intersection of the lanes with the image plane boundary. The method uses interpolation to connect these points and detect the road boundary.

b. Lane detection using the Hough transform

This method uses a feature-based technique with the normal perspective view of a single CCD camera to detect road lanes. It uses low-level image processing and the Hough transform, applying the Hough transform algorithm to extract regions that look like road markings [6].

c. Lane detection based on a linear parabolic lane model

This method is a model-based technique that detects the road lane using a linear parabolic model: a linear function is used to fit the near vision field, and a quadratic function fits the far field [4]. The algorithm is robust under different illumination conditions and can efficiently handle curved road lanes. It identifies the near and far fields using a certain threshold value. The lane boundary model can be described using three coefficients, whose values are computed by the weighted least squares method; the model is fitted to the lane using a lane boundary region of interest.

d. Lane detection using B-Snake [12]

This method uses the perspective effect of parallel lines, constructed with dual external forces, for generic lane boundaries or markings. Compared with other lane models it can describe a wider range of lane structures, since a B-Spline can form any arbitrary shape from a set of control points. In addition, it is robust against shadows, noise, and occasional missing or false markings, due to its use of the parallelism of roads on the ground plane. Furthermore, a robust algorithm called Canny/Hough Estimation of Vanishing Points (CHEVP) is presented for providing a good initial position for the B-Snake, and a minimum error method based on the Minimum Mean Square Error (MMSE) is proposed to determine the control points of the B-Snake model from the overall image forces on the two sides of the lane.

2.1.2.2 Lane departure warning system methods

The lane departure warning system is of wide importance in driver assistance systems. Various methods have been proposed and developed for it; most are based on model-based techniques, while others use sensors to issue the lane departure warning.

a. Lane departure warning system based on a linear parabolic lane model

Here the system assumes that the car is moving along the center between the left and right road lanes. It determines the left and right lane orientations (θl and θr) [4], takes the absolute value of the sum of the two orientations as a symmetry measure, and compares it against a certain threshold TH to issue the lane departure warning. It is based on a robust lane detection algorithm that computes both lane orientations θl and θr. The symmetry measure is computed at every frame, and a lane departure alarm is triggered when it is greater than TH.
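As a rough illustration, the departure test above reduces to a single threshold comparison. The MATLAB sketch below uses hypothetical orientation values in place of those supplied by the lane detector, and the threshold is an assumed value, not the one used in [4]:

% thetaL, thetaR: left and right lane orientations in degrees
% (hypothetical values; in practice computed at every frame by the lane detector)
thetaL = 52;
thetaR = -47;
TH = 10;                           % assumed departure threshold
symmetry = abs(thetaL + thetaR);   % symmetry measure of the two lanes
if symmetry > TH
    disp('Lane departure warning') % trigger the display and buzzer warning
end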

2.1.3 Object detection and tracking methods

Basically there are many obstacles that can cause car crash on the road. These

obstacles may be moving or stationary object. But most of the time its moving objects

that can cause a collision. To avoid such destruction the car should have the system that

can detect and track any object in front of it. There are many object detection

algorithms. Some of them are:

a. Background subtraction techniques

Background subtraction is a commonly used class of techniques for segmenting out objects in a scene, for applications such as obstacle collision avoidance. Background subtraction techniques emphasize three important attributes: foreground detection, background maintenance, and post-processing. The concept revolves around comparing an observed image with an estimate of the image as if it contained no objects of interest [10]. The areas of the image plane where there is a significant difference between the observed and estimated images indicate the location of the objects of interest. The name “background subtraction” comes from the simple technique of subtracting the observed image from the estimated image and thresholding the result to generate the objects of interest.

b. Moving object detection in wavelet compressed video

This method compares the wavelet transform (WT) of the current image with the WTs of past image frames to detect motion and moving regions in the current image without performing an inverse WT operation. Moving regions and objects can be detected by comparing the WT of the current image with the WT of the background scene, which can be estimated from the WTs of the past image frames [9]. If there is a significant difference between the two WTs, there is motion in the video. If there is no motion, the WTs of the current image and the background image should ideally be equal, or very close to each other due to the quantization process during compression.

This method is computationally effective, as it does all the processing in the WT domain; it never inverts the WT to obtain the actual pixels of the current image or the estimated background.

c. Contour-based moving object detection and tracking

This method is used to detect and track moving objects, including non-rigid ones. It is based on lines computed by a gradient-based optical flow and an edge detector. Edges are extracted using optical flow, and the edge detector output is restored as lines. Background lines of the previous frame are subtracted, and the edges are masked by thresholding the velocity magnitude to eliminate background edges with little motion. Unmasked edges are used for restoring lines and extracting contours of objects in subsequent processing. Contours of objects are obtained by applying snakes to clustered lines; contour extraction is performed through line restoration, line-based background subtraction, clustering, and active contours [8]. This method is robust because it applies edge-based features, which are insensitive to illumination changes.

d. Feature-based object tracking using a PTZ camera

Object tracking is an important task within the field of computer vision. The proliferation of high-powered computers and the increasing need for automated video analysis have generated a great deal of interest in object tracking algorithms. This work presents a feature-based object tracker that uses a pan-tilt (PT) camera to keep track of the target, the task being to keep the target at the center of the grabbed image. As the target moves in the real world, its position in the grabbed image is reported in subsequent frames by a feature-based tracking algorithm. The image position error is processed by a proportional-integral controller, and the camera is repositioned accordingly to place the target in the pre-specified image region [13].

2.2 Basics of image and video processing

Image and video processing has wide application in vision-based systems. It is used to extract the features needed for intelligent applications. Every video image captured by the video camera must be processed to retrieve information, such as road features and obstacle characteristics, that enhances the intelligence of the system. The parts of image and video processing with wide application in this project are:

Noise removal and neighborhood processing

Image enhancement and adjustment

Image segmentation

Morphological operations

Region based processing

Edge detection

Image feature extraction

Image transforms

2.2.1 Noise removal

Noise can be introduced into the captured video images in various ways: from atmospheric conditions, from the camera, and from the material used for image capture. To make the vision system robust, the corrupted image properties should be restored. This can be accomplished using:

Linear filtering

Non-linear filtering

Neighborhood processing involves:

Defining a center point (x, y)

Performing an operation on the pixels in a predefined neighborhood

Assigning the output of the operation to the center point

In linear noise filtering the center point is assigned the sum of the values of the neighborhood pixels multiplied by the filter coefficients, but this has little influence on real noise. This can be done by:

a = imfilter(b, c, 'replicate')
where a = filtered image
      b = input image
      c = filter mask

Non-linear filtering is based on non-linear operations involving the pixels of the neighborhood. There are many non-linear filters, but the median filter can remove small pixel noise: noise is reduced by calculating the median of the neighborhood elements and storing the value in the central element. The median filter is a 2-D filter that can be computed as follows:

a = medfilt2(b, [m, n])
where a = filtered image
      b = input image
      [m, n] defines the neighborhood of size m x n over which the median is computed

2.2.2 Image enhancement and intensity adjustment

The aim of image enhancement is to improve the interpretability or perception of information in images for human viewers, or to provide better input for other automated image processing techniques [7].

Once noise is removed from an image, some of its features are restored. To work with such images and extract their features, the image must be enhanced. This can be done by:

Scaling the image

Histogram equalization

Histogram equalization is a method that increases the global contrast of many images. Through this adjustment the intensities can be better distributed on the histogram, allowing areas of lower local contrast to gain higher contrast. Histogram equalization transforms the intensity values so that the histogram of the output image approximately matches a specified histogram. It can be performed using the histeq (histogram equalization) function, which produces an output image with intensity values evenly distributed throughout the range.

Image scaling is the process of resizing a digital image. Scaling is a non-trivial process that involves a trade-off between efficiency, smoothness and sharpness. As the size of an image is increased, the pixels that comprise the image become increasingly visible, making the image appear soft; conversely, reducing an image tends to enhance its smoothness and apparent sharpness. In this project, image scaling improves the brightness of the image by simply multiplying every pixel value by a constant number.
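As a brief sketch of both enhancement operations in MATLAB (the file name and the scaling factor are assumptions for illustration):

f = imread('road.jpg');                  % hypothetical captured frame
g = rgb2gray(f);                         % intensity image
geq = histeq(g);                         % spread intensities over the full range
gbright = im2uint8(1.5 * im2double(g));  % brighten by a constant factor (clipped at white)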

2.2.3 Image segmentation

Image segmentation is the process of dividing a digital image into multiple regions; it reveals the objects and boundaries in an image. Each pixel in a region shares some similar characteristic, such as color or intensity. The important segmentation method used in this project is background subtraction: the process of separating foreground objects from the background in a sequence of video frames. Many methods exist for background subtraction, each with different strengths and weaknesses in terms of performance and computational requirements. Frame difference is arguably the simplest form of background subtraction: the previous frame is simply subtracted from the current frame, and if the difference in pixel values for a given pixel is greater than a threshold T, the pixel is considered part of the foreground.

|frame i – frame (i-1)| > T ……………………………………………… (2.1)
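A minimal MATLAB sketch of this frame-difference test, assuming the captured video is read back from a hypothetical file through the VideoReader interface and that the threshold value is chosen for illustration only:

v = VideoReader('road.avi');      % hypothetical captured road video
prev = rgb2gray(readFrame(v));    % frame (i-1)
curr = rgb2gray(readFrame(v));    % frame i
T = 25;                           % assumed threshold
d = imabsdiff(curr, prev);        % |frame i - frame (i-1)|
foreground = d > T;               % binary mask of the moving pixels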

2.2.4 Morphological operation

Morphology refers to image processing operations that process images based on shapes. Morphological operations apply a structuring element to an input image, creating an output image of the same size; the value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbors.

In a vision-based system, morphological operations on video images are of great importance, for example for labeling and measuring objects in a binary image. Generally there are three types of images:

• RGB images - true color images

• Gray scale intensity images

• Binary images - logical arrays of 0s and 1s

Each of these images can be represented by different data classes: double, uint8 and uint16. The double class is used for carrying out operations on images, while uint8 is used for storing images, as it takes only a limited amount of memory; for different uses, conversion between image data classes is necessary. Most morphological operations are carried out on gray scale or binary images. The morphological operations of greatest importance in this project are:

• Removing loosely connected components

• Labeling object boundaries in the images

Using the function BW = bwareaopen(a, p) it is possible to remove from image a all loosely connected objects that have fewer than p pixels; this is simply the morphological opening of a binary image. With the help of the different toolbox functions in MATLAB we have determined the object characteristics that play important roles in this project.
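For illustration, a hedged sketch of this cleanup together with the labeling and measuring step; the minimum blob size is an assumed value, and the input mask is taken to be the foreground image from the previous section:

bw = bwareaopen(foreground, 50);                    % drop blobs smaller than 50 pixels (assumed)
[L, n] = bwlabel(bw);                               % label the n remaining objects
stats = regionprops(L, 'Centroid', 'BoundingBox');  % measure each labeled object
% stats(k).Centroid is the basis of the centroid-based tracking used later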

2.2.5 Edge detection

Edge detection is the process of finding sharp contrasts in intensity in an image; it is used to identify the edges in the image. An edge detection function looks for places in the image where the intensity changes rapidly, using one of two criteria:

Places where the first derivative of the intensity is larger in magnitude than some threshold

Places where the second derivative of the intensity has a zero crossing

There are different types of edge detectors, including the Sobel, Canny, Prewitt and Gaussian edge detection methods. Among these, the Canny edge detector is the most robust, as it withstands noise and extracts the most important edges from images. It scans over the whole image using Gaussian masks.
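In MATLAB this is a single call (the input file is a hypothetical stand-in for a grabbed frame):

g = rgb2gray(imread('road.jpg'));  % hypothetical road image, converted to gray scale
bwEdges = edge(g, 'canny');        % binary edge map from the Canny detector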

2.2.6 Hough transform

The Hough transform is a feature extraction technique for identifying the locations and orientations of certain types of features in a digital image. It describes features in some parametric form. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. The classical Hough transform is commonly used for the detection of regular curves such as straight lines, circles and ellipses. Line segments can be described in a number of ways, but a convenient equation for describing lines uses the parametric form:

x cos θ + y sin θ = r ……………………………………………………… (2.2)

where r is the length of the normal from the origin to the line, and θ is the orientation of r with respect to the x axis. For any point (x, y) on the line, r and θ are constant.


Fig 2.1 Cartesian coordinate and polar coordinates

In the image space the coordinates of the edge points (x, y) are known, while r and θ are unknown variables. The values of r and θ can be found from the edge points using equation (2.2). The plot of possible (r, θ) values yields sinusoids in the Hough parameter space, and collinear points in the Cartesian image space yield curves that intersect at a common (r, θ) point. The input to the Hough transform should be the result of edge detection. The Hough transform is applied by quantizing the Hough parameter space into finite intervals called accumulator cells [6]. The accumulator space is plotted with r as the ordinate and θ as the abscissa. As the Hough transform algorithm runs, each (x, y) edge point is transformed into an (r, θ) curve.

Curves generated by collinear points in the image intersect at peaks (r, θ) in the Hough transform space, so the corresponding accumulator cells get incremented. This indicates the existence of a straight line in the image space. The detected line segments can then be extracted on the basis of Hough peaks, that is, local maxima in the accumulator space; the extraction is carried out by thresholding, which implements the voting procedure.
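The MATLAB image processing toolbox wraps this whole procedure in three functions. The sketch below is illustrative only; the peak count, threshold fraction and line-linking parameters are assumptions, not the values tuned for this project:

g = rgb2gray(imread('road.jpg'));                      % hypothetical road image
bw = edge(g, 'canny');                                 % edge map is the transform input
[H, theta, rho] = hough(bw);                           % accumulator array over (r, theta)
peaks = houghpeaks(H, 4, 'Threshold', 0.4*max(H(:)));  % local maxima = candidate lines
lines = houghlines(bw, theta, rho, peaks, 'FillGap', 20, 'MinLength', 40);
% each element of lines carries point1, point2, theta and rho for one detected segment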

Chapter 3

System description

3.1 System design and Algorithm development

3.1.1 System design

This part of the project describes the specification and interfacing of the hardware and the general block diagram of the system. The project uses different input devices to capture signals in front of the car and processes the digital values of these signals using the PC processor to issue different forms of warning. The devices are the infrared laser radar, video camera, switch, CPU processor, display and buzzer. The rating and processing speed of the CPU processor greatly affect the response time of the system. Using appropriate software, the CPU processor extracts lane and object information from the video captured by the video camera and from the infrared laser radar.

3.1.1.1 Hardware specification

Intelligent system based car collision prevention is a collision avoidance system that uses different sensors and vision technology. Vision technology uses image processors along with CPU-compatible processors to efficiently and effectively detect lanes and objects, and a DSP signal processor is normally used to extract information from the sensors for collision avoidance and pre-collision warning. But due to the unavailability of image processors and digital signal processors, we decided to use only a CPU-compatible processor to carry out the collision mitigation; we therefore have to use a personal computer with large memory. The personal computer used has the following specification:

Memory: above 1 GB DDR2 SDRAM
Frequency: 3.4 GHz
Operating system: Windows XP, Windows Vista
Processor: Intel Pentium Dual-Core E5200 (2 MB L2 cache, 2.5 GHz, 800 MHz FSB)
Storage: 160 GB SATA hard drive, 7200 rpm

The infrared laser radar is used to determine the range and relative speed of an object. It has lower loss and is more effective than radio wave radar. It has a transmitter, which transmits light, and a receiver, which receives the reflected light. The infrared laser radar has the following specification:

Wavelength: 850-950 nm
Maximum viewing distance: 120 m
Pulse width: 100 ns, f = 1 kHz

An interface design is needed to connect the infrared laser radar to the CPU, as it cannot be connected directly through a USB or serial port. The infrared laser radar displays the range and relative speed of the obstacle in front of the car in digital form, but this display cannot be interfaced directly to the CPU. By tapping the line that feeds the analog electrical signal to the analog-to-digital converter inside the laser radar, we can get an equivalent electrical signal. Using this electrical signal and an analog-to-digital converter, it is possible to produce a digital signal that the computer can process; the output of the analog-to-digital converter can be accessed easily through the PC serial port. Since the input signal from the radar has a high frequency, an analog-to-digital converter with fewer than 16 bits is preferable. The converter chosen for this operation is the TLC548, a CMOS analog-to-digital converter (ADC) integrated circuit built around an 8-bit switched-capacitor successive-approximation ADC. The device is designed for serial interfacing with a microprocessor or peripheral through a 3-state data output and an analog input. The maximum I/O CLOCK input frequency of the TLC548 is 2.048 MHz.

Pin diagram and pin description of the TLC548 analog-to-digital converter

Fig 3.1 Pin diagram of TLC 548 ADC

The TLC548 is an 8-bit analog-to-digital converter with serial output, used in microprocessor peripheral or standalone operation. It has an on-chip software-controllable sample-and-hold function. It works with a 4 MHz typical internal system clock and accepts a wide supply range of 3 V to 6 V. It is more robust than other ADCs in that it consumes low power, a maximum of 15 mW.

Pin description of the TLC548

Pins 1 and 3 provide the reference voltages used for the differential input.

Pin 2 is the analog input; it accepts the electrical signal from the infrared laser radar.

Pin 4 is ground.

Pin 5 is the power supply.

Pin 6 is the I/O clock, used for data control and operation; together with the internal system clock it allows high-speed data transfer.

Pin 7 is the data output terminal.

Pin 8 is the chip select signal, used for data control.

Fig 3.2 Functional block diagram of TLC 548 ADC

The TLC548 ADC has special features including versatile control logic, an on-chip sample-and-hold circuit that can operate automatically or under microprocessor control, and a high-speed converter with differential high-impedance reference voltage inputs that ease ratiometric conversion, scaling, and isolation of the circuit from logic and supply noise.

3.1.1.2 Principles of operation

The TLC548 is a complete data acquisition system on a single chip, containing an internal system clock, a sample-and-hold function, an 8-bit A/D converter, a data register, and control logic circuitry. For flexibility and access speed there are two control inputs: I/O CLOCK and chip select (CS). These control inputs and a TTL-compatible 3-state output facilitate serial communication with a microprocessor or computer. A conversion can be completed in 17 µs or less, while complete input-conversion-output cycles can be repeated every 22 µs for the TLC548.

The internal system clock and I/O CLOCK are used independently and do not require any special speed or phase relationship between them. This independence simplifies the hardware and software control tasks for the device: because the system clock is generated internally, the control hardware and software need only be concerned with reading the previous conversion result and starting the conversion using the I/O clock. When CS bar is high, DATA OUT is in a high-impedance condition and I/O CLOCK is disabled. The control sequence has been designed to minimize the time and effort required to initiate a conversion and obtain the conversion result. A normal control sequence is:

1. CS bar is brought low. To minimize errors caused by noise at CS bar , the internal

circuitry waits for two rising edges and then a falling edge of the internal system clock

after a CS bar falling edge before the transition is recognized. However, upon a CS bar

rising edge, DATA OUT goes to a high-impedance state. The most significant bit

(MSB) of the previous conversion result initially appears on DATA OUT when CS bar

goes low.

2. The falling edges of the first four I/O CLOCK cycles shift out the second, third,

fourth, and fifth most significant bits of the previous conversion result. The on-chip

sample-and-hold function begins sampling the analog input after the fourth high-to-low

transition of I/O CLOCK. The sampling operation basically involves the charging of

internal capacitors to the level of the analog input voltage.

3. Three more I/O CLOCK cycles are then applied to the I/O CLOCK terminal and the

sixth, seventh, and eighth conversion bits are shifted out on the falling edges of these

clock cycles.

4. The final clock cycle is applied to I/O CLOCK. The on-chip sample-and-hold

function begins the hold operation upon the high-to-low transition of this clock cycle.

The hold function continues for the next four internal system clock cycles, after which

the holding function terminates and the conversion is performed during the next 32

system clock cycles, giving a total of 36 cycles. After the eighth I/O CLOCK cycle, CS

bar must go high or the I/O clock must remain low for at least 36 internal system clock

cycles to allow for the completion of the hold and conversion functions.
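To make the sequence concrete, the sketch below shows how the handshake lines might be driven from MATLAB's legacy serial interface, assuming a typical TLC548-to-serial-port wiring in which DTR powers the circuit, RTS drives I/O CLOCK, and DATA OUT is read back on CTS. The port name and these pin roles are assumptions for illustration, not the exact wiring of Fig 3.3.

s = serial('COM1');              % PC serial port (assumed name)
fopen(s);
s.DataTerminalReady = 'on';      % DTR supplies power to the circuit (assumed wiring)
value = 0;
for k = 1:8                      % clock out the 8 bits of the previous conversion, MSB first
    s.RequestToSend = 'on';                        % rising edge of I/O CLOCK (assumed wiring)
    bit = strcmp(s.PinStatus.ClearToSend, 'on');   % DATA OUT sampled on CTS (assumed wiring)
    value = 2*value + bit;                         % shift the bit into the result
    s.RequestToSend = 'off';                       % falling edge shifts out the next bit
end
fclose(s);
delete(s);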

The ADC can be connected to the serial port as follows. The output of the TLC548 is not directly suitable for standard serial data reception, so the circuit below uses the serial port handshaking lines in a non-standard way, which allows the communication between the computer and the converter chip to be implemented with as few components as possible. The circuit takes all the power it needs from the PC serial port. The computer's RS-232 serial port is asynchronous, so the handshaking control lines must be used, because the A/D converter's serial port is synchronous; its handshaking requirements are minimal, needing only one wire for the clock and one or two wires for data, which can be provided by the RS-232 port's handshake lines. The voltage conversion of signals from the RS-232 port to the input pins of the ADC IC is done using a combination of series resistors and the internal input protection diodes of the IC, and the output of the ADC is able to directly drive signal levels that the RS-232 port can detect. Using a 25-pin serial port connector, the designed interfacing circuit is as follows. The circuit has two parts: the TLC548 ADC IC chip and the handshake lines formed by diodes, resistors and capacitors. All diodes are 1N4148 or similar. All resistors connected to the IC are 30 kΩ, and their function is to protect the IC inputs, together with the diodes, against overvoltage. The 1 kΩ resistor, one diode, the 5.1 V zener diode, and the 10 µF and 100 nF capacitors make up the power supply, which takes the circuit power from the serial port's Data Terminal Ready pin. A 25-pin female D connector is used to connect the ADC to the PC serial port. The circuit diagram of the interface between the PC serial port and the ADC is shown in the figure below.

Fig 3.3 Interfacing circuit between infrared laser radar and CPU

The video camera can be interfaced to the computer using a USB port and can be accessed by MATLAB once compatible driver software is installed for it. The monitor screen can be used as the display unit. The buzzer can be connected to the host PC using a USB port, or it can be wireless. The video camera should be placed at the position of the rear-view mirror in front of the car; it should be in normal perspective view for lane detection and departure warning, and its viewing direction should be horizontal for object detection and tracking. The display unit is positioned near the speedometer, a position from which the driver can readily view it. The buzzer can be placed perpendicular to the driver's seat. The infrared laser radar is placed in front of the engine air intake so that it can view any small or big obstacles. The system is controlled by the driver, who can use limit switches to inhibit the warning when he or she intends to overtake another car or is parking.

Fig 3.4 Video camera, infrared laser radar and buzzer alignment on the car

Fig 3.5 The position of display unit, processor and buzzer within the car when viewed from the top left

Fig 3.6 General block diagram of the system

3.2 Lane detection and departure warning system

3.2.1 Introduction

The lane detection and departure warning system is developed to detect road lane markings and warn the driver if the car is moving out of its lane. To warn the driver, the system must first detect the lane markings. In this project we have applied a feature-based lane detection technique to detect and track road lane markings; the developed system is robust when the lane markings are visible. The departure warning is issued based on the deviation from the center of the video. The system is implemented using a feature-based technique with the normal perspective view of a single CCD camera that captures video at a specific frame rate. The system applies image segmentation to separate out unwanted objects and the Hough transform algorithm to detect the presence of lane markers; by applying the feature properties of lane markers it is possible to delete fake lines. The system has some drawbacks when lane markings are missing or heavily affected by noise, but it is effective on highways with lane markings.

3.2.2 System configuration

This part of the project describes the system block diagram for the lane detection and departure warning system. The system is a vision-based system that analyzes video and issues decisions. It is composed of an input device to capture road video and output devices to deliver the decisions: the input device is a CCD video camera, and the output devices are the display device and the buzzer. The input video is processed by the processor to give the output.

Fig 3.7 Block diagram of lane detection and departure warning system
(Blocks: video camera → CPU processor → display device and buzzer)

The video camera is used to capture road video at a specific frame rate. Nowadays most cameras are available with a USB interface: by installing the driver for the camera, the computer detects the device whenever it is connected, and for a CCD camera connected through a grabber card, Windows detects the device automatically. Different cameras can be used to capture video, but we preferred a CCD camera because:

It has good resolution

The probability of noise being introduced into the image is low

It provides good quality images

In this project the video is captured at a rate of 30 frames per second with a resolution of 640 x 480. The display device is used to present the visual output of the system for lane departure warning, and the buzzer delivers the lane departure warning in audio form.

3.2.3 Approach to lane detection and departure warning

Video images of the lane are captured using the camera, and certain features are used to distinguish lanes from the other objects in the image. Using gray scaling followed by thresholding, we get a binary image containing different objects and lines. The road lanes, however, have certain features that identify them among the objects and fake lines in the captured road image:

Lane markings have similar width throughout the video

The distance that separates the left lane and the right lane is constant

Lane markings are thin, long lines in the video image

Lane markings are continuous lines, or the road lane consists of a number of candidate continuous lane markings

Lane markings have a unique shape that objects and fake lines do not possess

The lanes are parallel to each other

A lane that is not continuous has many of its line elements in parallel

By detecting the mid line between the left and right lanes it is possible to issue the lane departure warning, as sketched below. This is explained in depth in the algorithm development part.
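A hedged sketch of how such feature checks and the mid-line deviation might look in MATLAB, starting from the houghlines output shown in Section 2.2.6; every numeric threshold here is an assumed value for illustration:

% lines: line segments returned by houghlines (Section 2.2.6)
isLane = false(size(lines));
for k = 1:numel(lines)
    len = norm(lines(k).point1 - lines(k).point2);     % segment length in pixels
    isLane(k) = len > 40 && abs(lines(k).theta) < 70;  % keep thin, long, steep lines only
end
cand = lines(isLane);            % candidate lane markings with fake lines deleted
% with a left and a right candidate found, compare their mid line against the
% video centre (320 for a 640-pixel-wide frame)
xMid = (cand(1).point1(1) + cand(2).point1(1)) / 2;
offset = abs(xMid - 320);        % departure measure to compare against TH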

3.2.4 Algorithm development

This part explains the algorithms developed to detect and track lanes. It treats a road lane as a long linear line or a combination of a number of small lane markings. It also includes the lane departure warning algorithm that warns the driver if the car has departed from its lane. The system is developed using a feature-based technique, taking the identified lane features into account. The algorithm is developed using the MATLAB video and image processing toolbox. MATLAB provides support for accessing the serial port (also called the COM port), the parallel port (also called the printer port or LPT port) and the USB port of a PC. Based on this, we connect the video camera to capture video and pass it to the PC for the video and image processing part. The algorithm developed for lane detection and departure warning is explained as follows.


Fig 3.8 flow chart of lane detection and departure warning (video input → frame grabbing → gray scaling → filtering → histogram equalization → image scaling → edge detection → Hough transform → Hough peaks → Hough lines → lane-marking test, deleting fake lines → compute and compare mid lines → if the deviation exceeds the threshold TH, display warning and sound buzzer)


3.2.4.1 Algorithm description

i. Video input

This is the device used to capture video images of the road lane for processing and lane extraction. It can be any camera that interfaces with the processor through a USB port. The camera should capture at a resolution of 640 × 480 and at 30 frames per second.

ii. Frame grabbing

A video is simply a sequence of images taken successively at a certain frame rate. Frame grabbing separates the video into this sequence of images. In this project the camera captures 30 images of the road lane per second; the rate actually processed depends on the computation time required, and memory and other resources should be taken into consideration when choosing it. The images are then ready for processing.
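For offline testing, a stored video file can be split into frames in the same way as Appendix C does; a minimal sketch (the file name road.avi is only an example):

avi = aviread('road.avi');        % read the whole video into a struct array
for f = 1:numel(avi)
    frame = avi(f).cdata;         % one RGB image per frame
    % process this frame through the steps below
end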

iii. Gray scaling

The captured images are RGB (true color) images. RGB images are not convenient for processing as they are composed of red, green and blue components. To simplify processing, each frame is converted to a gray scale (intensity) image, in which the value of each pixel is a single sample stored in 8 bits, giving 256 intensity levels ranging from black to white.

iv. Filtering

Noise may be introduced into the images during capture. Filters are used to remove this noise, improving image quality and helping further processing. This is done using a median filter.

v. Histogram equalization

For various reasons the intensity of light recorded at each pixel in a single band of the electromagnetic spectrum (e.g. visible light) varies, so differences in pixel intensity occur in the captured image and some image properties may be lost. Applying histogram equalization redistributes the intensity values over the image so that they become more uniform. This is particularly useful when shadows or foggy or rainy weather conditions exist. It can be implemented using the histeq (histogram equalization) function.

vi. Image scaling

Image scaling here means brightening the captured image, which has a wide influence on lane detection in rainy and foggy conditions. The image is scaled by multiplying every pixel value by a specific constant.
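Steps iii to vi map directly onto MATLAB functions, as used in Appendix A; a minimal sketch of the pre-processing chain for one frame:

g = rgb2gray(frame);       % iii. gray scaling to an 8-bit intensity image
g = medfilt2(g, [3 3]);    % iv. median filtering removes impulse noise
g = histeq(g);             % v. histogram equalization spreads the intensities
g = g*1.2;                 % vi. image scaling: brighten (uint8 values saturate)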

vii. Edge detection

The first step in lane feature extraction is edge detection. It is implemented with the Canny edge detector, which smooths the image with a Gaussian mask, scans the whole image and applies two threshold values (hysteresis), outputting a binary image of the detected edges. It is computed with the edge function.

viii. Hough transform

After detecting the edges of objects in the image we apply the Hough transform to detect lines [MathWorks]. The Hough transform detects lines using the parametric equation of a line:

rho = x·cos(theta) + y·sin(theta) ……………………………………….3.1

The variable rho is the distance from the origin to the line along a vector perpendicular to the line; theta is the angle between the x-axis and this vector. The hough function generates a parameter space matrix whose rows and columns correspond to rho and theta values respectively. An accumulator array indexed by [rho][theta] counts how many pixels lie on each line. To avoid detecting fake lane markings, rho should be greater than 10 and theta should lie between 30 and 150 degrees.

ix. Hough peaks

The houghpeaks function works with the hough function to detect potential lines in the image. The Hough transform yields a parameter space with rho as rows and theta as columns; houghpeaks finds the peak values in this space, and each peak corresponds to a potential line in the image.


x. Hough lines

Once peaks are identified in the Hough parameter space, lines evidently exist there. They are plotted using the houghlines function, which converts from the polar coordinates (rho, theta) of the parameter space back to Cartesian coordinates in the image.
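Steps vii to x correspond to the edge and hough family of Image Processing Toolbox functions; a minimal sketch (the peak count and the gap/length parameters are assumed tuning values):

e = edge(g, 'canny');                                % vii. binary edge map
[H, theta, rho] = hough(e);                          % viii. accumulator array
P = houghpeaks(H, 5, 'Threshold', 0.3*max(H(:)));    % ix. strongest peaks
lines = houghlines(e, theta, rho, P, ...
    'FillGap', 20, 'MinLength', 40);                 % x. line segments in the image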

xi. Determine lane markings

The output of houghlines is that every line on the road is detected; a detected line may be a true road lane or a fake lane marking. The true lane markings are therefore selected using the lane features described in the approach section above. This involves:

Sorting the candidate lines by their position from left to right

Choosing the lane candidate with the biggest count number as the lane marking

Around each line cluster, choosing the longest lane candidate as the lane marking in the real scene

Deleting the fake lane marking candidates

xii. Determine mid line

The mid line of the detected left and right lanes is used to decide whether the car is moving out of its lane. The mid line is computed and stored for each frame of the video, and the current mid line is compared against the mid line of the previous frame. If the deviation exceeds a threshold value, the lane departure warning is issued.
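A minimal sketch of this departure test (leftLaneX, rightLaneX, midPrev and the threshold TH are assumed variables produced by the preceding steps):

midCurr = (leftLaneX + rightLaneX)/2;   % mid line of the current frame
if abs(midCurr - midPrev) > TH          % deviation from the previous frame
    disp('Lane departure warning');     % visual warning on the display
    beep;                               % audible warning through the buzzer
end
midPrev = midCurr;                      % store for the next frame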


3.2.5 Algorithm for lane detection and departure warning system

Here is the algorithm used for lane detection and departure warning. It can be implemented in the MATLAB programming language; MATLAB has built-in functions that carry out several parts of this algorithm.

1. Create video input object

2. Set properties of video input object

3. While (acquired frames < infinity)

i. Access these frames

ii. for (present position = initial position : final position)

• separate color values in RGB images

• compute luminance value Y = 0.299R + 0.587G + 0.114B

• assign value to present position

initial position is the 1st row, 1st column of the image; final position is the last row, last column of the image

iii. for (present position = initial position : final position)

• scan all surrounding elements

• sort all the values

• calculate the median of them

• assign the median value to the present position

initial position is the 2nd row, 2nd column of the image; final position is the row and column before the last row and column

iv. for (present position = initial position : final position)

• create two arrays H and T of length 256, initialized to zero

• scan every pixel and increment H: if pixel x has intensity p, then H[p] = H[p] + 1

• form the cumulative histogram Hc and store it in H

• determine array T from the stored values: T[p] = ((G − 1)/(M·N))·Hc[p], where G = number of gray levels, M = rows, N = columns

• change the gray level of the image to q = T[p]

initial position is the 1st row, 1st column of the image; final position is the last row, last column of the image


v. for (present position = initial position : final position)

• multiply each pixel value by a constant number k

• store the output at the present position

vi. for (present position = initial position : final position)

• convolve the image with a Gaussian filter

• convolve the smoothed image with the derivative of the Gaussian

• find local maxima

• carry out hysteresis thresholding

vii. for (present position = initial position : final position)

• quantize the parameter space p: [amin…amax][bmin…bmax] = accumulator array

• for each edge point (x, y): for (a = amin; a < amax; a++) { b = −x·a + y; (p[a][b])++ }

• find local maxima in the accumulator array

• map each maximum back to image space: x = ρ·cos β, y = ρ·sin β

viii. for (present position = initial position : final position)

• determine the statistics of the road lane

• for each candidate: compare the candidate against the determined statistics and delete candidates that fail to meet them

• impose the surviving lanes over the image

• calculate the end points of each candidate

• determine the mid line of the left and right candidates

• compare the mid line of the current frame to that of the previous frame

• issue a warning


3.3 Safe distance warning

3.3.1 Introduction

Almost continuously, someone in the world dies in a traffic accident, and many more suffer injuries. Collisions often occur when the driver loses attention to the obstacle in front: when the obstacle speeds up or slows down it may enter the safe distance of the car, making a collision likely. Collisions also occur when pedestrians suddenly cross the road. Moreover, the economic losses caused by traffic accidents are enormous. A frontal collision can be avoided if a safe distance is maintained between the car and any object in front of it, and a considerable number of accidents can be avoided by recognizing a hazard in sufficient time and making the appropriate driving maneuvers. Such actions can be achieved by warning the driver in sufficient time, which requires suitable sensors to identify the hazardous situation. In this project laser radar sensors are used to determine the range and speed of an upcoming obstacle. The safe distance warning is based on the time gap between the car and the obstacle, and warnings are issued at two points before a collision occurs: the first warning is visual, while the second is audible.

3.3.2 Safe distance determination

Different safe distance determination algorithms have been proposed by different car manufacturers based on different criteria and conditions. The safe distance warning system uses several sensors and processors to detect the presence of objects within the safe distance of the moving car: an infrared laser radar, a video camera and wheel speed sensors capture the road information and the state of the car itself. The following conditions must be considered before passing a safe distance warning:

Time gap to collision

Relative speed between the car and the obstacle

Range between the car and the obstacle


Considering the above criteria, the system configuration for safe distance determination and warning is as follows.

Fig 3.9 block diagram of safe distance warning system (infrared laser radar → ADC → host PC → buzzer)

The infrared laser radar operates in the same way as any other radar, the main difference being that it uses infrared radiation rather than radio waves. It is highly efficient at detecting both moving and non-moving obstacles and uses the light detection and ranging (LIDAR) principle. LIDAR-based systems offer advantages such as small size, simple assembly and relatively low cost. The laser-based system measures the distance of the obstacle traveling in front of the car by triangulation or time-of-flight measurement, where the laser pulses are reflected by the preceding obstacle. The sensor uses a high-power laser diode to transmit infrared light pulses with a wavelength in the range of 850 nm to 950 nm. A high-speed PIN photodiode or avalanche photodiode receives the light reflected by the preceding vehicle and determines the distance and the relative speed of the obstacle in front. The distance to one or more objects is determined from the delay time of the reflection:

R = (C × t)/2 ………………………………………………………………….3.2

where C is the speed of light and t is the time delay. The speed of the obstacle is determined by taking several range samples within one second and averaging the change between them.
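As an illustration, the range and relative speed could be computed from the measured delays as follows. This is a sketch in which t is an assumed vector of delay samples collected within one second at a sample rate Fs:

c = 3e8;                     % speed of light in m/s
R = c.*t/2;                  % eq. 3.2 applied to each delay sample
Vrel = mean(diff(R))*Fs;     % average range change per second = relative speed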

3.3.3 Algorithm development

Various safe distance warning algorithms have been proposed previously. In this project there are two safe distance warning methodologies. The first safe distance is set by the maximum viewing distance of the video camera: the selected CCD camera can view up to 80 m, so when an obstacle enters this distance the camera starts to capture it and the first safe distance warning is passed to the driver in visual form, drawing early attention to the obstacle in front. The second safe distance is the critical safe distance. We used


HONDA's algorithm to determine this critical safe distance, which is the minimum distance the driver needs to avoid a collision:

D = 2.2·Vrel + 6.2 …………………………………………………….. …….. 3.3

where Vrel is the relative velocity determined by the infrared laser radar. If an obstacle enters this critical distance the second warning is passed in audible form. Here is the algorithm used for the second warning:

1. Create input object

2. Set properties of input object

3. Read serial port

4. Determine safe distance

5. Compare safe distance to range

6. Pass audio warning
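A minimal sketch of this loop, assuming the radar interface reports range and relative speed as text over COM1 (the port name, baud rate and packet format are assumptions):

s = serial('COM1', 'BaudRate', 9600);   % 1-2. create and configure input object
fopen(s);                               % open the serial port
while true
    R    = fscanf(s, '%f');             % 3. range from the laser radar, in m
    Vrel = fscanf(s, '%f');             %    relative speed, in m/s
    D    = 2.2*Vrel + 6.2;              % 4. eq. 3.3: critical safe distance
    if R < D                            % 5. compare safe distance to range
        beep;                           % 6. pass the audio warning
    end
end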


3.3.4 Object detection and tracking for visual safe distance warning

3.3.4.1 Introduction

Many types of collision can be avoided if the car can detect and track any obstacle in front of it. Object detection is the process of detecting objects in the video captured by the CCD camera using digital image processing techniques. Detecting an object is not enough to avoid a collision; it must be tracked until it moves out of the region of interest. Object tracking is the process of locating a moving object over time using a camera: an algorithm analyzes the video frames and outputs the location of moving targets within each frame. Different cameras can be used for this application, but a CCD camera is robust to the various conditions that introduce noise.

Object detection is implemented using background subtraction and the object is tracked using blob analysis. Model based techniques are the most accurate for object detection, since certain object features can be lost to noise, but they require complex computation and large memory when using MATLAB. Instead, the noise that degrades the features is removed using noise removal methods, and the object is detected and tracked using background subtraction and blob analysis. Tracking is based on the centroid of the detected object.

3.3.4.2 System configuration

This part describes the block diagram of object detection and tracking. To detect and track an obstacle in front of the car there must be a device that tells whether an object is present or not; this is done by the camera mounted on the car. The camera calibration affects the maximum viewing distance and how objects are detected: the viewing direction should be horizontal and the camera should be mounted at the center of the car. The system is composed of input devices, output devices and a processor. The input device, mainly the camera, captures any existing obstacle; the output device is the display unit that shows the detected object; and a PC can process the video images, provided MATLAB is installed on it.


Fig 3.10 block diagram of object detection and tracking (camera → processor → display unit)

3.3.4.3 Approach to object detection and tracking

Object detection and tracking is the act of detecting and locating obstacles captured by the video camera. With the camera installed on the car it is possible to capture any obstacle in front up to 80 m away. Using digital image processing techniques the captured object is differentiated from the background. Object detection and tracking has four main parts:

Noise removal

Segmentation

Feature extraction

Blob analysis

a. Noise removal

This is the first part of object detection and tracking; it removes noise from the video frames. In real-time conditions various kinds of noise affect the image quality, and occlusion due to shadows degrades the images to a great extent. Noise introduced into the images is removed by median filtering.

b. Segmentation

After noise removal, image segmentation separates the background from foreground objects; shadows would otherwise reduce the efficiency of object detection. The segmentation is performed by background subtraction. Of the many background subtraction variants, the one used here is frame differencing followed by thresholding: the system compares consecutive video frames and excludes the pixels that exist in both frames. Some residual background pixels remain after the comparison; these are removed by thresholding.
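A minimal sketch of this frame differencing, using the same 0.2 threshold as Appendix D (frameN and frameN1 stand for two consecutive frames):

d  = imabsdiff(rgb2gray(frameN), rgb2gray(frameN1));  % cancel common pixels
bw = im2bw(d, 0.2);                                   % remove residual background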


c. Feature extraction

After thresholding, the foreground objects remain along with some effects from shadows. These effects are removed using the Sobel edge detector. Sobel filtering is an effective technique for extracting the boundaries in an image: the Sobel operator calculates the gradient of the image intensity at each point, giving the direction of the largest increase from light to dark and the rate of change in that direction, determining the intensity variation along the x and y directions for every pixel. Every object has specific features such as color or shape; in this project the object is extracted using its color (intensity) feature. The output of the Sobel operator displays the intensity edges of objects, which are nothing but the object's color feature. The pixel coordinates of the first hits of the intensity values from the top, bottom, left and right are stored, and a rectangular bounding box is plotted within the limits of these values.

d. Blob analysis

After the object edge is enclosed by the bounding box, the centroid of the detected object is determined; this is what enables tracking. Blob tracking takes the object pixel coordinates from the detection process as input and plots the centroid position in each frame. Once the centroid is calculated it is easy to plot the location of the moving object from frame to frame.
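A minimal sketch of the bounding box and centroid computation using regionprops (bw is the thresholded difference image from the segmentation step):

stats = regionprops(bw, 'BoundingBox', 'Centroid');
imshow(bw); hold on;
for k = 1:numel(stats)
    rectangle('Position', stats(k).BoundingBox, 'EdgeColor', 'r');  % enclose object
    plot(stats(k).Centroid(1), stats(k).Centroid(2), 'r*');         % tracking point
end
hold off;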


3.3.4.4 Algorithm of object detection and tracking

1. Create video input object

2. Set properties of video input object

3. While (acquired frames < infinity)

i. Access these frames

ii. for (present position = initial position : final position)

• separate color values in RGB images

• compute luminance value Y = 0.299R + 0.587G + 0.114B

• assign value to present position

initial position is the 1st row, 1st column of the image; final position is the last row, last column of the image

iii. for (present position = initial position : final position)

• scan all surrounding elements

• sort all the values

• calculate the median of them

• assign the median value to the present position

initial position is the 2nd row, 2nd column of the image; final position is the row and column before the last row and column

iv. for (present position = initial position : final position)

• select the first frame (N) as background

• subtract frame N from frame N+1

• compare the output to the threshold

• change the background to frame N+1

• assign the output of thresholding to the current position

initial position is the second frame; final position is the last frame

v. for (present position = initial position : final position)

• convolve frame N with the Sobel filter

• determine the bounding box

• determine the centroid


Here is the flow chart for object detection and tracking

Fig 3.11 flow chart for object detection and tracking


Chapter Four

Results and Discussion

4.1 Implementation and results

So far we have designed the system and developed the algorithms for its implementation. In this project lane detection is performed on video frames captured by the frame grabber. The visual safe distance warning uses object detection and tracking to give the driver the initial visual warning; its prototype is implemented with a web camera on a sequence of video frames. The lane detection system uses statistics of the lane to distinguish the road lane from fake lines, with the Hough transform doing the bulk of the lane detection. Obstacle detection is implemented on the sequence of video frames using the background subtraction method. The outputs of the system are described below. The core of object detection and tracking in this project is background subtraction: the output of the frame grabber is converted to a gray level image and background subtraction is then applied.

Background subtraction differentiates an object from the background using frame differencing followed by thresholding. It compares two consecutive frames and cancels the common pixels; after the common pixels are cancelled, only the intensity of the foreground object remains.

Fig 4.1 images from video frame sequence.


Fig 4.2 images from consecutive video sequence.

Fig 4.3 output of background subtraction for the images of Fig 4.1

Fig 4.4 output of background subtraction for the images of Fig 4.2

As the figures above show, background subtraction suppresses the background relative to the foreground objects. The two men move from frame to frame, so the features that represent them are not suppressed. Background subtraction identifies objects not only by their movement from frame to frame but also by their color feature; white objects are especially easy to identify. However, with the camera moving at high speed and in the presence of strong shadows, some object features may not be completely identified. This was the major problem in this project, but it can be mitigated by applying the Sobel filter after background subtraction. After background subtraction, only the intensity of the foreground object remains. Using this intensity it is possible to determine the bounding box that encloses the extremes of the intensity region. Once the bounding box is determined the object is detected by calculating the centroid of the bounding box, and the object is tracked through this centroid. The output of object detection and tracking using intensity (color) variation is shown below.

Fig 4.5 output of object detection and tracking from video sequence

Fig 4.6 a) output of object detection and tracking from video sequence (panels a–e); b) the lorry detected on the road lane.


Obstacle detection and tracking with a bounding box helps the driver identify the obstacle clearly. The problem with this intensity based detection arises when shadows and occlusion exist: when obstacles come close to each other their bounding boxes may merge, and they may be detected as a single object with the centroid pointing at the middle of the distance between them.

The lane detection subsystem applies region based processing to identify regions that look like road lanes, using image statistics to separate true lanes from fake ones. To detect lane markings efficiently we applied the Hough transform method: it takes the output of Canny edge detection as input and creates an accumulator array to compare the lines present in the image. Road lanes are usually the longest and thinnest lines in the image, so they produce the maximum peaks in the accumulator array created by the Hough transform.

Fig 4.7 input image of road lane

Using Canny edge detection and region based enhancement, the input image is converted into a form suitable for region based processing. The system applies several noise removal methods along with image enhancement to extract the lane features. The output of the image enhancement is fed into the Hough transform algorithm; the accumulator array created by the Hough transform for the above image is shown below.


Fig 4.8 accumulator array from the Hough transform

In this image there is a point of highest intensity, which indicates the presence of a road lane; the Hough peaks show the lane at that point. The detected lanes look like:

Fig 4.9 image result with fake lines and road lanes detected

From this image it is clear that every region that looks like a lane is detected. Since the true road lane generally has the highest peak in the accumulator array, considering only the relatively higher peaks suppresses some of the fake lanes. The road lane finally detected by the Hough transform is shown in the figure below.


Fig 4.10 road lane and fake lanes detected by the Hough transform

We now have the road lane detected by the Hough transform, but some fake lanes still exist due to real-time effects of the video camera and the lighting conditions. We therefore applied feature based differentiation to identify the two lanes on which the car is moving, using the low level image processing described below.

Fig 4.11 binary image found after noise removal

The figure above shows the conversion of the Hough transform output image to a binary image. This conversion minimizes the computation in feature extraction by dividing the image into two classes: the image has only two pixel values, 0 and 1. The regions that look like lanes have the highest pixel intensity, while the fake lane regions have lower values. The region of interest is the region that contains only the visible road lanes; using boundary extraction, points outside the region of interest are removed. The output of this boundary extraction is:


Fig 4.12 output image after boundary extraction

Because shadows are not removed by boundary extraction alone, a user defined function is used to delete the effects of shadows and other occlusions. This function uses the specific characteristics of road lanes, identified in chapter three, to delete the fake lanes.

Fig 4.13 lanes detected using the Hough transform followed by region based statistics.


4.2 Problems encountered

During this project we encountered several problems. Video and image processing applications are normally run on a DSP or dedicated image processor with high processing speed; due to the scarcity of such equipment, a personal computer with large memory must sometimes be used instead. Image processing generally requires a large amount of memory, and the unavailability of a personal computer with suitable operating speed and RAM prevented us from applying a model based technique to the detection and warning system we developed. The main symptom on such a computer is the out-of-memory error, which occurs when the memory needed for an image exceeds what the PC, including its virtual memory, can provide. In general, the problems encountered in this project stem from the unavailability of the devices required for its implementation.


Chapter Five

Conclusion

In this project a feature based Hough transform has been employed to detect road lanes and to warn the driver when the car is about to move out of its lane. Simulation of road lane detection has been done using MATLAB and the results are discussed. Fake lanes detected due to noise and occlusion are removed using boundary statistics determined from the properties of road lanes. The detection of the road lane makes it possible to determine lane departure and thereby pass a warning, so the system can make an intelligent decision to warn the driver when the car moves out of its lane. The second part of the intelligent system based car collision prevention is the safe distance warning. There are two safe distances at which the driver is warned: the first warning is passed in visual form and the second in audible form. The developed system uses an infrared laser radar and a video sensor to detect the obstacle in front of the car, and an algorithm was developed to issue the safe distance warning when an obstacle enters the safe distance of the car. For the visual warning the system uses background subtraction for object detection and centroid calculation for object tracking.

The interface between the infrared laser radar and the PC serial port has been designed and its circuit diagram drawn; this interface is essential for the critical safe distance warning. Finally, lane detection has been implemented on various video frames, and the visual warning based on object detection and tracking has been implemented with a webcam on real-time objects.

Recommendations

Intelligent system based car collision prevention is a wide research area; it cannot be covered completely in a short period of time, since a collision avoidance system requires the integration of different sensors and a set of processors with high memory capacity. Each sensor needs a different interface, but this can be simplified by using an ECU that has a sensor interfacing unit. To make the system more robust and stand-alone, a portable C++ program can be used. In this project, simulation of lane detection and tracking as well as visual warning using object detection and tracking was proposed and carried out. As this project has wide importance and complex computation, we recommend critical safe distance braking and intersection collision avoidance as future work.


Appendices

A. MATLAB code for lane detection

i = imread('lane.jpg');              % read the test image
b = rgb2gray(i);                     % convert to gray scale
e = medfilt2(b, [3 3]);              % median filtering (noise removal)
f = histeq(e);                       % histogram equalization
c = b*1.2;                           % image scaling (brighten; uint8 saturates)
l = 200;                             % intensity threshold
s = c > l;                           % binary image of bright regions
d = bwareaopen(s, 750);              % remove small objects
imshow(d);
hold on;
[B, L] = bwboundaries(d, 'noholes'); % boundaries of candidate regions
for k = 1:length(B)
    boundary = B{k};                 % boundary points as [row, col]
    h(k) = plot(boundary(:,2), boundary(:,1), 'g', 'linewidth', 4);
end
stats = regionprops(L, 'MajorAxisLength', 'MinorAxisLength', 'Orientation');
lanes = findlanes(B, h, stats);      % delete fake lines (Appendix B)
hold off;
imshow(i);
hold on;
for k = 1:length(lanes)
    plot(lanes{k}(:,2), lanes{k}(:,1), 'g', 'linewidth', 4);
end
hold off

B. User defined function for deleting fake lines

function lanes = findlanes(B, h, stats)
% Keep boundaries whose geometry matches a road lane marking and
% delete the plotted handles of the rejected (fake) candidates.
lanes = {};
for k = 1:length(B)
    % elongation: long thin regions have a large axis ratio
    metric = stats(k).MajorAxisLength / stats(k).MinorAxisLength;
    if metric > 5 && all(B{k}(:,2) > 100) && all(B{k}(:,1) > 150) ...
            && stats(k).MinorAxisLength < 20 ...
            && abs(stats(k).Orientation) > 30 && abs(stats(k).Orientation) < 90 ...
            && all(B{k}(:,2) < 350)
        lanes{end+1} = B{k};     % accept as a lane marking
    else
        delete(h(k));            % remove the fake line from the plot
    end
end
end

C. MATLAB code for object detection from video sequence

function d = trackfff(video)
% Detect and track a moving object across video frames by frame
% differencing; draws a bounding box and centroid on each frame.
if ischar(video)
    avi = aviread(video);                            % video given as a file name
    pixels = double(cat(4, avi(1:2:20).cdata))/255;  % every other frame of the first 20
    clear avi
else
    pixels = double(cat(4, video{1:2:20}))/255;      % video given as a cell array of frames
    clear video
end
nFrames = size(pixels, 4);
for f = 1:nFrames
    pixel(:,:,f) = rgb2gray(pixels(:,:,:,f));        % gray scale conversion
end
rows = 480;                                          % frame size assumed by the scan loops
cols = 320;
for l = 2:nFrames
    d(:,:,l) = abs(pixel(:,:,l) - pixel(:,:,l-1));   % frame difference
    k = d(:,:,l);
    bw(:,:,l) = im2bw(k, .2);                        % threshold the residual pixels
    imshow(pixel(:,:,l))
    hold on
    cou = 1;                           % scan rows from the top for foreground pixels
    for h = 1:rows
        for w = 1:cols
            if (bw(h,w,l) > 0.5)
                toplen = h;            % last row with a hit (ends as the bottom edge)
                if (cou == 1)
                    tpln = toplen;     % first row with a hit (top edge)
                end
                cou = cou + 1;
                break
            end
        end
    end
    coun = 1;                          % scan columns from the left for foreground pixels
    for w = 1:cols
        for h = 1:rows
            if (bw(h,w,l) > 0.5)
                leftsi = w;            % last column with a hit (ends as the right edge)
                if (coun == 1)
                    lftln = leftsi;    % first column with a hit (left edge)
                    coun = coun + 1;
                end
                break
            end
        end
    end
    widh = leftsi - lftln;             % box width (rightmost - leftmost hit)
    heig = toplen - tpln;              % box height (bottommost - topmost hit)
    with = lftln + widh/2;             % centroid x
    heth = tpln + heig/2;              % centroid y
    wth(l) = with;                     % store the track across frames
    hth(l) = heth;
    rectangle('Position', [lftln tpln widh heig], 'EdgeColor', 'r', 'linewidth', 4);
    plot(with, heth, 'r*');            % mark the centroid
    drawnow;
    saveas(gcf, strcat('result', num2str(l)), 'jpg');  % save the annotated frame
    hold off
end

D. MATLAB code for object detection using webcam

vid = videoinput('winvideo', 1);
set(vid, 'TriggerRepeat', Inf);
vid.FrameGrabInterval = 5;                 % process every 5th frame
vid_src = getselectedsource(vid);
set(vid_src, 'Tag', 'objectdetection');
figure;
start(vid)
pause(2);
while (vid.FramesAcquired <= 100)          % stop after 100 frames
    data = getdata(vid, 2);                % grab two consecutive frames
    diff_im = imabsdiff(data(:,:,:,1), data(:,:,:,2));  % frame difference
    diff = rgb2gray(diff_im);
    diff_bw = im2bw(diff, 0.2);            % threshold the difference image
    bw2 = imfill(diff_bw, 'holes');
    s = regionprops(bw2, 'centroid');
    if isempty(s), continue; end           % skip frames with no detected blob
    cd = s(1).Centroid;                    % first detected blob
    centroids = cat(1, s.Centroid);
    imshow(data(:,:,:,2));
    hold(imgca, 'on');
    plot(imgca, centroids(:,1), centroids(:,2), 'g*');  % mark all centroids
    hold on;
    rectangle('Position', [cd 60 33], 'LineWidth', 2, 'EdgeColor', 'b');
    hold(imgca, 'off');
end
stop(vid)


References

1. Rajitha T.V. (2008), 'Object Tracking from Video Sequence', Master of Technology in Computer Science, Cochin University of Science and Technology.

2. Arnab Roy, Sanket Shinde and Kyoung-Don Kang, 'An Approach for Efficient Real Time Moving Object Detection', Department of Computer Science, Watson School of Engineering and Applied Science, State University of New York at Binghamton.

3. Yifei Wang, Naim Dahnoun and Alin Achim (2009), 'A Novel Lane Feature Extraction Algorithm Based on Digital Interpolation', 17th European Signal Processing Conference (EUSIPCO 2009), Glasgow, Scotland.

4. Claudio Rosito Jung and Christian Roberto Kelber (2004), 'Lane Departure Warning System Based on a Linear Parabolic Lane Model', IEEE Intelligent Vehicles Symposium, University of Parma, Parma, Italy.

5. Chew Chin Lee (2007), 'Moving Object Detection at Night', Master of Engineering, Universiti Teknologi Malaysia.

6. Muhammad Azwan Nasirudin and Mohd Rizal Arshad, 'A Feature-Based Lane Detection System Using Hough Transform Method', USM Robotics Research Group (URRG), School of Electrical and Electronic Engineering, Malaysia.

7. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins (2004), 'Digital Image Processing Using MATLAB', New Jersey: Pearson Prentice Hall.

8. Masayuki Yokoyama and Tomaso Poggio (2005), 'A Contour-Based Moving Object Detection and Tracking'.

9. B. Ugur Toreyin et al. (2006), 'Moving Object Detection in Wavelet Compressed Video'.

10. Alan M. McIvor (2000), 'Background Subtraction Techniques'.

11. Yue Wang, Dinggang Shen and Eam Khwang Teoh (1998), 'Lane Detection Using Catmull-Rom Spline', School of Electrical Engineering, Nanyang Technological University, IEEE International Conference on Intelligent Vehicles.

12. Yue Wang, Dinggang Shen and Eam Khwang Teoh, 'Lane Detection Using B-Snake', School of Electrical Engineering, Nanyang Technological University.

13. Abhishek Rawat and Tulip Kumar Toppo, 'Feature-Based Object Tracking Using PTZ Camera'.
