
Human Computer Interaction



Car Driver Skills Assessment using Posture Recognition

Presented by: Sumit Kadyan

Authors

• Madalina-Ioana Toma (Transilvania University of Brasov)

• Leon J.M. Rothkrantz (Delft University of Technology)

• Csaba Antonya (Transilvania University of Brasov)

Need for It

• Difficult to learn driving in real-life scenarios

• Safety issues

• Time constraints

• Learning to drive as a novice is a key stage in a driving career

Introduction

• Recognition of driving postures with high accuracy

• Feedback mechanism for novice drivers using an alarm system

• Experiments conducted in real time

What Sets It Apart?

• Recognition of the complete set of body parts involved in driving (hands, head, eyes, and legs).

• Use of “Markerless” Sensors.

• Provides accurate measurement of joint configurations and rapid hand movements.

Simulation Environment Framework

[Framework diagram: Kinect, EyeLink II, CLIPS, Logitech G27, TORCS, and a PC]

Pictorial Representation

Framework Components

• Kinect – senses upper-body movements

• TORCS – 3D car simulator

• CLIPS – rule-based expert system

• EyeLink II – senses eye and gaze movement


Markerless Sensor

• Uses pattern-recognition principles

• Process quality can be monitored via a control panel or over Ethernet

• Reproducibility of 0.6 mm

• Plug can be rotated 90°

• High scanning speed of 7 m/s

KINECT

KINECT Sensor

• RGB camera sensor

• Configured using the Kinect for Windows SDK

• IR emitter and IR depth sensor

• Used for tracking upper body movements

EYELINK II

EyeLink II

• High resolution and data rate

• Head mounted video-based eye tracker.

• Used for tracking eye movements and head orientation

• Two eye cameras allow binocular eye tracking

CLIPS

• C Language Integrated Production System.

• CLIPS incorporates a complete object-oriented language (COOL) for writing expert systems (a minimal rule sketch follows below).

• COOL combines the procedural, object-oriented, and logical (theorem-proving) programming paradigms.

• Provides high portability.
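As a hedged illustration (the slot names, coordinate thresholds, and rule name below are invented for this sketch and are not taken from the authors' actual rule base), a single CLIPS posture rule could look like this:

(deftemplate joint                       ; a fact asserted from Kinect skeleton data (assumed layout)
   (slot name) (slot x) (slot y) (slot z))
(deftemplate key-pose (slot group) (slot name))

(defrule left-hand-on-wheel              ; DP1 example: left hand close to the steering wheel
   (joint (name left-hand) (x ?x) (y ?y))
   (test (and (> ?x 0.20) (< ?x 0.40)))  ; hypothetical coordinate range, in metres
   (test (and (> ?y 0.90) (< ?y 1.10)))
   =>
   (assert (key-pose (group DP1) (name left-hand-on-wheel))))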

Logitech G27

• Provides the simulation environment together with TORCS.

• USB interface.

TORCS virtual environment

TORCS

• 3D car simulator supporting input devices (steering wheels, joysticks, game pads, etc.)

• Provides connection, configuration and synchronization.

• Written in C++; open source, available under the GPL license

• Easy to add/create content

• Excellent performance and stability

Related Work

• Pose Estimation

• Gaze Detection

• Focused only on expert drivers.

• Analyses were done offline, using techniques such as silhouettes and bounding boxes.

How It Works?

• Takes real time parameters from sensors and environment.

• Consults a rule-based expert system to determine the driving postures, give feedback, and sound an alarm if the novice driver's posture is wrong.

• Uses the CLIPS inference engine.

• Matching takes place between the current state of the fact list and the list of rule instances (sketched below).
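A sketch of the update cycle implied here, with assumed fact and template layouts: each cycle the latest sensor readings are asserted as facts, and the engine is run so the rules that match the current fact list fire.

(deftemplate joint (slot name) (slot x) (slot y) (slot z))   ; Kinect reading (assumed layout)
(deftemplate gaze (slot direction))                          ; EyeLink reading (assumed layout)

; Each update cycle: assert the latest sensor readings as facts ...
(assert (joint (name left-hand) (x 0.31) (y 1.02) (z 0.45)))
(assert (gaze (direction road)))

; ... then run the inference engine so the rules whose conditions
; match the current fact list fire and recognize the posture.
(run)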

Description of the rules

Defining Rules

• Rules for recognizing driving postures are stored in the knowledge base system.

• Rules are organized into four driving-posture groups: DP1, DP2, DP3, DP4

DP1 – left-hand postures

DP2 – right-hand postures

DP3 – eye and head postures

DP4 – leg postures

Working

• Each posture group runs in parallel with the others

• A driving posture is represented by key poses

• Each posture is a combination of 2–5 key poses

• These key poses are the inputs to CLIPS (see the sketch after this list)

• In a driving task, the postures used to perform a maneuver are defined in a specific order
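A minimal sketch, with invented pose and posture names, of how key poses from the four groups might be combined into one recognized driving posture:

(deftemplate key-pose (slot group) (slot name))
(deftemplate driving-posture (slot name))

(defrule recognize-drive-away            ; hypothetical combination of four key poses
   (key-pose (group DP1) (name left-hand-on-wheel))
   (key-pose (group DP2) (name right-hand-on-gear))
   (key-pose (group DP3) (name eyes-on-road))
   (key-pose (group DP4) (name foot-on-clutch))
   =>
   (assert (driving-posture (name drive-away))))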

Finite state machine diagram

DFSM

• Deterministic finite state machine (DFSM)

• Defined by the tuple (Σ, S, s0, δ, F)

• Σ – input alphabet (symbols coming from the sensors)

• S – finite set of states (showing the transitions within DP1, DP2, ..., DP4)

• s0 – initial state (when the system is calibrated at start)

• δ – state-transition function (from one state to the next; a transition sketch follows below)

• F – set of final states
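A hedged sketch of how one DFSM transition and the alarm feedback could be expressed in CLIPS. The state names reuse the maneuver abbreviations introduced later in the slides (StC, DA); the transition and alarm logic itself is assumed for illustration.

(deftemplate state (slot name))
(deftemplate driving-posture (slot name))

(defrule transition-StC-to-DA                    ; advance the DFSM when the expected posture appears
   ?s <- (state (name StC))                      ; current state: the car has been started
   (driving-posture (name drive-away))
   =>
   (retract ?s)
   (assert (state (name DA))))

(defrule wrong-posture-alarm                     ; feedback for the novice driver
   (state (name DA))
   (driving-posture (name ?p&~drive-away&~keep-lane))
   =>
   (printout t "ALARM: unexpected posture " ?p crlf))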

Diagram of the interface

Experiment

• The experiment focused on developing an assistive intelligent system for indoor training of novice drivers

• Experiments were conducted in a laboratory with proper lighting for the sensors

• Two kinds of experiments were run:

• Experiment 1 – robustness and performance of posture recognition for the novice driver, without traffic

• Experiment 2 – evaluation of the complete framework

Two Experiments Conducted

• Experiment 1 – detect the robustness and accuracy of posture recognition for novice drivers

• Experiment 2 – evaluate the complete framework and provide feedback

Participants

• 12 participants

• 8 males and 4 females

• All held a driving license

• All had little or no driving experience

Results of Experiment 1

• Every subject performed each posture 10 times

• Driving-posture recognition achieved an accuracy of 96.4%

• Driving-posture stability achieved an accuracy of 96.21%

• Feedback was given via “GOOD” and “WORST” messages

Table of Results

Experiment 2: Rules

• Driver needs to start the car (StC)

• Driver wants to drive away (DA)

• Driver keeps the lane (KL)

• Driver increases the speed (IS) or decreases the speed (DS) based on traffic signs

• Driver wants to take over (TO) or change lane (CL)

• Driver wants to make a forward parking (FP)

• Driver wants to stop the car (SpC)

Results of Experiment 2

• In the StC situation, 88% of the postures were detected correctly

• In the IS and DS speed-variation situations, an accuracy of 100% was achieved

• A lower accuracy, below 70%, was obtained for TO and FP


Table of Results

Conclusion

• Future work: improve take-over (TO) and forward-parking (FP) recognition by combining probabilistic methods to reduce the uncertainty of certain driver postures.

References

• M.-I. Toma, L.J.M. Rothkrantz, and C. Antonya: “Car driver skills assessment based on driving postures recognition”, Proc. IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), pp. 439-446, 2-5 Dec. 2012.

• I. Lefter, L.J.M. Rothkrantz, P. Bouchner, P. Wiggers: “A multimodal car driver surveillance system in a military area”, Driver Car Interaction & Interface, 2010.

• Y.F. Lu, and Ch.Li: “Recognition of Driver Turn Behavior Based on Video Analysis”, Journal of Advanced Materials Research Vol. 433-44, pp 6230-6234, 2012.

• D.B. Kaber, Y. Liang, Y. Zhang, M. L. Rogers, and S. Gangakhedkar: “Driver performance effects of simultaneous visual and cognitive distraction and adaptation behavior”, Journal of Transportation Research Part F 15, pp. 491–501, 2012.