Activity 3: Multimodality HMI for Hands-free control of an intelligent wheelchair
L. Wei, T. Theodoridis, H. Hu, D. Gu, University of Essex
27 January 2012, Ecole Centrale de Lille
Part-financed by the European Regional Development Fund
I. Outline of the task within the context of the project
1) To develop novel multimodal human-machine interfaces by integrating voice, gesture, brain and muscle signals.
2) To understand: the user who interacts with the system; the system itself (the computer technology and its usability); and the interaction between the user and the system.
3) To strike a proper balance between:
Functionality - defined by the set of actions or services that the system provides to its users, based on system usability.
Usability - the range and degree to which the system can be used efficiently and adequately by certain users.
System Integration (diagram): the multimodal HMI (voice, gesture and EEG inputs captured via electrodes) is integrated with the Activity 1 navigation system (GPS, gyro and laser sensors) and the Activity 2 communication system.
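To illustrate how the modalities could plug into one integration point, the sketch below defines a common command interface that voice, gesture and EEG channels might share before commands reach the Activity 1 navigation layer. All names here (Command, HmiChannel, poll, controlLoop) are illustrative assumptions, not the project's actual code.

```cpp
// Illustrative sketch only: a common command interface shared by HMI modalities.
// All names (Command, HmiChannel, controlLoop, ...) are assumptions, not project code.
#include <memory>
#include <optional>
#include <vector>

enum class Command { Forward, Back, Left, Right, Stop };

// Each input modality (voice, gesture, EEG) implements this interface.
class HmiChannel {
public:
    virtual ~HmiChannel() = default;
    // Returns a command if one has been recognized since the last poll.
    virtual std::optional<Command> poll() = 0;
};

// The navigation layer only sees Commands, never raw modality data.
void controlLoop(std::vector<std::unique_ptr<HmiChannel>>& channels) {
    for (auto& ch : channels) {
        if (auto cmd = ch->poll()) {
            // hand *cmd to the Activity 1 navigation / motion controller here
            (void)*cmd;
        }
    }
}
```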
System Software Structure (diagram): forehead sEMG signals from the Cyberlink Brainfingers headband are amplified, filtered and pre-processed, then pass through data segmentation, feature extraction and training & classification (pattern recognition and control); face image information from a Logitech S5500 web camera passes through image acquisition, face detection & image segmentation and closed-eye detection; the outputs of both channels are combined by decision fusion to drive the wheelchair controller of the intelligent wheelchair.
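As an illustration of the decision-fusion block in the structure above, the sketch below combines an sEMG classification result with the closed-eye detector using simple confidence rules; the rule set, thresholds and names (ChannelDecision, fuse) are assumptions for exposition, not the system's actual fusion logic.

```cpp
// Illustrative sketch of decision fusion between two HMI channels.
// The rules, thresholds and names are assumptions, not the project's implementation.
enum class Command { Forward, Back, Left, Right, Stop, None };

struct ChannelDecision {
    Command command;     // command proposed by the channel
    double  confidence;  // classifier confidence in [0, 1]
};

// Fuse the sEMG pattern-recognition result with the vision channel:
// closed eyes act as a safety veto, otherwise the more confident channel wins.
Command fuse(const ChannelDecision& semg, const ChannelDecision& vision,
             bool eyesClosed) {
    if (eyesClosed) {
        return Command::Stop;  // user not attending: stop the wheelchair
    }
    if (semg.confidence < 0.5 && vision.confidence < 0.5) {
        return Command::None;  // neither channel is confident enough
    }
    return (semg.confidence >= vision.confidence) ? semg.command
                                                  : vision.command;
}
```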
II. Main results – Gesture-based HMI
Experiment 1
Arena map (figure): Docking Area A, Docking Area B, wood box barrier, pitch boundary and planned routes (dimensions marked in mm: 5600, 4100, 1300, 1300).
Experiment 1 results (figures): (upper) multi-modality control; (lower) joystick control.
Experiment 2
Fig. 10: Planned task 2 map for the indoor experiment.
Experiment 2 results (figures): (left) multi-modality control; (right) joystick control.
II. Main results – Voice-based HMI
Task: To use voice recognition for controlling a wheelchair
Purpose: To aid people with limited physical capability
Software: The Microsoft Speech SDK
Hardware: The Essex robotic wheelchair
Experimentation: The Essex robotic arena
Speech Recognition Structure
Driving components:
· Start: Capture the voice command
· Sampling: Sample the voice signal in real time
· Calculate energy: Validate the signal's presence
· Calculate zero-crossing rate: Validate the signal's changes
· Calculate entropy: Validate the signal's utterance (the three validation features are sketched after this list)
· Speech recognition by parser: Microsoft Speech SDK
· Driving: Execute the recognized command (Forward, Back, Left, Right, Stop)
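As a minimal sketch of the three signal-validation features named above, the function below computes short-time energy, zero-crossing rate and a histogram-based entropy for one audio frame; the frame length, histogram size and normalization are illustrative choices rather than the project's parameters.

```cpp
// Minimal sketch of per-frame features used to validate speech activity.
// Frame length, histogram bins and any thresholds are illustrative choices.
#include <cmath>
#include <cstddef>
#include <vector>

struct FrameFeatures {
    double energy;        // mean squared amplitude of the frame
    double zeroCrossRate; // fraction of adjacent samples changing sign
    double entropy;       // Shannon entropy of the amplitude histogram (bits)
};

FrameFeatures computeFeatures(const std::vector<double>& frame) {
    FrameFeatures f{0.0, 0.0, 0.0};
    if (frame.size() < 2) return f;

    // Short-time energy.
    for (double s : frame) f.energy += s * s;
    f.energy /= frame.size();

    // Zero-crossing rate.
    std::size_t crossings = 0;
    for (std::size_t i = 1; i < frame.size(); ++i)
        if ((frame[i - 1] >= 0.0) != (frame[i] >= 0.0)) ++crossings;
    f.zeroCrossRate = static_cast<double>(crossings) / (frame.size() - 1);

    // Histogram-based entropy of sample amplitudes (assumes samples in [-1, 1]).
    const int bins = 32;
    std::vector<double> hist(bins, 0.0);
    for (double s : frame) {
        int b = static_cast<int>((s + 1.0) * 0.5 * (bins - 1));
        if (b < 0) b = 0;
        if (b >= bins) b = bins - 1;
        hist[b] += 1.0;
    }
    for (double count : hist) {
        if (count > 0.0) {
            double p = count / frame.size();
            f.entropy -= p * std::log2(p);
        }
    }
    return f;
}
```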
Part-financed by the European
Regional Development Fund
12
Microsoft Speech SDK
Features: Developed by Microsoft's Speech Technologies Group. Aims to recognize spoken audio and perform text-to-speech synthesis. The API can be used from common programming languages, including C++.
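The outline below sketches how command recognition could be driven through the SAPI 5 C++ interfaces; the grammar file name (commands.xml) is a hypothetical placeholder, error handling is omitted, and this is only an assumed API flow, not the wheelchair's actual code.

```cpp
// Sketch of command recognition via SAPI 5 (C++). Error handling omitted;
// "commands.xml" is a hypothetical grammar listing Forward/Back/Left/Right/Stop.
#include <windows.h>
#include <sapi.h>
#include <sphelper.h>

int main() {
    ::CoInitialize(nullptr);

    ISpRecognizer*  recognizer = nullptr;
    ISpRecoContext* context    = nullptr;
    ISpRecoGrammar* grammar    = nullptr;

    ::CoCreateInstance(CLSID_SpSharedRecognizer, nullptr, CLSCTX_ALL,
                       IID_ISpRecognizer, reinterpret_cast<void**>(&recognizer));
    recognizer->CreateRecoContext(&context);

    // Be notified only when a phrase has been recognized.
    context->SetNotifyWin32Event();
    context->SetInterest(SPFEI(SPEI_RECOGNITION), SPFEI(SPEI_RECOGNITION));

    // Load the command grammar and activate its rules.
    context->CreateGrammar(1, &grammar);
    grammar->LoadCmdFromFile(L"commands.xml", SPLO_STATIC);
    grammar->SetRuleState(nullptr, nullptr, SPRS_ACTIVE);

    // Wait for one recognition event and read back the recognized text.
    context->WaitForNotifyEvent(INFINITE);
    CSpEvent evt;
    evt.GetFrom(context);
    if (ISpRecoResult* result = evt.RecoResult()) {
        wchar_t* text = nullptr;
        result->GetText(SP_GETWHOLEPHRASE, SP_GETWHOLEPHRASE, TRUE, &text, nullptr);
        // map `text` (e.g. L"Forward") onto a wheelchair command here
        ::CoTaskMemFree(text);
    }

    grammar->Release();
    context->Release();
    recognizer->Release();
    ::CoUninitialize();
    return 0;
}
```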
FFTW Core
FFTW is a ready-made library for computing the discrete Fourier transform (DFT). Developed at MIT and written in C, it can be called from C++. It can be used to increase the running speed.
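For reference, a minimal FFTW usage sketch (a real-to-complex DFT of one buffer) is shown below; the buffer length N = 512 and the test tone are arbitrary example values.

```cpp
// Minimal FFTW sketch: real-to-complex DFT of a single buffer.
// N = 512 and the test tone are arbitrary example values, not project parameters.
#include <fftw3.h>
#include <cmath>

int main() {
    const int    N  = 512;
    const double pi = 3.14159265358979323846;

    double*       in  = static_cast<double*>(fftw_malloc(sizeof(double) * N));
    fftw_complex* out = static_cast<fftw_complex*>(
        fftw_malloc(sizeof(fftw_complex) * (N / 2 + 1)));

    // Plan once (FFTW_MEASURE benchmarks variants; FFTW_ESTIMATE plans faster).
    fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

    // Fill the input with a test tone, then transform it.
    for (int i = 0; i < N; ++i)
        in[i] = std::sin(2.0 * pi * 5.0 * i / N);
    fftw_execute(plan);

    // out[k][0] / out[k][1] now hold the real / imaginary parts of bin k.

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```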
Recognition Accuracy:
· The commands employed for control are listed below
· High recognition accuracy
· Adequate real-time control

Command   Accuracy
Forward   90%
Back      93%
Right     92%
Left      86%
Stop      90%
Testing Results
Environment 1: A simple corridor with no obstacles. Task: Reach a destination at the same horizontal coordinate as the origin.
Environment 2: An open area with two obstacles. Task: Avoid the obstacles in a zigzag fashion and return to the origin.
Test      Time (sec), Environment 1   Time (sec), Environment 2
1         135.1                       220.1
2         134.9                       223.5
3         134.5                       218.3
4         135.3                       214.4
5         126.4                       209.1
6         123.1                       208.8
7         114.8                       204.6
8         116.1                       197.9
9         112.9                       205.3
10        115.3                       206.4
Average   124.8 (≈ 2 min)             210.8 (≈ 3.5 min)
III. Future challenges and the work to be done
1) A novel multi-modal HMI will be developed by integrating voice control, gesture control, and brain- and muscle-actuated control, in order to meet the needs of different users.
2) The novel navigation and control algorithms developed in Activity 1 will be integrated into the wheelchair, including map building, path planning, obstacle avoidance, self-localization, trajectory generation, etc.
3) An integrated communication system, developed in Activity 2, will allow confidential data to be made available at the intelligent wheelchair.