
Implementation of a shared control system for brain-controlled

wheelchair navigation

A Thesis Presented

by

Rui Luo

to

The Department of Electrical and Computer Engineering

in partial fulfillment of the requirements

for the degree of

Master of Science

in

Electrical and Computer Engineering

Northeastern University

Boston, Massachusetts

Feb 2018

Page 2: Implementation of a shared control system for brain ...cj82sr91j/fulltext.pdfAbstract of the Thesis Implementation of a shared control system for brain-controlled wheelchair navigation

Contents

List of Figures

List of Tables

List of Acronyms

Acknowledgments

Abstract of the Thesis

1 Introduction

2 Background
  2.1 Low throughput human machine interface
  2.2 Map segmentation unit
  2.3 Human intent inference unit
  2.4 Intermediate position selection unit

3 Orientation Control with NoVeLTI Framework
  3.1 Problem analysis
  3.2 Human intent inference
  3.3 Orientation pie segmentation
  3.4 Intermediate orientation selection

4 Implementation of NoVeLTI in ROS
  4.1 New NoVeLTI system architecture
  4.2 New NoVeLTI user interface
    4.2.1 Position control window
    4.2.2 Orientation control window
    4.2.3 Overview window

5 Simulation and Experiments
  5.1 Simulation setup
  5.2 Human subjects experiment
  5.3 Experiment results


    5.3.1 Navigation time
    5.3.2 Navigation accuracy
    5.3.3 Comparison of different confusion matrices
    5.3.4 Shared orientation control time

6 Conclusion

Bibliography


List of Figures

1.1 Two commercial BCI headsets

2.1 A Human-interface-object model [27]
2.2 Horizontal tile segmentation: Vertices on the map are iterated by a horizontal scan [27]
2.3 Vertical tile segmentation: Vertices on the map are iterated by a vertical scan [27]
2.4 Equidistant segmentation: Vertices on the map are iterated based on their distance to the current intent PDF center of gravity [27]
2.5 Extremal segmentation: Vertices on the map are iterated based on their extremal distance to the current intent PDF center of gravity [27]

3.1 Architecture of shared orientation control unit
3.2 An example of segmentation of orientation pie
3.3 Calculating probabilistic cost example

4.1 Redesigned NoVeLTI architecture
4.2 Position control window
4.3 Orientation control window
4.4 Overview window

5.1 Simulation system
5.2 Simulation example
5.3 Shared position control with extredist and altertile map segmentation
5.4 Total navigation time with extredist map segmentation
5.5 Total navigation time with altertile map segmentation
5.6 Difference between actual destination and intended destination with extredist map segmentation (route 1)
5.7 Difference between actual destination and intended destination with extredist map segmentation (route 2)
5.8 Difference between actual destination and intended destination with altertile map segmentation (route 1)
5.9 Difference between actual destination and intended destination with altertile map segmentation (route 2)
5.10 Shared position control with two different confusion matrices/interface matrices


5.11 Shared orientation control with different intermediate orientation strategies


List of Tables

2.1 Comparison of BCI evoked signals [27]
2.2 Confusion matrix
2.3 An example of confusion matrix

5.1 Experiment parameter configuration
5.2 Interface matrix mx91
5.3 Interface matrix mx85


List of Acronyms

BCI Brain Computer Interface

EEG Electroencephalography

EMG Electromyography

HMI Human Machine Interface

LIS Locked-in Syndrome

ROS Robot Operating System

SSVEP Steady State Visually Evoked Potentials


Acknowledgments

Here I wish to express my gratitude to those who have supported me during the process of the thesis work.

I would like to thank my parents Yulian Zhou (周玉莲) and Lianfeng Luo (罗联峰) for their endless support. They have always shown great trust and faith in their child during the writing of this thesis. During the last six months of making this thesis I was suffering from an illness, and it is their encouragement and love that carried me through those harsh days.

I would like to thank my thesis advisor Professor Taskın Padır for his guidance and patience. The topic of this thesis changed dramatically once, and I thank him for his understanding and support of my current research.

I would like to thank Dr. Dmitry Aleksandrovich Sinyukov. His previous work laid a solid foundation for this work, and this thesis would not have been completed without his constant support and help. His excellent academic skill and spirit have taught me how to be a good researcher and engineer.

I would like to thank my friends Zijian Yao, Zhong Mao, Jiahao Wu, and Mohammadreza Sharif for their participation in my human studies, which allowed me to gather test data to evaluate my work.

I would like to thank Md Maruf Hasan, Erik Silva, and Mitchel White for helping me build the autonomous wheelchair Norma (NOrtheastern Robotic Mobility Assistant). Although Norma's work is not included in this thesis, the knowledge and skills learned in that project are essential to the work presented here.


Abstract of the Thesis

Implementation of a shared control system for brain-controlled wheelchair

navigation

by

Rui Luo

Master of Science in Electrical and Computer Engineering

Northeastern University, Feb 2018

Taskın Padır, Adviser

Individuals with physical disabilities continue to rely on electric wheelchairs and personalized human-machine interfaces for their mobility. Even though the problem is well studied in the literature, the development of reliable shared control paradigms that support different low throughput human machine interfaces for semi-autonomous wheelchairs has been a challenging problem. This work focuses on enhancing the shared position control methodology known as NoVeLTI (Navigation via Low Throughput Interfaces) in four areas. (1) A new wheelchair orientation controller with a user interface has been developed; this controller infers the user's desired orientation of the wheelchair from detected commands using a Bayes filter and then generates control commands to rotate the wheelchair after the desired position is reached. (2) The ROS implementation of the system architecture is redesigned to ensure a stable communication channel among the system modules rather than relying on ROS topics. (3) A nonholonomic robot model has been implemented in the original system to simulate the performance of a wheelchair in a real environment. (4) An improved user interface has been designed and implemented in RViz according to feedback from the participants of our simulation experiments. A set of experiments was designed to validate the improvements and evaluate the effect of different parameters in the system. Four human experiments were conducted. Results show that subjects were able to navigate the wheelchair, via only four commands from a simulated BCI model, to any given destination on a map at an average success rate of 90%. The average navigation time for a 25 m route was about 60 seconds at a driving velocity of 1 m/s.


Chapter 1

Introduction

Wheelchairs are widely used as an assistive tool to help people with limited lower- and upper-body mobility. Moreover, according to [4], there are about 37 in 100,000 people with quadriplegia. For this specific group of people, any traditional wheelchair would fail to help, because the users are unable to manipulate traditional control interfaces such as the joystick installed on most wheelchairs. Various kinds of unconventional human-machine interfaces (HMIs) have been developed in recent years, such as the sip-and-puff system (SNP) [23], the Tongue Drive System (TDS) [13], and eye- and gaze-tracking systems [6], [18], [22].

Among all the unconventional HMIs, the Brain-Computer Interface (BCI) has the greatest potential to be applied to wheelchairs for people with Locked-in Syndrome (LIS), because a BCI directly establishes communication between various types of brain signals and external devices without requiring any other physical interaction from the user. Different methods of acquiring brain signals have been studied in the field of BCI, such as MEG [11], fMRI [19], and fNIRS [7], but the most prevalent signal in use is electroencephalography (EEG).

EEG-based BCIs are advantageous because most of them are portable, safe, and inexpensive compared to other methods. More importantly, there are many mature EEG-based BCI products available on the market and purchasable by ordinary customers, such as the Epoc from Emotiv (Fig. 1.1a) and the Ultracortex "Mark IV" from OpenBCI (Fig. 1.1b). EEG signals are acquired by placing electrodes on the scalp to measure the voltage fluctuations resulting from ionic currents within the neurons of the brain. A typical BCI system consists of signal acquisition, preprocessing, feature extraction, and classification units [3]. Based on the necessity of an external stimulus, BCI signals can be classified into evoked signals, spontaneous signals, and hybrid systems that combine the previous two types. The principles and theory of how an EEG-based BCI system functions are beyond the scope of this thesis, so we focus instead on their applicability.

Figure 1.1: Two commercial BCI headsets. (a) Epoc+ headset (Emotiv 2018); (b) Ultracortex "Mark IV" headset (OpenBCI 2018)

Application of EEG-based BCIs to wheelchair control has become an emerging area of study in recent years because it requires no physical interaction from the user [25][9][21]. One way to incorporate a BCI on a wheelchair is to map EEG signals directly into the driving commands of the wheelchair. This type of control method is classified as direct control. Despite its many shortcomings, such as low signal-to-noise ratio (SNR), high response time, and early user fatigue (subjects need to focus on controlling the robot via BCI the entire time), direct steering control is still a hot topic for researchers. In [8], three distinct EEG signals were classified to mentally control the forward, left, and right movement of a BCI-based wheelchair. The wheelchair also receives context information from other sensors, fuses it with the user command, and then controls the wheelchair's motors to implement obstacle avoidance. In [5], steady state visually evoked potentials (SSVEP) are detected as five classes: top, bottom, left, right, and undefined. The top and bottom commands correspond to forward and backward movement, while the left and right commands correspond to left and right turns. When the undefined signal is received, the wheelchair stops for the sake of safety. 13 out of 15 subjects were reported to be able to finish their task of navigating the wheelchair to a given destination in a simply structured environment measuring 8.75 m long by 7.07 m wide. The average time needed to complete one experiment was about 5 minutes at a velocity of 20 cm/s.

Besides direct control using BCI signals, research has also been done on shared control of wheelchairs. In [17], a shared control system is developed where a coefficient ρ is set to fuse the user command with a computer-generated command to determine the final velocity command. A Polar Polynomial Trajectory (PPT) is used to connect the initial position and the final one in polar coordinates rather than in Cartesian space. Compared to direct control, only two channels of SSVEP are required, and the user can specify the direction, clockwise or counterclockwise, of the steering angle. The obstacle avoidance and steering curvature are adjusted by the onboard computer according to the user's input. In this research, the path from the initial position to the destination is predefined, so there is no inference process involved. This type of control can be referred to as shared steering control, which shares the steering control of a wheelchair between computer and human.
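The exact blending rule of [17] is not reproduced here, but a common linear form illustrates the idea of a fusion coefficient ρ; this is a hedged sketch (the function name and the linear law are assumptions, not necessarily the authors' exact formulation):

```python
def fuse(user_cmd, auto_cmd, rho):
    """Linear blend of user and autonomy velocity commands.
    rho = 1.0 -> pure user control; rho = 0.0 -> pure autonomy.
    (Illustrative only; [17] may use a different fusion law.)"""
    v = rho * user_cmd[0] + (1 - rho) * auto_cmd[0]   # linear velocity, m/s
    w = rho * user_cmd[1] + (1 - rho) * auto_cmd[1]   # angular velocity, rad/s
    return v, w

# User wants full speed ahead; autonomy curves away from an obstacle
v, w = fuse((1.0, 0.0), (0.4, 0.5), rho=0.3)
print(round(v, 2), round(w, 2))  # 0.58 0.35
```

A small ρ gives the onboard computer more authority, which is the usual choice when the user's channel is noisy.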

Apart from the shared steering control mentioned above, shared position control is another type of control strategy. In the direct control method, signals from a low throughput HMI, such as a BCI, are directly converted into motion control signals, while shared position control includes a certain level of autonomy on the robot's side. Usually, the subject only needs to provide an abstract command, and the robot takes care of the low level implementation. For people with severe disabilities, it is usually difficult to drive or control a robot step by step at the motion level due to their limited interaction capability.

In [28], an intelligent wheelchair is developed that combines a Motor Imagery (MI)- or P300-based BCI with an autonomous navigation system. Given the user's intended goal and the current position, the autonomous navigation system plans a path and then drives to the destination fully autonomously, without any user input to actually control the wheelchair. This method can effectively alleviate the user's fatigue compared to direct control or shared steering control. However, the user can only choose his or her destination from 25 predefined locations on the map, which decreases the flexibility of the system. A state-of-the-art shared position control method for low throughput HMIs is introduced in [27]. The method is called NoVeLTI (Navigation Via Low Throughput Interface). Compared to previous shared position control methods, NoVeLTI has several advantages:

• The user can navigate to any vertex on a grid map generated from the environment rather than being limited to predefined locations. This improvement allows a more flexible navigation experience, since the user's available destinations cover all positions on the map.

• Unlike the shared position control developed in [28], NoVeLTI is able to drive the wheelchair as soon as the user gives input, so inference and actuation run in parallel. Instead of staying at the initial position while waiting for the destination to be inferred, the wheelchair moves to the currently probabilistically optimal position. Parallelizing inference and actuation not only provides a better user experience but also effectively shortens the navigation time, which matters for daily use.

• Considering that most current EEG-based BCI signals are susceptible to noise or the user's unintentional input, an inference unit is included in NoVeLTI to infer human intents based on a confusion matrix using a recursive Bayesian filter [2]. With this inference unit, errors caused by either the user or the BCI can be mitigated and the correct destination can still be inferred.

• NoVeLTI is a general control system for any kind of low throughput interface. This compatibility makes it conveniently applicable to other low throughput interfaces.

However, NoVeLTI was developed from the beginning for a holonomic robot, meaning that it cannot be directly applied to a differential drive wheelchair. This drawback greatly reduces its potential for practical application in the real world, because the wheelchair is the most commonly used differential drive vehicle for people with disabilities. In this thesis, we develop an improved version of NoVeLTI that supports nonholonomic robots and analyze the results from human subject studies to evaluate the advantages and disadvantages of this new shared control strategy.

The contributions of this thesis are as follows:

• A redesigned system structure for NoVeLTI in ROS has been developed, which solves the synchronization issue that may occur when conducting multiple experiments continuously.

• A new differential drive robot model is built and included in the original simulation system to help us evaluate the performance of NoVeLTI on a wheelchair more precisely.

• An orientation control component is developed for shared orientation control of the wheelchair. This new component shares the same interaction methodology as the original NoVeLTI to guarantee a consistent user experience.

• Experiments involving four human subjects have been conducted to evaluate the performance of NoVeLTI.

This thesis is structured as follows: Chapter 2 presents the background of the NoVeLTI system for the sake of completeness. Chapter 3 covers the details of the new orientation control component added to NoVeLTI. Chapter 4 discusses the improvements to the software architecture in ROS. Chapter 5 covers the simulation setup and our human subjects experiments using NoVeLTI. Chapter 6 discusses our conclusions and future work.


Chapter 2

Background

NoVeLTI (Navigation Via Low Throughput Interface) is a general shared control strategy developed for low throughput interfaces. In our work, we improved the original NoVeLTI to make it applicable to nonholonomic robots such as a wheelchair. For completeness, we explain the implementation details of NoVeLTI [27] in this chapter.

2.1 Low throughput human machine interface

Let us first define what a low throughput interface is. Usually, a low throughput interface is defined based on its information transfer rate (ITR); however, in [27], a different way to define this terminology is proposed from a control engineering perspective, as follows:

Definition 1. Let T_task be the average time of executing a task by a given robot/control system, and H_task be the amount of information needed to define the task. T_infer = H_task / ITR_HMI will thus be the average time to pass the task definition into the control system. This control system can be called a control system with a low throughput HMI if

    T_infer ≈ T_task

where ≈ means "is of the same order of magnitude as". The task can be defined using state space terminology. T_task, in this case, is the average time of moving the system between two states in the state space X or the output space Y. H_task is the average amount of information to be encoded in a state vector.

To illustrate this, let us consider the BCI. Table 2.1 shows the types of evoked BCI signals commonly used in research.

Signal type                  Brain area                      Frequency band /          Bit rate     Training
                                                             flickering rate           (bits/min)   required
t-VEP [14] (visual, SSVEP)   occipital lobe (visual cortex)  4 Hz                      30           no
f-VEP [12] (visual, SSVEP)   occipital lobe (visual cortex)  6-12 Hz                   100+         no
c-VEP [10] (visual, SSVEP)   occipital lobe (visual cortex)  ?                         100+         yes
P300 [16]                    parietal                        detected in time domain   20-25        no
Auditory (AEP) [24]          parietal                        ?                         3-9          no
ErrP [1]                     fronto-central & parietal       ?                         ?            single trial

Table 2.1: Comparison of BCI evoked signals [27]

We can use this table to estimate T_infer and T_task for the navigation of a wheelchair using a BCI. For a four-command P300 device, if we try to infer the user's desired destination vertex on a map with 150,000 vertices, like the one used in our human studies, it would take at least log_4(150000) ≈ 9 commands, which means about 30 seconds (T_infer), to infer the final position under the assumption that there is no erroneous input. For an indoor route of 20 meters, if the wheelchair travels at 1 m/s, the navigation time (T_task) is about 20 seconds. T_task and T_infer are of the same order of magnitude, which makes Definition 1 reasonable to use in the design of a shared control strategy for a BCI-controlled wheelchair.
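The estimate above can be reproduced directly. The per-command time is implied rather than stated (30 s for 9 commands, i.e. about 3.3 s per command), so it is treated as an assumption here:

```python
import math

# Parameters from the text (Section 2.1)
num_vertices = 150_000   # destination vertices on the map
num_commands = 4         # distinct commands from the P300 BCI
t_per_command = 30 / 9   # seconds per command, implied by "9 commands = 30 s"
route_length = 20.0      # meters
speed = 1.0              # m/s

# Minimum number of commands to single out one vertex (no input errors)
commands_needed = math.ceil(math.log(num_vertices, num_commands))

t_infer = commands_needed * t_per_command  # time to communicate the goal
t_task = route_length / speed              # time to drive the route

print(commands_needed)          # 9
print(round(t_infer), t_task)   # 30 20.0

# Same order of magnitude -> a "low throughput HMI" per Definition 1
assert math.floor(math.log10(t_infer)) == math.floor(math.log10(t_task))
```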

Figure 2.1: A Human-interface-object model [27]

Figure 2.1 shows a model for human-robot interaction via an HMI [27]. To achieve shared position control of a robot with a low throughput human machine interface, the original NoVeLTI contains two main components:

• Human Intent Inference: Because a BCI always has a limited number of options, far fewer than the number of available destinations on a map, this unit receives BCI signals and infers the human's intended destination on the map. This component is illustrated as the Human block in the figure.

• Shared Control Unit: After a desired position is inferred by the inference unit, the shared control unit generates an appropriate control command based on the results of its three inner components and sends the command to the robot. This unit is illustrated as the Shared Control block in the figure.

2.2 Map segmentation unit

In NoVeLTI, we use differently colored regions on a map to represent the available choices for the user. If the intended destination lies within one region, the user should choose the corresponding region; after one command is received, the map is segmented again. This process repeats until the final destination is inferred. Based on the Bayesian filter equation (2.2), how a map is divided affects the intent probability density function (PDF) vector and the time required to narrow down the desired destination. The optimal way to divide the map so as to maximize the decrease of probabilistic cost is yet to be discovered, but two requirements for such a segmentation are provided in [27]:

1. To maximize the information throughput, the cumulative a priori probabilities of the regions should be as close to the optimal a priori vector as possible.

2. If human commands are detected correctly, the map segmentation process shall guarantee that the probability of a single vertex being chosen as the intended goal converges to 1.0.

Four segmentation policies are used in NoVeLTI, each iterating through all the vertices on the grid map in a different way. After each iteration, all the vertices on the map are sorted based on a certain criterion and colored again based on new user input. The four segmentation policies are as follows:

1. Horizontal tile segmentation (Figure 2.2). All vertices are iterated in a horizontal order and then divided into multiple regions.

2. Vertical tile segmentation (Figure 2.3). All vertices are iterated in a vertical order and then divided into multiple regions.

3. Equidistant segmentation (Figure 2.4). All vertices are iterated based on their Euclidean distance to the current position of the robot and then divided into multiple regions.

4. Extremal segmentation (Figure 2.5). All vertices are iterated based on their extremal distance to the current position of the robot and then divided into multiple regions.

In each group of figures from Figure 2.2 to Figure 2.5, subfigure (a) shows how the vertices are iterated, where each blue box represents a vertex on a map, and subfigure (b) shows what the result of the corresponding segmentation method looks like on a real map. In simulation, a combined strategy of horizontal and vertical tile segmentation and a combination of equidistant and extremal segmentation are also provided.
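As a rough sketch (not NoVeLTI's actual implementation; all names are illustrative), an equidistant-style policy can be imitated by sorting vertices by distance to a reference point and cutting the sorted order into regions whose cumulative intent probabilities approximate a target a priori vector, in the spirit of requirement 1 above:

```python
import math

def segment_equidistant(vertices, probs, ref, targets):
    """Sort vertices by distance to `ref`, then greedily cut the sorted
    list into len(targets) regions whose cumulative probabilities
    approximate the target a priori vector `targets`."""
    order = sorted(range(len(vertices)),
                   key=lambda i: math.dist(vertices[i], ref))
    regions, acc, t = [[] for _ in targets], 0.0, 0
    for i in order:
        # advance to the next region once the current target mass is reached
        if acc >= sum(targets[:t + 1]) and t < len(targets) - 1:
            t += 1
        regions[t].append(i)
        acc += probs[i]
    return regions

# Tiny example: 6 vertices on a line, uniform intent PDF, 2 equal-mass regions
verts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)]
pdf = [1 / 6] * 6
regions = segment_equidistant(verts, pdf, ref=(0, 0), targets=[0.5, 0.5])
print(regions)  # [[0, 1, 2], [3, 4, 5]]
```

Each region index list would then be mapped to one color (one BCI command) on the displayed map.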

Figure 2.2: Horizontal tile segmentation: Vertices on the map are iterated by a horizontal scan [27]

Figure 2.3: Vertical tile segmentation: Vertices on the map are iterated by a vertical scan [27]


Figure 2.4: Equidistant segmentation: Vertices on the map are iterated based on their distance to the current intent PDF center of gravity [27]

Figure 2.5: Extremal segmentation: Vertices on the map are iterated based on their extremal distance to the current intent PDF center of gravity [27]


2.3 Human intent inference unit

To model the human intent of controlling the wheelchair using BCI signals, the following assumptions are made regarding the human intent (Figure 2.1) in NoVeLTI:

1. At the beginning of the navigation process, the user has an intended state (desired position) on the map;

2. This intended state can change over time;

3. This intended state will finally be located on a vertex of the given 2D map;

4. For a nonholonomic robot such as a wheelchair, the user also has an intended orientation for where the robot will face at the final position;

5. The actual detected command is not guaranteed to be the same as the user's intended command, because BCI signals are noisy.

The low signal-to-noise ratio (SNR) and the fluctuation of the EEG signal make EEG classification a challenging task. Users of low throughput HMIs also have a higher chance of making errors due to the unconventional interaction method. We therefore use a conditional probability matrix (confusion matrix) to model the probabilistic relationship between the human's intended command and the actual detected command. This matrix is structured as in Table 2.2:

              Detected
              D1          D2          ...    Dr
  Intended
     C1       P(D1|I1)    P(D2|I1)    ...    P(Dr|I1)
     C2       P(D1|I2)    P(D2|I2)    ...    P(Dr|I2)
     ...      ...         ...         ...    ...
     Cr       P(D1|Ir)    P(D2|Ir)    ...    P(Dr|Ir)

Table 2.2: Confusion matrix

In Table 2.2, C1, C2...Cr are the available commands from a BCI, D1, D2...Dr are the

actual detected commands from a BCI, I1, I2...Ir are the human intended commands. P (Dr|Ir) is

the probability of a command Dr being detected when Ir is intended. Table 2.3 is an example of

confusion matrix of a BCI with three available commands. According to the first column of this

matrix, when command 1 is detected (D1, the probability of command 1 being user’s real intention

10

Page 20: Implementation of a shared control system for brain ...cj82sr91j/fulltext.pdfAbstract of the Thesis Implementation of a shared control system for brain-controlled wheelchair navigation

CHAPTER 2. BACKGROUND

is 0.8 (row I1), the probability of command 2 being the user's real intention is 0.2 (row I2), and the probability of command 3 being the user's real intention is 0 (row I3). The same rule applies to the next two columns.

      D1     D2     D3
I1    0.8    0.1    0.06
I2    0.2    0.85   0.04
I3    0      0.05   0.9

Table 2.3: An example of confusion matrix

The acquisition of such a confusion matrix is a tricky problem in itself. It usually depends on the characteristics of the BCI and on how proficient the individual is at using it. It can be generated by conducting multiple experiments with the same human subject. For demonstration purposes, we use different simulated confusion matrices in our experiments to show that NoVeLTI can handle confusion matrices with different confidence levels. A similar matrix, named the Interface Matrix, is also utilized in the simulation to model the uncertainty of low-throughput interface signals. Together, these two matrices model the probabilistic properties from both the human and the BCI perspectives. Considering that we are inferring the intended position on a grid map, it is a straightforward decision to represent the user's intent by a probability density function (PDF) defined over all the vertices on the map. The inference process is then modeled as a probability updating function that gradually updates the probability of every vertex on the map based on the input received from the BCI, until the probability of a single vertex exceeds the threshold.

We use a Bayes filter to update the PDF over the given map for each vertex. Let

P(I_i^k \mid O^k) = \int_{X_i^k} p^k(x)\, dx \qquad (2.1)

be the a priori probability of the user choosing the i-th region at the k-th iteration, given all external factors O^k, and let P(I^k \mid O^k) be the vector of such probabilities. Here X_i^k is the i-th region at the k-th iteration. Using the extended Bayes theorem, we can write the probability update equation for human intent:

P(I^{k+1} \mid O^k) = P(I^k \mid D^k, O^k) = \frac{P(D^k \mid I^k, O^k)\, P(I^k \mid O^k)}{P(D^k \mid O^k)} = \alpha\, P(D^k \mid I^k, O^k)\, P(I^k \mid O^k) \qquad (2.2)

where α is the normalizer and P(D|I, O) is obtained from the confusion matrix given in Table 2.2. The initial value P^0 can be set as a uniform PDF over the whole map.
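As a minimal sketch of one update step of equation 2.2 (the function and variable names here are illustrative, not taken from the NoVeLTI source), assuming each vertex inherits the likelihood of the colored region it belongs to:

```python
def bayes_update(pdf, region_of, confusion, detected):
    """One Bayes-filter step over the vertex intent PDF.

    pdf       -- intent probability per vertex (sums to 1)
    region_of -- colored region index assigned to each vertex by the
                 map segmentation unit
    confusion -- confusion[d][i] ~ P(detect command d | intend region i)
    detected  -- index of the command actually detected
    """
    # Numerator of eq. 2.2: likelihood of the detected command times the prior.
    posterior = [confusion[detected][r] * p for r, p in zip(region_of, pdf)]
    total = sum(posterior)                 # evidence term P(D^k | O^k)
    return [p / total for p in posterior]  # alpha = 1 / total
```

Repeating this update concentrates the probability mass on the vertices consistent with the command history, until one vertex crosses the confidence threshold.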


After the probability of a single vertex exceeds the confidence threshold, the corresponding vertex is selected as the inferred position. The map segmentation strategy guarantees that the maximum probability will converge to a single vertex at the end of inference.

2.4 Intermediate position selection unit

One way to implement shared position control with a low-throughput HMI is to give the user a limited set of options to choose from, with the robot moving only after the intended destination is fully inferred or a certain threshold is reached [28]. To make the user feel like they are actually controlling the wheelchair, and to shorten the navigation process (including inference time), NoVeLTI is designed so that human intent inference and robot actuation can happen at the same time. More specifically, NoVeLTI navigates the wheelchair to an intermediate position before the intended position is inferred, instead of waiting at the start point until the final destination is known. Several requirements have been imposed on the selection of the optimal intermediate position:

1. The intermediate position must be within the reachability area (the set of states that the robot can reach from its current state within T_HMI seconds, where T_HMI is the time needed for processing signals from the BCI and inferring the user's intent). On a grid map, as shown in Section 2.2, this set is comprised of multiple vertices near the current position of the robot.

2. Before the intended position is successfully inferred, the optimal intermediate position should be the vertex with the minimum probabilistic cost to all other vertices on the map, so that reaching the goal takes the shortest expected time.

For a navigation problem on a grid map, the probabilistic cost of a vertex A with respect to the destination is defined as:

C(A) = \sum_{i=1}^{n} d_{obst}(A, B_i)\, p(B_i) \qquad (2.3)

where A belongs to the reachability area, d_obst(A, B_i) is the length of the shortest obstacle-free path from A to B_i, and p(B_i) is the current probability of vertex B_i being the intended navigation goal, as calculated by the intent inference unit described in Section 2.3. For a large grid map it is impractical to find the globally optimal intermediate position due to the high complexity; instead, we measure the probabilistic cost of reaching the final destination from all the points within the reachability area and select the optimal intermediate position in terms of probabilistic cost. Four selection policies have been proposed in previous work [27] to search for a vertex with a local minimum of probabilistic cost, which is considered effective in such a real scenario. The four selection rules are named as follows:

• cog2lopt: the vertex with the local minimum probabilistic cost to the PDF center of gravity

• maxprob_obst: the vertex with the local minimum probabilistic cost to the vertex with maximum probability in the intent PDF

• nearcog_obst: the vertex with the local minimum probabilistic cost to the accessible vertex that is nearest to the PDF center of gravity

• ra_maxprob: the vertex with the maximum probability of being chosen as the intended goal
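To illustrate equation 2.3, the sketch below evaluates the probabilistic cost of each candidate vertex in a small reachability area on a toy, obstacle-free grid, where the shortest path reduces to the Manhattan distance (the grid, PDF values, and helper names are illustrative, not from the NoVeLTI code; CWave handles the general any-angle case with obstacles):

```python
from itertools import product

def shortest_dist(a, b):
    # Obstacle-free grid: shortest path length = Manhattan distance.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def prob_cost(a, goal_pdf):
    """Probabilistic cost C(A) = sum_i d(A, B_i) * p(B_i)  (eq. 2.3)."""
    return sum(shortest_dist(a, b) * p for b, p in goal_pdf.items())

# Intent PDF over a 3x3 grid: the goal is probably near the top-right corner.
goal_pdf = {(x, y): 0.0 for x, y in product(range(3), range(3))}
goal_pdf[(2, 2)] = 0.6
goal_pdf[(2, 1)] = 0.3
goal_pdf[(0, 0)] = 0.1

# Reachability area around the current position (1, 1).
reachable = [(1, 1), (0, 1), (2, 1), (1, 0), (1, 2)]
best = min(reachable, key=lambda v: prob_cost(v, goal_pdf))
```

Here the vertex (2, 1) wins because it sits close to the bulk of the probability mass, even though the goal is not yet fully inferred.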

The search process is implemented with CWave [26], a high-performance single-source any-angle path planning algorithm, and can be completed in seconds. All four policies, along with a standard shared position control strategy in which the robot does not move until the intended position is fully inferred, have been implemented in the original NoVeLTI. Experiments have been conducted to evaluate the performance of these different policies, and the results will be shown in Chapter 5.

Numerous experiments have been performed in [27] to evaluate the performance of NoVeLTI. Although NoVeLTI has already been applied on a real wheelchair system, no orientation control was involved. NoVeLTI was initially designed for shared position control of a holonomic robot, and the user was only given the possibility to determine the final position of the wheelchair on a grid map. However, in a real-life scenario, the orientation of the wheelchair at the destination is also important to the user. In the previous simulations and real experiments, as soon as the wheelchair reached the final destination, the system would stop functioning, and there was no way to determine the orientation of the wheelchair. In Chapter 3, we present the details of the new orientation control system developed to integrate with NoVeLTI within the scope of this thesis.


Chapter 3

Orientation Control with NoVeLTI

Framework

A commonly adopted approach in related research on controlling the orientation of a wheelchair using a BCI is to directly map the BCI signals into commands that control the velocity and orientation. In [5], two of the SSVEP signals are mapped to right and left turns, and a constant empirical rotation velocity of 14°/s is assigned. The user sends one signal to start the rotation and another stop signal to end it. The main issue with this approach is overshoot when trying to stop at a certain angle: because of the common delay of a BCI, it may take one or two seconds for the wheelchair to respond, by which time the desired angle may already have been passed. In [20], 7 predefined orientations are mapped to 7 different P300 signals. This approach saves inference time and the user does not need to account for the delay of the BCI, but the permitted orientations are too limited and cannot work in an unstructured environment where smaller rotation angles are needed.

In this chapter, we propose a new approach to control the orientation of the wheelchair using a shared control strategy. The architecture of this orientation control unit is illustrated in Figure 3.1. The shared control unit consists of three core components:

• Orientation intent inference: this unit receives detected user commands from the BCI to infer the user's intended orientation of the wheelchair

• Orientation pie segmentation: this unit divides an orientation pie recursively until the final orientation is inferred


Figure 3.1: Architecture of shared orientation control unit

• Optimal intermediate orientation selection: this unit chooses the optimal intermediate orientation as a temporary goal, so that inference and actuation proceed in parallel and navigation time is saved

Considering that the wheelchair can navigate to the desired position (orientation not considered) autonomously, it is not necessary to control the orientation of the robot during navigation before arrival at the final destination. The orientation control unit therefore only takes control of the system after the robot has arrived at the desired position, causing no conflict with NoVeLTI, which remains responsible for position control.

3.1 Problem analysis

Although the orientation angle of a wheelchair is a continuous value, we can model the problem as a discrete one, similar to position control on a grid map. Unlike shared position control, where the target state space X is comprised of vertices, the target state space X of orientation control is a set of angles. The cardinality (number of elements) of this state space is determined by:

|X| = \frac{360}{resol} \qquad (3.1)

where resol is the accuracy (in degrees) required in orientation control. If the minimal controllable angle of the wheelchair is set to 1°, X contains 360 distinct state values.


Although |X| does not have the same order of magnitude as the number of vertices on a grid map, it is still a large number considering the finite options a user can choose from a BCI such as SSVEP. Based on this observation, we can treat this problem as a low-throughput human machine interface control problem as well, and a similar solution can be applied.

3.2 Human intent inference

In the position control problem, the user intent is modeled as a 2D vector containing intent probabilities for all the vertices on a grid map. Similarly, to control the orientation of a wheelchair, we can use an array to represent the user's intent probabilities over all the angles ranging from 0° to 360°. At every iteration, the intent vector is divided into i regions, where i is equal to the number of available commands from the BCI. The user needs to choose the region containing his or her desired orientation value by giving the corresponding BCI command.

Let

P(X_i^k) = \sum_{a \in X_i} P(a^k) \qquad (3.2)

be the cumulative probability of all the angles in region X_i at the k-th iteration, where X_i is the i-th region and P(a^k) is the probability of angle a being the desired orientation at the k-th iteration.

When a new command D^k is received, we use the following Bayes filter to update the posterior P(X^k \mid D^k), which becomes our new estimate of the user's intent:

P(X^k \mid D^k) = \frac{P(D^k \mid X^k)\, P(X^k)}{P(D^k)} \qquad (3.3)

In equation 3.3, the prior probability P(X^k) can be obtained from equation 3.2, and P(D^k) can be absorbed into a normalizer α. P(D^k \mid X^k) can be obtained from the confusion matrix introduced in Chapter 2. Although users choose a colored region, as explained in Section 2.2, rather than directly choosing the desired orientation, the corresponding probability P(D^k \mid X_i) of region X_i can be applied to every element belonging to that region. The normalizer then ensures that the PDF sums to 1. We can therefore write equation 3.3 as:

P(X^k \mid D^k) = \alpha\, P(D^k \mid X^k)\, P(X^k) \qquad (3.4)

The above Bayes filter recursively updates the user intent vector for orientation. The initial vector P(X^0) can be set as a uniform distribution, with each angle having the same probability. Algorithm 1 gives the pseudocode implementing this Bayes filter.


Algorithm 1: Update orientation intent probability vector
Input: a vector of orientation PDF (opdf), a vector of the current region segmentation (pie)
Output: the updated orientation PDF vector based on the new command (opdf)

    prior ← 0
    for k ← 0 to size(opdf) − 1 do
        if pie[k] ≥ 0 then
            prior[pie[k]] += opdf[k]
    posterior ← 0; total ← 0
    for k ← 0 to n_cmds − 1 do
        posterior[k] ← interface_matrix[detected_command + k · n_cmds] · prior[k]
        total += posterior[k]
    for k ← 0 to n_cmds − 1 do
        posterior[k] ← posterior[k] / total
    for k ← 0 to size(opdf) − 1 do
        opdf[k] ← opdf[k] · posterior[pie[k]] / prior[pie[k]]
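Algorithm 1 can be rendered as runnable Python as follows (assuming, as the pseudocode suggests, a row-major flattened interface matrix where interface_matrix[d + k*n_cmds] approximates P(detect d | intend k); the final step spreads each sector's normalized posterior back over its angles in proportion to their prior probabilities, which is what the elementwise update of equation 3.4 amounts to):

```python
def update_opdf(opdf, pie, interface_matrix, detected, n_cmds):
    """One Bayes-filter update of the orientation intent PDF.

    opdf -- probability per discrete angle (sums to 1)
    pie  -- sector index (color) assigned to each angle
    """
    # Cumulative prior mass of each colored sector (eq. 3.2).
    prior = [0.0] * n_cmds
    for k, color in enumerate(pie):
        if color >= 0:
            prior[color] += opdf[k]
    # Sector posteriors, normalized so they sum to one.
    posterior = [interface_matrix[detected + k * n_cmds] * prior[k]
                 for k in range(n_cmds)]
    total = sum(posterior)
    posterior = [p / total for p in posterior]
    # Spread each sector's posterior mass back over its angles,
    # proportionally to the angles' prior probabilities.
    return [opdf[k] * posterior[pie[k]] / prior[pie[k]] if prior[pie[k]] > 0
            else 0.0
            for k in range(len(opdf))]
```

With a uniform initial PDF and a confident interface matrix, one detected command already moves most of the probability mass into the chosen sector.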

3.3 Orientation pie segmentation

Unlike in position control, it is not possible to present a grid map from which the user chooses his or her desired orientation. To make the decision process more intuitive, we designed a pie diagram, named the orientation pie, to help the user make decisions. The orientation pie is divided into several sectors with different colors, and the user chooses the colored sector that covers his or her desired direction. One thing to notice is that the direction specified by the orientation pie is relative to the global map reference frame rather than the wheelchair reference frame.

Figure 3.2 is an example in which the orientation pie is divided into four regions corresponding to the four available input commands from the BCI. If the user aims to set the facing of the wheelchair to northeast (45°) at the final position, the user needs to keep choosing the colored sector that covers the desired angle at every iteration.

Figure 3.2: An example of segmentation of orientation pie

As shown in equation 3.4, the segmentation of the orientation pie affects how the orientation probability vector is updated. The orientation pie is composed of basic sectors whose minimal angular width is defined by resol; their number is given by equation 3.1.

One way to divide the orientation pie is to maximize the decrease in entropy, thus maximizing the information throughput at every iteration. We want to make the cumulative probability of each sector as equal as possible (assuming every command is equally likely to be chosen), so that after every choice the chosen sector is again divided into probabilistically equal parts. We call this equally distributed vector the reference vector. Another requirement for this segmentation strategy is that the probability of a single direction must converge to 1.0 in the end.

A one-way scan procedure (Algorithm 2) has been designed to assign elements (angles) of the orientation probability array to different groups (sectors with different colors). We use the cumulative prior probabilities of the angles to determine whether the current angle should be included in the current colored sector. If adding the current angle would make the cumulative prior probability of the current sector exceed the corresponding prior probability in the reference vector, the angle is assigned to the next sector instead.

3.4 Intermediate orientation selection

To give the user a greater sense of control over the wheelchair, and to shorten the navigation time, the shared orientation control unit can exploit the inferred intent information that comes with every update of the orientation PDF vector by rotating the wheelchair to an intermediate angle, similar to the intermediate position explained in Section 2.4. Theoretically, the optimal intermediate orientation should meet the following criteria:

1. The optimal intermediate orientation must be within the reachability area (RA), defined in Section 2.4. In orientation control, the RA is a set of sectors corresponding to different desired angles.

2. The optimal intermediate orientation must be chosen so that the angular distance from the intermediate orientation to the desired orientation is minimal.

Because the user's desired orientation is not known to the system before it is fully inferred, we can only use the information at hand to determine the optimal intermediate orientation from a probabilistic perspective. For a discrete system, we calculate a probabilistic cost C(X) of rotating to the desired orientation from a given angle X, based only on the intent orientation PDF p(x), and choose:

X_{opt} = \arg\min_{X \in RA} \left( C(X_{cur}, X) + C(X) \right) \qquad (3.5)

where C(X_cur, X) is the cost of rotating from the current orientation X_cur to the candidate intermediate angle X within the reachability area. Because T_HMI is a constant, C(X_cur, X) is bounded by the same constant for any X in the RA and can thus be ignored. We only need to calculate the second term C(X), which measures the probabilistic cost of rotating from sector X to the intended orientation:

C(X) = \sum_{i=1}^{n} d_{ang}(X, Y_i)\, p(Y_i) \qquad (3.6)

where d_ang(X, Y_i) is the angular difference between sectors X and Y_i in the orientation control scenario.

We can then restate the second criterion as follows: the optimal intermediate orientation must be chosen so that the probabilistic cost from the intermediate orientation to the desired orientation is minimal.


Algorithm 2: Orientation pie segmentation
Input: a vector containing the current color of each orientation unit (unit_color), a vector containing the cumulative probability of each colored sector (cur_color_pdf), the orientation PDF vector (opdf)
Output: a vector containing the updated colors of the sectors (arc_vector)

    cur_color ← 0
    cur_color_pdf ← 0
    for i ← 0 to size(opdf) − 1 do
        cur_color_pdf[cur_color] += opdf[i]
        if cur_color_pdf[cur_color] > reference_color_pdf[cur_color] then
            cur_color_pdf[cur_color] −= opdf[i]
            updateReferencePdf()
            cur_color ← cur_color + 1
            cur_color_pdf[cur_color] += opdf[i]
        unit_color[i] ← cur_color
    cur_color ← 0
    for i ← 0 to size(unit_color) − 1 do
        if unit_color[i] = cur_color then
            arc_vector[cur_color].color ← cur_color
            arc_vector[cur_color].upper_angle += resol · π / 180
        else if cur_color < n_cmds − 1 then
            arc_vector[cur_color + 1].lower_angle ← arc_vector[cur_color].upper_angle
            arc_vector[cur_color + 1].upper_angle ← arc_vector[cur_color + 1].lower_angle + resol · π / 180
            cur_color ← cur_color + 1
            arc_vector[cur_color].color ← cur_color
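The core of the one-way scan can be sketched in Python as follows (a simplified version: it uses a fixed equal-share reference of 1/n_cmds per sector instead of Algorithm 2's updateReferencePdf() refinement, and it skips the arc-angle bookkeeping):

```python
def segment_pie(opdf, n_cmds):
    """Greedy one-way scan: assign each angle to a colored sector so that
    every sector's cumulative probability stays close to the equal-share
    reference value 1/n_cmds."""
    reference = 1.0 / n_cmds
    colors, cur_color, cum = [], 0, 0.0
    for p in opdf:
        # Open a new sector once the current one would exceed its share.
        if cum + p > reference and cur_color < n_cmds - 1:
            cur_color += 1
            cum = 0.0
        cum += p
        colors.append(cur_color)
    return colors
```

With a uniform PDF the pie splits into equal arcs; once the PDF concentrates, the high-probability sector shrinks in angular width while keeping roughly the same probability mass, which is what maximizes the expected entropy decrease per command.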


Consider the example in Figure 3.3. The current orientation of the wheelchair is in sector 0, and the reachability area of the wheelchair before the next update arrives comprises sectors 7 and 1, in addition to the current sector 0. For simplicity, we assume the angular difference between adjacent sectors is 1. The probabilistic cost C(X) of selecting sector X as the intermediate orientation can be calculated as follows:

C(0) = 0.1×0 + 0.15×1 + 0.05×2 + 0.1×3 + 0.1×4 + 0.2×3 + 0.2×2 + 0.1×1 = 2.05
C(7) = 0.15×0 + 0.05×1 + 0.1×2 + 0.1×3 + 0.2×4 + 0.2×3 + 0.1×2 + 0.1×1 = 2.25
C(1) = 0.1×0 + 0.2×1 + 0.2×2 + 0.1×3 + 0.1×4 + 0.05×3 + 0.15×2 + 0.1×1 = 1.85
(3.7)

C(1) has the lowest probabilistic cost, so sector 1 is selected as the optimal intermediate orientation. This means that if the wheelchair starts rotating from sector 1 at the next iteration, it will have the lowest expected navigation cost (time) for rotating to the orientation that is most likely to be the final destination.

Figure 3.3: Calculating probabilistic cost example


From equation 3.1, we can observe that the orientation PDF vector is not large (normally containing hundreds of values), so a brute-force approach that calculates the probabilistic cost for all the angles within the reachability area is viable. The pseudocode is shown in Algorithm 3.

Algorithm 3: Optimal intermediate orientation selection
Input: a vector of orientation PDF (opdf)
Output: optimal intermediate orientation (opt_orientation)

    min ← ∞
    for i ← 0 to size(opdf) − 1 do
        cur ← calculateProbCost(i)
        if cur < min then
            min ← cur
            min_index ← i
    opt_orientation ← min_index · resol · π / 180

    Function calculateProbCost(index)
        size ← len(opdf)
        cost ← 0.0
        for i ← 0 to size − 1 do
            if i ≤ size / 2 then
                angular_diff ← i
            else
                angular_diff ← size − i
            cost += opdf[(index + i) mod size] · angular_diff
        return cost
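Algorithm 3 can be written in Python as follows, restricting the search to the reachability area. The usage example reuses the Figure 3.3 distribution; note that which sector index attains the minimum cost depends on the direction in which the sectors are numbered around the pie, so only the cost values are asserted here:

```python
def prob_cost(opdf, index):
    """Expected angular distance (in sectors) from `index` to the intended
    orientation, weighted by the intent PDF -- eq. 3.6 with d_ang taken
    along the shorter arc."""
    size = len(opdf)
    cost = 0.0
    for i in range(size):
        angular_diff = i if i <= size // 2 else size - i
        cost += opdf[(index + i) % size] * angular_diff
    return cost

def best_intermediate(opdf, reachable):
    """Brute-force search over the sector indices of the reachability area."""
    return min(reachable, key=lambda idx: prob_cost(opdf, idx))

# Intent PDF of the Figure 3.3 example, sectors 0..7.
opdf = [0.1, 0.15, 0.05, 0.1, 0.1, 0.2, 0.2, 0.1]
best = best_intermediate(opdf, [7, 0, 1])  # RA around the current sector 0
```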


In this chapter, we have explained the implementation of the new shared orientation control unit added to NoVeLTI. The shared orientation control process consists of three steps: intent inference, orientation pie segmentation, and intermediate orientation selection. Intent inference is achieved by a recursive Bayes filter, with human intent modeled as an array containing the PDF over all possible desired orientation angles. The orientation pie segmentation ensures that the intended orientation can eventually be inferred from the limited available options. The intermediate orientation selection lets inference and actuation proceed in parallel to reduce the total navigation time. This orientation control unit has been developed in the ROS environment, and 300 experiment runs have been conducted to evaluate its performance. The experiments are analyzed in Chapter 5.


Chapter 4

Implementation of NoVeLTI in ROS

4.1 New NoVeLTI system architecture

In the original implementation of NoVeLTI in the Robot Operating System (ROS) [15], the three core components (inference unit, map divider, and best pose finder) communicate via topic messages. The publisher/subscriber architecture of ROS topics is a very convenient way to exchange information between nodes. However, one issue with this multi-threaded communication scheme is that it cannot guarantee the execution order of the three core components. For example, if the map divider has finished its previous work while the inference unit has not yet published the newest intent PDF vector, the map divider will use the previous PDF vector for the current iteration, causing a synchronization issue.

If we analyze the NoVeLTI system closely, we can see that the four units actually run sequentially rather than simultaneously. To solve the synchronization problem, we encapsulate the four units, including the new orientation control unit, in a single node, so that communication between the units happens directly within one node and data is stored and transferred in RAM without the need for ROS topics. The redesigned system structure diagram is shown in Figure 4.1. Only two topics (detected command and actuation command) are subscribed and published by NoVeLTI.

Within NoVeLTI there are two parts: the shared position control unit and the shared orientation control unit. These two components share the same structure. The inference unit receives the detected command subscribed from the ROS topic and estimates the user's intent PDF/OPDF vector for the intermediate position/orientation unit and the segmentation unit. Based on the updated PDF/OPDF vector, the intermediate position/orientation unit publishes the wheelchair control command to a ROS topic, while the segmentation unit provides the new segmented region information back to the inference unit in preparation for the next inference. What is not shown here is that the map/orientation pie segmentation also publishes information to RViz for visualization.
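The single-node design can be sketched as follows (class and method names are illustrative, not from the actual NoVeLTI source; the point is that the units become plain function calls executed in a fixed order, so the segmentation can never run on a stale PDF):

```python
class NoveltiNode:
    """Minimal sketch of the redesigned single-node pipeline."""

    def __init__(self, infer, select_pose, segment, initial_pdf):
        self.infer = infer              # Bayes-filter update unit
        self.select_pose = select_pose  # intermediate pose/orientation finder
        self.segment = segment          # map / orientation-pie divider
        self.pdf = initial_pdf

    def on_detected_command(self, cmd):
        # Sequential in-RAM pipeline replacing the unordered topic callbacks:
        # 1. update intent PDF, 2. pick intermediate goal, 3. re-segment.
        self.pdf = self.infer(self.pdf, cmd)
        goal = self.select_pose(self.pdf)  # published as the actuation command
        self.segment(self.pdf)             # regions for the next inference
        return goal
```

In the real system this callback would be driven by the subscribed detected-command topic, with only the returned goal published back out.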

Figure 4.1: Redesigned NoVeLTI architecture

4.2 New NoVeLTI user interface

4.2.1 Position control window

The view window of the position control window resizes according to the current PDF of the vertices on the map. The view zooms in or out to help the user decide which colored region the desired goal lies in. However, this position control window alone still confused users on some occasions during our experiments with human subjects, which is why we added an overview window, introduced in Section 4.2.3.

Figure 4.2: Position control window

4.2.2 Orientation control window

The orientation control window shows a round panel that maps the four LTI commands to four colors. As shown in Figure 4.3, the pie diagram is initially divided equally into four regions, indicating that each of the four sectors has the same cumulative probability. After one command is received, the pie diagram is divided again based on the new probability distribution function.

Figure 4.3: Orientation control window


4.2.3 Overview window

The overview window is similar to the position control window; the only difference is that its view size is fixed and it shows the entire map throughout the process. The reason for adding this special window is to handle the situation where a command is mistakenly received and the camera view of the position control window zooms into a wrong area that may not include the desired position, leaving the user unable to choose the correct colored region covering the intended goal. With the overview window, users can re-orient themselves whenever they lose track of the desired position.

Figure 4.4: Overview window

In this chapter, we have explained the implementation of the improved NoVeLTI in ROS. A new system architecture has been developed to integrate the new orientation control unit and to solve the synchronization issue between different nodes in ROS. Based on feedback from the human experiments discussed in Chapter 5, we also improved the NoVeLTI user interface to provide a better user experience and reduce the chance of inaccurate user inputs.


Chapter 5

Simulation and Experiments

5.1 Simulation setup

We designed a complete system in ROS to simulate the real shared control scenario, shown in Fig 5.1. With this simulation, we can evaluate the performance of different configurations. In the experiments, the user presses one of four keys on the keyboard, representing the four available BCI signals, to send a command. To simulate the low-throughput property of a BCI, after one command is received, the simulated BCI model does not respond to the user or accept any new command until a period of time has elapsed. In a real BCI such as SSVEP, different input signals are distinguished by flickering lights at different frequencies; in our simulation experiments, we use colors to represent the different choices. We also designed a simplified kinematic model to simulate the movement of the wheelchair. It is shown as an arrow on the screen that stores position and orientation information. The kinematic model of the wheelchair is characterized by two velocities: translation velocity and rotation velocity. Both velocities are configurable but stay the same during one experiment.

Figure 5.1: Simulation system

5.2 Human subjects experiment

We collected experiment data from four human subjects using the improved NoVeLTI. Apart from one subject who had prior experience with the system, the subjects were new to NoVeLTI. We explained to them the goal of the experiment and how to interact with the system. They were then asked to navigate the wheelchair from a predefined start point to a given destination using the simulation system. Although the start point and endpoint of each route are the same for all subjects, the actual path depends on the subjects' inputs. Table 5.1 shows

the parameters used in the experiments. Parameters listed with multiple values are the experimental variables; we compare the performance of the improved NoVeLTI across the different choices of these values. Parameters with a single value were chosen based on the specification of a real wheelchair for better simulation fidelity. Figure 5.2 shows a screenshot taken during one human subject experiment; the purposes of the different windows were explained in Chapter 4.

Figure 5.2: Simulation Example

In our experiment, every subject conducted 60 trials covering 12 different configurations; each configuration was repeated 5 times. We analyzed the trials based on different criteria, which we present in the next sections. Two routes are


Parameter                       Value
translation velocity (m/s)      1.0
rotation velocity (rad/s)       0.52
T_HMI (s)                       3.0
confusion matrix                mx100 / mx91 / mx85
interface matrix                mx100 / mx91 / mx85
map segmentation                altertile / extredist
position selection              near cog
orientation resolution (degree) 15
orientation selection           opt / still
probability threshold           0.90

Table 5.1: Experiment parameter configuration

utilized in the experiment to analyze the performance of NoVeLTI on different types of routes. Route 1 is a long-distance route of about 25.5 m, while route 2 is a short-distance route of about 1.2 m. Three confusion matrices/interface matrices were used in the experiments: mx100, mx91, and mx85. The larger the number, the more confident NoVeLTI is about the user input.
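The three matrices share one structure: a single accuracy value on the diagonal, with the remaining probability mass spread evenly over the other three inputs. A minimal sketch of how such a matrix could be constructed follows; the helper name is our own, but the resulting values match Tables 5.2 and 5.3.

```python
import numpy as np


def interface_matrix(accuracy, n_inputs=4):
    """Build an mxNN-style matrix: P(detected | intended) has `accuracy`
    on the diagonal and spreads the error mass evenly elsewhere."""
    off = (1.0 - accuracy) / (n_inputs - 1)
    return np.full((n_inputs, n_inputs), off) + (accuracy - off) * np.eye(n_inputs)


mx91 = interface_matrix(0.91)   # diagonal 0.91, off-diagonal 0.03
mx85 = interface_matrix(0.85)   # diagonal 0.85, off-diagonal 0.05
mx100 = interface_matrix(1.00)  # deterministic: the identity matrix
```

Each row sums to 1, so every matrix is a valid conditional distribution over detected inputs.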

5.3 Experiment results

5.3.1 Navigation time

In Figure 5.3, we plot the PDF entropy and distance-to-goal line charts along with the two routes the wheelchair traveled in the experiment. The average navigation time for one trial is shown in Figures 5.4 and 5.5; they represent the average navigation time calculated over all 12 configurations. Different users clearly show different levels of acceptance of, and proficiency with, the system. Subject 2 is the only expert in the experiment, and his data show the fastest navigation time. We also notice that the more confident the confusion matrix is, the shorter the navigation time, because the inference unit takes less time to infer the desired position and orientation with a more confident confusion matrix.

If we compare the navigation time of the different map segmentation methods on the same route, the altertile approach saves more time than the extremal approach. This is contrary to our expectation, because altertile is a segmentation strategy that does not consider the topology of the map. More subjects may be required to obtain a more conclusive result.

Another interesting observation is that although route 2 is much shorter than route 1 (route 1 is 25.5 meters while route 2 is only 1.2 meters), the times spent on both routes are quite close. This observation exposes one of the drawbacks of NoVeLTI when applied to short-distance navigation. Because NoVeLTI depends on the inference unit to infer the desired destination on a map, for a given map the minimal number of commands required to reduce the entropy and raise the probability of a single vertex above the threshold is fixed. For example, in our configuration, at least 9 commands are required to infer the position. Considering the delay between commands, the minimal time would be 27 seconds, not including the rotation and navigation time.

Figure 5.3: Shared position control with extredist and altertile map segmentation

However, if we analyze the result from another perspective, it shows that the rotation of the wheelchair starts as soon as a new inference result comes out, and the wheelchair can arrive at the destination as soon as the final position is inferred.
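The minimal-command floor described above can be made concrete with a back-of-the-envelope bound. The helpers below are our own sketch and assume an idealized, error-free four-choice interface; with a noisy confusion matrix and the 0.90 probability threshold, the actual command count can only be higher.

```python
def min_commands(n_vertices, n_inputs=4):
    """Smallest k with n_inputs**k >= n_vertices: each command can at
    best distinguish n_inputs hypotheses (error-free interface)."""
    k, capacity = 0, 1
    while capacity < n_vertices:
        capacity *= n_inputs
        k += 1
    return k


def min_inference_time(n_vertices, t_hmi=3.0, n_inputs=4):
    """Floor on position-inference time, excluding rotation and driving."""
    return min_commands(n_vertices, n_inputs) * t_hmi


# 9 commands (as in our configuration) cover up to 4**9 = 262144 vertices;
# at T_HMI = 3 s, inference alone takes at least 27 s for any route length.
```

This is why a 1.2 m route and a 25.5 m route can take similar total times: the inference floor dominates short routes.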

5.3.2 Navigation accuracy

We calculated the Euclidean distance between the given destination and the actual position the subject navigated to in order to measure the navigation accuracy. The results are shown in Figures 5.6 to 5.9. For the long-distance route 1, shared position control with extremal map segmentation achieves a 90% success rate (intended position and actual position exactly the same) with a maximum error of 0.4 meters, while altertile map segmentation only achieves a 77% success rate with a maximum error of up to 6.8 m. This large deviation from the destination is due to continuous erroneous input from the user. For the short-distance route 2, extremal map segmentation achieves a 95% success rate with a maximum error of 0.3 meters, while altertile map segmentation achieves a 98.3% success rate with a maximum error of 0.3 meters. Although we noticed a faster navigation time for altertile map segmentation than for extremal in Section 5.3.1, the accuracy of the latter segmentation approach is higher. From our observations during the human subject experiments, dividing the map vertically or horizontally makes it harder for the user to locate their real destination, thus leading the wheelchair to a different destination.

5.3.3 Comparison of different confusion matrices

In this experiment, we compare how the confusion matrix/interface matrix affects navigation when the subject changes his or her mind during the experiment. This experiment also tests the ability of NoVeLTI to recover from either erroneous user input or signal misclassification by the BCI device. We used two interface matrices in this experiment, shown in Tables 5.2 and 5.3.

      D1    D2    D3    D4
I1   0.91  0.03  0.03  0.03
I2   0.03  0.91  0.03  0.03
I3   0.03  0.03  0.91  0.03
I4   0.03  0.03  0.03  0.91

Table 5.2: Interface Matrix mx91


Figure 5.4: Total navigation time with extredist map segmentation


Figure 5.5: Total navigation time with altertile map segmentation


Figure 5.6: Difference between actual destination and intended destination with extredist map segmentation (route 1)


Figure 5.7: Difference between actual destination and intended destination with extredist map segmentation (route 2)


Figure 5.8: Difference between actual destination and intended destination with altertile map segmentation (route 1)


Figure 5.9: Difference between actual destination and intended destination with altertile map segmentation (route 2)


      D1    D2    D3    D4
I1   0.85  0.05  0.05  0.05
I2   0.05  0.85  0.05  0.05
I3   0.05  0.05  0.85  0.05
I4   0.05  0.05  0.05  0.85

Table 5.3: Interface Matrix mx85

According to the results shown in Figure 5.10, all experiments using the confusion matrix/interface matrix mx100 failed to reach the intended destination, while mx91 reached the correct place. Although both entropy values reach 0, the destinations are totally different. The reason is that mx100 is a deterministic confusion matrix, meaning every user input is treated as correct with no probabilistic allowance for error. If the user ever makes a mistake, it becomes impossible to navigate the wheelchair to the correct destination. Although experiments with mx100 showed the best results in the previous sections, this experiment shows that mx100 is not practical for use in a real scenario.
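The lockout effect of a deterministic matrix can be reproduced with a plain Bayesian update. The sketch below is our own illustration, not the NoVeLTI inference code; it ties each of four candidate destinations to one interface choice and shows that a single misclassified input under mx100 zeroes out the true destination permanently, while mx91's 0.03 off-diagonal entries allow recovery.

```python
import numpy as np


def bayes_update(prior, interface, detected):
    """One posterior update over 4 candidate destinations, each tied to
    one interface choice: posterior ∝ P(detected | intended) * prior."""
    posterior = interface[detected, :] * prior
    total = posterior.sum()
    return posterior / total if total > 0 else posterior


mx100 = np.eye(4)
mx91 = np.full((4, 4), 0.03) + 0.88 * np.eye(4)  # diag 0.91, off-diag 0.03

prior = np.full(4, 0.25)
# The user intends destination 0, but one input is misclassified as 1.
locked = bayes_update(prior, mx100, detected=1)   # P(dest 0) collapses to 0
locked = bayes_update(locked, mx100, detected=0)  # and can never recover

p = bayes_update(prior, mx91, detected=1)         # same error under mx91
p = bayes_update(p, mx91, detected=0)             # subsequent correct inputs
p = bayes_update(p, mx91, detected=0)             # restore destination 0
```

Under mx100 the posterior mass on the true destination is multiplied by an exact zero, so no amount of later evidence can bring it back; under mx91 it is merely attenuated.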

5.3.4 Shared orientation control time

In Figure 5.11, we compare the orientation PDF entropy and the angular distance to goal between the two intermediate orientation selection policies. Although the two decreasing entropy plots show no obvious difference, there is an apparent difference in the angular-distance-to-goal plots. The time needed for the entropy to decrease to 0 is about 15 seconds, but the time needed to reach the desired orientation with an intermediate orientation is much shorter than without such a policy, as can be seen for all three routes in Figure 5.11. The time needed to reach the intended orientation with the optimal orientation selector is about 10 seconds, half of the time needed in the same situation without it. This observation validates the benefit of adding an intermediate orientation to shared control: the approach not only gives the user a greater sense of control over the system but also shortens the navigation time.
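One way to realize such an optimal intermediate selector is a one-dimensional expected-cost minimization over candidate headings: while inference is still narrowing the orientation PDF, the chair turns toward the heading that minimizes the expected remaining rotation. The sketch below is our own illustration of the idea, not the exact NoVeLTI implementation; the "still" policy would simply hold the current heading until the PDF converges.

```python
import numpy as np


def expected_rotation(theta, pdf, angles):
    """Expected remaining angular distance to the goal under the current
    orientation PDF (angles in radians, differences wrapped to [-pi, pi])."""
    diff = np.abs((angles - theta + np.pi) % (2.0 * np.pi) - np.pi)
    return float(np.sum(pdf * diff))


def intermediate_orientation(pdf, angles, grid=360):
    """'opt'-style policy sketch: pick the heading minimizing expected
    remaining rotation, so turning starts before inference finishes."""
    candidates = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    costs = [expected_rotation(t, pdf, angles) for t in candidates]
    return float(candidates[int(np.argmin(costs))])
```

When the PDF is concentrated on one orientation, the selector simply returns that orientation; while the PDF is still spread out, it returns a probability-weighted compromise heading.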

In this chapter, we evaluated the results of 240 human trials conducted in our simulation environment. The results show that the improved NoVeLTI system can successfully assist users in navigating a simulated wheelchair to the exact desired destination on a given map at a success rate of 90.4%. The maximum distance between the desired destination and the actual destination was 6.8 m, but this occurred only once among all human trials, due to continuous erroneous user input. The average navigation time for the 25.5 m route was 35 seconds with a velocity of 1.0 m/s. The time depends on multiple factors, including the choice of confusion matrix, the proficiency of the user, and so on. The experiments also exposed one critical drawback of the NoVeLTI shared control system: the time needed to navigate a short path is similar to that of a long path (in our case the short path is 1.2 m while the long path is 25.5 m). The cause of this phenomenon was explained in the previous sections; it is a trade-off we make to ensure the robustness of the system.

Figure 5.10: Shared position control with two different confusion matrices/interface matrices

Figure 5.11: Shared orientation control with different intermediate orientation strategies


Chapter 6

Conclusion

The shared position control system NoVeLTI introduced in [27] is a powerful tool for integrating a BCI with a wheelchair for navigation, but due to its restricted use case, we proposed an improved version of NoVeLTI in this work. NoVeLTI now contains two core components: a shared position control unit and a shared orientation control unit. The software architecture of NoVeLTI was redesigned to ensure a stable communication channel between its components, and a new user interface is also presented in this work. Throughout the experiments, we did not experience any data loss or synchronization problems, and data integrity was fully maintained.

The orientation control unit uses the same shared control strategy, including a human intent inference unit, an intermediate orientation selector, and an orientation pie segmentation unit. This newly added component enables the user to set the final orientation of the wheelchair to any angle via a low-throughput interface. The user only needs to provide information about where the intended orientation is, without steering the wheelchair directly. The orientation control process is designed to be probabilistically optimal so as to minimize the overall rotation time.

To validate and evaluate the performance of the new NoVeLTI, and to help us find the best parameter configuration, we conducted simulation experiments with four human subjects. Users were able to use the new NoVeLTI user interface and orientation control unit to fulfill the requirements. The results show that, with a proper parameter configuration based on numerous autonomous experiments, the best success rate for navigating the wheelchair using NoVeLTI reaches 98%. As subjects become more proficient with the system, we also observe an apparent decrease in the time needed for navigation. The newly added orientation control unit also worked as expected according to the results.

However, the data from the human subject experiments also exposed several issues with NoVeLTI that we need to solve in the future. Because the inference unit only outputs the inferred position once the maximum probability of a single vertex surpasses the threshold, and the minimum number of commands needed to decrease the information entropy to 0 is fixed, the time needed to navigate to any point on the map, not accounting for human or device errors, is determined by the number of vertices on the given map. This leads to the observation in Section 5.3.1 that there is always a minimal navigation time for any kind of path. To solve this issue, we may need to consider a different control strategy for short-distance routes.

Another piece of future work is to develop a new user interface independent of ROS. Currently, all interfaces are developed as Rviz plugins, which sometimes hinders the user from choosing the correct color because the camera control is not intelligent enough to always focus on the area of interest. If real BCI devices were to be included in the system, a new way to display the map segmentation would also be required, because no BCI signal is generated from color directly.

For further research, we would like to implement the new NoVeLTI on wheelchair hardware with a BCI and test its performance. An augmented reality device could also be utilized to show more details of the environment at the final destination, so that the user could control the facing direction of the wheelchair more specifically, such as facing toward a window, rather than using an abstract angle value.


Bibliography

[1] Cornelia I Bargmann, Mien-Chie Hung, and Robert A Weinberg. The neu oncogene encodes

an epidermal growth factor receptor-related protein. Nature, 319(6050):226–230, 1986.

[2] Jose M Bernardo and Adrian FM Smith. Bayesian theory, 2001.

[3] Luzheng Bi, Xin-An Fan, and Yili Liu. EEG-based brain-controlled mobile robots: a survey.

IEEE Transactions on Human-Machine Systems, 43(2):161–176, 2013.

[4] National Spinal Cord Injury Statistical Center et al. Spinal cord injury. facts and figures at a

glance. The journal of spinal cord medicine, 28(4):379, 2005.

[5] Pablo F Diez, Sandra M Torres Muller, Vicente A Mut, Eric Laciar, Enrique Avila, Teodi-

ano Freire Bastos-Filho, and Mario Sarcinelli-Filho. Commanding a robotic wheelchair with

a high-frequency steady-state visual evoked potential based brain–computer interface. Medical

engineering & physics, 35(8):1155–1164, 2013.

[6] Andrew T Duchowski. Eye tracking methodology. Theory and practice, 328, 2007.

[7] Marco Ferrari and Valentina Quaresima. A brief review on the history of human func-

tional near-infrared spectroscopy (fNIRS) development and fields of application. NeuroImage,

63(2):921–935, 2012.

[8] Ferran Galan, Marnix Nuttin, Eileen Lew, Pierre W Ferrez, Gerolf Vanacker, Johan Philips,

and J del R Millan. A brain-actuated wheelchair: asynchronous and non-invasive brain–

computer interfaces for continuous control of robots. Clinical Neurophysiology, 119(9):2159–

2169, 2008.


[9] Vaibhav Gandhi, Girijesh Prasad, Damien Coyle, Laxmidhar Behera, and Thomas Martin

McGinnity. EEG-based mobile robot control through an adaptive brain–robot interface. IEEE

Transactions on Systems, Man, and Cybernetics: Systems, 44(9):1278–1285, 2014.

[10] Larissa K Grover, Donald C Hood, Quraish Ghadiali, Tomas M Grippo, Adam S Wenick,

Vivienne C Greenstein, Myles M Behrens, and Jeffrey G Odel. A comparison of multifocal

and conventional visual evoked potential techniques in patients with optic neuritis/multiple

sclerosis. Documenta ophthalmologica, 117(2):121–128, 2008.

[11] Matti Hamalainen, Riitta Hari, Risto J Ilmoniemi, Jukka Knuutila, and Olli V Lounasmaa.

Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies

of the working human brain. Reviews of Modern Physics, 65(2):413, 1993.

[12] Ben H Jansen, George Zouridakis, and Michael E Brandt. A neurophysiologically-based math-

ematical model of flash visual evoked potentials. Biological cybernetics, 68(3):275–283, 1993.

[13] Jeonghee Kim, Hangue Park, Joy Bruce, Erica Sutton, Diane Rowles, Deborah Pucci, Jaimee

Holbrook, Julia Minocha, Beatrice Nardone, Dennis West, et al. The tongue enables computer

and wheelchair control for people with spinal cord injury. Science translational medicine,

5(213):213ra166–213ra166, 2013.

[14] Jo Ann S Kinney. Transient visually evoked potential. JOSA, 67(11):1465–1474, 1977.

[15] Anis Koubaa. Robot Operating System (ROS): The complete reference, volume 1. Springer, 2016.

[16] Marta Kutas, Gregory McCarthy, and Emanuel Donchin. Augmenting mental chronometry:

the P300 as a measure of stimulus evaluation time. Science, 197(4305):792–795, 1977.

[17] Zhijun Li, Suna Zhao, Jiding Duan, Chun-Yi Su, Chenguang Yang, and Xingang Zhao. Hu-

man cooperative wheelchair with brain–machine interaction based on shared control strategy.

IEEE/ASME Transactions on Mechatronics, 22(1):185–195, 2017.

[18] Chern-Sheng Lin, Chien-Wa Ho, Wen-Chen Chen, Chuang-Chien Chiu, and Mau-Shiun Yeh.

Powered wheelchair controlled by eye-tracking system. Optica Applicata, 36, 2006.

[19] Nikos K Logothetis, Jon Pauls, Mark Augath, Torsten Trinath, and Axel Oeltermann. Neuro-

physiological investigation of the basis of the fMRI signal. Nature, 412(6843):150, 2001.


[20] Ana C Lopes, Gabriel Pires, and Urbano Nunes. Assisted navigation for a brain-actuated

intelligent wheelchair. Robotics and Autonomous Systems, 61(3):245–258, 2013.

[21] Fabien Lotte, Laurent Bougrain, and Maureen Clerc. Electroencephalography (EEG)-based

brain–computer interfaces. Wiley Encyclopedia of Electrical and Electronics Engineering,

2015.

[22] Paivi Majaranta and Andreas Bulling. Eye tracking and eye-based human–computer interac-

tion. In Advances in physiological computing, pages 39–65. Springer, 2014.

[23] Imad Mougharbel, Racha El-Hajj, Houda Ghamlouch, and Eric Monacelli. Comparative

study on different adaptation approaches concerning a sip and puff controller for a powered

wheelchair. In Science and Information Conference (SAI), 2013, pages 597–603. IEEE, 2013.

[24] G Plourde. Auditory evoked potentials. Best Practice & Research Clinical Anaesthesiology,

20(1):129–139, 2006.

[25] B Jenita Amali Rani and A Umamakeswari. Electroencephalogram-based brain controlled

robotic wheelchair. Indian Journal of Science and Technology, 8(S9):188–197, 2015.

[26] Dmitry A Sinyukov and Taskin Padir. CWave: High-performance single-source any-angle path

planning on a grid. In Robotics and Automation (ICRA), 2017 IEEE International Conference

on, pages 6190–6197. IEEE, 2017.

[27] Dmitry Aleksandrovich Sinyukov. Semi-autonomous robotic wheelchair controlled with low

throughput human-machine interfaces. 2017.

[28] Rui Zhang, Yuanqing Li, Yongyong Yan, Hao Zhang, Shaoyu Wu, Tianyou Yu, and Zhenghui

Gu. Control of a wheelchair in an indoor environment based on a brain–computer interface and

automated navigation. IEEE transactions on neural systems and rehabilitation engineering,

24(1):128–139, 2016.
