
Advances in EEG-based Brain-Computer Interfaces for Control and Biometry

Virgílio Bento, Luís Paula, António Ferreira, Nuno Figueiredo, Ana Tomé, Filipe Silva, João Paulo Cunha and Pétia Georgieva

Department of Electronics, Telecommunications and Informatics/IEETA, University of Aveiro, Aveiro, Portugal

[email protected], {a22144, a32951, nuno.figueiredo, ana, fmsilva, jcunha, petia}@ua.pt

Abstract

Recent advances in acquisition technology and signal processing suggest that controlling certain functions through neural interfaces may have a significant impact on the way people operate computers, wheelchairs, prostheses or other devices using only brain signals. This paper explores the possibility of achieving a communication pathway between the brain and an external device, based on electroencephalogram (EEG) signals, in two challenging scenarios: (1) modulating the EEG signals with motor imagery tasks for brain-actuated control of a mobile robot; and (2) person identification using brainwaves. The developed BCI systems differ in the required mental activity and in the type of brain signals used for classification. The essential design principles and signal processing tools of the two EEG-based BCIs are presented. Preliminary results of brain-state estimation during several experimental sessions demonstrate the systems’ performance.

1 Introduction

During the last decade, many advances in a number of fields have supported the idea that a direct interface between the human brain and an artificial system, called a Brain-Computer Interface (BCI), is a viable concept, although a significant research and development effort must be conducted before these technologies enter routine use [Berger et al., 2008]. The conceptual approach is to model the variations in brain activity and map them into some kind of actuation or command over a target output (e.g., a computer interface or a robotic system). Nowadays, the principal motivation for BCI research is the potential benefit to those with severe motor disabilities, such as brainstem stroke, amyotrophic lateral sclerosis or severe cerebral palsy [Birbaumer et al., 2007; Nijboer et al., 2008; Pfurtscheller et al., 2008].

A very effective way to analyze the physiological activity of the brain is through the EEG signals, which stem from the cortex and whose sources are the action potentials of the nerve cells in the brain. The theoretical and application studies are based on the knowledge that the EEG signals are composed of waves inside the 0-60 Hz frequency band and that different brain activities can be identified from the recorded oscillations. For example, signals within the delta band (below 4 Hz) correspond to deep sleep, theta-band (4-8 Hz) signals are typical of a dreamlike state, alpha frequencies (8-13 Hz) correspond to a relaxed state with closed eyes, the beta band (13-20 Hz) is related to waking activity, and gamma frequencies (20-50 Hz) are characteristic of mental activities such as perception and problem solving [Niedermeyer and Lopes da Silva, 1999]. Over the last decade, the interest in extracting the knowledge hidden in EEG signals has grown rapidly, as have its applications. EEG-based BCIs for motor control and biometry are among the most recent applications in the computational neuro-engineering field.

In this line of thought, the project behind this paper aims to initiate long-term multidisciplinary research by combining developments in relevant fields, such as computational neuro-engineering, signal processing, pattern recognition, brain imaging and robotics. In the middle term, the main objective has been the design and development of BCIs that exploit the benefits of advanced human-machine interfaces for control and biometry. This paper presents the advances in the development of two BCI systems that analyze the brain activity of a subject measured through electroencephalogram (EEG). The former tries to infer the user’s intention and generates output commands for controlling an appropriate output device. The relevant features of the implementation include the choice of motor imagery (mu rhythms) as the control paradigm, the effort dedicated to the development of various training tools and the user-dependent approach followed. The latter explores the possibility of using the brain electrical activity during visual stimuli to implement an EEG biometric system. Simulation results in a large group of subjects indicate the potential of the proposed solutions.

The remainder of the paper is organized as follows: Section 2 presents an overview of the current research activity at IEETA. Section 3 describes the design and development of a BCI prototype enabling the control of a small mobile robot. Section 4 describes the main advances in the development of an EEG-based BCI for biometry. Section 5 concludes the paper and outlines the perspectives of future research.

2 Framework of the Research at IEETA

Over the past decades, several working BCI systems have been described in the literature. These systems use a variety of signal acquisition methods, experimental paradigms, pattern recognition approaches and output interfaces, requiring different types of cognitive activity [Bashashati et al., 2007; Berger et al., 2008; Sanchez et al., 2008]. Most solutions rely on bioelectrical brain signals recorded by EEG electrodes placed on the scalp. Despite its poor spatial resolution, this non-invasive technique has proven to be a useful and practical tool in experimental research, mainly due to fast recording, easy subject preparation and the reduced equipment required. Further, the relationship between the EEG and brain function is well documented in the literature [Niedermeyer and Lopes da Silva, 1999].

In general, a BCI works by analyzing a brain signal, extracting its features and then translating or classifying these features into some recognizable command through the use of signal processing and machine learning methods. Fig. 1 illustrates the different functional blocks of a typical EEG-based BCI system. Despite the proof of concept and many encouraging results achieved by several research groups [Bensch et al., 2007; Leeb et al., 2007; Millán, 2008; Müller-Putz and Pfurtscheller, 2008; Nijboer et al., 2008], additional efforts are required in order to build and implement an efficient BCI. For example, devising reliable signal processing and pattern recognition techniques able to continuously extract meaningful information from the very noisy EEG remains a major challenge.
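
Purely as an illustration of this block structure (the actual prototype described later is MATLAB/Simulink-based), the following Python sketch chains acquisition, feature extraction, classification and actuation; the class and function names and the toy feature/classifier functions are hypothetical placeholders.

```python
# Minimal sketch of the generic BCI pipeline of Fig. 1 (hypothetical names,
# not the authors' MATLAB/Simulink implementation).
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class BCIPipeline:
    extract_features: Callable[[np.ndarray], np.ndarray]  # EEG block -> feature vector
    classify: Callable[[np.ndarray], str]                  # feature vector -> command label
    actuate: Callable[[str], None]                         # command label -> device/feedback

    def process_block(self, eeg_block: np.ndarray) -> str:
        """Run one acquisition block (channels x samples) through the chain."""
        features = self.extract_features(eeg_block)
        command = self.classify(features)
        self.actuate(command)
        return command

if __name__ == "__main__":
    pipe = BCIPipeline(
        extract_features=lambda x: np.log(np.var(x, axis=1)),  # per-channel log-variance
        classify=lambda f: "FORWARD" if f.mean() < 0 else "STOP",
        actuate=lambda cmd: print("command:", cmd),
    )
    pipe.process_block(np.random.randn(8, 256))  # 8 channels, 1 s at 256 Hz
```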

The development of noninvasive BCIs for control and biometry is the research focus of the IEETA Computational Neuro-engineering research group and among the most recent applications based on personal EEG data. In spite of sharing the same basic components, a BCI that provides an alternative control channel for acting on the environment and a biometric system for identification or authentication reveal significant differences. While BCI technology has been focused on interpreting brain signals for communication and control, the requirements of an EEG-based biometric system are entirely different: they require no interpretation of the brain signals, but the use of as much signal information as possible. The identified person is exposed to a stimulus (usually visual or auditory) for a certain time and the EEG data collected over this time is input to the biometry system. It has been shown in previous studies [Paranjape et al., 2001; Poulos et al., 1999] that the EEG can be used for building personal identification and authentication systems due to the unique brain-wave patterns of every individual.

Fig. 1. General components of a BCI system: signals are recorded and enhanced, meaningful features are extracted and subsequently classified, and an output signal provides the control command for some target device


Frequency band segmentation is a key concept in the emerging area of EEG-based BCIs. Current BCIs for motor control are based on the special frequency zone termed the sensorimotor mu rhythm (12-16 Hz), which is related to imagined movements. As for EEG-based biometry, the concepts of Evoked Potentials (EP) and Visual Evoked Potentials (VEP) of the brain electrical activity play a major role. EPs are transient EEG signals generated in response to a stimulus (for example, motor imagery or mental tasks) and VEPs are EPs produced in response to visual stimuli (presentation of images), generating activity within the gamma band.

Bearing this in mind, we intend to exploit the possibility of achieving a communication channel between the brain and a mobile robot through the modulation of the EEG signal during motor imagery tasks. From the very beginning of the work, a major concern was directed towards designing a generalized and multi-purpose framework that supports rapid prototyping of various experimental strategies and operating modes [Bento et al., 2008]. Issues like EEG-fMRI design for optimal electrode locations, personalized BCIs, the study of different mental activities, feedback and training approaches, and advanced signal processing and pattern recognition techniques are currently being addressed. Concerning the complete EEG-based biometric scenario, we focus on several open problems: i) designing a feature model that belongs to a certain person and a personal classifier associated with its owner; ii) studying the type and duration of the evoked potentials (visual or auditory) that would enhance the identification/authentication capacity; iii) investigating post-processing techniques on the classifier output, such as averaging or sporadic error correction, that would improve the identification/authentication capacity; and iv) optimizing the evoked potential duration (EPD) in order to implement the paradigm in an online scheme.

In this paper, we concentrate our efforts on describing the design issues and the strategic options taken towards the development of both BCI prototypes. The design process has revealed much about the problems, challenges and tradeoffs imposed by BCI research. Although some issues are yet to be addressed, these two BCIs are already mature enough for practical experiments and for drawing first conclusions on the potential of the proposed solutions.

3 Brain-Computer Interfacing for Control

This section describes the development steps towards a brain-controlled interface, focusing on the problems, challenges and tradeoffs of the complete prototype. Special attention was given to the design and evaluation of a variety of tools allowing users to adapt and modulate the mu-rhythmic activity. Further, the BCI system was designed with a specific user in mind by providing him with a training period in the presence of feedback. As a result, the current system depends on the specific user’s control of the brain electrical activity, such as the amplitude in a specific frequency band (mu rhythms) of the EEG recorded over a specific cortical area (the sensorimotor cortex).

For enhanced flexibility and versatility, the developed prototype is based on portable and modular concepts supported by MATLAB and Simulink. Due to their programming flexibility, all the algorithms are written in MATLAB code, whereas the signal acquisition software is written in C++ (integrated through a wrapper). Another important aspect of developing the system in MATLAB is that it can be done at a higher abstraction level, allowing the developer to focus on the problems of the system and less on the tools that support it. Further, many different toolboxes provide signal processing modules for direct implementation, while new modules can be implemented using S-functions to assure the temporal performance essential in online operation.

3.1 BCI control paradigm

One type of BCI that has been extensively studied derives information either from the user’s movements or from the imagination of movement [Millán et al., 2004; Pfurtscheller et al., 2006; Wolpaw et al., 2003]. These movement-based BCIs recognize changes in the human mu rhythm, an EEG oscillation recorded in the 8-13 Hz range from the central region of the scalp overlying the sensorimotor cortex [Kuhlman, 1978; Arroyo, 1993; Pfurtscheller and Lopes da Silva, 1999]. This activity is most pronounced when subjects are at rest and not planning to initiate voluntary movement. At least a second before subjects initiate voluntary movement, the mu rhythms over the hemisphere contralateral to the region to be moved show a decrease in amplitude. This attenuation becomes more symmetric over both hemispheres as subjects actually initiate the movement and remains until shortly after movement onset. Mu activity returns to baseline levels within a second after movement is initiated and may briefly increase above the reference level [Fatourechi et al., 2007; Pineda, 2005]. These activity-dependent changes in mu rhythms have also been called event-related desynchronization (ERD) and synchronization (ERS) [Pfurtscheller and Lopes da Silva, 1999].

The mu rhythms have potential for BCIs for many reasons. First, they are present in nearly all adults, including many individuals with motor disabilities. Second, the mu rhythms can be modulated in both hemispheres [Pfurtscheller and Lopes da Silva, 1999; Pineda, 2005]. Since they are easy to train in subjects who are awake with eyes open [Kuhlman, 1978; Pfurtscheller and Lopes da Silva, 1999] and can be affected by visual and imagined input [Pineda, 2005; Muthukumaraswamy et al., 2004; Hoshi and Tanji, 2006], it may be possible for users to learn to use a mu-rhythm-based BCI system by means of a multiplicity of stimuli and cognitive strategies [Pineda et al., 2000; Schalk et al., 2004; Wolpaw et al., 2000]. These observations led us to use motor imagery as the control strategy to achieve asymmetrical electrocortical responses, with the left-right differences in the sensorimotor EEG providing the required control options in a two-dimensional environment.

3.2 Signal acquisition, processing and classification

The subject uses a portable EEG acquisition system (Fig. 2) with a sampling rate of 256 Hz. This EEG system supports a maximum of 8 acquisition electrodes. This limitation is offset by the advantages of a portable system with low power consumption, essential for the implementation of a future ambulatory prototype. Another advantage of using a small number of electrodes is the online performance of the BCI system, given that a higher number of EEG electrodes results in more signals to process, which can slow down the real-time processing.

Accordingly, EEG signals were recorded from eight scalp electrodes placed over central (C3, C4), frontal (F3, F4) and parietal (P7, P8, P3 and P4) locations according to the 10-20 international system and referred to a linked-ear reference (see Fig. 3). With these spatial locations, it is assumed that a generic motor imagery task can engage different subsets of cortical areas, resulting in the excitation of different regions such as the premotor cortex, the supplementary motor area, the primary motor cortex and the sensorimotor cortex [Porro et al., 1996; Lotze et al., 1999; Lacourse et al., 2005].

An effective way to reduce noise in the EEG signals and increase the signal-to-noise ratio is the surface Laplacian method [McFarland, 1997]. Each signal block was transformed by a surface Laplacian over F3, C3, P3 and P7 for the left hemisphere and over F4, C4, P4 and P8 for the right hemisphere. The idea is to record the EEG at two different sites on the scalp, preferably over C3 and C4, expecting that subjects will be able to learn to intentionally vary the amplitudes of the mu rhythms simultaneously and independently. In other words, we take the amplitudes measured by one pair of electrodes and translate them into movement actions. The computation involves the power spectrum estimation (using the Yule-Walker method) of the ongoing EEG in the mu-rhythm frequency range and the comparison of the resulting values with adaptable voltage ranges (called baselines).
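
The following Python sketch illustrates this processing chain under stated assumptions: a small surface Laplacian built from the montage of Fig. 3 (the neighbour sets and channel ordering are our assumptions) and a Yule-Walker autoregressive estimate of the mu-band power. It is not the prototype’s MATLAB implementation.

```python
# Sketch: surface Laplacian around C3/C4 plus Yule-Walker (AR) mu-band power.
# Channel order, neighbour sets and the AR order are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz

FS = 256                                   # sampling rate (Hz)
CH = {"F3": 0, "C3": 1, "P3": 2, "P7": 3,  # assumed channel indices
      "F4": 4, "C4": 5, "P4": 6, "P8": 7}

def surface_laplacian(eeg, center, neighbours):
    """Center channel minus the mean of its neighbours (eeg: channels x samples)."""
    return eeg[CH[center]] - eeg[[CH[n] for n in neighbours]].mean(axis=0)

def yule_walker_psd(x, order=16, nfft=512):
    """AR power spectral density from the autocorrelation sequence (up to a scale factor)."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)   # autocorrelation
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])                   # innovation variance
    freqs = np.fft.rfftfreq(nfft, 1.0 / FS)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs / FS,
                    np.arange(1, order + 1))) @ a) ** 2
    return freqs, sigma2 / denom

def mu_band_power(x, band=(8.0, 13.0)):
    freqs, psd = yule_walker_psd(x)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

# One 1-s block of 8-channel EEG (random placeholder data):
block = np.random.randn(8, FS)
left = surface_laplacian(block, "C3", ["F3", "P3", "P7"])
right = surface_laplacian(block, "C4", ["F4", "P4", "P8"])
print(mu_band_power(left), mu_band_power(right))  # values to compare with the baselines
```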

This procedure leads to a simple quantification (or classification) encoding the mu rhythm amplitude and it is directly translated into the movement of an output device. The ERD block verifies, for a specific frequency band, whether the power attenuation is confirmed (one ERD module for each hemisphere).

Fig. 2. Ambulatory acquisition of EEG signals using the TrackIt system from LifeLines Ltd

The classifier has two inputs, one for each ERD block. It was implemented by means of a decision tree: if only the right-hemisphere signal verifies the ERD, the classifier output is “LEFT”; if only the left-hemisphere signal verifies the ERD, the output is “RIGHT”; if both signals verify the ERD, the output is “FORWARD”; and if neither signal verifies the ERD, the output is “STOP”.
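
A minimal sketch of this decision tree is given below; the command names follow the text, while the example wheel-speed mapping for a Khepera-like robot is an illustrative assumption.

```python
# Sketch of the two-input decision tree described above; wheel speeds are
# illustrative values, not those of the actual prototype.
def classify(erd_left_hemisphere: bool, erd_right_hemisphere: bool) -> str:
    if erd_right_hemisphere and not erd_left_hemisphere:
        return "LEFT"
    if erd_left_hemisphere and not erd_right_hemisphere:
        return "RIGHT"
    if erd_left_hemisphere and erd_right_hemisphere:
        return "FORWARD"
    return "STOP"

# Example mapping of the classifier output onto differential wheel speeds
# of a Khepera-like robot (arbitrary units):
WHEELS = {"LEFT": (-2, 2), "RIGHT": (2, -2), "FORWARD": (3, 3), "STOP": (0, 0)}

command = classify(erd_left_hemisphere=True, erd_right_hemisphere=False)
print(command, WHEELS[command])   # -> RIGHT (2, -2)
```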

Fig. 3. Spatial location of the EEG electrodes over the frontal, central and parietal areas

3.3 Subject’s training

In order to ensure reliable performance, the user and the BCI system need to adapt to each other both initially and continually. A crucial feature for improving the performance of a BCI is real-time feedback [Kostov and Polak, 2000; Guger, 2002; Fabiani, 2004]. However, although it is well known that feedback is very important in the learning process, it remains unclear which type of feedback is best. This appears to be a user-dependent issue, as one subject may have the best success with one type of feedback, whereas another subject might find that particular type distracting. In order to allow the user to adapt and modulate various signal features, two graphical applications were developed (Fig. 4):

1. Biofeedback I provides feedback by displaying one of four images according to the classifier output: “RIGHT”, “LEFT”, “FORWARD” or “STOP”. Feedback is provided by means of colored arrows (one for each mental task) for easy recognition of the system output;

2. Biofeedback II is used in online mode and it corresponds to a modified version of a classical application aiming to place a cursor on one of three possible areas.

An alternative training module was developed, also relying on vision but not on a computer monitor. Here, the user attempts to control a Khepera mobile robot, receiving visual feedback during the execution of movements (Fig. 5). The Khepera robot (5.7 cm diameter) is a two-wheeled mobile robot representing an ideal analogy for a wheelchair. The Khepera provides eight infrared sensors that enable obstacle detection and avoidance whenever the robot approaches a wall. The output module controls the velocity of each individual wheel depending on the classifier output at each instant.

Fig. 4. Biofeedback modules used for the user’s training: colored arrows (top) and moving a cursor to the desired area (bottom)

Fig. 5. Training sessions where the user learns to modulate mu-rhythms with the help of visual feedback (left) and direct command of a mobile robot (right)

3.4 Experimental results

This section describes the experiments carried out in order to verify the effectiveness of the proposed solutions. As a first example, we describe results from an experiment for the discrimination of two mental states: the task is to imagine either right-hand or left-hand movement depending on a visually presented cue stimulus in the form of an arrow. Second, preliminary results of brain-state estimation using EEG signals recorded during a self-paced left/right hand movement task are presented.

An initial effort was expended in training the user to operate the system using the feedback tools described above. The performance depends on how well the user is able to achieve a state of concentration and learns to properly modulate the mu rhythms. At the same time, special attention was devoted to obtaining the baselines for each hemisphere, including attempts to select which baseline offered the best results. The notion of a good baseline is associated with the absence of involuntary desynchronization and artifacts. Finally, a design procedure is proposed based on the possibility of combining EEG and functional magnetic resonance imaging (fMRI) for physiological model tuning.

3.4.1 Synchronous operation

In the synchronous operation mode the subject fixates on a computer monitor, while cue stimuli in the form of an arrow indicate the type of imagination to perform. The experimental system was set up such that the acquisition system is continuously active, while a separate cuing mechanism guides the subject into a desired state. Depending on the direction of the arrow, the subject is instructed to imagine a movement of the left or the right hand during a fixed time window of 30 s. In this study, the initial training period comprised two sessions of about 10 minutes each.

Figure 6-(a) shows the ERD/ERS curves obtained by first calculating the power spectra in intervals of 500 ms and, thereafter, estimating the alpha-band power. The time period of the cue presentation is also indicated. In this experiment the user is asked to perform a left-hand movement after a relaxation phase. In a next step, the user performs the mental imagination of the same movement. From these charts, it can be observed that the mu waves are almost constantly present when the subject is relaxed and that they are suppressed when the subject performs a motor task (imagined or real) exciting the contralateral side. In other words, mu waves almost disappear over the left brain hemisphere when the right hand is moved and vice versa. An interesting point is the similar suppression of mu rhythms verified when the subject imagines the movement and when they actually perform it. These experiments give evidence that adaptation of the subject to the machine is still necessary, so that the suppression of mu rhythms becomes more pronounced over the contralateral hemisphere when subjects imagine the movement, as desired.
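
For reference, ERD/ERS curves of this kind are commonly obtained by expressing band power in short windows relative to a rest (reference) interval [Pfurtscheller and Lopes da Silva, 1999]; the sketch below assumes such a rest segment is available and uses 500 ms windows as in the text, with an IIR bandpass filter chosen purely for illustration.

```python
# Sketch: ERD/ERS curve as relative band-power change, computed in 500 ms
# windows (the reference interval is assumed to be a rest recording).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256
WIN = FS // 2                      # 500 ms windows

def band_power_curve(x, band=(8.0, 13.0)):
    b, a = butter(4, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    squared = filtered ** 2
    n = len(squared) // WIN
    return squared[:n * WIN].reshape(n, WIN).mean(axis=1)    # power per window

def erd_curve(task_signal, rest_signal, band=(8.0, 13.0)):
    p_task = band_power_curve(task_signal, band)
    p_ref = band_power_curve(rest_signal, band).mean()       # reference power
    return 100.0 * (p_task - p_ref) / p_ref                  # ERD% (<0) / ERS% (>0)

# Example with synthetic data: 10 s of task EEG vs 5 s of rest EEG.
erd = erd_curve(np.random.randn(10 * FS), np.random.randn(5 * FS))
print(erd.round(1))
```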

In the second experimental task the user imagines either right- or left-hand movements depending on a series of visual cue stimuli presented randomly and interleaved with relaxation periods. The goal was to verify the separability of the EEG patterns, i.e., whether the ERD remains mostly limited to the contralateral hemisphere. From the results in Figure 6-(b), it can be seen that the ERD develops a bilateral distribution, which may be explained by the long period of time spent on the imagery task. In order to obtain a deeper insight into the dynamics of mu rhythms, issues like intensive training periods with feedback and the adaptation of pattern recognition methods are currently being addressed.


Fig. 6. ERD/ERS curves during motor imagery in the alpha band: (a) comparison between movement execution and imagination for a left hand movement, and (b) response to randomized cue stimuli in the form of an arrow

3.4.2 Self-paced control

This subsection reports our first steps towards a BCI prototype that supports self-paced operation. Self-paced operation implies that the system is always available for control and, at the same time, that the system is able to recognize periods in which the user’s brain is considered to be in a “no control” state, avoiding false responses during those periods. At this stage, the BCI system is configured for a specific user, which can be easily achieved due to the fast-prototyping features of the system. For this analysis, the EEG is first bandpass filtered in the alpha band and then the band power is estimated.

During a session, the first step is to acquire the subject-specific baselines for the desired mental imagery task: open/close hand. Using the contralateral property of cortical activation, the baseline that shares all the underlying brain activity common to a motor imagery task is that of its opposite task. For example, if we are analyzing the presence of a right motor imagery task, the EEG signal is compared with the baseline previously recorded for the left motor imagery task. In doing so, all the differences in signal amplitude related to the modulation of the mu rhythms are detected. After the initial configuration steps are concluded, the subject proceeds to accomplish the following experimental protocol:

1. Rest: the subject, seated in a comfortable chair, is asked to relax as much as possible;

2. Self-generated movements: the subject is asked to open/close each hand while grasping a small ball;

3. Imagination: the subject is instructed to control the Khepera robot’s motion by motor imagery.

Using this protocol, several experiments were performed online with the subject controlling the Khepera robot. The user was asked to move the robot into one of two possible areas while the robot operated in a free environment (Fig. 7). The degree of mu-rhythm suppression occurring during the imagery of movements is expressed as a percentage of the peak power value at rest. Examples of the ERD achieved for both the “right” and “left” areas in the contralateral spatially filtered electrodes (C3 and C4) are shown in Fig. 7. The user achieved an average classification rate of about 70% for each direction (right/left) after several trials had been completed. In the course of each session, the classification rate tended to settle at a stable value, suggesting that extensive training is now essential to improve these results.

Fig. 7. Brain-actuated control of a Khepera mobile robot (left) and ERD curves recorded over right (right/top) and left (right/bottom) sensorimotor cortex during motor imagery

3.4.3 Analysis of BOLD-fMRI data

Most research groups devote a great amount of time and effort to the classification of EEG activity and the underlying signal processing algorithms [Bashashati et al., 2007]. Besides the importance of the signal processing methods, the computational effort could be greatly reduced through better knowledge of the brain activity of a specific subject when performing motor imagery tasks. In the field of BCI research, functional Magnetic Resonance Imaging (fMRI) has been accepted as a crucial tool for physiological model tuning [Zarahn, 2001]. With this in mind, the preliminary work presented aims to understand the lowest level of abstraction of a BCI system. More concretely, two different optimization problems have been considered: (1) where to place the electrodes; and (2) what type of mental activity produces the best results. Since both questions are user-dependent, the optimum conditions are determined for a particular subject.

The analysis of the BOLD-fMRI data during motor imagery (left- and right-hand movement) for two subjects is illustrated in Fig. 8. The analysis of the fMRI data shows that both users (right-handed) evidenced a contralateral activation of the left-hemisphere sensorimotor cortex. This region has been established to be directly connected with the sensory perception of the surrounding environment [Jeannerod, 1994; Lacourse et al., 2005], essential to the motor imagery task. Most significantly, the results demonstrate a considerable difference in spatial activation and excitability between the two subjects.

A practical explanation for the wider activation area of subject X could lie in the fact that this subject was experienced in performing imagery tasks, while subject Y was instructed to perform the motor imagery without any training. Thus, producing the required brain states was not automatic for the inexperienced user, who tended to focus more on generating a particular output than on the required brain state. The fMRI data allowed us to define a real-time model specific to each subject, defining the optimal spatial locations of the EEG electrodes. The correct placement of the EEG electrodes allows acquiring mainly the rhythmic activity directly related to the imagery task performed, improving the signal-to-noise ratio of the signal and thus reducing the complexity of the signal processing methods.

Fig. 8. Analysis of BOLD-fMRI data during right-hand motor imagery: (a) subject X and (b) subject Y

4 EEG-based Biometry

Like the BCIs for motor control, EEG-based biometry provides an alternative communication channel between the human brain and the external world. It has been shown in previous studies [Paranjape et al., 2001; Poulos et al., 1999] that the brain-wave pattern of every individual is unique and, therefore, that the EEG can be used for building personal identification and authentication systems. The identified person is exposed to a stimulus (usually visual or auditory) for a certain time, and the EEG signals coming from a number of electrodes spatially distributed over the subject’s scalp are collected and input to the biometry system. Identification attempts to establish the identity of a given person out of a closed list of persons (one from many), while authentication aims to confirm or deny the identity claimed by a person (one-to-one matching). Independently of the particular application (motor control or biometry) and the frequency band considered (e.g., mu rhythms or gamma band), the raw EEG signals are too noisy and variable to be analyzed directly. Therefore, the EEG signals need to go through a sequence of processing steps: i) data acquisition, storage and format transformation; ii) filtering (removal of interferences from other unwanted sources, such as physiological artifacts or baseline electrical trends); iii) feature extraction and classification; iv) feedback generation and visualization.

The identification/authentication systems built so far differ basically in their filtering and classification components [Palaniappan and Mandic, 2007; Marcel and Millán, 2007]. However, our initial study has shown that the discrimination process depends only slightly on the specific filter and classifier. The main issues a complete EEG-based biometry system needs to address are briefly discussed below.

Biometry as a modeling problem. The EEG recordings are unique for each person, and the problem of EEG-based biometry can be interpreted as a modeling problem, i.e., designing a feature model that belongs to a certain person and a personal classifier associated with its owner [Oliveira et al., 2009]. The trained identification model has to identify the subject from a database of personal profiles, and the authentication system has to confirm or deny that the subject being evaluated is who he claims to be.

Stimulus. The type and the duration of the evoked potentials (visual or auditory) that would enhance the identification/authentication capacity need to be studied. Our preliminary tests have demonstrated that the type of stimulus (for example a mental task, a motor task, an image presentation or a combination of them) is crucial for reliable extraction of personal characteristics. It seems that some mental tasks are more appropriate than others. At the same time, experiments with combinations of stimuli appear to be more advantageous for the personal uniqueness of the EEG patterns.

Data mapping. For the automatic identification of each subject, the EEG signals have to be represented in a feature space and, after some possible feature reduction steps, they can be mapped by a classifier onto the set of class labels. Between the initial representation in the feature space and the final mapping onto the set of class labels, the representation may be changed several times: simplified feature spaces (feature selection), normalization of features (e.g., by scaling), linear or nonlinear mappings (feature transformation), classification by a possible set of classifiers, combining classifiers and the final labeling. In each of these steps the data is transformed by some mapping (transformations operating on datasets). Data mining and machine learning theory offer a great number of alternatives for data mapping and knowledge extraction that still need to be explored and comparatively studied [Teixeira et al., 2008; Stadlthanner et al., 2008; Teixeira et al., 2007].
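
As a purely illustrative sketch of such a chain of mappings (scaling, feature transformation and classification), the snippet below uses a scikit-learn pipeline; the particular mappings chosen and the synthetic data are assumptions, not the system studied here.

```python
# Sketch: a chain of data mappings (scaling -> feature transformation ->
# classification); the particular mappings chosen here are illustrative.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))          # 200 trials x 20 channel features
y = rng.integers(0, 16, size=200)       # 16 subject labels

mapping = Pipeline([
    ("scale", StandardScaler()),             # feature normalization
    ("transform", PCA(n_components=0.95)),   # linear feature transformation
    ("classify", SVC(kernel="rbf")),         # final mapping onto class labels
])
print(cross_val_score(mapping, X, y, cv=5).mean())
```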

Post-processing. Our ongoing research suggests that post-processing techniques on the classifier output, such as instant error correction and averaging, would improve the identification/authentication capacity.

Real-time biometry. The evoked potential duration (EPD) needs to be optimized in order to implement the paradigm in an online scheme. Our current study has shown that both too short and too long EPDs worsen the performance of the biometric system [Ferreira, 2009]. The compromise can be learned by cross-validation during the classifier training.

4.1 Experimental setup

VEP signals were extracted from sixteen female subjects (20-28 years old). All participants had normal or corrected to normal vision and no history of neurological or psychiatric illness. Neutral, fearful and disgusting faces of 16 different individuals (8 males and 8 females) were selected, giving a total of 48 different facial stimuli. Images of 16 different house fronts to be superimposed on each of the faces were selected from various internet sources. This resulted in a total of 384 grey-scaled composite images (9.5 cm wide by 14 cm high) of transparently superimposed face and house with equivalent discriminability.

Participants were seated in a dimly lit room, where a computer screen was placed at a viewing distance of approximately 80 cm, coupled to a PC equipped with software for the EEG recording. The images were divided into two experimental blocks. In the first, the participants were required to attend to the houses (ignoring the faces) and in the other they were required to attend to the faces (ignoring the houses). The participant’s task was to determine, on each trial, whether the current house or face (depending on the experimental block) was the same as the one presented on the previous trial. Stimuli were presented in sequence, for 300 ms each, and were preceded by a fixation cross displayed for 500 ms. The inter-trial interval was 2000 ms.

EEG signals were recorded from 20 electrodes (Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, P7, P8, Fz, Cz, Pz, Oz) according to the 10-20 international system. EOG signals were also recorded from electrodes placed just above the left supraorbital ridge (vertical EOG) and on the left outer canthus (horizontal EOG). VEPs were calculated off-line by averaging segments of 400 points of digitized EEG (12-bit A/D converter, sampling rate 250 Hz). These segments covered 1600 ms, comprising a pre-stimulus interval of 148 ms (37 samples) and a post-stimulus onset interval of 1452 ms. Before processing, the EEG was visually inspected and segments with excessive EOG artifacts were eliminated. Only trials with correct responses were included in the data set. The experimental setup was designed by Santos et al. (2008) for their study on subject attention and perception using VEP signals.
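
A sketch of this epoching and averaging step, under assumed array shapes and onset positions (not the original off-line analysis code), could look as follows.

```python
# Sketch: cutting 400-sample VEP epochs (148 ms pre- / 1452 ms post-stimulus
# at 250 Hz) around stimulus onsets and averaging them; shapes are assumed.
import numpy as np

FS = 250
PRE, TOTAL = 37, 400                     # 37 pre-stimulus samples, 400 in total

def epochs(eeg, onsets):
    """eeg: channels x samples; onsets: sample indices of stimulus onsets."""
    segs = [eeg[:, o - PRE:o - PRE + TOTAL] for o in onsets
            if o - PRE >= 0 and o - PRE + TOTAL <= eeg.shape[1]]
    return np.stack(segs)                # trials x channels x 400

def average_vep(eeg, onsets):
    return epochs(eeg, onsets).mean(axis=0)   # channels x 400

# Example: 20 channels, 60 s of recording, onsets roughly every 2.8 s.
eeg = np.random.randn(20, 60 * FS)
onsets = np.arange(2 * FS, 58 * FS, int(2.8 * FS))
print(average_vep(eeg, onsets).shape)    # (20, 400)
```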

4.2 Feature extraction

The gamma-band spectral power (from 30 to 50 Hz) of the VEP signals was computed by Welch’s periodogram method. The temporal segments over which one value of the spectral power matrix is computed correspond to one trial (around 1600 ms), i.e., the samples collected during one image presentation. The normalized gamma-band spectral power for each channel was then computed as the ratio between the spectral power of that channel and the total gamma-band spectral power of all channels. The levels of perception and memory access differ among individuals, and this is reflected in significant differences between the gamma-band spectral power ratios of the subjects, which is the key to VEP-based identification of individuals.
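
The feature computation can be sketched as follows; the Welch parameters (e.g., the segment length) are assumptions, while the 30-50 Hz band and the per-channel normalization follow the text.

```python
# Sketch: per-trial gamma-band (30-50 Hz) power per channel via Welch's
# method, normalized by the total gamma power over all channels.
import numpy as np
from scipy.signal import welch

FS = 250

def normalized_gamma_power(trial, band=(30.0, 50.0)):
    """trial: channels x samples (one image presentation, ~400 samples)."""
    freqs, psd = welch(trial, fs=FS, nperseg=128, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    power = psd[:, mask].sum(axis=1)          # gamma power per channel
    return power / power.sum()                # ratio to the total over channels

features = normalized_gamma_power(np.random.randn(20, 400))
print(features.shape, features.sum())         # (20,), 1.0
```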

4.3 PCA for noise reduction

A possible way to increase the signal-to-noise ratio is to accompany the feature extraction step with principal component analysis (PCA). For the case considered, the PCA was designed to first extract only the principal components of the normalized gamma-band spectral power (the feature space) that accumulate 95% of the signal energy (this is equivalent to a feature space reduction). This is followed by a step that reconstructs the feature space with the same dimensionality. We have studied the performance of various classifiers with and without PCA processing. The results, summarized in Table 1 and Table 2, suggest that while the PCA captures the main EEG patterns, the individual specificity is lost and the classification accuracy worsens. A possible interpretation is that the energy in the 30-50 Hz band of the original data set is already attenuated due to an embedded filtering process of the EEG acquisition apparatus. The PCA processing additionally reduces the VEP power spectral density and, therefore, all classifiers studied exhibit worse generalization performance (Table 1).
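
A minimal sketch of this PCA step (projection onto the components holding 95% of the variance/energy, followed by reconstruction in the original dimensionality) is shown below with synthetic features.

```python
# Sketch: project the feature matrix onto the principal components holding
# 95% of the variance and reconstruct it back in the original dimensionality.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
features = rng.normal(size=(300, 20))           # trials x normalized channel powers

pca = PCA(n_components=0.95)                    # keep 95% of the variance
reduced = pca.fit_transform(features)           # dimensionality-reduced features
reconstructed = pca.inverse_transform(reduced)  # back to the original feature space

print(reduced.shape, reconstructed.shape)
```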

Table 1. Average classification error with PCA feature selection

Classification with PCA feature selection; values are the average classification error (%)

Method      Classifier    No PP    1st PP step  2nd PP step  3rd PP step  4th PP step  5th PP step
MSVM1_OAO   Linear 1      65.94    63.10        60.01        59.71        59.79        59.61
            Linear 2      56.42    51.58        48.05        47.12        46.25        45.57
            Nonlinear 1   44.53    37.26        31.24        27.95        26.19        24.07
            Nonlinear 2   36.43    28.00        22.08        19.01        17.41        14.49
MSVM2_OAA   Linear 1      58.65    54.24        50.64        49.88        49.34        48.60
            Linear 2      59.79    56.55        54.42        53.42        52.36        51.24
            Nonlinear 1   43.78    36.76        31.12        28.03        24.93        23.33
            Nonlinear 2   35.99    27.60        21.24        18.67        16.44        15.17

Table 2. Average classification error without PCA feature selection

Classification without PCA feature selection; values are the average classification error (%)

Method      Classifier    No PP    1st PP step  2nd PP step  3rd PP step  4th PP step  5th PP step
MSVM1_OAO   Linear 1      38.21    35.36        33.43        31.89        31.63        30.37
            Linear 2      29.98    24.88        23.19        23.55        22.77        21.54
            Nonlinear 1   26.42    20.31        17.42        16.87        15.97        14.95
            Nonlinear 2   15.67    10.16        8.32         6.95         5.54         5.10
MSVM2_OAA   Linear 1      30.57    25.02        23.56        22.58        21.27        20.26
            Linear 2      26.84    21.17        17.87        16.45        14.52        13.71
            Nonlinear 1   26.99    21.54        18.32        16.70        15.16        14.49
            Nonlinear 2   17.43    12.05        9.78         8.49         6.96         6.62

4.4 Classification of individuals

Two multi-class versions of the original binary (two-class) Support Vector Machine (SVM) method were used for classification of the VEP spectral power ratios [Tan, 2006]: i) one-against-one SVM (MSVM1_OAO) and ii) one-against-all SVM (MSVM2_OAA). Each method creates a set of binary classifiers that are afterwards combined to output the final labeling. Linear and nonlinear functions are comparatively tested as the SVM feature space mapping functions, with a Radial Basis Function (RBF) kernel selected for the nonlinear SVM version. Two classifier training scenarios were considered within the linear and nonlinear versions of the two methods, as sketched after the two scenarios below:

• Scenario 1 (linear 1, nonlinear 1): the classifier is trained with the data set coming from one experimental block (the subject attends to the faces, ignoring the houses) and tested with data from the other experimental block (the subject attends to the houses, ignoring the faces).

• Scenario 2 (linear 2, nonlinear 2): the classifier is trained with data coming from both experimental blocks and tested with unseen data from the same blocks.
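
The sketch below illustrates the two multi-class decompositions with linear and RBF kernels using scikit-learn wrappers and synthetic features; it mirrors the structure of the experiments rather than reproducing them.

```python
# Sketch: the two multi-class SVM decompositions (one-against-one and
# one-against-all) with linear and RBF kernels, on synthetic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 20))            # normalized gamma-power features
y = rng.integers(0, 13, size=400)         # 13 subject labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in {
    "MSVM1_OAO linear": OneVsOneClassifier(SVC(kernel="linear")),
    "MSVM1_OAO RBF":    OneVsOneClassifier(SVC(kernel="rbf")),
    "MSVM2_OAA linear": OneVsRestClassifier(SVC(kernel="linear")),
    "MSVM2_OAA RBF":    OneVsRestClassifier(SVC(kernel="rbf")),
}.items():
    error = 100 * (1 - clf.fit(X_tr, y_tr).score(X_te, y_te))
    print(f"{name}: classification error {error:.1f}%")
```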

4.5 Post processing (PP) steps

Both classifiers perform a static (memoryless) classification that does not explicitly consider the temporal nature of the VEP signals. Time-aware classifiers, such as Recurrent Neural Networks (NNs), Time-Lag NNs or Reservoir Computing, have the disadvantage of requiring complex training procedures that do not always converge. In order to keep the complexity of the biometric system low, we propose here an empirical way to introduce memory into the classifiers. During a post-processing (PP) step, a moving window of n past classifier outputs (personal labels) is isolated and, following a predefined strategy, the labels can be corrected. For example, during the 1st PP step a window of the last three labels is defined (n=3) and, in case the first and the last labels are the same but different from the central one, the central label is corrected to be equal to the others. The window dimension of the 2nd PP step is increased by one (n=4): if the first and the last elements have the same label, but the two central elements differ from each other and from the lateral elements, they are corrected. It was observed that, by increasing the dimension of the moving window (3rd PP step with n=5; 4th PP step with n=6; 5th PP step with n=7), the overall performance of both classifiers improved. The strategy of each subsequent step is to increase the number of central elements and to correct them in case they differ from the identical lateral elements of the window (which moves one sample at a time). After the 5th PP step the performance started to decrease; therefore, five PP steps were implemented in our EEG-based biometry system (see Table 1 and Table 2 above).
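
A minimal sketch of this correction rule is given below; it operates offline on a list of labels, whereas the system applies it to the stream of classifier outputs, and the example labels are hypothetical.

```python
# Sketch of the label post-processing: for a window of n past labels, if the
# two end labels agree but the central label(s) differ, the central label(s)
# are overwritten (n = 3 corresponds to the 1st PP step described above).
def pp_step(labels, n=3):
    corrected = list(labels)
    for i in range(len(corrected) - n + 1):
        window = corrected[i:i + n]
        if window[0] == window[-1] and any(c != window[0] for c in window[1:-1]):
            for j in range(i + 1, i + n - 1):
                corrected[j] = window[0]          # correct the central element(s)
    return corrected

labels = ["s3", "s3", "s7", "s3", "s3", "s1", "s1"]
print(pp_step(labels, n=3))   # -> ['s3', 's3', 's3', 's3', 's3', 's1', 's1']
```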

4.6 Evoked potential duration

The effect of the Evoked Potential Duration (EPD) was studied in particular since it defines the viability of the biometry system. If the person has to be exposed to a stimulus for too long in order to be identified, the system becomes impractical and difficult to realize in real time. Therefore, the training time series length needs to be reasonably short. The results of this study are summarized in Fig. 9 and Fig. 10, where the average classification error (ACE) is depicted as a function of the training segment length (number of trials). Note that for both classifiers (MSVM1_OAO and MSVM2_OAA) there is a number of trials for which the ACE is minimized, and longer exposure does not yield better discrimination of persons. These results are averaged over the total number of identified subjects (13 persons); however, the results for each individual exhibit a similar tendency. Though the conclusions go beyond what can be analytically proved, the intuition is that excessively long exposure to visual stimuli leads to accommodation and tiredness, so the personal specificity encoded in the VEPs vanishes and the classifier error increases.

Fig. 9. MSVM1_OAO: average classification error (%) as a function of the training segment length (number of trials), without PP (bold line) and after the 5th PP step (dashed line)

Fig. 10. MSVM2_OAA: average classification error (%) as a function of the training segment length (number of trials), without PP (bold line) and after the 5th PP step (dashed line)

5 Conclusions and Future Research

This paper described the design issues associated with the development of EEG-based brain-computer interfaces for control and biometry. The results obtained so far suggest the following major comments. On the one hand, the brain-controlled interface combines the multipurpose data processing and rapid prototyping functionalities of MATLAB with the real-time and modular capabilities of Simulink. The current prototype provides a programming environment that supports rapid and flexible adaptation to a specific subject. In spite of the first promising results, substantial training is required to improve its performance, and further studies are necessary to highlight the potential for near-term applications. On the other hand, the feasibility of EEG-based person identification has been successfully tested by the results of this paper. However, the classification accuracy of EEG biometry cannot currently compete with conventional biometrics (e.g., fingerprint and iris). In addition, the database we used is still small and no definitive conclusions can be drawn for the task of person identification. Nevertheless, EEG biometry could provide an excellent source of supplementary information in a multi-expert system, which may be appropriate in certain applications (e.g., patient verification in medical data monitoring).

Ongoing developments focus on the validation of the BCI technology in a large population of subjects with more, and more extended, online experiments. Besides usability and performance evaluation, there are a number of issues to be investigated in the near future. For example, defining a real-time model paradigm in an EEG-fMRI environment provides, in theory, new perspectives for achieving innovative designs. In another context, a better understanding of the mechanisms that allow a subject to form and modify brainwaves is a key element of future developments. It is tempting to transfer the burden of adaptation and training to the human subject by means of feedback learning. Despite the apparent advantages of this scenario, a hybrid solution, where part of the training is transferred from the subject to the learning algorithm, is likely to be the key to extending the impact of BCIs. With respect to the EEG-based biometric system, very little research has been published on using brain signals as a biometric tool to identify individuals. However, we expect several potential applications to emerge in the near future. Controlling classified access to restricted areas in security systems, identifying illnesses or health disorders in medicine, and gaining a better understanding of cognitive brain processes in neuroscience are among the most appealing.

Acknowledgments

The present work was partly supported by FCT and FEDER, grants GRID/GRI/81833/2006, PTDC/SAU-BEB/72305/2006 and GRID/GRI/81819/2006.

References

Arroyo, S., R.P. Lesser, B. Gordon, S. Uematsu, D. Jackson and R. Webber, “Functional significance of the mu rhythm of human cortex: an electrophysiologic study with subdural electrodes”, Electroencephalography and Clinical Neurophysiology, 87: 76-87, 1993.

Bashashati, A., M. Fatourechi, R.K. Ward and G.E. Birch, “A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals”, J. Neural Engineering, 4: R32-R57, 2007.

Bensch, M., A. Karim, J. Mellinger, T. Hinterberger, M. Tangermann, M. Bogdan, W. Rosenstiel and N. Birbaumer, “Nessi: an EEG-controlled web browser for severely paralyzed patients”, Computational Intelligence and Neuroscience, (2): 1-10, 2007.

Bento, V., J.P. Cunha and F. Silva, “Towards a Human-Robot Interface Based on the Electrical Activity of the Brain”, Proc. IEEE-RAS International Conference on Humanoid Robots, pp. 85-90, Daejeon, South Korea, 2008.

Berger, T.W., J.K. Chapin, G.A. Gerhardt, D.J. McFarland, J.C. Príncipe, W.V. Soussou, D.M. Taylor and P.A. Tresco, Brain-Computer Interfaces: An International Assessment of Research and Development Trends, Springer, 2008.

Birbaumer, N. and L. Cohen, “Brain-computer interfaces (BCI): communication and restoration of movement in paralysis”, J. Physiology, January 2007.

Fabiani, G.E., D.J. McFarland, J.R. Wolpaw and G. Pfurtscheller, “Conversion of EEG activity into cursor movement by a brain-computer interface (BCI)”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 12: 331-338, 2004.

Fatourechi, M., G.E. Birch and R.K. Ward, “A self-paced brain interface system that uses movement related potentials and changes in the power of brain rhythms”, J. Computational Neuroscience, 23: 21-37, 2007.

Ferreira, A.J.C., EEG-based personal authentication, Master thesis, University of Aveiro, 2009 (in Portuguese).

Guger, C., A. Schlögl, C. Neuper, D. Walterspacher, T. Strein and G. Pfurtscheller, “Rapid prototyping of an EEG-based brain-computer interface (BCI)”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 9: 49-58, 2002.

Hoshi, E. and J. Tanji, “Differential involvement of neurons in the dorsal and ventral premotor cortex during processing of visual signals for action planning”, J. Neurophysiology, 95: 3596-3616, 2006.

Jeannerod, M., “The Representing Brain: Neural Correlates of Motor Intention and Imagery”, Behavioral and Brain Sciences, 17: 187-245, 1994.

Kostov, A. and M. Polak, “Parallel man-machine training in development of EEG-based cursor control”, IEEE Transactions on Rehabilitation Engineering, 8: 203-205, 2000.

Kuhlman, W.N., “Functional topography of the human mu rhythm”, Electroencephalography and Clinical Neurophysiology, 44: 83-92, 1978.

Lacourse, M.G., E.L. Orr, S.C. Cramer and M.J. Cohen, “Brain activation during execution and motor imagery of novel and skilled sequential hand movements”, NeuroImage, 27(3): 505-519, 2005.

Leeb, R., F. Lee, C. Keinrath, R. Scherer, H. Bischof and G. Pfurtscheller, “Brain computer communication: motivation, aim and impact of exploring a virtual apartment”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15(4): 473-482, 2007.

Lotze, M., P. Montoya, M. Erb, E. Hulsmann, H. Flor, U. Klose, N. Birbaumer and W. Grodd, “Activation of cortical and cerebellar motor areas during executed and imagined hand movements: an fMRI study”, J. Cognitive Neuroscience, 11: 491-501, 1999.

Marcel, S. and J. del R. Millán, “Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4): 743-752, 2007.

McFarland, D., “Spatial filter selection for EEG-based communication”, Electroencephalography and Clinical Neurophysiology, 103: 386-394, 1997.

Millán, J. del R., F. Renkens, J. Mouriño and W. Gerstner, “Brain-Actuated Interaction”, Artificial Intelligence, 159: 241-259, 2004.

Millán, J. del R., “Brain-controlled robots”, IEEE Intelligent Systems, 2008.

Müller-Putz, G.R. and G. Pfurtscheller, “Control of an electrical prosthesis with an SSVEP-based BCI”, IEEE Transactions on Biomedical Engineering, 55(1): 361-364, 2008.

Muthukumaraswamy, S.D., B.W. Johnson and N.A. McNair, “Mu rhythm modulation during observation of an object-directed grasp”, Cognitive Brain Research, 19: 195-201, 2004.

Niedermeyer, E. and F. Lopes da Silva, Electroencephalography, Lippincott Williams and Wilkins, 1999.

Nijboer, F., E. Sellers, J. Mellinger, M. Jordan, T. Matuz, A. Furdea, S. Halder, U. Mochty, D. Krusienski and T. Vaughan, “A P300-based brain computer interface for people with amyotrophic lateral sclerosis”, Clinical Neurophysiology, 119(8): 1909-1916, 2008.

Oliveira, C., P. Georgieva, F. Rocha and S. Feyo de Azevedo, “Artificial Neural Networks for Modeling in Reaction Process Systems”, Neural Computing & Applications, 18: 15-24, 2009.

Palaniappan, R. and D.P. Mandic, “Biometrics from Brain Electrical Activity: A Machine Learning Approach”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4), 2007.

Paranjape, R.B., J. Mahovsky, L. Benedicenti and Z. Koles, “The Electroencephalogram as a Biometric”, Proc. CCECE, vol. 2, pp. 1363-1366, 2001.

Pfurtscheller, G. and F.H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles”, Clinical Neurophysiology, 110: 1842-1857, 1999.

Pfurtscheller, G., G.R. Müller-Putz, A. Schlögl, B. Graimann, R. Scherer, R. Leeb, C. Brunner, C. Keinrath, F. Lee, G. Townsend, C. Vidaurre and C. Neuper, “15 years of BCI research at Graz University of Technology: current projects”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14: 205-210, 2006.

Pfurtscheller, G., G.R. Müller-Putz, R. Scherer and C. Neuper, “Rehabilitation with brain-computer interface systems”, Computer, 41(10): 58-65, 2008.

Pineda, J.A., B.Z. Allison and A. Vankov, “The effects of self-movement, observation, and imagination on mu rhythms and readiness potentials (RPs): toward a brain-computer interface (BCI)”, IEEE Transactions on Rehabilitation Engineering, 8: 219-222, 2000.

Pineda, J.A., “The functional significance of mu rhythms: translating ‘seeing’ and ‘hearing’ into ‘doing’”, Brain Research Reviews, 50: 57-68, 2005.

Porro, C., M. Francescato, V. Cettolo, M. Diamond, P. Baraldi, C. Zuiani, M. Bazzocchi and P. Prampero, “Primary motor and sensory cortex activation during motor performance and motor imagery: a functional magnetic resonance imaging study”, J. Neuroscience, 16: 7688-7698, 1996.

Poulos, M., M. Rangoussi, V. Chrissikopoulos and A. Evangelou, “Person identification based on parametric processing of the EEG”, Proc. IEEE ICECS, vol. 1, pp. 283-286, 1999.

Santos, I.M., J. Iglesias, E.I. Olivares and A.W. Young, “Differential effects of object-based attention on evoked potentials to fearful and disgusted faces”, Neuropsychologia, 46(5): 1468-1479, 2008.

Schalk, G., D.J. McFarland, T. Hinterberger, N. Birbaumer and J.R. Wolpaw, “BCI2000: a general-purpose brain-computer interface (BCI) system”, IEEE Transactions on Biomedical Engineering, 51: 1034-1043, 2004.

Stadlthanner, K., F.J. Theis, E.W. Lang, A.M. Tomé, C.G. Puntonet and J.M. Górriz, “Hybridizing sparse component analysis with genetic algorithms for microarray analysis”, Neurocomputing, 71: 2356-2376, 2008.

Tan, P.-N., M. Steinbach and V. Kumar, Introduction to Data Mining, 2006.

Teixeira, A.R., A.M. Tomé, K. Stadlthanner and E.W. Lang, “KPCA Denoising and the Pre-image Problem Revisited”, Digital Signal Processing, 18: 568-590, 2008.

Teixeira, A.R., A.M. Tomé, E.W. Lang, P. Gruber and A.M. da Silva, “Automatic Removal of High-Amplitude Artifacts from Single-Channel Electroencephalograms”, Computer Methods and Programs in Biomedicine, 83: 125-138, 2007.

Wolpaw, J.R., D.J. McFarland and T.M. Vaughan, “Brain-computer interface research at the Wadsworth Center”, IEEE Transactions on Rehabilitation Engineering, 8: 222-226, 2000.

Wolpaw, J.R., D.J. McFarland, T.M. Vaughan and G. Schalk, “The Wadsworth Center brain-computer interface (BCI) research and development program”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11: 1-4, 2003.

Zarahn, E., “Spatial Localization and Resolution of BOLD-fMRI”, Current Opinion in Neurobiology, 11(2): 209-212, 2001.