

1053-5888/03/$17.00 © 2003 IEEE

Human-computer interface (HCI) has been a growing field of research and development in recent years [1]-[4]. Most of the effort has been dedicated to the design of user-friendly and ergonomic systems by means of innovative interfaces such as voice, vision, and other input/output devices in virtual reality [5]-[15]. Direct brain-computer interface (BCI) adds a new dimension to HCI [16]-[23]. Interesting research in this direction has already been initiated, motivated by the hope of creating new communication channels for persons with severe motor disabilities. In this article, we approach the problem of BCI from the viewpoint of interactions in a multimedia-rich environment for the general consumer market. However, this is by no means incompatible with applications for motor-impaired subjects.

There is a general consensus that BCI represents a new frontier in science and technology. One of the challenging aspects is the need for multidisciplinary skills to achieve this goal. The growing field of BCI is in its infancy, and a significant amount of research is still needed to answer many questions and to resolve many complex problems.

This article raises various issues in the design of an efficient BCI system for multimedia applications. The main focus will be on one specific modality, namely electroencephalography (EEG)-based BCI. In doing so, we provide an overview of the most recent progress achieved in this field, with an emphasis on signal processing aspects.

Touradj Ebrahimi, Jean-Marc Vesin, and Gary Garcia

14 IEEE SIGNAL PROCESSING MAGAZINE JANUARY 2003

Human-Machine Interface in Multimedia Communication

Communication—A Definition

One of the main characteristics of humans, as opposed to animals, resides in their extended ability to communicate. The central role of communication in society, culture, and economy increases ceaselessly. Within this context, a first question is "What defines communication?" Several answers can be envisioned. Here, we use a simple one which refers to communication as a process to express and share experiences between humans. Such an experience can be either real or imaginary. Communication took place from the early days of cave men, when they told stories around the fire and drew sketchy figures on the walls of their caves. Modern communication is not different, except in its sophistication and efficiency. An evolution in communication would consist of improving the quality of the expression and that of the shared experience without inherently modifying its underlying nature. Examples of such evolution are the moves from black-and-white television to color, to high definition, to stereo, to three-dimensional (3-D) holographic images. Following this logic, a revolution in communication would refer to a change at the fundamental level, either due to new modalities or to the addition of new dimensions. A trivial example is that of the written book and press, which allowed communication to take place in an asynchronous way. Anyone can read and enjoy the writings of his favorite author at any time. The same is true for photography, which can capture a scene and immortalize a moment.

Based on the above, a second question is "What will be the next revolution in communication?" This difficult and often-posed question is not easy to answer. As in the past, the fate of communication will be shaped by many trends. Probably the two most significant will be those in media technologies on one hand and in the interface between humans and machines on the other. Below we briefly describe each trend, but we will focus on the second.

New Media

An essential element in any communication is the content. Content can be seen as the vessel in which we express ourselves. In the history of mankind, we have been successful in finding multiple ways of expression. The obvious audio (music, song) and visual (painting, photography, film) expressions are by no means the only channels available to us. Mechanisms of expression stimulating all our senses, such as haptics (sculpture, clothing), olfaction (perfumes), and taste (cuisine), have always had a strong impact on culture and art. Electronic and digital communication has not been very successful in taking full advantage of content beyond audiovisual modalities as vectors of expression, due to limitations in the technology and science for the acquisition, representation, manipulation, and generation of such content.

In the meantime, efforts have been made to push the limits of expression as far as audiovisual content is concerned. Progress in virtual, augmented, and mixed reality is a good indication that science and technology can contribute to defining new paradigms of expression [64]-[66]. Advances in electronics, software, and communication technologies not only allow for the more efficient production, distribution, and consumption of traditional media but also will extend what is defined as media today. New media include modalities such as 3-D video, photo-realistic 3-D, animated 3-D models, and even scenes built from synthetic and natural objects coming from multiple sources. In the near future, digital media will eventually include totally new sensory information such as that mentioned above, aiming at stimulating all our senses and bringing us new possibilities of expression and experiences.

The Weak Link in Communication

Somewhere in the tumbling evolutions of communication, tools found their place in between communicating humans. Pen and paper are simple examples of such tools. These tools were further improved to become machines, of which computers are an important representative. Because of the existence of machines and computers in the chain of communication between humans, a new problem was identified: how to improve the communication between machines. This was a central focus of science and technology in the 20th century and continues to be so, leading to various disciplines and technologies which are still evolving. The last two decades have witnessed tremendous progress in communications. This progress has succeeded in making communication between machines the strong link in the information chain, somewhat undermining, if not ignoring, the importance of the links between humans and machines. In fact, despite the above-mentioned progress in connecting machines, we still use rather primitive interfaces to communicate with them. The several-fold increase in computer performance and communication channel bandwidth does not seem to be properly reflected in human-machine interfaces, and the good old keyboards, pointing devices, and displays still assure the majority of our interactions with computers. It seems as if Moore's Law has not applied with the same rigor to interfaces between humans and computers. Recently, other more natural communication modalities such as speech recognition and synthesis have paved the way out of this situation. Research and development in these areas indicate that there is hope that human-machine interaction can extend to other modalities such as interaction through smart cameras (vision), haptics (touch), olfaction (smell), and others. Most current efforts aim at adapting the interface to our natural senses or at replacing them. Visual sensors mimic the human eye in machines to allow them to see, or at least to interpret visual cues emanating from a user. Haptic devices allow machines to feel and measure pressure, so as to react in a more natural way to commands and actions. Likewise, they can produce haptic or even olfactory feedback to provide users with a richer experience, exciting senses beyond those of conventional audiovisual information. Is there a way to extend the man-machine interface beyond this? Are there ways to altogether bypass the natural interfaces of a human user, such as his muscles, and yet establish a meaningful communication between man and machine? Many in the research and development community assert that this is within the possibilities of today's and the near future's state of the art.

Examples of Revolutions and Evolutions in Communications
- Storytelling and cave drawing
- Books and written press
- Photography
- Telegraph
- Telephone
- Radio and music recording
- Cinema
- Television and video recording
- Internet
- Mobile communication

All proposed solutions seem invariably to point to the source of our senses and emotions: the human brain.

Brain Activity—A New Communication Modality

Recent progress in technology allows us to probe and monitor physiological processes inside our body for which no natural interfaces exist. In particular, we can measure our blood pressure, heart rate variability, muscular activity, and brain electrical activity in efficient and noninvasive ways. It is natural to assume that such activity can be used as information in new communication channels. In this article, we focus on brain electrical activity and review methods and procedures aiming at detecting and interpreting such signals for the purpose of command and control in a multimedia environment. A variety of noninvasive methods are now available to monitor brain functions. These include EEG, magnetoencephalography (MEG), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI) [23]. PET, fMRI, and MEG are expensive and complex to operate and therefore not practical in most applications. At present, only EEG, which is easily recorded and processed with inexpensive equipment, appears to offer the practical possibility of a noninvasive communication channel. Furthermore, EEG signals are rather well studied, and there is evidence that subjects can control them to some extent in a voluntary manner.

An Overview of BCIs

Background

Although the EEG is an imperfect, distorted indicator of brain activity, it remains nonetheless its direct consequence. It is also based on a much simpler technology and is characterized by much smaller time constants when compared to other noninvasive approaches such as MEG, PET, and fMRI. When it became possible to process digitized EEG signals on a computer, the temptation was great to use the EEG as a direct communication channel from the brain to the real world. Early research programs in that direction were sponsored by defense agencies. One often cites the project headed by Dr. J. Vidal, director of the Brain-Computer Interface Laboratory at UCLA, as the first successful endeavor at building a BCI [24]. Some years later, works such as [25] confirmed that it was indeed possible to determine reasonably well which mental task (out of a specified small set of tasks) was performed by a subject on the basis of his EEG. In this reference, mental task discrimination used a feature compounding asymmetry between the right and left hemispheres in all frequency bands and on all electrodes.

The last ten years have witnessed an explosion in the area of BCI research. A cornerstone was the first workshop on BCI technology, which took place in 1999 in Albany, New York, where 22 research groups presented their work [21]. Among them were the Graz group [18], the Neil Squire Foundation [19], the Wadsworth Center [17], the Tuebingen group [22], and the Beckman Institute [26]. A formal definition of the term BCI has been set forward [23]: "A brain-computer interface is a communication system that does not depend on the brain's normal output pathways of peripheral nerves and muscles." The framework is now clearly defined, and the critical mass is reached. An excellent review of all the efforts to make BCI technology a mature field of research can be found in [23].

The main motivation today is to develop replacement communication and control means for severely disabled people, especially those who have lost all voluntary muscle control (locked in). As expressed in [26], the potentially huge human impact of such developments does not result in high levels of funding or commercial interest.

BCIs are classified as dependent or independent. Dependent BCIs rely upon voluntary ocular activity to generate a specific EEG pattern, i.e., a visual evoked potential (VEP) provoked by redirection of the subject's gaze [27], [28], and as such do not fully satisfy the above definition. On the contrary, independent BCIs do not imply recourse to muscular intervention of any kind, and they constitute by far the principal topic of investigation.

General Structure of an Independent BCI

To create a communication channel from the brain to a computer, a BCI must first acquire the signals generated by cortical activity. Most existing BCIs are based on the EEG.


Figure 1 shows an example of a typical EEG acquisition device in BCI applications. Some preprocessing is generally performed because of the high levels of noise and interference usually present. Then features related to specific EEG components must be extracted. Finally, some mapping from the feature vectors to device commands is exerted. In a vast majority of existing BCIs, these commands take the form of the selection of a specific letter in a letter set, or of a cursor displacement on a computer screen (Figure 2).

Clearly, to obtain effective communication, a feature-to-command translation must be established. In almost all BCIs this is done through one or more training sessions composed of trials. During each trial the subject is asked to perform some mental task (typically from a set of two to five mental tasks), and the EEG features corresponding to this task are extracted. After enough trials, a classification algorithm can take care of the feature-to-command translation. In test sessions, the BCI system initiates a succession of trials. During each trial the subject performs the mental task corresponding to the command he wants executed. Feedback is usually provided, i.e., the extracted feature vector is translated into a command that is executed (for instance, a cursor moves to the left). This synchronous mode of operation may be unfit for control applications such as mechanical device control. Reference [29] presents an interesting approach to an asynchronous BCI, in which an additional EEG feature set is used by the subject to initiate a new trial. Note also what is called in [30] the man-machine learning dilemma. Feedback constitutes an important aspect of human learning, and in a BCI the subject uses feedback information to improve the production of an EEG activity so that fewer errors occur in the feature-to-command translation. However, repeated failures during a session may induce frustration in the subject and a modification of his mental state. Additionally, the feedback itself, which is often visual, can perturb the EEG activity.

Signal Processing Aspects

Movement-induced perturbations in the EEG can be avoided by careful monitoring of the experiments in terms of subject discomfort, so electrooculogram (EOG) and electromyographic (EMG) artifacts constitute the major source of trouble in EEG processing. In some works, such as [31] and [32], the contaminated EEG trials are detected and discarded. In [33] a nonlinear filtering approach is proposed for EMG artifact cancellation. The EOG can be estimated by placing a pair of electrodes near one eye. This additional information was first used for direct subtraction from the EEG [34]. A more elaborate technique based on adaptive interference cancellation is proposed in [35]. It seems, however, that global techniques [36], that is, techniques using simultaneously the information from all electrodes, represent the most promising approach. In [37] principal component analysis (PCA) is used to decompose a multielectrode EEG trial into linearly uncorrelated components, and reconstruction is then performed by omitting unwanted components such as the EOG. A now familiar improvement consists of using independent component analysis (ICA) [38], [39] instead of PCA [40]-[42]. A quantitative evaluation [43] confirms the superiority of ICA with respect to other techniques. Another interest of ICA lies in the fact that it can also be used for event-related potential (ERP) extraction [44].
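As a rough illustration of the global, PCA-based approach of [37], the sketch below (NumPy only; the function name, channel layout, and correlation threshold are our assumptions, not taken from the cited work) decomposes a multichannel trial into uncorrelated components, zeroes those most correlated with a reference EOG channel, and reconstructs the EEG.

```python
import numpy as np

def remove_eog_pca(eeg, eog, corr_thresh=0.7):
    """Crude PCA-based ocular-artifact suppression (in the spirit of [37]).

    eeg: (channels, samples) array; eog: (samples,) reference signal.
    Components whose correlation with the EOG exceeds corr_thresh are
    zeroed before reconstruction; the threshold value is an assumption.
    """
    X = eeg - eeg.mean(axis=1, keepdims=True)
    # PCA via SVD: rows of `comps` are uncorrelated component time courses
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = np.diag(s) @ Vt
    keep = np.ones(len(s), dtype=bool)
    for i, c in enumerate(comps):
        r = np.corrcoef(c, eog)[0, 1]
        if abs(r) > corr_thresh:
            keep[i] = False          # drop EOG-dominated component
    return U[:, keep] @ comps[keep]  # reconstruct without those components
```

An ICA-based variant would replace the SVD step with an ICA decomposition, as in [40]-[42].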

Many existing BCIs use features related to EEG events known to be representative of specific brain activities. Those events may be characterized in the time domain, such as the slow cortical potentials (SCPs) or the P300 evoked potential. SCPs are slow voltage changes occurring on time scales from 0.5 to 10 seconds and associated with movement and cortical activation [45], [46]. SCPs have been shown to be controllable by a subject and have been used in the so-called thought translation device [22]. The evoked potential "P300," a positive peak, takes place about 300 ms after an infrequent or significant stimulus embedded in a


Figure 1. Example of an EEG electrode cap.

Figure 2. General block diagram of an EEG-based BCI: EEG acquisition, preprocessing, feature estimation, and feature classification produce an output command to the machine, with feedback returned to the subject.

series of routine stimuli. As such, it can be used to detect the subject's choice by proposing in turn the possible options. This approach has been implemented in [26]. But much emphasis in the BCI literature has been put on frequency events, namely the mu and beta rhythms (see "The Electroencephalogram"). Those rhythms are linked to motor output cortical activity but also to the preparation of movement, which makes them suitable for BCI design. In several research works, such as [18], [17], [31], [32], [47], and [48], the mental tasks consist of imagined movements, typically right- and left-hand imagined movements in a binary-choice BCI. Spatial filtering such as the surface Laplacian [31] or the method of common spatial patterns (CSPs) [32], [48] has also been employed to enhance those frequency features.

An interesting approach to frequency component selection, based on an extension of Kohonen's LVQ [49], is described in [50]. Finally, some BCIs use features not directly related to specific brain activities, such as those proposed in [51], which are based on autoregressive parameters.

The choice of the classification method for the feature-to-command translation, while important, is probably less crucial for successful BCI operation. A decision tree is used in [31], a local neural network in [53], and a Bayesian network in [54], and in [55] a time-dependent multilayer perceptron (MLP) is proposed as an improvement upon the conventional MLP. What matters, however, is the adaptability of the classifier, which permits adjustment of the BCI to short- and


The Electroencephalogram

An electroencephalogram (EEG) is a recording of the very weak (on the order of 5-100 µV) electrical potentials generated by the brain on the scalp. The EEG has been the subject of much fantasy in some popular books and movies. Its origin seemed to imply that it was a direct expression of the cerebral processes, but unfortunately (in some respects one could say fortunately) reality is quite different. The layers of cerebrospinal fluid, bone, and skin that separate the electrodes from the brain itself induce much attenuation and volume conduction. The former causes a poor signal-to-interference ratio and the latter an ample smoothing of the electrical activity of the cortex. Also, many physiological factors influence the EEG.

The pioneering work of Berger in the 1930s led him to discover the existence of structured patterns in the EEG, still under study today. Later, analysis of EEG activity, performed visually on paper records, was aimed at clinical purposes such as the detection of epilepsy and other severe pathologies. The huge advances in acquisition devices and computers in the last 20 years have triggered a host of research on the EEG. If no mind-reading machine is likely to appear, some of the links between EEG activity and sensorimotor and mental processes are now at least partially understood.

The EEG is recorded as a potential difference between a signal electrode placed on the scalp and a reference electrode (generally one ear, or both ears electrically connected). A conductive paste is used to decrease contact impedance and electrode migration. Because of the low signal levels, high-gain, high-quality amplifiers are placed between the electrodes and the acquisition devices. The International Federation in Electroencephalography and Clinical Neurophysiology defined a standard called the 10-20 electrode placement system (see Figure 4). The electrode names are standardized too and correspond to

their location on the scalp. For instance, the two occipital electrodes are called O1 and O2. The average frequency content of the EEG imposes a sampling rate in the range 100-200 Hz, typically 128 Hz.

It is common practice to consider specific frequency bands thought to be associated with specific brain rhythms: the alpha (8-13 Hz), beta (13-30 Hz), delta (0.5-4 Hz), and theta (4-7 Hz) bands. Alpha waves, of moderate amplitude, are typical of relaxed wakefulness (idling). Lower-amplitude beta waves are more prominent during intense mental activity. Theta and delta waves are normal during drowsiness and early slow-wave sleep. Recently a distinction has been made between idling activity focused over the visual cortex, still called the alpha rhythm, and idling activity focused over the motor cortex, now called the mu rhythm. The latter seems to be associated with the beta rhythm.
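As an illustration, the power in each of these classical bands can be estimated from a sampled trace with a simple periodogram; the sketch below (NumPy only) follows the band limits and the 128 Hz sampling rate given in the text, while the function itself is our own illustrative construction.

```python
import numpy as np

# Classical EEG bands (Hz), as given in the text
BANDS = {"delta": (0.5, 4), "theta": (4, 7),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs=128.0):
    """Return the power in each classical EEG band via a periodogram."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}
```

On a relaxed-wakefulness trace one would expect the alpha entry of the returned dictionary to dominate.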

Another EEG modality that is largely used in studies of human cognitive processing is the evoked potential (EP) or, to use a more neutral denomination, the event-related potential (ERP). An EP is the voltage change in EEG activity recorded (typically during a one-second trial) after a stimulus has been presented to the subject. This stimulus is in most cases auditory or visual. In the latter case one often uses the term visual evoked potential (VEP).

The main causes of EEG perturbation, particularly at frontal, temporal, and occipital electrodes, are the electrooculogram (EOG) and the electromyogram (EMG). These perturbations are most often called artifacts in the relevant literature. The EOG is due to fluctuating electrical fields generated by eye movement, with the eyeball constituting an electrical dipole. The EMG is due to the electrical activity of scalp muscles.

Further useful references on EEG and ERP are [61] and [68]-[70].

long-term variations in a subject's EEG activity. Typically, adaptation should take place during test sessions.

Example of a Working BCI System

Architecture and Operational Modes

This section presents the overall architecture of a BCI system which was built to carry out research and development in BCIs for multimedia applications [71]. The system has been designed to be very modular and flexible, so as to exploit it in a large number of BCI applications. In a typical application, the EEG pattern resulting from a specific mental activity (MA) is first learned by the computer in an initial training session and automatically recognized in subsequent sessions.

The training process is mutual, as the human subject and the computer learn how to produce and how to recognize a given EEG pattern corresponding to an MA.

Real-time interaction between the subject and the computer is therefore an essential part of the system. For reasons of efficiency, the BCI system has been designed around five operational modes (OMs), going from simple to more sophisticated.

Visualization OM

In this OM, the user can see a visual representation of her/his EEG activity in real time. Particular features associated with the EEG signals, such as the power values in the typical frequency bands (delta, theta, alpha, and beta), inter-electrode coherences, and the total power at a given electrode, are mapped to a 3-D virtual environment and regularly updated. The objectives of this OM are calibration and the familiarization of the subject with the system.

Training Without Feedback OM

In this OM, audio or visual cues are presented to the subject for him to perform predefined mental activities. This OM allows the computer to learn the EEG patterns associated with particular mental activities. The learning process is carried out offline. Reference models are built for each predefined MA.

Training with Feedback OM

In this OM, the subject is asked to perform an MA and feedback is provided. The feedback is positive when the computer recognizes the MA and negative otherwise. This recognition is based on the reference models of the mental activities calculated during the training without feedback OM. This OM allows the subject to modulate his electrical brain activity to minimize the misclassification rate. As a matter of fact, several series of training

with and without feedback are necessary to achieve good recognition rates.

Control OM

Since the result of the previous experiments is a model for each MA, the subject can start to control the system by producing the mental activities for which the system has been trained. Thus, visual or sound cues are no longer necessary.

Multiuser OM

This OM consists of a multiuser game in which the goal is to gain control of an object by performing an MA. This kind of training paradigm was chosen because of its more stimulating effect when compared to simple feedback.

The above experiments require a carefully designed BCI system in terms of flexibility, speed, and real-time EEG processing. A solution fulfilling these requirements is a distributed system in which each component provides specific services to the others in an efficient and transparent way. Figure 3 depicts the general block diagram of this BCI system.

Three component types can be distinguished:
- a signal production component, whose responsibility is to digitize the acquired EEG signals and transmit them to the processing units;
- a signal processing component, which is in charge of signal preprocessing, feature extraction, model building, and classification;
- a rendering component, which is used to display visual or auditory cues and to render possible feedback.

The communication rules between these components were designed according to the CORBA specifications, and the algorithms were implemented in Java, C, and MATLAB.


Figure 3. BCI system architecture: the signal production component streams EEG data over the network to the signal processing component, which sends control signals to the rendering component.

Signal Preprocessing

As said before, EEG signals can be contaminated by ocular and muscular artifacts. Ocular artifacts are particularly undesirable because of their large amplitude. Therefore, we implemented an online elimination of those EEG segments that are contaminated by ocular artifacts.

The EEG signal is segmented into half-second segments, and the variations of the signal power at electrodes Fp1 and Fp2 (Figure 4) are evaluated. Whenever the power values at both electrodes change abruptly, the corresponding EEG segment is discarded (Figure 5).
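This rejection rule can be sketched as follows (NumPy only; the article does not specify a threshold for "abrupt" power changes, so the variance-ratio test against the median segment power is our assumption):

```python
import numpy as np

def flag_ocular_segments(fp1, fp2, fs=128, seg_s=0.5, ratio=3.0):
    """Flag half-second segments whose power jumps at BOTH Fp1 and Fp2.

    A segment is marked for discarding when its mean power exceeds
    `ratio` times the median segment power on both frontal channels;
    the threshold is an assumption, not taken from the article.
    Returns a boolean array with one entry per segment (True = discard).
    """
    n = int(fs * seg_s)
    nseg = min(len(fp1), len(fp2)) // n

    def seg_power(x):
        x = np.asarray(x, dtype=float)[:nseg * n]
        return (x.reshape(nseg, n) ** 2).mean(axis=1)

    p1, p2 = seg_power(fp1), seg_power(fp2)
    return (p1 > ratio * np.median(p1)) & (p2 > ratio * np.median(p2))
```

Requiring the jump on both electrodes simultaneously matches the text: a genuine blink appears at Fp1 and Fp2 together, whereas single-channel noise does not.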

Feature Extraction and Classification

The BCI system receives commands sent by the user in the form of EEG patterns produced during MAs. Typically, the BCI system analyzes EEG segments (EEG trials) and extracts features from them that are compared with the MA models computed during the training phase. The features extracted from the EEG trials, the model computation, and the comparison methods are interdependent and constitute one of the most important design aspects of a BCI system.

The single EEG trial classification problem can be stated as follows. Given a labeled set of EEG trials (training set) that were recorded during the performance of known MAs, we wish to characterize each MA by a model so that we can compare an unknown EEG trial to each MA model and assign it to the nearest one.
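A minimal instance of this nearest-model scheme can be sketched as below, with our own illustrative choices: the model of an MA is simply the mean feature vector of its training trials, and comparison is by Euclidean distance (the article's actual models are more elaborate).

```python
import numpy as np

def train_models(features, labels):
    """Model each mental activity (MA) by the mean of its training feature vectors."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(trial_features, models):
    """Assign an unknown trial to the MA whose model is nearest (Euclidean)."""
    return min(models, key=lambda c: np.linalg.norm(trial_features - models[c]))
```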

The features that are extracted from EEG trials determine the nature of the MA models and the comparison method. Among the possible choices, we focused on two approaches, based respectively on a multivariate autoregressive analysis of EEG trials and on a joint time-frequency-spatial analysis of the correlations between the univariate components of EEG trials.

Multivariate Autoregressive Analysis

The multivariate autoregressive (MVAR) representation of a discrete-time N-multivariate signal $S(k)$ consists of a weighted linear combination of past observations plus a random, uncorrelated input

$$S(k) = \sum_{l=1}^{p} M(l)\, S(k-l) + e(k). \qquad (1)$$

In (1), $S(k)$ and $e(k)$ are vectors of dimension $N$, and the $M(l),\ l = 1, \ldots, p$, are matrices of dimension $N \times N$ whose coefficients are called the MVAR coefficients. The MVAR coefficients are estimated with a least squares procedure.

The MVAR coefficients were chosen because they reflect the auto- and cross-spectral relationships between the components of a multivariate signal $S(k)$.

In the case of EEG trials, the set of MVAR coefficients calculated for each trial constitutes its feature vector. We used a single-layer neural network (NN) with two possible output values (0 or 1) for each MA pair. The output of the NN corresponding to an MA pair indicates to which MA the input EEG trial is closest. With this method, the classification of an unknown EEG trial S is performed in the following way. The MVAR coefficients of S are estimated and grouped into a feature vector which is fed into the different NNs. A majority rule is then used on the outputs to classify the EEG trial.
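The least-squares MVAR fit of (1) can be sketched as follows (NumPy only; the stacked-regression formulation below is a standard way to solve it, not necessarily the exact procedure used by the authors):

```python
import numpy as np

def fit_mvar(S, p):
    """Least-squares fit of S(k) = sum_{l=1..p} M(l) S(k-l) + e(k).

    S: (N, K) multivariate signal. Returns M of shape (p, N, N);
    flattening M gives the feature vector used for classification.
    """
    N, K = S.shape
    # Stack lagged observations: column k of X holds [S(k-1); ...; S(k-p)]
    X = np.vstack([S[:, p - l:K - l] for l in range(1, p + 1)])  # (p*N, K-p)
    Y = S[:, p:]                                                  # (N, K-p)
    coeffs, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)            # (p*N, N)
    # Reorder so that result[l] is the N x N matrix M(l+1)
    return coeffs.T.reshape(N, p, N).transpose(1, 0, 2)
```

On a simulated first-order process with a known coefficient matrix, the fit recovers that matrix up to estimation noise.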

Joint Time-Frequency-Space Correlation (TFSC) Analysis

In this approach, the EEG trials are analyzed with respect to their correlative time-frequency representation (CTFR). The CTFRs provide a measure of the interaction strength between groups of neurons as a function of time and frequency. Previous studies emphasized the importance of these parameters for classifying brain activity [57]. The CTFR is commonly characterized by the ambiguity function [58]. In the following we limit ourselves to the continuous time and frequency representations for the sake of clarity. A discrete-domain formulation can be found in [72].


Figure 4. International 10-20 system placement for an EEG acquisition (a common reference is used; electrodes Fp1, Fp2, F3, Fz, F4, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2, and ear electrodes A1, A2).

Figure 5. Example of preprocessing of EEG signals: a 0.5-s EEG segment is discarded because of abrupt power changes in the signals recorded at electrodes Fp1 and Fp2.

The ambiguity function of a multivariate signal $S(t) = [s_1(t)\ s_2(t)\ \cdots\ s_N(t)]^t$ is defined in (2). The univariate signals $s_l(t),\ l = 1, \ldots, N$, are the spatial components of $S(t)$:

$$A_S(\theta, \tau) = \int S\!\left(t + \frac{\tau}{2}\right) S^H\!\left(t - \frac{\tau}{2}\right) e^{j\theta t}\, dt \qquad (2)$$

where $H$ stands for the conjugate transpose and $\theta$, $\tau$ are the frequency and time lags, respectively. Equation (2) can be written as

$$A_S(\theta, \tau) = \left[A_{mn}(\theta, \tau)\right], \quad 1 \le m, n \le N,$$
$$A_{mn}(\theta, \tau) = \int s_m\!\left(t + \frac{\tau}{2}\right) s_n^*\!\left(t - \frac{\tau}{2}\right) e^{j\theta t}\, dt. \qquad (3)$$

The diagonal terms of AS ( , )θ τ are called the auto ambigu-ity functions and the off-diagonal terms the cross-ambi-guity functions.
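The article's actual computations use a discrete-domain formulation [72]; one common discrete variant of the auto/cross-ambiguity computation can be sketched as follows (the function names and the circular-lag convention are assumptions made for illustration):

```python
import numpy as np

def cross_ambiguity(sm, sn, n_freq=None):
    """Discrete cross-ambiguity surface of two channels.

    For each integer lag tau, take the DFT over time of the lag product
    s_m[k] * conj(s_n[k - tau]); rows index frequency lag theta, columns
    index time lag tau.  (One common discrete convention, not necessarily
    the exact one of [72].)
    """
    K = len(sm)
    n_freq = n_freq or K
    A = np.empty((n_freq, K), dtype=complex)
    for tau in range(K):
        r = sm * np.conj(np.roll(sn, tau))   # circular lag product
        A[:, tau] = np.fft.fft(r, n_freq)
    return A

def ambiguity_matrix(S):
    """All N*N auto- and cross-ambiguity surfaces of an (N, K) signal."""
    N, K = S.shape
    return np.array([[cross_ambiguity(S[m], S[n]) for n in range(N)]
                     for m in range(N)])
```

At zero lags, the auto-ambiguity reduces to the channel energy, which is a quick sanity check on any implementation.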

In [59], Cohen defined the characteristic function in the univariate case as the product of the ambiguity function and a two-dimensional function called the kernel. In an analogous way, we can define the multivariate characteristic function as

    [M_S(θ, τ)]_{mn} = φ_{mn}(θ, τ) A_{mn}(θ, τ), 1 ≤ m, n ≤ N.    (4)

The two-dimensional functions φ_{mn}(θ, τ) are the kernels. Results in the univariate case [60] demonstrated that it is possible to design a kernel that is optimized for classification in the θ-τ plane. We generalized these results to the multivariate case by considering N² radially Gaussian kernels parameterized by their shape [60].

The characteristic function of a multivariate signal provides information on its joint time, frequency, and space correlations. The off-diagonal terms of the matrix M_S(θ, τ) in (4) represent the spatial correlations. If we transform the multivariate signal to obtain spatially decorrelated components, we reduce the number of kernels to optimize from N² to N [36]. The decorrelating transformation is designed to obtain transformed components (directions) that maximally separate the classes [67].

When working with transformed components, only N kernels have to be determined. They are separately optimized using an iterative procedure that updates their shape so as to enhance those regions of the θ-τ plane where class separation is maximal [36]. The resulting characteristic functions are linearly separable.
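A radially Gaussian kernel of the kind mentioned above, whose spread varies with the radial angle in the θ-τ plane in the spirit of [60], might be sketched as follows. The `sigma_fn` shape parameterization is a hypothetical interface; it stands in for the shape parameters that the iterative optimization would update:

```python
import numpy as np

def radially_gaussian_kernel(theta, tau, sigma_fn):
    """Radially Gaussian kernel evaluated on a theta-tau grid.

    theta, tau : 1-D arrays of frequency and time lags.
    sigma_fn   : callable giving the (positive) radial spread as a
                 function of the angle psi in the theta-tau plane; the
                 kernel shape is entirely determined by this function.
    """
    T, Ta = np.meshgrid(theta, tau, indexing="ij")
    r2 = T ** 2 + Ta ** 2                 # squared radius in the plane
    psi = np.arctan2(Ta, T)               # radial angle
    return np.exp(-r2 / (2.0 * sigma_fn(psi) ** 2))
```

Multiplying such a kernel pointwise with an ambiguity surface, as in (4), gives one entry of the characteristic function; the kernel equals 1 at the origin and decays away from it, faster in directions where sigma_fn is small.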

Results and Discussions

Three types of mental activities, mental counting (MA1) and imagined left and right index movements (MA2 and MA3, respectively), were selected. Three healthy male subjects (S1, S2, and S3) participated in the experiments.

Five 20-minute sessions were carried out. In the first training session, no feedback was provided. In the following sessions, the model references were updated at the end of each session and used in the next session to provide feedback.

The protocol used in a session where feedback was provided is shown in Figure 6. During the first five minutes, the visualization OM was applied to calibrate the system and familiarize the user with it. The rest of the session was divided into three five-minute slices, each divided into rest and recording periods. During a recording period, feedback was provided every half-second. The half-second segment immediately after the visual cue was discarded because of the influence of the visual evoked potentials.
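The half-second segmentation described above can be sketched as follows; the sampling rate and the function name are our own assumptions, since the article does not specify them here:

```python
import numpy as np

def feedback_segments(recording, fs, seg_dur=0.5):
    """Split a recording (channels x samples) into fixed-length segments.

    fs      : sampling rate in Hz (an assumed parameter).
    seg_dur : segment duration in seconds (0.5 s, as in the protocol).
    The first segment, which follows the visual cue and is contaminated
    by the visual evoked potential, is discarded.
    """
    seg_len = int(seg_dur * fs)
    n_seg = recording.shape[1] // seg_len
    segments = [recording[:, i * seg_len:(i + 1) * seg_len]
                for i in range(n_seg)]
    return segments[1:]   # drop the segment right after the visual cue
```

Each retained segment would then be fed to the classifier to produce one feedback decision per half-second.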

EEG signals were recorded from electrodes Fp1, Fp2, C3, C4, P3, P4, O1, and O2 of the 10-20 international system [61]. Among these signals, the last six were used for the classification, and Fp1 and Fp2 for artifact detection and removal [36].

Figure 6. Protocol of a training session where feedback was provided. Each 20-minute session starts with a five-minute visualization-modality slice, followed by three five-minute slices ending with an MA-models update; each five-minute slice alternates breaks with one-minute recordings, each one-minute recording alternates visual cues with mental-activity periods, and during mental activity with feedback, feedback follows the visual cue every half-second.

After the first training session, where no feedback was provided, the weights of the NNs, the optimal value [62] of the model order p for the MVAR approach, and the kernel parameters for the time-frequency-space correlation (TFSC) analysis were determined. From sessions two to five, continuous feedback was provided to the subject.

The results in terms of error rate are shown in Figures 7 and 8. As can be observed from these figures, the classification error rate decreased over the sessions for both approaches. This indicates that the feedback provided to the user and the update of the MA models at the end of each session positively influenced the overall performance of both the user and the BCI. The same trend can be observed in other studies where the feedback strategy was evaluated.

The second approach was better in terms of error rate, which indicates that the analysis of coherent neuronal activity in time, frequency, and space is more appropriate for the recognition of mental activities.

Perspectives and Challenges

BCI technology, by its intimate connection with the wonder of human thought processes, is a fascinating research field. The existing BCI systems have demonstrated that direct information transfer from the brain to a device is indeed possible, but many lines of investigation are wide open for multidisciplinary teams. One of the first problems to address is the limitation of the information transfer rate, currently at best 20 bits/min. It seems dubious that BCI protocols based on mental task classification can improve this figure by much. A simple reason for this is the practical impossibility for a subject to switch between mental tasks, even simple ones such as imagined movements, at time intervals smaller than two or three seconds. Finer spatial and temporal scales of analysis, as well as the determination of more stable and more reliable features, could cope with more complex mental tasks. Other cortical activities, such as emotions, could be employed to initiate information transfer. In the framework of performance improvement, let us mention the very sensible advocacy in [23] for the design of reliable tools for performance assessment, in order to assure scientific credibility. Information transfer rate [23], [63] is a first step in this direction.
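For reference, the information transfer rate of [23], [63] combines the number of classes N, the classification accuracy P, and the trial rate. A minimal sketch, under the usual convention that accuracy at or below chance carries zero information:

```python
import math

def wolpaw_itr(n_classes, accuracy, trials_per_min):
    """Information transfer rate in bits/min, following Wolpaw et al. [23].

    Bits per trial: log2(N) + P log2(P) + (1-P) log2((1-P)/(N-1)),
    scaled by the number of trials (selections) per minute.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0                       # at or below chance level
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * trials_per_min
```

For example, a perfectly accurate binary selection every three seconds yields 20 bits/min, matching the figure quoted above.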

Another very challenging aspect of BCI technology is the interplay between the BCI system and the subject, in terms of adaptation and learning. Ideally, the BCI system should adapt to changes in the subject's EEG activity and the subject should also learn to better control his EEG activity, but these conjugate tasks should be as harmonious as possible. Finally, let us note that the limitations of EEG-based BCI systems seem less important in a multimodal HCI. The EEG activity is, by its very nature, "orthogonal" to more classical modes. The inclusion of a BCI in a multimodal interface thus results in a net gain of information transfer capability.

Figure 7. Evolution of the classification error rate over sessions two to five with MVAR features, for subjects S1, S2, and S3.

Figure 8. Evolution of the classification error rate over sessions two to five with TFSC features, for subjects S1, S2, and S3.

Touradj Ebrahimi received M.Sc. and Ph.D. degrees in electrical engineering from the Swiss Federal Institute of Technology, Lausanne (EPFL), in 1989 and 1992, respectively. In 1993, he was a research engineer at the Corporate Research Laboratories of Sony Corporation, Tokyo. In 1994, he was a researcher at AT&T Bell Laboratories. He is currently a titular professor at the Signal Processing Institute of the School of Engineering at EPFL, where he is involved in research and teaching on multimedia information processing and coding. In 2002, he founded Emitall, a research and development company in electronic media innovations. He was the recipient of the IEEE and Swiss national ASE award in 1989 and the winner of the first prize for the best paper appearing in IEEE Transactions on Consumer Electronics in 2001. In 2001 and 2002, he received two ISO awards for contributions to the MPEG-4 and JPEG 2000 standards. He is author or co-author of over 100 scientific publications and holds a dozen patents.

Jean-Marc Vesin graduated from the Ecole Nationale Supérieure d'Ingénieurs Electriciens de Grenoble (ENSIEG, Grenoble, France) in 1980. He received his M.Sc. from Laval University, Québec City, Canada, in 1984, where he spent four years on research projects. After two years in industry, he joined the Signal Processing Institute (ITS) of the Swiss Federal Institute of Technology, Lausanne, Switzerland, where he obtained his Ph.D. in 1992. His primary interests are nonlinear signal modeling and analysis, adaptive systems, and evolutionary algorithms. From an application viewpoint, he works mainly in biomedical signal processing and in computer modeling of the electrical activity of biological tissue.

Gary Garcia received his M.Sc. degree in electrical engineering from the Swiss Federal Institute of Technology (EPFL) in 2001. Since April 2001, he has been a Ph.D. student at EPFL, where he is involved in several projects in image, video, and biomedical signal processing. Currently, he is developing an adaptive direct brain-computer communication device with a strong emphasis on models and interpretations of synchronized neural activity.

References

[1] V.I. Pavlovic, R. Sharma, and T. Huang, "Visual interpretation of hand gestures for human-computer interaction: A review," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 677-695, July 1997.

[2] Z. Duric, W.D. Gray, R. Heishman, L. Faylin, A. Rosenfeld, M.J. Schoelles, C. Schunn, and H. Wechsler, "Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction," Proc. IEEE, vol. 90, pp. 1272-1289, July 2002.

[3] W. Ying and T.S. Huang, "Nonstationary color tracking for vision-based human-computer interaction," IEEE Trans. Neural Networks, vol. 13, pp. 948-960, July 1997.

[4] R. Picard, Affective Computing. Cambridge, MA: MIT Press, 1998.

[5] J. Grey, "Human-computer interaction in life drawing, a fine artist's perspective," in Proc. IEEE 6th Int. Conf. Information Visualisation, London, 2002, pp. 761-770.

[6] Y. Daabaj, "An evaluation of the usability of human-computer interaction methods in support of the development of interactive systems," in Proc. IEEE Int. Conf. System Sciences, Hawaii, 2001, pp. 1830-1839.

[7] L.S. Chen and T.S. Huang, "Emotion expressions in audiovisual human computer interaction," in Proc. IEEE Int. Conf. Multimedia Expo., New York, 2000, vol. 1, pp. 423-426.

[8] D.W. Massaro, "Perceptual interfaces in human computer interaction," in Proc. IEEE Int. Conf. Multimedia Expo., New York, 2000, vol. 1, pp. 563-566.

[9] H. Hongo, M. Ohya, M. Yasumoto, and K. Yamamoto, "Face and hand gesture recognition for human-computer interaction," in Proc. IEEE 15th Int. Conf. Pattern Recognition, 2000, vol. 2, pp. 921-924.

[10] J.A. Landay, "Informal user interface for natural human-computer interaction," IEEE Intell. Syst., vol. 13, pp. 14-16, May-June 1998.

[11] J. Segen and S. Kumar, "Human-computer interaction using gesture recognition and 3D hand tracking," in Proc. IEEE Int. Conf. Image Processing, Chicago, IL, 1998, vol. 3, pp. 188-192.

[12] K.-H. Englmeier, C. Krapichler, M. Haubner, M. Seemann, and M. Reiser, "Virtual reality and multimedia human-computer interaction in medicine," in Proc. IEEE 2nd Workshop Multimedia Signal Processing, Redondo Beach, CA, 1998, pp. 193-202.

[13] R. Stiefelhagen and J. Yang, "Gaze tracking for multimodal human-computer interaction," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, Munich, Germany, 1997, vol. 4, pp. 2617-2620.

[14] C.M. Jones and S.S. Dlay, "Human-computer interaction and animation system for simple interfacing to virtual environments," in Proc. IEEE Int. Conf. Comput. Cybernetics Sim., Cambridge, U.K., 1997, vol. 5, pp. 4242-4247.

[15] N. Kayian, "Exploratory study of implicit theories in human computer interaction," in Proc. IEEE 6th Australian Conf. Computer-Human Interaction, New Zealand, 1996, pp. 338-339.

[16] C. Guger, A. Schlögl, C. Neuper, D. Walterspacher, T. Strein, and G. Pfurtscheller, "Rapid prototyping of an EEG-based brain-computer interface (BCI)," IEEE Trans. Rehab. Eng., vol. 9, pp. 49-58, Mar. 2001.

[17] J.R. Wolpaw, D.J. McFarland, and T.M. Vaughan, "Brain-computer interface research at the Wadsworth Center," IEEE Trans. Rehab. Eng., vol. 8, pp. 222-226, June 2000.

[18] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, H. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, "Current trends in Graz brain-computer interface (BCI) research," IEEE Trans. Rehab. Eng., vol. 8, pp. 216-219, June 2000.

[19] G.E. Birch and S.G. Mason, "Brain-computer interface research at the Neil Squire Foundation," IEEE Trans. Rehab. Eng., vol. 8, pp. 193-195, June 2000.

[20] J.D. Bayliss and D.H. Ballard, "A virtual reality testbed for brain-computer interface research," IEEE Trans. Rehab. Eng., vol. 8, pp. 188-190, June 2000.

[21] J.R. Wolpaw, N. Birbaumer, W.J. Heetderks, D.J. McFarland, P. Peckman, G. Schalk, E. Donchin, L.A. Quatrano, C.J. Robinson, and T.M. Vaughan, "Brain-computer interface technology: A review of the first international meeting," IEEE Trans. Rehab. Eng., vol. 8, pp. 164-173, June 2000.

[22] N. Birbaumer, A. Kubler, N. Ghanayim, T. Hinterberger, J. Perelmouter, J. Kaiser, I. Iversen, B. Kotchoubey, N. Neumann, and H. Flor, "The thought translation device (TTD) for completely paralyzed patients," IEEE Trans. Rehab. Eng., vol. 8, pp. 190-193, June 2000.

[23] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan, "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, pp. 767-791, 2002.

[24] J.J. Vidal, "Real-time detection of brain events in EEG," Proc. IEEE, vol. 65, pp. 633-664, May 1977.

[25] Z.A. Keirn and J.I. Aunon, "A new mode of communication between man and his surroundings," IEEE Trans. Biomed. Eng., vol. 37, pp. 1209-1214, Dec. 1990.

[26] E. Donchin, K.M. Spencer, and R. Wijesinghe, "The mental prosthesis: Assessing the speed of a P300-based brain-computer interface," IEEE Trans. Rehab. Eng., vol. 8, pp. 174-179, June 2000.

[27] E.E. Sutter, "The brain response interface: Communication through visually-induced electrical brain responses," J. Microcomput. Appl., vol. 15, pp. 31-45, 1992.

[28] M. Middendorf, G. McMillan, G. Calhoun, and K.S. Jones, "Brain-computer interfaces based on steady-state visual-evoked response," IEEE Trans. Rehab. Eng., vol. 8, pp. 211-214, June 2000.

[29] S.G. Mason and G.E. Birch, "A brain-controlled switch for asynchronous control applications," IEEE Trans. Biomed. Eng., vol. 47, pp. 1297-1307, Oct. 2000.

[30] G. Pfurtscheller and C. Neuper, "Motor imagery and direct brain-computer communication," Proc. IEEE, vol. 89, pp. 1123-1134, July 2001.

[31] J. del R. Millan, M. Franzé, J. Mouriño, F. Cincotti, and F. Babiloni, "Relevant EEG features for the classification of spontaneous motor-related tasks," Biol. Cybern., vol. 86, pp. 89-95, 2002.

[32] H. Ramoser, J. Mueller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans. Rehab. Eng., vol. 8, pp. 441-446, Dec. 2000.

[33] L.P. Panych, J.A. Wada, and M.P. Beddoes, "Practical digital filters for reducing EMG artifacts in EEG seizure recordings," Electroencephalogr. Clin. Neurophysiol., vol. 72, pp. 268-272, 1989.

[34] P.M. Quilter, B.B. McGillivray, and D.G. Wadbrook, "The removal of eye movement artefact from the EEG signals using correlation techniques," Proc. Inst. Elec. Eng. Conf. Publ., vol. 159, pp. 93-100, 1977.

[35] K.D. Rao and D.C. Reddy, "On-line method for enhancement of electroencephalogram signals in presence of electro-oculogram artifacts using non-linear recursive least squares technique," Med. Biol. Eng. Comput., vol. 33, pp. 488-491, 1995.

[36] G. Garcia and T. Ebrahimi, "Time-frequency-space kernel for single EEG-trial classification," presented at the NORSIG Conf., Norway, 4-7 Oct. 2002.

[37] T.D. Lagerlund, F.W. Sharbrough, and N.E. Busacker, "Spatial filtering of multichannel electroencephalographic recordings through principal component analysis by singular value decomposition," J. Clin. Neurophysiol., vol. 14, pp. 73-82, 1997.

[38] A. Hyvärinen and E. Oja, "Independent component analysis: Algorithms and applications," Neural Netw., vol. 13, pp. 411-430, 2000.

[39] T.W. Lee, Independent Component Analysis: Theory and Applications. Boston, MA: Kluwer, 1998.

[40] T.-P. Jung, S. Makeig, C. Humphries, T.W. Lee, M.J. McKeown, V. Iragui, and T.J. Sejnowski, "Removing electroencephalographic artifacts by blind source separation," Psychophysiology, vol. 37, pp. 163-178, 2000.

[41] R. Vigario, J. Särelä, V. Jousmäki, M. Hämäläinen, and E. Oja, "Independent component approach to the analysis of EEG and MEG recordings," IEEE Trans. Biomed. Eng., vol. 47, pp. 589-593, 2000.

[42] S. Vorobyov and A. Cichoki, "Blind noise reduction for multisensory signals using ICA and subspace filtering, with application to EEG analysis," Biol. Cybern., vol. 86, pp. 293-303, 2002.

[43] L. Vigon, M.R. Saatchi, J.E.W. Mayhew, and R. Fernandes, "Quantitative evaluation of techniques for ocular filtering of EEG waveforms," Proc. Inst. Elec. Eng.-Sci. Meas. Technol., vol. 147, pp. 219-228, Sept. 2000.

[44] T.-P. Jung, S. Makeig, M. McKeown, A.J. Bell, T.W. Lee, and T.J. Sejnowski, "Imaging brain dynamics using independent component analysis," Proc. IEEE, vol. 89, pp. 1107-1122, July 2001.

[45] N. Birbaumer, "Slow cortical potentials: Their origin, meaning, and clinical use," in Brain and Behavior Past, Present, and Future, G.J.M. van Boxtel and K.B.E. Böcker, Eds. Tilburg, The Netherlands: Tilburg Univ. Press, 1997.

[46] A. Kübler, B. Kotchoubey, J. Kaiser, J.R. Wolpaw, and N. Birbaumer, "Brain-computer communication: Unlock the lock-in," Psych. Bull., vol. 127, pp. 358-375, 2001.

[47] J.R. Wolpaw and D.J. McFarland, "Multichannel EEG-based brain-computer communication," Electroencephalogr. Clin. Neurophysiol., vol. 90, pp. 444-449, 1994.

[48] C. Guger, H. Ramoser, and G. Pfurtscheller, "Real-time EEG analysis with subject-specific spatial patterns for a brain-computer interface (BCI)," IEEE Trans. Rehab. Eng., vol. 8, pp. 447-456, Dec. 2000.

[49] T. Kohonen, "The self-organizing map," Proc. IEEE, vol. 78, pp. 1464-1480, 1990.

[50] M. Pregenzer and G. Pfurtscheller, "Frequency component selection for an EEG-based brain to computer interface," IEEE Trans. Rehab. Eng., vol. 7, pp. 413-417, Dec. 1999.

[51] C.W. Anderson, E.A. Stolz, and S. Shamsunder, "Multivariate autoregressive models for classification of spontaneous electroencephalographic signals during mental tasks," IEEE Trans. Biomed. Eng., vol. 45, pp. 277-286, Mar. 1998.

[52] J. del R. Millan, J. Mouriño, M. Franzé, F. Cincotti, M. Varsta, J. Heikkonen, and F. Babiloni, "A local neural classifier for the recognition of EEG patterns associated to mental tasks," IEEE Trans. Neural Networks, vol. 13, pp. 678-686, 2002.

[53] W.D. Penny, S.J. Roberts, E.A. Curran, and M.J. Stokes, "EEG-based communication: A pattern recognition approach," IEEE Trans. Rehab. Eng., vol. 8, pp. 214-215, June 2000.

[54] E. Haselsteiner and G. Pfurtscheller, "Using time-dependent neural networks for EEG classification," IEEE Trans. Rehab. Eng., vol. 8, pp. 457-463, Dec. 2000.

[55] B.D. Ripley, Pattern Recognition and Neural Networks. Cambridge, U.K.: Cambridge Univ. Press, 1996.

[56] N. Saiwaki, N. Kimura, and S. Nishida, "An analysis of EEGs based on information flow with SD method," in Proc. 1998 IEEE Int. Conf. Syst., Man, Cybernetics, San Diego, CA, Oct. 1998, pp. 4115-4119.

[57] P.L. Nunez, R. Srinivasan, A.F. Westdorp, R.S. Wijesinghe, D.M. Tucker, R.B. Silberstein, and P.J. Cadusch, "EEG coherency I: Statistics, reference electrode, volume conduction, Laplacians, cortical imaging, and interpretation at multiple scales," Electroencephalogr. Clin. Neurophysiol., vol. 103, pp. 499-515, 1997.

[58] M. Coates, "Time-frequency modelling," Ph.D. dissertation, Dept. Engineering, Univ. of Cambridge, Cambridge, U.K., Aug. 1998.

[59] L. Cohen, Time-Frequency Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1995.

[60] M. Davy, "Noyaux optimisés pour la classification dans le plan temps-fréquence - Proposition d'un algorithme constructif et d'une référence bayésienne basée sur les méthodes MCMC - Application au diagnostic d'enceintes acoustiques," Ph.D. dissertation, Institut de Recherche en Communications et Cybernétique de Nantes, Université de Nantes, Nantes, France, Sept. 2000.

[61] H.H. Jasper, "The ten-twenty electrode system of the international federation," Electroencephalogr. Clin. Neurophysiol., vol. 10, pp. 371-375, 1958.

[62] A. Neumaier and T. Schneider, "Estimation of parameters and eigenmodes of multivariate autoregressive models," ACM Trans. Math. Softw., vol. 27, no. 1, pp. 27-57, Mar. 2001.

[63] B. Obermaier, C. Neuper, C. Guger, and G. Pfurtscheller, "Information transfer rate in a five-classes brain-computer interface," IEEE Trans. Rehab. Eng., vol. 9, pp. 283-288, Sept. 2001.

[64] H. Tamura, H. Yamamoto, and A. Katayama, "Mixed reality: Future dreams seen at the border between real and virtual worlds," IEEE Comput. Graph. Appl., vol. 21, pp. 64-70, Nov.-Dec. 2001.

[65] E. Badique, "New imaging frontiers: 3D and mixed reality," in Proc. IEEE 3D Data Proc. Vis. Trans. Conf., 2002, pp. 296-304.

[66] H. Yamamoto, "Case studies of producing mixed reality worlds," IEEE Trans. Syst. Man Cybernet., vol. 6, pp. 42-47, 1999.

[67] C.W. Therrien, Decision Estimation and Classification. New York: Wiley, 1989.

[68] J.D. Bronzino, "Principles of electroencephalography," in The Biomedical Engineering Handbook, J.D. Bronzino, Ed. Boca Raton, FL: CRC, 1995.

[69] K.A. Kooi, R.P. Tucker, and R.E. Marshall, Fundamentals of Electroencephalography, 2nd ed. Hagerstown, MD: Harper & Row, 1978.

[70] M.D. Rugg and M.G.H. Coles, Eds., Electrophysiology of Mind: Event-Related Brain Potentials and Cognition. Oxford, U.K.: Oxford Univ. Press, 1995.

[71] G. Garcia, T. Ebrahimi, and J.-M. Vesin, "Classification of EEG signals in the ambiguity domain for brain-computer interface applications," in Proc. IEEE Int. Conf. Digital Signal Processing, Santorini, Greece, vol. 1, 1-3 July 2002, pp. 301-305.

[72] L. Atlas, P. Droppo, and J. McLaughlin, "Optimizing time-frequency distributions for automatic classification," Proc. SPIE, vol. 3162, 1997.
