Biosignal Processing for Human-Machine Interaction
Tanja Schultz, Cognitive Systems Lab, Universität Bremen
T. Schultz, http://csl.uni-bremen.de
Keynote on Youtube: https://www.youtube.com/watch?v=XzkQnzP7yPA
Definition Biosignals: autonomous signals measured in physical quantities.

[Figure: "Biosignals | Sensors | Human Activities, Modalities"
• Kinetic biosignals: accelerometer, gyroscope, magnetometer
• Electrical biosignals: surface electrodes
• Acoustic biosignals: microphone, sonograph
• Chemical biosignals: mass spectrometer, chemo-electrical sensors
• Thermal biosignals: thermometer, thermal camera
• Optical biosignals: video camera, near-infrared spectroscopy
Activities and modalities covered: speech, motion, gesture, facial expression, eye gaze, nonverbal articulation, brain activity, muscle activity, heart activity, dermal activity, body temperature, body noise, respiration]
T. Schultz, C. Amma, D. Heger, F. Putze, M. Wand: Biosignale-basierte Mensch-Maschine-Schnittstellen (Biosignal-based human-machine interfaces), at - Automatisierungstechnik, vol. 61, 2013.
[The biosignal taxonomy figure from the previous slide is repeated here, highlighting Speech as the modality of interest.]
Biosignal-based Spoken Communication
T. Schultz, M. Wand, T. Hueber, D. Krusienski, C. Herff, J. Brumberg. Biosignal-based Spoken Communication: A Survey. In IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 25, pp 2257-2271, 2017.
Speech-related Activities and Biosignals

SPEECH production is a complex process resulting from human activities. It is
• initiated in the brain,
• leading to muscle activities that produce
• respiratory, laryngeal, and articulatory gestures, which create acoustic signals.

Speech-related activities can be measured at each level of speech processing, including
• the central and peripheral nervous system,
• muscular action potentials,
• speech kinematics.

Their measurement, recorded with various sensor technologies, results in "speech-related biosignals".
Panoply of Sensor Technologies
• Ultrasound + video (Hueber, Denby), EMA (Schönle), OPG (Birkholz), PMA (Gilbert/Gonzalez, Erro/Hernaez), NAM (Nakajima), intraoral (Bos), radar, …
• EMG (Jorgensen, Schultz), lip reading (Petajan, others)
• ECoG (Herff, Cheng), microelectrodes (Brumberg), EEG (Wester, D'Zmura), fNIRS (Herff/Schultz), MEG (Wang)
Standing On the Shoulders of Giants
• Biosignals have been studied for decades to better understand the mechanisms of human speech processing.
– Traditional speech processing still focuses mostly on the acoustic signal.
– Speech-related biosignals other than acoustics could overcome current limitations in producing speech AND in processing speech.
• Capture speech prior to the airborne acoustic signal:
– Leverage the fact that muscle and brain activities precede acoustic output.
– Different speaking modes: normal (modal) speech, whispered.
– Different levels of effort: normal, shouted, murmured.
• Capture speech even if no acoustic output is available:
– Silent speech (mouthing speech)
– Imagined speech: like silent speech, but articulation is suppressed.
– Inner speech: internalized process in which one thinks in pure meaning (no phonological properties, no turn-taking qualities, etc.).
Biosignal-based Spoken Communication

Like acoustics, speech-related biosignals can be automatically processed:
• Feature extraction followed by speech recognition, speech synthesis, …

This opens up novel use cases, coined Biosignal-based Spoken Communication:
• Robust Spoken Communication
– Enhance performance under adverse noise conditions
– Fuse complementary biosignals
• Mute-Spoken Communication
– Avoid disturbance in quiet environments
– Secure against eavesdropping in public places
• Restore Spoken Communication
– Voice prostheses for individuals unable to speak
• Speech Training and Therapy
– Deliver articulatory biofeedback of voice production
– Increase articulatory awareness for therapy and training
Tutorial 4 at Interspeech 2019 (Herff/Hueber): Biosignal-based Speech Processing, Speech Therapy & Language Learning
Automatic Speech Recognition (ASR)

Modality: speech | Sensor: microphone | Acoustic biosignal.

Pipeline: speech signal capturing → signal preprocessing → automatic speech recognition (ASR) → text output ("Hello").

Knowledge sources: acoustic model (example patterns matched over time), dictionary (I: /i/; we: /w/ /e/; you: /j/ /u/), language model (I am, you are, we are). Decoding follows the fundamental equation

\hat{W} = \arg\max_W P(W \mid X) = \arg\max_W p(X \mid W)\, P(W)\, \frac{1}{p(X)}
Use Muscle Activity instead of Acoustics

Modality: speech | Sensor: EMG sensor | Electrical biosignal.

Pipeline: speech signal capturing → signal processing → automatic speech recognition (ASR) → text output ("Hello").

Signal processing:
• Time-domain features
• Artefact reduction
• Contextual features
• Compression

Knowledge sources: dictionary (I: /i/; we: /w/ /e/; you: /j/ /u/) and language model (I am, you are, we are).

References: Maier-Hein et al., ASRU 2005; Jou/Schultz 2006-2009; Wand/Schultz 2007-2014; Janke/Schultz 2010-2016; Diener/Schultz 2015-. Wand, Janke, Schultz: Tackling Speaking Mode Varieties in EMG-based ASR, IEEE Biomedical Engineering, Vol 61, 2014.
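The signal-processing bullets above (time-domain features plus contextual stacking) can be sketched in code. This is only an illustrative sketch; the exact frame sizes and feature set of the published EMG pipeline differ:

```python
import numpy as np

def td_features(signal, frame_len=50, shift=25):
    """Frame-based time-domain features: mean, power, rectified mean, zero crossings."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, shift):
        f = signal[start:start + frame_len]
        zc = np.sum(np.signbit(f[:-1]) != np.signbit(f[1:]))  # zero-crossing count
        feats.append([f.mean(), np.mean(f ** 2), np.mean(np.abs(f)), zc])
    return np.array(feats)

def stack_context(feats, k=2):
    """Stack each frame with its +/- k neighbors (edges padded by repetition)."""
    padded = np.pad(feats, ((k, k), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(feats)] for i in range(2 * k + 1)])

emg = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
f = td_features(emg)       # (n_frames, 4) per-frame features
x = stack_context(f, k=2)  # (n_frames, 20) contextual feature vectors
```

Contextual stacking widens the temporal receptive field of each feature vector, which matters because EMG activity precedes the corresponding acoustic events.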
Silent Speech Interfaces: EMG

Surface ElectroMyoGraphy (EMG):
• Surface = no needles
• Electro = electrical activity
• Myo = muscle
• Graphy = recording

Speech results from the activity of articulatory muscles. Electrodes capture the electrical potentials of the muscle activity in the face. EMG records muscle activity, not the acoustic signal.

[Figure: EMG signal of "zero zero zero"; bipolar derivation s2 - s1 against a reference electrode]
Denby, Schultz, Honda, Hueber, Gilbert, Brumberg (2010): Silent Speech Interfaces. Speech Communication, Vol 52 (4).
Silent Speech Interfaces: Benefits

EMG records motion, not acoustics ⇒ silent speech can be processed. In silent speech, speakers are instructed to move their articulators as if they were producing normal modal speech, but to suppress the pulmonary airstream so that no sound is heard.

• No disturbance: speak silently in quiet environments
• Keep your privacy: transmit confidential information
• Noise robustness: no corruption in noisy environments
• Speech augmentation: support speech-impaired people

Silent phone calls ("Lautlos Telefonieren"), https://www.wdrmaus.de/filme/sachgeschichten/lautlos_telefonieren.php5
M. Wand, T. Schultz, J. Schmidhuber: Domain-Adversarial Training for Session Independent EMG-based Speech Recognition, Interspeech 2018
Deep EMG-to-Text (ASR)
• Common part (shared feature extractor)
• BDPF state classifier (only on source sessions)
• Session classifier: inverted gradient (confuse sessions), configurable contribution

Apply adversarial training to session-independent EMG-based ASR, i.e., make data of different sessions more confusable, to improve the target classification accuracy.
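The "inverted gradient" above refers to the gradient reversal trick of domain-adversarial training. A framework-free sketch of just that layer is shown below (the published system is a deep network; the `lam` factor stands in for the "configurable contribution" of the session classifier, an assumption on my part):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies gradients by -lam in the backward pass.

    Placed between the shared feature extractor and the session classifier,
    it pushes the extractor to CONFUSE sessions while the phone-state
    classifier is trained normally.
    """

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # flip and scale gradients flowing back

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                         # identical to x
g = grl.backward(np.array([0.1, 0.2, -0.3]))  # sign-flipped, scaled gradient
```

Because the session classifier's gradient arrives at the extractor with its sign flipped, minimizing the session loss downstream maximizes session confusion upstream.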
• EMG-UKA corpus:
– Many sessions of eight subjects, same scenario
– Audible, silent and whispered speaking modes
EMG Multi-speaker, Multi-Session Data
Free trial corpus at http://csl.uni-bremen.de/en/csl/research/silent-speech-communication
M. Wand, M. Janke, T. Schultz: The EMG-UKA corpus for electromyographic speech processing, Interspeech 2014
Full corpus available from ELRA.
Challenge: Lack of Auditory Feedback
• EMG signals of silent speech differ from those of audible speech.
• The effect is weaker for experienced speakers; group into "good" and "bad" speakers.
• Power spectral densities (PSD) of audible, whispered and silent EMG show significantly smaller variations for "good" speakers.
• A spectral mapping algorithm compensates for the differences (~12% relative).
• Probable cure: provide instant auditory feedback → Direct Synthesis.

'Bad' silent speaker (WER > 40%) vs. 'good' silent speaker (WER < 40%)
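One way to picture such a compensation is statistics matching per spectral bin: map the silent-EMG distribution onto the audible-EMG distribution. This is a hedged sketch only; the published spectral mapping method differs in its details:

```python
import numpy as np

def fit_mapping(silent_spec, audible_spec):
    """Per-bin statistics of both conditions (frames x bins arrays)."""
    return (silent_spec.mean(0), silent_spec.std(0) + 1e-8,
            audible_spec.mean(0), audible_spec.std(0))

def apply_mapping(spec, params):
    """Shift/scale each bin of `spec` to the audible statistics."""
    mu_s, sd_s, mu_a, sd_a = params
    return (spec - mu_s) / sd_s * sd_a + mu_a

rng = np.random.default_rng(0)
audible = rng.normal(5.0, 2.0, size=(200, 32))  # synthetic audible-EMG spectra
silent = rng.normal(3.0, 1.0, size=(200, 32))   # weaker silent-EMG spectra
mapped = apply_mapping(silent, fit_mapping(silent, audible))
```

After mapping, each bin of the silent data has the mean and spread of the audible data, so a recognizer trained on audible EMG sees statistically familiar input.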
Janke/Wand/Schultz: A Spectral Mapping Method for EMG-based Recognition of Silent Speech, Biosignals 2010
Two Methods: ASR versus Direct Synthesis
EMG-to-Text: automatic speech recognition (ASR) with dictionary (I: /i/; you: /j/ /u/; we: /v/ /e/) and language model (I am, you are, we are) produces TEXT.

EMG-to-Speech: feature transform (EMG-2-MFCC) plus vocoding produces SPEECH directly.
EMG-to-Speech (Subjective Eval)
Mean Opinion Scores from a listening test against the resynthesized reference (Resynth): 10 subjects rated speech quality from 0 (bad) to 100 (excellent); error bars show the standard deviation.
M. Janke and L. Diener, “EMG-to-speech: Direct generation of speech from facial electromyographic signals,” IEEE/ACM Trans. Audio, Speech, Language Process., Special Issue Biosignal-based Spoken Communication, December 2017.
Human in the Loop: Low-Latency Synthesis

Real-time speech output to enable:
– Natural conversation (retain paralinguistic information)
– Auditory feedback with acceptable delay (EMG precedes the audio)

Diener/Schultz: Investigating Objective Intelligibility in Real-Time EMG-to-Speech Conversion, Interspeech 2018.
Special session at Interspeech 2018: Novel Paradigms for Direct Synthesis based on Speech-related Biosignals.
Lorenz Diener, Doctoral Consortium
Interspeech 2019
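Low latency means converting frame by frame as samples arrive, rather than waiting for a whole utterance. The loop below is an illustrative sketch; `convert_frame` is a placeholder for the trained EMG-to-speech model (an assumption, not the actual system):

```python
import numpy as np

FRAME = 32  # samples per frame; small frames keep the output delay low

def convert_frame(frame):
    """Placeholder for the learned EMG-to-spectrum mapping (illustration only)."""
    return np.abs(frame)

def streaming_convert(sample_stream):
    """Emit one converted chunk as soon as each full frame of samples arrives."""
    buf, out = [], []
    for s in sample_stream:
        buf.append(s)
        if len(buf) == FRAME:                     # full frame available:
            out.append(convert_frame(np.array(buf)))  # convert immediately
            buf.clear()
    return out

chunks = streaming_convert(np.random.randn(100))  # 3 full frames, remainder buffered
```

The worst-case algorithmic delay is one frame (here 32 samples) plus the model's compute time, which is what makes auditory feedback during speaking feasible.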
From Muscle to Brain Activities

• Acoustic Speech Recognition
– Audible speech produced by the (excited) human articulatory apparatus
⇒ Traditional Speech-to-Text
• Silent Speech Recognition
– Silent speech captured via the muscle activities which move the articulatory apparatus
– Speech involves innervation of muscles
⇒ EMG-to-Text, EMG-to-Speech
• Imagined Speech Recognition
– Thinking about producing speech
⇒ Brain-to-Text, Brain-to-Speech
ElectroCorticoGraphy (ECoG)
• ECoG captures the electrical activity of the brain with high temporal resolution (like EEG) but also high spatial resolution (unlike EEG)
– records directly on the brain surface
• 7 subjects with intractable epilepsy, Albany Medical Center (NY, US)
• Electrode locations determined by clinical needs
• Very little data per session (about 5 minutes)
– Political speeches, fan fiction, children's rhymes
– Between 20 and 48 phrases per session

Electrode positions co-registered in common Talairach space.
Experiments with Audible Speech
• Participants read aloud scrolling text (ticker task)
• ECoG and acoustics recorded simultaneously
• Assign phone labels from the acoustic stream (forced alignment, ASR)
• Impose the labels on the neural data; model phones solely from neural data
C Herff, D Heger, A de Pesters, D Telaar, P Brunner, G Schalk and T Schultz: Brain-to-text: decoding spoken phrases from phone representations in the brain. Front. Neurosci., 12 June 2015, http://dx.doi.org/10.3389/fnins.2015.00217
Brain-to-Text (Speech Recognition)

Input biosignals: EMG + EEG, fNIRS, electrocorticography (ECoG).

Pipeline: speech signal capturing → signal processing → automatic speech recognition → text output ("Hello").

Knowledge sources: dictionary (I: /i/; you: /j/ /u/; we: /v/ /e/) and language model (I am, you are, we are). Decoding follows

\hat{W} = \arg\max_W P(W \mid X) = \arg\max_W p(X \mid W)\, P(W)\, \frac{1}{p(X)}
Features and Feature Selection
• 60-120 channels; per channel, extract the log power of broadband gamma (70-170 Hz) in 50 ms intervals with 25 ms overlap
• Stack with ±4 neighboring frames (up to 200 ms prior and after)
⇒ Large feature space: over 900 dimensions
• Select features based on discriminability: mean Kullback-Leibler divergence D_KL
• Calculate D_KL between phone pairs → mean discriminability for the current phone at each location and time offset
• Plotting the mean D_KL allows interpretation of relevant brain areas and time offsets
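The KL-based selection step can be sketched as follows: model each phone's distribution per feature dimension as a 1-D Gaussian and rank dimensions by the mean symmetric KL divergence over all phone pairs. This is an illustrative sketch with synthetic data, not the exact published procedure:

```python
import numpy as np

def kl_gauss(m1, v1, m2, v2):
    """KL divergence between two 1-D Gaussians (element-wise over dimensions)."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def rank_features(X, labels):
    """Rank feature dimensions by mean symmetric KL over all phone pairs."""
    phones = np.unique(labels)
    stats = {p: (X[labels == p].mean(0), X[labels == p].var(0) + 1e-8) for p in phones}
    score = np.zeros(X.shape[1])
    for i, p in enumerate(phones):
        for q in phones[i + 1:]:
            m1, v1 = stats[p]
            m2, v2 = stats[q]
            score += kl_gauss(m1, v1, m2, v2) + kl_gauss(m2, v2, m1, v1)
    return np.argsort(score)[::-1]  # most discriminative dimension first

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))              # 300 frames, 5 feature dimensions
y = np.repeat(["a", "b", "c"], 100)        # synthetic phone labels
X[y == "a", 2] += 5.0                      # make dimension 2 clearly discriminative
order = rank_features(X, y)
```

Dimension 2 comes out on top because phone "a" is well separated there, mirroring how the slide's D_KL maps highlight informative electrode locations and time offsets.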
Paper: https://www.frontiersin.org/articles/10.3389/fnins.2015.00217/full
Download video here: https://www.frontiersin.org/articles/file/downloadfile/141498_supplementary-materials_videos_1_avi/octetstream/Video%201.AVI/1/141498
Regina Dugan, Facebook F8 5/2017: https://www.youtube.com/watch?v=kCDWKdmwhUI
Two Methods: ASR versus Direct Synthesis
Signal processing: A/D conversion, artifact handling, feature extraction, …

Brain-to-Text: automatic speech recognition (ASR) with dictionary (I: /i/; you: /j/ /u/; we: /v/ /e/) and language model (I am, you are, we are) produces TEXT.

Brain-to-Speech: feature transform (ECoG-2-MFCC) plus vocoding produces SPEECH directly.
Two Synthesis Approaches
(1) High-quality speech output with a dual neural network approach:
– A densely connected convolutional NN maps ECoG to spectral features
– WaveNet transforms the spectral features into a speech waveform

(2) Fast and straightforward codebook-based unit-selection approach.

References:
Herff, Johnson, Diener, Shih, Krusienski, Schultz: Towards direct speech synthesis from ECoG: A Pilot Study, EMBC 2016.
Herff, Diener, Mugler, Slutzky, Krusienski, Schultz: Brain-To-Speech: Direct Synthesis of Speech from Intracranial Brain Activity Associated with Speech Production, BCI 2018.
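The unit-selection idea of approach (2) can be sketched in a few lines: pair each training ECoG frame with its simultaneous audio unit, then, at synthesis time, pick for every new ECoG frame the nearest training frame and concatenate the matched audio units. Details (distance, unit length) are illustrative assumptions only:

```python
import numpy as np

def synthesize(ecog_test, ecog_train, audio_units):
    """Nearest-neighbor codebook lookup: ECoG frame -> matched audio unit."""
    out = []
    for frame in ecog_test:
        idx = np.argmin(np.linalg.norm(ecog_train - frame, axis=1))
        out.append(audio_units[idx])  # reuse the speech unit of the best match
    return np.concatenate(out)

rng = np.random.default_rng(2)
ecog_train = rng.normal(size=(50, 16))    # training ECoG feature frames
audio_units = rng.normal(size=(50, 80))   # time-aligned audio snippets (codebook)
ecog_test = ecog_train[[3, 7, 7]] + 0.01  # near-copies of known frames
speech = synthesize(ecog_test, ecog_train, audio_units)
```

Because it is a pure lookup, this approach needs no training beyond storing the codebook, which is what makes it fast compared with the neural network approach in (1).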
3rd Brain-to-Speech
Video available here: https://www.youtube.com/watch?v=GDsb99rDDgo
Science Magazine January 2019
M Angrick, C Herff, E Mugler, MC Tate, MW Slutzky, DJ Krusienski, T Schultz: Speech Synthesis from ECoG using Densely Connected 3D Convolutional Neural Networks, bioRxiv Nov 2018; Journal of Neural Engineering, Volume 16, Number 3, 2019.
Direct Speech Synthesis
MesgaraniColumbia University
Angrick, CSL
https://www.sciencemag.org/news/2019/01/artificial-intelligence-turns-brain-activity-speech
Functional Electrical Stimulation (FES)
Vision: Speech rehabilitation. Transform neural (ECoG) signals into FES patterns that stimulate the articulatory muscles that produce speech (restoration of articulatory movements). [Figure: three-stage closed loop with feedback.]
Step 1: Analyze Articulatory Muscles
• EMG during voluntary sound production
• Vowel ([a], [e]) and vowel-consonant-vowel ([ava], [eve]) utterances
• Analyze the mean frame-based power of each muscle
• Take the zygomaticus major for this proof of concept:
– draws the mouth angle laterally, thus relevant to V[v]V
– reasonably isolated, reducing the risk of cross-stimulation
– distant from critical areas like the eye and the carotid artery
Schultz, Angrick, Diener, Küster, Meier, Krusienski, Brumberg (2019): Towards Restauration of articulatory Movements: Functional Electric Stimulation of Orofacial Muscles, IEEE Engineering in Medicine and Biology
Step 2: Stimulate Muscle & Record Audio

Setup: participant (wearing headphones)
(a) Voluntary neutral excitation to a vowel: a, e
(b) Starting from the vowel → VCV

• Stimulation of the zygomaticus major to raise the corner of the mouth (MOTIONSTIM 8, Medel)
• Synchronized recording (LSL) of audio / video / stimulus markers; the LSL recorder stores stimulus data and timestamps
• Three tasks:
– voluntary articulation
– self-controlled stimulation [SCS]
– externally controlled stimulation [ECS]
Step 3: Experimental Results
1. Visual inspection of the acoustics
2. Pearson correlations with the reference, computed on (a) the pre-stimulus interval only and (b) the complete spectrogram

Reasonable correlations between pre-stimulus and voluntary productions (spectral coefficients), but lower correlations for the complete spectrogram ⇒ the auditory output during FES does not completely mimic the [v] production.
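The correlation measure in step 2 can be sketched as a per-bin Pearson correlation between two spectrograms, averaged over bins. Synthetic data below; the actual evaluation details are in the cited paper:

```python
import numpy as np

def mean_spectrogram_correlation(spec_a, spec_b):
    """Pearson correlation per spectral bin, averaged over all bins."""
    rs = [np.corrcoef(spec_a[:, k], spec_b[:, k])[0, 1]
          for k in range(spec_a.shape[1])]
    return float(np.mean(rs))

rng = np.random.default_rng(3)
ref = rng.normal(size=(100, 20))                 # reference spectrogram (frames x bins)
noisy = ref + 0.3 * rng.normal(size=ref.shape)   # distorted version of the reference
r = mean_spectrogram_correlation(ref, noisy)     # high, but below 1.0
```

A correlation near 1.0 would mean the stimulated production spectrally mimics the voluntary one; the slide's result (lower correlation on the full spectrogram) indicates the FES output only partially matches.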
Biosignal-based Spoken Communication

Special Issue T-ASL, Dec 2017, Vol 25, Number 12
Editors: Tanja Schultz, Thomas Hueber, Dean J Krusienski, Jonathan Brumberg
In total 13 papers covering the field, including the survey "Biosignal-based Spoken Communication: A Survey".
Mission: Enhance our understanding of human cognition and information processing ("minds"); of robotics and artificial intelligence ("machines"); and of deep mediatization and datafication ("media"). Across these fields, the high-profile area MMM combines theory-based modeling with machine learning for the development of technical systems as well as their use and appropriation in everyday life.

• 35+ professors, 300+ researchers
• 7 departments: engineering, mathematics, biology, psychology, health & care, law
• 9 institutions (external: DFKI, ifib, Fraunhofer, DLR; internal: ZeTeM, ZKW, TZI, ZeMKI)
• EXC Chair: H. Li
• Speakers: Schultz / Beetz
https://www.uni-bremen.de/en/minds-media-machines.html
CRC EASE
Everyday Activity Science and Engineering (EASE) - Advance understanding of how everyday human activities can be mastered by robotic agents
Our part (CSL): Build structured models of Human Everyday Activities (EA)
• Acquire multimodal data of human everyday activities
• Interpret and abstract episodic memories into structured activity models
• Disambiguate vaguely formulated instructions by NLP (mining, games)
• Learn and apply descriptive and causal models of EA

https://ease-crc.org (Speaker: M. Beetz)
Biosignals Lab
EASE-BASE: Biosignals Acquisition Space and Environment
– Real interaction space (5 × 4.5 × 3 m), Biosignals Lab @ CSL
– Virtual and augmented reality interaction: Vive, HoloLens, VirtuSphere

Acquire time-aligned, cross-modal, high-dimensional biosignals of EA:
• Speech & audio: near and far microphones
• Motion: 12-camera OptiTrack, mobile inertial sensors
• Muscle activity: 256-channel electromyography (EMG), mobile EMG (Plux)
• Eye movement: mobile Pupil headset for eye tracking
• Brain activity: 64-channel EEG, mobile OpenBCI, 3 Tesla fMRI scanner (virtually linked)
Time-aligned Biosignals

Joint framework to integrate all contributions and interconnect with openEASE:
• Time-aligned biosignals (LabStreamingLayer) in a score file ("Partitur")
• Common concepts and ontology
• Consistent annotations to guide model learning (ELAN)
• Learning models from speech, video, brain activity, …

100 subjects; the data will be made freely available for research purposes.
Mason, Meier, Ahrens, Fehr, Herrmann, Putze, Schultz: Human Activities Data Collection and Labeling using a Think-aloud Protocol in a Table Setting Scenario, IROS, Workshop on Activity Data Sources for Robotics, 2018.
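Time-aligning independently clocked streams, as the LabStreamingLayer timestamps make possible, amounts to nearest-timestamp lookup. The sketch below is not the pylsl API, just an illustration of the alignment step with assumed sampling rates (1000 Hz EMG, 25 Hz video):

```python
import numpy as np

def align_nearest(ts_fast, ts_slow):
    """For every slow-stream timestamp, index of the nearest fast-stream sample."""
    idx = np.searchsorted(ts_fast, ts_slow)      # ts_fast must be sorted
    idx = np.clip(idx, 1, len(ts_fast) - 1)
    left_closer = (ts_slow - ts_fast[idx - 1]) < (ts_fast[idx] - ts_slow)
    return np.where(left_closer, idx - 1, idx)   # pick the closer neighbor

ts_emg = np.arange(0.0, 1.0, 0.001)   # 1000 Hz EMG clock (seconds)
ts_video = np.arange(0.0, 1.0, 0.04)  # 25 Hz video clock (seconds)
idx = align_nearest(ts_emg, ts_video)  # EMG sample index per video frame
```

With such index maps, every annotation tier and every stream can be resampled onto a common timeline before being written to the score file.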
Learning Models from Speech
Manual and automatic transcription & annotation& semantic structuring (ASR, Named Entity, NLP)
Concurrent and retrospective THINK-ALOUDS
Do they yield different insights?
Compare vocabulary, semantics, timings, behavioral effects, …
Meier, Mason et al. [Mon-P-1-D], 11:00-13:00, Interspeech 2019
Meier, Mason, Putze, Schultz: Comparative Analysis of Think-aloud Methods, Interspeech 2019
Learning Models from Brain Activity
• Feasibility and proof-of-concept studies using 3 Tesla fMRI and high-density EEG devices
• First-person perspective (eye tracker) of the table-setting task from the Biosignals Lab
• Premise: perceiving an action leads to brain activation patterns comparable to those of executing the action oneself
• Identify task-related components of human motor cognition using semantic annotations
• fMRI-constrained EEG ICA analysis of spatio-temporal brain activity during table setting
Florian Ahrens, Thorsten Fehr, Manfred Herrmann
Automatic Annotation from Brain Activity
The LOC (lateral gyrus fusiformis) and the lateral inferior temporal lobe are sometimes called the "interface between visual input and object recognition".

Florian Ahrens, Thorsten Fehr, Manfred Herrmann
Complex Interactions

Model spatial-temporal dynamics in interaction and decision processes; transfer into economic contexts:
• Subject A in the Biosignals Lab makes food choices
• Subject B in the MRI scanner evaluates the choices (healthy/unhealthy)
• The decision of subject B is transmitted in real time to subject A

(NeuroImaging Lab and BioSignals Lab)
Linking Labs in Bremen
Biosignals Lab Neuroimaging Lab
Linking Labs: Next Steps

[Map figure linking sites and facilities: Bremen, Gießen, Karlsruhe; Biosignals Lab, fMRI imaging, behavioral studies in 40 cabins, VR/AR & eye-tracking lab, big data & analytics infrastructure]
Conclusions

1. Think beyond acoustics
• Leverage speech-related biosignals
• Capture speech prior to the airborne signal
• Capture speech even if no acoustic output is available or desirable
2. Cross-modal, time-aligned biosignals to automatically structure everyday activities
3. Linking labs
• Enable novel experimental studies
• Establish joint protocols and standards
• Train students across disciplines
CSL PhD students involved: Miguel Angrick, Lorenz Diener, Dominic Heger (now Zalando), Christian Herff (now Maastricht Uni), Matthias Janke (now Schaeffler), Szu-Chen Jou (now Synopsys), Dennis Küster, Celeste Mason, Moritz Meier, Udhay Nallasamy (now Apple), Felix Putze, Dominic Telaar (now Apple), Michael Wand (now IDSIA), Jochen Weiner
CSL-Visitors: Hugo Gamboa (Nove Lisbon, Plux Inc.), Keigo Nakamura (NAIST), Kishore Prahallad (IIT Hyderabad), Arthur Toth (CMU), S. Umesh (IIT Madras)
CSL Collaborators:
Michael Beetz, University Bremen, Germany
Alan W Black, Carnegie Mellon University, Pittsburgh, PA
Andrea Bottin, OT Bioelettronica, Italy
Jonathan Brumberg, University of Kansas, KS
Hugo Da Silva, Plux Inc., Portugal
Cuntai Guan and Haizhou Li, NUS/NTU, Singapore
Jose A. Gonzalez, University of Malaga, Spain
Christian Herff, Maastricht University, Netherlands
Manfred Herrmann, Florian Ahrens, University Bremen
Thomas Hueber, CNRS/GIPSA, France
Dean J. Krusienski, Virginia Commonwealth U, Richmond, VA
Gerwin Schalk, P. Brunner, Wadsworth Center, Albany, NY
Jerry Shih, Epilepsy Center, UC San Diego Health, CA
Marc W Slutzky, Northwestern University, Chicago, IL
Michael Wand and Jürgen Schmidhuber, IDSIA, Switzerland
Thank you!