Brain Machine Interfaces: Modeling Strategies for Neural Signal Processing
Jose C. Principe, Ph.D. – Distinguished Professor, ECE and BME
Computational NeuroEngineering Laboratory
Electrical and Computer Engineering Department
University of Florida
www.cnel.ufl.edu
Brain Machine Interfaces (BMI)
A man-made device that either substitutes a sensory input to the brain, repairs functional communication between brain regions, or translates intention of movement.
Types of BMIs
Sensory (Input BMI): Providing sensory input to form percepts when natural systems are damaged.
Ex: Visual, Auditory Prosthesis
Motor (Output BMI): Converting motor intent to a command output (physical device, damaged limbs)
Ex: Prosthetic Arm Control
Cognitive BMI: Interpreting internal neuronal state to deliver feedback to the neural population.
Ex: Epilepsy, DBS Prosthesis
Computational Neuroscience and Technology developments are playing a larger role in the development of each of these areas.
J.R. Wolpaw et al. 2002
BCI (BMI) bypasses the brain’s normal pathways of peripheral nerves (and muscles)
General Architecture
[Diagram: the BRAIN and the MACHINE are connected through a neural interface and a physical interface; intent is decoded from brain activity into machine action, while stimuli are coded back into percepts.]
The Fundamental Concept
             Stimulus          Neural response
Coding       Given             To be inferred
Decoding     To be inferred    Given
We need to understand how the brain processes information.
Levels of Abstraction for Neurotechnology
Brain is an extremely complex system
~10^12 neurons
~10^15 synapses
Specific interconnectivity
Tapping into the Nervous System
The choice and availability of brain signals and recording methods can greatly influence the ultimate performance of the BMI.
The level of BMI performance may be attributed to selection of electrode technology, choice of model, and methods for extracting rate, frequency, or timing codes.
http://ida.first.fhg.de/projects/bci/bbci_official/
Choice of Scale for Neuroprosthetics

Recording method             Bandwidth (approximate)   Localization
Scalp electrodes             0 – ~80 Hz                Volume conduction
Electrocorticogram (ECoG)    0 – ~500 Hz               Cortical surface
Implanted electrodes         0 – ~7 kHz                Single neuron
Spatial Resolution of Recordings (Moran)
Florida Multiscale Signal Acquisition: EEG (least invasive), ECoG, and microelectrodes (highest resolution).
NRG IRB approval for human studies; NRG IACUC approval for animal studies.
Develop an experimental paradigm with a nested hierarchy for studying neural population dynamics.
Common BMI-BCI Methods
BMIs --- Invasive; work with intention of movement
• Spike trains, field potentials, ECoG
• Very specific, potentially better performance
BCIs --- Noninvasive; subjects must learn how to control their brain activity
• EEG
• Very small bandwidth
Computational Neuroscience
Integration of probabilistic models of information processing with the neurophysiological reality of brain anatomy, physiology and purpose.
We need to abstract the details of the “wetware”, ask what the purpose of the function is, and then quantify it in mathematical terms.
Difficult but very promising. One issue is that biological evolution is a legacy system!
BMI research is an example of a computational neuroscience approach.
How to put it together?
NeoCortical Brain Areas Related to Movement
Posterior Parietal (PP) – Visual to motor transformation
Premotor (PM) and Dorsal Premotor (PMd) – Planning and guidance (visual inputs)
Primary Motor (M1) – Initiates muscle contraction
Ensemble correlations – local in time – are averaged with global models.
Computational Models of Neural Intent
Two different levels of neurophysiology realism
Black Box models – no realism; a functional relation between input and desired response
Generative Models – minimal realism, state space models using neuroscience elements
Signal Processing Approaches with Black Box Modeling
Accessing 2 types of signals (cortical activity and behavior) leads us to a general class of I/O models.
Data for these models are rate codes obtained by binning spikes on 100 msec windows.
• Optimal FIR filter – linear, feedforward
• TDNN – nonlinear, feedforward
• Multiple FIR filters – mixture of experts
• RMLP – nonlinear, dynamic
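As a concrete illustration of the rate-code preprocessing mentioned above, here is a minimal sketch of binning sorted spike times into 100 msec counts (the function and variable names are hypothetical, not from the original material):

```python
import numpy as np

def bin_spike_counts(spike_times_per_neuron, bin_width=0.1, duration=None):
    """Convert lists of spike times (seconds) into an (n_bins x n_neurons) count matrix."""
    if duration is None:
        duration = max(max(ts) for ts in spike_times_per_neuron if len(ts) > 0)
    edges = np.arange(0.0, duration + bin_width, bin_width)
    counts = np.column_stack(
        [np.histogram(ts, bins=edges)[0] for ts in spike_times_per_neuron]
    )
    return counts  # one row per 100 msec bin, one column per neuron
```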
Linear Model (Wiener-Hopf solution)
Consider a set of spike counts from M neurons and a hand position vector d ∈ R^C (C is the output dimension, C = 2 or 3). The spike count of each neuron is embedded by an L-tap discrete time-delay line. The input vector for a linear model at time instance n is then composed as x(n) = [x1(n), x1(n−1), …, x1(n−L+1), x2(n), …, xM(n−L+1)]^T ∈ R^{LM}, where xi(n−j) denotes the spike count of neuron i at time instance n−j.
A linear model estimating hand position at time instance n from the embedded spike counts can be described as
where yc is the c-coordinate of the estimated hand position by the model, wji is a weight on the connection from xi(n-j) to yc, and bc is a bias for the c-coordinate.
$$ y_c(n) = \sum_{i=1}^{M}\sum_{j=0}^{L-1} w_{ji}^{c}\, x_i(n-j) + b_c $$
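A minimal sketch of forming the L-tap embedded input vector and evaluating the linear model above (names are hypothetical; `counts` is the binned matrix from the earlier sketch, and the bias is kept as a separate vector here):

```python
import numpy as np

def embed_input(counts, n, L):
    """Stack the last L bins of every neuron at time n into x(n) of length L*M."""
    # counts: (time, M) array of binned spike counts; requires n >= L-1
    window = counts[n - L + 1 : n + 1][::-1]   # lags 0 .. L-1 (most recent first)
    return window.T.reshape(-1)                # [x1(n)..x1(n-L+1), x2(n)..xM(n-L+1)]

def linear_predict(counts, n, L, W, b):
    """y(n) = W^T x(n) + b, with W of shape (L*M, C) and b of shape (C,)."""
    return W.T @ embed_input(counts, n, L) + b
```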
Linear Model (Wiener-Hopf solution)
In a matrix form, we can rewrite the previous equation as
$$ \mathbf{y} = \mathbf{W}^{T}\mathbf{x} $$

where y is a C-dimensional output vector and W is a weight matrix of dimension (LM+1) × C. Each column of W consists of $[w^{c}_{10}, w^{c}_{11}, w^{c}_{12}, \ldots, w^{c}_{1\,L-1}, w^{c}_{20}, w^{c}_{21}, \ldots, w^{c}_{M0}, \ldots, w^{c}_{M\,L-1}]^{T}$.
[Diagram: each neuronal input x_1(n) … x_M(n) passes through an L-tap delay line (z^{-1} elements); the weighted taps are summed to produce the outputs y_x(n), y_y(n), y_z(n).]
Linear Model (Wiener-Hopf solution)
For the MIMO case, the weight matrix in the Wiener filter system is estimated by
$$ \mathbf{W}_{Wiener} = \mathbf{R}^{-1}\mathbf{P} $$

R is the correlation matrix of the neural spike inputs, with dimension (LM) × (LM),

$$ \mathbf{R} = \begin{bmatrix} \mathbf{r}_{11} & \mathbf{r}_{12} & \cdots & \mathbf{r}_{1M} \\ \mathbf{r}_{21} & \mathbf{r}_{22} & \cdots & \mathbf{r}_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{r}_{M1} & \mathbf{r}_{M2} & \cdots & \mathbf{r}_{MM} \end{bmatrix} $$

where r_ij is the L × L cross-correlation matrix between neurons i and j (i ≠ j), and r_ii is the L × L autocorrelation matrix of neuron i.

P is the (LM) × C cross-correlation matrix between the neuronal bin counts and the hand position,

$$ \mathbf{P} = \begin{bmatrix} \mathbf{p}_{11} & \cdots & \mathbf{p}_{1C} \\ \mathbf{p}_{21} & \cdots & \mathbf{p}_{2C} \\ \vdots & & \vdots \\ \mathbf{p}_{M1} & \cdots & \mathbf{p}_{MC} \end{bmatrix} $$

where p_ic is the cross-correlation vector between neuron i and the c-coordinate of hand position. The estimated weights W_Wiener are optimal under the assumption that the error is drawn from a white Gaussian distribution and the data are stationary.
Linear Model (Wiener-Hopf solution)
The predictor W_Wiener minimizes the mean square error (MSE) cost function

$$ J = E\big[\|\mathbf{e}\|^{2}\big], \qquad \mathbf{e} = \mathbf{d} - \mathbf{y} $$

Each sub-block matrix r_ij can be further decomposed as

$$ \mathbf{r}_{ij} = \begin{bmatrix} r_{ij}(0) & r_{ij}(1) & \cdots & r_{ij}(L-1) \\ r_{ij}(1) & r_{ij}(0) & \cdots & r_{ij}(L-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_{ij}(L-1) & r_{ij}(L-2) & \cdots & r_{ij}(0) \end{bmatrix} $$

where r_ij(τ) represents the correlation between neurons i and j with time lag τ. Assuming that the random process x_i(k) is ergodic for all i, we can utilize the time average operator to estimate the correlation function. In this case, the estimate of the correlation between two neurons, r_ij(m−k), can be obtained by

$$ r_{ij}(m-k) = E[x_i(m)\,x_j(k)] \approx \frac{1}{N}\sum_{n=1}^{N} x_i(n-m)\,x_j(n-k) $$
Linear Model (Wiener-Hopf solution)
The cross-correlation vector pic can be decomposed and estimated in the same way, substituting xj by the desired signal cj.
From the equations, it can be seen that rij(m-k) is equal to rji(k-m). Since these two correlation estimates are positioned at the opposite side of the diagonal entries of R, the equality leads to a symmetric R.
The symmetric matrix R, then, can be inverted effectively by using the Cholesky factorization. This factorization reduces the computational complexity for the inverse of R from O(N3) using Gaussian elimination to O(N2) where N is the number of parameters.
$$ p_{ic}(m-k) = E[x_i(m)\,c(k)] \approx \frac{1}{N}\sum_{n=1}^{N} x_i(n-m)\,c(n-k) $$
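The whole Wiener-Hopf procedure condenses to a few lines of NumPy. A minimal sketch, assuming `X` holds the embedded input vectors row by row (with a bias column appended) and `D` the hand positions, and using the Cholesky-based solve discussed above; the small ridge term is an added assumption used only to keep the factorization well conditioned:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def wiener_solution(X, D, ridge=1e-6):
    """Solve W = R^-1 P with R = X^T X / N and P = X^T D / N via Cholesky."""
    N = X.shape[0]
    R = X.T @ X / N                       # (LM+1) x (LM+1) input correlation matrix
    P = X.T @ D / N                       # (LM+1) x C input/desired cross-correlation
    c, low = cho_factor(R + ridge * np.eye(R.shape[0]))  # symmetric factorization
    return cho_solve((c, low), P)         # weight matrix W, shape (LM+1, C)
```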
Optimal Linear Model
Normalized LMS with weight decay is a simple starting point.
Four multiplies, one divide and two adds per weight update
Ten tap embedding with 105 neurons
A 1-D topology contains 1,050 parameters (3,150 for 3-D)
Alternatively, the Wiener solution
$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\eta\, e(n)\,\mathbf{x}(n)}{\gamma + \|\mathbf{x}(n)\|^{2}} $$

$$ \mathbf{w} = (\mathbf{R} + \delta\mathbf{I})^{-1}\mathbf{p} $$
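A minimal sketch of one normalized LMS update with weight decay for a single output coordinate (the decay constant `beta` and the other names are assumptions for illustration):

```python
import numpy as np

def nlms_weight_decay_step(w, x, d, eta=0.1, gamma=1e-6, beta=1e-4):
    """One NLMS update with weight decay; returns the new weights and the error."""
    e = d - w @ x                               # prediction error
    w_new = (1.0 - eta * beta) * w              # weight decay (shrinkage toward zero)
    w_new += (eta / (gamma + x @ x)) * e * x    # normalized LMS correction
    return w_new, e
```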
Time-Delay Neural Network (TDNN)
The first layer is a bank of linear filters followed by a nonlinearity.
• The number of delays is chosen to span 1 second.
• $y(n) = \sum_j w_j\, f\big(\sum_i w_{ij}\, x_i(n)\big)$
• Trained with backpropagation.
• The topology contains a ten-tap embedding and five hidden PEs – 5,255 weights (1-D).
Multiple Switching Local Models
• Multiple adaptive filters compete to win the modeling of a signal segment.
• The structure is trained all together with normalized LMS / weight decay.
• Needs to be adapted for input-output modeling.
• We selected 10 FIR experts of order 10 (105 input channels).
Recurrent Multilayer Perceptron (RMLP) – Nonlinear “Black Box”
• Spatially recurrent dynamical systems.
• Memory is created by feeding back the states of the hidden PEs.
• Feedback allows for continuous representations on multiple timescales.
• If unfolded into a TDNN, it can be shown to be a universal mapper in R^n.
Trained with backpropagation through time
$$ \mathbf{y}_1(t) = f\big(\mathbf{W}_1\mathbf{x}(t) + \mathbf{W}_f\,\mathbf{y}_1(t-1) + \mathbf{b}_1\big) $$
$$ \mathbf{y}_2(t) = \mathbf{W}_2\,\mathbf{y}_1(t) + \mathbf{b}_2 $$
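A minimal sketch of the RMLP forward pass defined by these two equations (the weight shapes and the tanh nonlinearity are assumptions for illustration):

```python
import numpy as np

def rmlp_forward(X, W1, Wf, b1, W2, b2):
    """Run the recurrent MLP over a sequence X of shape (T, n_inputs)."""
    y1 = np.zeros(W1.shape[0])                 # hidden state, fed back in time
    outputs = []
    for x_t in X:
        y1 = np.tanh(W1 @ x_t + Wf @ y1 + b1)  # y1(t) = f(W1 x(t) + Wf y1(t-1) + b1)
        outputs.append(W2 @ y1 + b2)           # y2(t) = W2 y1(t) + b2
    return np.asarray(outputs)
```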
Motor Tasks Performed
[Figure: hand trajectories for Task 1 and Task 2.]
Data
• 2 owl monkeys – Belle, Carmen
• 2 Rhesus monkeys – Aurora, Ivy
• 54-192 sorted cells
• Cortices sampled: PP, M1, PMd, S1, SMA
• Neuronal activity rate and behavior are time-synchronized and downsampled to 10 Hz
Model Building Techniques
Train the adaptive system with neuronal firing rates (100 msec bins) as the input and hand position as the desired signal.
• Training – 20,000 samples (~33 minutes of neuronal firing).
• Freeze weights and present novel neuronal data.
• Testing – 3,000 samples (~5 minutes of neuronal firing).
Results (Belle)
Model          Signal-to-error ratio (dB)     Correlation coefficient
               (average)    (max)             (average)    (max)
LMS            0.8706       7.5097            0.6373       0.9528
Kalman         0.8987       8.8942            0.6137       0.9442
TDNN           1.1270       3.6090            0.4723       0.8525
Local Linear   1.4489       23.0830           0.7443       0.9748
RNN            1.6101       32.3934           0.6483       0.9852
Based on 5 minutes of test data, computed over 4 sec windows (training on 30 minutes)
Physiologic Interpretation
When the fitting error is above chance, a sensitivity analysis can be performed by computing the Jacobian of the output vector with respect to each neuronal input i
This calculation indicates which inputs (neurons) are most important for modulating the output/trajectory of the model.
Computing Sensitivities Through the Models

Feedforward RMLP equations:
$$ \mathbf{y}_1(t) = f\big(\mathbf{W}_1\mathbf{x}(t) + \mathbf{W}_f\,\mathbf{y}_1(t-1) + \mathbf{b}_1\big), \qquad \mathbf{y}_2(t) = \mathbf{W}_2\,\mathbf{y}_1(t) + \mathbf{b}_2 $$

General form of the RMLP sensitivity, where D(t) is the diagonal matrix of derivatives of f evaluated at time t and the influence of earlier inputs is propagated through the feedback weights W_f:
$$ \frac{\partial \mathbf{y}_2(t)}{\partial \mathbf{x}(t)^{T}} = \mathbf{W}_2\,\mathbf{D}(t)\,\mathbf{W}_1, \qquad \frac{\partial \mathbf{y}_2(t)}{\partial \mathbf{x}(t-1)^{T}} = \mathbf{W}_2\,\mathbf{D}(t)\,\mathbf{W}_f\,\mathbf{D}(t-1)\,\mathbf{W}_1, \;\ldots $$

Feedforward linear equation and the general form of the linear sensitivity:
$$ \mathbf{y}(t) = \mathbf{W}\mathbf{x}(t), \qquad \frac{\partial \mathbf{y}(t)}{\partial \mathbf{x}(t)^{T}} = \mathbf{W} $$
Identify the neurons that affect the output the most.
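As a concrete illustration, a minimal sketch of ranking neurons by sensitivity for the linear case above; summarizing each neuron by the sum of its absolute weights across lags and output coordinates is an assumption made here for illustration (the bias row, if present, is assumed to be last and is excluded):

```python
import numpy as np

def neuron_sensitivities(W, M, L):
    """Aggregate |weights| per neuron from W of shape (L*M [+1], C)."""
    S = np.abs(W[:M * L]).reshape(M, L, -1)   # group rows by neuron (L lags each)
    return S.sum(axis=(1, 2))                 # one sensitivity score per neuron

# ranking = np.argsort(neuron_sensitivities(W, M, L))[::-1]  # most sensitive first
```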
Data Analysis : The Effect of Sensitive Neurons on Performance
[Figure: reconstructed test trajectories using the 10 highest-sensitivity, 84 intermediate-sensitivity, and 10 lowest-sensitivity neurons, and the cumulative probability of the 3D error radius (mm) over movements (hits) of the test trajectory for each subset and for all neurons.]
[Figure: sensitivity of ranked neurons for Primate 1, Session 1; the highest-sensitivity cells include neurons 93, 19, 29, 5, 4, 84, 7, 26, 45, and 104.]
The decay trend appears in all animals and behavioral paradigms.
Directional Tuning vs. Sensitivity of ranked cells
Tuning Sensitivity
Significance: Sensitivity analysis through trained models automatically delivers deeply tuned cells that span the space.
Reaching Movement Segmentation
[Figure: X, Y, and Z hand trajectories for a reaching movement segmented into Rest-to-Food, Food-to-Mouth, and Mouth-to-Rest phases.]
How does each cortical area contribute to the reconstruction of this movement?
Cortical Contributions Belle Day 2
[Figure: reconstructed movement trajectories using every combination of cortical areas as model input (Area 1, Area 2, Area 3, Area 4, Areas 12, 13, 14, 23, 24, 34, 123, 124, 134, 234, and 1234).]
Area 1 = PP, Area 2 = M1, Area 3 = PMd, Area 4 = M1 (right)
Train 15 separate RMLPs with every combination of cortical input.
Is there enough information in spike trains for modeling movement?
• Analysis is based on the time-embedded model; the correlation with the desired signal is based on a linear filter output for each neuron.
• Utilize a non-stationary tracking algorithm; parameters are updated by LMS.
• Build a spatial filter that is adaptive in real time; a sparse structure based on regularization enables selection.
• Temporal filters are adapted by LMS; spatial coefficients are adapted by on-line LAR (Kim et al., MLSP, 2004).
Architecture
[Diagram: each neuronal channel x_i(n), i = 1…M, passes through an L-tap delay line with weights w_i1 … w_iL to produce a channel output y_i(n); the channel outputs are combined by spatial coefficients c_1 … c_M to form the estimate d̂(n).]
Training Algorithms
Tap weights for every time lag are updated by LMS:
$$ w_{ij}(n+1) = w_{ij}(n) + 2\eta\, e(n)\, x_i(n-j) $$
Then, the spatial filter coefficients are obtained by an on-line version of least angle regression (LAR) (Efron et al., 2004):
1. Initialize $\beta = 0$, so $r = y - X\beta = y$; find $j = \arg\max_i |x_i^{T} r|$.
2. With $r = y - x_j\beta_j$, adjust $\beta_j$ until some other predictor $x_k$ satisfies $|x_k^{T} r| = |x_j^{T} r|$.
3. With $r = y - (x_j\beta_j + x_k\beta_k)$, adjust $\beta_j$ and $\beta_k$ until another predictor $x_q$ satisfies $|x_q^{T} r| = |x_k^{T} r| = |x_j^{T} r|$, and so on.
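A minimal sketch of the two adaptation steps, with hypothetical array names; scikit-learn's batch `Lars` is used here only as a stand-in for the on-line LAR of Kim et al., which this sketch does not reproduce:

```python
import numpy as np
from sklearn.linear_model import Lars

def lms_channel_step(w_i, x_lags, d, eta=0.01):
    """LMS update of one channel's L tap weights; returns new weights and the channel output."""
    y_i = w_i @ x_lags
    e = d - y_i
    return w_i + 2 * eta * e * x_lags, y_i

def fit_spatial_coeffs(Y, d, n_active=10):
    """Sparse spatial coefficients c over the channel outputs Y (T x M) via batch LARS."""
    model = Lars(n_nonzero_coefs=n_active).fit(Y, d)
    return model.coef_
```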
Application to BMI Data – Tracking Performance
Application to BMI Data – Neuronal Subset Selection
[Figures: tracking of the hand trajectory (z-coordinate), and the neuronal channel indices selected during the early and late parts of the record.]
Generative Models for BMIs
Use partial information about the physiological system, normally in the form of states.
They can be either applied to binned data or to spike trains directly.
Here we will only cover the spike train implementations.
Difficulty of spike train analysis: spike trains are point processes, i.e., all the information is contained in the timing of events, not in the amplitude of the signals!
Build an adaptive signal processing framework for BMI decoding in the spike domain.
Features of spike-domain analysis:
• Binning window size is not a concern.
• Preserves the randomness of the neuron behavior.
• Provides more understanding of neuron physiology (tuning) and interactions at the cell-assembly level.
• Infers kinematics online.
• Deals with nonstationarity.
• More computation at millisecond time resolution.
Goal
Recursive Bayesian Approach
State time-series model and observation model:
$$ \mathbf{X}_t \sim F_t(\mathbf{X}_{t-1}, \mathbf{v}_{t-1}) \qquad \text{(state, used for prediction)} $$
$$ \mathbf{Z}_t \sim H_t(\mathbf{X}_t, \mathbf{n}_t) \qquad \text{(observation, used for updating)} $$
The goal is P(state | observation).
Recursive Bayesian approach
State space representation
First equation (system model) defines a first order Markov process.
Second equation (observation model) defines the likelihood of the observations p(zt|xt) . The problem is completely defined by the prior distribution p(x0).
Although the posterior distribution p(x0:t|u1:t,z1:t) constitutes the complete solution, the filtering density p(xt|u1:t, z1:t) is normally used for on-line problems.
The general solution methodology is to integrate over the unknown variables (marginalization).
$$ \mathbf{x}_t = f(\mathbf{x}_{t-1}) + \mathbf{v}_{t-1} $$
$$ \mathbf{z}_t = h(\mathbf{u}_t, \mathbf{x}_t) + \mathbf{n}_t $$
Recursive Bayesian approach
There are two stages to update the filtering density: Prediction (Chapman Kolmogorov)
System model p(xt|xt-1) propagates into the future the posterior density
Update
Uses Bayes rule to update the filtering density. The following equations are needed in the solution.
Prediction (Chapman–Kolmogorov):
$$ p(\mathbf{x}_t \mid \mathbf{u}_{1:t-1}, \mathbf{z}_{1:t-1}) = \int p(\mathbf{x}_t \mid \mathbf{x}_{t-1})\, p(\mathbf{x}_{t-1} \mid \mathbf{u}_{1:t-1}, \mathbf{z}_{1:t-1})\, d\mathbf{x}_{t-1} $$

Update (Bayes rule):
$$ p(\mathbf{x}_t \mid \mathbf{u}_{1:t}, \mathbf{z}_{1:t}) = \frac{p(\mathbf{z}_t \mid \mathbf{x}_t, \mathbf{u}_t)\, p(\mathbf{x}_t \mid \mathbf{u}_{1:t-1}, \mathbf{z}_{1:t-1})}{p(\mathbf{z}_t \mid \mathbf{u}_{1:t}, \mathbf{z}_{1:t-1})} $$

with the transition, likelihood, and normalization densities given by
$$ p(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \int p(\mathbf{x}_t \mid \mathbf{v}_{t-1}, \mathbf{x}_{t-1})\, p(\mathbf{v}_{t-1} \mid \mathbf{x}_{t-1})\, d\mathbf{v}_{t-1} = \int \delta\big(\mathbf{x}_t - f(\mathbf{x}_{t-1}) - \mathbf{v}_{t-1}\big)\, p(\mathbf{v}_{t-1})\, d\mathbf{v}_{t-1} $$
$$ p(\mathbf{z}_t \mid \mathbf{x}_t, \mathbf{u}_t) = \int \delta\big(\mathbf{z}_t - h(\mathbf{u}_t, \mathbf{x}_t) - \mathbf{n}_t\big)\, p(\mathbf{n}_t)\, d\mathbf{n}_t $$
$$ p(\mathbf{z}_t \mid \mathbf{u}_{1:t}, \mathbf{z}_{1:t-1}) = \int p(\mathbf{z}_t \mid \mathbf{x}_t, \mathbf{u}_t)\, p(\mathbf{x}_t \mid \mathbf{u}_{1:t-1}, \mathbf{z}_{1:t-1})\, d\mathbf{x}_t $$
Kalman filter for BMI decoding
The kinematic state is mapped by a linear neuron tuning function to the firing rate (a continuous observation). With Gaussian noise and linear prediction and observation models, P(state | observation) is obtained by the prediction and updating recursions. [Wu et al. 2006]
For Gaussian noises and linear prediction and observation models, there is an analytic solution: the Kalman filter.
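A minimal sketch of one Kalman predict/update step for this linear-Gaussian setting; the matrix names F, Q, H, R and their shapes follow the usual conventions and are assumptions made here for illustration:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle: x is the kinematic state, z the firing-rate vector."""
    # Prediction through the linear kinematic model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the continuous (firing-rate) observation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```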
Particle Filter for BMI decoding
The kinematic state is mapped by a linear-exponential neuron tuning function to the firing rate (a continuous observation). The posterior P(state | observation) is non-Gaussian, and the prediction and updating recursions are carried out with a particle filter. [Brockwell et al. 2004]
In general the integrals need to be approximated by sums using Monte Carlo integration, with a set of samples drawn from the posterior distribution of the model parameters.
State estimation framework for BMI decoding in spike domain
The kinematic state evolves in time and is mapped by the neural tuning function to the multi-channel spike-train observation:
$$ \mathbf{x}_k = F_{k-1}(\mathbf{x}_{k-1}, \mathbf{v}_{k-1}), \qquad \mathbf{z}_k = H_k(\mathbf{x}_k, \mathbf{n}_k) $$
[Figure: example spike train and hand velocity over time (ms); decoding maps the spike trains back to kinematics through the kinematic dynamic model.]
Key idea: work with the probability of spike firing, which is a continuous random variable.
Adaptive algorithm for point processes
The kinematic state is mapped by a nonlinear neuron tuning function to a spike train (a point-process observation, Poisson model). With Gaussian/linear assumptions on the state model and a nonlinear observation model, P(state | observation) is obtained by the prediction and updating recursions. [Brown et al. 2001]
Monte Carlo Sequential estimation for point process
The kinematic state is mapped by a nonlinear neuron tuning function to a spike train (a point-process observation). With a non-Gaussian, nonlinear state model and a nonlinear observation model, the full posterior PDF P(state | observation) is estimated sequentially through prediction and updating. [Wang et al. 2006]
Monte Carlo sequential estimation framework for BMI decoding in spike domain
STEP 1. Preprocessing
1. Generate spike trains from the stored spike times with a 10 ms interval (99.62% binary trains).
2. Synchronize all the kinematics with the spike trains.
3. Assign the kinematic vector to reconstruct: X = [position velocity acceleration]'
(more information; the instantaneous state avoids error accumulation; less computation).
STEP 2- Neural tuning analysis
Encoding (tuning): kinematics → neural spike trains.
An example of a tuned neuron
Metric: Tuning depth:
how differently does a neuron fire across directions?
D=(max-min)/std (firing rate)
[Figure: polar plot of the directional tuning of neuron 72 (tuning depth 1).]
The preferred direction is summarized by the circular mean,
$$ \text{circular mean} = \arg\Big(\frac{1}{N}\sum_{i} N_i\, e^{j\theta_i}\Big) $$
Step 2- Information Theoretic Metric of Tuning
$$ I(\text{spike}; \text{angle}) = \sum_{\text{angle}} p(\text{angle}) \sum_{\text{spike}=0,1} p(\text{spike} \mid \text{angle})\, \log_2 \frac{p(\text{spike} \mid \text{angle})}{p(\text{spike})} $$

where the kinematic direction angle and the neural spikes define the joint statistics, and

$$ p(\text{spike}=1 \mid \text{angle}) = \frac{p(\text{spike}=1, \text{angle})}{p(\text{angle})} $$
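A minimal sketch of estimating this tuning information from a binary spike train and discretized direction labels (the binning choices and names are assumptions for illustration):

```python
import numpy as np

def tuning_information(spikes, angle_bins):
    """I(spike; angle) in bits, for a 0/1 spike train and a matching array of angle bins."""
    info = 0.0
    p_spike = np.array([1 - spikes.mean(), spikes.mean()])      # p(spike=0), p(spike=1)
    for a in np.unique(angle_bins):
        sel = angle_bins == a
        p_angle = sel.mean()
        p_cond = np.array([1 - spikes[sel].mean(), spikes[sel].mean()])
        for s in (0, 1):
            if p_cond[s] > 0 and p_spike[s] > 0:
                info += p_angle * p_cond[s] * np.log2(p_cond[s] / p_spike[s])
    return info
```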
Step 2- Information theoretic Tuning depths for 3 kinds of kinematics (log axis)
Step 2- Tuning Function Estimation
Neural firing model. Assumption: the generation of the spikes depends only on the kinematic vector we choose. The velocity is passed through a linear filter, a nonlinearity f, and a Poisson spike-generation model:
$$ \lambda_t = f(\mathbf{k}\cdot\mathbf{v}_t), \qquad \text{spike}_t \sim \text{Poisson}(\lambda_t) $$
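A minimal sketch of this linear-nonlinear-Poisson tuning model; the exponential nonlinearity and all names are assumptions made here for illustration:

```python
import numpy as np

def lnp_generate(V, k, dt=0.01, rng=np.random.default_rng(0)):
    """Generate spike counts from velocity samples V (T x dim) with linear filter k."""
    drive = V @ k                        # linear filter stage
    rate = np.exp(drive)                 # nonlinearity f (exponential assumed here)
    return rng.poisson(rate * dt)        # Poisson spike generation per 10 ms bin
```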
Step 2- Linear Filter Estimation
• Spike Triggered Average (STA)
• Geometry interpretation
$$ \mathbf{k} = \big(E[\mathbf{v}\mathbf{v}^{T}] + \lambda\mathbf{I}\big)^{-1}\big(E[\mathbf{v} \mid \text{spike}] - E[\mathbf{v}]\big) $$

i.e., a whitened spike-triggered average; the term λI regularizes the inverse of the velocity correlation matrix.
[Figure: spike-triggered velocities (VpS) and all velocities (Vp) for neuron 72, projected onto the first two principal components.]
Step 2- Nonlinear f estimation
Step 2- Diversity of neural nonlinear properties
Ref: Paradoxical cold [Hensel et al. 1959]
Step 2- Estimated firing probability and generated spikes
Step 3: Sequential Estimation Algorithm for Point Process Filtering
• Consider the neuron as an inhomogeneous Poisson point process with rate $\lambda(t_k) = \exp(\mathbf{k}\cdot\mathbf{v}_k)$.
• Observing $\Delta N_k$ spikes in an interval $\Delta t$, the probability of the spike observation is
$$ P(\Delta N_k \mid \mathbf{x}_k, \mathbf{H}_k) = \big(\lambda(t_k \mid \mathbf{x}_k, \mathbf{H}_k)\,\Delta t\big)^{\Delta N_k}\, \exp\big(-\lambda(t_k \mid \mathbf{x}_k, \mathbf{H}_k)\,\Delta t\big) $$
• The probability of observing an event in $\Delta t$ is given by the conditional intensity
$$ \lambda\big(t \mid \mathbf{x}(t), \boldsymbol{\theta}(t), \mathbf{H}(t)\big) = \lim_{\Delta t \to 0} \frac{\Pr\big(N(t+\Delta t) - N(t) = 1 \mid \mathbf{x}(t), \boldsymbol{\theta}(t), \mathbf{H}(t)\big)}{\Delta t} $$
• The posterior of the state vector, given an observation $\Delta N_k$, is
$$ p(\mathbf{x}_k \mid \Delta N_k, \mathbf{H}_k) = \frac{P(\Delta N_k \mid \mathbf{x}_k, \mathbf{H}_k)\, p(\mathbf{x}_k \mid \mathbf{H}_k)}{p(\Delta N_k \mid \mathbf{H}_k)} $$
• And the one-step prediction density (Chapman–Kolmogorov) is
$$ p(\mathbf{x}_k \mid \mathbf{H}_k) = \int p(\mathbf{x}_k \mid \mathbf{x}_{k-1})\, p(\mathbf{x}_{k-1} \mid \Delta N_{k-1}, \mathbf{H}_{k-1})\, d\mathbf{x}_{k-1} $$
Step 3: Sequential Estimation Algorithm for Point Process Filtering
• Monte Carlo methods are used to estimate the integrals. Let $\{\mathbf{x}^{i}_{0:k}, w^{i}_{k}\}_{i=1}^{N_S}$ represent a random measure on the posterior density, and let $q(\mathbf{x}_{0:k} \mid N_{1:k})$ represent the proposal density.
• The posterior density can then be approximated by
$$ p(\mathbf{x}_{0:k} \mid N_{1:k}) \approx \sum_{i=1}^{N_S} w^{i}_{k}\, k\big(\mathbf{x}_{0:k} - \mathbf{x}^{i}_{0:k}\big) $$
where k(·) is a kernel (a Dirac delta in the basic particle filter).
• Generating samples from $q(\mathbf{x}_{0:k} \mid N_{1:k})$ and using the principle of importance sampling,
$$ w^{i}_{k} \propto \frac{p(\mathbf{x}^{i}_{0:k} \mid N_{1:k})}{q(\mathbf{x}^{i}_{0:k} \mid N_{1:k})}, \qquad w^{i}_{k} \propto w^{i}_{k-1}\, \frac{p(\Delta N_k \mid \mathbf{x}^{i}_{k})\, p(\mathbf{x}^{i}_{k} \mid \mathbf{x}^{i}_{k-1})}{q(\mathbf{x}^{i}_{k} \mid \mathbf{x}^{i}_{k-1}, N_{1:k})} $$
• By MLE we can find the maximum of the posterior, or use direct estimation with kernels of the mean and variance:
$$ \tilde{\mathbf{x}}_k \approx \sum_{i=1}^{N_S} w^{i}_{k}\, \mathbf{x}^{i}_{k}, \qquad \tilde{V}_k \approx \sum_{i=1}^{N_S} w^{i}_{k}\, \big(\mathbf{x}^{i}_{k} - \tilde{\mathbf{x}}_k\big)\big(\mathbf{x}^{i}_{k} - \tilde{\mathbf{x}}_k\big)^{T} $$
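A minimal sketch of this sequential Monte Carlo estimation with point-process (Poisson) observations, using the state-transition model as the proposal so the weight update reduces to the spike likelihood; the model matrices, the exponential tuning, and all names are assumptions made here for illustration:

```python
import numpy as np

def particle_filter_spikes(spike_counts, F, Q, k_tuning, n_particles=500, dt=0.01,
                           rng=np.random.default_rng(0)):
    """Decode a kinematic state sequence from binned spike counts (T x n_neurons)."""
    dim = F.shape[0]
    particles = np.zeros((n_particles, dim))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for dN in spike_counts:
        # Prediction: propagate particles through the kinematic state model
        particles = particles @ F.T + rng.multivariate_normal(np.zeros(dim), Q, n_particles)
        # Updating: weight by the Poisson likelihood of the observed spike counts
        rates = np.exp(particles @ k_tuning) * dt          # (n_particles, n_neurons)
        log_lik = (dN * np.log(rates + 1e-12) - rates).sum(axis=1)
        weights *= np.exp(log_lik - log_lik.max())
        weights /= weights.sum()
        estimates.append(weights @ particles)              # posterior mean (expectation)
        # Resample when the effective sample size collapses
        if 1.0 / (weights ** 2).sum() < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return np.asarray(estimates)
```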
Posterior density at a time index
[Figure: posterior density of velocity at time index 45.092 s, showing the desired velocity, the velocities obtained by sequential estimation (collapse and MLE), and the velocity obtained by adaptive filtering.]
Step 3: Causality concerns
$$ I(lag) = \sum_{KX} p\big(KX(lag)\big) \sum_{\text{spike}=0,1} p\big(\text{spike} \mid KX(lag)\big)\, \log_2 \frac{p\big(\text{spike} \mid KX(lag)\big)}{p(\text{spike})} $$
For 185 neurons, average delay is 220.108 ms
[Figure 3-14: mutual information I(spike, KX) as a function of time delay (0–500 ms) for 5 neurons (neurons 80, 72, 99, 108, and 77).]
Step 3: Information Estimated Delays
Step 4: Monte Carlo sequential kinematics estimation
Each particle's kinematic state is mapped to a firing probability through the neural tuning function, $\lambda^{i}_{t} = f(\mathbf{k}\, X^{i}_{t})$, and the posterior P(state | observation) is non-Gaussian.

Prediction:
$$ X^{i}_{t} = F\, X^{i}_{t-1} + v^{i}_{t-1} $$

Updating (one weight update per observed spike train $\Delta N^{(j)}_{t}$):
$$ w^{i}_{t} = w^{i}_{t-1}\, p\big(\Delta N^{(j)}_{t} \mid \mathbf{x}^{i}_{t}\big) $$

Posterior approximation:
$$ p\big(\mathbf{x}_{0:t} \mid N^{(j)}_{1:t}\big) \approx \sum_{i=1}^{N} w^{i}_{t}\, k\big(\mathbf{x}_{0:t} - \mathbf{x}^{i}_{0:t}\big), \qquad p(\mathbf{x}_k \mid N_{1:k}) \approx \sum_{i=1}^{N} W^{i}_{k}\, k\big(\mathbf{x}_k - \mathbf{x}^{i}_{k}\big) $$
Reconstruct the kinematics from neuron spike trains
[Figure: desired vs. reconstructed position (Px, Py), velocity (Vx, Vy), and acceleration (Ax, Ay) over a test segment, with per-panel correlation coefficients for the expectation (cc_exp) and MLE (cc_MLE) estimates.]
Table 3-2 Correlation Coefficients between the Desired Kinematics and the Reconstructions
             Position           Velocity           Acceleration
CC           x        y         x        y         x        y
Expectation  0.8161   0.8730    0.7856   0.8133    0.5066   0.4851
MLE          0.7750   0.8512    0.7707   0.7901    0.4795   0.4775
Table 3-3 Correlation Coefficient Evaluated by the Sliding Window
             Position                        Velocity                        Acceleration
CC           x               y               x               y               x               y
Expectation  0.8401±0.0738   0.8945±0.0477   0.7944±0.0578   0.8142±0.0658   0.5256±0.0658   0.4460±0.1495
MLE          0.7984±0.0963   0.8721±0.0675   0.7805±0.0491   0.7918±0.0710   0.4950±0.0430   0.4471±0.1399
Results comparison
[Sanchez, 2004]
Conclusion
• Our results and those from other laboratories show it is possible to extract intent of movement for trajectories from multielectrode array data.
• The current results are very promising, but the setups have limited difficulty, and the performance seems to have reached a ceiling at an uncomfortable CC < 0.9
• Recently, spike-based methods have been developed in the hope of improving performance, but these models face many difficulties.
• Experimental paradigms to move the field beyond the present level need to address issues of:
– Training (no desired response in paraplegic patients)
– How to cope with coarse sampling of the neural population
– How to include more neurophysiology knowledge in the design