Basis Functions
The SPM MfD course
12th Dec 2007
Elvina Chu
Introduction
• What is a basis function?
• What do they do in MRI?
• How are they useful in SPM?
Basis
• Mathematical term to describe any point in space
• Euclidean, i.e. the x, y, z co-ordinates
[Figure: the vector v = 4i + 2j in the (x, y) plane, built from the orthonormal basis vectors i and j]
Vectors are produced because each function in the function space can be represented as a linear combination of basis functions. In linear algebra terms the basis is orthonormal, i.e. its elements have the same unit length and are mutually perpendicular.
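The idea can be sketched numerically. Below is a minimal NumPy illustration (not part of SPM) of expressing a point as a linear combination of an orthonormal basis, using the v = 4i + 2j example above:

```python
import numpy as np

# Orthonormal basis for the plane: unit vectors along x and y
i = np.array([1.0, 0.0])
j = np.array([0.0, 1.0])

# Orthonormal: same unit length, perpendicular elements
assert np.isclose(i @ i, 1.0) and np.isclose(j @ j, 1.0)
assert np.isclose(i @ j, 0.0)

# The point from the figure, as a linear combination of the basis
v = 4 * i + 2 * j            # array([4., 2.])

# The coefficients are recovered by projecting v onto each basis vector
coeffs = np.array([v @ i, v @ j])
print(coeffs)                # [4. 2.]
```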
Function
Uses in SPMUses in SPM
• Spatial normalisation to register different Spatial normalisation to register different subjects to the same co-ordinate systemsubjects to the same co-ordinate system
• Ease of reporting in standard spaceEase of reporting in standard space
• Useful for reporting what happens Useful for reporting what happens generically to individuals in functional generically to individuals in functional imagingimaging
Uses in SPM
• Basis functions are used to model the haemodynamic response
[Figure: two example basis sets: finite impulse response and Fourier]
Fourier Basis
• % signal change with time
Fourier analysis: the complex wave at the top can be decomposed into the sum of the three simpler waves shown below: f(t) = h1(t) + h2(t) + h3(t)
Provides a reasonably good fit to the impulse response, although it lacks an undershoot.
Fewer functions are required to capture the typical range of impulse responses than with other basis sets, which reduces the degrees of freedom in the design matrix.
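As a sketch of what a Fourier basis set looks like, the snippet below builds a constant term plus sine/cosine pairs over a post-stimulus window; the order and window length are illustrative choices for this sketch, not SPM defaults:

```python
import numpy as np

def fourier_basis(t, order=3, window=32.0):
    """A Fourier basis set over a post-stimulus window: a constant
    term plus sine/cosine pairs of increasing frequency.
    (Order and window length here are illustrative, not SPM defaults.)"""
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * k * t / window))
        cols.append(np.cos(2 * np.pi * k * t / window))
    return np.column_stack(cols)

t = np.arange(0, 32.0, 0.5)      # 64 samples across the window
B = fourier_basis(t)
print(B.shape)                   # (64, 7): constant + 3 sine/cosine pairs
```

Because the sines and cosines complete whole cycles over the window, the columns are mutually orthogonal, which is what makes the decomposition into simple waves work.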
Gamma Function
Canonical haemodynamic response function (HRF)
Typical BOLD response to an impulse stimulation
The response peaks approximately 5 sec after stimulation, and is followed by an undershoot.
Canonical HRF
Temporal derivative
Dispersion derivative
The canonical HRF is a “typical” BOLD impulse response characterised by two gamma functions.
The temporal derivative can capture differences in the latency of the peak response.
The dispersion derivative can capture differences in the duration of the peak response.
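A short sketch of these three functions: the canonical HRF built as the difference of two gamma densities, its temporal derivative, and a numerical dispersion derivative. The parameter values (peak shape 6, undershoot shape 16, ratio 1/6) mirror commonly cited SPM conventions but should be treated as illustrative here:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density (shape/scale parameterisation)."""
    x = t / scale
    return x ** (shape - 1) * np.exp(-x) / (gamma_fn(shape) * scale)

def canonical_hrf(t, peak=6.0, under=16.0, ratio=6.0, scale=1.0):
    """Difference of two gamma densities: one for the early peak and a
    smaller, later one for the undershoot (illustrative parameters)."""
    h = gamma_pdf(t, peak, scale) - gamma_pdf(t, under, scale) / ratio
    return h / h.max()

dt = 0.1
t = np.arange(0, 32, dt)
hrf = canonical_hrf(t)

# Temporal derivative: captures shifts in the latency of the peak
temporal = np.gradient(hrf, dt)

# Dispersion derivative: numerical derivative w.r.t. the width (scale)
eps = 0.01
dispersion = (canonical_hrf(t, scale=1.0 + eps) - hrf) / eps

print(round(t[np.argmax(hrf)], 1))   # peak a few seconds after onset
```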
Design matrix
3 regressors are used to model each condition
The three basis functions are:
1. Canonical HRF
2. Its derivative with respect to time
3. Its derivative with respect to dispersion
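A toy construction of such regressors: convolve a stimulus onset vector with each basis function. The crude HRF-shaped bump below, and the use of first and second numerical differences as stand-ins for the temporal and dispersion derivatives, are simplifications for illustration:

```python
import numpy as np

# A crude HRF-shaped bump at 1 s resolution (illustrative, not SPM's)
t = np.arange(0.0, 20.0)
bump = t ** 5 * np.exp(-t)
bump /= bump.max()

# Three basis functions per condition: the bump, plus first and second
# numerical differences standing in for the temporal and dispersion
# derivatives (a simplification for this sketch)
basis = np.column_stack([bump,
                         np.gradient(bump),
                         np.gradient(np.gradient(bump))])

# Stimulus onsets for one condition over a 100-scan run
stick = np.zeros(100)
stick[[10, 40, 70]] = 1.0

# One regressor per basis function: convolve the onsets with each one
X = np.column_stack([np.convolve(stick, basis[:, k])[:100] for k in range(3)])
print(X.shape)    # (100, 3): three regressors modelling one condition
```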
[Figure panels: Left | Right | Mean]
These plots show the haemodynamic response at a single voxel. The left plot shows the HRF as estimated using the simple model; the lack of fit is corrected on the right using a more flexible model with basis functions.
Comparison of the fitted response
Summary
• Basis functions identify position in space
• Used to model the HRF of the BOLD response to an impulse stimulation in fMRI
• SPM allows you to choose from 4 different basis functions
Multiple Regression Analysis &
Correlated Regressors
Hanneke den Ouden
Methods for Dummies 2007
12/12/2007
Overview
General
Regression analysis
Multiple regression
Collinearity / correlated regressors
Orthogonalisation of regressors in SPM
Regression analysis
Regression analysis examines the relation of a dependent variable Y to specified independent variables X:
Y = aX + b
If the model fits the data well:
• R² is high (it reflects the proportion of variance in Y explained by the regressor X)
• the corresponding p value will be low
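A minimal numerical illustration of the fit and of R², on synthetic data (the true slope and intercept below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)   # Y = aX + b plus noise

# Least-squares fit of Y = aX + b
a, b = np.polyfit(x, y, 1)

# R^2: the proportion of variance in Y explained by the regressor X
residuals = y - (a * x + b)
r2 = 1 - residuals.var() / y.var()
print(a, b, r2)   # a near 2, b near 1, r2 close to 1
```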
Multiple regression analysis
Multiple regression characterises the relationship between several independent variables (or regressors), X1, X2, X3 etc., and a single dependent variable, Y:
Y = β1X1 + β2X2 + … + βLXL + ε
The X variables are combined linearly and each has its own regression coefficient β (weight).
The βs reflect the independent contribution of each regressor, X, to the value of the dependent variable, Y, i.e. the proportion of the variance in Y accounted for by each regressor after all other regressors are accounted for.
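The fit can be sketched with ordinary least squares on synthetic data (the two regressors, their true betas, and the noise level are all made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X1, X2 = rng.normal(size=n), rng.normal(size=n)
Y = 3.0 * X1 - 2.0 * X2 + rng.normal(0, 0.1, size=n)

# Design matrix with a constant column; solve for the betas by least squares
X = np.column_stack([X1, X2, np.ones(n)])
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(betas)   # close to [3, -2, 0]: each regressor's independent contribution
```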
Multicollinearity
Multiple regression results are sometimes difficult to interpret:
• the overall p value of a fitted model is very low, i.e. the model fits the data well
• but the individual p values for the regressors are high, i.e. none of the X variables has a significant impact on predicting Y
How is this possible? It is caused when two (or more) regressors are highly correlated, a problem known as multicollinearity.
Multicollinearity
Are correlated regressors a problem?
• No, when you want to predict Y from X1 and X2, because R² and p will be correct.
• Yes, when you want to assess the impact of individual regressors, because the individual p values can be misleading: a p value can be high even though the variable is important. In practice this will nearly always be the case.
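A small simulation makes the point: with two nearly identical regressors, the overall fit (R²) is excellent while the split between the individual betas is poorly determined. The data here are synthetic and chosen only to exaggerate the effect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
X1 = rng.normal(size=n)
X2 = X1 + rng.normal(0, 0.01, size=n)     # X2 almost identical to X1
Y = X1 + X2 + rng.normal(0, 0.1, size=n)  # true combined effect is 2

X = np.column_stack([X1, X2])
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Prediction is fine: the overall fit is excellent...
r2 = 1 - ((Y - X @ betas) ** 2).sum() / ((Y - Y.mean()) ** 2).sum()
print(r2 > 0.99)             # True

# ...but the individual betas are poorly determined: almost any pair
# with beta1 + beta2 near 2 fits nearly as well
print(betas, betas.sum())    # the sum is stable, the split is not
```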
General Linear Model &
Correlated Regressors
General Linear Model and fMRI
Y = X·β + ε
• Observed data: Y is the BOLD signal at various time points at a single voxel.
• Design matrix: X contains the components which explain the observed data Y, e.g. the different stimuli and movement regressors.
• Parameters (or betas): β defines the contribution of each component of the design matrix to the value of Y.
• Error (or residuals): ε is any variance in Y that cannot be explained by the model X·β.
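A toy GLM fit along these lines (the design-matrix columns and true betas below are invented for illustration, not real SPM regressors):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 120   # number of scans

# Toy design matrix: two stimulus regressors, one movement regressor,
# and a constant term (all invented for this sketch)
X = np.column_stack([
    (rng.random(T) < 0.2).astype(float),   # stimulus A present/absent
    (rng.random(T) < 0.2).astype(float),   # stimulus B present/absent
    rng.normal(size=T),                    # movement parameter
    np.ones(T),                            # constant
])

beta_true = np.array([2.0, 0.5, 1.0, 10.0])
Y = X @ beta_true + rng.normal(0, 0.3, size=T)   # observed BOLD = X.β + ε

# Ordinary least-squares estimate of the betas
beta_hat = np.linalg.pinv(X) @ Y
residuals = Y - X @ beta_hat     # the part of Y the model cannot explain
print(beta_hat)                  # close to beta_true
```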
Collinearity example
Experiment: which areas of the brain are active in reward processing? Subjects press a button to get a reward when they spot a red dot amongst green dots.
Model to be fit:
Y = β1X1 + β2X2 + ε
where:
Y = BOLD response
X1 = button press (movement)
X2 = response to reward
Collinearity example
Which areas of the brain are active in reward processing?
The regressors are linearly dependent (correlated), so variance attributable to an individual regressor may be confounded with the other regressor(s).
As a result we don't know which part of the BOLD response is explained by the movement and which by the response to getting a reward.
This may lead to misinterpretations of activations in certain brain areas. Primary motor cortex involved in reward processing?
We can't answer the question.
How to deal with collinearity
Avoid it:
• Design the experiment so that the independent variables are uncorrelated.
• Use common sense.
• Use the toolbox "Design Magic" (multicollinearity assessment for fMRI for SPM), URL: http://www.matthijs-vink.com/tools.html. It allows you to assess the multicollinearity in your fMRI design by calculating the amount of factor variance that is also accounted for by the other factors in the design (expressed as R²), and to reduce correlations between regressors through the use of high-pass filters.
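That diagnostic (the amount of each regressor's variance accounted for by the other regressors, expressed as R²) can also be computed directly from the design matrix; a minimal sketch, with made-up regressors:

```python
import numpy as np

def regressor_r2(X):
    """For each column of X: the R^2 from regressing it on all the other
    columns, i.e. the share of its variance already accounted for by the
    remaining regressors. Values near 1 signal multicollinearity."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        A = np.column_stack([np.delete(X, j, axis=1), np.ones(n)])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        out[j] = 1 - resid.var() / X[:, j].var()
    return out

rng = np.random.default_rng(4)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(0, 0.2, size=500)   # strongly correlated with x1
x3 = rng.normal(size=500)                # independent of both
r2 = regressor_r2(np.column_stack([x1, x2, x3]))
print(np.round(r2, 2))   # high for x1 and x2, near 0 for x3
```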
How to deal with collinearity II
Orthogonalise the correlated regressor variables using factor analysis (like PCA):
• this will produce linearly independent regressors and corresponding factor scores
• these factor scores can subsequently be used instead of the original correlated regressor values
However, the meaning of these factors is rather unclear, so SPM does not do this.
Instead, SPM does something called serial orthogonalisation (note that this happens only within each condition, i.e. for each condition and its associated parametric modulators, if there are any).
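Serial orthogonalisation amounts to a left-to-right Gram-Schmidt pass: each regressor has the part explained by the earlier ones removed. A sketch of the idea (not SPM's actual implementation):

```python
import numpy as np

def serial_orthogonalise(X):
    """Left-to-right Gram-Schmidt pass over the columns of X: each
    column has the part explained by all earlier columns removed.
    A sketch of the idea, not SPM's actual implementation."""
    X = X.astype(float).copy()
    for j in range(1, X.shape[1]):
        earlier = X[:, :j]
        coef, *_ = np.linalg.lstsq(earlier, X[:, j], rcond=None)
        X[:, j] -= earlier @ coef   # subtract the projection
    return X

rng = np.random.default_rng(5)
x1 = rng.normal(size=50)
x2 = x1 + rng.normal(size=50)            # correlated with x1
Xo = serial_orthogonalise(np.column_stack([x1, x2]))
print(abs(Xo[:, 0] @ Xo[:, 1]) < 1e-8)   # True: X2* is orthogonal to X1
```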
Serial Orthogonalisation
Y = β1X1
β1 = 1.5
When we have only one regressor, things are simple.
Serial Orthogonalisation
Y = β1X1 + β2X2
β1 = 1
β2 = 1
When we have two correlated regressors, things become difficult. The value of β1 is now smaller, so X1 now explains less of the variance, as X2 explains some of the variance that X1 used to explain.
Serial Orthogonalisation
Y = β1X1 + β2*X2*
β1 = 1.5
β2* = 1
We now orthogonalise X2 with respect to X1, and call this X2*:
• β1 now again has the original value it had when X2 was not included
• β2* has the same value as β2
• X2* is a different regressor from X2!
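The three cases can be reproduced numerically. The synthetic data below are constructed (my choice, not from the slides' data) so that the betas come out as in the three slides above: β1 = 1.5 alone, then 1 and 1 jointly, then 1.5 and 1 after orthogonalisation:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)               # correlated with x1
y = 1.0 * x1 + 1.0 * x2 + rng.normal(0, 0.1, n)  # constructed so the betas
                                                 # match the slides' values

def fit(X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# X1 alone: beta1 also absorbs the variance shared with X2
b_alone = fit(x1[:, None])                        # beta1 ~ 1.5

# X1 and X2 together: beta1 drops because X2 claims the shared variance
b_both = fit(np.column_stack([x1, x2]))           # ~ [1, 1]

# Orthogonalise X2 w.r.t. X1 and refit: beta1 returns to ~1.5,
# while beta2* keeps the value beta2 had in the joint model
x2_star = x2 - (x1 @ x2) / (x1 @ x1) * x1
b_orth = fit(np.column_stack([x1, x2_star]))      # ~ [1.5, 1]

print(b_alone[0], b_both, b_orth)
```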
Serial Orthogonalisation in SPM
Regressors are orthogonalised from left to right in the design matrix.
The order in which you put parametric modulators is important!
Put the 'most important' modulators first (i.e. the ones whose meaning you don't want to change).
If you add an orthogonalised regressor, the values of the preceding regressors do not change:
• the regressor you orthogonalise to (X1) does not change
• the regressor you are orthogonalising (X2) does change
Plot the orthogonalised regressors to see what it is you are actually estimating.
Conclusions
• Correlated regressors can be a big problem when analysing / interpreting your data.
• Try to design your experiment so that you avoid correlated regressors.
• Estimate how much your regressors are correlated so you know what you're getting yourself into.
• If you cannot avoid them:
  - think about the order of the regressors in your design matrix
  - look at what the regressors look like after orthogonalisation
Sources
Will Penny & Klaas Stephan
Rik Henson's slides: www.mrc-cbu.cam.ac.uk/Imaging/Common/rikSPM-GLM.ppt
Previous years' presenters' slides