8/9/2019 Eye Movement Model
Contents

1 Naming Experiments
  1.1 Experiment 1a
    1.1.1 Method
    1.1.2 Results
    1.1.3 Discussion
  1.2 Experiment 1b
    1.2.1 Method
    1.2.2 Results
    1.2.3 Discussion

2 Models
  2.1 Predictions by the models
    2.1.1 The preview model
    2.1.2 The priming model
    2.1.3 Predictions: Preview model
    2.1.4 Predictions: Priming Model
    2.1.5 Partial preview

A Derivations
  A.1 PDF of a race between two processes
  A.2 Probability of a process to win the race
  A.3 Loops within a state
  A.4 Expected value of the convolution of two geometric distributions

B Tables
  B.1 Experiment Materials

C Figures
  C.1 Number of refixations
Chapter 1
Naming Experiments
1.1 Experiment 1a
1.1.1 Method
Participants
Fourteen participants took part in the experiment. One participant was removed from the eye movement data analysis because of problems measuring the eye movements during the experiment. All participants were students at the University of Nijmegen or the Hogeschool Nijmegen.
Materials
On each trial, participants were presented with two line drawings of objects, which they named in a noun phrase coordination, like De stoel en de hark (the chair and the rake).
The names of the objects were either high frequent or low frequent, where frequencies were determined using the CELEX database. Twelve drawings of objects with high frequent names and twelve with low frequent names were used. The names of the objects are listed in Appendix A. From the two sets of objects, twenty-four object pairs were created, so that the frequency of the object names was orthogonally varied. There were six pairs in which the first object name was high frequent and the second object name low frequent.
The other object pairs included six high-high frequent combinations, six low-high, and six low-low combinations. Object names within pairs were neither phonologically nor semantically related. The object pairs are listed in Appendix A, together with the six practice picture names.
Design
Each pair was presented eight times during the experiment, where each second time a pair was presented its order of presentation was reversed. That is, if first tree and house was presented, the second time house and tree was shown to the participant. Table 1.1 shows how the order of presentation of objects was varied over participants.
Table 1.1: Presentation of pairs in the experiment.

Blocks  Participant 1  Participant 2  Participant 3  . . .
1       Normal         Reversed       Normal
2       Reversed       Normal         Reversed
3       Normal         Reversed       Normal
4       Reversed       Normal         Reversed
5       Normal         Reversed       Normal
6       Reversed       Normal         Reversed
7       Normal         Reversed       Normal
8       Reversed       Normal         Reversed
The participants were presented with 8 × 24 = 192 test trials, to which some practice trials at the beginning of each block were added. After every two blocks there was a short break, which could be used to adjust the eye tracker settings.

The presentation order of pairs within each pair of blocks was randomized across participants, with the restriction that no picture was allowed to appear in two successive trials.
Apparatus
The experiment was controlled by a Compaq 486 computer. The pictures were presented on a ViewSonic 17PS screen. The participants' speech was recorded using a Sennheiser ME400 microphone and a SONY DTC55 DAT recorder. Speech onset latencies were measured using a voice key. Eye movements were monitored using an SMI EyeLink-Hispeed 2D head-mounted eye tracking system (SensoMotoric Instruments GmbH, Teltow, Germany). Throughout the experiment, the position of the right eye was tracked with a sampling rate of 4 ms. The eye tracker's spatial accuracy is better than 0.5 degrees. Three thresholds were used to detect the onsets and offsets of saccades: motion (0.2 degrees), velocity (30 degrees/second), and acceleration (8000 degrees/second²). The duration of a fixation was the time period between two successive saccades. The position of a fixation was defined as the means of the x- and y-coordinates of the positions recorded during the fixation.
Procedure
The participants were tested individually. They were seated in a quiet room approximately 65 cm in front of a monitor. They first received a booklet including drawings of the practice and experimental objects. The names of the objects were printed next to them. In a written instruction that the participants received together with the picture booklet, they were told that they would later see pairs of objects which they should name, from left to right, in an utterance like stoel en boek (chair and book). The instructions included a picture of an object pair and an example of the expected description.
After the participants had read the instruction and studied the picture booklet, the head band of the eye-tracking system was mounted and the system was calibrated. For the calibration, a grid of three by three positions had been identified. During a calibration trial a fixation target appeared once, in random order, on each of these positions for one second. The participants were asked to fixate upon each target until the next target appeared. After
the calibration trial, the estimated positions of the participant's fixations and the distances from the fixation targets were graphically displayed to the experimenter. Calibration was considered adequate if there was at least one fixation within 1.5 degrees of each fixation target. When calibration was inadequate, the procedure was repeated, sometimes after adjustment of the cameras. Successful calibration was followed by a validation trial. For the participants, this trial did not differ from the calibration trial, but the data collected during the validation trial were used to estimate the participant's gaze positions, and the error (i.e., the distance between the estimated gaze position and the target position) was measured. The validation was considered adequate if the average error was below 1 degree and the worst error below 1.5 degrees. Depending on the result, the calibration and validation trials were repeated or the main part of the experiment started. Calibration and validation were repeated after each test block, when needed (because of, for example, movements of the participant).
At the beginning of each test trial in the main experiment, a fixation point was presented in the centre of the frame for the left object for 800 ms. Our earlier experiments, in which two objects were presented, had shown that on more than 90% of the trials the participants naming object pairs first looked at the left and then at the right object. This strong tendency to inspect the objects in the order of mention was reinforced by the presentation of the fixation point. Following a blank interval of 200 ms, an object pair was presented for 3000 ms, and the participant named the objects. After another blank interval of 300 ms the next trial began.
The entire session took about 45 minutes.
Data Analysis
For the off-line analyses of the eye movements, graphical software was used that displayed for each trial the locations of the speaker's fixations as dots superimposed upon the line drawing. The first step in the analyses was to classify the fixations as falling on the left or right object or elsewhere. A
fixation was categorized as pertaining to an object when it lay inside the contours of the object or less than 1.5 degrees away from one of its outer contours.
Next, the speakers' gaze patterns were examined. At the beginning of each trial a fixation point had appeared at the location where the left object would be shown a little later. Consequently, the speakers usually fixated upon the left object at picture onset. The cases in which participants did not look at the left object during the first fixation were removed from the analysis. Also the cases in which participants did not look at one of the two objects were not included in the analysis. Viewing times were determined by subtracting in-times from out-times, where in-times were set to zero when fixation onset on the object occurred before picture onset. Out-times were defined as the onset of the saccade to the second object.
1.1.2 Results
Trials in which the participant named one of the objects incorrectly or hesitated, and trials in which the voice key was triggered incorrectly, were removed from the data analysis. The practice trials were also left out of the analysis.
Speech onset latencies
The pattern in the speech onset latencies is illustrated in Figure 1.1, in which an interaction plot of the mean reaction time is shown. The same kind of plot for percentage correct is shown in Figure 1.2. It can be seen that the data pattern is similar for both dependent variables, which is evidence against a speed-accuracy trade-off.
The interaction effect is highly significant (F1(1, 13) = 57.80, p < 0.001; F2(1, 22) = 31.69, p < 0.001). The simple effect of context within high frequency of the first object is significant both for participants and items (t1(13) = 2.90, p = 0.012; t2(11) = 2.54, p = 0.027), as is the effect of context within low frequency (t1(13) = 6.37, p < 0.001; t2(11) = 5.09, p < 0.001). The only
[Figure: interaction plot; y-axis: mean speech onset latency (680-780 ms); x-axis: frequency of the 1st object (High, Low); separate lines for frequency of the 2nd object (High, Low).]

Figure 1.1: Mean speech onset latencies for each of the conditions.
[Figure: interaction plot; y-axis: mean proportion correct (.93-.98); x-axis: frequency of object 1 (High, Low); separate lines for frequency of object 2 (High, Low).]

Figure 1.2: Mean proportion correct for each of the conditions.
simple effect that is not significant is the effect of first object frequency within a high frequency context (t1(13) = 1.36, p = 0.198; t2(22) = 0.61, p = 0.548). There is a significant effect of first object frequency within a low frequency context (t1(13) = 6.48, p < 0.001; t2(22) = 4.35, p < 0.001).
... significant, namely the effect of first object name frequency within low frequency of the second object (t1(12) = 3.13, p = 0.009; t2(22) = 2.97, p = 0.007), and the effect of second object frequency within high frequent first object names (t1(12) = 2.53, p = 0.026; t2(11) = 3.25, p = 0.008). All the t-values corresponding to the other simple effects tests are smaller than 1.5.
1.1.3 Discussion
A clear interaction effect of the frequencies of object names is found when two objects presented together on a computer screen have to be named. This effect also shows up in the viewing times of the first object, which is additional evidence for the hypothesis that viewing times are related to retrieval durations of words from the mental lexicon.
1.2 Experiment 1b
To be sure that the interaction effect is not the result of the particular pairs of objects used in the experiment, a control experiment is run, in which the same objects are used, but different pairs of objects are formed.
1.2.1 Method
Participants
Sixteen participants from the Max Planck participant pool took part in the experiment. None of the participants took part in Experiment 1a.
Materials, Design, Apparatus, Procedure, and Data Analysis
Materials, design, apparatus, procedure, and data analysis are the same as in Experiment 1a, except for the pairs of objects that were used. Twelve new pairs, which did not have semantically or phonologically related object names and were not used in Experiment 1a, were formed from the twenty-four objects in the experiment. The pairs are presented in Appendix A.
1.2.2 Results
Speech onset latencies
Trials in which an error occurred are removed from the data analysis. Mean speech onset latencies are shown in Figure 1.4.
[Figure: interaction plot; y-axis: mean speech onset latency (760-840 ms); x-axis: frequency of the 1st object (High, Low); separate lines for frequency of the 2nd object (High, Low).]

Figure 1.4: Mean speech onset latencies for each of the conditions.
Again the interaction effect is significant (F1(1, 15) = 14.48, p = 0.002; F2(1, 22) = 7.26, p = 0.013). Three simple effects are significant in the participant analysis and almost significant in the item analysis: the effect of right object name frequency within high frequent left object names (t1(15) = 2.67, p = 0.017; t2(11) = 2.11, p = 0.058), the effect of right object name frequency within low frequent left object names (t1(15) = 3.30, p = 0.005; t2(11) = 2.11, p = 0.09), and the effect of first object name frequency within low frequent right objects (t1(15) = 4.98, p < 0.001; t2(11) = 3.28, p = 0.003). The effect of first object name frequency within high frequent second object names was not significant (t1(15) = 1.66, p > 0.1; t2(11) < 1).
Viewing times
Mean viewing times are presented in Figure 1.5.
Chapter 2
Models of viewing times during
picture naming
In the experiment the task of the participants was to name two objects presented as line drawings on a computer screen in a clause like de fiets en het bot (the bike and the bone). The frequency of each object name was either high or low, according to the CELEX database. The frequencies of names within a pair could either be congruent (high-high or low-low) or incongruent (high-low or low-high). As described in the results section, the congruent frequency pairs were named faster than the incongruent frequency pairs. Also viewing times of the first object were shorter for the congruent pairs than for the incongruent pairs.
Two models are proposed to explain the congruency effects. The first model, named the preview model, is illustrated in Figure 2.1. In this model it is assumed that the second picture is processed too when the first picture is fixated. The parafoveal processing, however, is slower than processing during picture fixation. To explain the congruency effect it must be assumed that parafoveal processing is more efficient in congruent conditions than in incongruent ones. Additionally, it can be assumed that the second picture influences the processing of the first, during fixation of the first object.
[Figure: Order of Processing diagram for the preview model, with states I, 1-5, and F, saccades S1, S2, S2b, and transitions labelled with the racing processes (e.g. {m1, l1, a*l2}, {m2, a*l2}, {m2b, l2}). Legend: I: initial state; F: final state; l1: lexical access first word; l2: lexical access second word; a: parafoveal processing delay; m1: programming eye movement to object 1; m2: programming eye movement to object 2; m2b: programming eye movement within second object; S1, S2, S2b: saccades, fixed duration, currently assumed to take zero time; 1: m1 is cancelled; 2: m2 is cancelled; 3: m2b is cancelled. Assumption: m1 = m2b.]

Figure 2.1: Order of Processing diagram to describe the preview hypothesis in picture naming.
The second model, named the priming model, is illustrated in Figure 2.2. This model assumes that the second picture is not processed during fixation of the first object. However, when the frequency of the picture name is congruent with the first one, processing of the second picture is facilitated during fixation.
2.1 Predictions by the models
Predictions of mean reaction times for both of the models can be obtained in two ways, namely by exact derivation of the equations and by simulation studies. The exact derivation method is used here to get an impression of
[Figure: Order of Processing diagram for the priming model, with states I through F, saccades S1, S2, S2b, and transitions labelled with the racing processes (e.g. {m1, l1}, {m2, l1}, {m2b, p*l2}). Legend: I: initial state; F: final state; l1: lexical access first word; l2: lexical access second word; m1: programming eye movement to object 1; m2: programming eye movement to object 2; m2b: programming eye movement within second object; p: processing speed factor picture 2, p(congruent pictures) > p(incongruent pictures); S1, S2, S2b: saccades, fixed duration, currently assumed to take zero time; 1: m1 is cancelled; 2: m2b is cancelled. Assumption: m1 = m2b.]

Figure 2.2: Order of Processing diagram to describe the priming hypothesis in picture naming.
the best fitting model.

Exact derivation of equations is only possible when the so-called exponential distribution is used. This distribution has one free parameter, often denoted by λ, the rate of the process. In Figure 2.3 three exponential functions are plotted, with three different rates. These functions are plotted according to the following equation:

P(occurrence) = λ exp(-λt)
[Figure: three gamma densities plotted against time (0-4); y-axis: probability of an event (0-0.18); lambda = 0.6, 1.2, 1.8.]

Figure 2.4: Three examples of a gamma distribution consisting of three exponential components.
resemble the gamma distribution shape quite accurately. Other distributions have been proposed to describe reaction time distributions, which will be discussed in the section on simulation studies.
The expressions for the mean and the variance of the gamma distribution are as follows:

mean = n/λ

variance = n/λ²

Therefore the parameters of the gamma can be estimated¹ by filling in the

¹ The parameter estimation method in which moments of the distribution (such as the mean and the variance) are used is known as the method of moments. There are methods (like maximum likelihood) that yield better estimates.
[Figure: histogram of response times (200-1300 ms), bin counts up to 50. Std. Dev = 127.25, Mean = 815.9, N = 191.]

Figure 2.5: Response time histogram of participant 1.
[Figure: histogram of response times (500-1100 ms), bin counts up to 30. Std. Dev = 84.03, Mean = 669.3, N = 179.]

Figure 2.6: Response time histogram of participant 13.
observed values, estimates of n and λ can be obtained. The results of this procedure applied to the data of Experiment 1b are shown in Table 2.1. High values are found for the number of substages (17 to 45), and relatively low values for the rates. The number of substages in the models is much smaller. This difference in the number of stages can be resolved by assuming that the underlying distribution of each substage is gamma with five subcomponents
and not exponential.
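As a quick check, the method-of-moments estimates described above can be computed directly from the histogram statistics reported in Figures 2.5 and 2.6; this is a sketch, and the helper name is mine:

```python
def moments_to_gamma(mean, sd):
    """Method-of-moments estimates for a gamma distribution with n
    exponential substages of rate lam: mean = n/lam, variance = n/lam**2."""
    variance = sd ** 2
    lam = mean / variance      # rate estimate: mean / variance
    n = mean ** 2 / variance   # number-of-substages estimate: mean^2 / variance
    return lam, n

# Participant 1 (Figure 2.5): mean = 815.9, sd = 127.25
lam1, n1 = moments_to_gamma(815.9, 127.25)
# Participant 13 (Figure 2.6): mean = 669.3, sd = 84.03
lam13, n13 = moments_to_gamma(669.3, 84.03)
print(round(lam1, 4), round(n1))    # matches Table 2.1 row 1: 0.0504, 41
print(round(lam13, 4), round(n13))  # matches Table 2.1 row 13: 0.0948, 63
```

Reassuringly, the estimates recovered from the two histograms agree with the corresponding rows of Table 2.1.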
Table 2.1: Parameters of the gamma distribution estimated by using the method of moments.

Participant   λ        n
1             0.0504   41
2             0.0207   23
3             0.0496   31
4             0.0765   45
5             0.0238   19
6             0.0383   29
7             0.0229   23
8             0.0392   30
9             0.0232   17
10            0.0316   27
11            0.0198   17
12            0.0402   27
13            0.0948   63
14            0.0957   65
15            0.0333   30
16            0.0448   34
17            0.0380   31
18            0.0352   28
If the gamma distribution is used, then it is also assumed that all substages are equally fast. If the durations of the substages differ, the resulting distribution will be a general gamma, also called the Erlang distribution (see McGill & Gibbon, 1965).

A useful property of the exponential distribution, concerning the modelling of race models, is that the distribution of the minimum of two exponential distributions (say, with rates λ1 and λ2) equals:
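The formula itself falls in a gap of the source here, but the property in question (derived in Appendices A.1 and A.2) is that the minimum of two independent exponential processes is itself exponentially distributed with rate λ1 + λ2, and that process 1 wins the race with probability λ1/(λ1 + λ2). A small Monte Carlo sketch of both facts:

```python
import random

random.seed(1)

lam1, lam2 = 1.0, 2.0
n = 100_000
minima = []
wins1 = 0
for _ in range(n):
    t1 = random.expovariate(lam1)  # completion time of process 1
    t2 = random.expovariate(lam2)  # completion time of process 2
    minima.append(min(t1, t2))
    wins1 += t1 < t2

mean_min = sum(minima) / n
# min(T1, T2) ~ Exp(lam1 + lam2), so its mean is 1/(lam1 + lam2)
assert abs(mean_min - 1 / (lam1 + lam2)) < 0.01
# P(process 1 wins the race) = lam1 / (lam1 + lam2)
assert abs(wins1 / n - lam1 / (lam1 + lam2)) < 0.01
```

These two properties are exactly what make the state-transition probabilities and state durations in the derivations below tractable.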
2.1.1 The preview model
Now the expected time until state five is reached in the preview model (Figure 2.1) can be derived as follows:

E(T15) = E(T1) + P12 E(T25) + P13 E(T35)

where E(T15) is the total expected time until reaching state five from state one, Pij is the probability of going from state i to state j, E(Ti) is the expected time spent in state i, and E(Tij) is the expected time needed to reach state j from state i.
Some of the values of this equation can now be filled in:

E(T15) = 1/(λl1 + a·λl2) + [λl1/(λl1 + a·λl2)]·E(T25) + [a·λl2/(λl1 + a·λl2)]·E(T35)
The motor programming process is not taken into account in the computation of the expected duration of the stay in state one. The decision to leave the motor programming time out of the equation is based on a derivation presented in Appendix A3.
Expressions for E(T25) and E(T35) can be derived in a similar way:

E(T25) = E(T2) + P24 E(T45) + P25 E(T55) = 1/(λm2 + a·λl2) + [λm2/(λm2 + a·λl2)]·E(T45)

where E(T55) is zero because it takes no time to go from a state to itself.

E(T35) = E(T3) = 1/λl1

E(T45) = E(T4) = 1/λl2

Filling in these expressions into the expression for E(T) gives:
E(T) = 1/(λl1 + a·λl2) + [λl1/(λl1 + a·λl2)]·[1/(λm2 + a·λl2) + (λm2/(λm2 + a·λl2))·(1/λl2)] + [a·λl2/(λl1 + a·λl2)]·(1/λl1)
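The expression for E(T) can be evaluated numerically; the sketch below (function name mine) uses the parameter values quoted for Figure 2.8, 1/λl1 = 400, 1/λl2 = 300, a = 0.3, and 1/λm2 = 200:

```python
def expected_onset_preview(l1, l2, a, m2):
    """E(T) for the preview model; arguments are the rates lambda_l1,
    lambda_l2, the preview factor a, and the rate lambda_m2."""
    al2 = a * l2
    e25 = 1 / (m2 + al2) + (m2 / (m2 + al2)) * (1 / l2)  # via states 2 and 4
    e35 = 1 / l1                                         # via state 3
    return 1 / (l1 + al2) + (l1 / (l1 + al2)) * e25 + (al2 / (l1 + al2)) * e35

et = expected_onset_preview(1 / 400, 1 / 300, 0.3, 1 / 200)
print(round(et, 1))  # about 697.6 ms
```

The resulting value of roughly 698 ms is of the same order as the observed speech onset latencies of Experiment 1a.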
This means that E(T) is a function of λl1, λl2, a, and λm2. How E(T) depends on each of the variables is illustrated in Figure 2.8. In the plot, the parameters that are not varied are fixed at the following values: 1/λl1 = 400, 1/λl2 = 300, a = 0.3, and 1/λm2 = 200. The longer the retrieval of each of the object names takes, the longer the speech onset latency will be. If parafoveal processing is better (a is higher), speech onset will be faster. If the saccade planning process takes more time, parafoveal processing will continue longer and speech onset will be slower.
[Figure: four panels showing E(T) (ms) as a function of 1/λl1, 1/λl2, a, and 1/λm2.]

Figure 2.8: Expected speech onset latency as function of the parameters.
It can be assumed that λl1 and λl2 are a function of the frequency of the object name. a is a number that must be smaller than 1 if parafoveal processing is slower than foveal processing. 1/λm2 is the time needed to plan an eye movement to the second object.

From the model also the time until an eye movement is made to the
second object can be predicted. There is only one path in the diagram that leads to an eye movement to the second object. Since participants often look at the second object when naming both pictures, the probabilities of taking one of the other paths must not be very big. The time needed to go from state 1 to state 4 equals:

E(T14 | path 1, 2, 4 is taken) = 1/(λl1 + a·λl2) + 1/λm2
This means that the time until an eye movement to the second object must be smaller than the time until speech onset. In cases in which this is not the case, the second object was probably already processed parafoveally and eye movements to the first object are continued. If viewing times happen to be very often longer than speech onset latencies, this is an indication of the model being incorrect. The viewing time depends on the parameters λl1, λl2, λm2, and a. How the viewing time depends on each of the parameters is illustrated in Figure 2.9. The dependencies are similar to those for speech onset latency, which is in agreement with observations in previous experiments in which viewing times were measured together with speech onset latencies.

Another prediction that can be derived from the model is the expected number of times that participants do not move their eyes to the second object. That is the number of times the path via states 2 and 4 is not taken. This probability equals:
P(no saccade) = P(path: 1, 2, 5) + P(path: 1, 3, 5) = [λl1/(λl1 + a·λl2)]·[a·λl2/(λm2 + a·λl2)] + a·λl2/(λl1 + a·λl2)
This probability depends on λl1, λl2, a, and λm2. How this dependency looks is illustrated in Figure 2.10. For the parameter values 1/λl1 = 400, 1/λl2 = 300, a = 0.3, and 1/λm2 = 200, the probability of no saccade to the right object equals 0.4048, which is quite high compared to observed proportions. This
[Figure: four panels showing the expected viewing time (ms) as a function of 1/λl1, 1/λl2, a, and 1/λm2.]

Figure 2.9: Expected viewing time as function of the parameters.
gives an indication that the chosen parameter values must be changed to better agree with the data. The dependencies are as one would expect: the longer it takes to retrieve the first object name, the longer the second object is parafoveally processed and the higher the probability that no saccade to the second object is needed. If processing the second object takes little time, a saccade to this object is less probable, which is also the case if parafoveal processing is more efficient (a is large). The faster the motor program to the second object is completed, the more often a saccade to the second object is expected.
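The value of 0.4048 quoted above can be reproduced directly from the path probabilities (a sketch; the function name is mine):

```python
def p_no_saccade(l1, l2, a, m2):
    """P(no saccade to the second object) in the preview model:
    path 1-2-5 (l1 wins, then a*l2 beats m2) plus path 1-3-5 (a*l2 wins)."""
    al2 = a * l2
    path_125 = (l1 / (l1 + al2)) * (al2 / (m2 + al2))
    path_135 = al2 / (l1 + al2)
    return path_125 + path_135

p = p_no_saccade(1 / 400, 1 / 300, 0.3, 1 / 200)
print(round(p, 4))  # 0.4048, as quoted in the text
```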
Finally, the PDF of the number of refixations can be determined. This number of refixations is a mixture of two distributions: one for the path via states 1, 2, and further, and one for the path via states 1 and 3. The first path results in the following distribution:
[Figure: four panels showing P(no saccade) as a function of 1/λl1, 1/λl2, a, and 1/λm2.]

Figure 2.10: Probability of no saccade to the second object.
P(N refixations) = [λm1/(λm1 + λl1 + a·λl2)]^N · [(λl1 + a·λl2)/(λm1 + λl1 + a·λl2)]

The PDF of this distribution is of a familiar form: it is a geometric distribution, which is of the form:

f(N) = (1 - p)^N · p

and has an expected value of:

E(N) = (1 - p)/p

which implies that the expected number of refixations from the path via states 1 and 2 equals:
E(N) = (λl1 + a·λl2)/λm1
In the path via states 1 and 3, two states contribute to the number of refixations, which leads to a convolution of two geometric distributions for the PDF of the number of refixations there.

The PDF there is of the form:

P(N) = p1·p2 · Σ_{t=0}^{N} (1 - p1)^t · (1 - p2)^(N-t)

where:

p1 = λm1/(λm1 + λl1 + a·λl2)

p2 = λm1/(λm1 + λl1)

What the geometric distribution and the convolution of two geometric distributions look like is illustrated in Figure 2.11. The geometric distribution is sometimes called the discrete version of the exponential distribution.

The expected number of refixations resulting from state 3 can be computed using the expression derived in Appendix A4.
E(N_state3) = (q1 + q2 - 2·q1·q2)/(p1·p2)

where

p1 = 1 - q1 = λm1/(λm1 + λl1 + a·λl2)

p2 = 1 - q2 = λm1/(λm1 + λl1)

Therefore, the total expected number of refixations equals:

E(N) = [λl1/(λl1 + a·λl2)]·(1 - p1)/p1 + [a·λl2/(λl1 + a·λl2)]·(q1 + q2 - 2·q1·q2)/(p1·p2)
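The mean of the convolution of two geometric distributions used here can be checked both algebraically and by simulation: (q1 + q2 - 2·q1·q2)/(p1·p2) is simply q1/p1 + q2/p2, the sum of the two geometric means, rewritten over a common denominator. A sketch:

```python
import random

# Algebraic identity: (q1 + q2 - 2*q1*q2)/(p1*p2) == q1/p1 + q2/p2
for p1, p2 in [(0.3, 0.5), (0.2, 0.7), (0.9, 0.4)]:
    q1, q2 = 1 - p1, 1 - p2
    lhs = (q1 + q2 - 2 * q1 * q2) / (p1 * p2)
    rhs = q1 / p1 + q2 / p2
    assert abs(lhs - rhs) < 1e-12

# Monte Carlo: the mean of the sum of two independent geometric variables
# (number of failures before the first success) matches the formula.
def geom(p, rng):
    n = 0
    while rng.random() >= p:  # count failures until the first success
        n += 1
    return n

rng = random.Random(7)
p1, p2 = 0.3, 0.5
q1, q2 = 1 - p1, 1 - p2
sim = sum(geom(p1, rng) + geom(p2, rng) for _ in range(200_000)) / 200_000
assert abs(sim - (q1 + q2 - 2 * q1 * q2) / (p1 * p2)) < 0.05
```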
[Figure: two panels; top: geometric distribution with p = 0.3; bottom: convolution of two geometric distributions with p1 = 0.3, p2 = 0.5; x-axis: N (0-10); y-axis: P(N) (0-0.3).]

Figure 2.11: Example of the geometric distribution and the convolution of two geometric distributions.
where p1, p2, q1, and q2 are defined as above. The plots illustrating the dependency of the expected number of refixations on each of the parameters are presented in Appendix C1.

To account for the frequency effects found, the lemma retrieval rates can now be redefined in terms of a retrieval rate per frequency category:

λl1 = λH for high frequent words; λL for low frequent words

The same recoding is used for the lemma retrieval rate of the second object, λl2.
2.1.2 The priming model
The same predictions as for the preview model can be derived for the priming model (see Figure 2.2). In the priming model a saccade to the second object is always made. The expected speech onset latency is the sum of the durations of each of the subprocesses:

E(T) = 1/λl1 + 1/λm2 + 1/(p·λl2)

where λl1 and λl2 are the rates at which the two lemmas are retrieved. Again the predicted viewing time is shorter than the speech onset latency:

E(VT) = 1/λl1 + 1/λm2

This means that the priming parameter does not influence the viewing time of the first object. However, the pattern in the viewing time data suggests priming effects there as well. This presents at least some evidence against the priming model.
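A numerical sketch of these priming-model predictions, using the parameter values quoted for Figure 2.8 (1/λl1 = 400, 1/λl2 = 300, 1/λm2 = 200) and a hypothetical priming factor p = 1.2 for congruent versus p = 1.0 for incongruent pairs (the text does not fix p numerically):

```python
def priming_onset(l1, l2, m2, p):
    """E(T) = 1/l1 + 1/m2 + 1/(p*l2): strictly serial stages, no preview."""
    return 1 / l1 + 1 / m2 + 1 / (p * l2)

def priming_viewing_time(l1, m2):
    """E(VT) = 1/l1 + 1/m2: independent of the priming factor p and of l2."""
    return 1 / l1 + 1 / m2

l1, l2, m2 = 1 / 400, 1 / 300, 1 / 200
print(priming_onset(l1, l2, m2, 1.2))  # congruent: about 850 ms
print(priming_onset(l1, l2, m2, 1.0))  # incongruent: about 900 ms
print(priming_viewing_time(l1, m2))    # about 600 ms in both conditions
```

The onset latency shows a congruency advantage while the viewing time does not, which is exactly the qualitative contrast with the preview model described above.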
The probability of making a saccade to the second object equals 1, and the expected number of refixations equals:

E(N) = [λm1/(λm1 + λl1)] / [λl1/(λm1 + λl1)] = λm1/λl1

This means that the faster lemma retrieval is, the smaller the number of refixations, and the faster the motor program to the first object, the larger the expected number of refixations.
In Figures 2.12 to 2.14 an illustration is given of how the dependent variables depend on each of the independent variables. The speech onset time depends on each of the parameters. It increases with the lemma retrieval times and the motor planning time, and decreases with the amount of priming. Except for the exact shape of the functions, this relationship between the parameters and the speech onset latency is the same for both models. This is not the case for the predictions of the viewing time. The priming model predicts that the viewing time does not depend on the amount of priming and
the lexical retrieval time of the second object. Also the number of refixations does not depend on properties of the second object and the amount of priming, whereas in the preview model all parameters have an effect.
[Figure: four panels showing E(T) (ms) as a function of 1/λl1, 1/λl2, p, and 1/λm2.]

Figure 2.12: Speech onset latency predicted by the priming model as function of the parameters.
2.1.3 Predictions: Preview model
In order to check whether one of the models can explain the pattern in the data, data points are generated using parameter values that seem to be realistic for the given task. The following values are used:
[Figure: four panels showing E(VT) (ms) as a function of 1/λl1, 1/λl2, p, and 1/λm2; the curves for 1/λl2 and p are flat.]

Figure 2.13: Viewing time predicted by the priming model as function of the parameters.
1/λH = 200
1/λL = 400
a = 0.3
1/λm2 = 200
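With these values, the four predicted condition means of the preview model can be computed directly; this is a sketch (the helper name is mine, and the exact predicted values are not tabulated in the text):

```python
def preview_rt(l1, l2, a, m2):
    """E(T) of the preview model (see Section 2.1.1)."""
    al2 = a * l2
    e25 = 1 / (m2 + al2) + (m2 / (m2 + al2)) / l2
    return 1 / (l1 + al2) + (l1 / (l1 + al2)) * e25 + (al2 / (l1 + al2)) / l1

lam = {"H": 1 / 200, "L": 1 / 400}  # retrieval rates per frequency category
a, m2 = 0.3, 1 / 200
for cond in ("HH", "HL", "LH", "LL"):
    rt = preview_rt(lam[cond[0]], lam[cond[1]], a, m2)
    print(cond, round(rt, 1))
```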
The interaction plots predicted by the preview model for speech onset latencies and viewing times are presented in Figures 2.15 and 2.16, respectively. The model can describe the pattern in the speech onset latencies quite accurately. The fit of the viewing time data is not so good.

An attempt was made to estimate optimal parameter values for the data of Experiment 1a by defining an error measure as follows:
[Figure: four panels showing E(N) as a function of 1/λl1, 1/λl2, p, and 1/λm1.]

Figure 2.14: Number of refixations to the first object predicted by the priming model as function of the parameters.
Error = Σ_{i ∈ {HH, HL, LH, LL}} (E(RTi) - Obs(RTi))² + (E(VTi) - Obs(VTi))²
The error measure was minimized using a simplex search, with none of the parameters restricted. As starting values of the search, 1/l1 = 200, 1/l2 = 300, a = 0.2, and 1/m2 = 200 were used. The minimization resulted in the following parameter estimates: 1/l1 = 618, 1/l2 = 662, a = 3.2, and 1/m2 = 444, which yields the predicted and observed means plot shown in Figure 2.17.
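The error measure itself is easy to write down. The following is a minimal sketch, not the original analysis: the observed cell means below are made-up placeholder numbers, and in a real fit the predicted means would come from the model equations, with a simplex (Nelder-Mead) routine minimizing `error` over the four parameters.

```python
# The four frequency conditions: first-object / second-object frequency.
CONDS = ("HH", "HL", "LH", "LL")

def error(pred_rt, pred_vt, obs_rt, obs_vt):
    """Summed squared deviation between predicted and observed means,
    over both speech onset latencies (RT) and viewing times (VT)."""
    return sum((pred_rt[c] - obs_rt[c]) ** 2 +
               (pred_vt[c] - obs_vt[c]) ** 2
               for c in CONDS)

# Placeholder observed means (ms); NOT the actual data of Experiment 1a.
obs_rt = {"HH": 700.0, "HL": 740.0, "LH": 720.0, "LL": 780.0}
obs_vt = {"HH": 592.0, "HL": 596.0, "LH": 594.0, "LL": 600.0}

# A perfect fit gives zero error; predictions off by 2 ms in every RT
# cell add 4 conditions x 2^2 = 16.
assert error(obs_rt, obs_vt, obs_rt, obs_vt) == 0.0
shifted = {c: v + 2.0 for c, v in obs_rt.items()}
assert error(shifted, obs_vt, obs_rt, obs_vt) == 16.0
```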
Figure 2.15: Pattern in the speech onset latencies predicted by the preview model (RT as a function of the frequency of the second object, for high- and low-frequency first objects).
First of all, the parameter values raise some questions about the qualitative fit of the model: retrieval of high-frequency words is not much faster than retrieval of low-frequency words (618 vs 662 ms). Worse, the model predicts that parafoveal processing is faster than foveal processing (a = 3.2). Furthermore, the predicted mean plot for reaction times shows two parallel lines instead of an interaction. Changing the initial values of the search does not result in very different estimates. Finally, the predicted probability of not making a saccade to the second object is far too high (P = 0.803), probably because of the efficient preview caused by the high estimated value of a.
In an attempt to improve the estimated parameter values, the preview parameter, a, was fixed to a value of 0.3. The resulting parameter values are 1/l1 = 325, 1/l2 = 331, a = 0.3, and 1/m2 = 344, resulting in the predicted and
Figure 2.16: Pattern in the viewing times predicted by the preview model (VT as a function of the frequency of the second object, for high- and low-frequency first objects).
observed values plot shown in Figure 2.18.

The predicted mean viewing times are now closer to the observed times, but the interaction pattern in the data is no longer predicted. The motor-planning process now takes more time relative to the lexical access times, which again results in too high a probability of no saccade to the second object (P = 0.826).
The preview model might be improved by assuming that only part of the processing of the second object can occur during fixation of the first object, so that a saccade to the second object has to take place in order to name it. This adaptation of the preview model will be discussed after the fit of the priming model has been presented.
Figure 2.17: Observed and predicted means for the parameter values estimated for the preview model.
2.1.4 Predictions: Priming Model
The predictions of the priming model are presented in Figures 2.19 and 2.20. The priming parameter is set to 0.7, while the other parameters are the same as in the predictions of the preview model. The pattern in the speech onset data can again be described quite accurately. However, the predicted viewing times do not show an interaction for the priming model, which is not in agreement with the data.
In a similar way as for the preview model, parameter values were estimated for the priming model. As initial parameter values, 1/l1 = 300, 1/l2 = 400, p = 1.3, and 1/m2 = 200 were used, which resulted in the following parameter estimates: 1/l1 = 172, 1/l2 = 201, p = 1.52, and 1/m2 = 407. The plots of predicted and
Figure 2.18: Observed and predicted means for the parameter values estimated for the preview model when the preview parameter is fixed to a value of 0.3.
observed values are shown in Figure 2.21.

The priming model also fails to show an interaction between first-object frequency and second-object frequency. The parameter estimation, however, is stable and yields plausible parameter values: high-frequency words are retrieved faster than low-frequency words (172 vs 201 ms), and there is priming (p = 1.52). The priming model always predicts a saccade to the second object, which is closer to reality than the roughly 80% of no-saccade trials predicted by the preview model. Because the parameter estimates of the priming model are stable, the expected number of refixations produced by the motor program can be computed, using:
Figure 2.19: Pattern in the speech onset latencies predicted by the priming model.
E(N) = m2 / l1
Therefore, the expected number of refixations equals 0.42. The observed mean is twice as large: 0.89. This difference could be corrected by no longer assuming that refixating the first object can still occur while the eye movement to the second object is being planned.
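This prediction can be checked with a small Monte Carlo simulation. The sketch below assumes both processes are independent and exponential, so each refixation restarts a memoryless race; the count of motor-program wins before lexical access finishes is then geometric with mean m2/l1 (here using the estimated rates).

```python
import random

random.seed(1)

l1 = 1 / 172.0   # lexical access rate of the first object (estimated)
m2 = 1 / 407.0   # rate of the motor program causing refixations

def refixations():
    """Count how often the refixation program wins the race against
    lexical access; every win restarts the state."""
    n = 0
    while random.expovariate(m2) < random.expovariate(l1):
        n += 1
    return n

trials = 100_000
mean_n = sum(refixations() for _ in range(trials)) / trials
# The mean should be close to m2 / l1 = 172 / 407, about 0.42.
```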
Although the fit of the priming model is far from perfect, it describes the data better than the preview model, which suggests that no preview takes place when naming two pictures. However, it could be the case that only perceptual processing of the second picture occurs while the first object is fixated. This option is investigated in the following section.
Figure 2.20: Pattern in the viewing times predicted by the priming model.
2.1.5 Partial preview
The partial preview model is displayed in Figure 2.22. The processing of the second object is split into two subprocesses. The saccade to the second object must be made in order to start naming the two pictures.

The equations describing the expected speech onset latency can be derived in the same way as was done for the preview and priming models.
E(T17) = [1 / (l1 + a·l2a)] · {1 + l1·E(T27) + a·l2a·E(T37)}

E(T27) = [1 / (m2 + a·l2a)] · {1 + m2·E(T57) + a·l2a·E(T47)}

E(T37) = 1/l1 + E(T47)
Figure 2.21: Observed and predicted means for the parameter values estimated for the priming model.
E(T47) = 1/m2 + E(T67)

E(T57) = 1/l2a + E(T67)

E(T67) = 1/l2b

Substituting these equations into one another will not yield much extra insight and is therefore skipped. The following substitution is used for each of the frequencies:
l2b = 1 / (1/i − 1/l2a)

where i equals H for high-frequency words and L for low-frequency words.
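Because E(T67) depends on nothing else, the expectations can be evaluated numerically by back-substitution. The sketch below follows the equations above as I read them (the grouping of the a·l2a terms is an assumption), using illustrative rates rather than the fitted values.

```python
def speech_onset(l1, l2a, l2b, a, m2):
    """Expected speech onset latency of the partial preview model,
    obtained by back-substituting E(T67) up to E(T17)."""
    e67 = 1.0 / l2b                 # lexical access of the second word
    e57 = 1.0 / l2a + e67           # foveal visual processing, then access
    e47 = 1.0 / m2 + e67
    e37 = 1.0 / l1 + e47
    e27 = (1.0 + m2 * e57 + a * l2a * e47) / (m2 + a * l2a)
    e17 = (1.0 + l1 * e27 + a * l2a * e37) / (l1 + a * l2a)
    return e17

# Illustrative rates (per ms); not the fitted values from the text.
l1, l2a, a, m2 = 1 / 200, 1 / 150, 0.15, 1 / 200
i = 1 / 200                          # overall rate for a high-frequency word
l2b = 1.0 / (1.0 / i - 1.0 / l2a)    # the substitution given in the text
print(round(speech_onset(l1, l2a, l2b, a, m2), 1))  # 554.2
```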
[OP diagram: states 1-7, with process sets such as (l1, a·l2a, m1) and (m2, a·l2a), connected by the saccades S1, S2, and S2b.]

Legend: I: initial state; F: final state. l1: lexical access of the first word; l2a: visual processing of the second picture; l2b: lexical access of the second word; a: parafoveal processing delay; m1: programming the eye movement to F; m2: programming the eye movement to object 2; m2b: programming an eye movement within the second object (assumption: m1 = m2b). S1, S2, S2b: saccades of fixed duration, currently assumed to take zero time. Cancellations: 1: m1 is cancelled; 2: m2 is cancelled; 3: m2b is cancelled.

Figure 2.22: Partial preview model.
Figure 2.23 shows the expected reaction times for the parameter values H = 1/200, L = 1/400, p = 1/150, a = 0.15, and m2 = 1/200, together with the probability of each path taken in the OP diagram. A small interaction is found, but it is not large enough to explain the pattern found in the data of Experiments 1 and 2.
In deriving the viewing times for the partial preview model, an OP diagram with two final states is obtained (states 4 and 5). The viewing time can be determined by applying the usual set of equations:
E(VT) = E(T1) + P12·E(T2) + P13·E(T3)
= E(T1) + P12·(P24·E(T4) + E(T2)) + P13·(E(T3) + E(T4))
= 1/(l1 + a·l2a)
+ [l1/(l1 + a·l2a)] · [ (a·l2a/(m2 + a·l2a))·(1/m2) + 1/(m2 + a·l2a) ]
+ [a·l2a/(l1 + a·l2a)] · [ 1/l1 + 1/m2 ]
Figure 2.23: Interaction plot and path probabilities predicted by the partial preview model.
Using the same parameters as for the reaction-time example, a set of
predicted viewing times was generated. The predicted viewing times areshown in Figure 2.24. The partial preview model seems to predict the samepattern of viewing times as the priming model, namely: no interaction.
Figure 2.24: Partial preview model: Predicted Viewing Times.
Appendix A
Derivations
A.1 PDF of a race between two processes
In a race between two processes with exponentially distributed finishing times and rates λ1 and λ2, the probability density function (PDF) of the first finishing time equals:
P(process 1 or 2 finishes at t) = (λ1 + λ2)·exp(−(λ1 + λ2)·t)
This equation can be derived in the following way. The probability that a process with rate λ has finished before t1 equals:
P(process finished before t1) = ∫₀^{t1} λ·exp(−λ·t) dt = [−exp(−λ·t)]₀^{t1} = 1 − exp(−λ·t1)
Therefore, the probability that a process is still running after t1 equals:

P(process still running) = 1 − P(process finished before t1) = exp(−λ·t1)
The probability that both processes are still running at time t1 equals:

P(process 1 and process 2 still running) = P(process 1 still running) · P(process 2 still running) =
exp(−λ1·t1) · exp(−λ2·t1) = exp(−(λ1 + λ2)·t1)
The PDF of the time at which the first of the two processes finishes can now be found by differentiating the corresponding cumulative distribution function:

P(min(process 1, process 2) = t) = ∂/∂t [1 − exp(−(λ1 + λ2)·t)] = (λ1 + λ2)·exp(−(λ1 + λ2)·t)
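This result is easy to verify numerically. A quick Monte Carlo sketch with two arbitrary rates: the minimum of the two finishing times should have mean 1/(λ1 + λ2).

```python
import random

random.seed(0)

lam1, lam2 = 1 / 300.0, 1 / 400.0   # two arbitrary rates (per ms)
n = 100_000
mins = [min(random.expovariate(lam1), random.expovariate(lam2))
        for _ in range(n)]
mean_min = sum(mins) / n
# An exponential with rate lam1 + lam2 has mean 1/(lam1 + lam2),
# here 1200/7, i.e. about 171.4 ms.
```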
A.2 Probability of a process to win the race

If two processes with exponentially distributed durations race against each other, the probability that process A wins the race equals:

P(A wins) = a / (a + b)

where a is the rate of process A, and b the rate of process B.
This equation can be derived in the following way:
P(A wins) = P(tA < tB)
= ∫₀^∞ P(tA < tB | tA = t) · P(tA = t) dt
= ∫₀^∞ P(tB > t | tA = t) · P(tA = t) dt
= ∫₀^∞ P(tB > t) · P(tA = t) dt
= ∫₀^∞ exp(−b·t) · a·exp(−a·t) dt
= [−a/(a + b) · exp(−(a + b)·t)]₀^∞
= a / (a + b)
One of the processes has to win the race, so:

P(B wins) = 1 − P(A wins) = 1 − a/(a + b) = b/(a + b)
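A quick simulation sketch confirms the winning probability for two arbitrary rates:

```python
import random

random.seed(0)

a, b = 1 / 200.0, 1 / 300.0         # rates of processes A and B
n = 100_000
wins_a = sum(random.expovariate(a) < random.expovariate(b)
             for _ in range(n))
# Predicted: a / (a + b) = 0.6
```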
A.3 Loops within a state
In Figure A.1 a possible situation in which a loop within a state occurs is illustrated. If process A ends, the current state is started again with the same processes running as before A ended.
Figure A.1: Illustration of loops in OP diagrams (a state containing processes A, B, C, and D, where the completion of process A re-enters the same state).
It can be shown that process A does not influence the expected duration of the stay in the state. This result can be derived as follows. Here it is assumed that the recurring process has rate r and that all other rates are denoted λi. The expected duration of the stay in the state equals:
E(T) = (1 / Σ_i λ_i) · (1 + r·E(T) + Σ_{i≠r} λ_i·T_i)

where the sum Σ_i λ_i runs over all rates, including r. Therefore:

Σ_i λ_i · E(T) − r·E(T) = 1 + Σ_{i≠r} λ_i·T_i

Thus:

(Σ_i λ_i − r) · E(T) = 1 + Σ_{i≠r} λ_i·T_i

And thus:

E(T) = (1 / Σ_{i≠r} λ_i) · (1 + Σ_{i≠r} λ_i·T_i)
This expression for the expected duration of the stay in the state does not contain the rate of the recurring process. A way to interpret this equation
is that the expected duration does not take into account properties of therecurring process.
Because the processes in a state are independent, the recurring process also does not influence the probability that a given process wins the race.
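Since the processes are exponential and hence memoryless, restarting the state whenever the recurring process fires does not change the distribution of the exit time. A small simulation sketch makes this concrete: the mean stay should equal 1/Σ_{i≠r} λ_i regardless of r.

```python
import random

random.seed(0)

def mean_stay(r, other_rates, trials=50_000):
    """Mean time spent in a state that restarts whenever the
    recurring process (rate r) beats every other process."""
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            recur = random.expovariate(r)
            exit_t = min(random.expovariate(x) for x in other_rates)
            if recur < exit_t:
                t += recur      # recurring process wins: state restarts
            else:
                t += exit_t     # another process wins: leave the state
                break
        total += t
    return total / trials

rates = [1 / 200.0, 1 / 300.0]       # 1 / sum(rates) = 120 ms
fast = mean_stay(1 / 50.0, rates)    # frequently recurring process
slow = mean_stay(1 / 5000.0, rates)  # rarely recurring process
# Both means should be close to 120 ms, independent of r.
```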
A.4 Expected value of the convolution of two geometric distributions
In order to derive the expected value of the convolution of two geometric distributions, the moment generating function (mgf) of the distribution is used. The mgf of the geometric distribution equals:
m_x(t) = p / (1 − q·e^t)
where q equals 1 − p.

The mgf of the convolution of two distributions equals the product of the mgfs of the individual distributions. Therefore:

m_{x+y}(t) = m_x(t) · m_y(t) = [p1 / (1 − q1·e^t)] · [p2 / (1 − q2·e^t)] = p1·p2 / (1 + q1·q2·e^{2t} − q1·e^t − q2·e^t)
The expected value of a distribution can be computed by first taking the derivative of the mgf with respect to t:

∂m_{x+y}/∂t = p1·p2 · (q1·e^t + q2·e^t − 2·q1·q2·e^{2t}) / [(1 − q1·e^t)·(1 − q2·e^t)]²

and evaluating this expression at the point t = 0:

E(T_{x+y}) = ∂m_{x+y}/∂t |_{t=0} = (q1 + q2 − 2·q1·q2) / (p1·p2)
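This result can be cross-checked: the given mgf is that of a geometric variable counting failures before the first success (support 0, 1, 2, ...), whose mean is q/p, so the convolution should have mean q1/p1 + q2/p2, which is algebraically equal to the expression above. A simulation sketch:

```python
import random

random.seed(0)

def geom0(p):
    """Failures before the first success (support 0, 1, 2, ...),
    matching the mgf p / (1 - q e^t)."""
    n = 0
    while random.random() > p:
        n += 1
    return n

p1, p2 = 0.4, 0.25
q1, q2 = 1 - p1, 1 - p2

predicted = (q1 + q2 - 2 * q1 * q2) / (p1 * p2)
# Algebraically identical to q1/p1 + q2/p2 = 1.5 + 3.0 = 4.5.

trials = 100_000
sim = sum(geom0(p1) + geom0(p2) for _ in range(trials)) / trials
# The simulated mean should be close to 4.5.
```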
Appendix B
Tables
B.1 Experiment Materials
In Tables B.1 to B.4 the stimulus materials of Experiments 1a and 1b are listed.
Table B.1: Pairs of objects used in Experiment 1a (Frequency: 1 = High, 2 = Low).

Left Picture   Right Picture   Frequency Name 1   Frequency Name 2
arm            deur            1                  1
arm            fluit           1                  2
bank           slee            1                  2
bank           voet            1                  1
bijl           slee            2                  2
bijl           zak             2                  1
boot           ster            1                  1
boot           worst           1                  2
broek          mond            1                  1
broek          vaas            1                  2
deur           broek           1                  1
deur           muts            1                  2
fluit          bank            2                  1
fluit          kam             2                  2
hark           step            2                  2
hark           ster            2                  1
kam            mond            2                  1
kam            tol             2                  2
mond           bank            1                  1
mond           bijl            1                  2
muts           boot            2                  1
muts           zaag            2                  2
muur           zaag            1                  2
muur           zak             1                  1
Table B.2: Pairs of objects used in Experiment 1a (Frequency: 1 = High, 2 = Low).

Left Picture   Right Picture   Frequency Name 1   Frequency Name 2
neus           hark            1                  2
neus           stoel           1                  1
slee           arm             2                  1
slee           tang            2                  2
step           vaas            2                  2
step           voet            2                  1
ster           boot            1                  1
ster           tol             1                  2
stoel          arm             1                  1
stoel          kam             1                  2
tang           fluit           2                  2
tang           stoel           2                  1
tol            muts            2                  2
tol            neus            2                  1
vaas           bijl            2                  2
vaas           muur            2                  1
voet           muur            1                  1
voet           tang            1                  2
worst          deur            2                  1
worst          hark            2                  2
zaag           broek           2                  1
zaag           worst           2                  2
zak            neus            1                  1
zak            step            1                  2
Table B.3: Pairs of objects used in Experiment 1b (Frequency: 1 = High, 2 = Low).

Left Picture   Right Picture   Frequency Name 1   Frequency Name 2
arm            bijl            1                  2
arm            broek           1                  1
bank           fluit           1                  2
bank           voet            1                  1
bijl           fluit           2                  2
bijl           mond            2                  1
boot           stoel           1                  1
boot           tang            1                  2
broek          step            1                  2
broek          ster            1                  1
deur           arm             1                  1
deur           slee            1                  2
fluit          zaag            2                  2
fluit          zak             2                  1
hark           slee            2                  2
hark           stoel           2                  1
kam            muur            2                  1
kam            step            2                  2
mond           boot            1                  1
mond           zaag            1                  2
muts           deur            2                  1
muts           kam             2                  2
muur           bank            1                  1
muur           vaas            1                  2
Table B.4: Pairs of objects used in Experiment 1b (Frequency: 1 = High, 2 = Low).

Left Picture   Right Picture   Frequency Name 1   Frequency Name 2
neus           muts            1                  2
neus           muur            1                  1
slee           broek           2                  1
slee           vaas            2                  2
step           bank            2                  1
step           bijl            2                  2
ster           kam             1                  2
ster           neus            1                  1
stoel          deur            1                  1
stoel          worst           1                  2
tang           neus            2                  1
tang           worst           2                  2
tol            boot            2                  1
tol            hark            2                  2
vaas           ster            2                  1
vaas           tol             2                  2
voet           hark            1                  2
voet           zak             1                  1
worst          arm             2                  1
worst          tang            2                  2
zaag           muts            2                  2
zaag           voet            2                  1
zak            mond            1                  1
zak            tol             1                  2
Appendix C
Figures
C.1 Number of refixations
In the preview model, the number of refixations of the first object is a mixture of refixations in two states. The number of refixations in each of the states (E1 for state 1, and E2 for state 1 plus 3), the probability of entering those states (p1 for going from state 1 to state 2, and p2 for going from state 1 to state 3), and the total number of refixations are plotted against a range of values of each parameter in the model. The parameters that were varied are: l1 (Figure C.1), l2 (Figure C.2), a (Figure C.3), and m1 (Figure C.4). If a parameter was not varied, it was fixed to the following values: l1 = 1/400, l2 = 1/300, m1 = 1/200, and a = 0.3.
Figure C.1: Expected number of refixations and path probabilities as a function of the lemma retrieval duration of the first object (panels: E(N), E1(N), E2(N), p1, and p2 plotted against 1/l1).
Figure C.2: Expected number of refixations and path probabilities as a function of the lemma retrieval duration of the second object (panels: E(N), E1(N), E2(N), p1, and p2 plotted against 1/l2).