Presented by Jacek Gwizdka at University of Missouri
Jacek Gwizdka
Department of Library and Information Science
School of Communication and Information
Rutgers University
Monday, April 4, 2011
Learning about Information Searchers from Eye-Tracking
CONTACT: www.jsg.tel
Outline
• Overall research goals
• Eye-tracking – fundamentals
• Eye-fixation patterns: reading models (Exp 1; Exp 3)
• Search results presentation and cognitive abilities (Exp 2)
• Summary and challenges
Overall Research Goals
• Characterization and enhancement of human information interaction mediated by computing technology
• Characterization: cognitive and affective user states – traditionally, we have had little access to the mental/emotional states of users while they are engaged in the search process
• Implicit data collection about searchers’ cognitive and affective states in relation to information search phases
• Enhancement: Personalization and Adaptation
Example: Implicit Characterization of Cognitive Load on Web Search
[Diagram: states of Web search – Q: formulate query, L: view search results list, C: view content page, B: bookmark page – with transition probabilities between states from START to END. Higher peak cognitive load: C; higher average cognitive load: Q & B.]
(Gwizdka, JASIST, 2010)
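As an illustration only, here is a minimal sketch of how transition probabilities like those in the diagram above could be estimated from a logged sequence of search actions; the helper function and the example session log are hypothetical, not the analysis pipeline used in the JASIST study.

```python
from collections import Counter, defaultdict

def transition_probabilities(actions):
    """Estimate P(next action | current action) from an ordered list of action codes."""
    counts = defaultdict(Counter)
    for current, nxt in zip(actions, actions[1:]):
        counts[current][nxt] += 1
    return {
        current: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for current, nexts in counts.items()
    }

# Hypothetical session log: Q = formulate query, L = view results list,
# C = view content page, B = bookmark page
log = ["Q", "L", "C", "L", "C", "B", "Q", "L", "C"]
print(transition_probabilities(log))
```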
Eye-Tracking?
• Early attempts in the late 19th century; in the early 1950s, using a movie camera and hand-coding (Fitts, Jones & Milton, 1950)
• Now computerized and “easy to use”
– infrared light sources and cameras
– stationary and mobile
Current Tobii eye-trackers
Eye-tracking – fundamental assumptions
• Top-down vs. bottom-up control – in between: language processing (higher-level) controls when the eyes move; visual processing (lower-level) controls where the eyes move (Reichle et al., 1998)
• Eye-mind link hypothesis: attention is where eyes are focused (Just & Carpenter, 1980; 1987)
• Overt and covert attention
• Attention can move with no eye movement, BUT the eyes cannot move without attention
Data from Eye-tracking Devices
• Eye gaze points
– eye gaze points in screen coordinates + distance
– eye fixations in screen coordinates + validity
– pupil diameter
• [head position in 3D, distance from monitor]
• 50/60 Hz, 300 Hz, and 1000–2000 Hz eye-trackers
• Common: 60 Hz – one data record every 16.67 ms (see the data record sketch below)
Tobii T-60 eye-tracker
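For illustration, a minimal sketch of what one 60 Hz gaze data record might contain; the field names are assumptions made for this example and do not reproduce the actual Tobii data format.

```python
from dataclasses import dataclass

SAMPLE_RATE_HZ = 60
SAMPLE_INTERVAL_MS = 1000 / SAMPLE_RATE_HZ   # ≈ 16.67 ms between consecutive records

@dataclass
class GazeSample:
    timestamp_ms: float        # sample time
    gaze_x_px: float           # gaze point in screen coordinates
    gaze_y_px: float
    distance_mm: float         # eye-to-screen distance
    pupil_diameter_mm: float   # pupil diameter
    validity: int              # tracking-quality flag (coding convention assumed)
```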
Eye-Tracking Can …
• Eye tracking can identify the specific content a person acquires from Web pages
• Eye tracking enables high resolution analysis of searcher’s activity during interactions with information systems
• And more…
Example: composing an answer from information on a Web page (video)
Related Work in Information Science
• Interaction with search results
– Interaction with SERPs (Granka et al., 2004; Lorigo et al., 2007; 2008)
– Effects of results presentation (Cutrell et al., 2007; Kammerer et al., 2010)
– Relevance detection (Buscher et al., 2009)
– Implicit feedback (Fu, X., 2009); query expansion (Buscher et al., 2009)
• Relevance detection – pupillometry (Oliveira, Aula & Russell, 2009)
• Detection of task differences from eye-gaze patterns
– Reading/reasoning/search/object manipulation (Iqbal & Bailey, 2004)
– Informational vs. transactional tasks (Terai et al., 2008)
– Task detection is also one of our research interests
Experiment 1: Journalism tasks – Open Web Search
• 32 journalism students
• 4 journalistic tasks (realistic, created by journalism faculty and journalists)
• Tasks:
– advanced obituary (OBI)
– interview preparation (INT)
– copy editing (CPE)
– background information (BIC)
Note: OBI vs. CPE are the most dissimilar
Task facets:
• product: factual vs. intellectual
• level: whole document vs. segment
• nature of task goal
• complexity – number of steps needed
Experiment 1 – Research Questions
• Can we detect task type (differences in task facets) from implicit interaction data (e.g., eye-tracking)?
• How do we aggregate information from eye-tracking data?
Eye-gaze patterns
• Eye-tracking research has frequently analyzed eye-gaze position aggregates ('hot spots')
– spatiotemporal intensity – heat maps
– also sequential – scan paths
• Higher-order patterns:
– reading models
Reading Eye Patterns
• Reading and scanning have easily distinguished patterns of fixations and saccades. (Rayner & Fischer, 1996)
• Lexical processing of words
– Reading research has established that word availability is a function of fixation duration:
– Orthographic recognition: 40–50 ms (the time to move data from eyes to mind)
– Phonological recognition: 55–70 ms
– Lexical availability (typical): 113–150 ms (Rayner, 1998)
– Unfamiliar or complex meanings require longer processing
– Eyes do not saccade until the word has been processed
Scan Fixations vs. Reading Fixations
• Scanning fixations provide some semantic information, limited to foveal (1° visual acuity) visual field (Rayner & Fischer, 1996)
• Fixations in a reading sequence provide more information than isolated “scanning” fixations:
– information is gained from the larger parafoveal region (5° beyond foveal focus), asymmetrical in the direction of reading (Rayner et al., 2003)
– richer semantic structure is available from text compositions (sentences, paragraphs, etc.)
• Some of the types of semantic information available only through reading sequences may be crucial to satisfying task requirements.
Reading Models
• We implemented the E-Z Reader reading model (Reichle et al., 2006)
– Inputs: eye fixation location and duration
– Fixation duration > 113 ms – the threshold for lexical processing (Reingold & Rayner, 2006)
– The algorithm distinguishes reading fixation sequences from isolated fixations, called 'scanning' fixations
– Each lexical fixation is classified as S (scan) or R (reading)
– These sequences are used to create a state model (a simplified classification sketch follows)
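The snippet below is a deliberately simplified sketch of the classification step, not the E-Z Reader implementation used in the study: the 113 ms lexical threshold comes from the slide, while the spatial grouping heuristic and its 150-pixel limit are assumptions made only for this example.

```python
LEXICAL_THRESHOLD_MS = 113   # minimum fixation duration for lexical processing
MAX_SACCADE_PX = 150         # assumed spatial limit for a within-sequence reading saccade

def label_fixations(fixations):
    """Label each lexical fixation 'R' (reading sequence) or 'S' (isolated scanning).

    fixations: list of (x_px, y_px, duration_ms) tuples in temporal order.
    Simplified heuristic: a lexical fixation whose previous or next lexical
    fixation lies within MAX_SACCADE_PX is treated as part of a reading sequence.
    """
    lexical = [(x, y) for x, y, d in fixations if d >= LEXICAL_THRESHOLD_MS]
    labels = []
    for i, (x, y) in enumerate(lexical):
        near_prev = i > 0 and abs(x - lexical[i - 1][0]) <= MAX_SACCADE_PX \
            and abs(y - lexical[i - 1][1]) <= MAX_SACCADE_PX
        near_next = i + 1 < len(lexical) and abs(lexical[i + 1][0] - x) <= MAX_SACCADE_PX \
            and abs(lexical[i + 1][1] - y) <= MAX_SACCADE_PX
        labels.append("R" if (near_prev or near_next) else "S")
    return labels

# Hypothetical fixations: (x_px, y_px, duration_ms)
print(label_fixations([(100, 200, 180), (160, 202, 150), (220, 205, 140), (600, 400, 130)]))
```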
Reading Model – States and Characteristics
• Two states: transition probabilities (a computation sketch follows)
• Number of lexical fixations and duration
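A minimal sketch of computing the two-state transition probabilities from a sequence of S/R fixation labels (such as the output of the classification sketch above); the example label sequence is hypothetical.

```python
def two_state_transitions(labels):
    """Transition probabilities of the two-state model (S = scan, R = read)."""
    counts = {"SS": 0, "SR": 0, "RS": 0, "RR": 0}
    for a, b in zip(labels, labels[1:]):
        counts[a + b] += 1
    from_s = counts["SS"] + counts["SR"]
    from_r = counts["RS"] + counts["RR"]
    return {
        "P(S->S)": counts["SS"] / from_s if from_s else 0.0,
        "P(S->R)": counts["SR"] / from_s if from_s else 0.0,
        "P(R->S)": counts["RS"] / from_r if from_r else 0.0,
        "P(R->R)": counts["RR"] / from_r if from_r else 0.0,
    }

print(two_state_transitions(list("SSRRRSRRSS")))  # hypothetical label sequence
```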
Example Reading Sequence
Results: Search Task Effect on Reading/Scanning
Task effects on transition probabilities S→R & R→S (all subjects & pages)
(Cole, Gwizdka, Liu, Bierig, Belkin & Zhang, 2010)
• For OBI and INT, searchers were biased to continue reading
• For CPE, to continue scanning
Searchers adopt different reading strategies for different task types
Results: Search Task Facets and Text Acquisition
For highly attended pages
[Charts: total text acquisition on SERPs and content pages – per page and in total]
Results: Search Task Facets and State Transitions
For highly attended pages
[Charts: Read→Scan and Scan→Read state transitions per page, on SERPs and on content pages]
Task Facet Effects – Summary
For highly attended pages
(Cole, Gwizdka, Liu, Bierig, Belkin & Zhang, submitted, 2011)
Scan<->Read Transition Probabilities in 2 Experiments
• Is a person’s tendency for read→scan transitions related to their tendency for scan→read transitions? (i.e., is p related to q?)
• p ≈ 1-q (see the sketch below)
correlation (Spearman ρ): 0.914 and 0.830
Journalistic tasks (N=32) Genomics tasks (N=40)
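A sketch of how such a correlation could be computed from per-searcher transition probabilities, using SciPy's Spearman correlation; the numbers below are made-up placeholders, not data from either experiment.

```python
from scipy.stats import spearmanr

# Hypothetical per-searcher transition probabilities
p = [0.42, 0.35, 0.58, 0.47, 0.51]   # P(read -> scan) for each searcher
q = [0.55, 0.68, 0.40, 0.50, 0.46]   # P(scan -> read) for each searcher

rho, pval = spearmanr(p, [1 - x for x in q])
print(f"Spearman rho = {rho:.3f}, p = {pval:.3f}")
```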
Experiment 1: Conclusions
• Searchers’ reading/scanning behavior is affected by the task
• Task facets can be “detected” from eye-tracking data (from reading model properties)
• Reading models can be built on the fly (during search); real-time observations of eye movements can be used by adaptive search systems
• Challenge: Lack of baseline data about reading models of individuals
Experiment 2: Result List vs. Overview Tag-Cloud
• 37 participants
• Everyday information seeking tasks (travel, shopping…)
– two levels of task complexity
• Two user interfaces:
1. List UI
2. Overview UI (Tag Cloud)
Experiment 2: User Actions in Two Interfaces
[Diagrams: user actions in the two interfaces. 1. List: Start → view search results → click result URL → view one result page → click “Back” button → (repeat) → click “Done” & enter answer → End. 2. Overview Tag Cloud: adds New Tag / Delete Tag actions on the search results view.]
Experiment 2: Research Questions
• Does the search results overview benefit users?
• Task effects?
• Individual differences – cognitive ability effects?
General Results
• Search results overview (“tag cloud”) benefited users
– made them faster
– facilitated formulation of more effective queries
• More complex tasks were indeed more demanding – required more search effort
(Gwizdka, Information Research, 2009)
Task, UI, and Reading Model Differences
• Complex tasks required more reading effort
– Longer maximum reading fixation length and more reading fixation regressions
• Overview UI required less effort
– Scanning more likely (S→S higher; S→R lower; R→S higher)
– Total reading scan path length was shorter, but total scan paths (including scanning) were longer
– Fewer and shorter mean fixations per page visited
[Screenshots: List UI and Overview UI]
Task and UI Interaction and Reading Model Data
• For complex tasks, a UI effect
– Higher probability of short reading sequences in the Overview UI
• For simple tasks, a UI effect
– Shorter reading scan paths per page and fewer fixations per page
• Task & UI interaction
– Speed of reading:
• for complex tasks, faster reading in the Overview UI than in the List UI
• for simple tasks, faster in the List UI than in the Overview UI
User Interface Features – Individual Differences
• Two users, same UI and task
Individual Differences – Least Effort?
• Higher cognitive ability searchers were faster in the Overview UI and on simple tasks (same number of queries)
• Higher ability searchers did more in more demanding situations
– higher search effort did not seem to improve task outcomes
F(144,1) = 4.2, p = .042; F(144,1) = 3.1, p = .08
For task complexity factor and working memory (WM)
Task and Working Memory – Eye-tracking Data
• High WM searchers were less likely to keep scanning
• High WM searchers had a higher reading speed (scan path / total fixation duration)
• Number and duration of reading sequences differ (borderline: 0.05 < p < 0.1)
• For high WM searchers:
– more reading for complex tasks
– less reading for simple tasks
• For low WM searchers, no such difference!
Experiment 2: Conclusions
• Overview UI was faster – reflected in some eye-tracking measures
• Task complexity differences were reflected in some eye-tracking measures
• Some effects of cognitive abilities on interaction
– e.g., task & high WM – more effort than needed; opportunistic discovery of information?
– “violation” of the least effort principle not fully explained yet
Current Project: Can We Implicitly Detect Relevance Decisions?
• Start with pupillometry
– info relevance (Oliveira, Russell & Aula, 2009)
– low-level decision timing (Einhäuser et al., 2010)
• Also look at EEG, GSR
Funded by a Google Research Award
[Images: EEG, GSR, eye tracking (pupil animation)]
• Implicit characterization of Information Search Process using physiological devices
• Can we detect when searchers make information relevance decisions?
Tobii T-60 eye-tracker
Emotiv EPOC wireless EEG headset
Summary & Conclusions
• Eye tracking enables high resolution analysis of searcher’s activity during interactions with information systems
• There is more beyond eye-gaze locations with timestamps
• Eye-tracking data:
– can support identification of search task types
– reflects differences in searcher performance on user interfaces
– reflects individual differences between searchers
• High potential for implicit detection of a searcher’s states
Some Challenges
• High-resolution data (low-level)
• How do we create higher-level patterns?
• How do we detect them computationally?
• How do we deal with individual differences (baseline data)?
(Lorigo et al., 2008) (Terai et al., 2008) (Iqbal & Bailey, 2004)
High-resolution Eye-tracking is Coming Soon to You
• Eye-tracking technology is declining in price and in 2–3 years could be part of standard displays
– Already in luxury cars and semi-trucks (sleep detection)
– Computers with built-in eye-tracking
Tobii / Lenovo proof-of-concept eye-tracking laptop – March 2011
Thank you! Questions?
Jacek Gwizdka contact: http://jsg.tel
PoODLE Project: Personalization of the Digital Library Experience
IMLS grant LG-06-07-0105-07
http://comminfo.rutgers.edu/research/poodle or for short: http://bit.ly/poodle_project
PoODLE PIs: Nicholas J. Belkin, Jacek Gwizdka, Xiangmin Zhang
Post-Doc: Ralf Bierig
PhD Students: Michael Cole (Reading Models + E-Z Reader algorithm), Jingjing Liu (now Asst. Prof.), Chang Liu