IBE312 Information Architecture, 2013
Ch. 7 Navigation
Ch. 8 Search
Many of the slides in this slideset are reproduced and/or modified content from publicly available slidesets by Paul Jacobs (2012),
The iSchool, University of Maryland, http://terpconnect.umd.edu/~psjacobs/s12/INFM700s12.htm.
These materials were made available and licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license.
See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.
Ch. 7 Navigation Systems
• Embedded navigation systems
  – Global
  – Local
  – Contextual
• Supplemental navigation systems
  – Sitemaps – reinforce the hierarchy, fast direct access (p. 132)
  – Indexes – bypass the hierarchy; useful when you know what you are looking for; level of granularity (word, paragraph), how created (manually, or automatically with a controlled vocabulary), term rotation (p. 135)
  – Guides – linear navigation; rules of thumb: keep them short, let users exit whenever they wish, keep navigation buttons in the same spot, design them to answer questions, use clear screenshots, give large guides their own table of contents (p. 137)
• Browser navigation features
  – Back, Forward, History, Bookmarks, Favorites, color-coded visited/unvisited links, …
Navigation Systems
• Building context
  – Users should be able to tell where they are even when they did not arrive via the home page (Stress Test, p. 120)
  – http://www.asu.edu/
• Improving flexibility
  – Vertical and lateral navigation (gophersphere, p. 121)
  – It's a balance between flexibility and the dangers of clutter
  – Embedded navigation systems (embedded global systems repeated on each page, expanding global bar, embedded links, subsites, <ALT>)
• Advanced navigation approaches
  – Personalization (we guess what the user wants) and customization (the user has direct control over what they see) (p. 139)
  – Visualization
  – Social navigation – Flickr tag clouds: the size of a tag shows its popularity
Supporting the “Middle Game”
• Navigation systems must support moves through the information space
• Analogy: User views a projection of the information space
[Diagram: users' needs are mediated by organization systems, navigation systems, and page layout and design. The user sees a projection of the information space containing possibly relevant information; the possible "moves" are narrow (n), broaden (b), shift (s), and jump (j).]
Navigation Patterns
• Movement in the organization hierarchy
  – Move up a level
  – Move down a level
  – Move to a sister
  – Move to the next item (natural sequences)
  – Move to a sister of the parent
• Drive to content
• Drive to advertisement
• Jump to related
• Jump to recommendations
Navigation Patterns
[Diagram: a path through pages that are mostly navigation toward pages that are mostly content ($$).]
Types of Navigation Systems
• Global– Shown everywhere– Tells the user “what’s important”
• Local– Shown in specific parts of the site– Tells the user “what’s nearby”
• Contextual– Shown only in specific situations– Tells the user “what’s related”
You are here
• Remind users "where they are"
• Not everyone starts from the front page
• Don't assume that the "back button" is meaningful
[Examples from Amazon and IBM]
Designing CRAPy Pages
• Contrast: make different things different
  – to bring out dominant elements
  – to mute lesser elements
• Repetition: repeat design throughout the interface
  – to create consistency
  – to foster familiarity
• Alignment: visually connect elements
  – to create flow
  – to convey organization
• Proximity: make effective use of spacing
  – to group related elements
  – to separate unrelated elements
From: Saul Greenberg
CRAPy Pages: Contrast
[Example: four versions of a list; without contrast every line looks alike, while with contrast the "Important" line stands out from the "Less important" lines.]
CRAPy Pages: Repetition
[Example: two bullet blocks ("Block 1", "Block 2") that repeat the same visual structure.]
http://www.trademarks.umd.edu/trademarks/web.cfm
CRAPy Pages: Alignment
[Example: major bullets with indented secondary bullets.]
Alignment denotes items “at the same level”
CRAPy Pages: Proximity
[Example: "Important" and "Less important" items grouped by tight spacing read as related; items separated by white space read as less related.]
Page Layout: Conventions
[Diagrams of conventional page layouts: global navigation across the top, local navigation in a left-hand column, contextual navigation alongside or within the content area.]
It's all about the grid!
• Natural correspondence to the organization hierarchy
• Conveys structure
• Easy to implement in tables
• Easy to control alignment and proximity
Grid Layout: NY Times
[Annotated screenshot: global navigation across the top, a banner ad, another ad, the content area, and a "popular articles" block.]
Grid Layout: eBay
[Annotated screenshot: global navigation, a banner ad, search results, local navigation in a left-hand column, and search navigation.]
Grid Layout: Amazon
[Annotated screenshot: global navigation, search results, and contextual navigation.]
Beware: Navigation Overload
[Diagram: a page crowded with navigation, more navigation, and even more navigation, leaving little room for content.]
Ch. 8 Search systems
• Does your site need search?
  – Enough content? Will it steal resources from navigation? Enough time? Better alternatives? Will it be used?
• When does your site need search?
  – Too much content to browse; the site is fragmented; your users expect it; to tame dynamism
The Search Cycle
[Diagram: source selection → query formulation → search → result selection → document examination → information delivery, with loops back for source reselection and for system, vocabulary, concept, and document discovery.]
Choosing what to search
• Determining search zones – by audience type and topical zones (p. 152)
  – Basis of search zones: content type, audience, role, subject/topic, geography, chronology, author, department… (but some users just want to search the whole site)
• Navigation versus destination – destination pages contain the actual information; sometimes pages are both (search zones and indexes by audience, p. 154)
Search Algorithms
• Recall and precision (p. 159) – see the sketch below
  – Recall = (# relevant documents retrieved) / (# relevant documents in the collection)
  – Precision = (# relevant documents retrieved) / (# total documents retrieved)
• Stemming
  – "computer" shares the common root "comput" with "computation", "computing", "computers", …
  – Weak stemming only conflates the plural forms of a word in the search
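A quick sketch of the two formulas above in Python; the document IDs and relevance judgments are made up for illustration.

```python
# Recall and precision over a toy result set (hypothetical document IDs).
retrieved = {"d1", "d2", "d3", "d7"}          # documents the engine returned
relevant = {"d2", "d3", "d5", "d8", "d9"}     # all relevant documents in the collection

hits = retrieved & relevant                   # relevant documents that were retrieved
recall = len(hits) / len(relevant)            # 2 / 5 = 0.4
precision = len(hits) / len(retrieved)        # 2 / 4 = 0.5
print(f"recall={recall:.2f}, precision={precision:.2f}")
```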
Presenting Search Results
• How much information to present
• How many results (documents) to display
• Listing results (sorting)
  – Alphabetical
  – Chronological
  – Ranking (relevance, popularity, experts, pay-for-placement, etc.)
• Grouping results
• Exporting results (email, printing)
• Designing the search interface (pp. 178-192)
• Google Custom Search Engine (https://www.google.com/cse/)
• https://www.youtube.com/watch?v=KyCYyoGusqs#t=14
How do we represent text?
• Remember: computers don't "understand" documents or queries
• Simple, yet effective approach: "bag of words" (sketched below)
  – Treat all the words in a document as index terms
  – Assign a "weight" to each term based on "importance"
  – Disregard order, structure, meaning, etc. of the words
• Assumptions
  – Term occurrence is independent
  – Document relevance is independent
  – "Words" are well-defined
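A minimal bag-of-words sketch in Python; lowercasing plus whitespace splitting is an oversimplification (see "What's a word?" below), but it shows the idea of term counts as weights.

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # Treat every whitespace-separated token as an index term,
    # weighted by its raw count; order and structure are discarded.
    return Counter(text.lower().split())

doc = "McDonald's is cutting the amount of bad fat in its french fries"
print(bag_of_words(doc).most_common(3))
```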
What's a word?
天主教教宗若望保祿二世因感冒再度住進醫院。這是他今年第二度因同樣的病因住院。 - باسم الناطق ريجيف مارك وقال
قبل - شارون إن اإلسرائيلية الخارجيةبزيارة األولى للمرة وسيقوم الدعوة
المقر طويلة لفترة كانت التي تونس،عام لبنان من خروجها بعد الفلسطينية التحرير لمنظمة الرسمي
1982. Выступая в Мещанском суде Москвы экс-глава ЮКОСа заявил не совершал ничего противозаконного, в чем обвиняет его генпрокуратура России.
भा�रत सरका�र ने आर्थि� का सर्वे�क्षण में� विर्वेत्ती�य र्वेर्ष� 2005-06 में� स�त फ़ी�सदी� विर्वेका�स दीर हा�सिसल कारने का� आकालने विकाय� हा! और कार स#धा�र पर ज़ो'र दिदीय� हा!
日米連合で台頭中国に対処…アーミテージ前副長官提言
조재영 기자 = 서울시는 25 일 이명박 시장이 ` 행정중심복합도시 '' 건설안에 대해 ` 군대라도 동원해 막고싶은 심정 '' 이라고 말했다는 일부 언론의 보도를 부인했다 .
Sample Document

McDonald's slims down spuds

Fast-food chain to reduce certain types of fat in its french fries with new cooking oil.

NEW YORK (CNN/Money) - McDonald's Corp. is cutting the amount of "bad" fat in its french fries nearly in half, the fast-food chain said Tuesday as it moves to make all its fried menu items healthier.

But does that mean the popular shoestring fries won't taste the same? The company says no. "It's a win-win for our customers because they are getting the same great french-fry taste along with an even healthier nutrition profile," said Mike Roberts, president of McDonald's USA.

But others are not so sure. McDonald's will not specifically discuss the kind of oil it plans to use, but at least one nutrition expert says playing with the formula could mean a different taste.

Shares of Oak Brook, Ill.-based McDonald's (MCD: down $0.54 to $23.22, Research, Estimates) were lower Tuesday afternoon. It was unclear Tuesday whether competitors Burger King and Wendy's International (WEN: down $0.80 to $34.91, Research, Estimates) would follow suit. Neither company could immediately be reached for comment.
…
14 × McDonald's
12 × fat
11 × fries
8 × new
6 × company, french, nutrition
5 × food, oil, percent, reduce, taste, Tuesday
…
“Bag of Words”
What's the point?
• Retrieving relevant information is hard!
  – Evolving, ambiguous user needs, context, etc.
  – Complexities of language
• To operationalize information retrieval, we must vastly simplify the picture
• Bag-of-words approach:
  – Information retrieval is all (and only) about matching words in documents with words in queries
  – Obviously not true…
  – But it works pretty well!
Why does "bag of words" work?
• Words alone tell us a lot about content
• It is relatively easy to come up with words that describe an information need
Random: beating takes points falling another Dow 355
Alphabetical: 355 another beating Dow falling points
“Interesting”: Dow points beating falling 355 another
Actual: Dow takes another beating, falling 355 points
Boolean Retrieval
• Users express queries as a Boolean expression
  – AND, OR, NOT
  – Can be arbitrarily nested
• Retrieval is based on the notion of sets
  – Any given query divides the collection into two sets: retrieved and not-retrieved (the complement)
  – Pure Boolean systems do not define an ordering of the results
AND / OR / NOT
[Venn diagram: sets A, B, and C within the set of all documents.]

Logic Tables (rows = A, columns = B; 1 = document is in the result set)

A OR B:
        B=0  B=1
  A=0    0    1
  A=1    1    1

A AND B:
        B=0  B=1
  A=0    0    0
  A=1    0    1

A NOT B (= A AND NOT B):
        B=0  B=1
  A=0    0    0
  A=1    1    0

NOT B:
        B=0  B=1
         1    0
Representing Documents

Document 1: "The quick brown fox jumped over the lazy dog's back."
Document 2: "Now is the time for all good men to come to the aid of their party."

Stopword list: the, is, for, to, of

Term–document incidence (1 = term occurs in the document):

Term     Doc 1  Doc 2
quick      1      0
brown      1      0
fox        1      0
over       1      0
lazy       1      0
dog        1      0
back       1      0
now        0      1
time       0      1
all        0      1
good       0      1
men        0      1
come       0      1
jump       1      0
aid        0      1
their      0      1
party      0      1
Boolean View of a Collection
[Term–document matrix: the same seventeen terms as rows, documents Doc 1 through Doc 8 as columns, with a 1 wherever a term occurs in a document.]
Each column represents the view of a particular document: What terms are contained in this document?
Each row represents the view of a particular term: What documents contain this term?
To execute a query, pick out the rows corresponding to the query terms and then apply the logic table of the corresponding Boolean operator.
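A small sketch of that row-picking idea in Python, with each term's row kept as a set of document IDs; the dog and fox rows follow the example used in the next slides.

```python
# Each term maps to the set of documents containing it (one "row" of the matrix).
term_rows = {
    "dog": {3, 5},
    "fox": {3, 5, 7},
}
all_docs = set(range(1, 9))   # Doc 1 .. Doc 8

# Boolean operators become set operations on the picked rows.
print(term_rows["dog"] & term_rows["fox"])   # AND -> {3, 5}
print(term_rows["dog"] | term_rows["fox"])   # OR  -> {3, 5, 7}
print(term_rows["fox"] - term_rows["dog"])   # fox NOT dog -> {7}
print(all_docs - term_rows["fox"])           # NOT fox
```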
Sample Queries

Over the eight-document collection above:

  dog AND fox              → Doc 3, Doc 5
  dog OR fox               → Doc 3, Doc 5, Doc 7
  dog NOT fox              → (empty)
  fox NOT dog              → Doc 7

  good AND party           → Doc 6, Doc 8
  good AND party NOT over  → Doc 6   (over occurs in Docs 1, 3, 5, 7, and 8)
Inverted Index
[The same term–document matrix stored as an inverted index: each term maps to a postings list of the documents that contain it, e.g., dog → 3, 5 and fox → 3, 5, 7.]
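A minimal sketch of building such an index in Python, using the two example documents from the earlier slide; the tokenization here is deliberately crude.

```python
from collections import defaultdict

STOPWORDS = {"the", "is", "for", "to", "of"}

docs = {
    1: "The quick brown fox jumped over the lazy dog's back",
    2: "Now is the time for all good men to come to the aid of their party",
}

# term -> sorted postings list of document IDs
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().replace("'s", "").split():
        if token not in STOPWORDS:
            index[token].add(doc_id)

postings = {term: sorted(ids) for term, ids in index.items()}
print(postings["fox"], postings["party"])   # [1] [2]
```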
Boolean Retrieval
• To execute a Boolean query:
  – Build the query syntax tree
  – For each clause, look up its postings
  – Traverse the postings and apply the Boolean operator
• Efficiency analysis
  – Postings traversal is linear (assuming sorted postings)
  – Start with the shortest postings first

Example: ( fox OR dog ) AND quick

  Query syntax tree: AND( OR( fox, dog ), quick )

  dog → 3, 5
  fox → 3, 5, 7
  fox OR dog = union = 3, 5, 7
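The same ( fox OR dog ) AND quick evaluation as a Python sketch; the dog and fox postings are from the slide, while the quick postings are hypothetical.

```python
# Sorted postings lists (dog and fox as above; quick is made up for illustration).
postings = {
    "dog": [3, 5],
    "fox": [3, 5, 7],
    "quick": [3, 7],
}

# Evaluate the tree bottom-up: OR = union, AND = intersection.
fox_or_dog = sorted(set(postings["fox"]) | set(postings["dog"]))   # [3, 5, 7]
result = sorted(set(fox_or_dog) & set(postings["quick"]))          # intersection
print(result)
```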
Proximity Operators
• Simple implementation
  – Store word offsets in the postings
  – Treat proximity queries like AND, with additional constraints
• Disadvantages
  – What happens to the index size?
  – How can users select the proper threshold?
Why Boolean Retrieval Works
• Boolean operators approximate natural language
• How so?
  – AND can discover relationships between concepts
    • (e.g., good party)
  – OR can discover alternate terminology
    • (e.g., excellent party, wild party, etc.)
  – NOT can discover alternate meanings
    • (e.g., Democratic party)
The Perfect Query Paradox
• Every information need has a perfect set of documents
  – If not, there would be no sense in doing retrieval
• Every document set has a perfect query
  – AND every word in a document to get a query for it
  – Repeat for each document in the set
  – OR every document query to get the query for the set
• But can users realistically be expected to formulate this perfect query?
  – Boolean query formulation is hard!
Why Boolean Retrieval Fails
• Natural language is way more complex
• How so?
  – AND "discovers" nonexistent relationships
    • Terms in different sentences, paragraphs, …
  – Guessing terminology for OR is hard
    • good, nice, excellent, outstanding, awesome, …
  – Guessing terms to exclude is even harder!
    • Democratic party, party to a lawsuit, …
Strengths and Weaknesses
• Strengths
  – Precise, if you know the right strategies
  – Precise, if you know what you're looking for
  – It's fast
• Weaknesses
  – Users must learn Boolean logic
  – Boolean logic is insufficient to capture the richness of language
  – No control over the size of the result set: either too many documents or none
  – When do you stop reading? All documents in the result set are considered "equally good"
  – What about partial matches? Documents that "don't quite match" the query may be useful too
Ranked Retrieval
• Order documents by how likely they are to be relevant to the information need
  – Estimate relevance(q, d_i)
  – Sort documents by relevance
  – Display sorted results
• User model
  – Present results one screen at a time, best results first
  – At any point, users can decide to stop looking
• How do we estimate relevance?
  – Assume that document d is relevant to query q if they share words in common
  – Replace relevance(q, d_i) with sim(q, d_i)
  – Compute the similarity of the vector representations
Vector Representation
• "Bags of words" can be represented as vectors
  – Why? Computational efficiency, ease of manipulation
  – Geometric metaphor: "arrows"
• A vector is a set of values recorded in any consistent order

"The quick brown fox jumped over the lazy dog's back"

[ 1 1 1 1 1 1 1 1 2 ]

1st position corresponds to "back"
2nd position corresponds to "brown"
3rd position corresponds to "dog"
4th position corresponds to "fox"
5th position corresponds to "jump"
6th position corresponds to "lazy"
7th position corresponds to "over"
8th position corresponds to "quick"
9th position corresponds to "the"
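A sketch of producing that vector in Python: fix a consistent (here alphabetical) term order and read off the counts.

```python
# Bag of words for the sentence, using the slide's normalized terms.
bag = {"back": 1, "brown": 1, "dog": 1, "fox": 1, "jump": 1,
       "lazy": 1, "over": 1, "quick": 1, "the": 2}

vocabulary = sorted(bag)                        # consistent term order
vector = [bag.get(term, 0) for term in vocabulary]
print(vocabulary)
print(vector)                                   # [1, 1, 1, 1, 1, 1, 1, 1, 2]
```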
Vector Space Model

Assumption: documents that are "close together" in vector space "talk about" the same things.

[Diagram: documents d1–d5 as vectors in a space spanned by terms t1, t2, t3; the angles (θ, φ) between vectors measure how close they are.]

Therefore, retrieve documents based on how close the document is to the query (i.e., similarity ≈ "closeness").
Similarity Metric
• How about |d1 − d2|?
• Instead of Euclidean distance, use the "angle" between the vectors
  – It all boils down to the inner product (dot product) of the vectors:

  \cos\theta = \frac{\vec{d_j} \cdot \vec{d_k}}{|\vec{d_j}|\,|\vec{d_k}|}

  sim(d_j, d_k) = \frac{\sum_{i=1}^{n} w_{i,j}\, w_{i,k}}{\sqrt{\sum_{i=1}^{n} w_{i,j}^2}\;\sqrt{\sum_{i=1}^{n} w_{i,k}^2}}

Components of Similarity
• The "inner product" (aka dot product) is the key to the similarity function: \vec{d_j} \cdot \vec{d_k} = \sum_{i=1}^{n} w_{i,j}\, w_{i,k}
• The denominator handles document length normalization: |\vec{d_j}| = \sqrt{\sum_{i=1}^{n} w_{i,j}^2}
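The cosine similarity formula above as a small Python sketch; the two weight vectors are hypothetical.

```python
import math

def cosine_similarity(d_j, d_k):
    """Cosine of the angle between two term-weight vectors."""
    dot = sum(wj * wk for wj, wk in zip(d_j, d_k))
    norm_j = math.sqrt(sum(w * w for w in d_j))
    norm_k = math.sqrt(sum(w * w for w in d_k))
    return dot / (norm_j * norm_k)

doc = [0.0, 2.0, 1.0, 3.0]      # hypothetical document weights
query = [0.0, 1.0, 0.0, 1.0]    # hypothetical query weights
print(round(cosine_similarity(doc, query), 3))
```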
Term Weighting
• Term weights consist of two components
  – Local: how important is the term in this document?
  – Global: how important is the term in the collection?
• Here's the intuition:
  – Terms that appear often in a document should get high weights
  – Terms that appear in many documents should get low weights
• How do we capture this mathematically?
  – Term frequency (local)
  – Inverse document frequency (global)
TF.IDF Term Weighting

  w_{i,j} = \mathrm{tf}_{i,j} \times \log \frac{N}{n_i}

where
  w_{i,j}  = weight assigned to term i in document j
  tf_{i,j} = number of occurrences of term i in document j
  N        = number of documents in the entire collection
  n_i      = number of documents containing term i
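The weighting formula as a small Python sketch, with toy numbers.

```python
import math

def tf_idf(tf_ij, n_i, N):
    # w_ij = tf_ij * log10(N / n_i), following the formula above.
    return tf_ij * math.log10(N / n_i)

# A term occurring 3 times in a document and appearing in 2 of 4 documents.
print(round(tf_idf(tf_ij=3, n_i=2, N=4), 3))   # 3 * log10(2) ≈ 0.903
```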
TF.IDF Example
[Worked example over a four-document collection with the terms complicated, contaminated, fallout, information, interesting, nuclear, retrieval, and siberia, whose idf values (log10 of N/n_i with N = 4) are 0.301, 0.125, 0.125, 0.125, 0.602, 0.301, 0.000, and 0.602 respectively. The original table lists each term's tf per document and the resulting tf.idf postings.]
Document Scoring Algorithm
• Initialize accumulators to hold the document scores
• For each query term t in the user's query
  – Fetch t's postings
  – For each document in the postings: score_doc += w_{t,d} × w_{t,q}
• Apply length normalization to the scores at the end
• Return the top N documents
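A compact sketch of that accumulator loop in Python; the postings, query weights, and document lengths are all toy values.

```python
from collections import defaultdict

# term -> {doc_id: w_{t,d}} (precomputed document weights, hypothetical)
postings = {
    "nuclear": {1: 0.6, 3: 0.3},
    "fallout": {1: 0.1, 2: 0.5},
}
query_weights = {"nuclear": 0.7, "fallout": 0.2}   # w_{t,q}
doc_length = {1: 1.2, 2: 0.9, 3: 0.8}              # for length normalization

scores = defaultdict(float)                 # accumulators
for term, w_tq in query_weights.items():    # for each query term
    for doc_id, w_td in postings[term].items():
        scores[doc_id] += w_td * w_tq       # score_doc += w_{t,d} * w_{t,q}

normalized = {d: s / doc_length[d] for d, s in scores.items()}
top = sorted(normalized.items(), key=lambda kv: kv[1], reverse=True)
print(top[:2])                              # top N documents (N = 2 here)
```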
Indexing: Performance Analysis
• Inverted indexing is fundamental to all IR models
• Fundamentally, a large sorting problem
  – Terms usually fit in memory
  – Postings usually don't
• Lots of clever optimizations…
• How large is the inverted index?
  – Size of the vocabulary
  – Size of the postings
Vocabulary Size: Heaps' Law

  V = K n^\beta

  V = vocabulary size
  n = corpus size (number of tokens)
  K, β = constants; typically K is between 10 and 100, and β is between 0.4 and 0.6

• In other words:
  – When adding new documents, the system is likely to have seen most terms already
  – But the postings keep growing
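A tiny sketch of what the law describes: tracking vocabulary size as a function of the number of tokens seen (toy text, not a fitted model).

```python
def vocabulary_growth(tokens):
    """Return (tokens seen so far, vocabulary size) after each token."""
    seen, growth = set(), []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        growth.append((i, len(seen)))
    return growth

text = "to be or not to be that is the question".split()
print(vocabulary_growth(text))
```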
Postings Size: Zipf's Law
• George Kingsley Zipf (1902-1950) observed the following relation between the r-th most frequent event and its frequency f:

  f = \frac{c}{r}  (equivalently, f \cdot r = c)

  f = frequency, r = rank, c = constant

• In other words:
  – A few elements occur very frequently
  – Many elements occur very infrequently
• Zipfian distributions:
  – English words
  – Library book checkout patterns
  – Website popularity (almost anything on the Web)
Word Frequency in English
the 1130021     from 96900       or 54958
of 547311       he 94585         about 53713
to 516635       million 93515    market 52110
a 464736        year 90104       they 51359
in 390819       its 86774        this 50933
and 387703      be 85588         would 50828
that 204351     was 83398        you 49281
for 199340      company 83070    which 48273
is 152483       an 76974         bank 47940
said 148302     has 74405        stock 47401
it 134323       are 74097        trade 47310
on 121173       have 73132       his 47116
by 118863       but 71887        more 46244
as 109135       will 71494       who 42142
at 101779       say 66807        one 41635
mr 101679       new 64456        their 40910
with 101210     share 63925
Frequency of 50 most common words in English (sample of 19 million words)
Does it fit Zipf's Law?

The table shows r × f × 1000 / n for each word, where r is the rank of word w in the sample, f is the frequency of word w in the sample, and n is the total number of word occurrences in the sample; the product is roughly constant, as Zipf's law predicts:

the 59      from 92        or 101
of 58       he 95          about 102
to 82       million 98     market 101
a 98        year 100       they 103
in 103      its 100        this 105
and 122     be 104         would 107
that 75     was 105        you 106
for 84      company 109    which 107
is 72       an 105         bank 109
said 78     has 106        stock 110
it 78       are 109        trade 112
on 77       have 112       his 114
by 81       but 114        more 114
as 80       will 117       who 106
at 80       say 113        one 107
mr 86       new 112        their 108
with 91     share 114
Summary thus far…
• Represent documents as "bags of words"
• Reduce retrieval to a problem of matching words
• Back to this question: what's a word?
  – First try: words are separated by spaces
    The cat on the mat. → the, cat, on, the, mat
  – What about clitics?
    I'm not saying that I don't want John's input on this.
What's a word?
天主教教宗若望保祿二世因感冒再度住進醫院。這是他今年第二度因同樣的病因住院。 - باسم الناطق ريجيف مارك وقال
قبل - شارون إن اإلسرائيلية الخارجيةبزيارة األولى للمرة وسيقوم الدعوة
المقر طويلة لفترة كانت التي تونس،عام لبنان من خروجها بعد الفلسطينية التحرير لمنظمة الرسمي
1982. Выступая в Мещанском суде Москвы экс-глава ЮКОСа заявил не совершал ничего противозаконного, в чем обвиняет его генпрокуратура России.
भा�रत सरका�र ने आर्थि� का सर्वे�क्षण में� विर्वेत्ती�य र्वेर्ष� 2005-06 में� स�त फ़ी�सदी� विर्वेका�स दीर हा�सिसल कारने का� आकालने विकाय� हा! और कार स#धा�र पर ज़ो'र दिदीय� हा!
日米連合で台頭中国に対処…アーミテージ前副長官提言
조재영 기자 = 서울시는 25 일 이명박 시장이 ` 행정중심복합도시 '' 건설안에 대해 ` 군대라도 동원해 막고싶은 심정 '' 이라고 말했다는 일부 언론의 보도를 부인했다 .
Tokenization Problem
• In many languages, words are not separated by spaces…
• Tokenization = separating a string into "words"
• Simple greedy approach (sketched below):
  – Start with a list of every possible term (e.g., from a dictionary)
  – Find the longest term in the list that matches the start of the unsegmented string
  – Take that longest matching term as the next word and repeat
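A minimal sketch of that greedy longest-match segmentation, using a toy dictionary; a real system would use a much larger lexicon.

```python
def greedy_segment(text, dictionary):
    """Repeatedly take the longest dictionary term matching the start of the string."""
    words, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):              # try the longest match first
            if text[i:j] in dictionary or j == i + 1:  # fall back to a single character
                words.append(text[i:j])
                i = j
                break
    return words

print(greedy_segment("thecatonthemat", {"the", "cat", "on", "mat"}))
# ['the', 'cat', 'on', 'the', 'mat']
```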
Indexing N-Grams
• Consider a Chinese document: c1 c2 c3 … cn
• Don't segment (you could be wrong!)
• Instead, treat every character bigram as a term:
    c1 c2, c2 c3, c3 c4, c4 c5, …, cn-1 cn
• Break up queries the same way
• Instead of bigrams, also try trigrams…
• Works at least as well as trying to segment correctly!
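A small sketch of character-bigram indexing: every adjacent pair of characters becomes an index term, so no segmentation is needed.

```python
def char_bigrams(text):
    """Return all adjacent character pairs, ignoring whitespace."""
    chars = [c for c in text if not c.isspace()]
    return [chars[i] + chars[i + 1] for i in range(len(chars) - 1)]

print(char_bigrams("天主教教宗"))
# ['天主', '主教', '教教', '教宗']
```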
Morphological Variation
• Handling morphology: related concepts have different forms
  – Inflectional morphology: same part of speech
      dogs = dog + PLURAL
      broke = break + PAST
  – Derivational morphology: different parts of speech
      destruction = destroy + -ion
      researcher = research + -er
• Different morphological processes:
  – Prefixing
  – Suffixing
  – Infixing
  – Reduplication
Stemming
• Dealing with morphological variation: index stems instead of words
  – Stem: a word equivalence class that preserves the central concept
• How much to stem?
  – organization → organize → organ?
  – resubmission → resubmit / submission → submit?
  – reconstructionism → ?
Stemmers
• The Porter stemmer is a commonly used stemmer
  – Strips off common affixes
  – Not perfect!
• Many other stemming algorithms are available

Errors of commission (incorrectly lumps unrelated terms together): doe/doing, execute/executive, ignore/ignorant
Errors of omission (fails to lump related terms together): create/creation, europe/european, cylinder/cylindrical
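A small sketch using NLTK's Porter stemmer (requires the nltk package); it simply prints the stem produced for a few of the words mentioned above.

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["computers", "computation", "computing",
             "executive", "execute", "creation", "create"]:
    print(word, "->", stemmer.stem(word))
```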
Does Stemming Work?
• Generally, yes! (in English)
  – Helps more for longer queries
  – Lots of work has been done in this area:

Donna Harman (1991). How Effective is Suffixing? Journal of the American Society for Information Science, 42(1):7-15.
Robert Krovetz (1993). Viewing Morphology as an Inference Process. Proceedings of SIGIR 1993.
David A. Hull (1996). Stemming Algorithms: A Case Study for Detailed Evaluation. Journal of the American Society for Information Science, 47(1):70-84.
And others…
Stemming in Other Languages
• Arabic makes frequent use of infixes: the root ktb gives
    maktab (office), kitaab (book), kutub (books), kataba (he wrote), naktubu (we write), etc.
• What's the most effective stemming strategy in Arabic? Open question…
Beyond Words…
• Tokenization is a specific instance of a more general problem: what unit should we index?
• Other units of indexing:
  – Concepts (e.g., from WordNet)
  – Named entities
  – Relations
  – …