Page 1: Information Retrieval

Information Retrieval

CSE 8337 (Part D), Spring 2009

Some material for these slides obtained from:

Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto
http://www.sims.berkeley.edu/~hearst/irbook/

Data Mining: Introductory and Advanced Topics by Margaret H. Dunham
http://www.engr.smu.edu/~mhd/book

Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze
http://informationretrieval.org

Page 2: Information Retrieval

CSE 8337 Spring 2009 2

CSE 8337 Outline
• Introduction
• Simple Text Processing
• Boolean Queries
• Web Searching/Crawling
• Indexes
• Vector Space Model
• Matching
• Evaluation

Page 3: Information Retrieval

CSE 8337 Spring 2009 3

Why System Evaluation?
• There are many retrieval models/algorithms/systems: which one is the best?
• What does "best" mean? IR evaluation may not actually look at traditional CS metrics of space/time.
• What is the best component for:
  – Ranking function (dot-product, cosine, …)
  – Term selection (stopword removal, stemming, …)
  – Term weighting (TF, TF-IDF, …)
• How far down the ranked list will a user need to look to find some/all relevant documents?

Page 4: Information Retrieval

CSE 8337 Spring 2009 4

Measures for a search engine
• How fast does it index?
  – Number of documents/hour
  – (Average document size)
• How fast does it search?
  – Latency as a function of index size
• Expressiveness of query language
  – Ability to express complex information needs
  – Speed on complex queries
• Uncluttered UI
• Is it free?

Page 5: Information Retrieval

CSE 8337 Spring 2009 5

Measures for a search engine
• All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise.
• The key measure: user happiness.
  – What is this?
  – Speed of response/size of index are factors.
  – But blindingly fast, useless answers won't make a user happy.
• Need a way of quantifying user happiness.

Page 6: Information Retrieval

CSE 8337 Spring 2009 6

Happiness: elusive to measure
• Most common proxy: relevance of search results.
• But how do you measure relevance?
• We will detail a methodology here, then examine its issues.
• Relevance measurement requires 3 elements:
  1. A benchmark document collection
  2. A benchmark suite of queries
  3. A usually binary assessment of either Relevant or Nonrelevant for each query and each document

Page 7: Information Retrieval

CSE 8337 Spring 2009 7

Difficulties in Evaluating IR Systems
• Effectiveness is related to the relevancy of retrieved items.
• Relevancy is not typically binary but continuous.
• Even if relevancy is binary, it can be a difficult judgment to make.
• Relevancy, from a human standpoint, is:
  – Subjective: Depends upon a specific user's judgment.
  – Situational: Relates to the user's current needs.
  – Cognitive: Depends on human perception and behavior.
  – Dynamic: Changes over time.

Page 8: Information Retrieval

CSE 8337 Spring 2009 8

How to perform evaluation
• Start with a corpus of documents.
• Collect a set of queries for this corpus.
• Have one or more human experts exhaustively label the relevant documents for each query.
• Typically assumes binary relevance judgments.
• Requires considerable human effort for large document/query corpora.

Page 9: Information Retrieval

CSE 8337 Spring 2009 9

IR Evaluation Metrics
• Precision/Recall
  – P/R graph
    • Regular
    • Smoothing: interpolating, averaging
  – ROC curve
  – MAP
  – R-precision
  – P/R points
• F-measure
• E-measure
• Fallout
• Novelty
• Coverage
• Utility
• …

Page 10: Information Retrieval

CSE 8337 Spring 2009 10

Precision and Recall

recall = (number of relevant documents retrieved) / (total number of relevant documents)

precision = (number of relevant documents retrieved) / (total number of documents retrieved)

[Figure: Venn diagram of the entire document collection showing the relevant and retrieved sets, partitioning documents into retrieved & relevant, relevant but not retrieved, retrieved & irrelevant, and not retrieved & irrelevant.]
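
To make the two definitions concrete, here is a minimal Python sketch (not from the slides; function and variable names are illustrative) that computes precision and recall from a set of retrieved document ids and a set of relevant document ids:

def precision_recall(retrieved, relevant):
    """Compute (precision, recall) from sets of document ids."""
    retrieved = set(retrieved)
    relevant = set(relevant)
    hits = len(retrieved & relevant)                 # retrieved & relevant
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 4 documents retrieved, 6 relevant overall, 3 of them retrieved.
print(precision_recall({1, 2, 3, 4}, {1, 2, 4, 7, 8, 9}))   # (0.75, 0.5)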

Page 11: Information Retrieval

CSE 8337 Spring 2009 11

Precision and Recall
• Precision: The ability to retrieve top-ranked documents that are mostly relevant.
• Recall: The ability of the search to find all of the relevant items in the corpus.

Page 12: Information Retrieval

CSE 8337 Spring 2009 12

Determining Recall is Difficult
• The total number of relevant items is sometimes not available. Two workarounds:
  – Sample across the database and perform relevance judgments on these items.
  – Apply different retrieval algorithms to the same database for the same query; the aggregate of relevant items found is taken as the total relevant set.

Page 13: Information Retrieval

CSE 8337 Spring 2009 13

Trade-off between Recall and Precision

[Figure: precision (y-axis, 0 to 1) vs. recall (x-axis, 0 to 1). The ideal is the upper-right corner (high precision and high recall), with the desired area near it. A system in the high-precision/low-recall region returns relevant documents but misses many useful ones; a system in the high-recall/low-precision region returns most relevant documents but includes lots of junk.]

Page 14: Information Retrieval

CSE 8337 Spring 2009 14

Recall-Precision Graph Example

[Figure: example recall-precision graph; precision (0 to 1) plotted against recall (0 to 1).]

Page 15: Information Retrieval

CSE 8337 Spring 2009 15

A precision-recall curve

[Figure: a sawtooth precision-recall curve; precision (0.0 to 1.0) plotted against recall (0.0 to 1.0).]

Page 16: Information Retrieval

CSE 8337 Spring 2009 16

Recall-Precision Graph Smoothing
• Avoid sawtooth lines by smoothing
• Interpolate for one query
• Average across queries

Page 17: Information Retrieval

CSE 8337 Spring 2009 17

Interpolating a Recall/Precision Curve

Interpolate a precision value for each standard recall level:
  rj ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
  (r0 = 0.0, r1 = 0.1, …, r10 = 1.0)

The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level between the j-th and (j+1)-th level:

  P(rj) = max P(r) over rj ≤ r ≤ rj+1
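
A minimal Python sketch of this interpolation step (illustrative names, not from the slides): given the (recall, precision) points for one query, it reports, for each standard recall level, the maximum precision at any recall at or above that level, i.e., the "max of precisions to the right" rule stated on the next slide.

def interpolated_precision(pr_points, levels=None):
    """pr_points: list of (recall, precision) pairs for one query.
    Returns {recall level: interpolated precision}, taking the maximum
    precision at any recall >= that level."""
    if levels is None:
        levels = [i / 10 for i in range(11)]         # 0.0, 0.1, ..., 1.0
    interp = {}
    for level in levels:
        candidates = [p for r, p in pr_points if r >= level]
        interp[level] = max(candidates) if candidates else 0.0
    return interp

# Example using the recall/precision points of the worked ranking on a later slide:
points = [(0.167, 1.0), (0.333, 1.0), (0.5, 0.75), (0.667, 0.667), (0.833, 0.385)]
print(interpolated_precision(points))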

Page 18: Information Retrieval

CSE 8337 Spring 2009 18

Interpolated precision
• Idea: If locally precision increases with increasing recall, then you should get to count that.
• So you take the max of the precisions to the right of the value.
• (Need not be at only the standard levels.)

Page 19: Information Retrieval

CSE 8337 Spring 2009 19

Precision across queries
• Recall and precision are calculated for a specific query.
• Generally want a value for many queries.
• Calculate average precision/recall over a set of queries.
• Average precision at recall level r:

  Pavg(r) = (1/Nq) · Σ(i = 1 to Nq) Pi(r)

  – Nq: number of queries
  – Pi(r): precision at recall level r for the i-th query
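
As a small illustration of this averaging (hypothetical names and toy values, not from the slides):

def average_precision_at_levels(per_query, levels=None):
    """per_query: list of dicts, one per query, mapping recall level -> precision.
    Returns a dict mapping each recall level to the mean precision across queries."""
    if levels is None:
        levels = [i / 10 for i in range(11)]
    nq = len(per_query)
    return {level: sum(q.get(level, 0.0) for q in per_query) / nq for level in levels}

# Two toy queries, precision given at three recall levels each:
q1 = {0.0: 1.0, 0.5: 0.6, 1.0: 0.2}
q2 = {0.0: 0.8, 0.5: 0.4, 1.0: 0.1}
print(average_precision_at_levels([q1, q2], levels=[0.0, 0.5, 1.0]))
# -> roughly {0.0: 0.9, 0.5: 0.5, 1.0: 0.15}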

Page 20: Information Retrieval

CSE 8337 Spring 2009 20

Average Recall/Precision Curve

Typically average performance over a large set of queries.

Compute average precision at each standard recall level across all queries.

Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.

Page 21: Information Retrieval

CSE 8337 Spring 2009 21

Compare Two or More Systems
• The curve closest to the upper right-hand corner of the graph indicates the best performance.

[Figure: average precision/recall curves for two systems, "NoStem" and "Stem", with precision plotted against recall levels 0.1 through 1.0.]

Page 22: Information Retrieval

CSE 8337 Spring 2009 22

Receiver Operating Characteristic Curve (ROC Curve)

Page 23: Information Retrieval

CSE 8337 Spring 2009 23

ROC Curve Data
• Plots the False Positive Rate vs. the True Positive Rate.
• True Positive Rate
  – tp / (tp + fn)
  – Sensitivity
  – Recall
• False Positive Rate
  – fp / (fp + tn)
  – 1 − Specificity
  – (Specificity = tn / (fp + tn))

Page 24: Information Retrieval

CSE 8337 Spring 2009 24

Yet more evaluation measures…

Mean average precision (MAP) Average of the precision value obtained for

the top k documents, each time a relevant doc is retrieved

Avoids interpolation, use of fixed recall levels

MAP for query collection is arithmetic ave. Macro-averaging: each query counts equally

R-precision If have known (though perhaps incomplete)

set of relevant documents of size Rel, then calculate precision of top Rel docs returned

Perfect system could score 1.0.
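
A minimal sketch of both measures for a single query (illustrative names, not from the slides), using the ranked list from the worked example a few slides later; doc 999 stands in for the relevant document that is never retrieved:

def average_precision(ranking, relevant):
    """Average of precision@k taken at each rank k where a relevant doc appears;
    unretrieved relevant docs contribute zero. MAP is the mean of this over queries."""
    hits, total = 0, 0.0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

def r_precision(ranking, relevant):
    """Precision of the top |Rel| documents returned."""
    r = len(relevant)
    return sum(1 for doc in ranking[:r] if doc in relevant) / r if r else 0.0

ranking = [588, 589, 576, 590, 986, 592, 984, 988, 578, 985, 103, 591, 772, 990]
relevant = {588, 589, 590, 592, 772, 999}        # 6 relevant, one never retrieved
print(average_precision(ranking, relevant))      # about 0.63
print(r_precision(ranking, relevant))            # 4/6, about 0.67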

Page 25: Information Retrieval

CSE 8337 Spring 2009 25

Variance
• For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7).
• Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query.
• That is, there are easy information needs and hard ones!

Page 26: Information Retrieval

CSE 8337 Spring 2009 26

Evaluation
• Graphs are good, but people want summary measures!
• Precision at fixed retrieval level
  – Precision-at-k: precision of the top k results.
  – Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages.
  – But: averages badly and has an arbitrary parameter k.
• 11-point interpolated average precision
  – The standard measure in the early TREC competitions: take the precision at 11 recall levels varying from 0 to 1 by tenths, using interpolation (the value for 0 is always interpolated!), and average them.
  – Evaluates performance at all recall levels.

Page 27: Information Retrieval

CSE 8337 Spring 2009 27

Typical (good) 11 point precisions

[Figure: SabIR/Cornell 8A1 11-point precision from TREC 8 (1999); precision (0 to 1) plotted against recall (0 to 1).]

Page 28: Information Retrieval

CSE 8337 Spring 2009 28

Computing Recall/Precision Points

For a given query, produce the ranked list of retrievals.

Adjusting a threshold on this ranked list produces different sets of retrieved documents, and therefore different recall/precision measures.

Mark each document in the ranked list that is relevant according to the gold standard.

Compute a recall/precision pair for each position in the ranked list that contains a relevant document.

Page 29: Information Retrieval

CSE 8337 Spring 2009 29

Computing Recall/Precision Points: An Example (modified from [Salton83])

 n   doc #   relevant
 1   588     x
 2   589     x
 3   576
 4   590     x
 5   986
 6   592     x
 7   984
 8   988
 9   578
10   985
11   103
12   591
13   772     x
14   990

Let total # of relevant docs = 6. Check each new recall point:
  R = 1/6 = 0.167;  P = 1/1 = 1
  R = 2/6 = 0.333;  P = 2/2 = 1
  R = 3/6 = 0.5;    P = 3/4 = 0.75
  R = 4/6 = 0.667;  P = 4/6 = 0.667
  R = 5/6 = 0.833;  P = 5/13 = 0.38
Missing one relevant document, so we never reach 100% recall.
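
The table above can be turned into recall/precision points mechanically; here is a minimal sketch (illustrative names, not from the slides) that reproduces the numbers for this ranking:

def recall_precision_points(ranking, relevant, total_relevant):
    """Return (recall, precision) at each rank where a relevant document appears."""
    hits = 0
    points = []
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / total_relevant, hits / k))
    return points

ranking = [588, 589, 576, 590, 986, 592, 984, 988, 578, 985, 103, 591, 772, 990]
relevant_retrieved = {588, 589, 590, 592, 772}   # the docs marked 'x' above
for r, p in recall_precision_points(ranking, relevant_retrieved, total_relevant=6):
    print(f"R = {r:.3f}  P = {p:.3f}")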

Page 30: Information Retrieval

CSE 8337 Spring 2009 30

F-Measure
• One measure of performance that takes into account both recall and precision.
• Harmonic mean of recall and precision:

  F = 2PR / (P + R) = 2 / (1/R + 1/P)

• Calculated at a specific document in the ranking.
• Compared to the arithmetic mean, both P and R need to be high for the harmonic mean to be high.
• Compromise between precision and recall.

Page 31: Information Retrieval

CSE 8337 Spring 2009 31

A combined measure: F
• A combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean):

  F = 1 / (α(1/P) + (1 − α)(1/R)) = (β² + 1)PR / (β²P + R)

• People usually use the balanced F1 measure, i.e., with β = 1 (equivalently, α = ½).
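
A minimal sketch of the balanced and weighted forms (illustrative names, not from the slides):

def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall; beta = 1 gives F1."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (b2 + 1) * precision * recall / (b2 * precision + recall)

print(f_measure(0.75, 0.5))              # F1 = 0.6
print(f_measure(0.75, 0.5, beta=2.0))    # weights recall more: about 0.536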

Page 32: Information Retrieval

CSE 8337 Spring 2009 32

F1 and other averages

[Figure: combined measures as a function of precision (0 to 100), with recall fixed at 70%; curves shown for the minimum, maximum, arithmetic mean, geometric mean, and harmonic mean (F1).]

Page 33: Information Retrieval

CSE 8337 Spring 2009 33

E Measure (parameterized F Measure)
• A variant of the F measure that allows weighting emphasis on precision or recall:

  E = 1 − (β² + 1)PR / (β²P + R)

• The value of β controls the trade-off:
  – β = 1: Equally weight precision and recall (E = 1 − F).
  – β > 1: Weight recall more.
  – β < 1: Weight precision more.

Page 34: Information Retrieval

CSE 8337 Spring 2009 34

Fallout Rate
• Problems with both precision and recall:
  – The number of irrelevant documents in the collection is not taken into account.
  – Recall is undefined when there is no relevant document in the collection.
  – Precision is undefined when no document is retrieved.

  Fallout = (no. of nonrelevant items retrieved) / (total no. of nonrelevant items in the collection)

Page 35: Information Retrieval

CSE 8337 Spring 2009 35

Fallout
• Want fallout to be close to 0.
• In general, want to maximize recall and minimize fallout.
• Examine the fallout-recall graph. More systems-oriented than recall-precision.

Page 36: Information Retrieval

CSE 8337 Spring 2009 36

Subjective Relevance Measures
• Novelty Ratio: The proportion of items retrieved and judged relevant by the user of which they were previously unaware.
  – Ability to find new information on a topic.
• Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to the user prior to the search.
  – Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).

Page 37: Information Retrieval

CSE 8337 Spring 2009 37

Utility
• Subjective measure.
• Cost-benefit analysis for retrieved documents:
  – Cr: benefit of retrieving a relevant document
  – Cnr: cost of retrieving a nonrelevant document
  – Crn: cost of not retrieving a relevant document
  – Nr: number of relevant documents retrieved
  – Nnr: number of nonrelevant documents retrieved
  – Nrn: number of relevant documents not retrieved

  Utility = (Cr × Nr) − ((Cnr × Nnr) + (Crn × Nrn))
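
A minimal sketch of this cost-benefit computation (illustrative names and example values, not from the slides):

def utility(cr, cnr, crn, nr, nnr, nrn):
    """Benefit of the relevant docs retrieved minus the costs of retrieving
    nonrelevant docs and of missing relevant docs."""
    return cr * nr - (cnr * nnr + crn * nrn)

# Benefit 1.0 per relevant hit, cost 0.1 per nonrelevant hit, cost 0.5 per miss:
print(utility(cr=1.0, cnr=0.1, crn=0.5, nr=5, nnr=8, nrn=1))   # 5 - (0.8 + 0.5) = 3.7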

Page 38: Information Retrieval

CSE 8337 Spring 2009 38

Other Factors to Consider
• User effort: Work required from the user in formulating queries, conducting the search, and screening the output.
• Response time: Time interval between receipt of a user query and the presentation of system responses.
• Form of presentation: Influence of the search output format on the user's ability to utilize the retrieved materials.
• Collection coverage: Extent to which any/all relevant items are included in the document corpus.

Page 39: Information Retrieval

CSE 8337 Spring 2009 39

Experimental Setup for Benchmarking

Analytical performance evaluation is difficult for document retrieval systems because many characteristics such as relevance, distribution of words, etc., are difficult to describe with mathematical precision.

Performance is measured by benchmarking. That is, the retrieval effectiveness of a system is evaluated on a given set of documents, queries, and relevance judgments.

Performance data is valid only for the environment under which the system is evaluated.

Page 40: Information Retrieval

CSE 8337 Spring 2009 40

Benchmarks
• A benchmark collection contains:
  – A set of standard documents and queries/topics.
  – A list of relevant documents for each query.
• Standard collections for traditional IR: TREC, http://trec.nist.gov/

[Figure: benchmarking workflow. The standard document collection and standard queries are fed to the algorithm under test; its retrieved result is compared against the standard result in an evaluation step that produces precision and recall.]

Page 41: Information Retrieval

CSE 8337 Spring 2009 41

Benchmarking: The Problems

Performance data is valid only for a particular benchmark.

Building a benchmark corpus is a difficult task.

Benchmark web corpora are just starting to be developed.

Benchmark foreign-language corpora are just starting to be developed.

Page 42: Information Retrieval

CSE 8337 Spring 2009 42

The TREC Benchmark
• TREC: Text REtrieval Conference (http://trec.nist.gov/). Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA).
• Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA.
• Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing.
• Participants submit the P/R values for the final document and query corpus and present their results at the conference.

Page 43: Information Retrieval

CSE 8337 Spring 2009 43

The TREC Objectives
• Provide a common ground for comparing different IR techniques.
  – Same set of documents and queries, and same evaluation method.
• Sharing of resources and experiences in developing the benchmark.
  – With major sponsorship from government to develop large benchmark collections.
• Encourage participation from industry and academia.
• Development of new evaluation techniques, particularly for new applications.
  – Retrieval, routing/filtering, non-English collections, web-based collections, question answering.

Page 44: Information Retrieval

CSE 8337 Spring 2009 44

Page 45: Information Retrieval

CSE 8337 Spring 2009 45

Test Collections

Page 46: Information Retrieval

CSE 8337 Spring 2009 46

From document collections to test collections
• Still need:
  – Test queries
  – Relevance assessments
• Test queries
  – Must be germane to the docs available.
  – Best designed by domain experts.
  – Random query terms are generally not a good idea.
• Relevance assessments
  – Human judges, time-consuming.
  – Are human panels perfect?

Page 47: Information Retrieval

CSE 8337 Spring 2009 47

Unit of Evaluation
• We can compute precision, recall, F, and an ROC curve for different units.
• Possible units:
  – Documents (most common)
  – Facts (used in some TREC evaluations)
  – Entities (e.g., car companies)
• May produce different results. Why?

Page 48: Information Retrieval

CSE 8337 Spring 2009 48

Kappa measure for inter-judge (dis)agreement
• Kappa measure
  – Agreement measure among judges.
  – Designed for categorical judgments.
  – Corrects for chance agreement.
• Kappa = [ P(A) − P(E) ] / [ 1 − P(E) ]
  – P(A): proportion of the time the judges agree.
  – P(E): what agreement would be by chance.
  – Kappa = 0 for chance agreement, 1 for total agreement.

Page 49: Information Retrieval

CSE 8337 Spring 2009 49

Kappa Measure: Example

Number of docs   Judge 1       Judge 2
300              Relevant      Relevant
 70              Nonrelevant   Nonrelevant
 20              Relevant      Nonrelevant
 10              Nonrelevant   Relevant

P(A)? P(E)?

Page 50: Information Retrieval

CSE 8337 Spring 2009 50

Kappa Example
• P(A) = 370/400 = 0.925
• P(nonrelevant) = (10 + 20 + 70 + 70)/800 = 0.2125
• P(relevant) = (10 + 20 + 300 + 300)/800 = 0.7875
• P(E) = 0.2125² + 0.7875² = 0.665
• Kappa = (0.925 − 0.665)/(1 − 0.665) = 0.776

• Kappa > 0.8: good agreement
• 0.67 < Kappa < 0.8: "tentative conclusions" (Carletta '96)
• Depends on the purpose of the study.
• For more than 2 judges: average pairwise kappas.
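
A minimal sketch that reproduces this calculation (illustrative names, not from the slides) for two judges making binary relevance judgments:

def kappa(n_both_rel, n_both_nonrel, n_rel_nonrel, n_nonrel_rel):
    """Kappa for two judges; arguments are the four cell counts of the 2x2 agreement table."""
    n = n_both_rel + n_both_nonrel + n_rel_nonrel + n_nonrel_rel
    p_agree = (n_both_rel + n_both_nonrel) / n
    # Pooled marginal probability of each label (each judge contributes n judgments):
    p_rel = (2 * n_both_rel + n_rel_nonrel + n_nonrel_rel) / (2 * n)
    p_nonrel = 1 - p_rel
    p_chance = p_rel ** 2 + p_nonrel ** 2
    return (p_agree - p_chance) / (1 - p_chance)

print(kappa(300, 70, 20, 10))   # about 0.776, as on the slide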

Page 51: Information Retrieval

CSE 8337 Spring 2009 51

Interjudge Agreement: TREC 3

Page 52: Information Retrieval

CSE 8337 Spring 2009 52

Impact of Inter-judge Agreement
• Impact on an absolute performance measure can be significant (0.32 vs 0.39).
• Little impact on the ranking of different systems or on relative performance.
• Suppose we want to know if algorithm A is better than algorithm B.
  – A standard information retrieval experiment will give us a reliable answer to this question.

Page 53: Information Retrieval

CSE 8337 Spring 2009 53

Critique of pure relevance
• Relevance vs. Marginal Relevance
  – A document can be redundant even if it is highly relevant.
    • Duplicates
    • The same information from different sources
  – Marginal relevance is a better measure of utility for the user.
• Using facts/entities as evaluation units more directly measures true relevance.
  – But it is harder to create the evaluation set.

Page 54: Information Retrieval

CSE 8337 Spring 2009 54

Can we avoid human judgment?
• No.
• This makes experimental work hard, especially on a large scale.
• In some very specific settings, we can use proxies.
  – E.g., for approximate vector space retrieval, we can compare the cosine closeness of the closest docs to those found by an approximate retrieval algorithm.
• But once we have test collections, we can reuse them (so long as we don't overtrain too badly).

Page 55: Information Retrieval

CSE 8337 Spring 2009 55

Evaluation at large search engines
• Search engines have test collections of queries and hand-ranked results.
• Recall is difficult to measure on the web.
• Search engines often use precision at top k, e.g., k = 10 …
• … or measures that reward you more for getting rank 1 right than for getting rank 10 right, e.g., NDCG (Normalized Discounted Cumulative Gain).
• Search engines also use non-relevance-based measures.
  – Clickthrough on the first result: not very reliable if you look at a single clickthrough, but pretty reliable in the aggregate.
  – Studies of user behavior in the lab.
  – A/B testing.
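
Since NDCG is only named on this slide, here is a minimal sketch of one common formulation (not from the slides; the exact gain and discount used are an assumption): graded relevance, gain 2^rel − 1, logarithmic position discount, normalized by the ideal ordering.

import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of graded relevance values."""
    return sum((2 ** rel - 1) / math.log2(pos + 1)
               for pos, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending-relevance) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded judgments (0-3) for the top 5 results of one query:
print(ndcg([3, 2, 0, 1, 2]))    # rewards putting high grades near rank 1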

Page 56: Information Retrieval

CSE 8337 Spring 2009 56

A/B testing
• Purpose: Test a single innovation.
• Prerequisite: You have a large search engine up and running.
• Have most users use the old system.
• Divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation.
• Evaluate with an "automatic" measure like clickthrough on the first result.
• Now we can directly see if the innovation does improve user happiness.
• Probably the evaluation methodology that large search engines trust most.
• In principle less powerful than doing a multivariate regression analysis, but easier to understand.
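
As a closing illustration (not from the slides; the two-proportion z-test is a common but here assumed choice), a minimal sketch of comparing first-result clickthrough rates between the control traffic and the small diverted slice:

import math

def ab_clickthrough_z(clicks_a, users_a, clicks_b, users_b):
    """Two-proportion z statistic for clickthrough-on-first-result rates.
    |z| above roughly 1.96 suggests a real difference at the 5% level."""
    p_a, p_b = clicks_a / users_a, clicks_b / users_b
    p_pool = (clicks_a + clicks_b) / (users_a + users_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    return (p_b - p_a) / se

# 1% of traffic diverted to the new ranker:
print(ab_clickthrough_z(clicks_a=41_000, users_a=100_000,
                        clicks_b=430, users_b=1_000))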