
Statistical Translation and Web Search Ranking

Jianfeng Gao, Natural Language Processing, MSR

July 22, 2011

Who should be here?

• Interested in statistical machine translation and Web search ranking

• Interested in modeling technologies
• Looking for topics for your master's/PhD thesis
  – A difficult topic: very hard to beat a simple baseline
  – An easy topic: others cannot beat it either


Outline

• Probability
• Statistical Machine Translation (SMT)
• SMT for Web search ranking

Probability (1/2)

• Probability space:
  – Cannot say …
• Joint probability: P(x, y)
  – Probability that x and y are both true
• Conditional probability: P(y|x)
  – Probability that y is true when we already know x is true
• Independence: P(x, y) = P(x)P(y)
  – x and y are independent

Probability (2/2)

• ℋ: the assumptions on which the probabilities are based

• Product rule – from the definition of conditional probability
  P(x, y) = P(y|x)P(x) = P(x|y)P(y)
• Sum rule – a rewrite of the definition of marginal probability
  P(x) = Σ_y P(x, y) = Σ_y P(x|y)P(y)
• Bayes rule – from the product rule
  P(y|x) = P(x|y)P(y) / P(x)
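A quick way to make these rules concrete is to check them numerically on a tiny joint distribution. The Python sketch below is purely illustrative (the events and numbers are made up, not from the slides):

from collections import defaultdict

# A hand-made joint distribution P(x, y); the entries sum to 1.
joint = {
    ("rain", "wet"): 0.3,
    ("rain", "dry"): 0.1,
    ("sun", "wet"): 0.05,
    ("sun", "dry"): 0.55,
}

def p_x(x):                       # sum rule: P(x) = sum_y P(x, y)
    return sum(p for (xi, _), p in joint.items() if xi == x)

def p_y(y):
    return sum(p for (_, yi), p in joint.items() if yi == y)

def p_y_given_x(y, x):            # conditional: P(y|x) = P(x, y) / P(x)
    return joint[(x, y)] / p_x(x)

# Bayes rule: P(x|y) = P(y|x) P(x) / P(y), checked against the direct computation.
x, y = "rain", "wet"
bayes = p_y_given_x(y, x) * p_x(x) / p_y(y)
direct = joint[(x, y)] / p_y(y)
print(round(bayes, 6), round(direct, 6))   # both ~0.857143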

An example: Statistical Language Modeling

Statistical Language Modeling (SLM)

• Model form
  – capture language structure via a probabilistic model
• Model parameters
  – estimation of free parameters using training data


Model Form

• How to incorporate language structure into a probabilistic model

• Task: next word prediction
  – Fill in the blank: “The dog of our neighbor ___”
• Starting point: word n-gram model
  – Very simple, yet surprisingly effective
  – Words are generated from left to right
  – Assumes no structure other than the words themselves


Word N-gram Model

• Word-based model
  – Use the chain rule on the history (= preceding words):
  P(w_1 w_2 … w_n) = P(w_1) P(w_2|w_1) … P(w_n|w_1 … w_{n-1})

Word N-gram Model

• How do we get probability estimates?
  – Get text and count!
• Problem of using the whole history
  – Rare events: unreliable probability estimates
  – Assuming a vocabulary of 20,000 words:

    model                      # parameters
    unigram   P(w1)            20,000
    bigram    P(w2|w1)         400 million
    trigram   P(w3|w1w2)       8 × 10^12
    fourgram  P(w4|w1w2w3)     1.6 × 10^17

  (From Manning and Schütze 1999: 194)


Word N-gram Model

• Markov independence assumption
  – A word depends only on the N-1 preceding words
  – N = 3 → word trigram model
• Reduce the number of parameters in the model
  – By forming equivalence classes
• Word trigram model

  P(w_i | <s> w_1 w_2 … w_{i-2} w_{i-1}) = P(w_i | w_{i-2} w_{i-1})

...


Model Parameters

• Bayesian estimation paradigm
• Maximum likelihood estimation (MLE)
• Smoothing in N-gram language models


Bayesian Paradigm

• Bayes rule: P(θ|D) = P(D|θ) P(θ) / P(D)
  – P(θ|D): posterior probability
  – P(D|θ): likelihood
  – P(θ): prior probability
  – P(D): marginal probability
• Likelihood versus probability
  – for fixed θ, P(D|θ) defines a probability over D
  – for fixed D, P(D|θ) defines the likelihood of θ
• Never say “the likelihood of the data”
• Always say “the likelihood of the parameters given the data”


Maximum Likelihood Estimation (MLE)

• θ: model; D: data

  θ* = argmax_θ P(θ|D) = argmax_θ P(D|θ) P(θ) / P(D) = argmax_θ P(D|θ)

  – Assume a uniform prior P(θ)
  – P(D) is independent of θ, and is dropped
  – P(D|θ) is the likelihood of the parameters θ
• Key difference between MLE and Bayesian estimation
  – MLE assumes that θ is fixed but unknown
  – Bayesian estimation assumes that θ itself is a random variable with a prior distribution


MLE for Trigram LM

• It is easy – let us get some real text and start to count:

  P(w_i | w_{i-2} w_{i-1}) = count(w_{i-2} w_{i-1} w_i) / count(w_{i-2} w_{i-1})
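A minimal counting sketch (illustrative Python, not the lecture's code; the sentence-boundary padding with <s> and </s> is an assumption):

from collections import defaultdict

def train_trigram_mle(sentences):
    """MLE trigram estimates by counting: P(w3 | w1 w2) = c(w1 w2 w3) / c(w1 w2)."""
    tri, bi = defaultdict(int), defaultdict(int)
    for words in sentences:
        padded = ["<s>", "<s>"] + words + ["</s>"]
        for i in range(2, len(padded)):
            tri[tuple(padded[i-2:i+1])] += 1   # trigram count
            bi[tuple(padded[i-2:i])] += 1      # history (bigram context) count
    return {w: c / bi[w[:2]] for w, c in tri.items()}

corpus = [["the", "dog", "barks"], ["the", "dog", "sleeps"]]
p = train_trigram_mle(corpus)
print(p[("the", "dog", "barks")])   # 0.5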

But, why is this the MLE solution?


Derivation of MLE for N-gram

• Homework – an interview question at MSR
• Hints
  – This is a constrained optimization problem
  – Use the log likelihood as the objective function
  – Assume a multinomial distribution for the LM
  – Introduce a Lagrange multiplier for the constraint
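If you want to check your answer, one standard route is the following sketch (under a multinomial assumption, with h the history and c(h, w) the observed counts; this is not necessarily the intended interview answer):

\[
\mathcal{L} = \sum_{w} c(h, w)\,\log P(w \mid h) + \lambda\Big(1 - \sum_{w} P(w \mid h)\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial P(w \mid h)} = \frac{c(h, w)}{P(w \mid h)} - \lambda = 0
\]

so P(w|h) ∝ c(h, w), and the constraint Σ_w P(w|h) = 1 gives λ = Σ_w c(h, w), hence P(w|h) = c(h, w) / Σ_{w'} c(h, w'), i.e., the relative-frequency (counting) estimate.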


Sparse Data Problem

• Say our vocabulary size is |V|
• There are |V|^3 parameters in the trigram LM
  – |V| = 20,000 → 20,000^3 = 8 × 10^12 parameters

• Most trigrams have a zero count even in a large text corpus

– oops…


Smoothing: Adding One

• Add-one smoothing (from the Bayesian paradigm)
  P(w_i | w_{i-2} w_{i-1}) = (count(w_{i-2} w_{i-1} w_i) + 1) / (count(w_{i-2} w_{i-1}) + |V|)
  – But it works very badly – do not use this
• Add-delta smoothing: replace 1 with a small δ (and |V| with δ|V|)
  – Still very bad – do not use this


Smoothing: Backoff

• Back off from trigram to bigram, and from bigram to unigram:

  P(w_i | w_{i-2} w_{i-1}) = (count(w_{i-2} w_{i-1} w_i) − D) / count(w_{i-2} w_{i-1})   if the trigram count > 0
                           = α(w_{i-2} w_{i-1}) · P(w_i | w_{i-1})                      otherwise

  – D ∈ (0, 1) is a discount constant (absolute discounting)
  – α is calculated so that the probabilities sum to 1 (homework)

• Simple and effective – use this one!
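To make the backoff recipe concrete, here is a minimal sketch for the bigram-to-unigram case (illustrative Python; the toy corpus and the discount D = 0.5 are assumptions, and alpha is computed so each distribution sums to one, as stated above):

from collections import defaultdict

D = 0.5
corpus = "the dog barks the dog sleeps the cat sleeps".split()

uni, bi, ctx = defaultdict(int), defaultdict(int), defaultdict(int)
for w in corpus:
    uni[w] += 1
for v, w in zip(corpus, corpus[1:]):
    bi[(v, w)] += 1
    ctx[v] += 1                               # c(v, .) = total bigram count for context v

V = list(uni)
p_uni = {w: uni[w] / len(corpus) for w in V}

def p_backoff(w, v):
    """P(w | v) with absolute discounting, backing off to the unigram model."""
    if (v, w) in bi:
        return (bi[(v, w)] - D) / ctx[v]
    seen = sum(1 for u in V if (v, u) in bi)
    reserved = D * seen / ctx[v]              # probability mass freed by discounting
    unseen_mass = sum(p_uni[u] for u in V if (v, u) not in bi)
    alpha = reserved / unseen_mass            # backoff weight for context v
    return alpha * p_uni[w]

print(round(sum(p_backoff(w, "the") for w in V), 6))   # 1.0: probabilities sum to one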


Outline

• Probability
• SMT and translation models
• SMT for web search ranking

SMT

C: 救援 人员 在 倒塌的 房屋 里 寻找 生还者
E: Rescue workers search for survivors in collapsed houses

• Model P(E|C)
• Translation process (generative story)
  – C is broken into translation units
  – Each unit is translated into English
  – Glue the translated units to form E
• Translation models
  – Word-based models
  – Phrase-based models
  – Syntax-based models

Generative Modeling

Art ↔ Story
Science ↔ Math
Engineering ↔ Code

Generative Modeling for SMT

• Story making
  – how a target sentence is generated from a source sentence, step by step
• Mathematical formulation
  – modeling each generation step in the generative story using a probability distribution
• Parameter estimation
  – implementing an effective way of estimating the probability distributions from training data

Word-Based Models: IBM Model 1

• We first choose the length I of the target sentence T, given the source sentence S, according to the distribution P(I|S).
• Then, for each position i in the target sentence, we choose a position a_i in the source sentence from which to generate the i-th target word, according to the distribution P(a_i|S).
• Finally, we generate the target word t_i by translating the source word s_{a_i}, according to the distribution P(t_i|s_{a_i}).

Mathematical Formulation

• Assume that the choice of the target length I is independent of the source sentence and of the length itself: P(I|S) = ε
• Assume that all positions in the source sentence (plus a NULL word) are equally likely to be chosen: P(a_i|S) = 1/(J+1), where J is the source length
• Assume that each target word t_i is generated independently from the source word s_{a_i}, according to t(t_i|s_{a_i})
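Combining these three assumptions gives the familiar Model 1 form (a sketch with assumed notation: source S = s_0 s_1 … s_J with s_0 the NULL word, target T = t_1 … t_I, and word translation table t(·|·)):

\[
P(T \mid S) \;=\; \frac{\epsilon}{(J+1)^{I}} \prod_{i=1}^{I} \sum_{j=0}^{J} t\left(t_i \mid s_j\right)
\]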

Parameter Estimation

• Model form: a word translation table t(t|s)
• MLE on word-aligned training data: t(t|s) = count(s, t) / count(s)
• Don't forget smoothing
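As a toy illustration of the counting step (illustrative Python; the alignment format and the tiny example are assumptions, not the actual training data):

from collections import defaultdict

def train_ttable(aligned_pairs):
    """MLE of the word translation table from word-aligned pairs: t(t|s) = c(s, t) / c(s)."""
    pair_count, src_count = defaultdict(int), defaultdict(int)
    for src, tgt, alignment in aligned_pairs:
        for j, i in alignment:              # (source position j, target position i)
            pair_count[(src[j], tgt[i])] += 1
            src_count[src[j]] += 1
    return {(s, t): c / src_count[s] for (s, t), c in pair_count.items()}

data = [(["寻找", "生还者"], ["search", "for", "survivors"], [(0, 0), (0, 1), (1, 2)])]
t = train_ttable(data)
print(t[("寻找", "search")])   # 0.5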

Phrase-Based Models

Mathematical Formulation

• Assume a uniform probability over segmentations

• Use the maximum approximation to the sum

• Assume each phrase is translated independently, and use a distance-based reordering model
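A compact way to write what these assumptions buy you (a sketch; the segmentation variable B over phrase pairs and the distortion penalty d(·) are assumed notation, not necessarily the slide's symbols):

\[
P(E \mid C) \;=\; \sum_{B} P(E, B \mid C)
\;\approx\; \max_{B} \prod_{(\tilde{c}_k,\, \tilde{e}_k) \in B} P(\tilde{e}_k \mid \tilde{c}_k)\; d(\mathrm{start}_k - \mathrm{end}_{k-1})
\]

Here each (c̃_k, ẽ_k) is a source/target phrase pair in the chosen segmentation B, and d(·) penalizes how far the translation jumps around in the source.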

Parameter Estimation

MLE: Don’t forget smoothing

Syntax-Based Models

Story

• Parse an input Chinese sentence into a parse tree

• Translate each Chinese constituent into English
  – e.g., VP → ⟨PP 寻找 NP , search for NP PP⟩

• Glue these English constituents into a well-formed English sentence.

Other Two Tasks?

• Mathematical formulation
  – Based on a synchronous context-free grammar (SCFG)
• Parameter estimation
  – Learning the SCFG from data

• Homework
• Let us go through an example (thanks to Michel Galley)
  – Hierarchical phrase model
  – Linguistically syntax-based models

[Figures: the word-aligned sentence pair 救援 人员 在 倒塌 的 房屋 里 寻找 生还者 ↔ “rescue workers search for survivors in collapsed houses”, from which aligned phrase pairs are extracted, e.g. 倒塌 的 房屋 ↔ “collapsed houses” and 在 倒塌 的 房屋 里 寻找 生还者 ↔ “search for survivors in collapsed houses”.]

A synchronous rule

X → ⟨在 X₁ 里 寻找 X₂ , search for X₂ in X₁⟩

• Phrase-based translation unit
• Discontinuous translation unit
• Control on reordering

A synchronous grammar

X → ⟨在 X₁ 里 寻找 X₂ , search for X₂ in X₁⟩
X → ⟨倒塌 的 房屋 , collapsed houses⟩
X → ⟨生还者 , survivors⟩

Context-free derivation:
  ⟨在 X₁ 里 寻找 X₂ , search for X₂ in X₁⟩
  ⇒ ⟨在 倒塌 的 房屋 里 寻找 X₂ , search for X₂ in collapsed houses⟩
  ⇒ ⟨在 倒塌 的 房屋 里 寻找 生还者 , search for survivors in collapsed houses⟩

A synchronous grammar

X → ⟨在 X₁ 里 寻找 X₂ , search for X₂ in X₁⟩
X → ⟨倒塌 的 房屋 , collapsed houses⟩
X → ⟨生还者 , survivors⟩

Recognizes:
  search for survivors in collapsed houses
  search for collapsed houses in survivors
  search for survivors collapsed houses in

[Figure sequence: the word-aligned pair 救援 人员 在 倒塌 的 房屋 里 寻找 生还者 ↔ “Rescue workers search for survivors in collapsed houses.” (literal gloss: “rescue staff in collapse of house in search survivors”), shown together with the English parse tree (S, NP, VP, PP, VBP, NNS, NN, IN, JJ). The slides walk step by step through extracting a linguistically syntax-based SCFG rule, e.g. VP → ⟨PP 寻找 NP , search for NP PP⟩, with refined nonterminal labels such as VP-234, NP-57, PP-32.]

Outline

• Probability
• SMT and translation models
• SMT for web search ranking

Web Documents and Search Queries

• cold home remedy
• cold remeedy
• flu treatment
• how to deal with stuffy nose?

Map Queries to Documents

• Fuzzy keyword matching
  – Q: cold home remedy
  – D: best home remedies for cold and flu
• Spelling correction
  – Q: cold remeedies
  – D: best home remedies for cold and flu
• Query alteration
  – Q: flu treatment
  – D: best home remedies for cold and flu
• Query/document rewriting
  – Q: how to deal with stuffy nose
  – D: best home remedies for cold and flu

• Where are we now?

Research Agenda (Gao et al. 2010, 2011)

• Model documents and queries as different languages (Gao et al., 2010)

• Cast mapping queries to documents as bridging the language gap via translation

• Leverage statistical machine translation (SMT) technologies and infrastructures to improve search relevance

Are Queries and Docs just Different Languages?

• A large scale analysis, extending (Huang et al. 2010)

• Divide web collection into different fields, e.g., queries, anchor text, titles, etc.

• Develop a set of language models, each trained on an n-gram dataset from a different field

• Measure language difference between different fields (queries/docs) via perplexity
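For reference, the perplexity of an n-gram model P on a test set of N words is the standard quantity (included here for completeness, not taken from the slide):

\[
\mathrm{PPL} \;=\; 2^{\,-\frac{1}{N}\sum_{i=1}^{N} \log_2 P\left(w_i \mid w_{i-n+1} \ldots w_{i-1}\right)}
\]

Lower perplexity means the field's language model is less surprised by the test queries, i.e., its language is closer to query language.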

Microsoft Web N-gram Model Collection (cutoff = 0)

• Microsoft web n-gram services. http://research.microsoft.com/web-ngram

Perplexity Results

• Test set
  – 733,147 queries from the May 2009 query log
• Summary
  – The query LM is most predictive of test queries
  – Title is better than Anchor at lower orders but worse at higher orders
  – Body is in a different league

SMT for Document Ranking

• Given a query q, a document d can be ranked by how likely it is that q is a rewrite of d: P(q|d)
• An example: phrasal statistical translation for Web document ranking

how to deal with stuffy nose?

Phrasal Statistical Translation for Ranking

d: "cold home remedies"            (title)
S: ["cold", "home remedies"]       (segmentation)
T: ["stuffy nose", "deal with"]    (translation)
M: (1 → 2, 2 → 1)                  (permutation)
q: "deal with stuffy nose"         (query)

• Uniform probability over segmentations S
• Maximum approximation to the sum over the hidden variables
• Max-probability assignment computed via dynamic programming
• Model training on query-doc pairs

Mine Query-Document Pairs from User Logs

[Figure: queries from the log – “how to deal with stuffy nose?”, “stuffy nose treatment”, “cold home remedies” – some issued with NO CLICK, connected to the landing page http://www.agelessherbs.com/BestHomeRemediesColdFlu.html]


QUERY (Q)                        TITLE (T)
how to deal with stuffy nose     best home remedies for cold and flu
stuffy nose treatment            best home remedies for cold and flu
cold home remedies               best home remedies for cold and flu
…                                …
go israel forums                 goisrael community
skate at wholesale at pr         wholesale skates southeastern skate supply
breastfeeding nursing blister    baby clogged milk ducts babycenter
thank you teacher song           lyrics for teaching educational children s music
immigration canada lacolle       cbsa office detailed information

• 178 million pairs mined from half a year of query logs

Evaluation Methodology

• Measurement: NDCG, t-test
• Test set
  – 12,071 English queries sampled from a one-year log
  – 5-level relevance label for each query-doc pair
  – Evaluated on a tail document set (the click field is empty)
• Training data for translation models
  – 82,834,648 query-title pairs

Baseline: Word-Based Models (Berger & Lafferty, 1999)

• Basic model
• Mixture model
• Learning translation probabilities from clickthrough data
  – IBM Model 1 with EM
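For intuition, Model 1 EM fits word translation probabilities from query-title pairs without observed alignments. A minimal sketch (illustrative Python; the pair format, toy data, and iteration count are assumptions, not the actual training pipeline):

from collections import defaultdict

def model1_em(pairs, iterations=20):
    # t[(q_word, d_word)] ~ P(q_word | d_word); a flat start suffices because
    # only relative values matter inside the E-step normalization.
    t = defaultdict(lambda: 1.0)
    for _ in range(iterations):
        count, total = defaultdict(float), defaultdict(float)
        for q, d in pairs:
            for qw in q:                          # E-step: expected alignment counts
                z = sum(t[(qw, dw)] for dw in d)
                for dw in d:
                    c = t[(qw, dw)] / z
                    count[(qw, dw)] += c
                    total[dw] += c
        for (qw, dw), c in count.items():         # M-step: renormalize per title word
            t[(qw, dw)] = c / total[dw]
    return dict(t)

pairs = [
    ("stuffy nose".split(), "cold home remedies".split()),
    ("cold home remedies".split(), "cold home remedies".split()),
    ("home remedy for cold".split(), "cold home remedies".split()),
]
t = model1_em(pairs)
print(round(t[("cold", "cold")], 3))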

Results

• Sample IBM Model 1 word translation probabilities after EM training on the query-title pairs

Bilingual Phrases

• Notice that with context information, we have less ambiguous translations

Results

• Ranking results
  – All features
  – Only phrase translation features

Why Do Bi-Phrases Help?

• Length distribution

• Good/bad examples

Generative Topic Models

• Probabilistic Latent Semantic Analysis (PLSA)
  – d is assigned a single most likely topic vector
  – q is generated from the topic vectors
• Latent Dirichlet Allocation (LDA) generalizes PLSA
  – a posterior distribution over topic vectors is used
  – PLSA = LDA with MAP inference
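For concreteness, the PLSA view of ranking can be written as follows (a standard formulation, with z ranging over latent topics; the symbols are assumed here rather than taken from the slide):

\[
P(q \mid d) \;=\; \prod_{w \in q} \sum_{z} P(w \mid z)\, P(z \mid d)
\]

LDA instead places a Dirichlet prior on the document's topic mixture and integrates over it, rather than committing to a single point estimate.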

[Figure: query “stuffy nose treatment” and document “cold home remedies” connected through a shared latent topic.]

Bilingual Topic Model

• For each topic z: …
• For each q-d pair: …
• Each q is generated by … and …
• Each w is generated by … and …

Log-likelihood of LDA Given Data

• The Dirichlet prior is a distribution over distributions
• LDA requires an integral over the topic mixture
• This is the MAP approximation to LDA

MAP Estimation via EM

• Estimate parameters by maximizing the joint log likelihood of the q-d pairs and the parameters
• E-step: compute the posterior topic probabilities
• M-step: update the parameters using the posterior probabilities

Posterior Regularization (PR)

• q and its clicked d are relevant, thus they
  – Share the same prior distribution over topics (MAP)
  – Weight each topic similarly (PR)
• Model training via modified EM
  – E-step: for each q-d pair, project the posterior topic distributions onto a constrained set, where the expected fraction of each topic is equal in q and d
  – M-step: update parameters using the projected posterior probabilities

Topic Models for Doc Ranking

Evaluation Methodology

• Measurement: NDCG, t-test
• Test set
  – 16,510 English queries sampled from a one-year log
  – Each query is associated with 15 docs
  – 5-level relevance label for each query-doc pair
• Training data for translation models
  – 82,834,648 query-title pairs

Topic Model Results

Summary

• Probability
  – Basics
  – A case study of a probabilistic model: the N-gram language model
• Statistical Machine Translation (SMT)
  – Generative modeling (story → math → code)
  – Word-, phrase-, and syntax-based models
• SMT for web search ranking
  – View query and document as different languages
  – Document ranking via P(q|d)
  – Word-, phrase-, and topic-based models

• Slides/doc will be available at http://research.microsoft.com/~jfgao/

Main References

• Berger, A., and Lafferty, J. 1999. Information retrieval as statistical translation. In SIGIR, pp. 222-229.
• Gao, J., He, X., and Nie, J-Y. 2010. Clickthrough-based translation models for web search: from word models to phrase models. In CIKM, pp. 1139-1148.
• Gao, J., Toutanova, K., and Yih, W-T. 2011. Clickthrough-based latent semantic models for web search. In SIGIR.
• Huang, J., Gao, J., Miao, J., Li, X., Wang, K., and Behr, F. 2010. Exploring web scale language models for search query processing. In WWW, pp. 451-460.
• MacKay, David J. C. 2003. Information Theory, Inference and Learning Algorithms. Cambridge University Press.
• Manning, C., and Schütze, H. 1999. Foundations of Statistical Natural Language Processing. MIT Press.
• Koehn, P. 2009. Statistical Machine Translation. Cambridge University Press. (Recommended)