Data Intensive Linguistics

Statistical Alignment and Machine Translation


Overview

MT is very hard: by any reasonable standard, translation programs available today do not perform very well. But they are still used and useful.

Most MT systems are a mix of probabilistic and non-probabilistic components, though there are a few completely statistical translation systems.


Overview (Cont’d)

A large part of implementing an MT system is not specific to MT. Two parts that are specific to it are text alignment and word alignment.

Definition: In the sentence alignment problem, one seeks to say that some group of sentences in one language corresponds in content to some group of sentences in another language. Such a grouping is referred to as a bead of sentences.


Overview of the Lecture

Text Alignment
Word Alignment
A Fully Statistical Attempt at MT


Text Alignment: Aligning Sentences and Paragraphs

Text alignment is useful for bilingual lexicography and MT, but also as a first step towards using bilingual corpora for other tasks.

Text alignment is not trivial because translators do not always translate one sentence in the input into one sentence in the output, although they do so in about 90% of cases.

Another problem is that of crossing dependencies, where the order of sentences is changed in the translation.


Different Approaches to Text Alignment

Length-based approaches: short sentences will be translated as short sentences and long sentences as long sentences.

Offset alignment by signal processing techniques: these approaches do not attempt to align beads of sentences but rather just to align position offsets in the two parallel texts.

Lexical methods: use lexical information to align beads of sentences.


Length-Based Methods I: General Approach

Goal: find the alignment A with the highest probability given the two parallel texts S and T: argmax_A P(A | S, T) = argmax_A P(A, S, T).

To estimate these probabilities, the aligned text is decomposed into a sequence of aligned beads, where each bead is assumed to be independent of the others. Then P(A, S, T) ≈ ∏_{k=1}^{K} P(B_k). The question, then, is how to estimate the probability of a certain type of alignment bead given the sentences in that bead.
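As a toy illustration of the independence assumption (not part of the original slides), the probability of an alignment is just the product of per-bead probabilities; the numbers below are made up, and the product is accumulated in log space as real systems do:

```python
import math

# Hypothetical per-bead probabilities for a 4-bead alignment.
bead_probs = [0.6, 0.3, 0.5, 0.7]

# Under the independence assumption, P(A, S, T) is just the product,
# accumulated in log space to avoid underflow on long texts.
log_p = sum(math.log(p) for p in bead_probs)
print(math.exp(log_p))    # 0.6 * 0.3 * 0.5 * 0.7 = 0.063
```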


Length-Based Methods II: Gale and Church, 1993

The algorithm uses sentence length (measured in characters) to evaluate how likely an alignment of some number of sentences in L1 is with some number of sentences in L2.

The algorithm uses a Dynamic Programming technique that allows the system to efficiently consider all possible alignments and find the minimum cost alignment.

The method performs well, at least on related languages, with an overall error rate of 4%. It works best on 1:1 alignments (only a 2% error rate) but has a higher error rate on more difficult alignments.
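A minimal sketch of a Gale-and-Church-style dynamic programme over character lengths, assuming hand-picked bead penalties and a simplified length-mismatch cost rather than the exact Gaussian model of the paper; all names and numbers here are illustrative:

```python
import math

# Hypothetical bead types and penalties, loosely following Gale & Church (1993);
# the real method derives costs from a Gaussian model of the length ratio.
BEADS = {(1, 1): 0, (1, 0): 450, (0, 1): 450, (2, 1): 230, (1, 2): 230, (2, 2): 440}

def length_cost(len1, len2, c=1.0, s2=6.8):
    """Crude stand-in for -log P(match | lengths)."""
    if len1 + len2 == 0:
        return 0.0
    delta = (len2 - c * len1) / math.sqrt((len1 + len2) * s2)
    return 100 * abs(delta)          # larger mismatch -> larger cost

def align(lens1, lens2):
    """Dynamic programming over all bead sequences; returns the bead list."""
    n, m = len(lens1), len(lens2)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            for (di, dj), penalty in BEADS.items():
                if i >= di and j >= dj and cost[i - di][j - dj] < INF:
                    new = (cost[i - di][j - dj] + penalty +
                           length_cost(sum(lens1[i - di:i]), sum(lens2[j - dj:j])))
                    if new < cost[i][j]:
                        cost[i][j], back[i][j] = new, (di, dj)
    # Trace back the minimum-cost path into a sequence of beads.
    beads, i, j = [], n, m
    while (i, j) != (0, 0):
        di, dj = back[i][j]
        beads.append(((i - di, i), (j - dj, j)))
        i, j = i - di, j - dj
    return list(reversed(beads))

# Sentence lengths in characters for two toy parallel texts.
print(align([40, 55, 12], [42, 30, 28, 11]))
```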


Length-Based Methods III: Other Approaches

Brown et al., 1991: the same approach as Gale and Church, except that sentence lengths are compared in terms of words rather than characters. The goal also differs: Brown et al. did not want to align entire articles but just a subset of the corpus suitable for further research.

Wu, 1994: Wu applies Gale and Church's method to a corpus of parallel English and Cantonese text. The results are not much worse than on related languages. To improve accuracy, Wu uses lexical cues.


Lexical Methods of Sentence Alignment I: Kay & Röscheisen, 1993

Assume the first and last sentences of the texts align. These are the initial anchors.

Then, until most sentences are aligned:
1. Form an envelope of possible alignments.
2. Choose pairs of words that tend to co-occur in these potential partial alignments.
3. Find pairs of source and target sentences which contain many possible lexical correspondences. The most reliable of these pairs are used to induce a set of partial alignments which will become part of the final result.


Lexical Methods of Sentence Alignment II: Chen, 1993

Chen does sentence alignment by constructing a simple word-to-word translation model as he goes along.

The best alignment is the one that maximizes the likelihood of generating the corpus given the translation model.

This best alignment is found by using dynamic programming.


Lexical Methods of Sentence Alignment III: Haruno & Yamazaki, 1996

Their method is a variant of Kay & Röscheisen (1993) with the following differences:

For structurally very different languages, function words impede alignment, so they eliminate function words using a POS tagger.

When aligning short texts, there are not enough repeated words for reliable alignment using Kay & Röscheisen (1993), so they use an online dictionary to find matching word pairs.


Word Alignment

A common use of aligned texts is the derivation of bilingual dictionaries and terminology databases.

This is usually done in two steps: first, the text alignment is extended to a word alignment; then some criterion, such as frequency, is used to select aligned pairs for which there is enough evidence to include them in the bilingual dictionary.

Using a χ² measure works well unless one word in L1 occurs with more than one word in L2; then it is useful to assume a one-to-one correspondence.

Future work is likely to use existing bilingual dictionaries.
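A minimal sketch of the χ² association score for a candidate word pair, computed from a 2x2 contingency table of co-occurrence counts over aligned beads; the counts in the example are invented:

```python
def chi_square(n11, n1_, n_1, n):
    """2x2 chi-square score for a candidate word pair.

    n11: beads containing both the L1 word and the L2 word
    n1_: beads containing the L1 word
    n_1: beads containing the L2 word
    n:   total number of aligned beads
    """
    n12 = n1_ - n11          # L1 word without the L2 word
    n21 = n_1 - n11          # L2 word without the L1 word
    n22 = n - n11 - n12 - n21
    num = n * (n11 * n22 - n12 * n21) ** 2
    den = n1_ * n_1 * (n - n1_) * (n - n_1)
    return num / den if den else 0.0

# e.g. 'house'/'maison' co-occur in 59 of 100 beads,
# 'house' appears in 60 beads, 'maison' in 61:
print(chi_square(59, 60, 61, 100))
```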


Fully Statistical MT

Suppose we are translating F -> E.

We conceptualize this by imagining that the French has arisen by a stochastic process from an underlying but unknown English version.

We need to guess the most likely such version.


Stochastic MT

Model P(E|F) as P(F|E)P(E) / P(F).

P(F) is constant; all we want is to choose the E that maximizes P(F|E)P(E).

P(F|E)P(E) is a noisy channel model, where P(F|E) is a distorting channel that converts E into F.
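A toy sketch of the decoding rule argmax_E P(F|E)P(E), assuming we already have an explicit list of candidate translations and stand-in scoring functions (a real decoder searches a vastly larger space):

```python
def best_translation(french, candidates, log_p_f_given_e, log_p_e):
    """Noisy-channel decoding over an explicit candidate list:
    argmax_E  P(F|E) * P(E), computed in log space."""
    return max(candidates,
               key=lambda e: log_p_f_given_e(french, e) + log_p_e(e))

# Toy scoring functions (stand-ins for the real channel and language models).
log_p_e = lambda e: -0.5 * len(e.split())
log_p_f_given_e = lambda f, e: -abs(len(f.split()) - len(e.split()))

print(best_translation("le chien dort",
                       ["the dog sleeps", "dog the sleeping is"],
                       log_p_f_given_e, log_p_e))
```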



Why not just model P(E|F)?

It seems circular to model P(E|F) by an approach that requires P(F|E).

P(F|E) is the translation specialist.
P(E) is the English specialist.
Both can be sloppy.


Scene of the crime

To be guilty:
You have to have a motive.
You have to have the means.
The CSI person can ignore motive.
The police psychologist can ignore means.

To be a good translation:
You have to convey the message.
You have to be plausible English.
P(F|E) can score translation while ignoring many aspects of English.
P(E) doesn't have to know anything about French.


Summa cum laude

“Topmost with praise” scores high on P(L|E) but poorly on P(E).
“Suit and tie” scores much higher on P(E) but badly on P(L|E).
“With highest honors” maximizes P(E)P(L|E) (we hope).


What are the models?

The source model p(E) could be a trigram model, which guarantees semi-fluent English.

The channel model p(F|E) or p(L|E) could be a finite-state transducer that stochastically translates each word and allows a little random rearrangement; with high probability, words stay more or less put.

Maximizing p(F|E) alone would give a really lousy French translation of English:
Random word translation is stupid; we need word sense from context.
Random word rearrangement is stupid; phrases rearrange!
This channel has no idea what fluent French looks like.

But maximizing p(E)*p(F|E) gives a better English translation of French, because p(E) knows what English should look like.
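A minimal sketch of a trigram source model p(E) with crude add-one smoothing; the vocabulary size and smoothing scheme are placeholders, not what any real system used:

```python
from collections import defaultdict
import math

class TrigramLM:
    """Minimal trigram source model p(E): raw counts plus add-one smoothing."""
    def __init__(self, sentences, vocab_size=10000):
        self.tri = defaultdict(int)
        self.bi = defaultdict(int)
        self.v = vocab_size
        for s in sentences:
            toks = ["<s>", "<s>"] + s.split() + ["</s>"]
            for i in range(2, len(toks)):
                self.tri[tuple(toks[i - 2:i + 1])] += 1
                self.bi[tuple(toks[i - 2:i])] += 1

    def logprob(self, sentence):
        toks = ["<s>", "<s>"] + sentence.split() + ["</s>"]
        lp = 0.0
        for i in range(2, len(toks)):
            t = tuple(toks[i - 2:i + 1])
            # add-one smoothing keeps unseen trigrams from zeroing the score
            lp += math.log((self.tri[t] + 1) / (self.bi[t[:2]] + self.v))
        return lp

lm = TrigramLM(["the dog ate my homework", "the dog sleeps"])
print(lm.logprob("the dog ate my homework"))
```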


IBM models 1, 2, 3, 4, 5

These are models for P(F|E).

There is a set of English words and the extra English word NULL.
Each English word generates and places 0 or more French words.
Any remaining French words are deemed to have been produced by NULL.


IBM models 1, 2, 3, 4, 5

In Model 2, the placement of a word in the French depends on where it was in the English.


IBM models 1, 2, 3, 4, 5

In Model 3, we model how many French words an English word can produce, using a concept called fertility.


IBM models 1, 2, 3, 4, 5

In Model 4, the placement of later French words produced by an English word depends on what happened to earlier French words generated by that same English word.


Alignments

The basic idea in all five models is to build models of translation on top of models of the alignment.

We really want P(F|E), which we could obtain from direct models of the joint distribution P(F, E).

Instead, we introduce probabilistic models of the alignment A and work with P(F, A, E).


Alignments

The dog ate my homework

Le chien a mangé mes devoirs
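One plausible alignment for this example, written as (French position, English position) pairs; the specific links are illustrative rather than taken from the slides:

```python
english = "The dog ate my homework".split()
french = "Le chien a mangé mes devoirs".split()

# One plausible alignment: each French position maps to an English position
# (0-based); both 'a' and 'mangé' align to 'ate'.
alignment = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 3), (5, 4)]

for f_idx, e_idx in alignment:
    print(f"{french[f_idx]:>8} -> {english[e_idx]}")
```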


IBM models 1, 2, 3, 4, 5

In Model 5 they do non-deficient alignment. To understand what is at stake, you need to know a little more about the internals of the models.


Alignments

All of Models 1 through 5 are actually models of how alignments come to be.

Conceptually, we form the probability of a pair of sentences by summing over all alignments that relate these sentences.

In practice, the Candide team sometimes preferred to approximate by assuming that the sentence pair having the single best alignment was also the best sentence pair.


Why all the models?

We don't start with aligned text, so we have to get initial alignments from somewhere.

Model 1 is words only, and is relatively easy to train with technology similar to the forward-backward algorithm for HMMs: the EM algorithm.
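A minimal sketch of IBM Model 1 EM training on a toy bitext, following the standard formulation (expected counts under the current t(f|e), then re-normalisation); it omits the many practical details of a real trainer:

```python
from collections import defaultdict

def train_model1(bitext, iterations=10):
    """Minimal IBM Model 1 EM: learn t(f|e) from sentence pairs.
    bitext is a list of (french_tokens, english_tokens) pairs;
    'NULL' is added to every English sentence."""
    f_vocab = {f for fs, _ in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))   # uniform initialisation
    for _ in range(iterations):
        count = defaultdict(float)                # expected counts c(f, e)
        total = defaultdict(float)                # expected counts c(e)
        for fs, es in bitext:
            es = ["NULL"] + es
            for f in fs:
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    frac = t[(f, e)] / norm       # P(f aligned to e)
                    count[(f, e)] += frac
                    total[e] += frac
        for (f, e), c in count.items():           # M-step: re-normalise
            t[(f, e)] = c / total[e]
    return t

bitext = [("la maison".split(), "the house".split()),
          ("la fleur".split(), "the flower".split())]
t = train_model1(bitext)
print(round(t[("maison", "house")], 3), round(t[("la", "the")], 3))
```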


Why all the models?

We are working in a space with many parameters and many local minima.

EM guarantees only local optimality, so it pays (as in Pereira and Schabes' work on grammar) to start from a place which is already good.


Why all the models?

The purpose of Model 1 is to find a good place from which to start training Model 2.
Model 2 feeds Model 3.
And so on.


Maximum entropy

Going beyond the five models, there is a systematic use of log-linear maximum entropy models.

These refine the probability models using features like: the French word is “en”, the English word is “in”, and a neighboring English word is a month name.
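A toy sketch of how a log-linear model would combine binary features like the ones above; the feature names and weights are invented, not those of the actual system:

```python
import math

MONTHS = {"January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"}

# Hypothetical binary features of the kind mentioned on the slide.
def features(f_word, e_word, e_context):
    return {
        "f=en,e=in": f_word == "en" and e_word == "in",
        "e_neighbor_is_month": any(w in MONTHS for w in e_context),
    }

# Made-up weights; in a real system these are learned by maximum entropy training.
weights = {"f=en,e=in": 1.2, "e_neighbor_is_month": 0.8}

def score(f_word, e_word, e_context):
    """Unnormalised log-linear score: the sum of weights of the active features."""
    return sum(weights[name]
               for name, active in features(f_word, e_word, e_context).items()
               if active)

print(math.exp(score("en", "in", ["in", "May"])))   # exp(1.2 + 0.8)
```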


Practicalities

Training on 2 million Hansard sentences takes 3600 processor hours on a farm of 15 1993-vintage IBM PowerPC processors.

My guess is that we could do this in about a week.


Performance

Rated on fluency and adequacy by independent raters (see the Candide paper for details).

Systran gets a score of .743 on adequacy, where Candide gets .670.
Candide gets a score of .580 on fluency, where Systran gets only .540.
Both are significantly less good than Transman, which is a human-in-the-loop system.


Prospects

More elaborate translation models, with serious notions of phrase, word sense, etc., are in the works.

Nobody really knows what level of quality is attainable.

Nobody really knows what level of quality is required for wider adoption.

For specific tasks, MT is clearly useful


Prospects

Many other tasks can be coerced to look like translation:
Summarization (E -> Summary)
Information retrieval (E -> Keywords)
Title generation (E -> Title)
Paraphrasing (E -> Paraphrase)

There’s a large and growing literature using this idea.


Where to read more

M&S chapter 13 and the references therein.

Kevin Knight, "Automating Knowledge Acquisition for Machine Translation", AI Magazine 18(4), 1997.

http://www-2.cs.cmu.edu/~aberger/mt.html

Especially the overview paper on Candide and the longer paper on Maximum Entropy with Berger as first author


Offset Alignment by Signal Processing Techniques I: Church, 1993

Church argues that length-based methods work well on clean text but may break down in real-world situations (noisy OCR or unknown markup conventions).

Church's method is to induce an alignment by using cognates (words that are similar across languages) at the level of character sequences.

The method consists of building a dot-plot: the source and translated texts are concatenated and a square graph is made with this text on both axes. A dot is placed at (x, y) when there is a match; the unit of matching is character 4-grams.
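A minimal sketch of the dot-plot construction, assuming we simply index the character 4-grams of the concatenated text and mark every pair of positions that share one; the real method then compresses and searches this plot:

```python
def dot_plot(text1, text2, n=4):
    """Mark (x, y) wherever the character n-gram starting at x matches
    the one starting at y in the concatenation of the two texts."""
    text = text1 + text2
    index = {}
    for i in range(len(text) - n + 1):
        index.setdefault(text[i:i + n], []).append(i)
    dots = []
    for positions in index.values():
        for x in positions:
            for y in positions:
                dots.append((x, y))
    return dots

# The interesting signal is the faint diagonal in the off-diagonal quadrants,
# where positions in text1 match cognate material in text2.
dots = dot_plot("government of canada", "gouvernement du canada")
```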


Offset Alignment by Signal Processing Techniques II: Church, 1993 (Cont’d)

Signal processing methods are then used to compress the resulting plot.

The interesting parts of the dot-plot are the bitext maps, which show the correspondence between the two languages.

In the bitext maps, faint, roughly straight diagonals can be found.

A heuristic search along these diagonals provides an alignment in terms of offsets in the two texts.


Offset Alignment by Signal Processing Techniques III: Fung & McKeown, 1994

Fung and McKeown's algorithm works:
without having found sentence boundaries,
on only roughly parallel text (with certain sections missing in one language),
and with unrelated language pairs.

The technique is to infer a small bilingual dictionary that will give points of alignment.

For each word, a signal is produced as an arrival vector of integers giving the number of words between successive occurrences of the word at hand.
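A small sketch of computing an arrival vector for one word, taken here as the positional gaps between successive occurrences; the example text is invented:

```python
def arrival_vector(tokens, word):
    """Positional gaps between successive occurrences of `word`."""
    positions = [i for i, w in enumerate(tokens) if w == word]
    return [b - a for a, b in zip(positions, positions[1:])]

tokens = "the cat sat on the mat near the door".split()
print(arrival_vector(tokens, "the"))   # [4, 3]: gaps between the three "the"s
```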