Natural Language Processing Assignment. Group Members: Soumyajit De, Naveen Bansal, Sanobar Nishat

Outline



Page 1: Outline

Natural Language Processing

Assignment Group Members:

• Soumyajit De
• Naveen Bansal
• Sanobar Nishat

Page 2: Outline

Outline

• POS tagging
  - Tag-wise accuracy
  - Graph: tag-wise accuracy
  - Precision, recall, F-score

• Improvements in POS tagging
  - Implementation of trigram
  - POS tagging with smoothing
  - Tag-wise accuracy
  - Improved precision, recall and F-score

• Comparison between discriminative and generative model

• Next word prediction
  - Model #1
  - Model #2
  - Implementation method and details
  - Scoring ratio
  - Perplexity ratio

Page 3: Outline

Outline

• NLTK

• Yago
  - Different examples using Yago

• Parsing
  - Different examples
  - Conclusions

• A* Implementation – A Comparison with Viterbi

Page 4: Outline

Assignment #1: HMM-Based POS Tagger

Page 5: Outline

Tag Wise Accuracy

Page 6: Outline

Graph – Tag Wise Accuracy

Page 7: Outline

Precision, Recall, F-Score

Precision: tp / (tp + fp) = 0.92

Recall: tp / (tp + fn) = 1

F-score: 2 · precision · recall / (precision + recall) = 0.958
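As a minimal sketch of how these figures are obtained (tp, fp, fn are the usual true-positive, false-positive and false-negative totals; the counts below are chosen only to reproduce the numbers above, not the assignment's actual evaluation script):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F-score from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# counts chosen only to reproduce the figures above
p, r, f = precision_recall_f1(tp=92, fp=8, fn=0)
print(p, r, f)  # 0.92 1.0 0.9583...
```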

Page 8: Outline

Improvements in HMM-Based POS Tagger

Assignment #1 (cont.)

Page 9: Outline

Improvement in HMM-Based POS Tagger: Implementation of Trigram

• Issue: sparsity
• Solution: implementation of smoothing techniques
• Result: increases overall accuracy up to 94%

Page 10: Outline

Smoothing Technique

Implementation of Smoothing Technique

• Linear interpolation technique
• Formula (interpolation of unigram, bigram and trigram estimates, as in TnT): P(t3 | t1, t2) = λ1·P(t3) + λ2·P(t3 | t2) + λ3·P(t3 | t1, t2)
• Finding the values of λ (discussed in "TnT – A Statistical Part-of-Speech Tagger"); a sketch follows below
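A minimal sketch of the linear interpolation and of the deleted-interpolation lambda estimation described in the TnT paper; function and variable names here are illustrative, not the assignment's actual code:

```python
from collections import Counter

def count_ngrams(tags):
    """Unigram, bigram and trigram tag counts from a training tag sequence."""
    uni, bi, tri = Counter(), Counter(), Counter()
    for i, t in enumerate(tags):
        uni[t] += 1
        if i >= 1:
            bi[(tags[i - 1], t)] += 1
        if i >= 2:
            tri[(tags[i - 2], tags[i - 1], t)] += 1
    return uni, bi, tri

def deleted_interpolation(uni, bi, tri, n):
    """Estimate lambda1..lambda3 as in Brants (2000): for each trigram,
    credit its count to whichever leave-one-out estimate is largest."""
    l1 = l2 = l3 = 0.0
    for (t1, t2, t3), c in tri.items():
        e3 = (c - 1) / (bi[(t1, t2)] - 1) if bi[(t1, t2)] > 1 else 0.0
        e2 = (bi[(t2, t3)] - 1) / (uni[t2] - 1) if uni[t2] > 1 else 0.0
        e1 = (uni[t3] - 1) / (n - 1)
        best = max(e1, e2, e3)
        if best == e3:
            l3 += c
        elif best == e2:
            l2 += c
        else:
            l1 += c
    total = l1 + l2 + l3
    return l1 / total, l2 / total, l3 / total

def smoothed_trigram_prob(t1, t2, t3, uni, bi, tri, lambdas, n):
    """P(t3|t1,t2) = l1*P(t3) + l2*P(t3|t2) + l3*P(t3|t1,t2)."""
    l1, l2, l3 = lambdas
    p1 = uni[t3] / n
    p2 = bi[(t2, t3)] / uni[t2] if uni[t2] else 0.0
    p3 = tri[(t1, t2, t3)] / bi[(t1, t2)] if bi[(t1, t2)] else 0.0
    return l1 * p1 + l2 * p2 + l3 * p3
```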

Page 11: Outline

POS Tagging Accuracy With Smoothing

[Chart: POS tagging accuracy with smoothing over 10 runs, ranging from about 94.02% to 94.22%]

Page 12: Outline

Precision, Recall, F-Score

Precision: tp / (tp + fp) = 0.9415

Recall: tp / (tp + fn) = 1

F-score: 2 · precision · recall / (precision + recall) = 0.97

Page 13: Outline

Tag Wise Accuracy

[Chart: tag-wise accuracy (%) for tags AJ0 through NP0]

Page 14: Outline

Tag Wise Accuracy (cont.)

[Charts: tag-wise accuracy (%) for the remaining tags, ORD through VBZ and VDB through ZZ0]

Page 15: Outline

Improvements in HMM-Based POS Tagger

Handling Unknown Words

Assignment #1 (cont.)

Page 16: Outline

Precision Score (accuracy in Percentage)

Page 17: Outline

Tag Wise Accuracy

Page 18: Outline

Error Analysis (Tag Wise Accuracy)

VVB – finite base form of lexical verbs (e.g. forget, send, live, return)
Count: 9916

Confused with: VVI (infinitive form of lexical verbs, e.g. forget, send, live, return)
Count: 1201
Reason: VVB is used to tag words that have the same form as the infinitive without "to", for all persons, e.g. "He has to show" vs. "Show me".

Confused with: VVD (past tense form of lexical verbs, e.g. forgot, sent, lived, returned)
Count: 145
Reason: The base form and past tense form of many verbs are the same, so the dominating emission probability of such words causes VVB to be wrongly tagged as VVD, while the transition probability has less influence.

Confused with: NN1 (singular common noun)
Count: 303
Reason: Words whose base form coincides with a common noun get confused, e.g. "The seasonally adjusted total regarded as…", where "total" has been tagged as both VVB and NN1.

Page 19: Outline

Error Analysis (cont.)

ZZ0 – alphabetical symbols (e.g. A, a, B, b, c, d) (Accuracy: 63%)
Count: 337

Confused with: AT0 (article, e.g. the, a, an, no)
Count: 98
Reason: The emission probability of "a" as AT0 is much higher compared to ZZ0, so AT0 dominates when tagging "a".

Confused with: CRD (cardinal number, e.g. one, 3, fifty-five, 3609)
Count: 16
Reason: Because of the bigram/trigram assumption in the transition probability.

Page 20: Outline

Error Analysis (cont.)

ITJ – interjection (Accuracy: 65%)
Count: 177
Note: The ITJ tag appears so few times that, even though it is not misclassified very often in absolute terms, its accuracy percentage is still low.

Confused with: AT0 (article, e.g. the, a, an, no)
Count: 26
Reason: "No" is used both as ITJ and as an article in the corpus, so the confusion is due to the higher emission probability of the word with AT0.

Confused with: NN1 (singular common noun)
Count: 14
Reason: "Bravo" is tagged as both NN1 and ITJ in the corpus.

Page 21: Outline

Error Analysis (cont.)

UNC – unclassified items (Accuracy: 23%)
Count: 756

Confused with: AT0 (article, e.g. the, a, an, no)
Count: 69
Reason: Because of the domination of the transition probability, UNC is wrongly tagged.

Confused with: NN1 (singular common noun)
Count: 224
Reason: Because of the domination of the transition probability, UNC is wrongly tagged.

Confused with: NP0 (proper noun, e.g. London, Michael, Mars, IBM)
Count: 132
Reason: A new word beginning with a capital letter is tagged as NP0, since UNC words mostly do not repeat across corpora.

Page 22: Outline

Assignment #2: Discriminative & Generative Model – A Comparison

Page 23: Outline

Discriminative and Generative Model

Page 24: Outline

Comparison Graph

[Charts: accuracy of the discriminative and generative models over 15 runs, both ranging from about 0.82 to 0.87]

Page 25: Outline

Conclusion

Since it is a unigram model, the discriminative and generative models give the same performance, as expected.
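A minimal sketch of why the two coincide in the unigram case (the tiny tagged corpus below is purely illustrative): both decision rules reduce to picking the tag most often seen with the word.

```python
from collections import Counter

# illustrative tagged training data: (word, tag) pairs
train = [("the", "AT0"), ("dog", "NN1"), ("the", "AT0"), ("runs", "VVZ")]

word_tag = Counter(train)                  # c(w, t)
tag_count = Counter(t for _, t in train)   # c(t)
n = len(train)

def generative_tag(w):
    # argmax_t P(w|t) * P(t) = argmax_t [c(w,t)/c(t)] * [c(t)/N] = argmax_t c(w,t)
    return max(tag_count, key=lambda t: (word_tag[(w, t)] / tag_count[t]) * (tag_count[t] / n))

def discriminative_tag(w):
    # argmax_t P(t|w) = argmax_t c(w,t) / c(w): the same argmax as above
    c_w = sum(word_tag[(w, t)] for t in tag_count)
    return max(tag_count, key=lambda t: word_tag[(w, t)] / c_w if c_w else 0.0)

print(generative_tag("the"), discriminative_tag("the"))  # AT0 AT0
```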

Page 26: Outline

Assignment #3: Next Word Prediction

Page 27: Outline

Model #1

When only the previous word is given.
Example: He likes -------

Page 28: Outline

Model #2

When the previous tag and the previous word are known.

Example: He_PP0 likes_VB0 --------

Previous Work

Page 29: Outline

Model #2 (cont.)

Current Work

Page 30: Outline

Evaluation Method

1. Scoring Method
   • Divide the test corpus into bigrams.
   • Match the second word of each test bigram against the word predicted by each model.
   • Increment a model's score whenever a match is found.
   • The final evaluation is the ratio of the two models' scores, i.e. model1/model2.
   • If the ratio > 1, model 1 is performing better, and vice versa (a sketch follows below).
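A minimal sketch of this scoring ratio, assuming each model is wrapped as a predict(previous_word) function (names are illustrative); model 2 would additionally consult the previous tag inside its own predict function:

```python
def score(predict, test_tokens):
    """Count how often the model's prediction matches the actual second word
    of each bigram in the test corpus."""
    hits = 0
    for prev, actual in zip(test_tokens, test_tokens[1:]):
        if predict(prev) == actual:
            hits += 1
    return hits

def scoring_ratio(predict1, predict2, test_tokens):
    """Ratio > 1 means model 1 predicts the held-out next word more often."""
    return score(predict1, test_tokens) / score(predict2, test_tokens)
```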

Page 31: Outline

Implementation Detail

Look-Up Table (the look-up is used to predict the next word):

Previous Word | Next Predicted Word (Model 1) | Next Predicted Word (Model 2)
I             | see                           | see
he            | looks                         | goes
...           | ...                           | ...
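A minimal sketch of how such a look-up table could be built from a tagged corpus (all names are illustrative): model 1 keys on the previous word only, model 2 on the previous (word, tag) pair, and each keeps its most frequent successor.

```python
from collections import Counter, defaultdict

def build_lookup_tables(tagged_corpus):
    """tagged_corpus: list of (word, tag) pairs in running order."""
    m1 = defaultdict(Counter)   # previous word        -> counts of next words
    m2 = defaultdict(Counter)   # (prev word, its tag) -> counts of next words
    for (w1, t1), (w2, _) in zip(tagged_corpus, tagged_corpus[1:]):
        m1[w1][w2] += 1
        m2[(w1, t1)][w2] += 1
    # keep only the single most likely next word, as in the look-up table above
    table1 = {k: c.most_common(1)[0][0] for k, c in m1.items()}
    table2 = {k: c.most_common(1)[0][0] for k, c in m2.items()}
    return table1, table2

# e.g. table1["he"] might hold "looks" while table2[("he", "PNP")] holds "goes"
```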

Page 32: Outline

Scoring Ratio

[Chart: scoring ratio (model 1 / model 2) over 5 runs]

Page 33: Outline

2. Perplexity: compare the perplexity of the two models on the test corpus (perplexity ratio).
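A minimal sketch of a per-word perplexity computation of the kind this ratio could be based on; prob_next is an assumed smoothed conditional probability function, not the assignment's actual code:

```python
import math

def perplexity(prob_next, test_tokens):
    """2 ** (average negative log2 probability of each next word in the test bigrams)."""
    log_sum, count = 0.0, 0
    for prev, actual in zip(test_tokens, test_tokens[1:]):
        p = prob_next(prev, actual)   # model's P(actual | prev), assumed > 0 after smoothing
        log_sum += -math.log2(p)
        count += 1
    return 2 ** (log_sum / count)

# perplexity ratio = perplexity(model1_prob, test) / perplexity(model2_prob, test)
```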

Page 34: Outline

Perplexity Ratio

[Chart: perplexity ratio over 5 runs, ranging from about 0.988 to 1.0]

Page 35: Outline

Remarks

Model 2 performs worse than model 1 because words are sparse among tags.

Page 36: Outline

Next Word Prediction: Further Experiments

Assignment #3 (cont.)

Page 37: Outline

Score (ratio) of word-prediction

[Chart: score ratio of word prediction over 10 runs, ranging from about 1.13 to 1.23]

Page 38: Outline

Perplexity (ratio) of word-prediction

[Chart: perplexity ratio of word prediction over 10 runs, ranging from about 0.84 to 1.0]

Page 39: Outline

Remarks

Perplexity is found to decrease in this model.

The overall score has increased.

Page 40: Outline

Assignment #4: Yago

Page 41: Outline

Example #1
Query: Amitabh and Sachin

wikicategory_Living_people -- <type> -- Amitabh_Bachchan -- <givenNameOf> -- Amitabh
wikicategory_Living_people -- <type> -- Sachin_Tendulkar -- <givenNameOf> -- Sachin

ANOTHER-PATH
wikicategory_Padma_Shri_recipients -- <type> -- Amitabh_Bachchan -- <givenNameOf> -- Amitabh
wikicategory_Padma_Shri_recipients -- <type> -- Sachin_Tendulkar -- <givenNameOf> -- Sachin

Page 42: Outline

Example #2
Query: India and Pakistan

PATH
wikicategory_WTO_member_economies -- <type> -- India
wikicategory_WTO_member_economies -- <type> -- Pakistan

ANOTHER-PATH
wikicategory_English-speaking_countries_and_territories -- <type> -- India
wikicategory_English-speaking_countries_and_territories -- <type> -- Pakistan

ANOTHER-PATH
Operation_Meghdoot -- <participatedIn> -- India
Operation_Meghdoot -- <participatedIn> -- Pakistan

Page 43: Outline

ANOTHER-PATH
Operation_Trident_(Indo-Pakistani_War) -- <participatedIn> -- India
Operation_Trident_(Indo-Pakistani_War) -- <participatedIn> -- Pakistan

ANOTHER-PATH
Siachen_conflict -- <participatedIn> -- India
Siachen_conflict -- <participatedIn> -- Pakistan

ANOTHER-PATH
wikicategory_Asian_countries -- <type> -- India
wikicategory_Asian_countries -- <type> -- Pakistan

Page 44: Outline

ANOTHER-PATH
Capture_of_Kishangarh_Fort -- <participatedIn> -- India
Capture_of_Kishangarh_Fort -- <participatedIn> -- Pakistan

ANOTHER-PATH
wikicategory_South_Asian_countries -- <type> -- India
wikicategory_South_Asian_countries -- <type> -- Pakistan

ANOTHER-PATH
Operation_Enduring_Freedom -- <participatedIn> -- India
Operation_Enduring_Freedom -- <participatedIn> -- Pakistan

ANOTHER-PATH
wordnet_region_108630039 -- <type> -- India
wordnet_region_108630039 -- <type> -- Pakistan

Page 45: Outline

Example #3

Query: Tom and Jerry

wikicategory_Living_people -- <type> -- Tom_Green -- <givenNameOf> -- Tom

wikicategory_Living_people -- <type> -- Jerry_Brown -- <givenNameOf> -- Jerry
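A minimal sketch of how such shared paths could be found once the Yago facts are loaded as (subject, relation, object) triples; the loading step and the exact triple orientation are assumptions, and only single-hop paths are handled here:

```python
from collections import defaultdict

def shared_paths(facts, entity_a, entity_b):
    """Return (subject, relation) pairs that point at both query entities,
    e.g. wikicategory_Asian_countries -- <type> -- India / Pakistan."""
    by_object = defaultdict(set)
    for subj, rel, obj in facts:
        by_object[obj].add((subj, rel))
    return by_object[entity_a] & by_object[entity_b]

# tiny illustrative fact list in the orientation shown on the slides
facts = [
    ("wikicategory_Asian_countries", "<type>", "India"),
    ("wikicategory_Asian_countries", "<type>", "Pakistan"),
    ("Siachen_conflict", "<participatedIn>", "India"),
    ("Siachen_conflict", "<participatedIn>", "Pakistan"),
]
print(shared_paths(facts, "India", "Pakistan"))
```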

Page 46: Outline

Assignment #5: Parser Projection

Page 47: Outline

Example#1

Page 48: Outline

Example#2

Page 49: Outline

Example#3

Page 50: Outline

Example#4, Example#5

Page 51: Outline

Example#6

Page 52: Outline

Example#7

Page 53: Outline

Example#8

Page 54: Outline

Conclusion

1. VBZ always comes at the end of the parse tree in Hindi and Urdu.
2. The structure in Hindi and Urdu always expands or reorders to NP VB, e.g. S => NP VP (no change) or VP => VBZ NP (interchange); see the sketch below.
3. For exact translation into Hindi and Urdu, merging of English sub-trees is sometimes required.
4. One-word to multiple-word mapping is common when translating from English to Hindi/Urdu, e.g. donor => aatiya shuda, or have => rakhta hai.
5. Phrase-to-phrase translation is sometimes required, so chunking is needed, e.g. hand in hand => choli daman ka saath (Urdu) => sath sath hain (Hindi).
6. DT NN or DT NP does not interchange.
7. In example#7, the correct translation does not require merging the two sub-trees MD and VP, e.g. could be => jasakta hai.
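A minimal sketch of observation 2 (moving the verb to the end of a VP to get Hindi/Urdu word order), using nltk.Tree; the single reordering rule here is a simplification of whatever the projection actually does:

```python
from nltk import Tree

def to_verb_final(tree):
    """Recursively move a leading VB* child to the end of every VP, so
    VP -> VBZ NP becomes VP -> NP VBZ, matching Hindi/Urdu word order."""
    if not isinstance(tree, Tree):
        return tree
    children = [to_verb_final(child) for child in tree]
    if (tree.label() == "VP" and len(children) > 1
            and isinstance(children[0], Tree)
            and children[0].label().startswith("VB")):
        children = children[1:] + children[:1]
    return Tree(tree.label(), children)

english = Tree.fromstring("(S (NP (PRP He)) (VP (VBZ likes) (NP (NN cricket))))")
print(to_verb_final(english))
# (S (NP (PRP He)) (VP (NP (NN cricket)) (VBZ likes)))
```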

Page 55: Outline

NLTK Toolkit

• NLTK is a suite of open source Python modules.
• Components of NLTK: code and corpora (>30 annotated data sets)
  1. corpus readers
  2. tokenizers
  3. stemmers
  4. taggers
  5. parsers
  6. wordnet
  7. semantic interpretation
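A small usage sketch of the components listed above (resource names such as "punkt" and the tagger model vary between NLTK versions, so treat the downloads as illustrative):

```python
import nltk
from nltk.corpus import wordnet

# one-time downloads of the resources used below
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("wordnet")

tokens = nltk.word_tokenize("Natural language processing is fun.")   # tokenizer
print(nltk.pos_tag(tokens))                                          # tagger
print(nltk.PorterStemmer().stem("running"))                          # stemmer
print(wordnet.synsets("dog")[0].definition())                        # wordnet
```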

Page 56: Outline

Assignment #6: A* Implementation & Comparison with Viterbi

Page 57: Outline

A* - Heuristic

[Diagram: tagging trellis from ^ to $ through levels A, B, C, D, showing transition probabilities and the selected route]

Fixed cost at each level (L) = (min cost) × no. of hops

Page 58: Outline

Heuristic (h)

F(B) = g(B) + h(B)

where
h(B) = min over (i,j) of { -log( Pr(Cj | Bi) * Pr(Wc | Cj) ) }
     + min over (i,j) of { -log( Pr(tj | ti) * Pr(Wtj | tj) ) } * (n - 2)
     + min over (k) of { -log( Pr($ | Dk) * Pr($ | $) ) }

Here:
  n      = number of nodes from Bi to $ (including $)
  Wc     = word emitted from the next node, C
  ti, tj = any combination of tags in the graph
  Wtj    = word emitted from the node tj
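A minimal sketch of an A* search over the tagging trellis with an admissible (and consistent) heuristic in the spirit of the formula above, simplified to charge the single cheapest possible arc cost for every remaining word; the probability tables trans and emit are assumed to exist:

```python
import heapq
import math

def astar_tag(words, tags, trans, emit):
    """A* over the HMM trellis. trans[(t_prev, t)] and emit[(word, t)] are
    probabilities; the start tag is "^" and the goal is having tagged all words."""
    n = len(words)
    # cheapest conceivable arc cost: -log of the largest transition*emission
    # product, so the heuristic never overestimates the remaining cost
    min_step = -math.log(max(trans.values()) * max(emit.values()))

    def h(pos):
        return (n - pos) * min_step

    # frontier entries: (f = g + h, g, position, current tag, tag path so far)
    frontier = [(h(0), 0.0, 0, "^", [])]
    while frontier:
        f, g, pos, tag, path = heapq.heappop(frontier)
        if pos == n:
            return path            # first goal popped is optimal since h is admissible
        word = words[pos]
        for t in tags:
            p = trans.get((tag, t), 0.0) * emit.get((word, t), 0.0)
            if p > 0.0:
                g2 = g - math.log(p)   # cost of the arc (tag -> t) emitting word
                heapq.heappush(frontier, (g2 + h(pos + 1), g2, pos + 1, t, path + [t]))
    return None
```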

Page 59: Outline

Result

Viterbi / A* ratio: score(Viterbi) / score(A*) = 1.0

where score(algo) = number of correct predictions on the test corpus

Page 60: Outline

Conclusion

1. Since we make the bigram assumption and Viterbi is pruned in a careful way that is guaranteed to find the optimal path in a bigram HMM, it returns the optimal path. For A*, since our heuristic underestimates the remaining cost and satisfies the triangle inequality, A* also returns the optimal path in the graph.

2. Since A* has to backtrack at times, it requires more time and memory than Viterbi to find the solution.