Natural Language Processing
Assignment
Group Members: Soumyajit De, Naveen Bansal, Sanobar Nishat
Outline
• POS tagging
  * Tag-wise accuracy
  * Graph: tag-wise accuracy
  * Precision, recall, F-score
• Improvements in POS tagging
  * Implementation of trigram
  * POS tagging with smoothing
  * Tag-wise accuracy
  * Improved precision, recall and F-score
• Comparison between discriminative and generative models
• Next word prediction
  * Model #1
  * Model #2
  * Implementation method and details
  * Scoring ratio
  * Perplexity ratio
• NLTK
• Yago
  * Different examples using Yago
• Parsing
  * Different examples
  * Conclusions
• A* implementation: a comparison with Viterbi
Assignment #1: HMM-Based POS Tagger
Tag Wise Accuracy
Graph – Tag Wise Accuracy
Precision, Recall, F-Score
Precision: tp / (tp + fp) = 0.92
Recall: tp / (tp + fn) = 1
F-score: 2 × precision × recall / (precision + recall) = 0.958
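As a quick illustration (not the assignment's actual code), the sketch below computes these metrics from raw true-positive/false-positive/false-negative counts; the counts in the example call are hypothetical ones chosen to reproduce the figures above.

```python
# Minimal sketch of the metric computation; tp/fp/fn are hypothetical
# counts, not the assignment's actual tallies.
def prf(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

p, r, f = prf(tp=92, fp=8, fn=0)  # gives precision=0.92, recall=1, F=0.958
print(f"precision={p:.3f} recall={r:.3f} f-score={f:.3f}")
```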
Improvements in HMM-Based POS Tagger
Assignment #1 (cont.)
Improvement in HMM-Based POS Tagger

Implementation of Trigram
* Issue: sparsity
* Solution: implementation of smoothing techniques
* Result: raises overall accuracy to 94%
Smoothing Technique
Implementation of Smoothing Technique
* Linear interpolation technique
* Formula (standard form): P(t3 | t1, t2) = λ1 P̂(t3) + λ2 P̂(t3 | t2) + λ3 P̂(t3 | t1, t2), with λ1 + λ2 + λ3 = 1
* The λ values are found by deleted interpolation (discussed in "TnT - A Statistical Part-of-Speech Tagger")
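A minimal sketch of how the λ values can be estimated with the deleted-interpolation algorithm from the TnT paper. The count tables `tri`, `bi`, `uni` and the token total `N` are assumed to be built from the training corpus; the names are ours, not the assignment's.

```python
def deleted_interpolation(tri, bi, uni, N):
    # tri[(t1, t2, t3)], bi[(t1, t2)], uni[t] are tag n-gram counts;
    # N is the total number of tag tokens in the training corpus.
    l1 = l2 = l3 = 0.0
    for (t1, t2, t3), c in tri.items():
        # Compare the three estimates with the current trigram removed.
        f3 = (c - 1) / (bi[(t1, t2)] - 1) if bi[(t1, t2)] > 1 else 0.0
        f2 = (bi[(t2, t3)] - 1) / (uni[t2] - 1) if uni[t2] > 1 else 0.0
        f1 = (uni[t3] - 1) / (N - 1)
        # Credit the trigram's count to whichever estimate explains it best.
        best = max(f1, f2, f3)
        if best == f3:
            l3 += c
        elif best == f2:
            l2 += c
        else:
            l1 += c
    total = l1 + l2 + l3
    return l1 / total, l2 / total, l3 / total
```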
POS Tagging Accuracy With Smoothing
[Chart: overall tagging accuracy with smoothing across 10 runs; values range from about 94.02% to 94.22%]
Precision, Recall, F-Score
Precision: tp / (tp + fp) = 0.9415
Recall: tp / (tp + fn) = 1
F-score: 2 × precision × recall / (precision + recall) = 0.97
Tag Wise Accuracy
[Charts: per-tag accuracy (%) across three panels, covering tags AJ0 through NP0, ORD through VBZ, and VDB through ZZ0; y-axis 0-100%]
Improvements in HMM-Based POS Tagger
Handling Unknown Words
Assignment #1 (cont.)
Precision Score (accuracy in Percentage)
Tag Wise Accuracy
Error Analysis (Tag Wise Accuracy)
VVB - finite base form of lexical verbs (e.g. forget, send, live, return)
Count: 9916

Confused with: VVI (infinitive form of lexical verbs, e.g. forget, send, live, return)
Count: 1201
Reason: VVB is used to tag words that have the same form as the infinitive without "to" for all persons, e.g. "He has to show" vs. "Show me".

Confused with: VVD (past tense form of lexical verbs, e.g. forgot, sent, lived, returned)
Count: 145
Reason: The base form and past tense form of many verbs are identical, so the dominating emission probability of such words causes VVB to be wrongly tagged as VVD; the transition probability has a lower influence.

Confused with: NN1 (singular common noun)
Count: 303
Reason: Words with a similar base form get confused with common nouns, e.g. "The seasonally adjusted total regarded as..."; "total" is tagged both as VVB and NN1.
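For reference, confusion counts like the ones above can be derived directly from parallel gold/predicted tag sequences; a minimal sketch with our own variable names, not the assignment's code:

```python
from collections import Counter

def confusions(gold_tags, pred_tags, tag):
    # Count what `tag` was mistaken for, position by position.
    return Counter(p for g, p in zip(gold_tags, pred_tags)
                   if g == tag and p != tag)

# e.g. confusions(gold, pred, 'VVB').most_common(3) would list
# pairs such as ('VVI', ...), ('NN1', ...), ('VVD', ...) as above.
```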
Error Analysis (cont.)
ZZ0 - alphabetical symbols (e.g. A, a, B, b, c, d); accuracy: 63%
Count: 337

Confused with: AT0 (article, e.g. the, a, an, no)
Count: 98
Reason: The emission probability of "a" as AT0 is much higher compared to ZZ0, so AT0 dominates when tagging "a".

Confused with: CRD (cardinal number, e.g. one, 3, fifty-five, 3609)
Count: 16
Reason: Due to the bigram/trigram transition probability assumption.
Error Analysis (cont.)
ITJ - interjection; accuracy: 65%
Count: 177
Note: The ITJ tag appears so rarely that even a small number of misclassifications keeps its accuracy percentage low.

Confused with: AT0 (article, e.g. the, a, an, no)
Count: 26
Reason: "No" is used both as ITJ and as an article in the corpus, so the confusion comes from the higher emission probability of the word with AT0.

Confused with: NN1 (singular common noun)
Count: 14
Reason: "Bravo" is tagged both as NN1 and as ITJ in the corpus.
Error Analysis (cont.)
UNC - unclassified items; accuracy: 23%
Count: 756

Confused with: AT0 (article, e.g. the, a, an, no)
Count: 69
Reason: Because the transition probability dominates, UNC is wrongly tagged.

Confused with: NN1 (singular common noun)
Count: 224
Reason: Because the transition probability dominates, UNC is wrongly tagged.

Confused with: NP0 (proper noun, e.g. London, Michael, Mars, IBM)
Count: 132
Reason: A new word beginning with a capital letter is tagged as NP0, since UNC words mostly do not repeat across corpora.
Assignment #2: Discriminative & Generative Models - A Comparison
Discriminative and Generative Model
Comparison Graph
[Charts: accuracy of the discriminative model and the generative model over 15 runs; both curves lie in the 0.82-0.87 range]
Conclusion
Since the models are unigram, the discriminative and generative models give the same performance, as expected.
Assignment #3: Next Word Prediction
Model # 1
When only the previous word is given.
Example: He likes -------
Model # 2
When the previous tag and the previous word are known.
Example: He_PP0 likes_VB0 --------
(Both models are sketched below.)
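A minimal sketch of the two predictors, assuming bigram count tables built from a tagged training corpus (data layout and names are our assumptions, not the assignment's code):

```python
from collections import defaultdict

bigram = defaultdict(lambda: defaultdict(int))      # bigram[w1][w2] = count
tag_bigram = defaultdict(lambda: defaultdict(int))  # tag_bigram[(w1, t1)][w2] = count

def train(tagged_sentences):
    # tagged_sentences: iterable of [(word, tag), ...] lists
    for sent in tagged_sentences:
        for (w1, t1), (w2, _) in zip(sent, sent[1:]):
            bigram[w1][w2] += 1
            tag_bigram[(w1, t1)][w2] += 1

def predict_model1(prev_word):
    """Model 1: most frequent next word given only the previous word."""
    cands = bigram.get(prev_word)
    return max(cands, key=cands.get) if cands else None

def predict_model2(prev_word, prev_tag):
    """Model 2: most frequent next word given the previous word and its tag."""
    cands = tag_bigram.get((prev_word, prev_tag))
    return max(cands, key=cands.get) if cands else None
```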
Previous Work
Model # 2 (cont.)
Current Work
Evaluation Method
1. Scoring Method
 * Divide the testing corpus into bigrams.
 * Match the second word of each test bigram against the word predicted by each model.
 * Increment a model's score whenever a match is found.
 * The final evaluation is the ratio of the two models' scores, i.e. model1/model2; if the ratio > 1, model 1 is performing better, and vice versa (see the sketch below).
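A sketch of the scoring step under the same assumptions, where each model is a function from the previous context to a predicted word:

```python
def score(predict, test_bigrams):
    # Count test bigrams whose second word matches the prediction.
    return sum(1 for w1, w2 in test_bigrams if predict(w1) == w2)

# Hypothetical usage, building on the predictors sketched earlier
# (tag_of is an assumed word -> tag lookup from the tagged test corpus):
# ratio = score(predict_model1, test_bigrams) / \
#         score(lambda w: predict_model2(w, tag_of[w]), test_bigrams)
# ratio > 1 means model 1 is performing better.
```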
Implementation Detail
Look-Up Table

Previous Word | Next Predicted Word (Model 1) | Next Predicted Word (Model 2)
I             | see                           | see
he            | looks                         | goes
...           | ...                           | ...

The look-up table is used to predict the next word.
Scoring Ratio
[Chart: scoring ratio over 5 runs; y-axis ticks from 10.4 to 12.2]
2. Perplexity
 * Per-word perplexity follows the standard definition PP = 2^(-(1/N) Σ log2 P(w_i | context)); lower is better.
 * Comparison: the two models' perplexities are compared as a ratio, model 1 / model 2 (see the sketch below).
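A sketch of the perplexity computation under that standard definition; `prob(w1, w2)` is an assumed interface returning the model's P(w2 | w1), and a small floor guards against zero probabilities:

```python
import math

def perplexity(prob, test_bigrams, floor=1e-12):
    # Standard per-word perplexity: 2 ** (-(1/N) * sum of log2 P).
    log_sum = sum(math.log2(max(prob(w1, w2), floor))
                  for w1, w2 in test_bigrams)
    return 2 ** (-log_sum / len(test_bigrams))

# perplexity_ratio = perplexity(prob1, bigrams) / perplexity(prob2, bigrams)
```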
Perplexity Ratio
[Chart: perplexity ratio over 5 runs; values roughly 0.988 to 1.0]
Remarks
Model 2 performs worse than model 1 because words are sparse among tags.
Next Word Prediction: Further Experiments
Assignment #3 (cont.)
Score (ratio) of word-prediction
[Chart: word-prediction score ratio over 10 runs; values roughly 1.13 to 1.23]
Perplexity (ratio) of word-prediction
[Chart: word-prediction perplexity ratio over 10 runs; values roughly 0.84 to 1.0]
Remarks
Perplexity is found to decrease in this model.
The overall score has increased.
Assignment #4: Yago
Example #1
Query: Amitabh and Sachin

PATH
wikicategory_Living_people -- <type> -- Amitabh_Bachchan -- <givenNameOf> -- Amitabh
wikicategory_Living_people -- <type> -- Sachin_Tendulkar -- <givenNameOf> -- Sachin

ANOTHER-PATH
wikicategory_Padma_Shri_recipients -- <type> -- Amitabh_Bachchan -- <givenNameOf> -- Amitabh
wikicategory_Padma_Shri_recipients -- <type> -- Sachin_Tendulkar -- <givenNameOf> -- Sachin
Example #2
Query: India and Pakistan

PATH
wikicategory_WTO_member_economies -- <type> -- India
wikicategory_WTO_member_economies -- <type> -- Pakistan

ANOTHER-PATH
wikicategory_English-speaking_countries_and_territories -- <type> -- India
wikicategory_English-speaking_countries_and_territories -- <type> -- Pakistan

ANOTHER-PATH
Operation_Meghdoot -- <participatedIn> -- India
Operation_Meghdoot -- <participatedIn> -- Pakistan

ANOTHER-PATH
Operation_Trident_(Indo-Pakistani_War) -- <participatedIn> -- India
Operation_Trident_(Indo-Pakistani_War) -- <participatedIn> -- Pakistan

ANOTHER-PATH
Siachen_conflict -- <participatedIn> -- India
Siachen_conflict -- <participatedIn> -- Pakistan

ANOTHER-PATH
wikicategory_Asian_countries -- <type> -- India
wikicategory_Asian_countries -- <type> -- Pakistan

ANOTHER-PATH
Capture_of_Kishangarh_Fort -- <participatedIn> -- India
Capture_of_Kishangarh_Fort -- <participatedIn> -- Pakistan

ANOTHER-PATH
wikicategory_South_Asian_countries -- <type> -- India
wikicategory_South_Asian_countries -- <type> -- Pakistan

ANOTHER-PATH
Operation_Enduring_Freedom -- <participatedIn> -- India
Operation_Enduring_Freedom -- <participatedIn> -- Pakistan

ANOTHER-PATH
wordnet_region_108630039 -- <type> -- India
wordnet_region_108630039 -- <type> -- Pakistan
Example #3
Query: Tom and Jerry

PATH
wikicategory_Living_people -- <type> -- Tom_Green -- <givenNameOf> -- Tom
wikicategory_Living_people -- <type> -- Jerry_Brown -- <givenNameOf> -- Jerry
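The connecting paths above all share the pattern of a single node linked to both query entities. A minimal sketch of how such two-hop paths can be found over YAGO-style triples; this is our simplified view, not the assignment's code:

```python
from collections import defaultdict

def common_sources(triples, a, b):
    # Index (source, relation, target) triples, then report every source
    # node that is linked to both query entities.
    links = defaultdict(set)
    for s, rel, o in triples:
        links[s].add((rel, o))
    for s, pairs in links.items():
        for ra in {r for r, o in pairs if o == a}:
            for rb in {r for r, o in pairs if o == b}:
                yield (s, ra, a), (s, rb, b)

triples = [("wikicategory_Asian_countries", "type", "India"),
           ("wikicategory_Asian_countries", "type", "Pakistan")]
print(list(common_sources(triples, "India", "Pakistan")))
# [(('wikicategory_Asian_countries', 'type', 'India'),
#   ('wikicategory_Asian_countries', 'type', 'Pakistan'))]
```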
Assignment #5: Parser Projection
Examples #1 through #8
[Figures: projected parse trees for each example]
Conclusion
1. VBZ always comes at the end of the parse tree in Hindi and Urdu.
2. The structure in Hindi and Urdu always expands or reorders to NP VB, e.g. S => NP VP (no change) or VP => VBZ NP (interchanged).
3. For an exact translation into Hindi and Urdu, merging of sub-trees in English is sometimes required.
4. One-word-to-multiple-words mapping is common when translating from English to Hindi/Urdu, e.g. donor => aatiya shuda, or have => rakhta hai.
5. Phrase-to-phrase translation is sometimes required, so chunking is needed, e.g. hand in hand => choli daman ka saath (Urdu) => sath sath hain (Hindi).
6. DT NN or DT NP does not interchange.
7. In Example #7, a correct translation does not require merging the two sub-trees MD and VP, e.g. could be => jasakta hai.
NLTK Toolkit
NLTK is a suite of open source Python modules
Components of NLTK: code and corpora (>30 annotated data sets)
1. corpus readers
2. tokenizers
3. stemmers
4. taggers
5. parsers
6. wordnet
7. semantic interpretation
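A minimal example of the tokenizer and tagger components, using the present-day NLTK API (the model names in the download calls are NLTK's current ones, not something from the assignment):

```python
import nltk
nltk.download('punkt')                       # tokenizer model
nltk.download('averaged_perceptron_tagger')  # POS tagger model

tokens = nltk.word_tokenize("He likes natural language processing.")
print(nltk.pos_tag(tokens))
# [('He', 'PRP'), ('likes', 'VBZ'), ('natural', 'JJ'), ...]

from nltk.stem import PorterStemmer
print(PorterStemmer().stem("processing"))    # 'process'
```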
Assignment #6: A* Implementation & Comparison with Viterbi
A* - Heuristic
[Figure: trellis from the start symbol ^ through tag columns A, B, C, D to the end symbol $; the legend distinguishes the selected route from the other transition-probability edges]

Fixed cost at each level (L) = (min cost) × number of hops

F(B) = g(B) + h(B)
where h(B) = min over (i,j) of { -log( Pr(Cj | Bi) × Pr(Wc | Cj) ) }
           + min over (i,j) of { -log( Pr(tj | ti) × Pr(Wtj | tj) ) } × (n - 2)
           + min over k of { -log( Pr($ | Dk) × Pr($ | $) ) }

Here, n = number of nodes from Bi to $ (including $)
Wc = word emitted from the next node, C
ti, tj = any combination of tags in the graph
Wtj = word emitted from the node tj
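A minimal sketch of A* decoding over a bigram-HMM trellis, with an admissible heuristic in the spirit of the formula above (cheapest possible step cost times the number of remaining hops). The table layouts and names are our assumptions, and the final $ transition is omitted for brevity:

```python
import heapq, math

def cost(p):
    # Negative log probability; infinite cost for zero probability.
    return -math.log(p) if p > 0 else math.inf

def a_star_decode(words, tags, trans, emit, start='^'):
    # trans[t1][t2] = Pr(t2 | t1), emit[t][w] = Pr(w | t); assumed layouts.
    n = len(words)
    # Admissible heuristic: the cheapest transition+emission cost found
    # anywhere in the lattice, times the number of remaining hops.
    steps = (cost(trans.get(t1, {}).get(t2, 0) * emit.get(t2, {}).get(w, 0))
             for t1 in [start] + list(tags) for t2 in tags for w in words)
    min_step = min((s for s in steps if s < math.inf), default=0.0)
    h = lambda i: min_step * (n - i)

    frontier = [(h(0), 0.0, 0, start, [])]   # (f = g + h, g, pos, tag, path)
    while frontier:
        f, g, i, t, path = heapq.heappop(frontier)
        if i == n:                           # all words tagged
            return path
        for t2 in tags:
            p = trans.get(t, {}).get(t2, 0) * emit.get(t2, {}).get(words[i], 0)
            if p > 0:
                g2 = g + cost(p)
                heapq.heappush(frontier,
                               (g2 + h(i + 1), g2, i + 1, t2, path + [t2]))
    return None
```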
Result

Viterbi / A* ratio: score(Viterbi) / score(A*) = 1.0
where score(Algo) = number of correct predictions on the test corpus
Conclusion
1. Since we make a bigram assumption, and Viterbi prunes carefully in a way that is guaranteed to find the optimal path in a bigram HMM, it returns the optimal path. For A*, since our heuristic underestimates the true cost and satisfies the triangle inequality, A* also returns the optimal path in the graph.
2. Since A* has to backtrack at times, it requires more time and memory than Viterbi to find the solution.