Starting from Scratch in Semantic Role Labeling


Page 1

Starting from Scratch in Semantic Role Labeling

Michael Connor, Yael Gertner, Cynthia Fisher, Dan Roth

Page 2

Topid rivvo den marplox.

How do we acquire language?

Page 3

“the language”

“the world”

[Topid rivvo den marplox.]

The language-world mapping problem

Page 4

Smur! Rivvo della frowler.

Topid rivvo den marplox.

Marplox dorinda blicket.

Blert dor marplox, arno.

Scene 1

Scene 3

Scene n

Observe how words are distributed across situations

Page 5

Children can learn the meanings of some nouns via cross-situational observation alone [Fisher, 1996; Gillette, Gleitman, Gleitman, & Lederer, 1999; Snedeker & Gleitman, 2005]

But how do they learn the meanings of verbs?
Sentence comprehension is grounded by the acquisition of an initial set of concrete nouns.
These nouns yield a skeletal sentence structure: candidate arguments, a cue to the sentence's semantic predicate-argument structure.
Represent the sentence in an abstract form that permits generalization to new verbs. [Johanna rivvo den sheep.]

Structure-mapping: A proposed starting point for syntactic bootstrapping

Nouns identified

Page 6

Strong predictions [Gertner & Fisher, 2006]: test 21-month-olds on assigning arguments to novel verbs; how the order of nouns influences interpretation in transitive & intransitive sentences.

Transitive: The boy is daxing the girl!
Agent-first: The boy and the girl are daxing!

Agent-last: The girl and the boy are daxing!

Error disappears by 25 months (preferential looking paradigm).

Page 7

Current Project: BabySRL
A realistic computational model for syntactic bootstrapping via structure mapping:
Verb meanings are learned via their syntactic argument-taking roles
Semantic feedback improves the syntactic & meaning representations

Develop a Semantic Role Labeling system (BabySRL) to experiment with theories of early language acquisition
SRL as a minimal level of language understanding: determine who does what to whom

Inputs and knowledge sources: only those we can defend children have access to

Page 8

BabySRL: Key Components

Representation: a theoretically motivated representation of the input
A shallow, abstract sentence representation consisting of:
Number of nouns in the sentence
Noun patterns (1st of two nouns, etc.)
Relative position of nouns and predicates

Learning: guided by knowledge kids have
Classify words by part of speech
Identify arguments and predicates
Determine the roles arguments take

Page 9

BabySRL: Early Results [Connor et al., CoNLL'08, '09]

Fine-grained experiments with how language is represented; test different levels of representation.

Primary focus on the noun pattern (NPattern) feature. Hypothesis: the number and order of nouns are important.
Once we know some nouns, we can use them to represent structure. NPattern gives count and placement: first of two, second of three, etc.

Alternative: verb position (VerbPos). Is the target argument before or after the verb?

Key finding: NPattern reproduces errors seen in children.
It promotes the A0-A1 interpretation in transitive, but also in intransitive, sentences.
Verb position does not make this error; incorporating it recovers the correct interpretation.

But: this was done with manually labeled data, and the feedback varies.

Page 10

BabySRL: Key Components

Representation: a theoretically motivated representation of the input
A shallow, abstract sentence representation consisting of:
Number of nouns in the sentence
Noun patterns (1st of two nouns, etc.)
Relative position of nouns and predicates

Learning: guided only by knowledge kids have
Classify words by part of speech
Identify arguments and predicates
Determine the roles arguments take

Page 11

This work: Minimally Supervised BabySRL
Goal: unsupervised "parsing" for identifying arguments, providing little prior knowledge & high-level semantic feedback
Defensible from psycholinguistic evidence

Overview:
Unsupervised parsing: identifying part-of-speech states
Argument identification: identify argument states, identify predicate states
Argument role classification: labeled training using unsupervised arguments
Results and comparison to child experiments

Page 12

BabySRL Overview

Traditional Approach:
Parse input; identify arguments; classify arguments; global inference over arguments
Each stage has its own classifier/knowledge source; finely labeled training data throughout

She always has salad .

[NP She] [VP always has] [NP salad] .     V = has;   A0 = She, A1 = salad

Page 13

BabySRL Overview

Unsupervised Approach:
Unsupervised HMM; identify argument states; classify arguments; no global inference
Labeled training only for the argument classifier; the rest is driven by simple background knowledge

She always has salad .

She/46  always/48  has/26  salad/74  ./2
Identified: She = N, has = V, salad = N;  roles: She = A0, salad = A1

Page 14

Unsupervised Parsing

We want to generate a representation that permits generalization over word forms: incorporate distributional similarity, context-sensitive.
A Hidden Markov Model (HMM): a simple model that essentially provides part-of-speech information, but without names for the states; we need to figure those out.

Train on child-directed speech from the CHILDES repository: around 1 million words, across multiple children.
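As a rough illustration of this stage (not the authors' code): the sketch below trains a discrete-emission HMM on tokenized utterances with hmmlearn. The library choice, the 80-state setting, and the `sentences` variable are assumptions made for the example.

import numpy as np
from hmmlearn import hmm

# Assumption: `sentences` is a list of tokenized CHILDES utterances, e.g.
# sentences = [["she", "always", "has", "salad", "."], ...]

def train_hmm(sentences, n_states=80, n_iter=20):
    # Map each word form to an integer symbol id.
    vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s}))}
    # hmmlearn expects one long column of symbols plus per-sentence lengths.
    X = np.concatenate([[vocab[w] for w in s] for s in sentences]).reshape(-1, 1)
    lengths = [len(s) for s in sentences]
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=n_iter, random_state=0)
    model.fit(X, lengths)          # plain EM (Baum-Welch)
    return model, vocab

def tag(model, vocab, sentence):
    # Most likely hidden state (unlabeled word class) for each word.
    return model.predict(np.array([[vocab[w]] for w in sentence]))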

Page 15

Unsupervised Parsing (II)

The standard way to train an unsupervised HMM, simple EM, produces uniform-size clusters. One solution: include priors for sparsity, e.g. a Dirichlet prior (Variational Bayes, VB).
We replace this with psycholinguistically plausible knowledge: knowledge of function words.
Function and content words have different statistics, and there is evidence that even newborns can make this distinction. We don't use prosody, but it may provide this cue.
Technically: allocate a number of HMM states to function words and leave the rest to the remaining words. This is done before parameter estimation and can be combined with EM or VB learning: EM+Funct, VB+Funct (sketched below).
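A sketch of the function-word pre-clustering, under the same assumptions as the previous sketch: a block of states is reserved for a supplied function-word list by zeroing the corresponding entries of the initial emission matrix, and because EM's multiplicative updates keep zeros at zero, the split persists through training. The function-word list, the 16-state block size, and the hmmlearn usage are illustrative assumptions.

import numpy as np
from hmmlearn import hmm

def train_hmm_funct(sentences, function_words, n_states=80, n_func_states=16, n_iter=20):
    # EM+Funct sketch: reserve the first n_func_states states for function words.
    vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s}))}
    func_ids = {vocab[w] for w in function_words if w in vocab}

    # Blocked initial emission matrix: function-word states emit only function
    # words, the remaining states emit only content words.
    emis = np.random.rand(n_states, len(vocab)) + 0.1
    for j in range(len(vocab)):
        if j in func_ids:
            emis[n_func_states:, j] = 0.0
        else:
            emis[:n_func_states, j] = 0.0
    emis /= emis.sum(axis=1, keepdims=True)

    model = hmm.CategoricalHMM(n_components=n_states, n_iter=n_iter,
                               init_params="st", random_state=0)  # keep our emissions
    model.emissionprob_ = emis
    X = np.concatenate([[vocab[w] for w in s] for s in sentences]).reshape(-1, 1)
    model.fit(X, lengths=[len(s) for s in sentences])  # the zero entries stay zero
    return model, vocab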

Page 16

Unsupervised Parsing Evaluation

Tested as unsupervised POS tagging on a subset of hand-corrected CHILDES data.

Incorporating function word pre-clustering allows both EM & VB to achieve the same performance with an order of magnitude fewer sentences

EM: HMM trained with EM

VB: HMM trained with Variational Bayes & Dirichlet Prior

EM+Funct, VB+Funct: the same training methods with function-word pre-clustering
[Figure: variance of information (y-axis, lower is better) vs. number of training sentences]
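For reference, a generic sketch of the clustering-comparison metric the y-axis label appears to refer to (Meilă's variation of information, VI = H(X|Y) + H(Y|X)); lower is better, meaning the induced HMM states agree more closely with the gold tags. This is not the authors' evaluation code, and the example tag names are hypothetical.

import math
from collections import Counter

def variation_of_information(gold_tags, hmm_states):
    # VI(X, Y) = H(X|Y) + H(Y|X), computed from paired label sequences.
    assert len(gold_tags) == len(hmm_states)
    n = len(gold_tags)
    joint = Counter(zip(gold_tags, hmm_states))
    px, py = Counter(gold_tags), Counter(hmm_states)
    vi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # -p(x,y) * [log p(x|y) + log p(y|x)]
        vi -= p_xy * (math.log(p_xy / (py[y] / n)) + math.log(p_xy / (px[x] / n)))
    return vi

# Hypothetical example: gold POS tags vs. induced HMM states for one utterance.
# variation_of_information(["PRO", "ADV", "V", "N", "PUNCT"], [46, 48, 26, 74, 2])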

Page 17

Argument Identification

Now we have a parser that gives us the state (cluster) each word belongs to.

Next: identify states that correspond to arguments & predicates

Knowledge: we provide a list of frequent nouns. As few as 10 nouns cover 60% of occurrences (mostly pronouns).
There is a lot of evidence that children know and recognize nouns early on.

Algorithm: states that appear more than half the time with known nouns are treated as argument states (sketch below). This assumes that nouns = arguments.
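A minimal sketch of this heuristic; the variable names, data layout, and threshold handling are illustrative, with the slide's "over half the time" criterion expressed as the 0.5 threshold.

from collections import Counter

def find_argument_states(tagged_sentences, seed_nouns, threshold=0.5):
    # tagged_sentences: list of [(word, hmm_state), ...] per utterance.
    total = Counter()      # how often each state occurs
    with_noun = Counter()  # how often it occurs on a known (seed) noun
    for sent in tagged_sentences:
        for word, state in sent:
            total[state] += 1
            if word.lower() in seed_nouns:
                with_noun[state] += 1
    # Keep states that fire on known nouns more than half the time.
    return {s for s in total if with_noun[s] / total[s] > threshold}

# Example seed list (first few of the frequent nouns listed on the next slide):
# seed_nouns = {"you", "it", "i", "what", "he", "me", "ya", "she", "we", "her"}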

Page 18

She 46 {it he she who Fraser Sarah Daddy Eve...}

always 48 {just go never better always only even ...}

has 26 {have like did has...}

salad 74 {it you what them him me her something...}

. 2 {. ? !}


Argument Identification
Knowledge: frequent nouns: You, it, I, what, he, me, ya, she, we, her, him, who, Ursula, Daddy, Fraser, baby, something, head, chair, lunch, …

List of words that occurred with state 46 in CHILDES

Page 19

Predicate Identification

Nouns are concrete and can be identified. Predicates are more difficult: they are not learned easily via cross-situational observation.

Structure-mapping account: sentence comprehension is grounded in the learning of an initial set of nouns

Verbs are identified based on their argument-taking behavior.

Algorithm: identify predicates as those states that tend to appear with a given number of arguments (sketch below). Assume one predicate per sentence.
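A rough sketch of one way to implement this, continuing the assumptions above; the scoring rule (pick the non-argument word whose state most prefers the sentence's observed argument count) is an illustrative reading of the slide, not necessarily the paper's exact procedure.

from collections import Counter, defaultdict

def argument_count_profiles(tagged_sentences, arg_states):
    # For each non-argument state: P(sentence has k arguments | state occurs).
    counts = defaultdict(Counter)
    for sent in tagged_sentences:
        k = sum(1 for _, state in sent if state in arg_states)
        for _, state in sent:
            if state not in arg_states:
                counts[state][k] += 1
    return {s: {k: c / sum(ks.values()) for k, c in ks.items()}
            for s, ks in counts.items()}

def pick_predicate(sent, arg_states, profiles):
    # One predicate per sentence: the non-argument word whose state most
    # strongly prefers this sentence's number of arguments.
    k = sum(1 for _, state in sent if state in arg_states)
    candidates = [(profiles.get(state, {}).get(k, 0.0), i)
                  for i, (_, state) in enumerate(sent) if state not in arg_states]
    return max(candidates)[1] if candidates else None  # index of the predicate word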

Page 20

Predicate Identification

She 46

always 48

has 26

salad 74

. 2

State 48:  1 argument: 0.2,  2 arguments: 0.4,  3 arguments: 0.3,  ...
State 26:  1 argument: 0.1,  2 arguments: 0.6,  3 arguments: 0.2,  ...

Page 21

Argument Identification Results

Tested against hand-labeled argument boundaries on CHILDES child-directed speech.
Vary the number of seed nouns.

[Figures: argument identification and predicate identification accuracy vs. number of seed nouns (Good/Bad directions marked). Differences between the parsing methods (e.g., EM+Funct) disappeared, with implications for the features' quality.]

Page 22

Finally: BabySRL Experiments

Given potential arguments & predicates, train the argument classifier.
Given an abstract representation of an argument and its predicate, determine the argument's role (Agent, Patient, etc.).
To train, apply true role labels to the noisily identified arguments. Roles are relative to the predicate; the classifier is a regularized perceptron.

Abstract representations (features) considered (depending only on the number and order of arguments):
(1) Lexical: target noun and target predicate
(2) NounPattern: first of two, second of three, etc. In "She always has salad", `She` is first of two, `salad` is second of two
(3) VerbPosition: `She` is before the verb, `salad` is after the verb
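A sketch of how these three feature families could be extracted for one (argument, predicate) pair on the "She always has salad" example; the dictionary-based feature encoding and function names are illustrative, not the paper's implementation.

def srl_features(words, arg_positions, pred_position, target_idx):
    # Features for the argument at arg_positions[target_idx].
    # words: tokenized sentence; arg_positions: indices of identified arguments;
    # pred_position: index of the identified predicate (None if not found).
    i = arg_positions[target_idx]
    feats = {}
    # (1) Lexical: the target noun and the target predicate
    feats["word=" + words[i]] = 1.0
    if pred_position is not None:
        feats["pred=" + words[pred_position]] = 1.0
    # (2) NounPattern: position of this noun among all identified nouns
    feats["npattern=%d_of_%d" % (target_idx + 1, len(arg_positions))] = 1.0
    # (3) VerbPosition: is the target before or after the predicate?
    if pred_position is not None:
        feats["verbpos=" + ("before" if i < pred_position else "after")] = 1.0
    return feats

# "She always has salad .", arguments She (0) and salad (3), predicate has (2):
# srl_features(["She", "always", "has", "salad", "."], [0, 3], 2, 0)
#   -> {'word=She': 1.0, 'pred=has': 1.0, 'npattern=1_of_2': 1.0, 'verbpos=before': 1.0}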

Page 23

BabySRL Experiments: Test Data

Unlike the previous experiments (evaluated on CHILDES), here we compare to psycholinguistic data.

Evaluate on constructed two-noun sentences with novel verbs: transitive vs. intransitive ("A krads B" vs. "A and B krads"), with A and B filled by known nouns; this tests generalization to an unknown verb.
This reproduces experiments on young children: at 21 months of age, they make the mistake of interpreting A as agent and B as patient in both cases.
We hypothesize that children (at this stage) represent sentences in terms of the number and order of nouns, not the position of the verb.

Page 24

BabySRL Experiments

NounPattern features promote the Agent-Patient (A0-A1) interpretation for both transitive (correct) and intransitive (incorrect) sentences. VerbPos pushes the intransitive case in the other direction. This works with gold training.

[Figure: % A0-A1 predictions on transitive and intransitive test sentences with 10 seed nouns, by parsing algorithm. The classifier learns agent first, patient second.]

It reproduces the error on intransitives: with the noisy representation, the model does not recover even when VerbPos is available.

Page 25

Summary

BabySRL: a realistic computational model for verb meaning acquisition via the structure-mapping theory. Representational issues; unsupervised learning driven by minimal, plausible knowledge sources.

Even with noisy parsing and argument identification, the model is able to learn the abstract rule: agent first, patient second.

The difficulty of identifying predicates harms the usefulness of the superior representation (VerbPos); the model reproduces errors seen in children.

Next steps: use correct high-level semantic feedback to improve earlier identification and parsing decisions (improving the VerbPos feature), and relax the correctness of the semantic feedback.

Thank You

Page 26

How do we acquire language?

Balls

Dog

The Dog kradz the balls

Page 27

BabySRL Experiments

[Figure: transitive vs. intransitive results with 10 seed nouns and with 365 seed nouns.]

Even with VerbPosition available, with a noisy parse the model makes the error on intransitive sentences.