Why Watson Won: A cognitive perspective


DESCRIPTION

In this talk we present how the Watson program, IBM's famous Jeopardy!-playing computer, works (based on papers published by IBM), look at some aspects of potential scoring approaches, examine how Watson compares to several well-known cognitive architectures, and offer some preliminary thoughts on using it in future artificial intelligence and cognitive science work.


Tetherless World Constellation

Why Watson Won: A cognitive perspective

Jim Hendler and Simon Ellis

Tetherless World Professor of Computer, Web and Cognitive Sciences; Director, Rensselaer Institute for Data Exploration and Applications

Rensselaer Polytechnic Institute (RPI), http://www.cs.rpi.edu/~hendler

@jahendler (twitter)


IBM Watson


How’d I get into it? Watson and Semantic Web



Watson and Semantic Web


Is Watson cognitive?

“The computer’s techniques for unraveling Jeopardy! clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson’s case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels ‘sure’ enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.”

— Ken Jennings


Outline

• Is Ken right?
  – How Watson Works
  – Watson as a cognitive architecture??
  – Beyond Watson

Inside Watson

Watson pipeline as published by IBM; see IBM J Res & Dev 56 (3/4), May/July 2012, p. 15:2

Question Analysis

Question analysis

What is the question asking for?

Which terms in the question refer to the answer?

Given any natural language question, how can Watson accurately discover this information?

Who is the president of Rensselaer Polytechnic Institute?

Focus Terms: “Who”, “president of Rensselaer Polytechnic Institute”

Answer Types: Person, President


Parsing and semantic analysis

What information about a previously unseen piece of English text can Watson determine?

How is this information useful?

Natural Language Parsing
- grammatical structure
- parts of speech
- relationships between words
- ...etc.

Semantic Analysis
- meanings of words, phrases, etc.
- synonyms, entailment
- hypernyms, hyponyms
- ...etc.

Question analysis pipeline

Unstructured Question Text
→ Parsing & Semantic Analysis
→ Machine Learning Classifiers
→ Structured Annotations of Question: focus, answer types, useful search queries
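
To make the output of this stage concrete, here is a minimal Python sketch (not IBM's code; the wh-word, answer-type, and stop-word heuristics are simplified assumptions) that turns a question into structured annotations of this kind:

```python
import re

def analyze_question(question):
    """Toy question analysis: extract a focus, guess coarse answer types,
    and build a search query. Real DeepQA uses full parsing, semantic
    analysis, and trained classifiers; these heuristics are placeholders."""
    m = re.match(r"(?i)\s*(who|what|where|when|which|why|how)\b(.*)\?", question)
    wh_word = m.group(1).lower() if m else None
    rest = (m.group(2) if m else question).strip(" ?")
    rest = re.sub(r"(?i)^(is|are|was|were)\s+", "", rest)  # drop the copula

    # Lexical answer types: crude mapping from the wh-word and head nouns.
    type_hints = {"who": ["Person"], "where": ["Location"], "when": ["Date"]}
    answer_types = type_hints.get(wh_word, ["Thing"])
    if wh_word == "who" and "president" in rest.lower():
        answer_types.append("President")

    # The search query is just the content words of the clue here.
    stop = {"is", "the", "of", "a", "an"}
    query = " ".join(w for w in rest.split() if w.lower() not in stop)

    return {"focus": [wh_word, rest], "answer_types": answer_types, "query": query}

print(analyze_question("Who is the president of Rensselaer Polytechnic Institute?"))
# focus: ['who', 'the president of Rensselaer Polytechnic Institute']
# answer_types: ['Person', 'President']
# query: 'president Rensselaer Polytechnic Institute'
```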

Search Result Processing and Candidate Generation

Primary Search

Primary Search is used to generate the corpus of information from which to take candidate answers, passages, supporting evidence, and essentially all textual input to the system

It formulates queries based on the results of Question Analysis

These queries are passed into a (cached) search engine, which returns a set number of highly relevant documents and their ranks; on the open Web this could be a regular search engine (our extension).
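
A minimal sketch of primary search under these assumptions: the search backend is a hypothetical stub with canned results, and the caching is a simple in-memory lru_cache rather than whatever Watson actually used:

```python
import functools

# Hypothetical search backend, stubbed out with canned results; in practice
# this could be an enterprise search engine or, on the open Web, a regular one.
def raw_search(query, hits=10):
    canned = {
        "president Rensselaer Polytechnic Institute": [
            ("Shirley Ann Jackson", 1),
            ("Rensselaer Polytechnic Institute", 2),
        ],
    }
    return canned.get(query, [])[:hits]

@functools.lru_cache(maxsize=1024)          # the "(cached)" part of primary search
def primary_search(query, hits=10):
    """Return (document title, rank) pairs for a query built from question
    analysis; everything downstream (passages, candidates, evidence) is
    drawn from these results."""
    return tuple(raw_search(query, hits))

print(primary_search("president Rensselaer Polytechnic Institute"))
```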

Candidate Generation

Candidate Generation generates a wide net of possible answers for the question from each document.

Using each document, and the passages created by Search Result Processing, we generate candidates using three techniques:
- Title of Document (T.O.D.): adds the title of the document as a candidate.
- Wikipedia Title Candidate Generation: adds any noun phrases within the document’s passage texts that are also the titles of Wikipedia articles.
- Anchor Text Candidate Generation: adds candidates based on the hyperlinks and metadata within the document.
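
A toy rendering of the three generators, assuming a hand-built document record and a tiny stand-in for the set of Wikipedia titles (noun-phrase extraction is reduced to substring matching):

```python
def generate_candidates(doc, wikipedia_titles):
    """Toy candidate generation using the three techniques above. `doc` is a
    dict with 'title', 'passages', and 'anchor_texts'; noun-phrase extraction
    is reduced to matching known Wikipedia titles inside the passages."""
    candidates = set()

    # 1. Title of Document (T.O.D.): the document title itself is a candidate.
    candidates.add(doc["title"])

    # 2. Wikipedia Title Candidate Generation: phrases in the passages that
    #    are also Wikipedia article titles.
    for passage in doc["passages"]:
        for title in wikipedia_titles:
            if title.lower() in passage.lower():
                candidates.add(title)

    # 3. Anchor Text Candidate Generation: hyperlink/metadata text.
    candidates.update(doc.get("anchor_texts", []))
    return candidates

doc = {
    "title": "Rensselaer Polytechnic Institute",
    "passages": ["Shirley Ann Jackson is the President of RPI."],
    "anchor_texts": ["Shirley Ann Jackson", "Troy, New York"],
}
print(generate_candidates(doc, {"Shirley Ann Jackson", "Rensselaer Polytechnic Institute"}))
```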

Search Result Processing and Candidate Generation

Scoring & Ranking

Scoring

Analyzes how well a candidate answer relates to the question

Two basic types of scoring algorithm:
- Context-independent scoring
- Context-dependent scoring

Types of scorers

Context-independent
- Question Analysis
- Ontologies (DBpedia, YAGO, etc.)
- Type hierarchy reasoning

Context-dependent
- Analyzes features of the natural language environment where candidates were found
- Relies on “passages” found during search

Many special purpose ones used in Jeopardy

Scorers

Passage Term Match

Textual Alignment

Skip-Bigram

Each of these scores supporting evidence; the scores are then merged to produce a single candidate score.
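
A sketch of that merging step with fixed, purely illustrative weights; in DeepQA the weights are learned from training questions rather than set by hand:

```python
def merge_scores(evidence, weights):
    """Combine per-scorer evidence for one candidate into a single score.
    In DeepQA the weights are learned from training questions; the numbers
    below are fixed and purely illustrative."""
    return sum(weights[name] * score for name, score in evidence.items())

weights = {"passage_term_match": 0.4, "textual_alignment": 0.35, "skip_bigram": 0.25}
evidence = {"passage_term_match": 0.8, "textual_alignment": 0.6, "skip_bigram": 0.5}
print(round(merge_scores(evidence, weights), 3))  # 0.655
```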

Example: Textual Alignment

Finds an optimal alignment of a question and a passage

Assigns “partial credit” for close matches

Question: “Who is the President of RPI?”
Alignment: Who … President of RPI
Passage: “Shirley Ann Jackson is the President of RPI.”
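
A toy alignment scorer along these lines, using an order-preserving overlap (longest common subsequence) of content words so that partial matches get partial credit; IBM's alignment is considerably richer:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def textual_alignment_score(question, passage):
    """Toy alignment scorer: order-preserving overlap of content words,
    giving partial credit when only some terms line up."""
    stop = {"is", "the", "of", "a", "an", "who", "what"}
    q = [w.strip("?.,").lower() for w in question.split() if w.lower() not in stop]
    p = [w.strip("?.,").lower() for w in passage.split() if w.lower() not in stop]
    return lcs_length(q, p) / len(q) if q else 0.0

print(textual_alignment_score("Who is the President of RPI?",
                              "Shirley Ann Jackson is the President of RPI."))  # 1.0
```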

Skip-Bigram

Constructs a graph
- Nodes represent terms (syntactic objects)
- Edges represent relations

Extracts skip-bigrams
- A skip-bigram is a pair of nodes either directly connected or which have only one intermediate node
- Skip-bigrams represent close relationships between terms

Scores based on number of common skip-bigrams
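
A minimal sketch of skip-bigram extraction and scoring over hand-built term graphs; the edges and the lemmatization of "authored" to "author" below are assumptions, not parser output:

```python
import itertools

def skip_bigrams(edges):
    """Pairs of nodes that are directly connected, or connected through a
    single intermediate node, in an undirected term graph given as edge pairs."""
    nodes = {n for e in edges for n in e}
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    pairs = set()
    for a, b in itertools.combinations(sorted(nodes), 2):
        if b in adj[a] or (adj[a] & adj[b]):
            pairs.add((a, b))
    return pairs

def skip_bigram_score(question_edges, passage_edges):
    """Score a passage by the fraction of the question's skip-bigrams it shares."""
    q, p = skip_bigrams(question_edges), skip_bigrams(passage_edges)
    return len(q & p) / len(q) if q else 0.0

# Hand-built graphs for the example on the next slide: 'Who authored "The Good
# Earth"?' vs. "Pearl Buck, author of the good earth..." (terms lemmatized by hand).
question_graph = [("who", "author"), ("author", "good earth")]
passage_graph = [("pearl buck", "author"), ("author", "good earth")]
print(round(skip_bigram_score(question_graph, passage_graph), 3))  # 0.333
```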

Example

Who authored “The Good Earth”?

“Pearl Buck, author of the good earth…”


Watson Summary

• Watson works by
  – Analyzing the question
    • natural language parsing
    • text extraction
  – Generating a large number of candidates
    • mostly search heuristics
  – Scoring each
    • through multiple scorers
    • with weights adjusted by learning algorithm
  – Returning top candidate
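
Pulling the stages together, here is a minimal sketch of the overall DeepQA-shaped loop; every component is passed in as a plain function, so this shows the shape of the pipeline rather than IBM's implementation:

```python
def answer(question, analyze, search, candidate_generators, scorers, weights):
    """Minimal DeepQA-shaped loop: analyze the question, run primary search,
    cast a wide net of candidates, score every candidate with every scorer,
    merge the scores with (learned) weights, and return the top candidate."""
    analysis = analyze(question)                    # structured annotations
    docs = search(analysis["query"])                # primary search results
    candidates = set()
    for doc in docs:
        for generate in candidate_generators:
            candidates |= generate(doc, analysis)   # candidate generation
    scored = {
        cand: sum(weights[name] * scorer(cand, analysis, docs)
                  for name, scorer in scorers.items())
        for cand in candidates
    }
    return max(scored, key=scored.get) if scored else None
```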

MiniDeepQA (Not Watson!)

RPI students implementing a DeepQA pipeline to explore the principles underlying this kind of Q/A system

(THIS IS NOT WATSON!)
- Pipeline development
- Data caching
- Graphical and command line interfaces
- Parsing
- Scoring

Examples

[Screenshots of MiniDeepQA runs: right answer; right answer?; right answer; right answer??; had to get this one right!]

Scoring

One of DeepQA’s main strengths is aggregating a number of different scoring algorithms capable of running in parallel.

RPI scorers are primitive compared to IBM’s, but they:
- allow us to explore the principles
- allow us to explore different algorithms for computing scores
- allow us to create new ones not tried by IBM
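
To illustrate the parallel-aggregation point above, here is a small sketch; the two scorers are invented toy functions, and threads stand in for whatever parallel machinery a real pipeline would use:

```python
from concurrent.futures import ThreadPoolExecutor

def passage_term_match(candidate, passage):
    """Toy scorer: fraction of the candidate's terms appearing in the passage."""
    terms = candidate.lower().split()
    return sum(t in passage.lower() for t in terms) / len(terms)

def title_case_bonus(candidate, passage):
    """Toy scorer: bonus if the candidate looks like a title-cased name."""
    return 1.0 if candidate.istitle() else 0.0

SCORERS = [passage_term_match, title_case_bonus]

def score_candidate(candidate, passage):
    """Run every scorer for one candidate in parallel and collect the results."""
    with ThreadPoolExecutor() as pool:
        futures = {s.__name__: pool.submit(s, candidate, passage) for s in SCORERS}
        return {name: f.result() for name, f in futures.items()}

print(score_candidate("Shirley Ann Jackson",
                      "Shirley Ann Jackson is the President of RPI."))
# {'passage_term_match': 1.0, 'title_case_bonus': 1.0}
```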

Scoring Principles: combine evidence

He was the Prime Minister of Canada in 1993.
Candidates could include Trudeau, Harper, Campbell, Chretien, Mulroney…

Try (Re-search):
- Trudeau was Prime Minister of Canada in 1993 (doesn’t match)
- Campbell was Prime Minister of Canada in 1993 (MATCH)
- Chretien was Prime Minister of Canada in 1993 (MATCH)

Scoring (Re-search & type match):
- Trudeau: Re-search NO; Type: Yes
- Campbell: Re-search YES; Type: No
- Chretien: Re-search YES; Type: Yes

WHO WAS CHRETIEN?
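
A minimal encoding of this example: the re-search and type (gender) match tables below are just the worked example transcribed by hand, and the equal weights are an assumption:

```python
# Re-search and type-match facts for the worked example above, transcribed by
# hand; in a real system these come from search hits and a type/gender check.
RESEARCH_MATCH = {"Trudeau": False, "Campbell": True, "Chretien": True}
TYPE_MATCH = {"Trudeau": True, "Campbell": False, "Chretien": True}  # clue says "He"

def combine_evidence(candidate, w_research=0.5, w_type=0.5):
    """Weighted vote over evidence sources: only a candidate supported by both
    re-search and type match comes out on top."""
    return w_research * RESEARCH_MATCH[candidate] + w_type * TYPE_MATCH[candidate]

for name in ("Trudeau", "Campbell", "Chretien"):
    print(name, combine_evidence(name))
# Trudeau 0.5, Campbell 0.5, Chretien 1.0  ->  "Who was Chretien?"
```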

New Scoring types

We can explore how new kinds of information can be added to the Watson scoring pipeline.

Example: new NLP extraction techniques
- Adding an ML-based extractor built by Heng Ji

Example: Specialized Web Sources
- Database advisor project

Example: More complex inferencing
- Jeopardy questions are unambiguous, real world questions aren't
  • Where is Montreal?
  • Who is Jim Hendler?

Example: Special purpose reasoning…

Special purpose reasoning

• Can we match (or steer) large-scale simulations to help answer NL questions?

- e.g., answer questions such as “Why” and “How” integrated with large-scale simulations

Alternate Universe Reasoning (Contexts)

How can a Watson reasoner appropriately use Q/A contexts?

Where was Yoda born?
- Very little is known about Yoda's early life. He was from a remote planet, but which one remains a mystery.

Where was Yoda made?
- The Yoda puppet was originally designed and built by Stuart Freeborn for LucasFilm and Industrial Light & Magic.

Where did Yoda live?
- Jedi Master Yoda went into voluntary exile on Dagobah.

Where did Yoda live in The Phantom Menace?
- Yoda lives in the Jedi Temple in Episodes One through Three.


But back to the original question

• Q: How does Watson fare as a cognitive model?

• A: Poorly
  – no conversational ability
  – no concept of self
  – no deeper reasoning
  – …

• Q: How does Watson fare as a model of question answering?


Watson and Q/A

• Watson’s feed-forward pipeline has the following properties
  – lots of candidates generated
    • the more the better
  – “ad hoc” filtering pipelines
    • domain independent usually score lower than domain dependent
  – no “counter-reasoning” between answers
    • separately scored; only comparison is numbers


Production rules, modules, etc

Production Rule style architectures, cf. ACT-R (Anderson 1974; …2012)
- modularization, but not Watson style
- parallelization, but in rule productions (procedural memory)
- declarative memory is fact based

Watson is not well correlated, except for using search for declarative memory.


Network based

Network-based architectures (cf. spreading activation (Collins 75), marker-passing (Hendler 86), … Microsaint 2006)
- positive activations
- inhibitory nodes (or other negative enforcers)

Watson has no negative inhibition, but does use network-based scorers.


MAC/FAC

MAC/FAC (Gentner & Forbus, 1991)
- “Many are called, few are chosen” model of analogical reasoning
- Strong correspondence in performance, not in mechanism
- New work by Forbus (SME) uses a more feed-forward mechanism

(Discussions in progress)


Cognitive Architecture? Watson as “component”

[Diagram: Memory, Reasoning, and Decision Making components; Watson, Cogito, and Clarion]


Summary

• Watson won by a combination of
  – natural language processing
  – search technologies
  – semantic typing (minimal reasoning)
  – scoring heuristics
  – machine learning (scorer tuning)

• Watson Q/A has some interesting analogies to cognitive architectures of the past
  – but mainly at a “level of abstraction”

• Watson as a memory component in a more complex cognitive system is a very intriguing possibility


Questions?
