Lecture 10 – Semantic Networks 1
Two questions about knowledge of the world
1. How are concepts stored?
• We have already considered prototype and other models of concept representation
2. How are relations among concepts stored?
• Our knowledge is not just knowledge of things – it is knowledge of relations among things.
Issues to consider
A. Two things we want our knowledge store to do:
• Help us make decisions
• Help us make inferences
B. Two ways we could store knowledge
• As a list of facts
• In some sort of structure that encodes relations among concepts
A. Two things we want our knowledge store to do
Any model of knowledge representation must explain our ability to make decisions.
Example of decision:
Is a mouse a mammal?
Yes. But how do I know? What is the cognitive operation that lets me get the right answer?
A. Two things we want our knowledge store to do
Any model of knowledge representation must explain our ability to make inferences.
Example of inference:
Does a mouse bear live young?
A mouse is a mammal. Mammals bear live young. Therefore, a mouse bears live young.
B. Two ways we could store knowledge:
1. A list
A cuckoo is a bird.
A tractor is a farm vehicle.
Molasses is a food.
Decisions could take a long time.
No attempt to relate one fact to another.
No explanation of how we make inferences.
Problems with representing knowledge as a list:
i. As the list gets longer, it gets harder to retrieve any given piece of information.
• More to search through; more searching takes more time.
ii. Our knowledge would never be more than the sum of the items in the list.
B. Two ways we could store knowledge
2. In a structure
• E.g., put all the vehicle knowledge in one spot. Do the same with all the food knowledge, and so on.
• Within the vehicle knowledge, have ‘regions’ for family vehicles, farm vehicles, commercial vehicles, and so on.
Making decisions
Structure in our knowledge store should make it easier (faster) to make a decision:
Is a tractor a vehicle?
It’s a farm vehicle. Farm vehicles are vehicles.
Making inferences
Suppose we learn that:
• Tractors have large tires
• Combines have large tires
We can now generalize: farm vehicles have large tires.
• Do hay-balers have large tires? Yes.
Structure in knowledge
We then know that hay-balers have large tires not because we experienced it, but because we deduced it.
But what does the structure that can permit this look like?
• The most widely accepted answer is a network: a semantic network.
Network models of semantic memory
1. Quillian (1968), Collins & Quillian (1969)
• First network model of semantic memory
2. Collins & Loftus (1975)
• Revised network model of semantic memory
3. Neural network models (later in the term)
Quillian’s model
Quillian was a computer scientist. He wanted to build a program that would read and ‘understand’ English text.
• To do this, he had to give the program the knowledge a reader has.
• Constraint: in those days, computers were slow and memory was very expensive.
Basic elements of Quillian’s model
1. Nodes
Nodes represent concepts.
They are ‘placeholders’.
They contain nothing.
2. Links
Connections between nodes.
[Figure: hierarchical network. ‘isa’ links run from Bird and Mammal up to Animal; Animal ‘breathes’ Air; Bird ‘has’ Feathers and Wings and is linked to Fly; Mammal ‘bears’ Live young.]
Things to notice about Quillian’s model
All links were equivalent.
• They are all the same length.
Structure was rigidly hierarchical.
Time to retrieve information was based on the number of links traversed.
Cognitive economy – properties are stored only at the highest possible level (e.g., ‘has wings’ is stored once at Bird, not at every kind of bird).
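The mechanics above can be sketched in code. This is a minimal illustrative sketch, not Quillian's actual program: the concepts and properties follow the lecture's examples, but the dictionary layout and function names are assumptions of mine.

```python
# Toy Quillian-style network: each node has one 'isa' link upward and a
# set of properties stored at that node only (cognitive economy).
NETWORK = {
    "animal": {"isa": None,     "props": {"breathes": "air"}},
    "bird":   {"isa": "animal", "props": {"has": "feathers, wings"}},
    "mammal": {"isa": "animal", "props": {"bears": "live young"}},
    "robin":  {"isa": "bird",   "props": {}},
    "mouse":  {"isa": "mammal", "props": {}},
}

def is_a(concept, category):
    """Decision: climb 'isa' links and see if we reach the category."""
    while concept is not None:
        if concept == category:
            return True
        concept = NETWORK[concept]["isa"]
    return False

def lookup(concept, prop):
    """Inference: climb 'isa' links until the property is found.
    Also count links traversed, since retrieval time in the model
    depends on the number of links."""
    links = 0
    while concept is not None:
        props = NETWORK[concept]["props"]
        if prop in props:
            return props[prop], links
        concept = NETWORK[concept]["isa"]
        links += 1
    return None, links

print(is_a("mouse", "mammal"))     # True: decision via one 'isa' link
print(lookup("mouse", "bears"))    # ('live young', 1): inference, never directly stored
```

The link counter mirrors the model's prediction that facts stored farther up the hierarchy take longer to verify.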
Problems with Quillian’s model
1. How to explain typicality effect?
• Is a robin a bird? Is a chicken?
• Easier to say ‘yes’ to robin. Why?
2. How to explain that it is easier to report that a bear is an animal than that a bear is a mammal?
3. Cognitive economy – do we learn by erasing links?
What’s new in Collins & Loftus (1975)
A. Structure
• responded to data accumulated since original Collins & Quillian (1969) paper
• got rid of hierarchy
• got rid of cognitive economy
• allowed links to vary in length (not all equal)
[Figure: non-hierarchical network with nodes for animal, mammal, bird, robin, ostrich, bat, cow, feathers, wings, fly, and skin; links vary in length, so closely related concepts sit closer together.]
What’s new in Collins & Loftus (1975)?
B. Process – Spreading Activation
• Activation – arousal level of a node
• Spreading – down links
• Mechanism used to extract information from network
• Allowed neat explanation of a very important empirical effect: Priming
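Spreading activation can be sketched as a toy simulation. The word set, link weights, and decay factor below are illustrative assumptions of mine, not values from Collins & Loftus; the point is only the mechanism: activation injected at one node spreads down links and weakens with distance.

```python
# Toy associative network; weights (link strengths) are made-up values.
LINKS = {
    "bread":  {"butter": 0.8, "food": 0.5},
    "butter": {"bread": 0.8, "food": 0.5},
    "food":   {"bread": 0.5, "butter": 0.5},
    "nurse":  {"doctor": 0.8},
    "doctor": {"nurse": 0.8},
}

def spread(source, steps=2, decay=0.5):
    """Inject activation at 'source' and let it spread down links,
    decaying at each step; return every node's activation level."""
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for neighbour, weight in LINKS.get(node, {}).items():
                nxt[neighbour] = nxt.get(neighbour, 0.0) + act * weight * decay
        for node, act in nxt.items():
            activation[node] = activation.get(node, 0.0) + act
        frontier = nxt
    return activation

# Reading "bread" pre-activates "butter" but leaves "nurse" untouched,
# which is the network's explanation of the priming effect.
act = spread("bread")
print(act.get("butter", 0.0) > act.get("nurse", 0.0))  # True
```

Because ‘butter’ starts a trial already partly activated, it needs less additional input to reach threshold, so the response to it is faster.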
Priming
• An effect on response to one stimulus (target) produced by processing another stimulus (prime) immediately before.
• If prime is related to target (e.g., bread-butter), reading prime improves response to target.
• RT (unrelated) – RT (related) = priming effect.
• Priming is sometimes measured on accuracy instead of RT.
Priming
          Related   Unrelated   Task
Prime:    bread     nurse       read only
Target:   BUTTER    BUTTER      read, respond
• Difference in RT to two types of trials = priming effect.
• Responses in related condition are faster.
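With made-up numbers, the measure works out like this (the millisecond values are hypothetical, chosen only to illustrate the subtraction):

```python
# Hypothetical mean reaction times (ms) to the target BUTTER;
# the numbers are invented for illustration only.
rt_related = 520    # BUTTER preceded by the related prime "bread"
rt_unrelated = 575  # BUTTER preceded by the unrelated prime "nurse"

priming_effect = rt_unrelated - rt_related
print(priming_effect)  # 55: a positive value means the related prime helped
```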
Why is the Priming effect important?
The priming effect is an important observation that models of semantic memory must account for.
Any model of semantic memory must be the kind of thing that could produce a priming effect.
A network through which activation spreads is such a model. (Score one point for networks.)
Review
Knowledge has structure
Our representation of that structure makes new knowledge available (things not experienced)
The most popular models are network models, containing links and nodes.
Nodes are empty. They are just placeholders.
Review
Knowledge is stored in the structure – the pattern of links, and the lengths of the links.
The pattern of links and the lengths of links reflect experience (learning).
Network models provide a handy explanation of priming effects.
Note: modern neural network models are different in some respects.