
Perceptrons (a history of Neural Network research)

Vicente Malave
Department of Cognitive Science

University of California, San Diego

Peter Norvig (Google, 2010)

Both videos describe the same method. What happened?

What can you do with a neural network?

What can you learn with a neural network?

Zeitgeist

Information Processing Systems

... basically, can you build a machine that can actually do the task?

We'll analyze the behavior of these abstract machines, not caring if they run on a digital, analog, or meat-based computer.

Pattern Recognition

Your brain is so good it's hard to realize how difficult this is.

(Thorpe, 1996)

Character Recognition

So hard, your bank designed this font to avoid doing it.

Linear Threshold Units

A neuron basically adds up the inputs, so we'll build a machine that does that.
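In code, the whole unit is a few lines (a minimal sketch; the weights and threshold below are made up for illustration):

```python
# A linear threshold unit: add up the weighted inputs, fire if the total
# clears a threshold. The weights and threshold here are made-up examples.

def linear_threshold_unit(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With two inputs, this draws a line: inputs on one side fire, the rest don't.
print(linear_threshold_unit([1.0, 0.5], weights=[0.6, 0.4], threshold=0.7))  # fires (1)
print(linear_threshold_unit([0.2, 0.1], weights=[0.6, 0.4], threshold=0.7))  # doesn't (0)
```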

What can this do?

With only one input, you have a threshold: how much input you need for the unit to fire.

What can this do?

With two inputs, we're drawing a line between the categories.

Now you have (thousands, millions) of connections to set.

That sounds hard.

So, we don't do that. The machine will program itself.

Learning

Error-Correction Procedure

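In outline, the procedure is just this (a sketch of the standard perceptron update; the OR data and epoch count here are illustrative, not from the slides):

```python
# The error-correction procedure in miniature: when the unit gets an example
# wrong, nudge the weights toward (or away from) that example.

def train_perceptron(examples, n_inputs, epochs=20):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for x, target in examples:                      # target is 0 or 1
            fired = sum(w * xi for w, xi in zip(weights, x)) + bias >= 0
            error = target - (1 if fired else 0)        # +1, 0, or -1
            weights = [w + error * xi for w, xi in zip(weights, x)]
            bias += error
    return weights, bias

# Logical OR is linearly separable, so the procedure settles on a working line.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_perceptron(data, n_inputs=2)
print(weights, bias)
```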

Mark I Perceptron

It works!

Does it happen every time?

Yes*.

Perceptron Convergence Theorem (1962, various)

* If there is a line that perfectly separates the points, the perceptron will find one in a finite number of steps.
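A common way to make the asterisk precise (a sketch of the standard mistake bound; the symbols R and γ are assumptions introduced here, not the slide's notation):

```latex
% A standard form of the guarantee (Novikoff-style mistake bound).
% Assume every input satisfies \(\|x_i\| \le R\) and some unit-length weight
% vector \(w^\ast\) separates the data with margin \(\gamma > 0\):
\[
  y_i \,(w^\ast \cdot x_i) \ge \gamma \quad \text{for all } i .
\]
% Then the number of weight updates (mistakes) is at most
\[
  \left(\frac{R}{\gamma}\right)^{2}.
\]
```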

Perceptrons (1969)

Marvin Minsky

Confocal Microscope

... Computer Science too

Allen Newell wrote a book on designing computer processors (hardware)

At MIT they decided to build their own computers!

What can't you do with a line?

Exclusive Or

You need a few units to do this with linear thresholds.
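Concretely, here is one way three threshold units can do it (a sketch, not from the slides; the weights are just one hand-picked choice that works):

```python
# XOR out of three linear threshold units: no single unit can draw the
# needed boundary, but two "hidden" units plus one output unit can.

def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(a, b):
    h1 = unit([a, b], [1, 1], threshold=1)        # fires on "a OR b"
    h2 = unit([a, b], [1, 1], threshold=2)        # fires on "a AND b"
    return unit([h1, h2], [1, -2], threshold=1)   # OR, but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))    # prints 0, 1, 1, 0 for the four input pairs
```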

[figure: a scale of ages, marked at 10 and 80]

What can this do?

You could have a perceptron for people over 80.

What can this do?

Or under 10.

Exclusive Or

But not both (with a single unit).


In two dimensions,

The error-correction procedure will circle forever.

Not just XOR

Minsky and Papert proved that Perceptrons* can't learn:
● Connectedness (the book cover)
● Parity (odd or even number of inputs)

* (with restrictions, like the number or width of connections)

Complexity : can you actually build this machine?

Perceptrons (1969)

Hidden units.

... a long winter

And then,

David Rumelhart

Hidden units.

(Nagy, 1968)

What can this do?

The perceptron has a hard boundary.

If you smooth it, you can take derivatives.
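One common choice for the smoothed unit (an assumption here, not necessarily the exact function used on the slides) is the logistic sigmoid, whose derivative is easy to compute:

```latex
\[
  \sigma(z) = \frac{1}{1 + e^{-z}},
  \qquad
  \sigma'(z) = \sigma(z)\,\bigl(1 - \sigma(z)\bigr).
\]
```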

Backpropagation

Error-correction, but now the hidden units can figure out how much they are helping (or not).

Chain Rule.

Now you can learn hidden units.
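A minimal sketch of what that looks like (network size, learning rate, and the XOR data are illustrative choices, not the slides'):

```python
import math, random

# One hidden layer trained with backpropagation (the chain rule).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
H = 3                                            # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR

for _ in range(5000):
    for x, t in data:
        # forward pass
        h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
        y = sigmoid(sum(W2[j] * h[j] for j in range(H)) + b2)
        # backward pass: the chain rule assigns each weight its share of the error
        dy = (y - t) * y * (1 - y)
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            W2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh[j]
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
        b2 -= lr * dy

# Outputs are usually close to 0, 1, 1, 0 -- though, as the wrap-up notes,
# gradient learning can occasionally get stuck in a local minimum.
for x, t in data:
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    print(x, t, round(sigmoid(sum(W2[j] * h[j] for j in range(H)) + b2), 2))
```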

1986: The year neural networks broke.

... it keeps working.

What can you do with a neural network?

What can you learn with a neural network?

1987: Department of Cognitive Science

Cognitive Science looks very different when you know that

learning is possible.

http://mplab.ucsd.edu/wordpress/projects/bev1/Banner3b.png

(Butko et al., 2006)

What became of our old friend, the Perceptron?

Maybe we don't need to be so clever, and we can just have a fixed hidden layer.

Vapnik

2006: don't call it a comeback.

What about a random hidden layer?
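One way to read that question as code (a sketch under assumptions: the layer size and data are made up, and it relies on the random features usually being rich enough):

```python
import random

# A fixed, random hidden layer of threshold units; only the output weights
# are trained, using the same error-correction rule as before.

random.seed(1)
N_HIDDEN = 20
hidden_w = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(N_HIDDEN)]
hidden_b = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]

def hidden_features(x):
    # random threshold units -- never updated
    return [1 if sum(w * xi for w, xi in zip(ws, x)) + b >= 0 else 0
            for ws, b in zip(hidden_w, hidden_b)]

# Train an ordinary perceptron on top of the random features (XOR again).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
out_w = [0.0] * N_HIDDEN
out_b = 0.0
for _ in range(100):
    for x, t in data:
        h = hidden_features(x)
        y = 1 if sum(w * hi for w, hi in zip(out_w, h)) + out_b >= 0 else 0
        err = t - y
        out_w = [w + err * hi for w, hi in zip(out_w, h)]
        out_b += err

# With enough random units, the features are usually rich enough that the
# simple perceptron rule can separate XOR, even though no single unit could.
```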

Wrapup

● You can learn things with neural networks, within limits (XOR, local minima).
● You can get really far if you push hard on a simple representation.
● Learning is possible for more things than you might think.
● We have a theory.

This is what those math classes are for.