
Introduction* (Symbolic) A.I.

Artificial Intelligence

If we can “make”/design intelligence, we can:

1). Build incredibly powerful technology

2). Understand intelligence

Practical

Scientific

Aims of A.I.:

• Igor Aleksander & Piers Burnett (1987): “Thinking machines: the search for artificial intelligence”. Oxford University Press, Oxford.

PROBLEM: How do we know that we designed something “intelligent”?

Definition-problem

But how do you know that

“something is understood”

by someone other than yourself?

Intelligence: something to do with “understanding”

Performance of “intelligent behaviour”

What about a machine that behaves AS IF it is intelligent?

Critical reply: That “intelligence” reflects the design of its creator (machines: the engineer; animals: God or Genes)

• Refute: then ants have a mind: they understand the situation and consciously solve problems
• Accept: animals are dumb machines that just follow genetically programmed instructions

Where do we draw the line?

Nest building in birds; beavers making a dam; us building a house?

IF the behaviour of machines/animals has nothing to do with intelligence, how then should a truly intelligent “entity” understand?

In the same way as we do

But how do we understand? Is our intelligence a sufficient basis for understanding intelligence?

Is the brain capable of providing an explanation for itself?

“Intelligent Behaviour”

Problem Solving

Is a thermostat intelligent? It “solves” the “problem” of temperature regulation

but does it have a “mind”?
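As a minimal sketch (hypothetical, not from the lecture), the thermostat’s “solution” is nothing more than a bare feedback rule; no knowledge or reasoning is involved:

```python
# Hypothetical sketch: a bang-bang thermostat as a bare feedback rule.
# It "solves" temperature regulation without knowledge or reasoning.

def thermostat_step(current_temp: float, setpoint: float, heater_on: bool,
                    hysteresis: float = 0.5) -> bool:
    """Return the new heater state for one control step."""
    if current_temp < setpoint - hysteresis:
        return True           # too cold: switch the heater on
    if current_temp > setpoint + hysteresis:
        return False          # too warm: switch the heater off
    return heater_on          # within the dead band: keep the current state

print(thermostat_step(18.0, 20.0, heater_on=False))   # True
```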

Can machines (animals) do this?

Does it use Knowledge

and Reasoning

The relation between “Mind”, “Knowledge” and “Reasoning” goes back to Greek philosophy

In itself insufficient. Psychology:

knowledge-independent tests (“G”: IQ)

“That which can be thought is identical to that which is”

Illusory! A stick put partly under water looks broken, but isn’t

The only science is about that which IS (“ontology”)

“Truth” is that which always IS

Parmenides of Elea (5th century BC)

Power of Reason as seat of Knowledge, instead of sensory perception

Unchangeable
Being instead of Becoming
Static instead of Dynamic

Knowledge is beyond direct physical experience:

META-PHYSICS

The universe is ordered following laws of reason

The human mind discovers that physical experience is insufficient to explain “reality”

After “naïve realism”: DOUBT

Metaphysics: thinking about “being” beyond perception

Two observers “see” one and the same oak in different ways. However, both agree that what they see is an oak

The “objective Oak-in-itself”

The oak we see is an instantiation of the “object oak”, which in turn belongs to the “class” of “trees”

Plato (427-347 BC).

What we see are imperfect projections of ideal intelligible objects. An individual tree as we perceive it is non-generic and cannot be defined, but the ideal “tree” can!

How to study the world of ideas? When reason is the principled way of knowing (metaphysics), then we should study the rules of reasoning

perfect, unchangeable

Aristotle (384-322 BC)

Formal Logic: a tool that results in knowledge about that which is

Later: decoupled from Platonic idealism

Parmenides: Don’t believe your eyes, but: What one thinks, is (one cannot think about something that is not)

First truth: It is

Things can be known only when they are

Descartes: Starts from the subject instead of the object

What is undeniable in thinking is: I am

(one cannot think when one is not)

Cogito ergo sum

To find truth: whatever can be doubted should be rejected. What remains: something that doubts (me)

Still meta-physics! But the focus is on epistemology instead of ontology

The correct way to obtain knowledge (by reasoning = ratio)

Reasoning is beyond perception: Mind-Body Dualism

Because of doubt, Descartes does not accept the obviousness of his own senses

What did Descartes think about behaviour ?

1) If automatons had the shape of animals*, we should have no means of knowing that they did not possess the same nature as animals

2) If automatons perfectly imitated the actions of animals*, we would be in no doubt that animals were automatons too

* that lack reason

Behaviour can be understood mechanistically

Can machines be intelligent ?

Turing (1950)

Intelligence can be understood as computation

If a computer perfectly imitated the answers of humans, we should have no means of knowing that it did not possess the same intelligence as humans

BUT: When a computer “does” something in the way we do it, does it also understand what it is doing in the same way as we do?

Daniel Dennett: If a computer behaves as if it tries to win a game of chess, it is meaningless to ask whether it really wants to win

“Intentional Stance”

John Searle (1980, 1987): The Chinese Room

Give an English speaker a Chinese story plus detailed instructions in English on how to manipulate the characters, and she will provide answers about the story in Chinese characters (even though she doesn’t understand a WORD of it!!)
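A toy sketch of such pure rule-following (hypothetical, not from Searle): the “rule book” is just a lookup table from question symbols to answer symbols, so correct answers come out without any understanding of what the symbols mean:

```python
# Hypothetical toy "Chinese Room": the rule book is a lookup table from
# question symbols to answer symbols. Whoever applies it needs no
# understanding of what any symbol means.

RULE_BOOK = {
    "故事里有谁？": "一位老人。",   # "Who is in the story?" -> "An old man."
    "他在做什么？": "他在钓鱼。",   # "What is he doing?"   -> "He is fishing."
}

def answer(question: str) -> str:
    """Follow the rules mechanically; no interpretation takes place."""
    return RULE_BOOK.get(question, "我不知道。")   # default: "I don't know."

print(answer("故事里有谁？"))   # 一位老人。
```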

Intentionality: “Knowing what it is about”

Allows Empathy: words recall visions and feelings

How to describe unknown, newly encountered things without referring to known objects?

Intentionality is based on the ability to build internal representations

Sensory Perception

Do Machines need Complicated Sensors to build Internal Representations?

A.I.: NO. Pre-processed versions of real-world manifestations suffice

Just tell the machine what it needs to know to carry out its task

We can plan a trip (to an unknown area) by just using a map

A “mental map” suffices
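As an illustration (a hypothetical sketch, not part of the lecture): the “map” handed to a machine as symbols, and a plain breadth-first search that plans a route over it without any sensing of the real world:

```python
from collections import deque

# Hypothetical symbolic "map": place names and the roads between them.
# The machine is simply told what it needs to know about the world.
MAP = {
    "home":    ["station", "market"],
    "station": ["home", "museum"],
    "market":  ["home", "museum"],
    "museum":  ["station", "market", "park"],
    "park":    ["museum"],
}

def plan_trip(start: str, goal: str) -> list[str]:
    """Breadth-first search over the symbolic map; returns a route."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in MAP[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return []   # no route found

print(plan_trip("home", "park"))   # ['home', 'station', 'museum', 'park']
```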

SYMBOLS & SYMBOL PROCESSING

… but then you need the ability to interpret symbols

AND: only in a very limited number of cases can you “pre-pack reality”

in “Models”

and use these to execute meaningful behaviour

f.i. mathematical equations

Relationships between symbols to represent (in)equalities, functions, etcetera

[Figure: distance s(t) plotted against time t as a straight line through s = 0, with s_t and s_{t+Δt} marked at t and t + Δt.]

Change in distance: $\Delta s = s_{t+\Delta t} - s_t$

Change per unit of time: $\dfrac{\Delta s}{\Delta t} = \tan\alpha = \text{constant}$

For very small $\Delta t$ ($\Delta t \to 0$, i.e. $dt$):

$$\frac{ds(t)}{dt} = \lim_{\Delta t \to 0} \frac{\Delta s}{\Delta t} = s'(t) = \text{constant} = \text{velocity}$$

Straight-line equation for the graph above: $s(t) = v\,t$

[Figure: velocity v(t) plotted against time t as a straight line starting at v_0.]

$$v(t) = a\,t + v_0 \qquad (2)$$

Similarly to $s(t)$:

$$\frac{dv(t)}{dt} = \lim_{\Delta t \to 0} \frac{v_{t+\Delta t} - v_t}{\Delta t} = v'(t) = \text{constant} = \text{acceleration}$$

$s(t)$ is the surface under the curve: $s(t) = \int v(t)\,dt$

Surface under the triangular part = height $\times$ ½ basis $= [v(t) - v_0]\cdot\tfrac{1}{2}t = [(a\,t + v_0) - v_0]\cdot\tfrac{1}{2}t = \tfrac{1}{2}a\,t^2$

PLUS the surface of the rectangular part $= v_0\,t$

$$s(t) = v_0\,t + \tfrac{1}{2}a\,t^2$$


From classical (Newtonian) Kinematics
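As a concrete illustration of such symbol processing (a hypothetical sketch using the SymPy library, not part of the original lecture), the relations above can be manipulated purely as symbols:

```python
import sympy as sp

# Hypothetical sketch: the kinematics above done by pure symbol manipulation.
t, a, v0 = sp.symbols('t a v0')

s = v0 * t + sp.Rational(1, 2) * a * t**2   # s(t) = v0*t + (1/2)*a*t^2
v = sp.diff(s, t)                           # v(t) = ds/dt
acc = sp.diff(v, t)                         # dv/dt

print(v)     # a*t + v0
print(acc)   # a  (constant acceleration)

# Going the other way: s(t) is the area under v(t)
# (indefinite integral, constant of integration omitted).
print(sp.integrate(v, t))                   # recovers a*t**2/2 + t*v0
```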

ARE WE REALLY DOING THIS IN OUR HEAD WHEN ACCELERATING OUR CAR?

Even a mathematician doesn’t solve equations when playing tennis

When a child catches a ball it is NOT solving equations

It learns to do this by:
• better muscle control
• improved motor abilities
• experience

Induction instead of Deduction
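A toy contrast (hypothetical sketch, not from the lecture): instead of deducing the trajectory from the kinematic equations above, a “catcher” inductively nudges a single guess from repeated experience:

```python
import random

# Hypothetical sketch of induction: no equations are solved; a single guess
# (where the ball will land) is nudged after every missed catch.

TRUE_LANDING_SPOT = 7.3   # unknown to the learner

def practice(trials: int = 200, step: float = 0.1) -> float:
    guess = 0.0
    for _ in range(trials):
        error = TRUE_LANDING_SPOT - guess + random.gauss(0, 0.5)   # noisy feedback
        guess += step * error                                      # adjust from experience
    return guess                       # no v0, a or t is ever used

print(round(practice(), 1))   # settles near 7.3 after enough practice
```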

Now try a computer (or a robot running on software) getting this done …

“Problem Solving”? But NOT by Computation!