ROBOTICS Dr. Tom Froese


Page 1:

ROBOTICS
Dr. Tom Froese

Page 2:

“Why does the burnt kitten avoid the fire?”
Ashby (1960)

Page 3:

The Ultrastable System (Homeostat)
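Ashby’s answer is ultrastability: when an “essential variable” (pain, temperature) is pushed outside its viable bounds, a “step mechanism” makes random changes to the system’s parameters until a configuration is found that keeps the variable within bounds again. The burnt kitten adapts not by reasoning about fire, but because its parameters keep changing until the pain stops. A minimal sketch of that loop in Python; the dynamics and constants are invented for illustration, this is not Ashby’s electromechanical homeostat:

```python
import random

# Sketch of Ashby-style ultrastability (illustrative dynamics, not Ashby's
# hardware). An essential variable x is driven by noise and by a feedback
# gain w. Whenever x leaves its viable bounds, w is re-drawn at random
# (the "step mechanism") until a stabilizing value happens to be found.

def ultrastable_run(steps=2000, bound=1.0, seed=1):
    rng = random.Random(seed)
    x = 0.5                          # essential variable (e.g. pain level)
    w = rng.uniform(-2.0, 2.0)       # feedback gain, changed only by resets
    resets = 0
    for _ in range(steps):
        x += 0.1 * w * x + 0.05 * rng.uniform(-1.0, 1.0)
        if abs(x) > bound:           # out of bounds: trigger the step mechanism
            w = rng.uniform(-2.0, 2.0)
            x = max(-bound, min(bound, x))
            resets += 1
    return w, resets

w, resets = ultrastable_run()
print(f"settled on gain w = {w:.2f} after {resets} random resets")
# A negative w damps x back toward zero, so once one is drawn the resets
# stop: viable behavior is found by selection, not by explicit learning.
```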

Page 4:

Adaptation to inverting goggles

• Erismann (1930s)
• Kohler (1950s and 60s)

• They demonstrated the plasticity of perceptual systems by perturbing them with inverting goggles.

• The inverted senses would adapt part by part over time, and the perceived world would return to its “normal” state.

Page 5:

The rise of computer science

Page 6:

“Sense-think-act” cycle

LabVIEW Robotics (2014)
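In this classical decomposition the robot is one sequential pipeline: read the sensors into an internal model, deliberate over the model, execute the chosen action, repeat. A minimal sketch of the control flow in Python; the function names and sensor fields are hypothetical, not any real robot API:

```python
# Sketch of the classical "sense-think-act" loop (hypothetical names).

def sense():
    """Read all sensors into a symbolic world model."""
    return {"obstacle_ahead": False, "battery_low": False}

def think(world_model):
    """Deliberate: derive an action from the internal model."""
    if world_model["battery_low"]:
        return "return_to_dock"
    if world_model["obstacle_ahead"]:
        return "turn_left"
    return "go_forward"

def act(action):
    """Send the chosen command to the actuators."""
    print(f"executing: {action}")

# Everything happens in strict sequence; the robot is blind while it
# "thinks", which is the real-time weakness Pfeifer (1996) points at.
for _ in range(3):
    act(think(sense()))
```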

Page 7:

Challenges for symbolic AI
• Robustness
  • Lack of noise and fault tolerance; lack of generalizability
  • If a situation arises which has not been predefined, a traditional symbol-processing model will break down
• Integrated learning
  • Learning mechanisms are ad hoc and imposed on top of non-learning systems
• Real-time performance
• Sequential processing
  • Programs are sequential and work on a step-by-step basis

Pfeifer (1996)

Page 8:

The Frame Problem

Pfeifer (1996)

Page 9:

The Frame Problem
• The robot R1 has been told that its battery is in a room with a bomb, and that it must move the battery out of the room before the bomb goes off.
• Both the battery and the bomb are on a wagon. R1 knows that pulling the wagon out of the room will remove the battery from the room.
• It does so, and once it is outside, the bomb goes off.
• Poor R1 had not realized that pulling the wagon would bring the bomb out along with the battery.
• Dennett (1984)

Page 10:

The Frame Problem
• The designers realized that the robot would have to be made to recognize not just the intended implications of its acts but also their side effects, by deducing these implications from the descriptions it uses in formulating its plans.
• They called their next model the robot-deducer, or R1D1 for short, and ran the same experiment. R1D1 started considering the implications of pulling the wagon out of the room.
• It had just finished deducing that pulling the wagon out of the room would not change the color of the room’s walls when the bomb went off.

Page 11:

The Frame Problem
• The problem was obvious: the robot must be taught the difference between relevant and irrelevant implications.
• The next model, R2D1, the robot-relevant-deducer, was tested in turn. The designers saw R2D1 sitting outside the room containing the ticking bomb. “Do something!” they yelled at it.
• “I am”, it retorted. “I am busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and ...”
• The bomb went off.

Page 12:

The Symbol Grounding Problem
• “once we remove the human interpreter from the loop, as in the case of autonomous agents, we have to take into account that the system needs to interact with the environment on its own.
• Thus, if there are symbols in the system, their meaning must be grounded in the system’s own experience in the interaction with the real world.
• Symbol systems in which symbols only refer to other symbols are not grounded because the connection to the outside world is missing. The symbols only have meaning to a designer or a user, not to the system itself.”
• Pfeifer (1996); see also Harnad (1990)

Page 13:

Searle’s (1980) “Chinese room” argument

Page 14:

Braitenberg’s (1984) “Vehicles”
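Braitenberg’s point is that behavior which looks purposive (“fear”, “aggression”) can fall out of direct sensor-motor wiring, with no internal model at all. A sketch of Vehicle 2 in Python, assuming an idealized point light source and differential-drive kinematics; all constants are illustrative:

```python
import math

# Sketch of Braitenberg's Vehicle 2: two light sensors, two motors, and
# nothing else. Ipsilateral wiring (2a) turns the vehicle away from the
# light ("fear"); crossed wiring (2b) turns it toward the light
# ("aggression"). Kinematics and constants are illustrative.

LIGHT = (5.0, 5.0)  # position of the light source

def intensity(x, y):
    """Light intensity falling off with squared distance."""
    return 1.0 / (1.0 + (x - LIGHT[0]) ** 2 + (y - LIGHT[1]) ** 2)

def step(x, y, heading, crossed, dt=0.2):
    # Two sensors mounted at +/- ~30 degrees on the front of the body.
    sl = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    sr = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    ml, mr = (sr, sl) if crossed else (sl, sr)   # the wiring IS the behavior
    speed = 5.0 * (ml + mr)
    heading += 10.0 * (mr - ml) * dt             # differential-drive turning
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

x, y, h = 0.0, 0.0, 0.0
start = math.hypot(x - LIGHT[0], y - LIGHT[1])
for _ in range(400):
    x, y, h = step(x, y, h, crossed=True)        # "aggression" variant
print(f"distance to light: {start:.2f} -> "
      f"{math.hypot(x - LIGHT[0], y - LIGHT[1]):.2f}")
```

Flipping `crossed=False` gives the “fear” vehicle, which steers away from the same light using the same two wires.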

Page 15:

Brooks’ (1991) “Creatures”

Herbert

Genghis

Allen

“The key observation is that the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough.”

Brooks (1990)

Page 16:

Brooks’ (1991) “Creatures”

Page 17:

Behavior-based robotics

LabVIEW Robotics (2014)

Brooks’ “subsumption architecture”
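In the subsumption architecture, control is decomposed into layers of simple, complete behaviors instead of a single sense-think-act pipeline; a higher layer subsumes (overrides) the layers below it whenever it has something to say. A minimal sketch of that priority scheme in Python, with hypothetical sensors and actions, not Brooks’ original code:

```python
# Sketch of a subsumption-style controller (illustrative, not Brooks'
# implementation). Each layer maps sensors directly to an action or
# defers; there is no shared world model and no planner.

def avoid_layer(sensors):
    """Level 1: reflexively turn away from nearby obstacles."""
    if sensors["sonar_cm"] < 30:
        return "turn_right"
    return None                       # nothing to say: defer downward

def wander_layer(sensors):
    """Level 0: the default behavior, always has an opinion."""
    return "go_forward"

LAYERS = [avoid_layer, wander_layer]  # highest priority first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:        # a higher layer subsumes the rest
            return action

print(control({"sonar_cm": 120}))  # -> go_forward (wander is in charge)
print(control({"sonar_cm": 12}))   # -> turn_right (avoid subsumes wander)
```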

Page 18:

Brooks’ creatures and creatures-no-more

Cog

Baxter

Roomba

PackBot

Page 19:

Pfeifer’s (1996b) “Fungus Eaters”

Page 20:

Pfeifer’s (1996b) “Fungus Eaters”

Page 21:

Pfeifer’s (1996b) design principles

Page 22:

Humanoid robot walking

Page 23:

Asimo takes a nasty fall down the stairs

Page 24:

Passive dynamic walking

• Collins et al. (2001) built a passive dynamic walking robot based on the ideas of McGeer.

• It was built from metal rods, springs, and weights.

• The robot could walk down a plank without power, sensors, or a control system (a minimal model of why this works is sketched below).

• The robot could also walk efficiently on a flat surface when given a small push.

• McGeer had previously noticed that adding knees made passive walking more stable for bipedal machines.
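The simplest model of why this works is McGeer’s rimless wheel: spokes at half-angle α rolling down a slope γ. Each spoke impact scales the angular velocity by cos(2α), and the drop down the slope replenishes the lost energy, so the step-to-step map has a stable fixed point: a gait that is an attractor of the bare mechanics. A sketch of that return map in Python; the map is the standard rimless-wheel result, the constants are illustrative:

```python
import math

# Step-to-step return map for the rimless wheel, the simplest model of
# passive dynamic walking. Spokes of length L at half-angle ALPHA roll
# down a slope GAMMA: each impact scales angular velocity by cos(2*ALPHA),
# and between impacts the hub falls through 2*L*sin(ALPHA)*sin(GAMMA).

G, L = 9.81, 1.0           # gravity (m/s^2), spoke length (m)
ALPHA = math.radians(15)   # half-angle between spokes
GAMMA = math.radians(4)    # slope of the ramp

def next_omega_sq(w_sq):
    """Map the squared angular velocity before one impact to the next."""
    loss = math.cos(2 * ALPHA) ** 2                         # impact loss
    gain = 4 * (G / L) * math.sin(ALPHA) * math.sin(GAMMA)  # slope energy
    return loss * w_sq + gain

w_sq = 4.0                 # any sufficiently fast initial push
for step in range(10):
    w_sq = next_omega_sq(w_sq)
    print(f"step {step}: omega = {math.sqrt(w_sq):.3f} rad/s")

# The iteration converges to the same steady omega regardless of the
# initial push: the walking cycle is a stable limit cycle of the pure
# mechanics, with no sensor, motor, or controller anywhere in the loop.
```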

Page 25:

Passive dynamic walking

Collins et al. (2001)

Page 26:

New cognitive science (4E)

Page 27:

References
• Ashby, W. R. (1960). Design for a Brain: The Origin of Adaptive Behaviour (2nd ed.). London: Chapman & Hall
• Braitenberg, V. (1984). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press
• Brooks, R. A. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6, 3-15
• Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47(1-3), 139-159
• Collins, S. H., Wisse, M., & Ruina, A. (2001). A three-dimensional passive-dynamic walking robot with two legs and knees. International Journal of Robotics Research, 20(7), 607-615
• Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In C. Hookway (Ed.), Minds, Machines and Evolution: Philosophical Studies (pp. 129-152). Cambridge: Cambridge University Press
• Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42, 335-346

Page 28:

References
• Pfeifer, R. (1996). Symbols, patterns, and behaviour: Towards a new understanding of intelligence. In Proceedings of the 10th Annual Conference of the Japanese Society for Artificial Intelligence (pp. 1-15). Tokyo: JSAI
• Pfeifer, R. (1996b). Building “fungus eaters”: Design principles of autonomous agents. In P. Maes, M. J. Matarić, J.-A. Meyer, J. Pollack & S. W. Wilson (Eds.), From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (pp. 3-12). Cambridge, MA: MIT Press
• Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424

Page 29:

Homework
Please read the whole article if possible:

• Di Paolo, E. A. (2015). El enactivismo y la naturalización de la mente. In D. Pérez Chico & M. G. Bedia (Eds.), Nueva Ciencia Cognitiva: Hacia una Teoría Integral de la Mente (in press). Zaragoza: PUZ

• Optional:

• van Gelder, T. & Port, R. F. (1995). It’s about time: An overview of the dynamical approach to cognition. In: R. F. Port & T. van Gelder (eds.), Mind as Motion: Explorations in the Dynamics of Cognition (pp. 1-43). Cambridge, MA: MIT Press