Social Robot Partners: Still Sci-fi?



Kadir Firat Uyanik

    KOVAN Research Lab.

    Dept. of Computer Eng.

    Middle East Technical Univ.

    Ankara, Turkey

    [email protected]

Abstract—Designing a man-made man has always been one of the most exciting dreams of humankind. It has attracted many scientists, engineers and inquisitive people throughout the history of technology. Particularly in the last decade, many roboticists have shifted their fields of interest from robotic manipulation and navigation to humanoid science (e.g. human-robot interaction, social robots, robot learning). Although computational power, sensor technology and production techniques have advanced considerably, the world is still waiting for the first heartbeat of a robot able to recognize itself and its environment, walk around without falling over, communicate with people, do daily-life tasks for and with people, and learn how to behave properly in an unanticipated situation. It is clear that robotics still has a long way to go. The question is: how complicated can it really be?

    I. INTRODUCTION

    A. Historical Notes

Artificial humans, human-shaped mechanisms and human-like automata are nothing new to mankind. Greek myths, such as those of Hephaestus and Talos, tell of golden robots and bronze machines in human form. Around 1000 BC, the Chinese artificer Yan Shi built a mechanical figure [1] able to sing and act. In the eighth century, the Muslim alchemist Jabir ibn Hayyan (latinized as Geber) gave recipes for artificial slave humans in his Book of Stones, in pursuit of the ultimate goal of takwin¹. Ebul Iz (Al-Jazari) is known as the creator of the first programmable humanoid robot, built in 1206 [2]. His mechanism was a programmable drum machine consisting of four automatic musicians in a boat floating on a lake, meant to entertain guests during royal drinking parties. The melody was changed by moving pegs, in what may be called programming. According to Charles B. Fowler, more than fifty facial and body actions could be generated during each musical selection [3].

¹ The act of takwin is an emulation of the divine, creative and life-giving powers of God.

    Fig. 1. First programmable humanoid robotic system

Leonardo da Vinci designed a humanoid automaton in 1495. Leonardo's robot was capable of humanlike movements such as sitting up, moving its arms and neck, and opening an anatomically correct jaw. Late in the 1700s, Wolfgang von Kempelen built the Turk, a chess-playing humanoid automaton controlled by a human hidden inside its cabinet. In the same century, Jacques de Vaucanson built The Flute Player, a life-size figure of a shepherd playing a flute, and The Tambourine Player. Pierre Jaquet-Droz, his son, and Jean-Frederic Leschot built the Musician, the Draughtsman and the Writer, which were configured by operators to carry out basic tasks such as playing an instrument, drawing a portrait of a woman and writing texts up to forty letters long.

Over the years, humanoid robots became more and more complicated. After the mid-20th century, many theoretical models of biped locomotion were proposed, and the first active anthropomorphic exoskeleton was built at the Mihailo Pupin Institute in Belgrade [6]. From the 1990s onwards came many humanoid robots, such as the famous ASIMO (Advanced Step in Innovative MObility), able to walk on two legs and even run reasonably well, and the robot Cog, designed in Rodney Brooks' group at MIT to emulate human thought and to learn how to behave by experiencing the world as we humans do.


Fig. 2. Left: Leonardo's robot, a knight. Right: Reconstruction of the Turk, a chess-playing humanoid automaton controlled by a human operator.

Today, the robotics community tries to make robots more social, more dexterous and more mobile; in short, much more humanlike.

B. Converging to the Human Day by Day

Sixty-five years ago, there was only one working computer in the world; to debug a program, you had to open it up and walk inside (see "The first computer bug" in the collections of the US Naval Historical Center). More strikingly, people of that time confidently predicted that the United States would only ever need six of these machines, which is certainly not the case today.

Just forty years after the first computer, we had robots, at least in factories, where the environment is well structured and the workspace is fully under control. Looking further ahead, RoboCup² aims to field, by 2050, a team of fully autonomous humanoid robot soccer players that shall win a soccer game, played under the official FIFA rules, against the winner of the most recent World Cup. This may imply that in forty years we can have robot partners, companions and assistants, just as we have computers, laptops and PDAs today.

The rest of this article examines how scientists cope with the issues related to the humanlikeness of robots, mainly in terms of appearance and intelligence.

II. TO BE HUMANLIKE

Information technology has made remarkable progress recently. The internet, networking and communication have advanced, and the forms of communication and social life have changed considerably. Robots, meanwhile, have been to the oceans [27] and into volcanoes [28]. They have become

² One of the best-known annual robotics competitions, started in 1997; see www.robocup.org for detailed information.

helicopters performing inverted flight [10], and have even been to Mars [29]. As a next step, they will enter probably the most sophisticated environment of all: our living rooms.

There, they should not only act on the physical objects around them, but also interact with people. They should be capable of doing things not only for us, but also with us, which necessitates generating proper actions in unanticipated situations and understanding human beliefs, desires and physical actions. Hence, such robots should set up human-centric communication, move around environments designed for people, make sense of what they see, hear and touch, and learn to do things in a social manner.

    A. Appearance and Interactive Behaviors

People try to make animals, plants or even inanimate objects talk, walk, see and think; in a way, they make them pretend to behave like humans. There are many examples of this attempt in movies (I, Robot; Artificial Intelligence; Transformers; Wall-E; Short Circuit), cartoons (Irona in Richie Rich, Rosie in The Jetsons), toy design and advertising, but it is pursued most seriously in robotics science.

There are several concerns about the humanlikeness of robots, notably appearance and behavior. Tackling these problems requires two complementary approaches. One comes from robotics science: building humanlike robots based on knowledge from cognitive science. The other comes from cognitive science, which uses robots to verify hypotheses about how humans work. This interdisciplinary framework is called

    Fig. 3. Ishiguro and his android twin

android science [9] by many Japanese roboticists, such as



Prof. Hiroshi Ishiguro, whose lab has child- and adult-sized androids, including his electromechanical twin (Figure 3). This robot, Geminoid [11], is not able to walk or produce complicated movements autonomously; it is teleoperated by Ishiguro, since AI technology is, for now, inadequate for creating humanlike conversations. As can be seen in Figure 4, captured sound and lip movements are encoded and transmitted over the internet to the Geminoid server. The server maintains the state of the conversation and generates the necessary outputs by evaluating the incoming data packets and the conversation state. It also generates unconscious behaviors such as breathing, blinking, and other hand and head movements.
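This data flow suggests a simple relay architecture. The sketch below illustrates the idea in Python; the packet format, actuator calls and idle behaviors are illustrative assumptions, not the actual Geminoid interfaces [12]:

    import queue
    import random

    class TeleopRelay:
        """Minimal sketch of a Geminoid-style relay: it merges operator
        packets (speech + lip motion) with idle 'unconscious' behaviors."""

        def __init__(self):
            self.inbox = queue.Queue()        # packets from the operator console
            self.state = {"speaking": False}  # coarse conversation state

        def receive(self, packet):
            self.inbox.put(packet)

        def step(self):
            try:
                packet = self.inbox.get_nowait()
                self.state["speaking"] = True
                self.play_audio(packet["audio"])          # drive the speakers
                self.move_lips(packet["lip_trajectory"])  # drive lip actuators
            except queue.Empty:
                self.state["speaking"] = False
                self.idle_behavior()  # never freeze while the operator is silent

        def play_audio(self, audio):      # placeholder actuator call
            pass

        def move_lips(self, trajectory):  # placeholder actuator call
            pass

        def idle_behavior(self):
            # Unconscious behaviors keep the android lifelike between packets.
            if random.random() < 0.1:
                print("blink")

Calling step() in a fixed-rate loop would let the android fall back to idle behaviors whenever no operator input arrives, which matches the role the server plays in Figure 4.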

By developing this robot, Ishiguro investigates:

- how we define humanlikeness,
- what human existence and presence mean,
- how the recognition mechanism works in the human brain,
- whether intelligence or long-term communication is the crucial factor in overcoming the uncanny valley.

Fig. 5. The uncanny valley; a simplified version of the figure in [13].

The uncanny valley is a hypothesis introduced by Masahiro Mori in 1970 [13]. It describes the revulsion felt by human observers when robots, or anything else resembling humans, act almost, but not entirely, like actual humans. To bridge this valley, robots' behaviors and communication capabilities should be as familiar as possible to humans. An interaction that increases familiarity and makes communication smoother is called social interaction [14]. Although it may seem unnecessary, humans chat with each other while accomplishing tasks; this interaction may have no explicit purpose of information exchange, but it serves as the basis of smooth communication. That is why, among robots with identical functionality, humans will tend to prefer the more familiar, in a way more social, robots as their partners.

Fig. 6. The robot Robovie managing a conversation (adapted from B. Mutlu et al., 2009).

A communication robot should have capabilities that androids do not yet have. First, the robot should be self-contained in terms of actuation, which makes communication more effective: there should be no wires or other tethers preventing the robot from moving around, and no communication devices such as speakers or microphones outside the robot's body, which would again limit its range. Haptic communication, which makes interaction more familiar, is also important, and it requires touch sensors on the robot's body.

Communication robots are expected to serve in various informational tasks (e.g. museum guide, information-booth personnel, shop assistant). This requires enhanced communication skills such as managing turn-taking and performing appropriate listening behavior. During a conversation, people switch between different participant roles, such as speaker and addressee, and there may also be side participants, non-participating bystanders and overhearers. Although communication robots are still not capable of recognizing speech robustly and generating speech adaptively, it has been shown that gaze behaviors play an important role in establishing and maintaining these conversational roles [15].
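As a toy illustration of how gaze can signal these roles, a robot might distribute its gaze time across interlocutors according to their assigned role. The role shares below are made-up numbers for illustration; [15] derives its gaze patterns from human conversational data:

    import random

    # Illustrative gaze-time shares per conversational role; the numbers
    # are assumptions, not the measured patterns used in [15].
    GAZE_SHARE = {"addressee": 0.70, "side participant": 0.25, "bystander": 0.05}

    def pick_gaze_target(participants):
        """participants: dict mapping a person's name to their role."""
        names = list(participants)
        weights = [GAZE_SHARE[participants[name]] for name in names]
        return random.choices(names, weights=weights, k=1)[0]

    people = {"Ann": "addressee", "Bob": "side participant", "Eve": "bystander"}
    print([pick_gaze_target(people) for _ in range(5)])  # mostly "Ann"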

    B. Intelligence and Learning

A humanoid robot should be able to adapt itself to dynamically changing circumstances, and it should also be a quick learner, in order to be useful in human-populated



Fig. 4. Overall system and data flow in the Geminoid system [12].

environments. The degree of intelligence of a robot is generally understood as how successful it is at one or several tasks. For an intelligent robot, achieving a goal or accomplishing a task depends on its perception, decision-making and actuation capabilities. Actuation is not a fully solved problem: ASIMO relies on zero-moment-point-based control and a non-regenerative, highly inefficient actuation system, while Petman [16], the first robot to move dynamically like a real person with its heel-toe walking pattern, runs on a combustion engine that is unsuitable for indoor environments. Perception and learning, however, are in an even less promising state.

1) Perception: Humanoid robots should be aware of themselves, and they should also get the necessary information from the outside world to behave successfully. Today, self-awareness can be mimicked using several sensors: motor encoders, force and tactile sensors, potentiometers and the like provide proprioceptive information; gyroscope-accelerometer pairs provide information about posture and alignment; microphones provide auditory information; and stereo cameras, along with superhuman sensors such as infrared range cameras and ultrasonic range finders, serve as vision sensors.

The problem is that robots never understand what they sense. They merely act as if they did, by running algorithms that are nothing more than the roboticists' interpretations. Unfortunately, scientists still do not know exactly how the human brain interprets electrical signals, which are comparable to the numerical values that robots obtain from their sensors.

Robot vision is one of the major problems in perception. An example is the grasping of novel objects seen for the first time. Stereo cameras are not good enough if the objects are textureless or transparent; time-of-flight range cameras have low resolution, and laser range finders need too much time to scan in 3D. And although stereopsis and 3D reconstruction work well, only the visible portions of an object can be reconstructed. One solution is not to build a full 3D representation of the object at all, but to learn how to use partial shape information to find an optimal grasping point [17], [18], [19] by computing and evaluating several features, such as contact area, contact symmetry, force closure and so on.
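A minimal sketch of this feature-based idea follows; the feature names and the logistic scorer are illustrative assumptions in the spirit of [17]-[19], not the models used there:

    import numpy as np

    def grasp_features(candidate):
        # Hypothetical per-candidate features: contact area, contact
        # symmetry, a force-closure score, plus a bias term.
        return np.array([candidate["contact_area"],
                         candidate["contact_symmetry"],
                         candidate["force_closure"],
                         1.0])

    def grasp_score(candidate, w):
        """Logistic score: learned probability of a stable grasp."""
        return 1.0 / (1.0 + np.exp(-w @ grasp_features(candidate)))

    def best_grasp(candidates, w):
        return max(candidates, key=lambda c: grasp_score(c, w))

    # Toy usage with hand-set weights; in practice w would be trained on
    # grasp attempts labeled as successes or failures.
    w = np.array([0.8, 1.2, 2.0, -1.5])
    candidates = [
        {"contact_area": 0.3, "contact_symmetry": 0.9, "force_closure": 0.7},
        {"contact_area": 0.6, "contact_symmetry": 0.4, "force_closure": 0.5},
    ]
    print(best_grasp(candidates, w))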

Another problem with objects is understanding their permanence. Human infants acquire knowledge of their environment by interacting with objects, and one of the milestones in this development is learning the permanence of objects, or the conception of physical causality: knowing that an object continues to exist even when it is occluded by other objects. This requires extracting information about an object that depends on the state of its environment. Recently, a model of situation-dependent prediction was proposed in [20]. The model consists of four major modules, namely the attention, environment, predictor-selector and motion-predictor modules, which are briefly explained in Figure 7.
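Structurally, such a pipeline can be sketched as below; the module interfaces, and the stand-in for the learned situation clustering, are my assumptions rather than the exact architecture of [20]:

    class SituationDependentPredictor:
        """Rough sketch of the four-module pipeline of [20]:
        attention -> environment -> predictor selector -> motion predictor."""

        def __init__(self, predictors):
            # One motion predictor per situation class.
            self.predictors = predictors

        def attention(self, scene):
            # Extract the object state and the geometry of its surroundings.
            return scene["object_state"], scene["surroundings"]

        def situation(self, surroundings):
            # Stand-in for the learned self-organization step ([20] uses a
            # Restricted Boltzmann Machine type network here [21]).
            return hash(tuple(surroundings)) % len(self.predictors)

        def predict_next(self, scene):
            obj, surroundings = self.attention(scene)
            k = self.situation(surroundings)  # predictor selector
            return self.predictors[k](obj)    # situation-specific predictor

    # Toy usage: two situations, e.g. "free fall" vs. "resting on support".
    predictors = [lambda obj: obj - 1.0, lambda obj: obj]
    model = SituationDependentPredictor(predictors)
    print(model.predict_next({"object_state": 5.0, "surroundings": (0, 1)}))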

Not only visual recognition but also auditory recognition has similar problems. Among the major ones are discriminating the sound source of interest (SoI) and



Fig. 7. A model of physical-causality perception. The attention module extracts the geometric information of the object and its surroundings; in the environment module, this information is self-organized by a Restricted Boltzmann Machine type network [21], and the next position of the object is calculated by the prediction module based on the current state of the object and its environment [20].

detecting the sound source location (SSL). Although there are several methods for SSL, such as receiver-operating-characteristic analysis [30] and time-delay-of-arrival (TDOA) approaches [31], speech recognition and mood detection remain crucial problems when the robot is a human's communication partner.
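As a concrete example of the TDOA idea, the delay between two microphone signals can be estimated from the peak of their cross-correlation. This is a textbook version; [31] uses a more robust variant on a microphone array:

    import numpy as np

    def tdoa(sig_a, sig_b, fs):
        """Estimate the time delay of arrival (seconds) of sig_a relative
        to sig_b from the peak of their cross-correlation."""
        corr = np.correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)  # lag in samples
        return lag / fs

    # Toy usage: the same chirp reaches microphone A five samples later.
    fs = 16000
    t = np.arange(256)
    chirp = np.sin(2 * np.pi * 0.05 * t * t / len(t))
    sig_b = np.zeros(300)
    sig_b[:256] = chirp
    sig_a = np.zeros(300)
    sig_a[5:261] = chirp
    print(tdoa(sig_a, sig_b, fs))  # ~ 5 / 16000 s

With three or more microphones, pairwise delays like this one constrain the direction of the source, which is the basis of SSL.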

Researchers, in a way, sidestep these problems in order to work on higher-level ones, by using teleoperation systems in which the perception ability is distributed into the environment: a multi-camera infrared setup (a motion-capture system) to obtain more complete information about the object of interest, piezoelectric pressure sensors in the floor to locate communication partners (e.g. addressees or bystanders during a conversation), multiple microphones to locate the SoI, or remote control panels to command some of the robot's higher-level behaviors, as in the robot Geminoid (see Figure 4) and NASA's Robonaut [22].

2) Learning: Social robot partners are supposed to work in environments designed in a human-centered manner. That is, these robots will face highly changing circumstances; they should adapt their capabilities to those changes and add new skills to their repertoire quickly. Today, robots suffer from the computational complexity of perception algorithms, long task-learning phases, and low generalizability of learned behaviors across different agents and across different tasks for the same agent. To deal with these problems, researchers have proposed several techniques.

a) Reinforcement Learning:

In reinforcement learning (RL), a robot is rewarded or punished according to the results of its interactions with the environment. Learning amounts to finding a policy of actions that maximizes the subsequent reward. If we define accomplishing a task as generating the proper action sequence, then a robot learning to achieve a goal actually learns what to do next in each particular state.
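A minimal tabular Q-learning sketch makes the "learn what to do next in each state" idea concrete. The corridor world, rewards and parameters here are illustrative assumptions:

    import random

    N_STATES, ACTIONS = 6, (-1, +1)    # corridor states 0..5; move left/right
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0  # reward only at the goal
        return nxt, reward

    for _ in range(500):  # episodes
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection, ties broken randomly.
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
            nxt, r = step(s, a)
            # Q-learning update: move Q(s,a) toward r + GAMMA * max_a' Q(s',a').
            best_next = max(Q[(nxt, act)] for act in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = nxt

    # The learned policy: the best action to take next in each state.
    print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})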

RL has been successfully implemented on different platforms: an autonomous helicopter that learns to fly inverted [10], a robot soccer team that learns to keep the ball away from opponent robots [32], and a humanoid robot that learns to play air hockey against a human opponent [33]. One difficulty with RL is that the state-action space can be very large (which slows down the learning process and decreases the generalizability of the learned tasks), the usual case for highly anthropomorphic robots with many degrees of freedom in their body kinematics. One solution is to manually define, or hard-code, some parts of the task to be learned. For instance, in Atkeson's work on air-hockey playing, primitive behaviors are given to the system manually, which shrinks the state-action space and helps the system converge to the optimal action policy much more quickly.



b) Affordance Learning:

J.J. Gibson introduced the concept of affordances to emphasize the relationship between an organism and its environment [26]. Gibson claims that each action needs only the perceptual features relevant to its execution, which can be supplied by dedicated filters, running concurrently, that extract certain cues from the environment. This results in an immense perceptual economy. He also notes that an affordance is relative to the organism: a bowling ball is liftable for an adult, yet not for a small child.

This concept has been studied by various research groups, commonly in terms of learning the consequences of a particular action [24] or learning the invariant properties of environments that afford a certain action [25]. According to the representation given in Figure 8, affordances can be used to estimate the outcomes of actions, to plan actions that accomplish a task, and to recognize objects and the actions of others. This representation has been applied to various problems, such as directly grounding symbolic planning operators in continuous sensory-motor experience [35], building goal-directed behaviors from primitive behaviors by learning the effects of actions on different objects [36], learning to grasp novel objects via local visual descriptors of good grasping points [37], and learning the traversability affordance [38]. In the affordance study of Sahin et al., an affordance is formalized as a nested triple

(effect, (entity, behavior))

where entity represents the initial state of the environment (as directly perceived by the agent) before the robot performs the action, behavior is the means by which the agent interacts with the entity, and effect represents the perceptual change in the entity (including the object of interest) after the behavior is applied. For instance, a robot can capture the relationship between a black can and the action it applied to it as

(lifted, (black-can, lift-with-right-hand))

If the same agent then applies the same behavior to a different can, say a yellow one, and obtains the same effect, it can generalize its representation to

(lifted, (can, lift-with-right-hand))

Here, the perception of the object's color (a can in this case) loses its importance when the behavior lift-with-right-hand is to be executed, which is an example of perceptual economy. Hence, a robot learning via the affordance schema does not try to extract an object model to plan actions upon; instead, it builds its own representation of the world in terms of several features, including shape, orientation, color and other relevant factors. The robot's experiences with objects are then categorized (e.g. via support vector machines) to build higher-level symbols of the world.
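The generalization step can be sketched directly on the triple representation. The merging rule below is a simplified illustration, not the actual learning algorithm of Sahin et al.:

    from collections import namedtuple

    Affordance = namedtuple("Affordance", ["effect", "entity", "behavior"])

    def generalize(a, b):
        """If two experiences share behavior and effect, keep only the
        entity features they agree on (perceptual economy)."""
        if a.effect != b.effect or a.behavior != b.behavior:
            return None
        shared = {k: v for k, v in a.entity.items() if b.entity.get(k) == v}
        return Affordance(a.effect, shared, a.behavior)

    exp1 = Affordance("lifted", {"shape": "can", "color": "black"},
                      "lift-with-right-hand")
    exp2 = Affordance("lifted", {"shape": "can", "color": "yellow"},
                      "lift-with-right-hand")

    # Color drops out of the generalized affordance; shape is kept.
    print(generalize(exp1, exp2))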

Fig. 8. Encoding affordances as relationships between actions, objects and effects [34].

c) Social Learning:

There are several useful mechanisms for transferring knowledge between agents (biological, computational or robotic autonomous systems), such as social learning, behavior matching, imitation [7] and programming by demonstration [23]. Humans, for instance, rely on imitation, or observational learning, in social interaction, mostly to broaden their behavior repertoire, coordinate the characteristics of an interaction, and ground their understanding of others' behaviors in their own experience.

Psychologists have proposed different theories about how imitation (i.e. social learning) occurs in the human infant. Three of them are active intermodal mapping (e.g. Meltzoff and Moore, 1983, 1994, 1997), associative sequence learning (Heyes, 2001, 2005; Heyes and Ray, 2000) and the theory of goal-directed imitation (Wohlschläger et al., 2003). These theories explain how matching behaviors are generated and how the correspondence problem is bridged. The correspondence problem [8] is a crucial problem in imitation: it shows up when the imitating agent tries to find and execute, with its own embodiment, a sequence of actions generated by a demonstrator that may have a dissimilar embodiment.

In robotics, one difficulty is the perception of the counterpart. To overcome this, motion-capture systems are generally used to sense the counterpart's movements. However, obtaining information about the counterpart's motion is not enough; this data must also be mapped into the robot's frame of reference.
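That mapping is, at its core, a rigid-body transform. A toy sketch, with a made-up calibration of the robot's pose in the motion-capture frame:

    import numpy as np

    # Map a motion-captured point from the world frame into the robot's
    # frame via a rigid transform; (R, t) would come from calibrating the
    # robot's pose in the motion-capture system. Values are illustrative.
    def world_to_robot(p_world, R, t):
        """p_robot = R^T (p_world - t), with (R, t) the robot's world pose."""
        return R.T @ (p_world - t)

    theta = np.pi / 2                      # robot yawed 90 degrees in the world
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])
    t = np.array([1.0, 0.0, 0.0])          # robot base position in the world

    hand_world = np.array([1.0, 2.0, 0.5]) # demonstrator's hand, mocap frame
    print(world_to_robot(hand_world, R, t))  # -> [2., 0., 0.5]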



Fig. 9. The problem of producing a behavior that matches an observed one lies in the coding that relates observed and executed movements; things get more and more complicated if the agents have different body kinematics. The picture is from the book Robot Programming by Demonstration by Sylvain Calinon.

At this stage, the problem of what to imitate emerges. There are studies enabling robots to perceive the relevant aspects of a counterpart's movements. For instance, Breazeal and Scassellati's work [39] on the robots Cog and Kismet includes detecting human faces and eyes and following a human's gaze direction; those robots can also recognize human facial expressions and emotional vocalizations. Billard and Schaal's work [40] shows how to segment the relevant actions, that is, the starting and ending instants of the action to be matched.

Inferring the goal of the demonstrator is another difficulty, and currently researchers mostly set goals by hand. For example, the work of Alissandrakis et al. [41] shows how a robot can be told to imitate at the path level, the trajectory level or the end-point level, corresponding to imitating the whole action, the sub-goals only, or the goal only, respectively. The work of Billard et al. [42], on the other hand, shows how a robot can infer the demonstrator's goal. Their robot extracts the invariants across demonstrations (e.g. moving several different boxes with the left hand). The robot starts by copying the behavior at a coarse level, replicating the whole trajectory or path of the action; it then identifies the crucial parts of the movement and tries to reach the same result using actions it already knows.
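The invariant-extraction step can be illustrated in a few lines: dimensions whose variance across demonstrations is low are treated as the crucial parts to imitate. The data and threshold are illustrative, not those of [42]:

    import numpy as np

    def invariant_mask(demos, threshold=0.01):
        """demos: array of shape (n_demos, n_timesteps, n_dims).
        Returns True for dimensions that barely vary across demonstrations."""
        variance = np.var(demos, axis=0)          # variance across demos
        return variance.mean(axis=0) < threshold  # averaged over time

    # Three toy demonstrations of a 2-D end-effector path: the x coordinate
    # differs between demonstrations, the y coordinate (e.g. lift height) does not.
    t = np.linspace(0.0, 1.0, 50)
    demos = np.stack([np.column_stack([0.5 * t, t]),
                      np.column_stack([1.0 * t, t]),
                      np.column_stack([1.5 * t, t])])
    print(invariant_mask(demos))  # [False  True] -> y is the invariant to imitate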

    III. CONCLUSION

Considering four decades of research and some eminently promising results, social robot partners are not a matter of science fiction anymore. Due to its interdisciplinary nature, robotics benefits from advances in both the social sciences and engineering, which results in a growing community and rapidly accumulating knowledge. Today, many researchers believe that robotics is on the edge of a revolution, like computing in the 80s. Although their claims have stronger grounding than Marvin Minsky's³ (the former head of MIT's AI Lab), various problems related to perception, control and learning still make us skeptical about getting that dream robot which can adapt itself to our highly dynamic environment, and understand and learn what we say, show, do and even think, in order to set up intuitive communication.

    REFERENCES

[1] C. Cheng-Yih, A Re-examination of the Physics of Motion, Acoustics, Astronomy and Scientific Thoughts, Hong Kong University Press, 1996, p. 11.

[2] N. Gunalan, "Islamic Automation: A Reading of al-Jazari's The Book of Knowledge of Ingenious Mechanical Devices," in MediaArtHistories, O. Grau (Ed.), Cambridge, MA: MIT Press, 2007, pp. 163-178.

[3] Based on Prof. N. Sharkey's work, "A 13th Century Programmable Robot," The University of Sheffield, shef.ac.uk/marcoms/eview/articles58/robot.html.

[4] W.C. Chittick, The Sufi Path of Knowledge: Ibn al-Arabi's Metaphysics of Imagination, State University of New York Press, 1989, p. 183.

[5] G. Wood, Living Dolls: A Magical History of the Quest for Mechanical Life, guardian.co.uk.

[6] M. Vukobratovic, Legged Locomotion Robots and Anthropomorphic Mechanisms, Mihailo Pupin Institute, Belgrade, 1975.

[7] C.L. Nehaniv and K. Dautenhahn (Eds.), Imitation and Social Learning in Robots, Humans and Animals, Cambridge University Press, 2007.

[8] C.L. Nehaniv and K. Dautenhahn, "The Correspondence Problem," MIT Press, 2002.

[9] H. Ishiguro, "Android Science: Toward a New Cross-Interdisciplinary Framework," Stresa, Italy, July 25-26, 2005, pp. 1-6.

[10] A. Ng, A. Coates, M. Diel, "Autonomous Inverted Helicopter Flight via Reinforcement Learning," International Symposium on Experimental Robotics, 2004, pp. 1-10.

[11] H. Ishiguro, "Tele-operated Android of an Existent Person," Humanoid Robots: New Developments, 2007, pp. 2-4.

[12] H. Ishiguro, "Building Artificial Humans to Understand Humans," Artificial Organs, 2007, pp. 133-142.

[13] M. Mori, "The Uncanny Valley," Energy, vol. 7, no. 4, 1970, pp. 33-35.

[14] N. Mitsunaga, T. Miyashita, H. Ishiguro, K. Kogure, N. Hagita, "Robovie-IV: A Communication Robot Interacting with People Daily in an Office," IROS, 2006.

[15] B. Mutlu, T. Shiwa, T. Kanda, H. Ishiguro, N. Hagita, "Footing in Human-Robot Conversations: How Robots Might Shape Participant Roles Using Gaze Cues," 4th ACM/IEEE Conference on Human-Robot Interaction, vol. 2, 2009.

[16] Boston Dynamics, Petman, www.bostondynamics.com.

[17] A. Saxena, J. Driemeyer, J. Kearns, C. Osondu, A.Y. Ng, "Learning to Grasp Novel Objects Using Vision," International Symposium on Experimental Robotics, 2006.

[18] A. Saxena, J. Driemeyer, J. Kearns, A.Y. Ng, "Robotic Grasping of Novel Objects," Neural Information Processing Systems (NIPS 19), vol. 19, 2007.

[19] A. Saxena, L.L. Wong, A.Y. Ng, "Learning Grasp Strategies with Partial Shape Information," AAAI, 2008.

³ "In three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that its powers will be incalculable."



[20] M. Ogino, T. Fujita, S. Fuke, M. Asada, "Learning of Situation Dependent Prediction toward Acquiring Physical Causality," EpiRob, 2009.

[21] G.E. Hinton, S. Osindero, Y. Teh, "A Fast Learning Algorithm for Deep Belief Nets," Neural Computation, 18:1527-1554, 2006.

[22] R. Ambrose, H. Aldridge, R. Askew, "Robonaut: NASA's Space Humanoid," IEEE Intelligent Systems, 2000, 15(4):57-63.

[23] A. Cypher (Ed.), Watch What I Do: Programming by Demonstration, MIT Press, 1993.

[24] A. Stoytchev, "Behavior-Grounded Representation of Tool Affordances," ICRA, 2005.

[25] K. MacDorman, "Responding to Affordances: Learning and Projecting a Sensorimotor Mapping," ICRA, 2000.

[26] J.J. Gibson, "The Theory of Affordances," in R. Shaw and J. Bransford (Eds.), Perceiving, Acting, and Knowing: Toward an Ecological Psychology, Hillsdale, NJ: Lawrence Erlbaum, 1977, pp. 67-82.

[27] M.V. Jakuba, "Modeling and Control of an Autonomous Underwater Vehicle with Combined Foil/Thruster Actuators," MS thesis, 2003.

[28] G. Muscato, D. Caltabiano, S. Guccione, D. Longo, M. Coltelli, A. Cristaldi, E. Pecora, V. Sacco, P. Sim, G.S. Virk, P. Briole, A. Semerano, T. White, "ROBOVOLC: A Robot for Volcano Exploration - Results of First Test Campaigns," Industrial Robot: An International Journal, 2003, vol. 30, no. 3, pp. 231-242.

[29] T.L. Huntsberger, G. Rodriguez, P.S. Schenker, "Robotics Challenges for Robotic and Human Mars Exploration," 2000.

[30] D.M. Green and J.M. Swets, Signal Detection Theory and Psychophysics, New York: John Wiley and Sons, 1966.

[31] J.M. Valin, F. Michaud, J. Rouat, D. Letourneau, "Robust Sound Source Localization Using a Microphone Array on a Mobile Robot," International Conference on Intelligent Robots and Systems, 2003, pp. 1228-1233.

[32] P. Stone, R.S. Sutton, "Scaling Reinforcement Learning toward RoboCup Soccer," Machine Learning, 2001, pp. 537-544.

[33] D.C. Bentivegna, C.G. Atkeson, "Learning from Observation Using Primitives," ICRA, 2001.

[34] L. Montesano, M. Lopes, A. Bernardino, "Learning Object Affordances: From Sensory-Motor Coordination to Imitation," IEEE Transactions on Robotics, 2008.

[35] E. Ugur, E. Oztop, E. Sahin, "Learning Object Affordances for Planning," ICRA, 2009.

[36] M. Dogar, M. Cakmak, E. Ugur, E. Sahin, "From Primitive Behaviors to Goal-Directed Behavior Using Affordances," IEEE/RSJ IROS, 2007, pp. 729-734.

[37] L. Montesano, M. Lopes, "Learning Affordance Visual Descriptors for Grasping," ICDL, 2009.

[38] E. Ugur, E. Sahin, "A Case Study for Learning and Perceiving Affordances in Robots," Ecological Psychology, 2009, pp. 1-27.

[39] C. Breazeal, B. Scassellati, "Challenges in Building Robots That Imitate People," in Imitation in Animals and Artifacts, 2002.

[40] A. Billard, S. Schaal, "Robust Learning of Arm Trajectories through Human Demonstration," International Conference on Intelligent Robots and Systems, 2001.

[41] A. Alissandrakis, C.L. Nehaniv, K. Dautenhahn, "Imitation with ALICE: Learning to Imitate Corresponding Actions across Dissimilar Embodiments," IEEE Transactions on Systems, Man, and Cybernetics, vol. 32, no. 4, July 2002.

[42] A. Billard, Y. Epars, G. Cheng, S. Schaal, "Discovering Imitation Strategies through Categorization of Multi-Dimensional Data," International Conference on Intelligent Robots and Systems, IEEE, 2003, pp. 2398-2403.
