
Children's conceptions of behaving artifacts

The effect of constructing a robot's behavior on young children's conceptions of behaving artifacts and on their Theory of Mind (ToM) and Theory of Artificial Mind (ToAM)

Karen Precel & David Mioduser

(submitted for publication to the Children, Youth, Environments Journal)


Abstract

Children's leisure and play habits have recently shifted from outdoor to indoor play, and their gaming environments have become increasingly technological. Nowadays, children play with various technological toys, among them robot-pets, which are provided with artificial adaptive behavior - an artificial mind. It is therefore of interest to examine whether children's Theory of Mind (ToM) generalizes to a human-made mind (or to a non-human mind), and whether engagement in building an artificial mind supports the development of ToM and of a Theory of Artificial Mind (ToAM). This pilot study addressed two main questions: (1) What are children's conceptions while constructing a robot's behavior? (2) What are the effects of constructing a robot's behavior on children's development of ToM and ToAM? Two 5-year-old and two 7-year-old children participated in the study. The children were administered a battery of pre- and post-tests assessing ToM and ToAM, and constructed the robot's behavior at two complexity levels: script-based and rule-based. Results indicated a progression in children's thinking as a function of time and age in all sessions, both in their model of the artificial mind and in their script- and rule-based thinking in relation to the robot's behavior.

Keywords: Robots, Theory of Mind, Theory of Artificial Mind, Behavior-Construction


The effect of constructing a robot's behavior on young children's conceptions of behaving artifacts and on their Theory of Mind (ToM) and Theory of Artificial Mind (ToAM)

In recent decades, children's leisure and play habits have shifted from outdoor to indoor play, and their gaming environments have become increasingly technological. Nowadays, many children play online games, have virtual pets and play with robot-pets. A robot-pet, like many microprocessor-based artifacts in our everyday environment, is a unique artifact: it is characterized by purposeful functioning (namely, it 'behaves'), autonomous decision-making, programmability and knowledge-accumulation capabilities (it 'learns'), and adaptive behavior. This new category of creatures blurs the traditional and intuitive distinctions between the alive and the not-alive, the animate and the inanimate, the human-operated and the autonomous. As a consequence, new questions arise: since these artifacts are provided with an artificial mind, it is of interest to examine whether children's Theory of Mind generalizes to a human-made mind (or to a non-human mind), and whether engagement in constructing an artifact's artificial mind supports the development of Theory of Mind (ToM) and Theory of Artificial Mind (ToAM).

Although this new area of the artificial world, the human-mind-made world, is part of our everyday environment, we still know little about the genesis and development of the technological stance required to perform within this environment at different levels of complexity. In addition, examining the influence of children's interaction with behaving artifacts, and of their involvement in constructing a robot's behaviors, on the development and learning of ToM and ToAM is, to the best of our knowledge, a new research area.


In the current pilot study, we address these issues focusing on 5- and 7-year-old children, since significant developments of the cognitive, affective and social systems have been found to emerge between these ages (Sameroff & Haith 1996). Two main questions were addressed: (1) What conceptions evolve during the construction process of a robot's behavior? (2) What are the effects of interacting with behaving artifacts and constructing the artifact's behavior on children's ToM and ToAM?

Background

The theoretical framework of this study integrates research concerning three main areas: the technology (i.e., the world of artificially behaving artifacts, and in particular of robots and artificial minds), children's cognition (focusing on 5 and 7 year olds, for reasons to be presented later), and the cognitive implications of children's interaction with the technology.

Artificial Minds of robots and robotic systems for children

Recent research in the area of the artificial mind focuses on robots provided with the ability to perform human-like processes such as behavior control and decision making (Zhao 2006). For example, Kitamura, Tahara and Asami (2000) developed a consciousness-based architecture (CBA) and examined it in two robots that displayed conscious behavioral changes in various situations toward a prey until they captured it. At MIT, three 'minded' robots were constructed (Scassellati 2001): Cog, designed for investigating how people use and respond to social cues (Edsinger 2001; Irie 1995; Marjanović 1995; 2001; Williamson 1999); Kismet, showing expressions analogous to anger, fatigue, fear or disgust, designed to interact with a caregiver who can regulate its environment and shape its experiences as a parent would do for a child (Breazeal 2000); and Lazlo, provided with an anthropomorphic appearance to explore the aesthetics of social interactions (Edsinger, O'Reilly & Breazeal 2000). More recently, Akiguchi and Maeda (2006) developed pet-like robots capable of changing their emotional behavior according to a stimulus, and Takeno (2006) developed a conscious robot able to "recognize itself in a mirror", presented as the world's first successful demonstration of mirror-image cognition.

Technologies focusing on the construction of behaving artifacts have been brought into children's learning and playing environments since the early 1970s. According to Papert (1980), children become epistemologists when they encounter "objects to think with": since knowledge is embodied in these objects, they serve as cognitive artifacts that provide a link between sensory and abstract knowledge, and between the individual and the social worlds. From the Constructionist viewpoint, a robot can be defined as an "object to think with".

The "floor turtle" controlled with the Logo programming language, developed by Papert and his colleagues at the MIT Media Lab (Harvey 1985; Papert 1980), is an example of an "object to think with". When children worked with LEGO/Logo they learned powerful ideas such as control and feedback (Resnick 2006; Resnick, Martin, Sargent & Silverman 1996), as well as principles of engineering, design, artistic expression, programming and scientific inquiry (Druin 2000; Resnick, Berg & Eisenberg 2000).

In the last two decades, a variety of robot construction kits - like the one implemented in our study - have been developed, e.g., Electronic Bricks (Wyeth & Purchase 2000) and ToonTalk™, used in preschool classrooms (Kahn 1996; Morgado, Cruz & Kahn 2001). While working with these kits, young children face the challenges involved in constructing the actual behaviors of physical devices.

Children’s cognition and developmental issues

Two specific issues are of relevance concerning the children participating in the study: their age levels (5 and 7 years old) and their Theory of Mind.


Developmental changes throughout the years 5 to 7

Our choice of the children's age levels aims to unveil the development of the conceptions addressed in our questions at a critical developmental stage. The literature concerning developmental changes throughout the years 5-7 pinpoints major changes in a range of aspects: cognitive, neuropsychological, emotional, social, and in Theory of Mind.

Cognitive changes include changes in attentional capacity, working memory, executive functions and concept acquisition (e.g., Case 1992; Navon 1977; Rueda et al. 2004; Siegler 1996), as well as in language and reading abilities and literacy skills (e.g., Ehri 1999; Ehri & McCormick 1998; Morrison, McMahon-Griffith & Frazier 1996; Nelson 1996).

Neuroanatomical or organic changes include changes in the frontal and prefrontal areas of the brain, to which high cognitive (executive) functions are attributed, such as attention, memory, planning, decision making, directed goal selection, monitoring of ongoing behaviors and other higher-order abilities (e.g., Anderson, Northam, Hendy & Wrennall 2001; Cycowicz 2000; Fuster 2001; Kanemura et al. 2003; Stauder, Molenaar & Van der Molen 1993; 1999).

Emotional and social development includes changes in moral development (Kohlberg 1981; 1984), development of emotional awareness and regulation (Case 1992), changes in self-attributes and in the self's structure and organization, understanding of opposite-valence attributes and emotions, the ability to observe, evaluate and criticize the self (Harter 1996), development of self-knowledge (Burton & Mitchell 2003) and an integrated understanding of gender (Wehren & De Lisi 1983).

In addition, several theories of ToM argue for the development of ToM processes and abilities beyond age 4, specifically between the ages of 5 and 7 (e.g., Case 1992; Chandler & Lalonde 1996; Flavell 2004; Perner & Wimmer 1985; Pillow 1991; 1993). We expand on this in the next section.


We believe that this transitional stage offers a rich developmental setting for the study of children's evolving conceptions of the artificial in general, and of artificially constructed behavior in particular.

Theory of Mind (ToM): Definition and main findings

The term 'Theory of Mind' was coined by Premack and Woodruff in 1978 to denote the ability to conceive mental states: knowing that other people know, want, feel or believe things. Lewis and Mitchell (1994) referred to ToM as the ability to make inferences about others' representational states and to predict behavior accordingly. Baron-Cohen (2001) defined ToM as the ability "to infer to a full range of mental states (beliefs, desires, intentions, imagination, emotions, etc) that cause action". "In brief", Baron-Cohen indicates, "having a theory of mind is to be able to reflect on the contents of one's own and other's mind" (p. 174).

The majority of the studies that focused on the preschool child indicate that most normally developing children pass the test tasks at age 4 whereas younger children fail, suggesting that full understanding of other people's minds occurs at age 4 and not earlier (e.g., Flavell, Flavell & Green 1983; Wellman, Cross & Watson 2001). However, studies of diverse foci have evidenced later development of ToM concepts and suggested stage-like development. For example, studies have shown that awareness of mental activity increases between 4-year-old children and older ones (Case 1992); that 6-7 year-old children, but not younger ones, are able to appreciate that what people like and dislike will dictate what these people judge as moral or factual events (Pillow 1991; 1993); and that only 7-8 year-old children are capable of understanding the interpretive nature of knowledge, whereas 5-year-olds fail to do so (Carpendale & Chandler 1996).

The studies in the ToM development literature have focused on aspects such as understanding pretence, beliefs and desires. Since the current pilot study focuses on adaptive behavior, we believe that, in addition to these aspects, which we refer to as "classical aspects of ToM", other aspects of the mind should be examined. Thus, in the current study the mind is treated not just as a belief system but also as a behavior-control system, responsible for adaptive behavior and decision making. These aspects, which are, to the best of our knowledge, a new research field in the area of Theory of Mind, will be referred to in this paper as "newly defined aspects of ToM".

Children's conceptions of behaving artifacts

Much research has been conducted regarding children's conceptions of natural kinds and human-made artifacts (e.g., Bloom 1996; Diesendruck, Hammer & Catz 2003; DiYanni & Kelemen 2005; Matan & Carey 2001; Ross, Gelman & Rosengren 2005). In contrast, research on children's conceptions of behaving artifacts and the artificial mind is sparse. In examining 7-8 year-old children's conceptions of behaving artifacts, Bumby and Dautenhahn (1999) found that children tended to describe robots in social settings and to animate them: in their stories, robots were placed in schools or families and were talked to as one talks to a pet. Turkle (1984) and Ackerman (1991) suggested psychological and physical/technological explanatory frameworks for behaving artifacts: whereas the former focuses on intentions and emotions, the latter focuses on the building blocks of the mechanism (such as motors, sensors, etc.). Resnick and Martin (1991) added the information level, which relates to how information flows among subsystems.

In examining which frameworks 5-6 year-old children use when reasoning about a robot, ** (2007) found that in easy tasks most children tended to use a technological perspective; the higher the difficulty of the task, the more the children shifted to a psychological perspective. The support of an adult influenced the type of explanations given: supported children described the robot's behavior in more technological terms. In addition, the greater the interaction between the child and the robotic system, the better the children's understanding of the robot's behavior.

Children's conceptions of the artificial mind

Research on children's conceptions of the brain provides evidence that 5-year-old children are more limited in their understanding of the brain and in their brain-related attributions than older children and adults. For example, whereas 5-year-old children attributed a brain only to people, older children and adults attributed a brain to a person, a robot and a computer (Scaife & Van Duuren 1995). In addition, whereas young children's attributions of a brain were based on perceptual cues (physical similarity to humans), older children and adults tended to make these determinations based on cognitive capabilities, autonomy and conceptual classifications, and only older children viewed intelligent artifacts (robots and computers) as capable of brain-related behavior (Van Duuren & Scaife 1996). More recently, Bernstein (2006) found an interaction between children's judgments of animacy, their beliefs about the intelligence capabilities of robots, and the amount of prior exposure to them.

One aspect of children's conceptions of the artificial mind is their understanding of the robot's thinking procedures (e.g., algorithms, rules). In assessing children's understanding of the rules underlying a robot's behavior, six 5-6 year-old children were asked to program a robot using rules (* 2007). Children's representations were classified as: (1) Episodes (basic descriptions of a unique sequence of events), (2) Scripts (structured descriptions in which events are organized around repeating patterns and specific goals) and (3) Rules (a high description level referring to temporal associations between environmental conditions and the robot's actions). It was found that the longer the children interacted with the robot, the higher their description levels, and the more their conceptualizations of the robot and the system were technologically rather than behaviorally oriented.
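The difference between the Script and Rule levels can be illustrated with a minimal sketch (hypothetical code, not the study's software; the action and condition names are our assumptions): a script replays a fixed action sequence, while a rule derives each action from the currently sensed condition.

```python
# Hypothetical sketch contrasting two description levels of robot behavior:
# a script is a fixed action sequence; a rule maps sensed conditions to actions.

def run_script(script):
    """A script yields the same action sequence regardless of the environment."""
    return list(script)

def run_rules(rules, sensor_readings):
    """A rule-based behavior derives each action from the current sensed condition."""
    return [rules[reading] for reading in sensor_readings]

script = ["forward", "forward", "turn_right", "forward"]
rules = {"white": "forward", "black": "backward"}  # condition -> action

print(run_script(script))                                # ['forward', 'forward', 'turn_right', 'forward']
print(run_rules(rules, ["white", "white", "black", "white"]))  # ['forward', 'forward', 'backward', 'forward']
```

The script produces the same output in any environment, whereas the rule-based behavior changes with the sensor readings, which is what the "temporal associations between environmental conditions and the robot's actions" at the Rules level capture.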


Gaps in the literature - brief conclusion

In reviewing the theoretical background we found gaps in the literature and open questions not addressed in previous studies: (1) little is known about children's conceptions of behaving artifacts, including robots; (2) in most studies the participants were school-aged children, and less attention was given to the critical ages of 5-7 - only two studies examined the influence of constructing a robot's behavior among 5-7 year-old children, and their focus was on aspects other than cognitive development (* 2007; ** 2007); (3) to the best of our knowledge, no studies have examined how interacting with robotic devices might influence the development of ToM and ToAM.

The current pilot study is part of a comprehensive research plan aiming to examine, by means of systematic and thorough research among 5-7 year-old children, the influence of constructing a robot's behavior at low and high complexity levels on the development of ToM and ToAM.

Methodology

Research population

Two 5-year-old children (a boy aged 5.5 and a girl aged 4.10) and two 7-year-old children (a boy aged 7.5 and a girl aged 6.11) from central Israel participated in the pilot study. All children were recruited using convenience sampling, and all were administered the full battery of tests and tasks. The children's parents approved their child's participation in the study.

Research Instruments

Two main research instruments were used in the study: (1) a robotic environment, and (2) data collection tools (pre-post tests and process tasks).


The robotic environment

A modified version of the computerized control environment designed and piloted in previous studies (* 2007; ** 2007; *** 1998) was used in this study. The environment includes a computer interface (Figure 1a), a physical robot (Figure 1b) and modifiable "landscapes" for the robot's navigation (Appendix II). It provides an iconic interface for defining the control rules in a simple and intuitive fashion. The right panel presents the progression of modes for constructing the robot's behavior, from immediate remote control of the behavior (top, easy task) to constructing two interrelated rules (bottom, advanced task). The central section is the 'Desktop', in which children construct the behavior procedures. The modes' options are learned while performing the tasks. For example, in Figure 1 (a high-complexity task), constructing the rule requires reference to two input modes (light, darkness): the child drags icons that represent the expected behavior for each condition. Once constructed, the rule is downloaded to the robot using the 'download' icon (circled).

---------------------------------------------------------------------------------

Insert Figure 1 about here

---------------------------------------------------------------------------------
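A two-condition rule of the kind constructed in the high-complexity task can be sketched in code (an illustrative sketch only: the children built such rules by dragging icons, not by writing code, and the behavior names here are hypothetical).

```python
# Illustrative sketch of a two-condition control rule: one expected behavior
# per input mode (light, darkness). The behavior names are hypothetical.

def make_rule(behavior_in_light, behavior_in_darkness):
    """Return a rule mapping the sensed light condition to a behavior."""
    def rule(sensed_condition):
        return behavior_in_light if sensed_condition == "light" else behavior_in_darkness
    return rule

# The child chooses an expected behavior for each condition...
rule = make_rule("move_forward", "stop")

# ...and once the rule is "downloaded", the robot evaluates it on each sensor reading.
for reading in ["light", "darkness"]:
    print(reading, "->", rule(reading))  # light -> move_forward, darkness -> stop
```

The point of the sketch is that the child specifies only the condition-behavior pairs; the robot, not the child, decides at run time which behavior applies.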

Data collection tools

Two sets of data collection tools were used in the study: (1) pre-post tests and (2) process tasks.

Pre and Post tests

Children were administered a battery of pre- and post-tests relating to the two dependent variables: assessment of ToM and assessment of ToAM.

(a) Assessment of ToM:

Two main batteries of tasks were used in the study: (1) tasks that assess classic aspects of ToM (understanding first-order beliefs by means of the 'Diverse Desires' task, and second-order beliefs by means of 'The Ice-cream Story' task; Perner & Wimmer 1985); and (2) new tasks, developed along the lines of the classic tasks, that assess aspects of the mind relating to behavior control and adaptivity, which were not part of previous ToM studies.

For example, in the 'Diverse Desires' task (first-order task1), instead of focusing on Mr. Jones's preferences, we elaborated the story so that Mr. Jones's decision-making mechanism is emphasized. The child is told that Mr. Jones likes ice-cream as a dessert during the day, whereas at night he likes cookies. The child is then asked the target question: "So, now it's time to eat. Mr. Jones can only choose one dessert, just one. Which dessert will Mr. Jones choose - ice-cream or a cookie?" To pass this task the child must say that Mr. Jones needs to decide what to eat based on whether it is day or night.

In the ice-cream story (see Appendix I), the main purpose is to assess whether the child understands that each of the characters - Mary, the ice-cream man, and John - can examine reality from her/his own perspective and make decisions accordingly.

(b) Assessment of the development of ToAM:

Two main batteries of tasks were developed and used in the study: (1) elaborations of the classic tasks, designed to assess whether the child has acquired ToAM. For example, in the Diverse Desires task, we ask the child the following: "Robi needs to choose a carrot or a cookie for another child. What will Robi the robot choose? Why?". (2) Tasks developed for this study to assess aspects of the artificial mind relating to behavior control and adaptivity. The following are two examples of such tasks:

1 Diverse Desires Task (Wellman & Liu 2004) - The child sees a toy figure of an adult and a paper with a drawing of a carrot and a cookie, and is told: "Here's Mr. Jones. It's snack time, so Mr. Jones wants a snack to eat. Here are two different snacks: a carrot and a cookie. Which snack would you like best? Would you like a carrot or a cookie best?" (the own-desire question). According to the child's answer, he is told: "Well, that's a good choice, but Mr. Jones really likes [the opposite]". The child is then asked the target question: "So, now it's time to eat. Mr. Jones can only choose one snack, just one. Which snack will Mr. Jones choose - a carrot or a cookie?" To pass this task the child must answer the target question opposite to his own-desire answer.

In an adaptation of the 'Diverse Desires' task (first-order task), the child is presented with a red balloon, a blue balloon and Robi the robot. The child is told that the children in the class decided to bring balloons whenever there is a birthday celebration: red balloons when the celebration takes place during the day, and blue balloons when it takes place in the evening. They also decided that Robi the robot is the one to bring the balloons. The child is then asked the target question: "It's David's birthday party. Robi can bring red or blue balloons. Which balloons will Robi choose - blue or red?" To pass this task the child must say that only if Robi was programmed to make that decision will he be able to decide which balloons to bring.

In an adaptation of the ice-cream story (Appendix I), the child is told the following: Robi the robot helps the ice-cream man carry ice-cream. Robi does this when there are people at the park. Every day Robi follows the same routine: he brings the ice-cream from the ice-cream bin near the apple tree in the park to the playground, where the ice-cream van is usually located. Roni the robot knows that Robi helps the ice-cream man carry ice-cream. One day Robi is taking the ice-cream from the ice-cream bin near the apple tree to its destination, but there are no people around. The child is asked: "What would Robi do? Will Robi continue his routine and leave the ice-cream near the playground? Does Robi know where the ice-cream van is? How can he know that?" The purpose of this task is to assess whether the child thinks that Robi behaves according to (1) a script, and thus will leave the ice-cream even if the van is not there and there are no people around, or (2) a rule that depends on the robot's motion-perceiving sensors, so that he can make decisions accordingly and bring the ice-cream to the location of the van. The child is then told that Roni wants to visit Robi at the park: "Where would Roni look for Robi? Why?". Next, the child is told that Roni didn't find Robi at the park but still wants to find him: "Where would Roni look for Robi? Why? Can she follow Robi? Why?". The purpose of these questions is to assess whether the child knows that Roni the robot was programmed to make decisions according to the environment's conditions, and that if she knows that Robi is where there are people, she will look for him accordingly.

The entire battery of tests was administered twice: in the pre-test and post-test sessions of the study.

Process tasks

In order to assess the influence of constructing the robot's behavior on the development of ToM and ToAM, two tasks differing in complexity level were developed: one for the low complexity level and one for the high complexity level. In the low-complexity condition we used the "going to the basketball yard" task; in the high-complexity condition we used the "politeness" task. These tasks are described in Appendix II. Appendix III presents the behavior-construction tasks, the structured interview questions, and the expected answers.

Design and Procedure

The pilot study consisted of the following stages:

• Pre-test - a battery of 10 tasks assessing children's ToM and ToAM was administered to all four children. The pre-test session lasted 30-50 minutes.

• Construction session - the construction session (about one hour) included the following steps:

o Ten minutes of training with the robotic research environment.

o Low-level construction task - the children were asked to perform the script task "the robot has to arrive at the basketball yard".

o High-level construction task - the children were asked to perform the rule task "the robot has to move freely on the white, without touching the black area".

• Post-test - the battery of 10 tasks assessing children's ToM and ToAM was administered to all four children.
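The high-level rule task can be simulated with a minimal sketch (hypothetical code: a one-dimensional strip of white/black cells is assumed for illustration, whereas the actual task used a two-dimensional landscape).

```python
# Minimal simulation of the rule task "move freely on the white, without
# touching the black area". A 1-D strip of cells is an illustrative assumption.

def step(position, strip):
    """Apply the rule once: move forward while white lies ahead, else back away."""
    if position + 1 < len(strip) and strip[position + 1] == "white":
        return position + 1
    return position - 1  # black area (or edge) ahead: back away

strip = ["white", "white", "white", "black", "white"]
position, path = 0, [0]
for _ in range(6):
    position = step(position, strip)
    path.append(position)

print(path)  # [0, 1, 2, 1, 2, 1, 2] - oscillates at the boundary, never on black
assert all(strip[p] == "white" for p in path)
```

The simulated robot "moves freely on the white" and retreats whenever its sensor reads black ahead, so it never enters the black area; this condition-action coupling is what distinguishes the rule task from the script task above.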

Data were collected from each child separately, in three sessions lasting 30-60 minutes each, over 10 days. Where considered relevant, the interviewer used "prompting interventions" (PI) encouraging the children to expand their explanations. To a certain extent, these interventions, which obviously were not aimed at leading the child to any given answer, triggered explanations that can be seen as pertaining to the child's Zone of Proximal Development (Vygotsky 1986). When relevant, the use of PI is reported in the results section. All sessions were audio-recorded and videotaped.

Data analysis

In accordance with the research questions and the configuration of dependent and independent variables, the data analysis emphasized two foci: (1) between age groups - looking for the interaction between active involvement in constructing artificial behaviors and the development of ToM and ToAM; and (2) within each child's working process - tracking conceptual development, mental modeling, language and formal-notational development, and behavior-construction capabilities along the process. Before the primary data analysis was conducted, two independent judges analyzed 40% of the data, reaching 87% agreement.


Results

Research question 1: What are children's conceptions during the construction process of the robot's behavior?

We present children's conceptions in relation to three aspects: (1) children's device knowledge, including structural, functional and interface knowledge as well as knowledge referring to the overall system's behavior; (2) children's model of the artificial mind; and (3) children's conceptions of the nature of the constructs underlying the robot's behavior (i.e., scripts or rules).

Device knowledge

All the 5- and 7-year-old children displayed knowledge of the device's structure and functions, of the behavior-controlling interface, and of the relation between work with the interface and the robot's actual behavior. Moreover, over time their language became more technological, complex and precise, and shifted from focusing on structural elements to focusing on the interaction among subsystems, and even on the system's constraints and limitations. For example, the 5-year-old children initially explained that the robot "drives with the wheels and can see from here" [pointing to a part of the robot, but not to the sensors]. As the session progressed, they referred to the relation between inputs and outputs: "we told him that he needs to walk forward on the white" and "he went in all directions because you operated it... He turned around because we operated it". A similar path was observed with the 7-year-old children: from a focus on the robot's structural elements at the beginning ("He is made out of Lego"... "He is working because of the wires") and basic assumptions about how it functions ("the battery is telling him what to do"), to a more complex and precise understanding focusing on the system's functioning ("Suddenly he went forward but I told him backward", "we also told him to remember, to put it in his memory").


All children displayed a full understanding of the computer interface as the session progressed (e.g., they knew what the buttons mean and what to activate in order for the robot to 'remember' and operate, understood the importance of the IR tower and the transmission of information between the computer and the robot, and knew that the robot can be operated only when the correct setting requirements are met).

Children's model of the artificial mind

What is the model of the artificial mind held by the children? Our analysis of the data unveiled three levels of models: (1) a ToM-like model, completely based on the children's model of the human mind; (2) a ToM-based ToAM (TbTa), a technological model referring to the artificial mind but using elements borrowed from the model of the human mind; and (3) a fully technological ToAM model. Table 1 summarizes the findings on children's models of the artificial mind by age.

---------------------------------------------------------------------------------

Insert Table 1 about here

---------------------------------------------------------------------------------

Age 5

No evidence was found for a ToM-like model in the 5-year-old children's conceptions of the artificial mind; we found evidence for TbTa and fully ToAM models. Their model was mostly ToM-based at the beginning of the sessions and at stages where the tasks were difficult, with a gradual shift to a fully ToAM model as they proceeded through the tasks. Examples of TbTa and ToAM models are evident in the children's explanations: "He can remember what to do... because he did that before", or "if he will do what you told him... he will succeed". As the session progressed, the children became aware that the robot's knowledge can be changed by teaching it what to do, and that it operates by following commands. However, their models still included references to the human mind: "He listened to the computer!", "He didn't do it... because he is tired", or "He tried and tried but couldn't make it" when the robot failed the task. In addition, we observed that in some cases there was a gap between the children's doing in technological terms (constructing the behavior with the interface) and their explanations in behavioral terms: "I told him to walk on the white" [instead of "I put a forward arrow on the white square"] and "he was walking... and suddenly... black!". We can thus assume that their use of anthropomorphic language is partly related to their still-shallow acquaintance with terms and definitions related to robots and artifacts. Further research including a number of construction sessions is needed to examine this assumption.

Signs of a fully ToAM model were observed already in the middle of the construction session. The children understood that the robot needs to be programmed: "He knew what to do because I did that on the computer", "He will know how to reach [the park] because I will teach him", "He was able to arrive to the basketball yard... because he was driving on his own.. because I moved him". The older child's language was even more technological in relation to his ToAM model: "He will succeed because he has it already in the computer", "we touched the computer and he made it", "the tower drove him there", "this [the tower] and this [the robot] are electric", "The tower knew because of the computer" and "the arrows moved him and I choose the arrows". The child's full understanding of the robot's artificial mind and the information-transmission chain involved in its functioning was also evidenced in his claims that "in the black... he learned from the memory" and "he was able to do it because of the computer... from the computer it reached the wire that told Robi what to do. He knew how to walk because of the tower... and he knew to turn right or go forward because of the computer... The computer knew... because it was pressed... I pressed him!".

Age 7

Similar to the 5-year-old children's conceptions of the artificial mind, no evidence was found for a ToM-like model in 7-year-old children's conceptions. We found little evidence of a ToM-based ToAM model in relation to the artificial mind in the 7-year-old children's answers: "…because he remembers... because he already did it… he stopped because he was on the darkness... and here he didn't stop because he was on the light". At the end of the session, upon the construction of the robot's polite behavior, the older child said "the computer must be also smart". Even though this indicates a ToM-based ToAM model, we assume that his real model is fully technological and that he only used human-related language to describe his beliefs in relation to the artificial mind.

We found convincing evidence for a fully ToAM model in 7-year-old children. For example, a child explained that "He will move because of the computer... because you pressed this [the download button] and then this [the run button] and the batteries for sure "catched" it... so you moved it and it moved that way and order him [the robot] to perform the command", or "He knew how to do it because you told him via the computer". Another child said that "The computer reminds him what to do.. The computer is connected to the tower and the tower reminds him.. He knew what to do because of the computer... the computer told him". Both children used more complex language while constructing the robot's behavior as rules: "he has to distinguish.. to differentiate... [between the black and the white]"; and both understood that programming is fundamental for the robot's behavior: "If he saw 'black' he turned.. because I gave him an order", "He knows how to reach the basketball yard.. because I tell him what to do" and "because we told him... the computer told him: you have to do that on the white".

Children's conceptions of the constructs underlying the robot's behavior: script and rules

Table 2 summarizes the main findings of children's conceptions in relation to the

robot's behavior.

---------------------------------------------------------------------------------

Insert Table 2 about here

---------------------------------------------------------------------------------


Age 5

Both 5-year-old children understood how the robot's behavior follows a script in the low-complexity task, but initially they had difficulties in assessing whether the script (being temporal and fitting a specific spatial context) is effective in new situations. For example, one child said correctly "you should put him here because he is in the beginning, because the tower is here". However, when she was asked whether he would reach the basketball yard if we put him in a different initial location, she referred to the environment (the surface) instead of to the fact that the robot was programmed to follow a specific script: "He wouldn't make it to the basketball yard because this is the playground" and "because it is too far". The older child succeeded in the task and understood that the robot's performance depends on its initial location. With PI, working in his ZPD, he "learned by doing": his model of the robot's behavior as script changed after experimenting.

We found variance in the 5-year-olds' thinking in relation to the robot's behavior as rules. The younger 5-year-old child found it difficult to accept that the robot can behave with half a rule (one condition >> one action). She insisted on thinking in complete rules (two conditions >> two actions) even if the only condition in the task referred to 'white': "Every time he will see white he will walk forward and when he sees black – he will go backwards". The older child's conceptions in relation to the robot's behavior as rules were initially similar and became more complex with time: "in the black he doesn't go. He stops. ... Because he was told not to go on the black". With PI support he was able to understand the robot's behavioral 'half rule': "he didn't move forward on the white.. because we didn't tell him [what to do] on the white". When he was asked how the robot would behave when its initial location is changed, he understood that the robot's behavior is location-independent: "He would do the same from everywhere.. because this is what I told him". Nevertheless, it was still hard for him to explain the reason for the robot's behavior according to a rule: "He would do the same... because I presses [a button]".
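The "half rule" vs. "complete rule" distinction the children grappled with can be made concrete with a minimal simulation sketch. This is our own illustration, not the software used in the study: the function names and the list-of-squares surface model are hypothetical.

```python
def run_half_rule(surface):
    """Half a rule (one condition >> one action): 'on white, go forward'.
    Nothing was 'told' to the robot about black, so it halts as soon as
    it senses a black square. Returns the robot's final position."""
    pos = 0
    while pos < len(surface) - 1 and surface[pos] == "white":
        pos += 1
    return pos

def run_full_rule(surface, steps):
    """A complete rule (two conditions >> two actions): forward on white,
    backward on black. The behavior depends only on what the robot
    senses at each step, not on a memorized route."""
    pos = 0
    for _ in range(steps):
        if surface[pos] == "white" and pos < len(surface) - 1:
            pos += 1
        elif surface[pos] == "black" and pos > 0:
            pos -= 1
    return pos

# The half-rule robot walks over white squares and stops at the first black:
print(run_half_rule(["white", "white", "black", "white"]))  # 2
```

In the half-rule case the robot halts at the black square simply because nothing was programmed for it, which is exactly what the older 5-year-old eventually articulated ("we didn't tell him what to do").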

Age 7

At age 7, children were more precise when describing and explaining the robot's behavior as a script. They knew it is route-dependent and understood that it depends on their choice of route: "He wouldn't make it from a different location because it is not the way I wanted".. "it can't be that he will succeed [from a different location] because he was standing here... it's a different route" and "He is not in the location he is supposed to be. You didn't put him in the right location so he can't arrive at the basketball yard. If you want him to arrive there you should put him here, otherwise he will arrive at a different location". They also knew that changing the program will affect the robot's behavior: "if we delete it all and teach him a different route – he will succeed".

We observed a progression over time in both 7-year-old children's thinking in relation to the robot's behavior as rules. At the beginning of the half-rule task, the younger child understood the half-rule concept but had difficulties in explaining it: "He stopped because his eyes were in the dark.. and here [the white part of the board] because he is on the light". With PI she was able to fully understand the half-rule concept: "He will stop on the white because we didn't tell him what to do on the white". In relation to the robot's behavior in a full rule, she was able to explain it and saw the robot's behavior as dependent on its program: "If he sees black.. he turns.. because I gave him an order". When she was asked whether its movement depends on its initial location, she correctly claimed that it does not. However, when she saw the previous (basketball yard) surface she changed her opinion: "No, he wouldn't make it. It's a different route". But after operating the robot she changed her model again. Once again, we see how experiencing and constructing affect the child's model. Moreover, not only did she change the model, she was also able to transfer this knowledge to a different situation: "If it was here [pointing at a new location] I assume he would succeed too". The same pattern was observed with the older child. In addition, he was also able to distinguish between the robot's behavior as script and as rules: "If we put him in a different place.. the same thing will happen. It doesn't matter where he will be, he is not going to bump the black.. because we told him. Previously [the script task] we told him which route to do and he had to do it to reach [the basketball yard]. And now.. we didn't tell him in which route to go so he had to do [go] on the white... and thus, it doesn't matter".

In sum, as indicated previously, we observed both script-based and rule-based thinking in relation to the robot's behavior. The children's thinking changed over time and, in most cases, upon operating the robot. We observed differences between 5- and 7-year-old children's thinking, in particular when explaining the robot's behavior.

Research question 2: What are the effects of interacting with behaving artifacts on children's

development of ToM and ToAM?

Children were administered 10 tasks to assess their ToM and ToAM in the pre- and post-test sessions. Table 3 summarizes the main findings for these tasks by age.

---------------------------------------------------------------------------------

Insert Table 3 about here

---------------------------------------------------------------------------------

ToM - 1st order tasks

No differences were found between 5- and 7-year-old children's answers in the classic "diverse desires" task. All children answered the target question and explained it correctly. In the post-test, not only did the younger child answer and explain the situation correctly, she also indicated that the puppet had changed her snack preference, from "cookie" in the pre-test to "carrot" in the post-test, evidently keeping record over time of the puppet's "state of mind".


In the expanded "diverse desires" task, referring to decision making and adaptive behavior, a difference was observed in one 5-year-old child's answers. In the pre-test, the child did not refer to adaptive decisions as a function of environmental conditions, and thus did not pass the task. In the post-test the child referred to environmental conditions, indicating that "he will choose ice-cream.. because.. at night he eats ice-cream" and adding that "he can see that it is dark outside because he has eyes" (with PI after the child's first answer, "ice-cream is tastier"). No differences were found in the 7-year-old children's answers for this task.

In the "diverse beliefs" task we obtained uneven results with the 5-year-olds. In the pre-test, both children answered and explained the task correctly. Even though one child's answer was supported (PI), it evidenced "thinking in rules" in relation to the human mind even at a young age. However, in the post-test, one child answered the target question while ignoring the puppet's decision making, focusing on irrelevant aspects such as the dog's colors. It might be that her young age (4.10) and the fact that she was less focused in the post-test session affected her answers.

Interestingly, the 7-year-old children provided the correct answer in the pre-test but their explanations did not refer to environmental conditions. For example, one child referred to the preferable hiding place because of its safety: "well.. a tree is more safe", or to her preference: "I prefer the tree because it is not so thick". In the post-test both children provided the correct answers and explanations.

ToM – 2nd order tasks

The classic "ice-cream" task

In general, there was only a minor difference between both 5-year-old children's pre- and post-tests. They both answered the verification questions correctly in both sessions. Regarding the 2nd order questions, in the pre-test one child answered them correctly with PI support; however, her explanation was incomplete and referred to Mary instead of John ("John doesn't know Mary went to the pool because she went home to get money"). In the post-test the child answered the questions on her own. In addition, and most importantly, she explained her answer correctly while referring to the question's object: "… because he (John) did not see and did not hear… and thus he doesn't know that Mary knows where the ice-cream van is". The 7-year-old children's answers in both pre- and post-tests were more precise.

The "ice-cream" task – expanded to adaptivity and decision making

While there was no difference in the 7-year-old children's answers to this task between the two sessions, the 5-year-old children's answers were diverse.

One 5-year-old child's thinking was rule-based in both sessions. Even though he provided the wrong answer ("the pool") to the 2nd order question ("what will John think Mary will do – go home via the park or the pool?"), his explanation justified this answer: "because he is in the same school and he sees both places from the school-window". The younger child's thinking shifted from thinking in scripts in the first session, while ignoring aspects of adaptivity of the human mind in decision making, to thinking in rules (with PI support).

Both 7-year-old children's thinking was rule-based. They both referred to aspects of adaptivity of the human mind when making decisions and correctly answered the 2nd order questions concerning these aspects.

ToAM - 1st order tasks

Age 5

In the "diverse desires" task a difference in the 5-year-old children's answers was found. In the pre-test, both children gave human reasons for the robot's choice. For example, one child said that "the robot will choose carrot; otherwise he will have to go to the dentist". When the experimenter clarified that the robot chooses the snack for another person, they still 'ignored' the robot's mind by saying "carrot is healthier and tastier". Both 5-year-old children's answers in the post-test were similar to the ones they provided in the pre-test, and the robot's adaptive behavior upon programming was not taken into consideration. Nevertheless, in the post-test but not in the pre-test, they used technological language and referred to many aspects that relate to the robot. In other words, they mostly focused on the robot, trying to learn what it can do and how it functions. They asked many questions in relation to the robot and barely referred to the task. For example, one child asked "Robi's buttons are connected to the computer?" and "Where do you turn it off?", and indicated that "he can see that the cookie is brown because he has eyes (sensors)" and that "he has two large wheels in front and one small wheel in the back". The other child ignored the task completely at first and referred to the robot by saying "we need to turn him on, because he can move on his own". When he was asked how the robot can move on its own the child said: "we need a computer, tower, computer wires, and the robot must be set up... he can be set up by the computer... with my help". The child indicated that the robot will be able to choose the cookie "because of the electricity" and will be able to distinguish the carrots from the cookie because the former are elliptic and the latter is round, and that he will do so using "his eyes" (sensors). The differences in the children's answers, terminology and language between the pre- and post-test indicate that they acquired device knowledge and were aware of its constructed mind, behavior and decision-making actions. Their model resembles what they had seen during the construction session. Even though their model is partial and sometimes imprecise, it is technology-based. Evidently, extended sessions and further research are needed to determine the influence of constructing the robot's mind on the youngest children's ToAM model.

In the "diverse beliefs" pre-test, one child answered and explained the task correctly: "he will look for the dog near the van because for him it is night.. he can see that it is night because he has eyes… he has a tool". In the post-test the child gave the same answer but added that "he can see the darkness with its eyes (sensors)… he knows what to do in the darkness because John told him". The other child's answer in the pre-test ignored the robot's decision making according to the environment's conditions: "the robot will look for the dog near the ice-cream van… because he wants to eat ice-cream". Even though her answer did not change in the post-test, she referred to structural elements of the robot ("he will see the ice-cream.. because he has eyes [sensors]"). Both children's language was more technological following the construction session, referring explicitly to artificial behavior and to the technological components related to this behavior.

The "Birthday" task refers to aspects of the artificial mind that relate to adaptivity and decision making. In the pre-test, both children answered correctly. One child answered following the experimenter's PI, and both explanations referred to structural elements of the robot: "he will know that it is blue… because he walks with flash-light" and "he knows it is red because he has red (on the Lego)... and he knows to bring it in the morning because the children told him so". The older child's answer indicated his "thinking in rules": "the robot will bring red balloons because birthdays are usually in the morning... but if the party will occur at night he will bring blue balloons". In the post-test, we see a difference between the two 5-year-old children. The younger child did not provide the right answer but included aspects of the robot's structure and behavior ("he can see it's blue because he has eyes" [sensors]). As previously mentioned, she was less focused in the post-test session and that might have influenced her answers. The older child answered and explained the task correctly while referring to aspects of the robot's structure and behavior. We observed that the children mostly focused on the robot's sensors as the most important component for the decision-making process. It might be that, since there was only one construction session, they were not able to construct knowledge in relation to the entire behavior-generation process.


Age 7

In the "diverse desires" task, in both the pre- and post-tests, children's answers did not refer explicitly to the robot's behavior in relation to programming. Nevertheless, in the post-test, their answers included aspects that relate to the robot, such as "the robot can see" and "the robot will ask the child".

In the "diverse beliefs" task, inconsistency was found in the 7-year-old children's answers. In the pre-test, the older child said the robot will look for the dog "near the van… because maybe the dog fell asleep and the robot will hear him snoring". This answer does not indicate that the robot's behavior is rule-based, but it includes reference to the environmental conditions. The younger child changed her answers while referring to her preferences. She then said that both John and the robot will look for the dog in the morning (reference to environmental conditions). When she was asked how the robot would know that it is morning she said "he has special tools.. he is not human and thus he doesn't have real things.. so he has special tools that can look at the sun-light and see if it's morning". A change in the post-test answers was observed for the younger child: "I think he will look for the dog near the tree because in my opinion it is morning now and in the morning the dog is behind the tree". In addition, the child's answer included reference to the robot's elements and functions: "he has special tools for the day and special tools for the night", as well as reference to the robot's artificial mind: "he knows that in the morning the dog is near the tree". Nevertheless, even though she remembered how the robot moved during the construction session (and mentioned the arrows, batteries and the tower connected to the computer), she still thought the robot knows where to look for the dog because "maybe it (the dog) told him…" and "maybe he (the robot) has various tools that operate various animal languages". Following PI, she said "I think I can teach him what to do in the computer… I would tell him where to look for the dog in the morning and in the evening" and "for sure he (the robot) can understand what the computer tells him... for sure the computer can tell him". Similar to the 5-year-old children, we see that this child acquired artifact knowledge supporting the construction of her ToAM.

ToAM - 2nd order tasks

Age 5

In general, in the "ice-cream" 2nd order ToAM task, the children's answers in the pre-test were similar to those in the equivalent ToM test. In the post-test, even though both children made no reference to programming the robot's behavior, their answers included aspects of the robot's structure: "The electricity is Roni's hair" and "We can turn her eyes on". We can see that, even though their conceptions are not precise, they refer to the robot's functioning.

In the expansion of the task referring to adaptivity and decision making, we observed differences in both children's thinking. Whereas the younger child thought in scripts in both sessions, the older child's thinking was in rules. In addition, even though both children did not refer to aspects of the robot's behavior as connected to programming, we still see a change in both children's use of technological language in the post-test. For example, from claiming that "the robot can't look for the van because he can't see" in the pre-test, the younger child's language shifted to "she (the female robot) can follow him (the male robot) because she has wheels and she can see". The older child already claimed in the pre-test that "She will look for him in the pool because she can see", but his language was more technological in the post-test: "She can follow him because she has a tool with which she can see his foot-prints".

Age 7

In general, in the "ice-cream" 2nd order ToAM task, both 7-year-old children performed well and almost no difference was found between the pre- and post-tests. However, for one child, his 2nd order thinking in the pre-test was not complete ("Robi the robot doesn't know that Roni the robot went to the pool but he does know that the van is in the pool... Thus, Robi will look for Roni in the pool and since he will find her he will not have to go to the park"). Neither child made reference to the ability to program the robot in the post-test session.

In the expansion of the task referring to adaptivity and decision making, both children's thinking was rule-based in both sessions. In the post-test, upon constructing the robot's behavior, both children made reference to elements of the interface ("the arrows lead him there"), to the robot's mind ("he is a developed robot"), its components ("he has senses to each side") and its behavior ("he can follow another robot because of the wheels' traces or the flash-light"; "he knows where the van is because he can see [sensors]").

Concluding remarks

The purpose of the current pilot study was to assess children's conceptions in relation to robots' artificial mind and behaviors, and to examine the effects of constructing the robots' behavior on children's development of ToM and ToAM. We focused on children between the ages of 5 and 7, since significant developments of the cognitive, affective and social systems have been found to emerge between these ages.

Concerning children's conceptions, our observations unveiled three models, defined in terms of their detachment from the basic model of the human mind: (1) ToM-like model, completely based on the children's model of the human mind; (2) ToM-based ToAM (TbTa), a technological model referring to the artificial mind but using elements borrowed from the model of the human mind; and (3) fully technological ToAM model. We observed that over the long construction session all children's models of the artificial mind and behavior became, at least to some extent, artificial, namely close to a fully ToAM model. Nevertheless, while the 7-year-olds' model referred to technological aspects during most of the session, the 5-year-old children moved toward a full understanding of the artificial mind gradually over the construction session. Still, all children lacked knowledge in relation to the robot's capability to 'act' and behave autonomously, thanks to the fact that the computer brick is part of its structure. They all believed that the computer is needed at all times to control the robot's behavior, even when they recognized that "he learned from the memory" and "we have to feed it [the program] to him... to his memory [in the brick]". The fact that in this pilot study the children worked for one session, focusing mainly on the robot's behavior and on its representation with scripts and rules and not on features of the brick or other hardware elements, may help explain these gaps in the children's conceptions.

Concerning the effects of constructing the robot's mind and behavior on children's development of ToM and ToAM, out of the many focal issues presented in the results section we would like to focus in these concluding remarks on two interesting points. The first relates to the shift in thinking and in the nature of the explanations in the ToM tasks, particularly in reference to adaptivity and decision-making aspects. Aware of the methodological and sampling limitations of the pilot study, we nevertheless believe that we have obtained clear indications of the children's awareness of aspects related to adaptive behavior and decision-making processes in the ToM tasks as a result of their involvement in constructing the robot's behavior. Highly intriguing is the children's rapid adoption of technological features and language as part of their explanations in the post-test ToM tasks.

The second issue of interest relates to the children's ability to think about the behaviors described in the tasks in terms of their underlying constructs: scripts or rules. Scripts imply temporal and ad-hoc descriptions, tied to specific contexts, while rules are a-temporal and general constructs potentially applicable in a wide range of contexts. All children's explanations in the post-test tasks evidenced the use of these constructs. This followed the children's (even the youngest's) successful performance in the behavior-construction task demanding rule construction at the end of the construction session. These findings are in line with previous findings regarding rule-thinking and the use of technological language even by very young children (** 2007).
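The script/rule contrast can also be sketched in code. The sketch below is our own illustration with hypothetical names (it is not the study's software): a script's outcome is tied to the starting context, while a rule reacts to whatever the robot senses and is therefore location-independent.

```python
def follow_script(start, moves):
    """A script is a fixed, temporal sequence of steps: the end point
    depends entirely on where the robot starts."""
    pos = start
    for step in moves:
        pos += step
    return pos

def follow_rule(start, surface):
    """A rule ('go forward until you sense black') reacts to the
    environment, so the outcome is the same from any white square
    before the black one."""
    pos = start
    while pos < len(surface) - 1 and surface[pos] != "black":
        pos += 1
    return pos

strip = ["white"] * 4 + ["black"] + ["white"] * 3
# Script: a different start gives a different end point (location-dependent).
print(follow_script(0, [1, 1, 1]), follow_script(2, [1, 1, 1]))  # 3 5
# Rule: both starts halt at the black square (location-independent).
print(follow_rule(0, strip), follow_rule(2, strip))  # 4 4
```

This mirrors the children's post-test reasoning: the script task fails from a new location ("it's a different route"), whereas the rule task succeeds from anywhere ("it doesn't matter where he will be").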

As mentioned earlier, we are aware of the limitations of this preliminary study. It is obvious that only a large-scale study (both in terms of population and scope of construction tasks) will supply robust data on the trends unveiled so far. However, we believe that this study has already supplied important insights at two main levels: (a) the identification of intriguing aspects of children's thinking about behaving and adaptive artifacts, and (b) the nature of suitable tools, tasks and "objects to think with" allowing children to enact, construct and reflect on their understandings of the world of behaving artifacts.

Children's conceptions of behaving artifacts 32

References

Ackermann, E., (1991). The agency model of transactions: Towards an understanding of

children' theory of control. In J. Montangero & A. Tryphon (Eds.), Psychologie

genetique et sciences cognitives. Geneve: Fondation Archives Jean Piaget.

Akiguchi, S., & Maeda, Y. (2006). Autonomous pet-type robot with emotion behavior-

learning system based on neuromodulators. Journal of Systems and Control

Engineering, 220(I), 717-724.

Anderson, V., Northam, E., Hendy, J., & Wrennall, J. (2001). Developmental

Neuropsychology: A clinical approach. Philadelphia, PA: Psychology Press.

Baron-Cohen, S. (2001). Theory of mind in normal development and autism. Prisme, 34,

174-183.

Bernstein, D.L. (2006). Searching for signs of intelligent life: An investigation of young

children's beliefs about intelligence and animacy. Unpublished thesis dissertation,

University of Pittsburg.

Bloom, P. (1996). Intention, history and artifact concepts. Cognition, 60, 1-29.

Breazeal, C. (2000). Sociable machines: Expressive social exchange between humans and

robots. PhD thesis, Massachusetts Institute of Technology.

Bumby, K. E., & Dautenhahn, K. (1999). Investigating children's attitudes towards robots: A

case study. In Proceedings of the Third International Cognitive Technology

Conference (CT '99), pp. 391-410. San Francisco: August 1999.

Burton, S., & Mitchell, P. (2003). Judging who knows best about yourself: Developmental

change in citing the self across middle childhood. Child Development, 74(2), 426-443.

Carpendale, J. I. & Chandler. M. J.(1996). On the Distinction between False Belief

Understanding and Subscribing to an Interpretive Theory of Mind. Child

Development, 67(4), 1686-1706.

Children's conceptions of behaving artifacts 33

Case, R. (1992). The role of frontal lobes in the regulation of cognitive development. Brain and Cognition, 20, 51-73.

Chandler, M., & Lalonde, C. (1996). Shifting to an interpretive theory of mind: 5- to 7-year-olds' changing conceptions of mental life. In A. Sameroff & M. Haith (Eds.), The five to seven year shift (pp. 111-139). Chicago, IL: The University of Chicago Press.

Cycowicz, Y. M. (2000). Memory development and event-related brain potentials in children. Biological Psychology, 54, 145-174.

Davis, D. (2004). Children's beliefs about what it means to have a mind. Unpublished doctoral dissertation, University of Texas at Austin.

Diesendruck, G., Hammer, R., & Catz, O. (2003). Mapping the similarity space of children and adults' artifact categories. Cognitive Development, 18, 217-231.

DiYanni, C., & Kelemen, D. (2005). Time to get a new mountain? The role of function in children's conceptions of natural kinds. Cognition, 97, 327-335.

Druin, A. (2000). Robots for kids: Exploring new technologies for learning experiences. San Francisco: Morgan Kaufmann / Academic Press.

Edsinger, A. (2001). A gestural language for a humanoid robot. Master's thesis, MIT Department of Electrical Engineering and Computer Science.

Edsinger, A., O'Reilly, U.-M., & Breazeal, C. (2000). Personality through faces for humanoid robots. In IEEE International Workshop on Robot and Human Communication (ROMAN-2000).

Ehri, L. C. (1999). Phases of development in learning to read words. In J. Oakhill & R. Beard (Eds.), Reading development and the teaching of reading: A psychological perspective (pp. 79-108). Oxford, UK: Blackwell Publishers.

Ehri, L. C., & McCormick, S. (1998). Phases of word learning: Implications for instruction with delayed and disabled readers. Reading and Writing Quarterly: Overcoming Learning Difficulties, 14, 135-163.

Flavell, J. H. (2004). Theory-of-mind development: Retrospect and prospect. Merrill-Palmer Quarterly, 50(3), 274-290.

Flavell, J. H., Flavell, E. R., & Green, F. L. (1983). Development of the appearance-reality distinction. Cognitive Psychology, 15, 95-120.

Fuster, J. M. (2001). The prefrontal cortex - an update: Time is of the essence. Neuron, 30, 319-333.

Harter, S. (1996). Developmental changes in self-understanding across the 5 to 7 shift. In A. Sameroff & M. Haith (Eds.), The five to seven year shift (pp. 207-236). Chicago, IL: The University of Chicago Press.

Harvey, B. (1985). Computer Science Logo Style. Cambridge, MA: MIT Press.

Irie, R. (1995). Robust sound localization: An application of an auditory perception system for a humanoid robot. Master's thesis, MIT Department of Electrical Engineering and Computer Science.

Kahn, K. (1996). ToonTalk™ - an animated programming environment for children. Journal of Visual Languages and Computing, 7, 197-217.

Kanemura, H., Aihara, M., Aoki, S., Araki, T., & Nakazawa, S. (2003). Development of the prefrontal lobe in infants and children: A three-dimensional magnetic resonance volumetric study. Brain & Development, 25, 195-199.

Kitamura, T., Tahara, T., & Asami, K. (2000). How can a robot have consciousness? Advanced Robotics, 14(4), 263-275.

Kohlberg, L. (1981). The philosophy of moral development: Moral stages and the idea of justice: Essays on moral development 1. San Francisco: Harper & Row.

Kohlberg, L. (1984). Essays on moral development: Vol. II. The psychology of moral development: The nature and validity of moral stages. San Francisco, CA: Harper & Row.

* (2007) - Levy, S. T., & Mioduser, D. (2007). Episodes to scripts to rules: Concrete-abstractions in kindergarten children's construction of robotic control rules. Accepted for publication in the International Journal of Technology and Design Education.

Lewis, C., & Mitchell, P. (Eds.). (1994). Children's early understanding of mind: Origins and development. Hove: LEA.

Matan, A., & Carey, S. (2001). Developmental changes within the core of artifact concepts. Cognition, 78, 1-26.

Marjanović, M. (1995). Learning functional maps between sensorimotor systems on a humanoid robot. Master's thesis, MIT Department of Electrical Engineering and Computer Science.

Marjanović, M. (2001). Teach a robot to fish: A thesis proposal. Technical report, Massachusetts Institute of Technology. Retrieved from: http://www.ai.mit.edu/people/maddog.

** (2007) - Mioduser, D., Levy, S. T., & Talis, V. (2007). Does it "want" or "was it programmed to…"? Kindergarten children's explanatory frameworks of autonomous robots' adaptive functioning. Accepted for publication in the International Journal of Technology and Design Education.

Morgado, L., Cruz, M. G. B., & Kahn, K. (2001). Working with ToonTalk with 4- and 5-year-olds. Playground International Seminar, Porto, Portugal, April 3rd 2001.

Morrison, F. J., McMahon-Griffith, E., & Frazier, J. A. (1996). Schooling and the 5 to 7 shift: A natural experiment. In A. Sameroff & M. Haith (Eds.), The five to seven year shift (pp. 161-186). Chicago, IL: The University of Chicago Press.

Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-358.

Nelson, K. (1996). Memory development from 4 to 7 years. In A. Sameroff & M. Haith (Eds.), The five to seven year shift (pp. 141-160). Chicago, IL: The University of Chicago Press.

Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. NY: Harvester Press.

Perner, J., & Wimmer, H. (1985). "John thinks that Mary thinks that…": Attribution of second-order beliefs by 5- to 10-year-old children. Journal of Experimental Child Psychology, 39, 437-471.

Pillow, B. (1991). Children's understanding of biased social cognition. Developmental Psychology, 27, 539-551.

Pillow, B. (1993, March). Children's understanding of biased interpretation. Poster session presented at the biennial meeting of the Society for Research in Child Development, New Orleans, LA.

Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a "theory of mind"? Behavioral and Brain Sciences, 1(4), 515-526.

Resnick, M. (2006). Computer as paintbrush: Technology, play, and the creative society. In D. Singer, R. Golinkoff, & K. Hirsh-Pasek (Eds.), Play = Learning: How play motivates and enhances children's cognitive and social-emotional growth. Oxford: Oxford University Press.

Resnick, M., Berg, R., & Eisenberg, M. (2000). Beyond black boxes: Bringing transparency and aesthetics back to scientific investigation. Journal of the Learning Sciences, 9(1), 7-30.

Resnick, M., & Martin, F. (1991). Children and artificial life. In I. Harel & S. Papert (Eds.), Constructionism. Norwood, NJ: Ablex Publishing Corporation.

Resnick, M., Martin, F., Sargent, R., & Silverman, B. (1996). Programmable bricks: Toys to think with. IBM Systems Journal, 35(3-4), 443-452.

Ross, B. H., Gelman, S. A., & Rosengren, K. S. (2005). Children's category-based inferences affect classification. British Journal of Developmental Psychology, 23, 1-24.

Rueda, M. R., Fan, J., McCandliss, B. D., Halparin, J. D., Gruber, D. B., Lercari, L. P., & Posner, M. I. (2004). Development of attentional networks in childhood. Neuropsychologia, 42, 1029-1040.

Sameroff, A., & Haith, M. (1996). The five to seven year shift. Chicago, IL: The University of Chicago Press.

Scaife, M., & van Duuren, M. A. (1995). Do computers have brains? What children believe about intelligent artefacts. British Journal of Developmental Psychology, 13, 321-432.

Scassellati, B. M. (2001). Foundations for a theory of mind for a humanoid robot. PhD thesis, Massachusetts Institute of Technology.

Siegler, R. S. (1996). Unidimensional thinking, multidimensional thinking and characteristic tendencies of thought. In A. Sameroff & M. Haith (Eds.), The five to seven year shift (pp. 63-84). Chicago, IL: The University of Chicago Press.

Stauder, J. E. A., Molenaar, P. C. M., & van der Molen, M. W. (1993). Scalp topography of event-related brain potentials and cognitive transitions during childhood. Child Development, 64(3), 769-788.

Stauder, J. E. A., Molenaar, P. C. M., & van der Molen, M. W. (1999). Brain activity and cognitive transitions during childhood: A longitudinal event-related brain potential study. Child Neuropsychology, 5(1), 41-59.

Takeno, J. (2006). The self-aware robot - a response to reactions to Discovery News. Japan: HRI Press.

*** (1998) - Talis, V., Levy, S. T., & Mioduser, D. (1998). RoboGAN: Interface for programming a robot with rules for young children. Tel-Aviv University.

Turkle, S. (1984). The second self: Computers and the human spirit. NY: Simon and Schuster.

Van Duuren, M., & Scaife, M. (1996). "A robot's brain controls itself, because a brain hasn't got a brain, so it just controls itself" - Young children's attributions of brain related behaviour to intelligent artefacts. European Journal of Psychology of Education, 11, 365-376.

Vygotsky, L. S. (1986). Thought and language. Cambridge, MA: The MIT Press.

Wehren, A., & De Lisi, R. (1983). The development of gender understanding: Judgments and explanations. Child Development, 54, 1568-1578.

Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72(3), 655-684.

Wellman, H. M., & Liu, D. (2004). Scaling of theory-of-mind tasks. Child Development, 75(2), 523-541.

Williamson, M. M. (1999). Robot arm control exploiting natural dynamics. PhD thesis, Massachusetts Institute of Technology.

Wyeth, P., & Purchase, H. C. (2000). Programming without a computer: A new interface for children under eight. First Australasian User Interface Conference (AUIC 2000), 31 January - 3 February, 2000.

Zhao, S. (2006). Humanoid social robots as a medium of communication. New Media and Society, 8(3), 401-419.

Figure 1
The robotics environment
(a) The behavior-construction interface (b) The Lego-made robot

Table 1
Children's model of the artificial mind

ToM-based model
Age 5: No evidence was found for a ToM-based model in 5-year-old children's conceptions of the artificial mind.
Age 7: No evidence was found for a ToM-based model in 7-year-old children's conceptions of the artificial mind.

ToM-based ToAM model
Age 5: We found evidence for a ToM-based ToAM model in 5-year-old children's conceptions of the artificial mind, especially in the first section of the session and as a function of the task's difficulty. Their model at this stage is not fully artificial: it is a model of a robot, but they use metaphors of the human mind.
Age 7: We found little evidence for a ToM-based ToAM model in 7-year-old children's conceptions of the artificial mind.

Fully ToAM model
Age 5: We found evidence for a fully ToAM model in 5-year-old children's conceptions of the artificial mind, especially in the second and third sections of the session and as a function of the task's difficulty.
Age 7: The 7-year-old children's model of the artificial mind was mostly fully artificial.

Table 2
Children's conceptions of the underlying constructs of the robot's behavior - scripts or rules

Scripts
Age 5: We saw evidence of script-based thinking about the robot's behavior in both 5-year-old children, especially at the beginning of the session, when the robot's task was script-based. Nevertheless, it was hard for these children to explain the robot's behavior, and in some cases their answers were supported by an adult.
Age 7: We saw evidence of script-based thinking about the robot's behavior in both 7-year-old children, especially at the beginning of the session, when the robot's task was script-based. In addition, their explanations were technological and related to the task and the environment.

Rules
Age 5: We saw evidence of rule-based thinking about the robot's behavior in both 5-year-old children, especially at the end of the session, when the robot's task was rule-based. As in the script task, it was hard for these children to explain the robot's behavior in terms of rules, and in some cases their answers were supported by an adult.
Age 7: We saw evidence of rule-based thinking about the robot's behavior in both 7-year-old children, especially at the end of the session, when the robot's task was rule-based. Their explanations were technological and related to the task and the environment.

Table 3
ToM and ToAM tasks - by stage (pre- and post-tests) and age

ToM 1st order: Diverse desires
Age 5, pre: Both children answered the target question and explained it correctly.
Age 5, post: Both children answered the target question and explained it correctly.
Age 7, pre: Both children answered the target question and explained it correctly.
Age 7, post: Both children answered the target question and explained it correctly.

ToAM 1st order: Diverse desires
Age 5, pre: Both children gave human reasons for the robot's choice and did not refer to the robot's mind.
Age 5, post: Answers were similar to the pre-test; however, the children used technological language and mainly referred to the robot's elements and functions.
Age 7, pre: The children's answers referred to their own preferences or to the puppet's preferences from the first task.
Age 7, post: The children's answers did not refer to the robot's behavior upon programming.

ToM 1st order: Diverse desires & decision making
Age 5, pre: One child answered and explained the task correctly. The other child did not refer to aspects of the mind's adaptivity in relation to the environment's conditions and thus did not pass the task.
Age 5, post: Both children answered and explained the task correctly. One child's answer was supported by the PI.
Age 7, pre: Both children answered the target question and explained it correctly, considering the environment's conditions in making decisions. Nevertheless, one child's explanation was supported by the PI.
Age 7, post: Both children answered the target question and explained it correctly, considering the environment's conditions in making decisions.

ToM 1st order: Diverse beliefs
Age 5, pre: Both children answered and explained the task correctly. One child's answer was supported by the PI.
Age 5, post: One child answered and explained the task correctly. The other child answered the target question but gave a wrong explanation.
Age 7, pre: Both children answered correctly; however, their explanations were lacking.
Age 7, post: Both children answered and explained the task correctly.

ToAM 1st order: Diverse beliefs
Age 5, pre: One child answered and explained the task correctly. The other child's answer ignored the robot's decision making according to the environment's conditions.
Age 5, post: The first child's answer was similar to the pre-test but included aspects of the robot's elements and functions. The other child did not refer to the environment's conditions, but she did refer to the technological elements.
Age 7, pre: Inconsistency was found in the children's answers to this task.
Age 7, post: One child answered the task correctly and, in addition, her language was more technological. No difference was found in the other child's answers.

ToAM 1st order: Decision making
Age 5, pre: Both children answered and explained the task correctly. One child's answer was supported by the PI.
Age 5, post: One child answered and explained the task correctly. The other child answered the target question but gave a wrong explanation.
Age 7, pre: Both children answered and explained the task correctly.
Age 7, post: Both children answered and explained the task correctly. Their explanations referred to aspects of the robot's structure, mind and behavior.

ToM 2nd order: The ice-cream story
Age 5, pre: Generally, both children passed the task, one with support.
Age 5, post: Generally, both children passed the task with no support.
Age 7, pre: Generally, both children passed the task, one with support.
Age 7, post: Generally, both children passed the task with no support.

ToM 2nd order: Decision making
Age 5, pre: One child's thinking was rule-based while the other's was script-based.
Age 5, post: Both children's thinking was rule-based.
Age 7, pre: Both children passed the task and their thinking was rule-based.
Age 7, post: Both children passed the task and their thinking was rule-based.

ToAM 2nd order: The ice-cream story
Age 5, pre: One child answered the task correctly while the other child's ToAM thinking was lacking.
Age 5, post: Both children answered the task correctly but gave answers similar to those in the ToM task, without referring to programming the robot.
Age 7, pre: One child answered the task correctly while the other child's ToAM thinking was lacking.
Age 7, post: Both children answered the task correctly but gave answers similar to those in the ToM task, without referring to programming the robot.

ToAM 2nd order: Decision making
Age 5, pre: One child's thinking was rule-based while the other's was script-based.
Age 5, post: One child's thinking was rule-based while the other's was script-based. However, their answers were more complex and referred to aspects of the robot's structure, behavior and mind.
Age 7, pre: Both children's thinking was rule-based.
Age 7, post: Both children's thinking was rule-based. Their answers were more complex than in the pre-test; however, they did not refer to the robot's programming.

APPENDIX I: The Ice-Cream story

The Ice-Cream story (Perner & Wimmer, 1985)

Introduction: This is a story about John and Mary who live in this village. This morning John and Mary are together in the park. In the park there is also an ice-cream man in his van.

Episode 1: Mary would like to buy an ice-cream but she has left her money at home. So she is very sad. 'Don't be sad,' says the ice-cream man, 'you can go home and get some money and buy some ice-cream later, I'll be here in the park all afternoon.' 'Oh, good,' says Mary, 'I'll be back in the afternoon to buy some ice-cream. I'll make sure I won't forget my money then.'

Episode 2: So Mary goes home. . . . She lives in this house. She goes inside the house. Now John is on his own in the park. To his surprise, he sees the ice-cream man leaving the park in his van. 'Where are you going?' asks John. The ice-cream man says, 'I'm going to drive my van to the church. There is no one in the park to buy ice-cream; so perhaps I can sell some outside the church.'

Episode 3: The ice-cream man drives over to the church. On his way he passes Mary's house. Mary is looking out of the window and spots the van. 'Where are you going?' she asks. 'I'm going to the church. I'll be able to sell more ice-cream there' answers the man. 'It's a good thing I saw you,' says Mary. Now John doesn't know that Mary talked to the ice-cream man. He doesn't know that!

Episode 4: Now John has to go home. After lunch he is doing his homework. He can't do one of the tasks. So he goes over to Mary's house to ask for help. Mary's mother answers the door. 'Is Mary in?' asks John. 'Oh,' says Mary's mother, 'She just left. She said she was going to get an ice-cream.'

Second-order false-belief question: So John runs to look for Mary. 'Where does he think she has gone?' Park.

Justification question: 'Why does he think she has gone to the _______?' He doesn't know that Mary knows the ice-cream man has moved to the church.

Probe question 1: 'Does Mary know that the ice-cream van is at the church?' Yes.

Probe question 2: 'Does John know that the ice-cream man has talked to Mary?' No.

Probe question 3: 'Where did Mary go for her ice-cream?' Church.

The Ice-Cream story - Robi the robot version

Robi the robot helps the ice-cream man carry ice-cream. Robi does this when there are people at the park. Every day Robi follows the same routine: he brings the ice-cream from the ice-cream bin near the apple tree in the park to the playground, where the ice-cream van is usually located. Roni the robot knows that Robi helps the ice-cream man carry ice-cream. One day he is taking the ice-cream from the ice-cream bin near the apple tree in the park to its destination, but there are no people around. The child is asked: "What would Robi do? Will Robi continue his routine and leave the ice-cream near the playground? Does Robi know where the ice-cream van is? How can he know that?" The purpose of this task is to assess whether the child thinks that Robi behaves according to: (1) a script, and thus will leave the ice-cream even if the van is not there and there are no people around, or (2) a rule that depends on the robot's motion-perceiving sensors, so that he can make decisions accordingly and bring the ice-cream to the location of the van. Next, the child is told that Roni wants to visit Robi at the park: "Where would Roni go to search for Robi? Why?". Then the child is told that Roni didn't find Robi at the park but she still wants to find him: "Where would Roni search for Robi? Why? Can she follow Robi? Why?". The purpose of these questions is to assess whether the child knows that Roni the robot was programmed to make decisions according to the environment's conditions, and that if she knows that Robi is where there are people, she will look for him accordingly.
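The script/rule distinction probed by the Robi story can be sketched in code. This is a minimal illustration with invented names (it is not the actual RoboGAN interface): a script executes a fixed routine regardless of the environment, while a rule consults a sensor before acting.

```python
class Robot:
    """Toy robot that records its actions and reads a simulated motion sensor."""
    def __init__(self, people_present):
        self.people_present = people_present
        self.log = []

    def go_to(self, place):
        self.log.append(f"go_to:{place}")

    def motion_sensor(self):
        # True when the (simulated) sensor perceives people nearby.
        return self.people_present

def script_behavior(robot):
    # Script: the same fixed sequence, independent of the environment.
    robot.go_to("ice-cream bin")
    robot.go_to("playground")

def rule_behavior(robot):
    # Rule: the sensor condition selects the action each time.
    robot.go_to("ice-cream bin")
    if robot.motion_sensor():
        robot.go_to("playground")    # people around: deliver as usual
    else:
        robot.go_to("van location")  # no people: follow the van instead
```

With no people around, the script robot still ends at the playground, while the rule robot redirects to the van's location, mirroring the two answers the task distinguishes.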

APPENDIX II: Examples of process tasks

Task 1 - Script (low level of complexity); selected in the robotic environment's right panel, second button from top.
'Going to the basketball yard': The robot is placed in a neighborhood in which there is one school, a playground and a basketball yard. The children are instructed to program the robot to go to the basketball yard. The environment includes a drawing of a neighborhood with the above settings. Children in the construction condition will be asked to teach the robot to sell ice-cream in 3 different locations.

Task 2 - Full rule (high level of complexity), 'the Island task'; selected in the robotic environment's right panel, second button from top.
'Be polite and don't step on the garden': The robot is placed upon a white surface with black areas. The children are instructed to program the robot to move freely on the white surface while being polite - without touching the black areas. The robot's structure includes a downward-facing light sensor that distinguishes light from dark.
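The 'Be polite' full rule amounts to a sense-act loop over the downward-facing light sensor. The sketch below is illustrative only (the actual environment is a graphical rule-construction interface, not Python, and the action names are assumptions): on each cycle the rule checks whether the sensor sees a dark area and picks the next action accordingly.

```python
def be_polite_step(on_dark):
    """One control cycle of the 'Be polite' rule.

    on_dark: True when the downward light sensor reads a black area.
    Returns the actions for this cycle: back off and turn away from
    the 'garden', otherwise keep moving freely.
    """
    if on_dark:
        return ["backward", "turn"]
    return ["forward"]

def run_rule(sensor_readings):
    """Apply the rule over a sequence of simulated sensor readings."""
    actions = []
    for on_dark in sensor_readings:
        actions.extend(be_polite_step(on_dark))
    return actions
```

Because the behavior is a rule over sensor input rather than a fixed route, it works from any starting position on the board, which is exactly what the interview question in Appendix III probes.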

APPENDIX III: Examples of questions in the structured interview

Situation 1: In the robotic environment's right panel, 'Instruction' mode (second button from top): the squares in the central section (the 'Desktop', in which children construct the robot's behavior) are empty.
Questions in the structured interview: (1) What can the robot do now? (2) What does the robot have to do in order for him to move?
Possible explanations of answers: If the child understands that the robot was not programmed, and thus does not have a 'mind', the child will answer that the robot will not be able to do anything.
Aspect of the artificial mind examined: Human-dependence.

Situation 2: In the 'Going to the basketball yard' task: the robot is placed in a different location.
Questions in the structured interview: (1) "Will Robi arrive at the basketball yard?"
Possible explanations of answers: We expect children who have acquired ToAM to answer that since the robot was not programmed to arrive at the basketball yard from a different location, he will not be able to arrive at the basketball yard.
Aspect of the artificial mind examined: Adaptivity.

Situation 3: In the 'Be polite and don't step on the garden' task.
Questions in the structured interview: (1) What can the robot do? (2) "Will he be polite if we put him here [the experimenter puts the robot in a different location on the board]?"
Possible explanations of answers: We expect children who have acquired ToAM to answer that the robot is capable of being polite because he was programmed to do so, and since it is a rule it does not matter where his primary location is; he will be polite from anywhere.
Aspect of the artificial mind examined: Decision making.