Can Statistical Learning Bootstrap the Integers?
Lance J. Rips (a), Jennifer Asmuth (b), Amber Bloomfield (c)
(a) Psychology Department, Northwestern University, 2029 Sheridan Road, Evanston, IL 60208 USA
(b) Psychology Department, Susquehanna University, 514 University Avenue, Selinsgrove, PA 17870 USA
(c) Center for Advanced Study of Language, University of Maryland, 7005 52nd Ave., College Park, MD 20742 USA
Corresponding author:
Lance Rips
Psychology Department
Northwestern University
2029 Sheridan Road
Evanston, IL 60208 USA
847.491.5947
Fax: 847.491.7859
Email: [email protected]
Reply to Piantadosi et al. / 2
Abstract
This paper examines Piantadosi, Tenenbaum, and Goodman’s (2012) model for how children
learn the relation between number words (“one” through “ten”) and cardinalities (sizes of sets with one
through ten elements). This model shows how statistical learning can induce this relation, reorganizing its
procedures as it does so in roughly the way children do. We question, however, Piantadosi et al.’s claim
that the model performs “Quinian bootstrapping,” in the sense of Carey (2009). Unlike bootstrapping, the
concept it learns is not discontinuous with the concepts it starts with. Instead, the model learns by
recombining its primitives into hypotheses and confirming them statistically. As such, it accords better
with earlier claims (Fodor, 1975, 1981) that learning does not increase expressive power. We also
question the relevance of the simulation for children’s learning. The model starts with a preselected set
of 15 primitives, and the procedure it learns differs from children’s method. Finally, the partial knowledge
of the positive integers that the model attains is consistent with an infinite number of nonstandard
meanings—for example, that the integers stop after ten or loop from ten back to one.
Keywords: Bootstrapping, Number knowledge, Number learning, Statistical learning
1. Introduction
According to the now standard theory of number development, children learn to recognize and
produce one object in response to requests, such as “Give me one cup” or “Point to the picture of one
elephant.” They then gradually learn to handle similar requests for two objects and eventually three. At
this point, they rapidly extend their success to larger collections—up to those named by the largest
numeral on their list of number terms, for example, “ten” (Wynn, 1992). (The largest numeral for which
they are successful can vary, but let’s say “ten” for concreteness.) The standard theory sees this last
achievement as the result of the children figuring out how to count objects: They learn a general rule for
how to pair the numerals on their list with the objects in a collection in order to compute the total. We will
refer to this procedure from here on as enumeration rather than counting in order to avoid confusion with
the other meaning of counting—reciting the number sequence “one,” “two,” “three,”….
No one doubts that children in Western cultures learn enumeration as a technique for determining
the cardinality (i.e., the total number of items or set size) of a collection. However, debates exist about
how children make this discovery (see, e.g., Carey, 2009; Leslie, Gelman, & Gallistel, 2008; Piantadosi,
Tenenbaum, & Goodman, 2012; and Spelke, 2000, 2011) and about its significance for their beliefs about
number (e.g., Margolis & Laurence, 2008; Rey, 2011; Rips, Asmuth, & Bloomfield, 2006, 2008; Rips,
Bloomfield, & Asmuth, 2008). Our aim in the present article is to examine a recent theory of how
children learn to enumerate by Piantadosi et al. and to compare it to an earlier proposal by Carey. In doing
so, we are motivated by Piantadosi et al.’s claim that their model solves difficulties we earlier identified
in Carey’s theory (Rips, Asmuth, & Bloomfield, 2006).
1.1. Carey’s bootstrap proposal
Carey (2004, 2009) has given a detailed account of learning to enumerate as an instance of a
process she calls Quinian bootstrapping. In brief, children start with a short memorized list of numerals in
order from “one” to “ten,” but where these numerals are otherwise uninterpreted. Over an approximately
one-year period, children successively attach the numeral “one” to a mental representation consisting of
an arbitrary one-member set (e.g., {o1}), the numeral “two” to a representation consisting of a two-member set (e.g., {o1, o2}), and the numeral “three” to a representation consisting of a three-member set
(e.g., {o1, o2, o3}). Children next realize that a parallel exists between the order of the numeral list (“one”
then “two” then “three”) and the set representations ordered by the addition of one element ({o1} then {o1,
o2} then {o1, o2, o3}). They infer that the meaning of the next element on the numeral list is the set size
given by adding one to the set size named by the preceding numeral. For example, the meaning of “five”
is the cardinality one greater than that named by “four.” This inference allows them to determine the
correct cardinal value for the remaining items on their count list (up to “ten”). We can refer to the
conclusion of this inference (the italicized proposition above) as the bootstrap conclusion.
According to Carey, Quinian bootstrapping provides a child with new primitive concepts of
number, concepts that the child’s old conceptual vocabulary can’t express, even in principle:
Quinian bootstrapping mechanisms underlie the learning of new primitives, and this
learning does not consist of constructing them from antecedently available concepts (they
are definitional/computational primitives, after all) using the machinery of compositional
semantics alone (Carey, 2009, p. 514, emphasis in the original).
No translation is possible between the old number concepts and the new ones:
To translate is to express a proposition stated in the language of [Conceptual System 2] in
the language one already has ([Conceptual System 1])… In cases of discontinuity in
which Quinian bootstrapping is required, this is impossible. Bootstrapping is not
translation; what is involved is language construction, not translation. That is, drawing on
resources from within CS1 and elsewhere, one constructs an incommensurable CS2 that
is not translatable into CS1 (Carey, 2011, p. 157).
In the last quotation, CS1 is the child’s conceptual system prior to an episode of Quinian bootstrapping,
and CS2 is the conceptual system that results from bootstrapping. As the first of these quotations makes
clear, Quinian bootstrapping is a kind of learning, usually an extended process that takes months or years
to complete. Children don’t acquire the new number concepts by mere maturation. Likewise, external
causal forces don’t merely stamp them in. A challenge in understanding Quinian bootstrapping is how to
reconcile the claim about learning with the claim about discontinuity between old and new concepts.
1.2. Some further issues about the bootstrap inference
Aside from the question about discontinuity, Quinian bootstrapping raises several issues about the
inductive inference that takes children from facts about the first three or four positive integers to a general
conclusion. The bootstrap inference has roughly this form:
“One” precedes “two” precedes “three” precedes … precedes “ten.”
“One” means one.
“Two” means two.
“Three” means three.
___________________________________________
The next numeral on the count list means one more than its predecessor.
A question about this inference, then, is whether a psychologically reasonable and well-defined process
could carry it out. This is a question about Quinian bootstrapping’s internal feasibility—a demand for a
proof of concept. Both Carey’s (2009) original proposal and Piantadosi et al.’s (2012) new one are meant
to reassure us on this point by providing detailed descriptions of how children make this inference. We
briefly described Carey’s proposal in Section 1.1, and we will describe Piantadosi et al.’s in Section 2.2.
Assuming that such a process is feasible, we can also ask whether the process is faithful to the
way children actually do it. Although Carey’s (2009) and Piantadosi et al.’s (2012) proposals could both
be feasible, they can’t both be true, barring big individual differences in the way children carry out the
inference. For one thing, Carey’s proposal includes an analogical inference as its cornerstone, but analogy
plays no part in Piantadosi et al.’s theory. Which proposal (if either) is correct is an obviously empirical
question.
But even if we grant Quinian bootstrapping’s feasibility and faithfulness, we still need to know
how much its conclusion contributes to children’s understanding of the positive integers. The bootstrap
conclusion is certainly true: The next numeral in the list of positive integers stands for a cardinality one
greater than its predecessor. But although this generalization correctly coordinates two infinite sequences,
it says nothing about the structure of the sequences. For example, a child could draw the bootstrap
conclusion while quite consistently believing that the integers stop at 78. He or she would know that
“one” refers to set size one, “two” to set size two, and … and “seventy-eight” to set size 78, but also
believe that this is the end of the story. This issue is one of scope: How much does Quinian bootstrapping
tell children about the integers?
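The scope worry can be made concrete with a toy check. The code below is our illustration (none of it comes from the papers under discussion): it shows that the standard interpretation of the numerals and a nonstandard, looping one agree on every set size a "ten"-level learner ever encounters, so the learner's confirmed facts cannot decide between them.

```python
# Toy demonstration (ours) that partial knowledge of "one"-"ten" is consistent
# with a nonstandard, looping interpretation of the numerals.

COUNT_LIST = ["one", "two", "three", "four", "five",
              "six", "seven", "eight", "nine", "ten"]

def standard_label(n):
    """Standard reading: set size n is named by the nth numeral."""
    return COUNT_LIST[n - 1] if n <= len(COUNT_LIST) else None

def looping_label(n):
    """Nonstandard reading: after "ten" the numeral list loops back to "one"."""
    return COUNT_LIST[(n - 1) % len(COUNT_LIST)]

# Both readings agree on all the data the learner is trained on (sizes 1-10)...
assert all(standard_label(n) == looping_label(n) for n in range(1, 11))
# ...and each satisfies "the next numeral means one more than its predecessor"
# over that range, yet the looping reading calls an 11-element set "one".
assert looping_label(11) == "one"
```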
1.3. An overview
Of the four issues about Quinian bootstrapping that we’ve just surveyed, Piantadosi et al.’s (2012)
paper focuses mainly on feasibility.1 They present an explicit model of how children learn to enumerate
objects, and we believe the model is successful in showing that a mechanism could exist that derives the
bootstrap conclusion from its premises. The same seems true of Carey’s (2009) original proposal, but the
Piantadosi et al. model puts this inference on a firm computational basis.
How well does the model handle the other psychological and logical issues that surround
bootstrapping? First, the relevance of the model depends on establishing that it qualifies as an instance of
Quinian bootstrapping in the sense in which that concept figures in current debate about the topic. Thus,
the model must exhibit discontinuity in the way Quinian bootstrapping is supposed to. Although
discontinuity may not be a central concern for Piantadosi et al., we believe it is the essential feature for
most developmentalists, who are interested in number learning as an example of discontinuous conceptual
change.
1 Unless we otherwise attribute them, all references to Piantadosi et al. are to Piantadosi et al. (2012).
Second, the model needs to be faithful to the method children use in learning the positive
integers. The fact that some computational method or other exists for getting to the bootstrap conclusion
could hardly be in doubt, since adults surely agree that the conclusion is true. If the model’s method of
reaching the conclusion is quite different from children’s, the psychological lessons we can derive
from it would be unclear.
Third, questions about the scope of the bootstrap conclusion are also relevant to Piantadosi et al.’s
model. According to these authors (p. 212), their model “was motivated in large part by the critique of
bootstrapping put forth” in our earlier articles (Rips et al., 2006; Rips, Asmuth, & Bloomfield, 2008;
Rips, Bloomfield, & Asmuth, 2008). This criticism focused on whether the bootstrap conclusion pinned
down the meaning of the positive integers—whether it ruled out nonstandard interpretations of the
numerals that are very different from that of the integers’ meaning. (Piantadosi et al. may have read these
criticisms to be about feasibility or rigor, but they were instead about scope.) So if their aim is to meet
these criticisms, Piantadosi et al. have to show that their model yields the right interpretation.
In the following sections, we argue that because the model learns to enumerate by straightforward
concept combination, it does not create a new discontinuous conceptual system. Instead, it illustrates
Fodor’s (1975, 1981) hypothesis that learning elaborates old concepts: It cannot produce a new
representational primitive or construct a new language that increases the child’s expressive power.
Because of this limitation, the model is incapable of bootstrapping in Carey’s sense and does little to clear
up the issues surrounding bootstrapping. Of course, this doesn’t mean that the model is incorrect. It could
provide a correct account of enumeration even if Quinian bootstrapping has no role in its procedure.
However, the model’s method of enumerating differs from that of children, and these differences raise
questions about the faithfulness of the model to children’s actual abilities. Finally, Piantadosi et al.’s
model maps sets of one to ten elements to the terms “one” to “ten,” but has no implications for the
structure of the integers. So the same difficulties about scope that beset Quinian bootstrapping carry over
to this new proposal.
Here’s a summary of our disagreements with Piantadosi et al.: They have tried to rescue Quinian
bootstrapping from the problems that surround it by constructing a computational/statistical model. But
although the model demonstrates one way to get from premises about the meanings of “one” through
“three” to a more general conclusion, we believe the model is not an example of Quinian bootstrapping,
does not accurately model the way children enumerate objects, and does not solve bootstrapping’s central
logical deficiencies.
2. Is the Piantadosi et al. model a form of bootstrapping?
Carey (2004, 2009) introduced “Quinian bootstrapping” as a term for procedures in which people
learn new concepts that are discontinuous from their old ones. Thus, we take the central claims of Quinian
bootstrapping to be these (see Beck, submitted for publication):
Learning: In Quinian bootstrapping, an agent learns a new conceptual system, CS2, in terms of
an old system, CS1.
Discontinuity: After Quinian bootstrapping, CS2 is conceptually discontinuous from CS1.
These properties are explicit in the quotations in Section 1.1 and in many other places in Carey’s (2009)
presentation. Important for the present discussion is the fact that Carey introduces her chapter on how
children acquire representations of the positive integers by setting herself two challenges: “to establish
discontinuities in cognitive development by providing analyses of successive conceptual systems, CS1
and CS2, demonstrating in what sense CS2 is qualitatively more powerful than CS1” and “to characterize
the learning mechanism(s) that get us from CS1 to CS2” (p. 288).
Because of the Discontinuity claim, Quinian bootstrapping opposes the view that all forms of
learning derive new concepts by recombining (or translating from) old ones. According to this alternative
view (Fodor, 1975, 1981), learning is a form of hypothesis formation and confirmation in which the
hypotheses are spelled out in the old conceptual vocabulary (i.e., the concepts that the person possesses
prior to learning). Confirmation merely increases their likelihood without producing anything
fundamentally new. To have a general term for such non-bootstrapping forms of learning, we’ll use
concept recombination. Proponents of bootstrapping agree that mundane forms of learning use
recombination. Bootstrapping occurs only in special cases. So the question that divides theorists is
whether any actual instances of learning are examples of bootstrapping.
When we turn to Piantadosi et al.’s theory of learning enumeration, we find that their approach is
much closer to recombination than to Quinian bootstrapping. Their clearest statement about this issue is
the following passage (p. 214):
One of the basic mysteries of development is how children could get something
fundamentally new… Our answer to this puzzle is that novelty results from
compositionality. Learning may create representations from pieces that the learner has
always possessed, but the pieces may interact in wholly novel ways…This means that the
underlying representational system which supports cognition can remain unchanged
throughout development, though the specific representations learners construct may
change.
We will describe Piantadosi et al.’s model in more detail in Section 2.2, but this excerpt makes apparent
that, whatever its merits, it has nothing to do with creating new primitives or conceptual systems that are
discontinuous with old ones. As such, it jettisons a central part of Carey’s bootstrapping theory, the
Discontinuity claim. “Quinian bootstrapping” is a technical term, so there is limited room for redefining it
while simultaneously claiming to defend it.
We note that developmentalists have used the term “bootstrapping” in ways that differ from
Carey’s “Quinian bootstrapping.” In research on language acquisition, syntactic bootstrapping is a
hypothetical process in which children use syntactic properties of sentences to determine the referents of
component words, and semantic bootstrapping is a process in which children use the referents of words to
determine their syntactic category (see Bloom & Wynn, 1997, for a discussion of these possibilities in the
context of number). Neither of these forms of bootstrapping qualifies as Quinian bootstrapping, according
to Carey (2009, p. 21), since neither creates a conceptual system discontinuous with earlier ones.
Moreover, Piantadosi et al.’s proposal for number learning is not an example of either syntactic or
semantic bootstrapping, since syntactic categories play no role in it. In fact, it might be possible to
contend that their proposal fails to qualify as bootstrapping in an even wider sense, but we won’t argue
for that conclusion here.2 Our concern, instead, is to show that the Piantadosi et al. model is not a type of
Quinian bootstrapping—it does not satisfy both the Learning and Discontinuity criteria. (In the rest of this
article, we will use “bootstrapping” to mean Quinian bootstrapping.)
2.1. Bootstrapping’s central features
To make the distinction between bootstrapping and recombination a little more precise, let c*
represent a new concept that is created in learning, and let c1, c2, …, ck represent old concepts. Proponents
of recombination believe that learning is a function taking the old concepts into the new one:
c* = f(c1, c2, …, ck)
It matters very much, however, what the function f is like. Advocates of bootstrapping
agree that the input to learning is a set of old concepts and the output a new concept. As Carey (2011,
p. 157) remarks, “Clearly, if we learn or construct new representational resources, we must draw on those
we already have.” But Carey would maintain that in examples of bootstrapping the function is not mere
recombination, as the first quotation in Section 1.1 makes explicit. To distinguish between recombination
and bootstrapping, then, we need some restrictions on f or on its arguments (Rips & Hespos, 2011).
As a first possibility, proponents of bootstrapping could insist that bootstrapping algorithms are
so complex that they go beyond what could reasonably be considered recombination. For example,
Carey’s (2009) theory of how children learn to enumerate includes an analogical inference that maps the
first few numerals (“one,” “two,” “three”) to corresponding representations of cardinalities (see the
description in Section 1.1). If analogical inference is too complicated to be recombination, then learning
to enumerate may be a form of bootstrapping.
2 D. Barner has suggested (personal communication, June 14, 2012) that all prior bootstrapping theories
appear to require that earlier representational stages be psychologically necessary steps in the acquisition
of later ones, whereas earlier representations of number in Piantadosi et al.’s theory (e.g., their Two-knower function) play no role in producing its later representations (e.g., the CP-knower function). (See
Section 2.2 for a description of this function.)
A second potential way to distinguish bootstrapping and recombination is to hold that the input
concepts (c1, c2, …, ck) in bootstrapping come from a broader domain of knowledge than is possible in
recombination. In the case of number learning, the input concepts to bootstrapping may belong to two or
more distinct cognitive modules. For example, c1, c2, …, ci may come from a module devoted to natural
language quantifiers (e.g., some or all), whereas ci+1, ci+2, …, ck may come from a module for representing
small sets of physical objects. Or the input concepts may include some that don’t appear in the child’s
earlier number representations. For example, the old number representations may include only concepts
c1, c2, …, ci, whereas input to the new representations may also include concepts ci+1, ci+2,…, ck.
It is unclear to us whether either of these strategies suffices to show that bootstrapping and
recombination differ in kind. Sheer complexity of a process doesn’t seem inconsistent with
recombination. The individual steps in learning may be lengthy or difficult without creating anything
fundamentally new. Advocates of bootstrapping owe us an explanation of what aspects of learning cause
it to go beyond recombination. Similarly, why must recombination respect limits on the domain of its
input? Why shouldn’t recombination be allowed to draw on all old concepts in the learner’s repertoire?
Fodor’s (2010) response to Carey’s theory is to deny any such limits. We are not claiming that proponents
of bootstrapping have explicitly adopted either of these strategies. Nor do we claim that the strategies are
exhaustive.3 In looking at Piantadosi et al.’s proposal, however, let’s keep these options temporarily
open, since they may help us see why these authors believe their model performs a type of bootstrapping.
The underlying issue with bootstrapping is that advocates have to reconcile the Learning and
Discontinuity claims. But doing so is tricky because these claims seem to pull in opposite directions, with
Learning suggesting continuity rather than discontinuity. If Learning and Discontinuity cannot be
reconciled, bootstrapping is incorrect and concept recombination is correct as a theory of human concept
3 The two strategies just described are examples of what Beck (submitted for publication) calls deflationary theories
for reconciling the bootstrapping claims about learning and discontinuity. Neither creates anything totally new to the
child’s conceptual system, but either could bring to light concepts that were only latent within this system. More
radical strategies are also possible for making sense of the Learning and Discontinuity claims (Beck, submitted for
publication, and Shea, 2011). But these are farther removed from Piantadosi et al.’s theory, and we therefore don’t
discuss them here.
acquisition. A main point of interest, then, in Piantadosi et al.’s model is that it purports to furnish a
working example of bootstrapping and may therefore demonstrate bootstrapping’s viability.
2.2. The Piantadosi et al. model
The Piantadosi et al. model receives as input a series of sets of different sizes, ranging from one
to ten elements, with the frequency of each set size determined by the corpus frequency of the associated
number words (“one” to “ten”). Its goal is to learn which number word applies to each of the ten set sizes.
To do so, it constructs hypotheses from a fixed set of primitive functions by applying syntactic rules.
Among the primitives is a function singleton? that determines whether or not a set contains exactly one
member [e.g., singleton?({a}) = yes], a function next that determines the next numeral on the list of count
terms [e.g., next(“two”) = “three”], a function select that produces a set by picking an element from a
given set [e.g., select({a, b, c}) = {c}], a function set-difference that computes a set containing all the
members of a first set not contained in a second [e.g., set-difference({a, b, c}, {c}) = {a, b}], and a
function L that recursively applies an embedding function to an embedded one (we give an example
below).
The model evaluates the hypotheses it creates from these primitives, increasing the probabilities
of hypotheses that give the right answer (e.g., labeling a set of four items with “four”) and decreasing the
probabilities of hypotheses that give an incorrect or null answer. After sufficient training, the model
converges on a hypothesis—the Cardinal Principle (CP-) knower function—that correctly labels sets of
one to ten elements:
CP-knower function:
λS. (if (singleton? S)
“one”
(next (L (set-difference S (select S)))))
This function tests whether the input set of objects S is a singleton (i.e., one-element set), and if it is,
labels it “one.” If not, it removes an element from S (i.e., set-difference S (select S)) and recursively
applies the same CP-knower function to the reduced set (L accomplishes this recursion). If the reduced set
is a singleton, it labels it next(“one”) or “two.” And so on.
Here, for example, are the steps that CP-knower goes through in enumerating a set of three cups,
S = {cup1, cup2, cup3}:
A1. Is {cup1, cup2, cup3} a singleton?
A2. No, so evaluate the function next on the result of applying CP-knower to the set created by
subtracting an element from {cup1, cup2, cup3}:
B1. Is {cup1, cup2} a singleton?
B2. No, so evaluate the function next on the result of applying CP-knower to the set
created by subtracting an element from {cup1, cup2}.
C1. Is {cup1} a singleton?
C2. Yes, so return the value “one.”
B3. Return the value of next(“one”) = “two.”
A3. Return the value next(“two”) = “three.”
This style of computation will be familiar to Scheme or Lisp programmers.
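For readers who prefer Python, here is a minimal transcription of the CP-knower function displayed above. This is our sketch, not Piantadosi et al.'s code: the actual model composes the primitives through a probabilistic grammar rather than hand-written functions, but the primitive names below (singleton?, select, set-difference, next, and the recursion L) follow the paper's description.

```python
# Hand-written Python sketch of the CP-knower function (our transcription).

COUNT_LIST = ["one", "two", "three", "four", "five",
              "six", "seven", "eight", "nine", "ten"]

def next_word(word):
    """The `next` primitive: the successor on the memorized count list."""
    return COUNT_LIST[COUNT_LIST.index(word) + 1]

def cp_knower(s):
    """Enumerate the set s: test singleton?, else remove one element
    (select + set-difference) and recurse (the L primitive)."""
    if len(s) == 1:                            # singleton?
        return "one"
    chosen = next(iter(s))                     # select: an arbitrary element
    return next_word(cp_knower(s - {chosen}))  # set-difference, then recurse

print(cp_knower({"cup1", "cup2", "cup3"}))     # prints "three"
```

The call on the set of three cups unwinds exactly as in the trace A1-A3 above: two recursive decrements down to a singleton, then two applications of next on the way back up.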
More interesting is the order in which hypotheses emerge as the most likely candidate. Early in
training, the model’s best hypothesis labels one-element sets with “one” and all other set sizes as
unknown. It then switches to a hypothesis that labels one- and two-element sets correctly (using its
primitive singleton? and doubleton? predicates), then one-, two-, and three-element sets (using
singleton?, doubleton?, and tripleton?), and finally reaches a more complicated rule (the CP-knower
function) that correctly labels one- to ten-element sets.
This behavior is extensionally similar to the progress children make in acquiring words for set
sizes (see Section 3 for qualifications). The learning sequence is a result of several design choices: First,
the model starts with primitive predicates that: (a) directly recognize set sizes of one, two, and three
elements (e.g., the singleton? predicate); (b) carry out logic and set operations (e.g., set-difference);
(c) traverse the sequence of number words “one,” “two,” …, “ten” (the next predicate); and (d) perform
recursion (L). (See Piantadosi et al.’s Table 1 for the full list of 15 primitives.) Second, the model
constructs hypotheses from these primitives in a way that gives lower prior probabilities to lengthier
hypotheses and to hypotheses that include recursion (depending on a free parameter, γ). Thus, the model
starts by considering simple and inaccurate non-recursive hypotheses (e.g., singleton sets are labeled
“one” and all other sets are undefined) and ends with a more complex, but correct, recursive hypothesis as
the result of feedback about the correct labeling.
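The effect of these design choices can be sketched with a toy scoring function. This is our illustration, not the model's actual prior: the real model derives probabilities from a probabilistic context-free grammar, and the per-primitive cost and the value of gamma below are assumptions made only for the sketch.

```python
import math

def log_prior(primitives, gamma=0.1, per_primitive_prob=0.5):
    """Toy unnormalized log-prior: every primitive costs log(0.5), and each
    use of the recursion operator L costs an extra factor of gamma."""
    lp = len(primitives) * math.log(per_primitive_prob)
    lp += primitives.count("L") * math.log(gamma)
    return lp

# Primitive inventories of two hypotheses, written out flatly (illustrative):
one_knower_prims = ["if", "singleton?", "one", "undef"]
cp_knower_prims = ["if", "singleton?", "one", "next",
                   "L", "set-difference", "select"]

# The short, non-recursive hypothesis starts out a priori more probable, so
# the recursive CP-knower wins only after enough labeled-set evidence.
assert log_prior(one_knower_prims) > log_prior(cp_knower_prims)
```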
2.3. Does the Piantadosi et al. model employ bootstrapping?
Piantadosi et al. try to make the case that the discovery of the correct number hypothesis is a form
of bootstrapping, though not quite of the variety Carey described in introducing this term. But on the face
of it, their model looks like a perfect example of recombination. It starts with a small stock of primitives,
and it combines them into hypotheses according to syntactic rules (a probabilistic context-free grammar).
The model learns which of these hypotheses is best by Bayesian adjustment through feedback. Thus, all
the primitive concepts that the model uses to frame its final hypothesis are already present in its initial
repertoire. The only missing element is the correct assembly of these primitives by the grammar. These
restrictions would seem to leave the model with little room for innovation of the sort that bootstrapping
requires. Why should we regard the process as bootstrapping rather than as translating one system into
another?
In setting out the bootstrap idea in Section 2.1, we mentioned two possible strategies to
discriminate it from recombination. Of these possibilities, the first one—that bootstrapping involves a
learning process more complex than standard recombination—is out for the Piantadosi et al. model. The
model’s grammar composes its hypotheses by assembling them from a previously existing base. The
model employs no inference more complex than the Bayesian conditioning that updates the hypotheses’
probability.
Piantadosi et al. have a better chance, then, of defending their bootstrapping claim by adopting
the second strategy. Perhaps the model’s discovery of the pairing between numerals and cardinalities
incorporates concepts that aren’t available in its initial state. Here the obvious candidate is recursion. The
model’s initial hypotheses make no use of recursion, whereas the final CP-knower hypothesis does. The
recursive predicate (L) confers greater computational power on this last hypothesis than is present in the
earlier ones. So perhaps bootstrapping occurs when the model introduces recursion. (The model ensures
that this introduction happens relatively late in learning by handicapping all hypotheses containing the
recursive predicate, as we mentioned earlier.) This accords with Piantadosi et al.’s statement that “the
model bootstraps in the sense that it recursively defines the meaning for each number word in terms of the
previous number word. This is representational change much like Carey’s theory since the CP-knower
uses primitives not used by subset knowers, and in the CP-transition, the computations that support early
number word meanings are fundamentally revised” (p. 212). Similarly, they point out that “the sense in
which our model engages in bootstrapping is that it starts off representing number word meanings with
subitizing primitives (singleton? doubleton? tripleton?) and eventually transitions to a system in which
essentially other primitives (recursion, conditionals, etc.) are used” (personal communication, June 14,
2012). The idea seems to be that the model’s representations for cardinalities before bootstrapping aren’t
extendible to the representations it uses after. Something is missing from the early representations that’s
necessary for a more adult-like understanding.
But in thinking about whether the CP-transition is a form of bootstrapping, we should keep in
mind that in many mundane instances of learning—in discrimination learning, for example—people add
primitives that do not figure in earlier hypotheses. In learning to distinguish poisonous from edible
mushrooms, people may have to take into account new properties like the color of the mushrooms’ spores
that were not parts of their original mushroom representations. No one would claim, though, that
including spore color in the new concept is a discontinuous conceptual change. Likewise, merely
including a previously unused predicate in a new hypothesis about number meaning doesn’t by itself
imply that the hypothesis is discontinuous with old ones. Stretching the concept of Quinian bootstrapping
to include such simple property additions would trivialize this concept. So if adding the recursive
predicate L does produce a big conceptual change, that must be because L is special—perhaps because it
significantly increases the hypothesis’s computational power—not because it is new.
However, adding recursion still doesn’t conform to bootstrapping as Carey describes it in the
quotations of Section 1. The model’s grammar prior to adopting the CP-knower hypothesis is identical to
its grammar after adopting it, as is its primitive conceptual vocabulary. So a translation manual could
easily express the new hypothesis in terms of these primitives, precisely as is done in Piantadosi et al.’s
definition of the CP-knower function, which we displayed earlier. The model’s learning thus conflicts
with Carey’s claim that bootstrapping does not reduce to translation. For much the same reason, the CP-knower hypothesis in
Piantadosi et al.’s version does not involve the creation of new primitives, and it certainly doesn’t create
them in a way that goes beyond “the machinery of compositional semantics” (see the first of the
quotations from Carey in Section 1.1).
Of course, Piantadosi et al. could position their model as a non-Quinian type of “bootstrapping”
that allows translation and dispenses with the need to create new primitives. But this move would
abandon what we take to be the important and arresting ideas that Carey had in mind in introducing this
concept. Bootstrapping in this revised sense would not create a conceptual system that is discontinuous
with the earlier one, and hence, it would not implement a method that bears the same intellectual interest.
This revision would not simply raise the “semantic” issue of how we should use the term “bootstrapping,”
but it would discard one of bootstrapping’s essential properties. In short, the Piantadosi et al. model could
have made bootstrapping plausible by showing how the Learning and Discontinuity claims can be joined.
Instead, it either jettisons Discontinuity or trivializes it. The reasonable conclusion from the Piantadosi et
al. model is not that it provides a rigorous form of bootstrapping, but that it shows bootstrapping to be
unnecessary: children can learn a correct method of enumeration through ordinary recombination.
3. How realistic is the model?
The points raised in the preceding section do not show that the model is incorrect as a theory of
how children learn to label cardinalities. Even if the model doesn’t learn by bootstrapping, it could still be
the right explanation of this learning process. However, three aspects of the model’s behavior deserve
comment and suggest that it does not learn enumeration in the way children do.
3.1. The model’s choice of primitives
First, the model draws on a relatively small set of handpicked primitives—the 15 predicates in
Piantadosi et al.’s Table 1. The model constructs all its hypotheses as combinations of these predicates.
Piantadosi et al. believe these predicates “may be the only ones which are most relevant” to number
learning (p. 202). But this restriction raises the question of whether children also limit their hypotheses in
the same convenient way. We don’t dispute the importance of these predicates to knowledge of number,
but how do children know prior to learning that these predicates are the most relevant ones?
Piantadosi et al. claim that their theory “can be viewed as a partial implementation of the core
knowledge hypothesis (Spelke, 2003),” but also deny that the primitive predicates are part of an
encapsulated core domain devoted to number: “These primitives—especially the set-based and logical
operations—are likely useful much more broadly in cognition and indeed have been argued to be
necessary in other domains” (p. 202). If this particular set of primitives does not come cognitively pre-
packaged, however, then children must search for them among a much larger group of predicates, and we
need an explanation of how they manage to select just those that appear in the model’s hypotheses. This
problem is pressing because many candidates exist in this larger set that carry numerical information but
aren’t included among the model’s primitives. Although the model has a few primitives (e.g., set
intersection) that are not necessary for its hypotheses, the model excludes, by fiat, analog magnitudes,
mental models, and explicit quantifiers (e.g., some), which according to many theories are relevant parts
of children’s beliefs about number prior to (and even after) they master enumeration.4 Consider analog
magnitudes. According to this idea, people have access to a continuous mental measure that varies
positively with the number of physical objects in their perceptual array. People can therefore use this
measure as an approximate guide to cardinality. A model like Piantadosi et al.’s could easily build a
hypothesis that makes use of analog magnitudes to label set sizes, and such a hypothesis would compete
with those of the present version of the model. It could therefore slow or alter the course of number
learning by delaying the success of the CP-knower hypothesis.
On the one hand, if Piantadosi et al.’s predicates are truly the only ones children use in forming
their hypotheses, then we need to know what enables the children to restrict their attention to these items
and exclude information like analog magnitudes. On the other hand, if children consider a wider set of
predicates, what’s the evidence that the model can converge on the right CP-knower procedure and do so
in a realistic amount of time? The quantitative results from Piantadosi et al.’s simulations are not
informative under this second possibility.
3.2. The model’s method of enumeration
A second question about the model’s fidelity is whether the CP-knower procedure is similar
enough to children’s actual enumeration to back the claim that the model learns what children do.
Children match numerals one-one to objects in an iterative way (Gelman & Gallistel, 1978). In counting a
set of three cups {cup1, cup2, cup3}, they label a first object (e.g., cup1) “one” and remove it from further
consideration. They then label the second object (e.g., cup2) “two,” and so on. As Piantadosi et al. point
out, however, “the model makes no reference to the act of counting (pointing to one object after another
while producing successive number words)” (p. 213). As we saw in the example of Section 2.2, what the
model does instead is recurse through the set of items, taking set differences until it arrives at a singleton,
and then it unwinds through the list of numerals to arrive at the total.

4 For analog magnitudes, see, for example, Dehaene (1997), Gallistel and Gelman (1992), and Wynn (1992). For
mental models, Mix, Huttenlocher, and Levine (2002). For quantifiers, Barner and Bachrach (2010), Carey (2009),
and Sarnecka, Kamenskaya, Yamana, Ogura, and Yudovina (2007).
We can put this point about the difference between children’s behavior and the model’s in a
second, semantic way: When older children count “one, two, three…three cups,” the first three number
words do not label the size of sets of cups. Instead, these words refer to ordinal positions in the sequence
of the to-be-counted items. The children then rely on the principle—Gelman and Gallistel’s (1978)
Cardinal Principle—that the final word in the enumeration sequence is the cardinality of the set, and so
they infer that there are “three cups.” Thus, only the second “three” in the earlier phrase denotes a number
of cups. By contrast, the model always uses numerals as labels for set sizes. In their discussion (pp. 214-
215), Piantadosi et al. claim that children’s actual counting behavior is a metacognitive effort to keep their
place in the recursive routine. But what reason could there be for not taking the children’s simpler (and
equally accurate) enumeration algorithm at face value? The model’s inability to arrive at the right
procedure—the procedure children actually use—suggests that something is wrong with its architecture
and calls into question Piantadosi et al.’s claim (p. 200) that “all assumptions made are computationally
and developmentally plausible.”
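The contrast between the two procedures can be sketched concretely. The following Python fragment is our own illustration, not Piantadosi et al.’s code (which is written in a different formalism); the function names and the fixed ten-word list are assumptions for the example. The first function follows children’s one-one tagging; the second mimics the model’s recurse-and-unwind strategy:

```python
WORDS = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]

def count_iteratively(items):
    """Children's method: tag each object with the next numeral and set it
    aside; by the Cardinal Principle, the final tag names the set's size."""
    remaining = list(items)
    label = None
    for word in WORDS:
        if not remaining:
            break
        remaining.pop()   # label one object and remove it from consideration
        label = word
    return label

def cp_knower(s):
    """The model's method: recurse through set differences until a singleton
    remains, then unwind through the numeral list via 'next'."""
    if len(s) == 1:
        return "one"                      # base case: a singleton is "one"
    smaller = set(list(s)[1:])            # set difference: drop one element
    prev = cp_knower(smaller)             # label the smaller set first...
    return WORDS[WORDS.index(prev) + 1]   # ...then take the next numeral

cups = {"cup1", "cup2", "cup3"}
print(count_iteratively(cups), cp_knower(cups))   # → three three
```

Both procedures label a three-cup set “three,” but `cp_knower` gets there only by first labeling every smaller subset, which is the difference at issue above.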
3.3. The model’s knowledge of the sequence of cardinalities
A third difference between children’s behavior and the model’s behavior is the extent of
children’s beliefs about number at the time they become CP-knowers. The usual test of CP knowledge is
that children can correctly produce sets of up to ten objects when asked to “Give me n.” For example,
when asked to “Give me eight beads,” they can produce eight from a larger pile of beads. Recent evidence
by Davidson, Eng, and Barner (2012), however, shows that children who are able to perform this task are
often unable to say whether a single bead added to a box of five results in six beads rather than seven.
This is the case even though children at the same stage can correctly say that the numeral that follows
“five” is “six” rather than “seven.” Davidson et al. (2012, p. 166) note that their analysis “reveals that
many CP-knowers do not have knowledge of the successor principle for even the smallest numbers…
These data suggest that knowledge of the successor principle does not arise automatically from becoming
a CP-knower, but that this semantic knowledge may be acquired later in development.”
The Piantadosi et al. model is limited to determining the cardinality of a given set of objects. So
no logical inconsistency arises between possessing this skill and not being able to tell that one object
added to a set of five yields a set of six. Still, Piantadosi et al.’s CP-knower function, in the course of
determining that “six” labels a six-item set, also determines that “five” labels a set with one fewer
element. (Steps A3 and B3 in the example of Section 2.2 show the analogous relation between “three”
and “two.”) Given this procedure, children’s difficulty in figuring out that a six-item set is one greater
than a five-item set is mysterious and again suggests that the model’s CP-knower function is more
complex than the routine children actually use at this point in their number development.
Piantadosi et al. (p. 206) describe their theory as a computational-level model, in the sense of
Marr (1982). So perhaps we should discount these deviations between the model’s behavior and
children’s, since they concern particular methods of pairing number words and cardinalities. The crucial
claims of the Piantadosi et al. paper, however, depend on more than computational description. For
example, whether the model learns by bootstrapping depends on whether the procedures the model
employs before becoming a CP-knower are qualitatively different from the procedure it employs later.
This difference requires a comparison of the algorithms before and after learning, and it limits how
abstractly we can view the model when we come to evaluate it (see Jones & Love, 2011, for general
criticisms along these lines of Bayesian learning theories). We take Piantadosi et al. to be committed to
procedures, such as One-knower, Two-knower, and CP-knower, that are parts of their “language of
thought” theory (but not necessarily to particular implementation details, such as the method of search in
hypothesis space).
4. How much does the model know about the positive integers?
An intriguing aspect of bootstrapping is that it is supposed to produce the child’s first true
representation of the positive integers. According to Carey (2004, p. 65), “coming to understand how the
count list represents numbers reflects a qualitative change in the child’s representational capacities; I
would argue that it does nothing less than create a representation of the positive integers where none was
available before.” Similarly, according to Piantadosi et al. (p. 201):
Bootstrapping explains why children’s understanding of number seems to change so
drastically in the CP-transition and what exactly children acquire that’s “new”: they
discover the simple recursive relationship between their memorized list of words and the
infinite system of numerical concepts.
In Section 2, we examined the issue of whether Piantadosi et al.’s model effects a qualitative change in
representations. But setting that issue aside here, how much does the model know about the positive
integers—the “infinite system of numerical concepts”?
4.1. Does bootstrapping capture the meaning of the first few numerals?
One thing seems clear. The model never learns the full set of positive integers or the key
successor function that generates this set. It learns only the pairing of numerals and cardinalities for the
numerals on its count list.
In another sense, however, the model does possess a general rule for relating number words and
cardinalities: the CP-knower function, shown in Section 2.2. Piantadosi et al. write, “bootstrapping has
been criticized for being incoherent or logically circular, fundamentally unable to solve the critical
problem of inferring a discrete infinity of novel numerical concepts (Rips, Asmuth, & Bloomfield, 2006,
2008; Rips, Bloomfield, & Asmuth, 2008). We show that this critique is unfounded…” (p. 200). But
although we do believe that bootstrapping is unable to solve the “problem of inferring a discrete infinity
of novel numerical concepts,” we did not criticize bootstrapping as inconsistent or circular.5 Moreover,
the bootstrap conclusion (i.e., the next item on the numeral list refers to the set size given by adding one
to that of the preceding numeral) itself is a correct generalization about number word-cardinality pairs, as
we have noted. What is learned is a correlation between advancing one step in the number word sequence
(e.g., from “four” to “five”) and increasing the cardinality of a set by one. (In the Piantadosi et al. model,
this correlation is implicit in the CP-knower procedure rather than declaratively represented, but the effect
is the same.) This is an important discovery for children, and any theory that explains how they do it is
praiseworthy.

5 A threat of circularity looms, however, if you read too much into the bootstrap conclusion. You may be tempted to
think that the conclusion fixes the cardinal meaning of the numerals if you understand “next term on the count list”
as involving the full, infinite list for the positive integers. The full list does, of course, fix the numeral’s meaning,
since it is isomorphic to the positive integers. But at the time children perform the bootstrap inference, they have no
knowledge of the full list; so assuming this structure as part of the bootstrapping process does lead to circularity.
The trouble with this principle, however, is that, at the time children learn it, it fails to specify the
meaning of the terms for the positive integers (Rips et al., 2006; Rips, Asmuth, & Bloomfield, 2008; Rips,
Bloomfield, & Asmuth, 2008). After adopting the CP-knower function, the Piantadosi et al. model has a
way to connect the word “one” to cardinality one, “two” to cardinality two,…, and “ten” to cardinality
ten. But the same function is equally extendible to either of the mappings in (1) and (2), as well as an
infinite number of others:
(1) “one” denotes only cardinality one.
“two” denotes only cardinality two.
…
“ten” denotes only cardinality ten.
(2) “one” denotes cardinalities one, eleven, twenty-one,…
“two” denotes cardinalities two, twelve, twenty-two,…
…
“ten” denotes cardinalities ten, twenty, thirty,…
That is, the CP-knower function doesn’t constrain the cardinal meanings of the number words on the
child’s list to their ordinary meanings.
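The ambiguity between (1) and (2) is easy to exhibit. In this sketch (our own illustration; the function names are ours), `standard` implements mapping (1) and `looping` implements mapping (2), and the two agree on every cardinality from one through ten, which is all the evidence available to the learner:

```python
WORDS = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]

def standard(k):
    """Mapping (1): each numeral denotes exactly one cardinality."""
    return WORDS[k - 1] if 1 <= k <= len(WORDS) else None

def looping(k):
    """Mapping (2): "one" also denotes eleven, twenty-one, ...;
    cardinality k gets the ((k - 1) mod 10) + 1st numeral."""
    return WORDS[(k - 1) % len(WORDS)]

# Indistinguishable on the cardinalities the learner has seen:
print(all(standard(k) == looping(k) for k in range(1, 11)))   # → True
# But they diverge at the first cardinality beyond the count list:
print(looping(11))   # → one
```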
Proponents of bootstrapping now appear to agree with us that the CP-knower function and its
equivalents don’t give children the meanings for numerals beyond those on their list of count terms. But it
doesn’t necessarily give them the correct meanings for numerals on their count lists either, as (1) and (2)
reveal. Knowing that a correlation exists between the numerals and the cardinalities is of no help in
picking out the positive integers from among their rivals unless the child knows either the structure of the
numerals or the structure of the cardinalities. However, the numeral sequence, as given by the next
predicate in Piantadosi et al.’s model, does not continue beyond “ten,” and as Piantadosi et al. emphasize
(p. 212), their model does not build in a successor relation for cardinalities.6 Because the structure of the
positive integers is well understood, we can be quite specific about what the CP-knower function fails to
convey. It does not enforce the idea that the correct structure is one that has: (a) a unique first element,
(b) a unique immediate successor for each element, (c) a unique immediate predecessor for each element
except the first, and (d) no element apart from those dictated by (a)-(c).
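Writing 1 for the first element and S for the immediate-successor relation, conditions (a)-(d) can be put in Peano-style notation (our paraphrase; neither paper states them this way):

```latex
\begin{align*}
&\text{(a)}\quad \exists!\, x\; \neg\exists y\; S(y) = x
  && \text{a unique first element, } 1\\
&\text{(b)}\quad \forall x\; \exists!\, y\; S(x) = y
  && \text{a unique immediate successor for each element}\\
&\text{(c)}\quad \forall x \neq 1\; \exists!\, y\; S(y) = x
  && \text{a unique immediate predecessor, except for } 1\\
&\text{(d)}\quad \text{nothing else: every element is reached from } 1
  \text{ by finitely many applications of } S.
\end{align*}
```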
4.2. Can the model exclude rival meanings for the integers?
Results from their simulations show that Piantadosi et al.’s model learns the standard pairing for
the first ten integers rather than an alternative pairing in which “one” is mapped to sets with one or six
elements, “two” to sets with two or seven elements, …, and “five” to sets with five or ten elements. This
latter Mod-5 hypothesis (see their Figure 1) fails for two reasons: First, the model receives feedback that
disconfirms the Mod-5 pairings, and second, the Mod-5 hypothesis is more complex than the correct
alternative, given the choice of primitives. When feedback supports the Mod-5 hypothesis, however, the
model eventually learns it. From these facts, Piantadosi et al. (p. 211) conclude:
This work was motivated in part by an argument that Carey’s formulation of
bootstrapping actually presupposes natural numbers, since children would have to know
the structure of the natural numbers in order to avoid other logically plausible
generalizations of the first few number word meanings. In particular, there are logically
possible modular systems which cannot be ruled out given only a few number word
meanings (Rips et al., 2006; Rips, Asmuth, & Bloomfield, 2008; Rips, Bloomfield, &
Asmuth, 2008). Our model directly addresses one type of modular system along these
lines: in our version of a Mod-N knower, sets of size k are mapped to the k mod Nth
number word. We have shown that these circular systems of meaning are simply less
likely hypotheses for learners. The model therefore demonstrates how learners might
avoid some logically possible generalizations from data…

6 The limit at “ten” is, of course, a computational convenience. The largest numeral on the model’s count list could
have been a larger or smaller value, in accord with the fact, mentioned earlier, that the largest numeral on children’s
count lists also varies. However, the problem in the text holds no matter what the upper limit happens to be.
The problem for theories of number learning, however, is not eliminating hypotheses that the data directly
disconfirm, such as Piantadosi et al.’s Mod-5 hypothesis. Instead, the difficulty lies in selecting from the
infinitely many hypotheses that have not been disconfirmed. For the simulations in Piantadosi et al., these
would include Mod-11, Mod-12, Mod-13, …. The model can’t decide among these hypotheses because
its list of numerals stops at “ten” and because it has no information about cardinalities greater than ten
(see Footnote 6).
Hypotheses like Mod-11 might seem syntactically complex relative to the CP-knower function. If
so, the model would prefer CP-knower to Mod-11, even without training on sets of eleven, due to the
model’s assignment of higher prior probabilities to simpler hypotheses. But this is not the case. How
simple or complex a function must be to capture Mod-11 depends on the structure of the numeral list
beyond “ten” (which we are assuming, without loss of generality, is the child’s highest count term). If the
list continued, “one,” “two,”…, “ten,” “one,” “two,”…, “ten,” “one,” “two,” …, “ten,”…, then the CP-
knower function would respond exactly in accord with Mod-11. Since neither children nor the Piantadosi
et al. model knows how the count list continues, syntactic complexity can’t decide between Mod-11 and
the standard meanings of the numerals; that is, it can’t discriminate between (1) and (2), above. (This is a
variation of Goodman’s, 1955, famous point about the role of syntactic complexity in induction.) In other
words, a model could fail to assign the correct meaning to number words either because it mapped the
standard sequence of number words to the wrong sequence of cardinalities or because it mapped an
incorrect sequence of number words to the right sequence of cardinalities. The literature on this topic
sometimes presupposes that children have access to the correct number word sequence, but they don’t at
the point at which they make the bootstrap inference. They have to induce both the numeral sequence and
the number sequence, and to coordinate them in order to understand the positive integers.
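The same point can be made procedurally. In the sketch below (again our own illustration, with hypothetical names), one CP-knower-style recursion is run with two different next relations on the numeral list: the standard one, which stops at “ten,” and one on which the list loops from “ten” back to “one.” On every set of up to ten items, the only sets the learner has labeled, the two produce identical behavior, so neither the data nor the function’s syntactic form favors one continuation over the other:

```python
WORDS = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]

def cp_knower(s, next_word):
    """Recurse through set differences, then unwind via the given
    'next' relation on the numeral list."""
    if len(s) == 1:
        return "one"
    return next_word(cp_knower(set(list(s)[1:]), next_word))

def next_standard(w):
    """The child's actual list, which simply stops at "ten"."""
    i = WORDS.index(w)
    return WORDS[i + 1] if i + 1 < len(WORDS) else None

def next_looping(w):
    """A list that continues "... ten, one, two, ..." beyond the data."""
    return WORDS[(WORDS.index(w) + 1) % len(WORDS)]

# Identical behavior on every set the learner has evidence about:
sets = [set(range(k)) for k in range(1, 11)]
print(all(cp_knower(s, next_standard) == cp_knower(s, next_looping)
          for s in sets))                          # → True
# The looping version labels an eleven-item set "one":
print(cp_knower(set(range(11)), next_looping))     # → one
```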
The message in our earlier papers was that the bootstrap conclusion does nothing to settle the
question of whether the cardinal meaning of the first few numerals is given by their usual (adult) meaning
or by Mod-11, Mod-12, and so on. The same is true of Piantadosi et al.’s CP-knower function. As Rey
(2011) has pointed out in connection with Carey’s proposal, this issue is closely related to classic poverty-
of-the-stimulus arguments for learning natural language (e.g., Chomsky, 1965). Proponents of
bootstrapping could contend that children’s beliefs about the meaning of the numerals suffer from the
same problem that the bootstrap conclusion does. Adults clearly know that (1), and not (2), represents the
correct meaning, but children may distinguish them only at a later point in their number development.
However, this conclusion, if it is true, places a stark limit on how much children learn about the numerals
from the bootstrap’s conclusion.
Piantadosi et al. begin to acknowledge this difficulty in noting that “the present work does not
directly address what may be an equally interesting inductive problem relevant to a full natural number
concept: how children learn that next always yields a new number word” (pp. 211-212). They believe that
“similar methods to those that we use to solve the inductive problem of mapping words to functions could
also be applied to learn that next always maps to a new word. It would be surprising if next mapped to a
new word for 50 examples, but not for the 51st” (ibid.). But this conjecture is not obviously correct: Most
lists that children learn—the alphabet, the months of the year, the notes of the musical scale—don’t have
the structure of the natural numbers. Next for the English alphabet ends at the 26th item, and next for the
sequence of U.S. Presidents currently ends at the 44th.
The crucial difficulty, as we’ve emphasized, is that learning the mapping between the numerals
and the cardinalities for one to ten can’t eliminate nonstandard sequences, such as Mod-11, unless
children can somehow induce the correct structure. The structure could come from the cardinalities for the
positive integers, or it could come from the structure of the numerals for these integers, since these
structures are isomorphic. But it has to come from somewhere. Bootstrapping allows children to exploit
the numeral sequence to determine the labels for cardinalities. But this strategy can’t pick out the right
cardinal meanings—it merely passes the buck—unless the problem about how “next always yields a new
number word” is resolved.
5. Conclusions
On our view, the Piantadosi et al. model doesn’t bootstrap. It therefore doesn’t help vindicate
bootstrapping as a cognitive process. What the model does is form hypotheses by recombining its
primitives and confirming them statistically. So should we conclude that children can learn to enumerate
through this (non-bootstrapping) sort of hypothesis formation and confirmation? Perhaps, although
accepting this conclusion depends on ignoring the facts that the model (a) learns by combining a
preselected set of primitives but gives no account of how they are selected from the larger set of potential
primitives, (b) finishes with a procedure that differs in important ways from children’s, and (c) has a
firmer grasp of the sequence of cardinalities than children have. But even if the model is a correct
description of how children learn to enumerate, the model still faces the problem that it leaves an
unlimited set of possibilities for the meanings of the first few count terms.
Acknowledgements
We thank David Barner, Jacob Beck, Jacob Dink, Brian Edwards, Emily Morson, James Negen, Steven
Piantadosi, and Barbara Sarnecka for comments on an earlier draft of this article. IES grant
R305A080341 helped support work on this paper.
References
Barner, D., & Bachrach, A. (2010). Inference and exact numerical representation in early language
development. Cognitive Psychology, 60, 40-62. doi: 10.1016/j.cogpsych.2009.06.002
Beck, J. (submitted for publication). Can bootstrapping explain concept learning?
Bloom, P., & Wynn, K. (1997). Linguistic cues in the acquisition of number words. Journal of Child
Language, 24, 511-533. doi: 10.1017/s0305000997003188
Carey, S. (2004). Bootstrapping and the origin of concepts. Daedalus, 133, 59-68.
Carey, S. (2009). The origin of concepts. New York, NY: Oxford University Press.
Carey, S. (2011). Concept innateness, concept continuity, and bootstrapping. Behavioral and Brain
Sciences, 34, 152-161. doi: 10.1017/S0140525x10003092
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Davidson, K., Eng, K., & Barner, D. (2012). Does learning to count involve a semantic induction?
Cognition, 123, 162-173. doi: 10.1016/j.cognition.2011.12.013
Dehaene, S. (1997). The number sense: How mathematical knowledge is embedded in our brains. New
York: Oxford University Press.
Fodor, J. A. (1975). The language of thought: A philosophical study of cognitive psychology. New York:
Crowell.
Fodor, J. A. (1981). The present status of the innateness controversy. In Representations: Philosophical
essays on the foundations of cognitive science (pp. 257-316). Cambridge, MA: MIT Press.
Fodor, J. A. (2010, October 8). Woof, woof [Review of the book The Origin of Concepts, by S. Carey].
Times Literary Supplement, pp. 7-8.
Gallistel, C. R., & Gelman, R. (1992). Preverbal and verbal counting and computation. Cognition, 44, 43-
74. doi: 10.1016/0010-0277(92)90050-r
Gelman, R., & Gallistel, C. R. (1978). The child's understanding of number. Cambridge, MA: Harvard
University Press.
Goodman, N. (1955). Fact, fiction and forecast. Cambridge, MA: Harvard University Press.
Jones, M., & Love, B. C. (2011). Bayesian Fundamentalism or Enlightenment? On the explanatory status
and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences,
34, 169-188. doi: 10.1017/s0140525x10003134
Leslie, A. M., Gelman, R., & Gallistel, C. R. (2008). The generative basis of natural number concepts.
Trends in Cognitive Sciences, 12, 213-218. doi: 10.1016/j.tics.2008.03.004
Margolis, E., & Laurence, S. (2008). How to learn the natural numbers: Inductive inference and the
acquisition of number concepts. Cognition, 106, 924-939. doi: 10.1016/j.cognition.2007.03.003
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of
visual information. San Francisco: W.H. Freeman.
Mix, K. S., Huttenlocher, J., & Levine, S. C. (2002). Quantitative development in infancy and early
childhood. New York, NY: Oxford University Press.
Piantadosi, S. T., Tenenbaum, J. B., & Goodman, N. D. (2012). Bootstrapping in a language of thought:
A formal model of numerical concept learning. Cognition, 123, 199-217. doi:
10.1016/j.cognition.2011.11.005
Rey, G. (2011). Learning, expressive power, and mad dog nativism: The poverty of stimuli (and
analogies), yet again. Paper presented at the Society for Philosophy and Psychology, Montreal.
Rips, L. J., Asmuth, J., & Bloomfield, A. (2006). Giving the boot to the bootstrap: How not to learn the
natural numbers. Cognition, 101, B51-B60. doi: 10.1016/j.cognition.2005.12.001
Rips, L. J., Asmuth, J., & Bloomfield, A. (2008). Do children learn the integers by induction? Cognition,
106, 940-951. doi: 10.1016/j.cognition.2007.07.011
Rips, L. J., Bloomfield, A., & Asmuth, J. (2008). From numerical concepts to concepts of number.
Behavioral and Brain Sciences, 31, 623-642. doi: 10.1017/s0140525x08005566
Rips, L. J., & Hespos, S. J. (2011). Rebooting the bootstrap argument: Two puzzles for bootstrap theories
of concept development. Behavioral and Brain Sciences, 34, 145-146.
doi:10.1017/S0140525X10002190
Sarnecka, B. W., Kamenskaya, V. G., Yamana, Y., Ogura, T., & Yudovina, Y. B. (2007). From
grammatical number to exact numbers: Early meanings of 'one', 'two', and 'three' in English,
Russian, and Japanese. Cognitive Psychology, 55, 136-168. doi: 10.1016/j.cogpsych.2006.09.001
Shea, N. (2011). New concepts can be learned. Biology & Philosophy, 26, 129-139. doi:
10.1007/s10539-009-9187-5
Spelke, E. S. (2000). Core knowledge. American Psychologist, 55, 1233-1243. doi:
10.1037/0003-066x.55.11.1233
Spelke, E. S. (2011). Quinean bootstrapping or Fodorian combination? Core and constructed knowledge
of number. Behavioral and Brain Sciences, 34, 149-150.
Wynn, K. (1992). Children's acquisition of the number words and the counting system. Cognitive
Psychology, 24, 220-251. doi: 10.1016/0010-0285(92)90008-p