
Chapter Three Literature Review - University of Aucklandthomas/staff/mf/MaxinePhD... · would be viewed as a way of thinking, of acquiring the habits and dispositions of interpretation


Chapter Three

Literature Review

"Statistics is a mathematical science, but it is not a branch of mathematics. Statistics is a methodological discipline, but it is not a collection of methods appended to economics or psychology or quality engineering. The historical roots of statistics lie in many of the disciplines that deal with data; its development owes much to mathematical tools, especially probability theory. But by the mid-twentieth century statistics had clearly emerged as a discipline in its own right, with characteristic modes of thinking that are more fundamental than either specific methods or mathematical theory. . . . The higher goal of teaching statistics is to build the ability of students to deal intelligently with variation and data" (Moore, 1992b, pp. 15-16).

3.1 Introduction

Many statisticians and statistics educators are calling for a reform in the teaching of statistics (Bailar, 1988; Snee, 1993; Wild, 1994; Garfield, 1995; Moore, 1997). Many discuss taking a wider view of statistics: teaching statistics through authentic statistics, project work, working with real and complex data sets, and interpreting media or statistically based reports. With such experiences students can become enculturated into making sense of situations from a statistical perspective. These educators and statisticians believe that teaching must not only incorporate the procedures and techniques of statistics but also develop students' statistical thinking.

With the advent of EDA (exploratory data analysis) and the increase in student access to technology, a tension exists between the EDA method and the classical method of statistics (Biehler, 1994b). The cultures of thinking associated with these methods are broadly categorised by Biehler as deterministic for the EDA method and probabilistic (non-deterministic) for the classical method. There is also a continuum of opinion on what should be taught. Some people advocate removing or drastically reducing the teaching of probability (Moore, 1992b) while others believe that probability should be incorporated into the teaching of statistics instead of being taught separately (Shaughnessy, Garfield & Greer, 1996). The probabilists believe that the probabilistic way of thinking, particularly with respect to random behaviour, is a unique and useful way of perceiving the world (Borovcnik & Peard, 1996). In such a debate the optimum path for developing statistical thinking could be to incorporate and elucidate both cultures of thinking (Biehler, 1994b; Pfannkuch & Brown, 1996).

Another consideration in this debate is the separation of probability and statistics in teaching. This separation has tended to result in probability teaching being focussed on the mathematical root of chance and gambling rather than the statistical root of chance and social data. This teaching approach is problematic for the statistics learner (Pfannkuch, 1997a). The conceptualisation of chance itself is subject to constant re-evaluation and re-interpretation. For example, it has led to: a deterministic versus non-deterministic debate amongst philosophers and scientists last century (see Chapter 2); a reassessment of system or chance causes amongst quality management statisticians in the light of higher industry expectations (Pyzdek, 1990); and an ever-changing conception of chance amongst the new chaos mathematicians (Stewart, 1989). Furthermore, the quantification of probability has produced a concerted Bayesian versus frequentist debate amongst statisticians today. It is against this background of changing perspectives and changing technology that an analysis of the nature of statistical thinking becomes a matter of perception and depends on the particular stance of the researcher. Therefore it is pertinent that the particular stance and perspective taken in this thesis be clarified, including the domain of the statistical thinking that is under consideration.

The domain of this research is the broad thinking skills that are invoked during the carrying out of an empirical statistical enquiry and in the reading of a report on such a process. This enquiry cycle ranges from the problem situation to the formulation of the questions, through data collection and analysis, to an interpretation of the data in terms of the original situation.

Polya (1945) proposed a four-phase model (understand the problem, devise a plan, carry out the plan, look back) to describe a general approach to problem solving. Statisticians such as MacKay and Oldford (1994) have devised a five-step model to describe the approach for statistical investigations (problem, plan, data, analysis, conclusion) while other people (Davis, 1991) prefer a modelling perspective to describe the approach for applied mathematics (real world situation, real model, mathematising to mathematics model, mathematics results, interpreting and validating to real world situation). All these approaches are helpful tools for thinking about the statistical aspects of a problem situation and the characteristics and nature of thinking involved at and between each phase. But, according to Schoenfeld (1987a), there is a huge difference between description, which characterises a procedure, and prescription, which characterises a procedure in sufficient detail to serve as a guide for implementing the strategy. The same argument could apply to statistical thinking: it can be described in broad terms such as curiosity and scepticism (Department of Statistics, 1997), yet to prescribe statistical thinking in a form that is useful for teaching is entirely another matter.

Exploring the characteristics of statistical thinking for the purpose of informing teaching practice necessitates reviewing research from fields such as psychology, statistics education, mathematics education, general education and statistics. From the previous chapter on the history of statistical thinking it is clear that mathematics, probability and statistics are linked together in the content domain. Therefore it is appropriate to review applicable research in these three content areas as well. In order to cover such a wide area, only research that is perceived to be relevant to the main debate is considered. This review is divided into the following categories: mathematical problem solving perspective; psychologists' perspective; thinking in a data-based environment - educationists' perspective; thinking in a data-based environment - statisticians' perspective; and current theoretical models for stochastic thinking. This literature review was completed in May 1997. Thus references from 1997 onwards have only been added if they were deemed essential to the research.

3.2 Mathematical Problem Solving Perspective

Shaughnessy (1992) states that there are close links between research on mathematical problem solving and statistics, as each involves the modelling of physical phenomena and decisions on how to approach problems. In these respects teaching statistics is teaching problem solving. Therefore a consideration of research in the mathematical problem solving area may be informative for statistical problem solving.

3.2.1 Influences on Mathematical and Statistical Problem Solving

Key Points:

• Domain specific knowledge is vital.
• Students need facility in recognising similarities in problems.
• Students need to develop a disposition to engage in critical analysis.
• There are socio-cultural influences on how mathematics is perceived and learnt.

According to Silver (1987), research on cognitive skills invariably suggests that domain specific knowledge appears to be vital in problem solving. "Expertise develops when an extensive experience with a rich set of examples creates a highly textured knowledge base" (p. 52). Kilpatrick (1987) concurs with these findings and adds that well organised subject matter knowledge and background knowledge are needed for the problem formulation stage. Failure to solve problems can often be attributed to failure to understand the problem adequately, particularly in regard to semantic understanding. Kilpatrick notes that, after receiving school instruction, students no longer attended to the semantics. Instead, the students relied on the surface features of the problem to choose the arithmetical operation. Another consideration is that students will only be successful in solving a problem if there is "a match between their own knowledge representation and the problem situation at hand" (Lester & Kroll, 1990, p. 56). Problem solving in the classroom will also benefit if learners' mathematical experience in everyday settings is connected to mathematics in classroom settings (Lave, Smith & Butler, 1989).

In order to create a disposition towards posing questions and problem finding, Kilpatrick (1987, p. 142) suspects that facility in "identifying important features of a problem, abstracting from previous problems encountered and seeing problems as organised into related classes" is required. These components appear similar for problem solving, particularly in the work of Krutetskii (cited in Lester, 1983), who defined good problem solvers on the basis that they could: distinguish relevant from irrelevant information; quickly and accurately see the mathematical structure of a problem; generalise across a wide range of problems, with a significant amount of transfer of information occurring from a target problem to a structurally related problem; and remember the formal structure of a problem for a long time. Lester and Kroll (1990) take another viewpoint on disposition, which they have categorised into two components: affects and socio-cultural contexts. Attitudes such as the willingness to take risks and tolerance of ambiguity are included in the affects component, whereas the socio-cultural influence includes the values and expectations nurtured in a school, which help to shape how mathematics is learnt and how it is perceived. These aspects are particularly important to consider (Gal, Ginsburg & Schau, 1997) if the aim of statistics education is to produce critical thinkers.

Resnick (1989, p. 33) argues that good readers and good reasoners in such fields as

political science, social science and science "treat learning as a process of interpretation,

justification and meaning." Therefore such a disposition should be cultivated in the

teaching of mathematics as it would develop skills not only in the application of

mathematics but also in thinking mathematically. Her belief is that argument and debate

about interpretation and implications should be as natural in mathematics as it is in politics

and literature. This plea to reassess the mode of teaching mathematics is directly

applicable to statistics which by its nature, through the analysis of data, invites multiple

interpretations and implications. Resnick (1989) believes that a reconceptualisation of

thinking and learning in mathematics will occur only if teaching is perceived as a

Page 5: Chapter Three Literature Review - University of Aucklandthomas/staff/mf/MaxinePhD... · would be viewed as a way of thinking, of acquiring the habits and dispositions of interpretation

21

socialisation process. This will involve an acculturation process whereby mathematics

would be viewed as a way of thinking, of acquiring the habits and dispositions of

interpretation and sense-making, as well as a way of acquiring a set of skills and

strategies and knowledge (Schoenfeld, 1989).

This view is also supported by the Department of Statistics (1997, p. 1), although it is unsure how to implement such a socialisation process. It states, for its 'Introduction to Statistics' first year course at the University of Auckland, that its most important aim for this group of students is to improve "general numeracy and instill an ability to think statistically." This thinking is driven by and supported by specific statistical knowledge, such as understanding descriptions of statistical analyses and learning to use a set of statistical tools. Even though the most important aim of the first year statistics university course is interpretation and to think statistically, it is not specifically taught. There is simply a hope that it will occur through using real data for all problems and asking students questions about that data.

"The technical areas are easy to teach and easy to examine. Many of the ideas listed under the first item [aim] above have as much to do with habits of mind as with technical content but they are more important in real life than the technicalities. A great many of these qualities cannot be taught directly. You can only learn them by experience, having been exposed to a great number of situations" (Department of Statistics, 1997, p. 1).

Clearly there is a need to articulate statistical thinking for teaching purposes so that teachers are aware of the types of thinking that they should be developing.

3.2.2 Metacognition and Reasoning in Statistics

Key Points:

• Reasoning in mathematics is different from reasoning in statistics.
• Teaching should draw attention to the metacognitive components of problem solving in mathematics and, by implication, in statistics.

There is an emerging body of research on ways of thinking for mathematics problem solving (Resnick, 1989) which may or may not pertain to statistics. Statisticians such as Moore (1992b, p. 15) are stating that "statistics is not mathematics." Begg (1995) believes that mathematics is being redefined with an emphasis on problem solving in order to ensure that reasoning is part of instruction. Buzeika (1996, p. 18) holds the opinion that the inclusion of problem solving in the mathematics curriculum "now brings statistics more clearly under the umbrella of mathematics." She believes that whether statistics is a separate discipline depends on one's perception of mathematics. However Begg (1995) cautions that reasoning with uncertainty in statistics and reasoning with certainty in pure mathematics are different types of reasoning and that teaching should make students aware of the difference.

Another facet of mathematical problem solving that is recognised by researchers is the role of metacognition (Schoenfeld, 1987b; Lester & Kroll, 1990). Generally metacognition is regarded as having two aspects: knowledge of cognition and regulation of cognition (Shaughnessy, 1992). Knowledge of cognition includes knowledge of strategies and self knowledge of beliefs and attitudes. Regulation of cognition includes monitoring how decisions are made under uncertainty and mentally stepping aside to reflect on the process of decision making. Uncertainty in mathematics is used in the sense that not everything about the problem is known. However, in a statistical context the term uncertainty has a more specific definition, and therefore the decision making process in statistics will add some more dimensions to that of mathematics. Schoenfeld (1987b, p. 210) characterises efficient self-regulation thus: "people who are good at it are the people who are good at arguing with themselves, putting forth multiple perspectives, weighing them against each other and selecting among them."

Beliefs can affect problem solving performance. For instance, many students believe that a mathematical problem can be solved through focussing on key words. Another example, supported by a large body of evidence (e.g. Tversky & Kahneman, 1982; Amir & Williams, 1997; Truran, 1998), is that people's judgements about probability and statistics are affected by their beliefs and by perceptions of their experiences. Schoenfeld (1987a) believes this is because people are natural theory builders, continually constructing explanations to interpret their reality. Because everything that is seen and experienced is an interpretation of those events, misinterpretations will occur.

According to Lester and Kroll (1990) there is evidence that, if students' attention is drawn, during instruction and evaluation, to the metacognitive components of problem solving, then their performance will improve. Traditional instruction generally ignores aspects such as the teacher modelling the implicit reasoning process used in solving problems (Campione, Brown & Connell, 1989). Schoenfeld (1983) also believes that greater attention must be paid in the teaching of mathematical problem solving to metacognitive behaviour, as at least half of the process of mathematical problem solving is metacognitive, with the 'manager' and the 'implementer' working in tandem. The 'manager', or metacognitive part, continually asks questions of a strategic and tactical nature, deciding at branch points such things as which perspective to select, which direction a solution should take, or which path should be abandoned in the light of new information. He states that "there has not been at the global level an adequate framework for clearly dealing with decisions that ought to have been considered but were not" (p. 349) and that "metacognitive managerial skills provide the key to success" (p. 369). Garofalo and Lester (1985) put forward, for discussion, a cognitive-metacognitive framework based on Polya's four-phase model. They believe that the critical role of metacognition in mathematical performance should be made more explicit for instruction.

Lester (1989) queries whether metacognitive behaviours are the same for solving mathematical problems as for reading a passage of prose. He believes that whilst there may be similarities, there must be an assumption that there are differences, as metacognitive activities are "driven by domain-specific knowledge" (p. 117). Similar statements could be made for solving statistically based problems. The domain-specific knowledge for mathematics and statistics is not the same, and therefore it perhaps cannot be assumed that the way of thinking is the same. Thus what may be needed is a theoretical base for a reconceptualisation of statistics which links the social, cognitive and metacognitive aspects of thought and learning, and which both distinguishes itself from and links itself to mathematical thinking and learning.

3.3 Psychologists' Perspective

3.3.1 General

Key Points:

• For probability problems context is not used to solve the problem, whereas it is in statistics.
• Rationalisation of events is related to a psychological need, and this leads people to interpret what could be random events in a deterministic manner.

The foundations for research in the learning of probability could be largely attributed to the work of the psychologists Tversky and Kahneman, with the publication of their first paper in 1972. Their basic hypothesis is that statistically naive people make probability assumptions based on the employment of representativeness and availability heuristics. These findings, based on mathematical gambling-type problems, may have a bearing on how people solve statistical problems. For example, they found that people believe that in a family of six children the sequence BGGBGB is more likely to occur than BBBBGB. If people are using a theoretical probability model and are 'seeing' the births sequentially, not as three boys and three girls versus five boys and one girl, then this finding is pertinent. Konold (1995) found out, to his embarrassment, that in coin flipping HTHHT is more likely than HHHHH to occur first if run in a string, but equally likely if done in blocks of five. Thus assumptions about how the problem is viewed by the subjects must be checked out, not only from a probability perspective but also from a statistical perspective. If this problem is viewed statistically then many factors may come into play, such as: the probability of a boy is not the same as the probability of a girl; the probability of a girl increases if the preceding child is a girl (Wild & Seber, 1997); a mother with a dominant personality is more likely to have boys (research reported in New Zealand Herald, 1996); the country where the child is born can affect the number of boys and girls (e.g. abortion of girls) (reported in New Zealand Herald, 1996); and so forth. This means that such a problem would have to be looked at in context and related to the real world situation before such a decision could be made.
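Konold's coin-flipping finding can be checked directly by simulation. The sketch below is illustrative only (the trial count, string length and seed are arbitrary choices, not from the thesis); it contrasts the two readings of the question: which pattern turns up first in an ongoing string of flips, versus how often each exact pattern occurs in isolated blocks of five.

```python
import random

random.seed(1)
trials = 20_000

# Reading 1: in an ongoing string of flips, which pattern shows up first?
first_wins = 0
for _ in range(trials):
    s = "".join(random.choice("HT") for _ in range(200))
    i, j = s.find("HTHHT"), s.find("HHHHH")
    if i != -1 and (j == -1 or i < j):
        first_wins += 1
print("HTHHT before HHHHH:", first_wins / trials)  # noticeably above 0.5

# Reading 2: in isolated blocks of five flips, each exact sequence
# has the same probability, 1/32.
counts = {"HTHHT": 0, "HHHHH": 0}
for _ in range(trials):
    block = "".join(random.choice("HT") for _ in range(5))
    if block in counts:
        counts[block] += 1
print(counts)  # both counts close to trials / 32
```

The ongoing-string reading favours HTHHT because HHHHH overlaps heavily with itself (a broken run of heads must restart from scratch), while the block reading makes the two sequences exactly equally likely, which is why subjects' answers depend on how they interpret the question.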

According to Fischbein (1987), intuition plays an important part in people's perception of situations and appears to be related to a psychological and behavioural need to find plausible reasons for those situations. These primary intuitions, that every event must have a cause, are developed naturally through enculturation during childhood. A modern society is founded on rationalisation, on the ability to reason about and to control events, at least partially, within the social and physical environment. This rationalising tendency leads people to interpret what could be random events in a deterministic manner. "Intuitions themselves become more 'rational' with age in that they [students] adopt strategies and solutions which are based on rational grounds" (Fischbein, 1975, p. 65). Sloman (1994, p. 4) found in a study that there is a tendency for people "to capture relevant information in one coherent package" and that the act of constructing an explanation causes the neglect of alternative explanations, but this could be "an effective strategy for reducing uncertainty in a variety of situations."

Despite misgivings about Tversky and Kahneman (1982) simplifying problems that are essentially complex, it is worthwhile to reflect on the framework provided by them and the role of intuition as described by Fischbein (1987). Tversky and Kahneman describe three heuristics: representativeness; availability; and adjustment and anchoring. The first two heuristics, which people seem to employ when assessing probabilities and predicting values, are discussed in Sections 3.3.2 and 3.3.3.

3.3.2 Representativeness

Key Points:

• People employ a representativeness heuristic to assess probabilities and to predict values.
• For probability problems context is not used to solve the problem, whereas it is in statistics.


According to the representativeness heuristic, people believe that a sample will reflect the population from which it is drawn. Many examples abound of how people use the representativeness heuristic to give a judgement under uncertainty. This heuristic is subdivided into explanatory components. One such example is known as the base-rate fallacy. A frequent protocol is to give subjects a brief description of a person, such as a male, 45, conservative, ambitious and with no interest in political issues. The subjects are then asked to assess the probability that the description belongs to an engineer given that the individual is sampled from (a) 70 engineers and 30 lawyers, or (b) 30 engineers and 70 lawyers. In this case subjects give essentially the same probability for both situations. When there is no description the probability is given correctly. Because the description appears more representative of a stereotypical engineer, people pay attention to that facet rather than the mathematical aspect.
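What the normative answer looks like can be made concrete with Bayes' theorem. In the sketch below the likelihoods are invented purely for illustration (assume the description is four times as likely to fit an engineer as a lawyer; nothing here comes from Tversky and Kahneman's protocol). The point is only that, normatively, the two pools should yield different probabilities, whereas subjects report the same answer for both.

```python
def posterior_engineer(p_engineer, p_desc_given_eng, p_desc_given_law):
    """P(engineer | description) by Bayes' theorem."""
    numerator = p_desc_given_eng * p_engineer
    denominator = numerator + p_desc_given_law * (1.0 - p_engineer)
    return numerator / denominator

# Pool (a): 70 engineers, 30 lawyers; pool (b): 30 engineers, 70 lawyers.
# Assumed likelihoods: description fits 80% of engineers, 20% of lawyers.
for base_rate in (0.70, 0.30):
    print(base_rate, round(posterior_engineer(base_rate, 0.8, 0.2), 2))
# The same description gives 0.90 in pool (a) but only 0.63 in pool (b).
```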

Again the argument could be promoted that statistics must be interpreted in context, and that such a problem is asking the student to strip away the context and solve the problem as a mathematical or quantitative one. It is interesting to note that when no description was given the problem was solved 'correctly'. When a description was given the problem may have been solved from a statistical perspective. This reinforces the notion that statistics is numbers in context.

Also attributed to the base-rate fallacy is the fact that studies have consistently shown that people do not estimate probabilities according to Bayes' theorem. For example, if subjects are asked to estimate the probability that a 40 year-old woman has breast cancer given that she has had a positive test, they will focus on the probability of a positive test and not take into account the base rate of the disease. Gigerenzer (1996, cited in Bower, 1996) disputes such findings. In his studies he used frequency information rather than percentages and concluded that more subjects obtained a correct answer with information presented in frequency form than in percentage form. His assumption is that in the real-world environment human beings make decisions on the frequency of experienced events.
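A worked example shows both presentations side by side. The figures below are assumed purely for illustration (roughly the orders of magnitude common in this literature, not numbers from the thesis): a base rate of 1%, a test sensitivity of 80% and a false-positive rate of 9.6%.

```python
# Illustrative figures only: base rate 1%, sensitivity 80%, false positives 9.6%
base, sens, fpr = 0.01, 0.80, 0.096

# Percentage (probability) form: Bayes' theorem
p_positive = sens * base + fpr * (1 - base)
p_cancer_given_pos = sens * base / p_positive
print(round(p_cancer_given_pos, 3))

# Frequency form (Gigerenzer's framing): imagine 10 000 women
n = 10_000
with_cancer = base * n               # 100 women have cancer
true_pos = sens * with_cancer        # 80 of them test positive
false_pos = fpr * (n - with_cancer)  # ~950 healthy women also test positive
freq_answer = true_pos / (true_pos + false_pos)
print(round(freq_answer, 3))         # same answer, under 0.08
```

Subjects who focus on the 80% sensitivity alone miss that positive tests are dominated by the much larger healthy group, which is exactly the base-rate neglect described above; the frequency framing makes that dominance visible by simple counting.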

Another example of the representativeness heuristic occurs with insensitivity to sample size. The effect of the sample size on probability and variation does not appear to be considered as a factor by subjects. For example, the probability of obtaining an average height greater than 180 cm is assigned the same value for samples of 1000, 100, and 10 men. Another example of this heuristic is that subjects will, on the basis of single assessment-performance lessons of student teachers, give extreme predictions on their performance as teachers five years later. Subjects are taking one instance as being representative of the whole picture. In practice this is actually done. For example, a student sits an examination at the end of ten years schooling, and on the basis of this one examination is passed or failed. Statistically this is not sound practice, yet in reality judgements are formed on such a basis. Thus it may be a matter of stating to the subjects whose world they should operate in when answering such questions, as they are very much context specific.
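The height example can be simulated to show what subjects are missing. The population figures below (mean 175 cm, standard deviation 7 cm) and the Monte Carlo settings are assumptions for illustration, not data from the thesis:

```python
import random
import statistics

random.seed(42)

def p_mean_above(n, threshold=180.0, reps=4000):
    """Estimate P(sample mean height > threshold) for samples of size n."""
    hits = 0
    for _ in range(reps):
        xbar = statistics.fmean(random.gauss(175, 7) for _ in range(n))
        if xbar > threshold:
            hits += 1
    return hits / reps

results = {n: p_mean_above(n) for n in (10, 100, 1000)}
print(results)  # the probability shrinks sharply as n grows
```

Assigning the same probability for samples of 10 and 1000 amounts to ignoring that the spread of the sample mean falls like 1/√n, so large samples almost never stray 5 cm above the population mean.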

A further example is the gambler's fallacy, or the 'law of small numbers', which can be demonstrated by the tossing of a coin. When a run of six heads produces the answer that the next toss will more than likely be a tail, this is called the gambler's fallacy. This, too, is explained by the representativeness heuristic, since people expect a small sample to reflect the characteristics of the population. Consequently researchers put too much faith in the results of small samples. But there is an ambivalence here. From another perspective, people do not have faith in small (i.e. small in proportion to the size of the population) randomly selected samples (Bartholomew, 1995) and do not believe that the sample will reflect the population.
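The fallacy itself is easy to expose by simulation. In the sketch below (string length and seed are arbitrary choices), we look at every point where six heads in a row have just occurred and tally the next flip:

```python
import random

random.seed(0)
s = "".join(random.choice("HT") for _ in range(1_000_000))

# Collect the flip immediately following every run of six heads.
next_flips = [s[i + 6] for i in range(len(s) - 6) if s[i:i + 6] == "HHHHHH"]
tails_rate = next_flips.count("T") / len(next_flips)
print(round(tails_rate, 3))  # close to 0.5: the coin has no memory
```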

Misconceptions about regression to the mean are manifested when people believe that

their action has caused a change. For example, with the regression effect alone acting,

behaviour is most likely to improve after punishment and most likely to deteriorate after

reward. In quality control this would be called understanding the theory of variation.

According to Joiner and Gaudard (1990), many managers fail to recognise, interpret, and

react appropriately to variation in employee performance data. Therefore if an employee’s

sales for one month are poor the manager will chastise the employee. The next month the

employee’s sales are good, which the manager believes have resulted from his or her

chastisement. However the employee’s performance could be explained by variation

alone. This suggests that people are being asked to take a probabilistic perspective of the

world rather than a deterministic one in certain situations.
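A small simulation (an illustrative sketch, not drawn from Joiner and Gaudard; the skill level of 100 and the noise scale of 10 units are invented assumptions) shows how 'improvement after a poor month' arises from variation alone:

```python
import random

random.seed(2)

# Monthly sales = a constant underlying skill plus independent noise;
# nothing the manager does changes the process.
skill = 100.0
months = [skill + random.gauss(0, 10) for _ in range(100_000)]

# Look at each poor month and ask whether the following month was better.
pairs = zip(months, months[1:])
poor = [(m, nxt) for m, nxt in pairs if m < 90]   # noticeably below average
improved = sum(nxt > m for m, nxt in poor) / len(poor)
print(round(improved, 2))  # well above 0.5: regression to the mean alone
```

Chastisement after the poor month would appear to 'work' in the great majority of these simulated cases, even though the process never changed.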

It would appear from the research that people often do not apply representativeness in

those instances where it is really appropriate to do so. It is believed that representativeness

is fundamental to the epistemology of statistical events as it is how claims about a

population are made with a certain degree of confidence. Judgement issues in statistics are

not simple, particularly when dealing with a real situation when the judgement is not

based on statistical evidence alone. However these scenarios are explained, one thing is clear: if people are operating in the world in these ways, then the teaching of probability and statistics, which offers a different view of reality, will be problematic (Shaughnessy, 1992).


3.3.3 Availability

Key Points:

• People employ an availability heuristic to assess probabilities and to predict

values.

• Judgement criteria used by people are complex and may be dependent upon

context, and the experiences and beliefs of the person.

The availability heuristic is used when people judge events based on their personal experiences and perceptions. For example, the assessment of the risk of heart attack among middle-aged people may be made by recalling occurrences amongst one's acquaintances.

If the data on heart attacks are not made available to subjects (as they were not by Tversky and Kahneman) then the only recourse for people is to refer to their own inbuilt data set and try to come up with an objective view. Such people could be considered by Joiner (1990, cited in Barabba, 1991, p. 4) to be operating statistically:

"This illusion [that statistics is objective and does not involve subjective decisions - that someone in the end has to make a decision based on the data on hand] perpetuates our practice of teaching students things they don't want to know, things that will mislead once they leave school . . . this illusion prevents us from making great discoveries about how to teach statistics that help us understand the causes of today's problems and make useful changes in the future."

People, when using the availability heuristic, demonstrate biases of imaginability. For

example subjects are given combinatorial tasks such as "How many different committees

of k members can be formed from a group of 10 people?" Since small committees are

easier to imagine, people believe that it is possible to make up more committees of 2

people than 8 people from a group of 10 people. Another example is imagining the risk

for an adventurous expedition, which subjects tend to overestimate. This imagination bias

may be due to the lack of or type of data sets being presented to the subjects. This bias

also includes a perception bias which could have many influences. For example, Howard

and Antilla (1979, cited in Bradley, 1982, p. 3) asked three population groups, the

League of Women Voters, college students and professional club members to rank

activities on the basis of their perceptions of the risk of resultant death. They found that

their perceptions appeared to be influenced by the biases of the media and peer-group

emotional reactions.
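The committee example above can be checked arithmetically: binomial coefficients are symmetric, so a group of 10 yields exactly as many 2-person committees as 8-person committees (every committee of 2 leaves behind a committee of 8). A one-line check:

```python
import math

# C(10, 2) and C(10, 8) count the committees in the task quoted above;
# symmetry of binomial coefficients makes them equal.
print(math.comb(10, 2), math.comb(10, 8))  # 45 45
```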

The conjunction fallacy, also associated with the availability heuristic, occurs when

subjects tend to overestimate the frequency of co-occurrence of natural associates such as

‘suspiciousness and peculiar eyes’ versus ‘suspiciousness’. Another example is that

subjects rate the statement ‘earthquakes and floods’ as more likely than ‘earthquakes’. Further

studies (Shaughnessy, 1992) have raised the possibility that subjects, through language


alone, are confusing conditional probability with conjunction probability. This raises

issues for teaching since students ultimately have to deal with problems expressed in

everyday language.
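The normative rule that the conjunction fallacy violates is that a conjunction can never be more probable than either of its conjuncts. A simulation sketch (the event rates of 0.10 and 0.05 are invented for illustration) makes the point with the earthquake example:

```python
import random

random.seed(4)

# Simulated years: every 'earthquake and flood' year is also an
# 'earthquake' year, so the conjunction can never be more frequent.
n = 100_000
years = [(random.random() < 0.10, random.random() < 0.05) for _ in range(n)]

p_quake = sum(q for q, f in years) / n
p_both = sum(q and f for q, f in years) / n
assert p_both <= p_quake   # holds for any pair of events
print(round(p_quake, 3), round(p_both, 3))
```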

Lecoutre (1992) proposes that the equiprobability bias should be added to those of

Tversky and Kahneman. In an experimental study of 1000 students of various

backgrounds in probability (from nothing to a lot) an equiprobability bias was observed.

This bias is highly resistant and a thorough background in probability did not lead to a

notable increase in correct solutions. These findings reaffirm Fischbein (1987), who

believes that intuitions are deeply rooted in a person's basic mental organisation. The

cognitive model used by the subjects appears to follow the argument that results are

equiprobable because it is a matter of chance, that random events are equiprobable by

nature. It is interesting to note that, in instruction, students are taught that a random

sample means that each object in the population has an equal chance of being chosen.
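The equiprobability bias is easy to contradict empirically. In the sketch below (the two-dice example is an illustration of the point, not taken from Lecoutre), the outcomes of a chance process are far from equally likely:

```python
import random
from collections import Counter

random.seed(5)

# The sum of two fair dice is 'a matter of chance', yet 7 occurs about
# six times as often as 2 or 12 because more combinations produce it.
n = 120_000
sums = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(n))
for total in (2, 7, 12):
    print(total, round(sums[total] / n, 3))
```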

Tversky and Kahneman state that statistical principles are not learned from everyday

experience because the relevant instances are not coded appropriately. It is not natural to

group events by their judged probability. People do not ‘attend’ or ‘notice’; for example, they do not ‘notice’ that successive lines of text differ more in average word length than successive pages do.

At present Gigerenzer (1996) is challenging this field of study which originated with

Tversky and Kahneman. Gigerenzer believes that their proposed heuristics are flawed in

that there is an expectation that the human mind will think according to statistical

calculations. He bases his theories on the assumption that human reasoning is rational. He

theorises that individuals do not usually possess the time, knowledge or computational

ability to reason optimally. Thus he has proposed and confirmed, in some studies, that

human reasoning operates on a ‘take the best’ strategy which involves a brief memory search in reaching decisions. Judgements are shaped by numerous, sometimes

contradictory, imperatives in the social world and therefore productive theories about the

mind should give consideration to the ecology of rationality.

All these biases and Gigerenzer’s challenges tend to suggest that context is playing a large

part in the interpretation of probabilistic events, whereas context free problems, where the

mathematics is transparent, tend to give normative probability solutions (Shaughnessy,

1992). This presents a conundrum to statistics educators as it suggests that teaching a

subject that depends on context for interpretation will be a complex process. Judgements

under uncertainty appear to need compatibility with an entire web of beliefs held by an

individual. Hence teaching statistics will not be easy as students have their own inbuilt

beliefs, biases and heuristics. As Konold (1991, p. 144) stated:


"My assumption is that students have intuitions about probability and that they can't check these in at the classroom door. The success of the teacher depends on how these notions are treated in relation to those the teacher would like the student to acquire."

3.3.4 The Role of Intuition

Key Points:

• Mental models are needed for productive reasoning.

• Learners’ intuitions such as the primacy effect may become obstacles to

interpretation and statistical thinking.

As new ways of representing reality, or new ways of viewing or making sense of the

world are developed, conflicts arise between intuitive and logical thinking. Fischbein

(1987) makes the point that these perspectives cannot be arrived at through natural

experience but through some educational intervention. Therefore it is important in

education to take cognisance of these primary intuitions as they will influence the learning

process (Borovcnik & Bentz, 1991).

Fischbein (1987, p. 64) believes that intuitions play a role in the form of affirmatory,

conjectural and problem solving effects in mathematics education. From the statistical

perspective these primary intuitions may have an effect on statistical thinking. An example

of the affirmatory role effect is that there may be a tendency for students to intuitively

infer a property for a certain population, based on the fact that a certain number of that

population have been observed, or recalled as having that property. Tversky and

Kahneman (1982) have also noticed this tendency which they refer to as the availability

bias (see Section 3.3.3). A conjecture effect occurs when students base their predictions

about a future event on their everyday experience. A problem solving effect occurs

when subjective anticipatory intuitions appear to influence the solution. Another point of

interest for statistical thinking is Fischbein's (1987, p. 193) notion of epistemic freezing

whereby "a person ceases at some point to generate hypotheses . . . one tends to close the

debate . . . [need for a] decision stronger than the need to know.” A feature of the

judgemental process is that the first interpretation tends to influence subsequent

inferences. Fischbein calls this the primacy effect and believes it is an obstacle to higher

order interpretation.

Fischbein states, that in the learning process, intuitive, analogical and paradigmatic

models are used for the learner to gain access to an understanding of the concept.

Fischbein's (1987, p. 125) hypothesis is that we need such models for productive

reasoning:


"The essential role of an intuitive model is, then, to constitute an intervening device between the intellectually inaccessible and the intellectually acceptable and manipulable.
1. Model has to be faithful to the original on the basis of a structural isomorphism between them.
2. Model must have relative autonomy.
3. Must correspond to human information-processing characteristics."

He considers that these three are major factors in shaping intuitions. However he warns

that the properties inherent in the model may lead to an imperfect mediator and hence can

cause incomplete or incorrect interpretations.

The statistical models and tools that have been developed, such as boxplots and normal

distributions, may, in the light of what Fischbein is saying, be considered as imperfect

thinking mediums that lead to misinterpretation. Biehler (1996) has identified at least four

obstacles or barriers in student use and interpretation of boxplots. One avenue to

overcome such misinterpretations is for the teacher and learner to become aware of these

obstacles. That is, students should be made aware that intuition can lead them astray.

Another avenue to pursue is that advocated by Fischbein (1987, p. 191): "In order to

overcome such intuitive obstacles one has to become aware of the conflict and to create

fundamentally new representations." Intuitive reasoning will not disappear and therefore it

should have a complementary role to logical reasoning. Reasoning, whether intuitive or

logical, needs models to facilitate it. Therefore statistics educators may need to reassess

the existing thinking tools in the discipline and perhaps create new thinking tools that are

more closely aligned to human reasoning as proposed by Fischbein.

Shaughnessy (1992) comments that Fischbein's ideas are particularly important in the

teaching of stochastics as many phenomena conflict with primary intuitions. Borovcnik

(1994) believes that Fischbein's ideas offer a promising strategy for teaching stochastics

as a mathematical approach does not work with empirical data. Thus conceptual

development for statistics may need new representations and new teaching approaches

which take cognisance of the ideas and intuitions of the learners.


3.4 Thinking in a Data-based Environment - Educationists’

Perspective

3.4.1 General

Key Point:

• The domain of statistical thinking should be widened to encapsulate the whole

process, from problem formulation to the interpretation of the conclusion in terms of the

context.

Much research into the thinking required in a data-based environment has been confined to the domain of analysis: that is, organising, describing, representing and analysing data

with an emphasis on graphs and the calculation of statistical summaries. Shaughnessy,

Garfield and Greer (1996, p. 206) suggest widening the domain to include ‘Look Behind

the Data’ since data arise from a specific context.

"Data are often gathered and presented by someone who has a particular agenda. The beliefs and attitudes lying behind the data are just as important to include in the treatment of data handling as are the methods of organising and analysing the data . . . it is mathematical detective work in a context . . . relevance, applicability, multiple representations and interpretations of data are lauded in a data handling environment. Discussion and decision-making under uncertainty are major goals . . . so too are connections with other disciplines.”

Whilst agreeing that the domain should be widened so that statistics is not viewed as

‘number crunching’, there is room for debate on the notion that someone has an agenda. It

may be that a person has a particular perception of the situation. To suggest that there may

be an agenda is implicitly suggesting a motive. This is probably not the case but rather is a

reflection of the author’s perspective or view of reality. The statement that statistics is

‘mathematical’ detective work makes an unwarranted assumption about the nature of the

detective work. Biehler and Steinbring (1991) use the term ‘statistical’ detective work to

describe the process of questioning the data through to a solution. Perhaps the case for the

use of the term ‘statistical’ detective work is best illustrated by Cobb and Moore (1997), who demonstrate how interpreting a graph involves the interplay between pattern

and context with essentially no reference to mathematical content.

Hancock, Kaput and Goldsmith (1992) identify data creation and data

analysis as making up the domain of data modelling. Their modelling perspective of

statistics encapsulates the idea that data are a model of a real world situation. "Like any

model it is a partial representation and its validity must be judged in the context of the uses

to which it will be put. The practical understanding of this idea is the key to critical


thinking about data-based arguments.” They state that data creation has been neglected

and includes:

"deciding what data to collect, designing a structure for organising the data and establishing systematic ways of measuring and categorising . . . data creation informs data analysis because any conclusion reached through analysis can only be as reliable and relevant as the data on which it is based. The most interesting criticisms of a data-based argument come not from scrutinising graphs for misplotted points . . . but from considering some important aspect of the situation that has been neglected, obscured or biased in the data collection” (p. 339).

Their data modelling domain appears to capture only the domain of providing

information, not the decision-making or information-using domain, nor the problem-formulation domain, which should also be part of the data-based environment.

The widening of the domain for thinking in a data-based environment is supported by

MacKay and Oldford (1994), who view the domain of statistics as being akin to the

scientific enquiry empirical cycle, and have coined the term PPDAC to describe their

interpretation:

“• Problem: The statement of the research questions.
• Plan: The procedures used to carry out the study.
• Data: The data collection process.
• Analysis: The summaries and analyses of the data to answer the questions posed.
• Conclusion: The conclusions about what has been learned" (adapted from MacKay & Oldford, 1994, p. 1.8).

Thus for the research question under investigation a broad view, from the problem

formulation stage to the interpretation of the results in terms of the context in which the

problem is set, is taken. An attempt will be made at describing and prescribing the broad

or global characteristics of statistical thinking in a way that is similar to Polya (1945),

who gave a broad outline for problem solving in mathematics.

3.4.2 Student Thinking in a Data-Based Environment

Key Points:

• Students tend to focus on individual causes to generalise rather than on group

propensity.

• Students believe that their own judgement of a situation is more reliable than

what can be obtained from data.

I will review Hancock et al.’s (1992) research in some detail as it is one of the few

research projects that is directly applicable to the wider domain of statistical thinking. In a

year-long study of grade 5 to 8 students’ progress using Tabletop, a computer-based data


analysis tool, two problems emerged as obstacles for the students: (a) reasoning about the

group versus the individual and (b) the objectification of knowledge. The researchers’

belief that data creation and data analysis concepts are connected was affirmed. In the data

creation phase students were faced with defining the right measures. For example they

struggled to define ‘expensive’ and whether serving size of cereal should be by weight or

volume.

"A surprising number of decisions thus need to be made in the data definition phase, including some that override understandable and reasonable tendencies to work from individual experience. . . . this part of the process almost inevitably brings the students into subtle questions of data definition. This is a critical aspect of data modelling, one that tended to be ignored . . . in curricula. . . . Students came to recognise the large amount of processing, choosing, and judging that takes place even before we have "raw" data” (p. 349).

On the matter of the individual-based reasoning obstacle they found that:

"students' inability to construct representative values for groups is a serious problem in data modelling because the concept of representative values is a critical link in the logic of most data-based enquiries. We saw that students often focused on individual cases and sometimes had difficulty looking beyond the particulars of a single case to a generalised picture of the group . . . even when students could talk in terms of trends, individual cases took on more importance than they should" (p. 354).

Hancock et al. believe that students can construct a notion of an aggregate property and

reason about the group propensity. It seemed that aggregate-based reasoning requires the

ability to generalise about a group and therefore, according to Hancock et al., appears to

be linked to a developmental phase of the students.

For the ‘objectification of knowledge’ obstacle there are two aspects that are relevant for

statistical reasoning. The first aspect concerns students realising that, in order to answer

questions, data must be collected and analysed as their personal experience is inadequate

and possibly biased. Furthermore they should be prepared to revise their opinion in the

light of the evidence gained. Hancock et al. found that:

"most of the students with whom we worked have shown little expectation that collecting and analysing data might yield knowledge that is more reliable than their own personal experience . . . students did not distinguish between holding a personal, plausible view about a question or issue and checking to determine whether that view was shared by others. Neither did the students see the value of examining their own view in light of information collected specifically to address it. It was not unusual to find students making graphs . . . only to ignore them when formulating opinions and conclusions" (p. 356).

However, in Hancock et al.’s research there are occasions when students were willing to

change their mind in the light of the evidence gained. The examples given are blind tasting

of four colas and the testing of radios for sound quality. It is interesting to note that these

are both experiments. It would be worthwhile to compare the context and the design of


the investigation, to the situations where students are prepared or not prepared to revise

their opinion. In what type of situations is opinion revised? In fact the researchers

mention that the topic of enquiry had an effect on the motivation and interest of the

students, and in particular they mention the cola unit. Another consideration for the first

aspect, is that: "the dominance of personal knowledge and bias can reach into the data

creation phase as well as data analysis" (p. 357). Hancock et al. give the example of how

some students (15 yr olds) were reluctant to include country and western music as an

option in their music preference questionnaire and furthermore would not survey students

with that preference. By the end of the year the students did begin to show an inclination

to answer questions by collecting data, though the researchers posit that this could be

epistemological.

The second aspect in the objectification of knowledge is the process of "weighing

evidence, reasoning and reaching conclusions” (p. 356).

"Students' weak grip on objectivity was matched by a certain lack of awareness in processes of formalising or objectifying data . . . the data modelling process introduces students to a subtle and new relation between subjective and objective knowledge" (p. 357).

They give the example of how students did not see the necessity to objectify gender as a

separate field for data analysis as they already had that information in the name field of the

database. "The inclination to objectify is bound up with one's knowledge of the requisite

data structures and the operations that are possible on them" (p. 358). Hancock et al. call

this second aspect of statistical reasoning the externalisation of knowledge whereas the

first aspect is validating knowledge. "In both cases students need to develop the

inclination to objectify, but they also need to learn the structures that make objectification

possible" (p. 356). These comments suggest a lack of data experience and familiarity with

the structure and format of data sets. Such experiences would seem to represent an

opportunity for the students to learn, to become acculturated into a statistical way of

thinking about the data.

3.4.3 Statistical Literacy

Key Point:

• For statistical literacy students need to experience both analytic and synthetic

aspects through carrying out project work for themselves, and through

interpreting and critiquing a report on a project done by other people.

Landwehr, Scheaffer and Watkins (1995) believe that for students to be statistically

literate they need: (1) a number sense; (2) an understanding of variables; (3) an ability to


interpret tables and graphs; (4) knowledge of how a statistical study is planned and (5) an

understanding of how probability relates to statistics. In particular, for number sense,

students should 'see' numbers in a context before making a judgement. And in order to

understand variables students should realise that data must be organised into meaningful

groups for useful summaries and comparisons (Hancock et al., 1992).

When interpreting tables and graphs possible associations, which Landwehr et al. (1995)

call 'professional noticing', should be sought. However research has shown that the area

of graphicacy has three components involving reading the data, reading between the data

and reading beyond the data (Curcio, 1987). The third component is considered a higher

level thinking skill as it involves extrapolation, elaboration of what is given, and the

making of inferences beyond what is explicitly presented (Curcio, 1987; Resnick, 1987).

These findings are based on the analytical level whereas at the synthetic level Hancock et

al. (1992) found that there is a difference between explaining the logic of a graph and

"(a) using that graph to characterise group trends; (b) constructing the graph in order to generate, confirm, or disconfirm a hypothesis; (c) connecting the graph with the data structures necessary to produce it; and (d) embedding the graph in context of a purposeful, convergent project" (p. 362).

Shaughnessy (1997a) also considers that multiple graphical representations derived from

the same raw data may be crucial in developing students’ understanding of the problems

associated with extracting meaning from data. Research by Mevarech and Kramarsky

(1997) confirms that graph construction presents another set of problems.

For the area of student understanding of how probability relates to statistics, Landwehr et

al. state that probability should be viewed as the study of random behaviour, not as

counting. The unifying thread should be the idea of distribution. Students should

experience randomness and probability distributions through simulation as they believe

simulation is a natural way to learn mathematical modelling. They also believe that, for statistical literacy, students need to develop an intuition about probabilistic events, so that

they can estimate probabilities and assess the reasonableness of results.
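Simulation of the kind Landwehr et al. advocate can be very direct. As an illustrative sketch (the birthday problem is chosen here as an example, not taken from their work), a probability that intuition tends to misjudge can be estimated empirically:

```python
import random

random.seed(6)

# Estimate P(at least one shared birthday among 23 people) by repeated
# simulated 'classrooms'; the exact value is about 0.507.
trials = 20_000
hits = 0
for _ in range(trials):
    birthdays = [random.randrange(365) for _ in range(23)]
    hits += len(set(birthdays)) < 23   # a collision occurred
print(round(hits / trials, 2))
```

Running many such trials gives students direct experience of random behaviour and a way to assess the reasonableness of a surprising result.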

As part of general statistical literacy, Gal, Ahlgren, Burrill, Landwehr, Rich, and Begg

(1995) state that the interpretation of statistically based reports is an important outcome in

a statistics course. Their definition of what is meant by interpretation of statistical reports

includes the aspirations of the New Zealand secondary school curricula.

"Interpretive skills include whatever knowledge, ideas, and dispositions we would like students to be able to invoke when reading newspaper articles, listening to news on TV, being exposed to advertisements, or otherwise reacting to or making sense of statements or situations in which statistical terms or statistical processes are involved. These are common situations which do not involve generation of data and do not require people to do any formal computations or analyses" (p. 23).

They suggest that "interpretive skills involve both a cognitive component and a certain

attitude or dispositional components that operate together" (p. 23). The cognitive

component is the ability to:

"(1) comprehend displays or statements and texts with embedded statistical terms or claims
(2) have "in their heads" a critical list of 'worry' questions
(3) be able to evaluate and express an opinion or raise concerns about what is being communicated or displayed to them" (p. 24).

It is also acknowledged by Gal et al. (1995) that it is crucial for students to reason in the

light of alternative explanations, and to make judgements. Watson (1997) basically

concurs with this cognitive definition but adds a pre-condition that first there should be a

basic understanding of probability and statistical terminology. The dispositional

component of interpretive skills recognises that students should adopt a critical attitude to

information at all times and become 'professional noticers'. De Lange (1987) concurs and

argues that a critical attitude is essential in statistics and should be an explicit goal in

instruction. Friel, Bright, Frierson and Kader (1997, p. 63) warn that the assessing of

interpretive skills is complex as educators need “to be clearer about how we will judge

their [the students] responses in light of what we think reflects sound statistical thinking.”

A substrand in the statistics strand in the Mathematics in New Zealand Curriculum

(Ministry of Education, 1992) is interpreting statistical reports. The presence of this

substrand clearly signals the importance of this particular skill. At Level 7 (16-17 year age

group) a suggested learning experience is: "evaluating statistics presented in the news

media, and in technical and financial reports, and confidently expressing reasoned

opinions on them" (p. 199). At Level 6 (15-16 year age group) a stated learning objective

is: "make and justify statements about relationships between variables in a sample as a

result of a statistical investigation" (p. 192). Clearly there is an emphasis on producing

intelligent citizens who can make sense of statistical information. Such an aim is not

surprising when an analysis of some New Zealand newspapers found that 87% of written

items contained numerical information (Knight, Arnold, Carter, Kelly & Thornley,

1993).

"The results of the newspaper survey make it clear that numeracy is part of literacy. Every newspaper reader is continually bombarded with numerical and graphical information of various kinds. A general mathematical education should certainly include enough understanding of statistics and mathematics to be able to make informed judgements about the meaning and value of this information" (p. 29).


An international comparison of adult understanding of scientific terms and concepts

(National Science Foundation, 1993) found that only about one third of adults in Europe

and the USA had sufficient knowledge to comprehend a newspaper or magazine article on a

current issue or controversy involving science and technology. This and other studies

support the view that the interpretation of media articles should be an aim of statistics

education. Critical theorists in mathematics education (e.g. Frankenstein, 1989) are also

asking for programmes which empower students to challenge statistics presented by any

authority.

Garfield (1994) also believes that a good statistics education should include the acquiring

of a critical attitude and that students should experience framing a problem, collecting and

analysing data, developing a report on the results and arguing about the conclusion of

their statistical work. Thus statistics education researchers and commentators appear to

suggest that for statistical literacy students need to experience and carry out project work

and to interpret and critique statistically based information produced by other people (Gal

& Garfield, 1997). This reaffirms that research on statistical reasoning processes needs to

be in these broad areas.

3.4.4 Instruction

Key Point:

• Statistics cannot be taught as mathematics. There must be a convergence to a

conclusion with empirical data.

Hancock et al. (1992) found that an obstacle to student learning in a data-based

environment is the approach to teaching which raises several issues about the classroom

culture. The paradigm for American classrooms, and possibly New Zealand classrooms

also, is that no goal or purpose is required in what are usually make-believe activities. The

activities are explored and divergence is valued. Their research revealed that

statistics projects began without clear questions and ended without clear answers. They

gave an interesting comparison with Asian classrooms where activities often revolve

around one problem with the aim being convergence to a conclusion. Convergence in

teaching requires prioritising, synthesising findings, resolving contradictions, monitoring

for relevance and so on.

Moore (1990) suggests that current approaches to teaching do not develop an awareness

of variation in students and thus there is a need to revolutionise instruction in statistics.

Singer and Willett (1993) proffer the idea of ‘cognitive apprenticeship’ as a way of

reshaping statistics teaching. They claim that learning occurs most effectively when

learners engage in authentic activities with community members (statisticians). To engage

students in cognitive apprenticeship they say statistics educators must: (1) select authentic

activities, (2) model practitioner behaviours, and (3) provide practice opportunities.

Accepting this approach implies that the teaching of statistics should reflect the ways in

which practising statisticians think when analysing a statistically based problem. Lajoie,

Jacobs and Lavigne (1995) argue that the first step in improving instruction is to make the

practitioner’s tacit knowledge explicit to the student. This may pose a problem when

courses are taught by people who have no experience as practitioners of statistics, or when

practitioners themselves are unable to articulate their tacit knowledge. Authentic activities

may also be far too complex for the learner.

Falk and Konold (1991) hold the opinion that probabilistic thinking is an inherently new

way of processing information as the world view shifts from a deterministic view of

reality. They state:

“In learning probability, we believe the student must undergo a similar revolution in his or her own thinking . . . We advocate starting the process of probabilistic education by building on the firm basis of students’ sound intuitions” (p. 151).

Borovcnik and Bentz (1991) suggest that conventional teaching of stochastics establishes

too few links between the primary intuitions of the learner and the clear cut codified

theory of the mathematics. They suggest that teaching has to start from the learner’s

intuitions attempting to change and develop them. Borovcnik (1990) indicates that a

logical thinking approach and a causal thinking approach are accessible at the intuitive

level, and that teaching must develop secondary intuitions that clarify how stochastic

thinking is related to these approaches. The current teaching approach seems to set up too

few links with students’ deterministic thinking and does not raise their awareness of a

probabilistic interpretation.

In statistics the statistician requires a mixture of deterministic and non-deterministic

thinking. Biehler (1989) comments that beginning teaching in probability focuses on

almost ideal random situations which do not allow students to practise this dual way of

thinking. Any set of data (apart from the rare extreme case) will contain variation. The

statistician will extract or identify systematic influences, which form the so-called

deterministic part of the model that is constructed to describe the data. But these factors

will rarely account for the full variation observed. Because the non-systematic causes

underlying this variation cannot be analysed directly, they are conveniently described as

random. An experienced statistician is able to judge when the search for causes ends and

the acceptance of random variation begins. This dual way of thinking must somehow be

conveyed in instruction (Pfannkuch & Brown, 1996).

Another perspective on instruction is promulgated by Hawkins (1996) who supports,

with research, that teachers teach statistics as if it is mathematics and not as an empirical

enquiry process. In a teaching experiment Biehler and Steinbring (1991) used an EDA

(exploratory data analysis) approach and found that they underestimated the difficulty for

teachers to change, from the traditional mathematics instruction of diagrams and methods,

to the conceptual side which EDA demands. However, Mallows (1998, p. 2) describes

Tukey as promoting EDA as just another collection of statistical techniques without regard

to clarifying, particularly in observational studies, “how these are to be used to help

understanding real-world problems.” Garfield (Moore, Cobb, Garfield & Meeker, 1995)

suggests that a change in the teaching of statistics will not happen unless there is a change

in the way teachers view statistics. However, unless there is an articulation of what is

implicit in statistical thinking and action, it cannot be communicated what or how changes

to instruction can be made. Furthermore such changes cannot be made without the

support of resource material and teacher professional development (Moore et al., 1995;

Ellis, Miller-Reilly & Pfannkuch, 1997).

3.4.5 Misconceptions

Key Points:

• Instruction should be designed to confront students’ misconceptions.

• Students tend to focus on deterministic causes and do not consider randomness

as a possibility.

Landwehr (1989, cited in Shaughnessy 1992, p. 478) presents a list of common statistical

misconceptions:

“1. People have the misconception that any difference in the means between two groups is significant.

2. People inappropriately believe that there is no variability in the 'real world'.

3. People have unwarranted confidence in small samples.

4. People have insufficient respect for small differences in large samples.

5. People mistakenly believe that an appropriate size for a random sample is dependent on overall population size.

To which Shaughnessy adds:

6. People are unaware of regression to the mean in their everyday lives.”

Landwehr's list seems to apply to people and how they experience their lives because

doubt could be cast on some of these misconceptions from the perspective of statistics. In

statistics students are expected to have confidence in small (in relation to the population

size) carefully selected samples, yet often they do not when making an inference about the

population (Bartholomew, 1995). The implication in statement 5 that people do not realise

that a carefully drawn sample of a few hundred instances can tell a lot about a very large

population could be agreed with on the basis that sampling statistics is more difficult to

understand than census statistics. Statement 4 does not take cognisance of another

dilemma in statistics about the practical implications of variability which are again context

specific. Small differences in large samples may not be practically significant (Wild &

Seber, 1997). Thus respect for such differences depends on the context, and the

consequence of not acting on that information.
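The distinction Wild and Seber draw between statistical and practical significance can be made concrete with a small simulation (an illustrative sketch, not drawn from the cited sources; the function and sample sizes are chosen purely for demonstration): with very large samples, a difference in means that is negligible in practice still yields an overwhelming test statistic.

```python
import math
import random
import statistics

def two_sample_z(a, b):
    """Two-sample z statistic for a difference in means (large samples)."""
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.pvariance(a) / na + statistics.pvariance(b) / nb)
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(1)
# Two very large samples whose true means differ by only 0.02 units,
# against a standard deviation of 1: trivial in practical terms.
group_a = [random.gauss(100.00, 1.0) for _ in range(200_000)]
group_b = [random.gauss(100.02, 1.0) for _ in range(200_000)]

z = two_sample_z(group_b, group_a)
print(f"observed difference: {statistics.mean(group_b) - statistics.mean(group_a):.3f}")
print(f"z statistic: {z:.1f}")  # well beyond the 1.96 cutoff at the 5% level
```

Whether such a tiny but 'significant' difference warrants action is a contextual question, which is exactly the point being made about respect for small differences in large samples.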

Shaughnessy (1992, p. 478) believes that "the real world for many people is a world of

deterministic causes" and that "there is no such thing as variability for them because they

do not believe in random events or chance.” This statement could be disagreed with, as

other researchers (Konold, Pollatsek, Well, Lohmeier & Lipson, 1993) have found that

people's beliefs and strategies for solving problems tend to be context specific. Such

contexts as gambling may be looked upon by people as a chance event whereas a car

accident might not be. It also depends on the underlying motive for looking for a cause,

particularly if the event under scrutiny has a consequence such as job performance

appraisal (see Section 3.3.2). It is interesting to note that Shaughnessy (1997b) found that

students had good intuitions about variability when presented with a sampling task based

around the concept of a range of likely values.

Batanero, Godino, Vallecillos, Green and Holmes (1994) report on research that

identifies misconceptions existing in: procedural and conceptual understanding of

frequency tables; graphical representations of data; the mean; measures of spread; the

median; contingency tables; linear regression; sampling; and hypothesis testing. Such an

identification can inform the teaching process. A continual theme amongst statistics

education research, which may be relevant to statistical thinking and would tie in with the

development of intuition, is that instruction should be designed to confront students'

misconceptions (Landwehr et al., 1995). Hake (1987) suggests students should be

actively involved in a dialogue with themselves, the data and with other students in order

to overcome misconceptions. To change misconceptions in physics students, he

encourages students to test whether their beliefs are borne out by empirical evidence.

Shaughnessy (1992, p. 481) based an intervention on this type of instructional design and

though it proved "successful in overcoming misconceptions there were still some students

who did not change their beliefs.”

Much research has confirmed that misconceptions in probability (Garfield and Ahlgren,

1988) and statistics (Batanero et al., 1994) are very difficult to change. Batanero et al.

hint that besides procedural and conceptual understanding being necessary for statistical

reasoning, there is also the other dimension of different discipline meanings for statistical

concepts. Underlying this research also, is the fact that statistical thinking is a new

scientific way of viewing reality that is not intuitively obvious. But perhaps the problem

goes deeper than this. It may be that substantive context knowledge is also needed about

the problem (Pfannkuch, 1996). With empirical data statisticians and novices will check

out the data against a theoretical model. If it doesn't fit then other models and/or the

underlying causes for the phenomenon or perturbation are looked for. The difference is

that statisticians have more experience with data than novices but nevertheless they are

carrying out the same thinking processes. Perhaps teachers should be more aware, and

take into account, that students will be limited by their subject and context knowledge.

Students should not be expected to think like an ‘expert’. The teaching process should

gradually shape and acculturate students into a statistical view of reality through repeated

exposure to a rich array of data analyses.

3.4.6 The Need for Thinking Tools

Key Points:

• There is a growing alignment of mathematics learning with mathematics

thinking.

• More scaffolding tools need to be developed to aid thinking processes.

Pea (1987, p. 90) states that mathematical thinking is now receiving more attention in

mathematics education and that a "growing alignment of mathematics learning with

mathematics thinking is a significant shift in education.” These sentiments could equally

apply to statistics education. Pea (1987, p. 91) believes that tools initiate and aid the

thinking process.

“Intelligence is not a quality of the mind alone but a product of the relation between mental structures and the tools of the intellect provided by the culture. A cognitive technology is any medium that helps transcend the limitations of the mind in thinking, learning and problem-solving activities. Cognitive technologies have had remarkable consequences on the varieties of intelligence, the functions of human thinking and past intellectual achievements. They include all symbol systems, including writing systems, logics, mathematics notational systems, models, theories, film and other pictorial media and now symbolic computer languages. Each has transformed how mathematics can be done and how mathematics education can be accomplished. A common feature is that they make external the immediate products of thinking which can be analysed, reflected upon and discussed. They help organise thinking outside the physical confines of the brain.”

Statistics has developed many tools to think with and statistics education uses a subset of

these tools. There are tools for EDA (exploratory data analysis), tools for data reduction

and representation, and tools for multiple representations of data. If we seek to improve

statistical thinking, then we should develop the tools that could aid this process. Resnick

(1989, p. 57) believes that more scaffolding tools need to be developed in mathematics.

“Students can often engage successfully in thinking and problem solving that is beyond their capacities if their activity receives adequate support either from the social context in which it is carried out or from special tools or displays that scaffold their early efforts.”

In the quality management area specific sequential statistical tools have been used

successfully with students who were learning how to carry out an investigation (Hoerl,

Hahn & Doganaksoy, 1997).

Biehler (1994a) believes that new cognitive technologies have the potential to provide

qualitatively new aspects of statistical thinking. In particular, with computer technology,

students could experience the use of resampling techniques as an alternative to classical

inference methods, or could experience 'real' data analysis. Konold (1994, p. 204) found

in a small study that "students using the resampling approach consistently outscored the

students using the traditional approach.” However he sounds a note of caution. Students,

after instruction, still appeared unaware that a difference in medians in boxplots could be

due to chance. That is, they were "unaware of the fundamental nature of probability and

data analysis” (Konold, 1994, p. 204). Shaughnessy (1992) sounds another note of

caution on computer technology. He believes students need to experience concrete

simulations before using computer simulations. New thinking tools are becoming

available for statisticians, but are these thinking tools suitable for statistical learning? The

availability of more thinking tools will not necessarily translate into better learning. Tools

or new mediums for learning statistics, and for aiding the development of statistical

thinking, need to be developed or uplifted from other sources. Perhaps this cannot be

done successfully until the implicit statistical thinking processes can be characterised in

some way.
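As a concrete instance of the resampling approach Konold describes, a percentile bootstrap can stand in for a classical t-interval for a mean (a minimal sketch of the general technique under stated assumptions, not Konold's actual teaching materials; the function name and parameters are illustrative):

```python
import random
import statistics

def bootstrap_mean_interval(data, reps=2000, alpha=0.05):
    """Percentile bootstrap interval for the mean: resample the observed
    data with replacement instead of invoking a t-distribution."""
    means = sorted(
        statistics.mean(random.choices(data, k=len(data))) for _ in range(reps)
    )
    lo = means[int((alpha / 2) * reps)]
    hi = means[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

random.seed(7)
sample = [random.gauss(50, 10) for _ in range(40)]
lo, hi = bootstrap_mean_interval(sample)
print(f"95% bootstrap interval for the mean: ({lo:.1f}, {hi:.1f})")
```

The appeal for instruction is that every step is visible and physical in character (resample, recompute, collect), whereas the classical interval rests on distributional theory the student may not yet hold.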

3.4.7 Probabilistic and Deterministic Thinking

Key Point:

• Statistics has the dual goals of developing both probabilistic and deterministic

thinking.

I will explore the work of Biehler (1994b) in some detail as he is one of the few statistics

education researchers writing about the domain of statistical thinking pertaining to this

research. He believes there are two cultures of thinking in statistics, deterministic and

probabilistic. The EDA (exploratory data analysis) deterministic-thinking culture looks for

patterns in a data set and seeks systematic variation which can be explained by causal

factors or can be classified or ascribed to a class. Context knowledge for exploring and

interpreting the data is valued. EDA is concerned with the data set in hand and data are

aggregated or dissected. Borovcnik (1994, p. 355) states the case even more definitively

by affirming that EDA is not based on the random sample argument and that data are

investigated without a theory of probability. "Data are analysed in a detective,

interactive way between the results of the intermediate analyses and the analyst's

knowledge of the subject matter from which the data originate.”

The probabilistic thinker does not seek context connections but instead concentrates on the

regularities and stabilities over the long run. The fundamental idea is to shift from

individual cases to systems of events because long run distribution can be modelled,

predicted and partly explained. Thus the stochastic or probabilistic thinker works with

models which will display the regularity in random outcomes and hence give information

about the population. Borovcnik (1994, p. 356) suggests that probabilistic thinking

establishes a distinct approach towards reality and "is different from logical thinking and

causal thinking.”

Biehler and Borovcnik polarise and overstate the case for the two cultures of thinking.

EDA does not try to calibrate variability in data against a formal probability model.

Patterns are sought but there is an awareness that people often 'see' patterns in

randomness and that a filter is needed for such a phenomenon. In reality statistical

thinking requires that both stochastic and deterministic thinking are used and that

systematic and random variation and their complementary roles are understood.

Biehler (1994b, p. 2) believes that the relationships between data analysis and probability

need to be developed in the teaching process and that the use of EDA is an opportunity to

connect the two extremes of determinism and randomness. "Probabilists seek to

understand the world by constructing probability models, whereas EDA people try to

understand the world by analysing data.” In reality the EDA revolution recognises that

there are dualistic goals in statistics. One goal is to find and analyse causes, the other goal

is to produce probability models of the variation and ignore causal explanations. These

two cultures of thinking produce a tension in statistics education, which appears to be

currently focussed on the probabilistic side.

"However the essence of the probabilistic revolution was the recognition thatin several cases probability models are useful types of models that representkinds of knowledge that would still be useful even when further previouslyhidden variables were known and insights about causal mechanisms arepossible . . . a situation can be described with deterministic and withprobabilistic models and one has to decide what will be more adequate for acertain purpose" (Biehler, 1994b, p. 4).

These two cultures of thinking have implications both for evaluating students' thinking

and for teaching. In the case of evaluating students’ thinking, researchers must be aware

of whether the students’ ‘solutions’ are normative for deterministic or probabilistic

thinking, and aware that both approaches may be correct. EDA thinking, according to

Biehler, seeks connections and looks for patterns and structure and relationships among

the variables in the data. The ethos is to explain the variation among the groups and to

explain the individual cases such as outliers that may affect the 'group' data. "EDA people

seem to appreciate subject matter knowledge and judgement as a background for

interpreting data much more than traditional statisticians seem to" (Biehler, 1994b, p. 7).

The culture of probabilistic thinking could be described as "the deeper, although not

completely known reality, of which the data provide some imperfect image" (Biehler,

1994b, p. 8). The thinking behind this theoretical modelling is that chance variation rather

than deterministic causation explains many aspects of the world, that there is no pattern or

relationship among variables.

The overall thinking is on the aggregate, the group, not on individual cases. It is the dual

thinking, the interface between the two, and when to use which type of thinking that can

be confusing in the education process. An example is the ‘hot hand’ in basketball play,

which can be modelled by the binomial probability distribution, and thus suggests the

non-existence of a cause. "A series of successes may be explained by some factor, even

when a binomial model well describes the situation" (Biehler, 1994b, p. 10). The

assumption of independence may not be plausible in this case, nor in coin flipping cases

(Wild & Seber, 1997), and herein lies a conundrum for teaching. Today elementary

teaching seems to be stuck in a probabilistic time warp before Galton in the 1900s and has

not advanced to regression analysis type thinking which is interested in looking for

sources of variation with the ‘unexplained variation’ being modelled by probability

(Biehler, 1994b). Biehler (1994b, p. 12) characterises the thinking between causal and

probabilistic aspects by the following graph (Fig. 3.1):

conditions    →    new conditions
     ↓                    ↓
distribution  →    new distribution

Conditions determine a certain distribution, a change in conditions results in a change in distribution.

Figure 3.1 Schematic View of Statistical Determinism (Biehler, 1994b, p. 12)

"This scheme implies that a (probability) distribution is a property of something (a chance

setup, a complex of conditions) and that this property can be influenced and changed by

external variables" (Biehler, 1994b, p. 13). Biehler suggests using the quincunx (see Fig.

2.1) as a model device in education. The normal distribution is obtained if the board is

level whereas if the board is tipped a skewed distribution results. This illustrates that a

change in conditions or a cause produces a different distribution. A more practical

example is on the subject of road fatalities. There are several levels of analysis for road

fatalities: the level of individual events; and the level of the system, determined by looking

at the overall distribution in relation to the background conditions. Causal explanations

can be sought at the individual level to find common risk factors such as alcohol or

speeding, whereas the aggregation of data for a longer period can look at system causes

such as seasonal patterns and weekend patterns. Such patterns may not be detectable at

the individual level. "My thesis is that learning EDA can contribute to this way of thinking

if data analysis examples are taught from this spirit" (Biehler, 1994b, p. 13). "This

scheme of statistical determinism is often hidden in standard approaches to statistical

inference" (Biehler, 1994b, p. 14).

Students also need to adopt an attitude in special cases of random samples which says ‘if

the data are a random sample’, and to recognise that another sample may produce different

summaries. Inferential statistics requires a conceptual shift in thinking, from long run

stability to analysing variation in samples. The reasoning is from the sample to the

population which, according to Landwehr et al. (1995), is alien to most students. Whilst

agreeing that significance testing, for example, may be confusing and alien to students, the

reasoning itself is certainly not, as it is common to use personal experience to reason generally.

What would appear to be alien is that when analysing the variation in a sample, students

are required to consider the data-set as if it were a random sample. This awareness of

random variation may not be part of students' experience. That is, they must be aware that

in the long run there is stability but in the short run there will be fluctuations.
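The contrast between short-run fluctuation and long-run stability is easy to demonstrate (an illustrative sketch, not part of the studies cited): the running proportion of heads in simulated coin flips wanders early on but settles near 0.5.

```python
import random

random.seed(11)
flips = [random.random() < 0.5 for _ in range(100_000)]

# Running proportion of heads: erratic in the short run,
# stable in the long run.
heads = 0
for n, flip in enumerate(flips, start=1):
    heads += flip
    if n in (10, 100, 1000, 100_000):
        print(f"after {n:>6} flips: proportion of heads = {heads / n:.3f}")
```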

Biehler draws two conclusions: first, that to understand probability one must distinguish

and discuss influencing variables and causes; and second, that the practice of teaching

inference after EDA could lead to compartmentalisation of experience, so that conceptual

ideas are never enriched or adjusted and the dual goals of statistical thinking are never

realised.

3.5 Thinking in a Data-based Environment - Statisticians’

Perspective

3.5.1 General

Key Point:

• The domain of statistics should be broadened to encapsulate the empirical

enquiry process from the problem formulation stage to the decision making

stage.

Statistics today is infiltrating many fields as there is an assumption that the real world can

be understood, if only partially, through measurement and classification. It deals with

uncertainty and variability, incomplete information and conclusions enwrapped with

qualifications. In a world that expects the objectivity of quantification (Porter, 1995) or

the certainty of truth from numbers (Moore, 1990), statistics has an uphill battle. Since

expert knowledge and judgement are no longer believed (Porter, 1995) in democratic

societies today, the quantification and objectification of knowledge, to agreed rules or

conventions, are paramount for trusted communication globally. This situation has arisen

from such cases as the thalidomide disaster which according to expert judgement was

safe. Such a disaster led to more rules being laid down for scientific procedure and

quantification of new knowledge. Thus this global shift to impersonal knowledge pushes

the quantification of knowledge and hence statistical knowledge to the forefront. The

limitations and power of statistics are not widely understood. To many people, statistics

will give their beliefs the veneer of respectability and will prove what they already know.

If statistics does not do this then it is because the sample was not representative or too

small. Critics of statistical evidence will use the same argument. Therefore if statistical

knowledge is to be understood, the view of statistics must be broadened into a way of

thinking and making sense of the world.

When talking about statistics and thinking we need to clarify the type of statistics in which

we are interested. At a broad level, statistics could be categorised into theoretical statistics

and applied statistics. Applied statistics is dependent upon addressing a real-world

problem and draws upon some theoretical statistics during the process of solving that

problem. From a modelling perspective there is a real-system and a statistical-system and

in applied statistics these systems link and interact. In the case of theoretical statistics the

problem-solving process is wholly in the statistical-system and there is no interaction with

the real-system. The theoretical statistics problem could have arisen as a result of issues

raised during the solution of a real problem or could have arisen at a more abstract level.

Ultimately, at whatever level the theoretical problem arises, the new theories and

methodologies could eventually be useful for real-world problems. This research is firmly

based in applied statistics.

However it may be not so much the type of statistics that has to be clarified but rather the

domain of statistics. Normally it is expected that statisticians will only be involved in the

analysis domain of applied statistics. Chambers (1993) refers to greater and lesser

statistics. He defines the domain of greater statistics as being related to learning from data.

That is, it covers the process from the preparation of the data through to the presentation

of a report. Whereas lesser statistics is defined as a subdomain that involves the analysis

of data and probabilistic inference. He notes that it is this subdomain that is usually

associated with statistics. Bartholomew (1995, p. 13) captures further the reason why the

statisticians’ domain cannot be solely in lesser statistics.

“Statistics is not an abstract system of thought but a set of tools for engaging with the world of phenomena. Parts of this world can be expressed in terms of mathematics but never wholly captured by it. There is an irreducible subjective element in how we seek to represent that world and a consequent ambiguity in any inferences that we draw from it. There is no 'best' mathematical representation and therefore there is bound to be an element of untidiness and incompleteness in what we do. Statistics is a collective activity which must cope with the fact that no two individuals can be expected to have exactly the same perception of a situation.”

Amongst a group of statisticians (e.g. Chambers, 1993; Bartholomew, 1995; Wild, 1994)

there is a plea to broaden the domain of statistics as they believe that statisticians have a

unique contribution and perspective to make in all areas of the empirical enquiry cycle.

Bradley (1982, p. 7) states: "statisticians tend to think of the design and analysis of an

experiment as an entity in itself and not as a step in an iterative scientific process.” Gail

(1996) believes that statisticians should actively involve themselves in the solution of real

problems. That they should not only provide the technical expertise but also be involved

in the problem definition, the observational or experimental plans and in the interpretation

process. There is a growing belief that statisticians should not see themselves as appliers

of tools, who only take the data-set and manipulate it. They should not perceive

themselves as dispassionate mathematical statisticians but rather as being fully involved in

projects if they are to make sense of the data. This redefining of themselves as being full

participants in the enquiry cycle means a redefining of the domain of statistics.

Barabba (1991) widens the domain of statistics even further than the empirical enquiry

cycle. He asserts that there are two domains operating on data which statisticians should

be aware of: the information-producing domain; and the information-using domain.

Bradley (1982) considers that there has been a failure to understand that the first domain

contributes only part of the information that goes into decision making. The information-

producing domain is governed by conventional systematic statistical procedures or

objective guidelines for such things as sampling procedures, experimental design, truth

tests, and formal analytical procedures. The information-using domain relies heavily on

personal judgement, personal viewing lenses, personal experience and how the user

reacts to the information. Thus the decision-maker's perception of reality can be different

from the information-producer's reality. This can lead to differences in interpretation and

Barabba considers that it is as important to understand why interpretations differ as it is to

understand the 'right' answer. Complex issues will give rise to multiple 'right' answers

dependent upon such things as underlying assumptions, beliefs, values, choice of

statistical test and selection of significance level. He considers that processes should be

developed to minimise the difference between the two domains through a constant

dialogue between producers and users, as decision quality is dependent upon information

quality. "The quality of thinking about an issue prior to the collection of data is the major

determinant to the quality of thinking after the data have been collected” (Barabba, 1991,

p. 2). The issue of thinking, of thinking statistically, appears to play a large part in the

formation of knowledge and in decision-making in a broad domain.

3.5.2 Quality Management Perspective

Key Points:

• Quality management is based around understanding the theory of variation. The

attitude is that all variation is caused and should be identified and minimised.

• Quality management has produced many papers on statistical thinking.

Of all the disciplines that use statistics, quality management is the only one that focuses

specifically on giving courses in statistical thinking and actually writing about and

defining what it means in management terms (e.g. Joiner, 1994). Perhaps a consideration

of the quality management perspective will be informative for this research. What stands

out immediately in the quality management definitions of statistical thinking is the role of

variation. Variation is not a word or an idea that is used a lot or is central to the teaching

of a typical statistics course (Shaughnessy, 1997).

Hare, Hoerl, Hromi and Snee (1995) state that statistical thinking has its roots in the work

of Shewhart, in other words in the roots of quality control. The literature on statistical

thinking would tend to support this view as it is mainly in quality management that such

an idea has been discussed and thinking tools such as pareto analysis and 7-M tools have

been developed. The basis of Shewhart’s work was that there are two sources of

variation: variation from assignable causes and variation from chance causes. Later on

Deming renamed these as special causes and common causes. For quality control the

prevailing wisdom for a long time has been to identify and fix the special causes and to


accept the inherent variability within a process, that is the common cause or chance

variation. Pyzdek (1990) believes that this attitude to variation has to change in a climate

of continually shifting standards and higher expectations. It is no longer quality control

but continuous quality improvement that should be the focus of management.

Pyzdek’s (1990, p. 102) new approach to thinking about variation is summarised as:

"• all variation is caused
• unexplained variation in a process is a measure of the level of ignorance about the process
• it is always possible to improve understanding (reduce ignorance) of the process
• as the causes of process variation are understood and controlled, variation will be reduced.”

Hamada, MacKay and Whitney (1992) suggest that "continuous improvement is an

iterative process . . . As sources of variation are eliminated or their effects reduced, new

sources will become important" (p. 14). They suggest that different 'views of the

process' are needed to assess the effects of the different sources and, to do this, their own

special sampling schemes are needed.

According to Hare et al. (1995, p. 55) "Statistical thinking is a mind-set. Understanding

and using statistical thinking requires changing existing mind-sets". They state that the

key components of statistical thinking for managers are:

“1. process thinking
2. understanding variation
3. using data whenever possible to guide actions.”

In particular they reinforce such ideas as: improvement comes from reducing variation;

managers must focus on the system not on people; and data are the key to improving

processes. Kettenring (1997, p. 153) supports this view when he states that managers

need to have an “appreciation for what it means to manage by data.”

Snee (1990) believes there is a need to acquire a greater understanding of statistical

thinking and the key is to focus on statistical thinking at the conceptual level or from a

'systems' perspective rather than focus on the statistical tools. Snee (1990, p. 116):

"I define statistical thinking as thought processes, which recognise that variation is all around us and present in everything we do, all work is a series of interconnected processes, and identifying, characterising, quantifying, controlling and reducing variation provide opportunities for improvement. This definition integrates the ideas of processes, variation, analysis, developing knowledge, taking action and quality improvement."

Joiner and Gaudard (1990) concur with Snee. They believe that if managers understood

the theory of variation they would recognise, interpret and react appropriately to such


variation in data. They list seven concepts about variation that should be employed in the

workplace:

“1. All variation is caused.
2. There are four main types of causes: common causes; special causes; tampering causes; structural causes (e.g. seasonal patterns and long term trends).
3. Distinguishing between the four types is crucial for action.
4. The strategy for special causes is to investigate immediately.
5. The strategy for common causes is the collection and analysis of data.
6. When all variation in a system is due to common causes the result is a stable system.
7. Control limits describe the range of variation due to the aggregate effect of the common causes” (adapted from Joiner & Gaudard, 1990, p. 32).
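Joiner and Gaudard's seventh concept can be made concrete with a small sketch. The following is an illustrative computation, not drawn from the cited sources, of control limits for an individuals chart: the conventional three-sigma limits, with the process spread estimated from the average moving range. The measurement values are invented for the example.

```python
# Illustrative sketch only: Shewhart-style individuals chart limits.
# Sigma is estimated from the average moving range (divided by the
# d2 constant 1.128 for subgroups of size two), the usual convention
# for charts of individual measurements.
from statistics import mean

def individuals_limits(data, k=3.0):
    """Return (lower, centre, upper) control limits for individual values."""
    centre = mean(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma_hat = mean(moving_ranges) / 1.128  # d2 constant for n = 2
    return centre - k * sigma_hat, centre, centre + k * sigma_hat

def flag_special_causes(data, k=3.0):
    """Indices of points outside the control limits (candidate special causes)."""
    lower, _, upper = individuals_limits(data, k)
    return [i for i, x in enumerate(data) if x < lower or x > upper]

# A stable process with one unusual observation at index 6:
measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 14.5, 10.0, 9.9, 10.1]
print(flag_special_causes(measurements))  # -> [6]
```

Under Shewhart's scheme the flagged point would be investigated at once as a candidate special cause, while the variation inside the limits would be attributed to common causes and addressed, if at all, through changes to the system itself.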

In quality improvement it is believed that to truly minimise variability the sources of

variation must be identified and eliminated (or at least reduced). However the first task is

to distinguish common cause and special cause variation. It is recognised that variation

from special causes should be investigated at once while variation from common causes

should be reduced via structural changes to the system and long term management

programmes. The method for dealing with common causes is to investigate cause and

effect relationships using such tools as cause and effect diagrams, stratification analysis,

pareto analysis, designed experiments, pattern analysis, and modelling procedures.

In-depth knowledge of the process is essential and if the manager is not happy with the range

of variation then he or she must look for patterns, and depending on the question asked,

aggregate, re-aggregate, re-stratify, or stratify by categories. There is a need to look at the

data in all possible ways in the search for knowledge about common causes. The context

must be known in order to ask good questions of the data. Pyzdek (1990) also observes

that identifying common causes may require knowledge which may only be obtained from

a broad education, perhaps in areas seemingly unrelated to the problem at hand.
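Pareto analysis, mentioned above among the tools for investigating common causes, can be sketched briefly. The following illustration ranks causes by frequency and accumulates their share of the total, directing attention to the 'vital few' causes; the defect categories and counts are invented, not taken from the literature reviewed here.

```python
# Illustrative sketch only: a minimal Pareto analysis. Causes are sorted
# by frequency and a cumulative percentage is accumulated, so the few
# causes accounting for most of the variation stand out at the top.
from collections import Counter

def pareto(counts):
    """Return (cause, count, cumulative %) rows, largest cause first."""
    total = sum(counts.values())
    rows, running = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        running += n
        rows.append((cause, n, round(100 * running / total, 1)))
    return rows

# Invented defect tallies for a hypothetical process:
defects = Counter({"solder bridge": 44, "misalignment": 27,
                   "cold joint": 14, "missing part": 9, "other": 6})
for cause, n, cum in pareto(defects):
    print(f"{cause:15s} {n:3d} {cum:6.1f}%")
```

In this invented tally the two largest causes account for over two thirds of all defects, which is the kind of stratified view the quality management literature recommends when hunting common causes.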

Provost and Norman (1990, p. 43) believe that the quality management way of thinking

about variation will alter the way people view reality as:

"the 21st century will place even greater demands on society for statistical thinking throughout industry, government and education. The continued increase in complexity of products will make variation that is insignificant today a critical issue.”

Implicit in their concepts about variation is that causal thinking is paramount and that once

the cause has been categorised there are certain strategies on how to deal with that cause.

This new approach of not leaving variation to chance has fundamental implications for

education and statistics. In education this may mean a refocussing of statistics on finding

causes to reduce the variation, or a reinterpretation of chance which will perhaps be more

aligned to how people think and to the purposes of statistics. Statistics is beginning to pay


more attention to statistical models for causal inference (Holland, 1986; Cox, 1992)

whereas previously statistics had traditionally removed itself from this territory.

3.5.3 Epidemiology Perspective

Key Point:

• In epidemiology many judgement criteria are used for causal inference.

In reading accounts of statistical thinking in medicine, variation is never mentioned, yet it

is at the heart of the methodology and the thinking. Perhaps it is because medicine has

only recently accepted the quantification and objectification of its practice (Porter, 1995).

Examples are the acceptance of the randomised controlled clinical trial and the

acceptance of a code of practice for observational studies. The long drawn out debate on

whether smoking causes lung cancer has increased the awareness of the importance of

statistical thinking in medicine. Gail (1996, p. 1) believes that "statistical thinking, data

collection and analysis were crucial to understanding the strengths and weaknesses of the

scientific evidence . . . [and] gave rise to new methodological insights and constructive

debate on criteria needed to infer a causal relationship.”

Furthermore according to Gail (1996, p. 11):

"some of the most important elements of applied statistics do not require advanced statistical calculations. Seeking out important problems, working with colleagues in other fields to define critical issues and objectives, understanding the nuances of the consultees' problems before attempting a quantitative description, expressing objectives in measurable terms, developing an organised plan (permeated with the experimental spirit) to gather the data, paying special attention to possible sources of systematic error (such as recall bias), interpreting results in light of various alternative explanations, performing follow-up experiments to clarify special issues, communicating clearly with colleagues about the meaning of data for their problem - these are critical elements we sometimes fail to emphasise."

In epidemiology it is recognised that statistical methods cannot prove a causal

relationship. Therefore causal significance is based on 'expert' judgement and some

causal criteria such as consistency of association in study after study, strength of

association, temporal pattern, and coherence of the causal hypothesis with a large body of

evidence (Gail, 1996). Whether the study is experimental or observational there is always

the obligation on the researcher to seek out and evaluate alternative explanations before

drawing causal inference. Hill (1953, cited in Gail, 1996, p. 10) stated that statistical

research should be permeated with the experimental spirit and that "imagination in

combination with a logical and critical mind, a spice of ingenuity coupled with an eye for

the simple and humdrum and a width of vision in the pursuit of facts . . ." were critical

factors in many medical breakthroughs such as the work of Snow in the cholera epidemic.


Quality management could be considered to be dealing with much simpler closed systems

than medicine, which deals with complex stochastic systems. It may be easier to deal with

variability in management than on the human scale. This may account for the

differences in perspective, or it may be historical. Private business may invest more in

improving its systems than publicly owned enterprises such as hospitals do. Another

reason could be that statistical thinking is used in quality management since the gathering

of data for industry is a new phenomenon whereas in other fields such as science it would

be known as scientific thinking. Thus the new tools, or the new ways of thinking, or the

new scientific discipline of statistics, may be more readily used by the new disciplines of

market research and quality management rather than traditional disciplines such as

medicine.

3.5.4 Causation and Variation

Key Points:

• Causation is a critical driving force in applied statistics.

• Random variation is subject to reinterpretation.

This focus on causality and its interpretation is not a feature in statistics education

(Schield, 1995; Schield, 1998) or in statistical literature (Cox, 1992), yet it is the driving

force in applied statistics in such fields as epidemiology, econometrics and sociology. For

these fields Cox (1993, p. 366) suggests the following approach for the analysis and

interpretation of empirical data:

"[first] where there is substantive background it is important to incorporate it into the analysis of specific sets of data [and] secondly where such background substantive information is relatively weak it is desirable that models for interpretation should at least point towards one or more possible processes that might have generated the data and thus in a sense be potentially causal even though it is not reasonable to expect causality in any strong sense to be established from a single observational study."

Cox's (1993) definitions of causality, although stated in mathematical terms, would

appear to be similar to the consistency, strength and coherence criteria as stated in Section

3.5.3. Because a deterministic theory often drives the collection of data in these fields,

Cox (1992) suggests that background information could be incorporated in the following

ways:

(1) the ordering and prioritising of variables as explanatory, intermediate response, or

response;

(2) the classification of explanatory variables into: possible causal, intrinsic properties of

the individual under study, and non-specific such as different countries;

(3) determining the dependence and independence of two or more variables.


Holland (1986, p. 959) emphasises these ideas about causation:

"1. The analysis of causation should begin with studying the effects of causes rather than the traditional approach of trying to define what the cause of a given effect is.
2. Effects of causes are always relative to other causes (i.e. it takes two causes to define an effect).
3. Not everything can be a cause; in particular, attributes of units are never causes.”

This distinction between attributes and causes is important. There can be variation because

of attributes (e.g. gender, height) but because these cannot be manipulated or reduced or

eliminated they are not causes. Holland (1986, p. 959) unequivocally states "no causation

without manipulation."

In the social science area Breslow (1996, p. 26) states that one school of thought holds

the opinion that randomisation has been over-promoted as a means of evaluation, as more

could be gained from observational studies by "thinking hard about causal relationships

among variables and by integrating knowledge of causal structure into the data analysis.”

For causal analysis in social science the real challenges are bias and systematic error such

as non-participation bias, 'recall' bias (as most studies involve a questionnaire only), and

confounding 'hidden variables', rather than random variation (Breslow, 1996; Gail,

1996).

In epidemiology and quality management there is a continuous search for an explanation

of variation, a looking for causes so that the system as a whole can be improved. "A new

approach is necessary that makes it clear that one never leaves variation to chance"

(Pyzdek, 1990, p. 104). Pyzdek gives a graphic example of how viewing chance as being

explicable and reducible rather than unexplainable but controllable in a system can lead to

improvements. In a manufacturing process the average number of defects in solder wave

boards declined from 40 to 20 per 1000 leads, through running the least dense circuit

pattern across the wave first. Another two changes to the system later on reduced the

average number of defects to 5 per 1000 leads. Thus Pyzdek (1990, p. 108) repudiates

the "outdated belief that chance causes should be left to chance and instead presents the

viewpoint that all variation is caused and that many, perhaps most processes can be

improved economically.” His perspective in the marketplace with its increasing emphasis

on continuous improvement could be equally applied to medicine and sociology.

This new approach to thinking about variation is echoed by the recent mathematics of

chaos where chaos is defined as "stochastic [random, chance] behaviour occurring in a

deterministic system" (Stewart, 1989, p. 17). New concepts or interpretations of

randomness are being developed through work in these fields. In the field of chaos it is


known that a very small cause can have a considerable effect on a system. If the cause is

unknown then the effect is called chance. Until the early 1980s it was believed that

randomness came from complexity but it is now known that its effect is seen in both

complex and seemingly simple systems. Therefore it is now believed that the source of

the randomness lies in the choice and measurement of the initial conditions. Because one

cannot ever measure these initial conditions accurately, or cannot perceive the pattern in a

complex dynamical system, one chooses to model unaccountable influences (chance,

chaos) by random variation. If one can see only part of a complex situation it will appear

random. Hence fields such as social science are difficult since measures taken will only

reflect a sub-system of the situation and will be constantly perturbed by unexpected and

uncontrollable outside influences (Stewart, 1989). Thus social science must use statistical

methods to model or filter out these outside effects. Therefore the randomness modelled

in statistics comes from the complexity of the system, the detailed behaviour of which is

beyond the capacity of the human mind.
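Stewart's point about sensitivity to initial conditions can be illustrated with the logistic map, a standard textbook example of deterministic chaos; the particular starting values below are arbitrary.

```python
# Illustrative sketch only: the logistic map x -> r*x*(1-x), a fully
# deterministic rule. Two trajectories starting a tiny distance apart
# diverge over time, so unmeasured detail in the initial conditions
# eventually surfaces as apparently random variation.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)              # initial gap of one millionth
print(abs(a[1] - b[1]))                        # after one step: still about 1e-6
print(max(abs(x - y) for x, y in zip(a, b)))   # later: the gap becomes large
```

Although every step is computed by the same deterministic rule, the two histories soon look unrelated, which is exactly the sense in which chaos is "stochastic behaviour occurring in a deterministic system".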

From these perspectives one could conjecture that chance has dominated the teaching of

statistics too much and that in fact statisticians are seeking out causes all the time. The

cause, the why the data display a particular pattern, should be the driving force in

statistics. Statistics is detective work. Perhaps statistics is in a transition period of

accommodating EDA and the new computer technology into new ways of modelling and

viewing reality. In relation to EDA Cobb and Moore (1997, p. 805) state that “the theory

of exploration is newer, and at present still primitive . . . the theory of interpretation is

fragmentary at best.”

3.5.5 The Nature of Statistical Thinking

Key Points:

• Statistical thinking involves understanding variation and a construction of

interconnected ideas about determinism and indeterminism.

• Statistical thinking is an independent intellectual method.

• Statistics is not mathematics. It has its own characteristic modes of thinking.

Several statisticians who have an interest in statistics education have expressed their

opinion on the characteristics or nature of statistical thinking. Moore (1990, p. 135)

summarises statistical thinking as:

“1. The omnipresence of variation in processes. Individuals are variable; repeated measurements on the same individual are variable. The domain of strict determinism in nature and in human affairs is quite circumscribed.
2. The need for data about processes. Statistics is steadfastly empirical rather than speculative. Looking at the data has first priority.
3. The design of data production with variation in mind. Aware of sources of uncontrolled variation, we avoid self-selected samples and insist on comparison in experimental studies. And we introduce planned variation into data production by use of randomisation.
4. The quantification of variation. Random variation is described mathematically by probability.
5. The explanation of variation. Statistical analysis seeks the systematic effects behind the random variability of individuals and measurements.”
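Moore's third point, the deliberate introduction of planned variation through randomisation, can be sketched as follows; the experimental units are invented for the illustration.

```python
# Illustrative sketch only: random assignment of units to two groups,
# so that uncontrolled sources of variation are balanced between
# treatment and control on average.
import random

def randomise(units, seed=None):
    """Split units at random into two equally sized groups."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Invented experimental units (e.g. field plots in a variety trial):
units = [f"plot-{i}" for i in range(1, 9)]
treatment, control = randomise(units, seed=42)
print(treatment, control)
```

The seed is fixed here only to make the illustration reproducible; in an actual experiment the point of randomisation is precisely that the allocation is left to a chance mechanism rather than to the experimenter's judgement.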

Moore quotes Nisbett's research, which showed that a course in statistics increased

students' willingness to consider chance variation compared to students only exposed to

deterministic disciplines, as evidence that statistical thinking is an independent intellectual

method.

Ullman (1995, p. 2) concurs that statistical thinking or quantitative intelligence is

fundamentally a different way of thinking because the reasoning involves dealing with

uncertain empirical data: "I claim that statistical thinking is a fundamental intelligence.” He

perceives the framework in which statistical thinking operates as being broadly based, to

the extent that it could be used informally in everyday life. "We utilise our quantitative

intelligence all the time . . . We are: measuring, estimating and experimenting all without

formal statistics" (p. 6). Some principles he suggests as a basis for quantitative

intelligence are:

“• to everything there is a purpose
• most things we do involve a process
• measurements inform us
• typical results occur
• variation is ever present
• evaluation is ongoing
• decisions are necessary” (p. 5).

In order to develop statistical thinking Ullman believes that the framework must be

enlarged so that mathematics is seen as one part of the thinking. Part of the problem, in

identifying and communicating what is meant by statistical thinking, is that these aspects

are not readily articulated even by experts (Wild, personal communication, 1995).

Therefore Ullman suggests that

"if we create, codify and legitimise a basic underlying "spoken" quantitative language we will also be providing a vehicle for putting people in touch with their own innate understanding of the basic statistical concepts . . . then maybe they will easily recognise and develop their skills in the higher levels of quantitative activities" (p. 8).

In quality management a common language is being developed through the creation of

thinking tools, course materials and intense discussion on the characteristics of statistical

thinking. Britz, Emerling, Hare, Hoerl and Shade (1997, p. 68) state that “the uniqueness

of statistical thinking is that it consists of thought processes rather than numerical

techniques. These thought processes affect how people take in, process, and react to


information.” They used, as the basis for a session they ran on how to apply statistical

thinking, the ASQC (1996) definition:

“Statistical thinking is a philosophy of learning and action based on the following fundamental principles: (1) all work occurs in a system of interconnected processes; (2) variation exists in all processes; (3) understanding and reducing variation are keys to success.”

However Hoerl et al. (1997, p. 152) overstate the case for such ‘statistical concepts’

when they suggest students should “unlearn their deterministic view of the world.” A

preferable outcome would be that students expand their view to incorporate a

non-deterministic one.

In attempting to describe the domain in which statistical thinking operates, Amstat News

(Sylwester (chair), 1993, p. 7) perhaps captures the main aspects. "Statistical Thinking

encompasses a) the appreciation of uncertainty and data variability and their impact on

decision making and b) the use of the scientific method in approaching issues and

problems." Moore (1992a, 1997) has a slightly narrower view. He suggests what

should be emphasised in teaching and therefore by implication what is fundamental to

statistical thinking is: (1) data analysis in the context of basic mathematical concepts and

skills; (2) design of data production; and (3) an appreciation of the role of variation and

uncertainty. Inherent in the first aspect is a sense of numeracy in thinking and the ability to

think with graphs and numerical descriptions. In particular, statistical data involve "the

exercise of judgement and a stress on interpreting and communicating results" (Moore,

1992a, p. 424). Cobb and Moore (1997, p. 801) extend this perspective by expanding on

the role context plays: “statistics requires a different kind of thinking, because data are just

not numbers, they are numbers with a context.” They emphasise that the data ‘literature’

must be known in order to make sense of data distributions. This implies that

statistical thinking involves going beyond and looking behind the data, linking them

and making connections to the context from which they came. The second aspect

emphasises that the quality of data is dependent upon the design of the data production

process. The implicit thinking required is the ability to detect bias, through knowledge of

the context and through knowledge of methodology, that will help objectify the collection

of such data. The third aspect highlights the role of variation and uncertainty in statistical

thinking which is considered a key component: "pupils in the future will bring away from

their schooling a structure of thought that whispers ‘variation matters'" (Moore, 1992a, p.

426). What specifically that structure of thought is and how it should be translated or

modelled is a matter of conjecture. At the root of that structure appears to be ideas about

determinism and indeterminism.


Mallows (1998, p. 3) believes that these definitions are inadequate because they do not

include the need for thinking about the relevance of the data to the problem. His definition

is: “Statistical thinking concerns the relation of quantitative data to a real-world problem,

often in the presence of variability and uncertainty. It attempts to make precise and explicit

what the data has to say about the problem of interest.” Hoerl et al. (1997) hint at this data

aspect when they suggest that attention should be paid to the quality of data, as does

Scheaffer (1997, p. 156) when he states that “what is lost is a thorough discussion of

how the data originated, what the numbers might mean.”

Moore (1992b) unequivocally states that statistics is not mathematics and that statistics has

its own characteristic modes of thinking. Cobb and Moore (1997, p. 803) expand on this

theme by describing a difference:

“the ultimate focus in mathematical thinking is on abstract patterns: the context is part of the irrelevant detail that must be boiled off over the flame of abstraction in order to reveal the previously hidden crystal of pure structure. In mathematics, context obscures structure. Like mathematicians, data analysts also look for patterns, but ultimately in data analysis whether the patterns have meaning and whether they have any value, depends on how the threads of those patterns interweave with the complementary threads of the story line. In data analysis, context provides meaning.”

Hawkins (1996) goes even further and suggests that a mathematically educated person can

be statistically illiterate, implying that statistical thinking is a different way of reasoning.

She gives her definition of statistical literacy, though there is still much debate in the

statistical profession on its nature, as "an ability to interact effectively in an uncertain

(non-deterministic) environment" (Hawkins, 1996, p. 2). Hawkins coins the term

'informacy' in an attempt to describe what it is to be statistically literate. To be informate

means “one requires skills in summarising and representing information, be it qualitative

or quantitative, for oneself and others” (Hawkins, 1997, p. 144). Hawkins makes the

point as Ullman (1995) did that a move towards statistical literacy should be accompanied

by a move towards making statistical language intelligible. Hawkins strongly emphasises

that students cannot acquire statistical reasoning without knowing why and how the data

were collected. "Persons whose mathematical education leads them to believe that

knowledge about a statistical distribution is itself a final product are not likely to be

statistically literate" (Hawkins, 1996, p. 7).


3.5.6 The Need for Statistical Thinking

Key Point:

• Students should experience statistical thinking through dealing with real world

problems. Statistics education should focus on analytical studies rather than

enumerative studies.

Amongst statisticians there is an increasing clamour for statistical education to focus on

statistical thinking (e.g. Moore, 1995; Snee, 1993; Bailar, 1988). Their argument is that

the traditional approach of teaching which has focussed on the development of knowledge

and skills has failed to produce an understanding of statistical thinking. “Typically people

learn methods, but not how to apply them or how to interpret the results” (Mallows,

1998, p. 2). They suggest there is a need to focus on 'authentic' activity with the

emphasis on "data collection, understanding and modelling variation, graphical display of

data, design of experiments, surveys, problem solving and process improvement" (Snee,

1993, p. 151) rather than on the mathematical and probabilistic side.

There is also a call for statistics education to focus on analytic studies rather than

enumerative studies. Enumerative studies are concerned with estimation for the population

from which the sample is drawn (e.g. opinion polls), whereas analytical studies are

concerned with planning for the future, and prediction for the process which produced the

data (e.g. tests of varieties of tomatoes, comparison of ways to advertise a product).

Hahn and Meeker (1993) suggest there are important conceptual differences between

these types, and that failure to distinguish the difference can result in misleading or

incorrect conclusions. Snee (1993) believes that the way to develop statistical thinking is

through analytic studies and that the focus should be on solving problems, improving

processes, and predicting process performance.

Bailar (1988, p. 7) also deplores the fact that universities are not teaching statistical

thinking but rather the mechanical manipulations.

"Part of the problem is that many people who teach have little or no practical experience in what they teach. Practical experience tells students when the underlying assumptions are not met, when there are data gaps, when data are censored, when results are needed immediately and are on a tight budget . . . Students learn much more about how to confront data and what questions to ask when faced with real problems than . . . from a textbook."

According to these statisticians, the solutions for changing this situation are: that a greater

variety of learning methods must be employed at undergraduate level; and that, in

particular, students must be allowed to experience statistical thinking through dealing with

real world problems and issues. A problem, as Bailar points out, is teacher inexperience,

but perhaps another problem is the lack of an articulated coherent body of knowledge on


statistical thinking. In fact, Mallows (1998) based his 1997 Fisher Memorial lecture

around the need for effort to be put into developing a theory for understanding how to

think about applied statistics and that these principles should be useful for elementary

teaching.

3.6 Current Theoretical Models for Stochastic Thinking

Shaughnessy (1992) states that the only detailed conceptual model of stochastic thinking that he is aware of is the Structure Process model of thinking developed by Scholz (1987). Other research, such as Watson and Collis's (1994) use of the SOLO taxonomy model for the assessment of learning in statistics, the epistemological triangle proposed by Steinbring (1991), Ben-Zvi and Friedlander's (1996) work on modes of thinking, and Shaughnessy's (1992) characterisations of stochastic conceptions, could be considered to offer partial models of statistical thinking and is therefore included in this section.

3.6.1 Scholz Model

Key Point:

• A detailed conceptual model of stochastic thinking has been developed by

Scholz, from investigating student thinking on closed tasks and from a

probabilistic stance.

Scholz (1987, 1991) has developed a cognitive framework (Fig. 3.2) for information

processing by the learner in stochastic thinking. The main processing units are the

working memory and the guiding system, while the heuristic and evaluative structures are

assumed to use the knowledge base and the goal system. The decision filter and sensory

system are assumed to be related to the individual and his or her environment. Scholz

(1987) theorises that there are two essentially different modes of cognitive activity,

intuitive and analytical, that influence probability judgements. He states that each mode is

necessary in stochastic thinking. He believes that the intuitive mode of thinking is the

more natural mode, and that the analytic mode needs some sort of switch in order to

operate all the units in his model at the higher order level. The intuitive mode does not

result in a systematic search of the knowledge base or heuristic structure. Only directly

accessible knowledge is retrieved from the knowledge base and only simple everyday

heuristics are applied. This model has been developed from research on probability and

therefore may not be appropriate for statistics for the following reasons:


(1) In order to interpret the subjects' processes in tackling a problem, the tasks were divided

into a problem solving frame and a social judgement frame. The tasks in the problem

solving frame had a mathematical setting, were embedded in a closed story and had

one solution. These could be considered to be mathematical but not statistical

problems. The tasks in the social judgement frame did not require an exact answer

but rather an estimate based on experience and hence could be considered as akin to

one type of statistical problem.

(2) The tasks, in the form of text and questions, were given to the subjects to work on.

Therefore this model may be considered to be only appropriate for subjects reacting

to given information on paper, not to subjects being involved in an investigation

which might require synthetic modes of thought, and not to subjects thinking with

different cognitive technologies such as computers.

Figure 3.2 Scholz Model of Stochastic Thinking of a Person (from Scholz, 1991, p. 231)

However, because of the inter-relationship between probability and statistics, there should

be similarities in thinking structures, and an overlap between this model and a cognitive

model for statistical thinking. It should be noted that this model (from psychological


research) is for the purpose of understanding how the learner thinks. It is not for the

purpose of understanding the type of thinking that stochastics requires.

3.6.2 Other Cognitive Models

Key Points:

• Shaughnessy has characterised stochastic conceptions from a probabilistic

stance.

• The SOLO taxonomy model appears to assess the development of student

statistical thinking.

• Four thinking modes used by students in statistical investigations have been

identified.

Shaughnessy (1992, p. 485) outlines his characterisation of stochastic conceptions as

below. These cannot be considered sequential as he has found people who can operate at

several levels, dependent upon the nature of the task.

“Types of Conceptions of Stochastics

1. Non-statistical
Indicators: responses based on beliefs, deterministic models, causality or single outcome expectations; no attention to or awareness of chance or random events

2. Naive-statistical
Indicators: use of judgemental heuristics such as representativeness, availability, anchoring, balancing; mostly experientially based and nonnormative responses; some understanding of chance and random events

3. Emergent-statistical
Indicators: ability to apply normative models to simple problems; recognition that there is a difference between intuitive beliefs and a mathematised model; perhaps some training in probability and statistics; beginning to understand that there are multiple mathematical representations of chance such as classical and frequentist.

4. Pragmatic-statistical
Indicators: an in-depth understanding of mathematical models of chance (i.e. Bayesian, frequentist, classical); ability to compare and contrast various models of chance; ability to select and apply a normative model when confronted with choices under uncertainty; considerable training in stochastics; recognition of the limitations and assumptions of various models.”

According to Shaughnessy (1992) the teaching of introductory courses in probability and

statistics is almost wholly in the emergent-statistical stage, creating a dissonance with the

students who are invariably in the first two stages. He believes that instruction must

confront student belief systems and replace them with mathematical models and the ability

to operate in a stochastic setting, despite the fact that research has shown student beliefs

are very robust.


“The dominance of deterministic models with algorithmic presentations in our science and mathematics teaching precludes much exposure to models of chance and uncertainty for many of our students. Thus they may look for causal influences to make decisions under uncertainty” (p. 485).

Such a model suggests that he has defined four very broad categories, with the emphasis

on probability rather than statistics. The model for statistics may have some different

aspects as statisticians search for and extract signals (causal patterns) and perhaps model

the noise. Consideration of Biehler’s (1994b) statement that statistics requires dual modes

of thinking, deterministic and non-deterministic, demonstrates that Shaughnessy’s model,

as stated above, does not appear to deal with this aspect.

Watson and Collis (1994) suggest that the types and levels of cognitive functioning

occurring, when students solve problems involving chance and data, can be explained by

the Collis and Romberg (1991) model. This model, which evolved from the SOLO Taxonomy

(Biggs & Collis, 1982), suggests that the learning modes, sensori-motor, ikonic,

concrete-symbolic, formal and post-formal, develop from birth to adulthood, and that

each mode continues to develop in parallel with later modes. The two modes of

functioning that are of interest to statistics learning are the ikonic mode, which is

associated with intuitive functioning, and the concrete symbolic mode, which is

associated with logical mathematical functioning. Subsequent studies related to this

Australian project (Watson, Collis & Moritz, 1994a; Watson, Collis & Moritz, 1994b;

Watson, Collis & Moritz, 1994c; Watson, Collis, Callingham & Moritz, 1994) confirm

that this theoretical model is able to describe and classify responses from students

assessed by short-answer questionnaires, media reports, open-ended tasks, interviews

and concrete materials. The types of responses by the students were classified into

unistructural, multistructural and relational (U-M-R) responses. Two U-M-R cycles were

identified within one mode. This classification, developed by Biggs and Collis (1982), is summarised briefly below; the first and last levels were not pertinent to the tasks set:

1. Prestructural

The learner is distracted by an irrelevant aspect. Responses are not

meaningful.

2. Unistructural

The learner focuses on one aspect of the question or stimulus.

3. Multistructural

The learner focuses on several aspects but does not integrate them.

Responses represent several disjoint aspects, usually in a

sequence.


4. Relational

The learner sees interconnections between the various elements of

a task. Responses involve several integrated aspects which have a

coherent structure and meaning.

5. Extended Abstract

The learner brings information to the problem which is external to

the question. This level, when applied to some responses in a

given mode, takes the whole process into a new mode of

functioning and can be equated with a unistructural response in a

successive mode.

The influence of multimodal functioning, ikonic and concrete-symbolic, is considered to

be important for statistics where problems are often situated in the real world, and hence

intuition and perceptions are operationalised. Furthermore these studies found that the

context of the problem influenced the strategies employed.

Jones, Langrall, Thornton and Mogill (1997) have recently developed a framework which

they believe describes students’ probabilistic thinking. They state that it generally agrees

with the above Biggs and Collis (1982) classification. These theoretical models could be

regarded, along with the Scholz model, as general models that are applicable to

mathematics and probability. Nitko and Lane (1992) adapted a mathematical framework

for the generation of assessment tasks that assessed how students think about and reason

with statistics. Their framework consists of a relationship between the domains of

statistical activity and cognition. The statistical activity domains include problem solving,

statistical modelling and statistical argumentation whilst the cognitive domain includes

representation, knowledge structure, connections among types of knowledge, active

construction of knowledge and situated cognition. In my opinion, such matrix

frameworks, although an improvement on the content-by-behaviour matrices, are still

reminiscent of old-world frameworks which seek to measure learned knowledge rather than the creation of knowledge. A completely new way of thinking about statistics, teaching

and assessing is needed for change (Romberg, Zarinnia & Collis, 1990). If statisticians

such as Hawkins (1996) are stating that statistics is an independent intellectual method

then these models may be inadequate for capturing the essence of the cognitive

functioning that is required in statistics.

Ben-Zvi and Friedlander (1996) proposed a framework for thinking modes in the learning

of statistics. This framework, derived from observations of students, seems to offer ideas

for assessing the creation of statistical knowledge. It appears to have similarities to the

Biggs and Collis (1982) classification. The four identified thinking modes, for students

who were using computers for statistical structured activities and investigations, are:


Mode 0: Uncritical thinking

In this mode graphs were used for illustration rather than being

used as an analytical tool.

Mode 1: Meaningful use of representation

Features of this mode include the ability to select and justify an

appropriate graph or measure within the data analysis or statistical

model stage. Typically inferences were justified graphically. There

were poor connections to the situation under investigation.

Mode 2: Meaningful handling of multiple

representations: developing metacognitive abilities

The data analysis stage is marked by organisation and

reorganisation of data, hypothesis generation and an ongoing

search for meaning and interpretation in relation to the situation

under investigation.

Mode 3: Creative thinking

In a search to communicate and justify ideas drawn from the data

the students employ an innovative graphical representation or

method of analysis.

The context of the investigation affects the modes of thinking employed. Some topics

invoke higher modes of thought whereas other topics leave performance at a descriptive

level. It is conjectured that preconceptions related to the context may lead students to

ignore statistical ideas. The role of teaching in encouraging students to employ critical

thinking strategies is considered to be crucial in the learning process.

3.6.3 Epistemological Considerations

Key Points:

• The epistemological triangle as used by Steinbring for stochastics may help to

develop statistical thinking.

• Theories of instruction for mathematical problem solving and, by implication,

statistical problem solving, are inadequate.

Steinbring (1991, p. 506) believes that "[stochastic] knowledge is created as a relational form or linkage mechanism between formal calculatory aspects on the one hand, and interpretive contexts on the other." His theory is based on probability considerations. A circularity develops because probability concepts draw on notions of randomness, yet in order to understand randomness there must already be a concept of probability. Hence a logical, sequential, mathematical teaching approach does not work. Biehler (1994a) adapted Steinbring's epistemology for statistical concepts. This circularity may then mean


that for statistical concept development a strong linkage has to be created between the

statistical thinking tools, such as graphs and statistical summaries, and the context of the

situation. The assumption behind this epistemological triangle (Fig. 3.3) is that the

statistical concepts would be subject to development over a long period of time with

different tools and different contexts.

[Figure: the epistemological triangle, relating the Concept to the Real Situation (Interpretive Contexts) and to Statistical Tools, mediated by the Statistical Model and Variation]

Figure 3.3 Epistemological Triangle (adapted from Biehler, 1994a, p. 175)

The development of statistical knowledge and concepts in this way would seem to suggest that this epistemology would also develop statistical thinking. Statistical thinking

could be regarded as the interactions between the real situation and its statistical model and

between these and the resulting conceptual development. This interdependence between

similar elements has been noted by statisticians such as Bartholomew (1995) who stated

that statistical reasoning was based on the interplay of data and theory, and educationists

such as Pfannkuch (1996) who found that context knowledge and subject knowledge

appeared to operationalise statistical reasoning.

Lester (1989, p. 122) observes that what is needed is adequate theories of instruction for

problem solving.

“. . . the link between cognition and instruction requires a compatibility between a theory of cognition and theory of instruction and that these theories must apply at two levels: classroom unit and the individual. . . . in my mind current theories of cognition apply to individual problem solving performance and theories of instruction are concerned mostly with classroom processes . . . imperative that greater attention be given to instructional theories that can serve as a link between cognitive theory and educational practice. Extant theories of instruction that have relevance for mathematical problem solving are woefully inadequate.”

His sentiments could equally apply to the teaching of statistics: models are needed that are accessible to teachers and useful for improving both learning in the classroom and the learning of the individual.

Lester (1983) also raises two questions about the development of a theoretical model for

problem solving. The first is the question as to whether there are existing psychological


theories that would explain mathematics problem solving, or should a special theory be

developed. The second question raises the possibility that problem solving might not be

exclusive to the domain of mathematics. He feels that research in this area would go a

long way towards linking mathematical thinking to, and distinguishing it from, other

types of thinking. Similarly, such a consideration should be given to research in statistical

thinking.

3.7 Summary

Mathematical Problem Solving

• Domain specific knowledge is vital.

• Students need facility in recognising similarities in problems and need to develop a

disposition to engage in critical analysis.

• There are socio-cultural influences on how mathematics is perceived and learnt.

• Reasoning in mathematics is different from reasoning in statistics.

• Teaching should draw attention to the metacognitive components of problem

solving in mathematics and by implication in statistics.

Psychologists’ Perspective

• Rationalisation of events is related to a psychological need and this leads people to

interpret what could be random events in a deterministic manner.

For probability problems, context is not used to solve the problem, whereas in statistics it is.

• People employ such heuristics as representativeness and availability to assess

probabilities and to predict values.

• Judgement criteria used by people are complex and may be dependent upon

context, and the experiences and beliefs of the person.

• Mental models are needed for productive reasoning.


• Learners’ intuitions such as the primacy effect may become obstacles to

interpretation and statistical thinking.

Educationists’ Perspective

• The domain of statistical thinking should be widened to encapsulate the whole

process from problem formulation, to interpretation of conclusion in terms of the

context.

• Students tend to focus on individual causes to generalise rather than on group

propensity.

• Students believe that their own judgement of a situation is more reliable than what

can be obtained from data.

• For statistical literacy students need to experience both analytic and synthetic

aspects through carrying out project work for themselves, and through

interpreting and critiquing a report on a project done by other people.

• Statistics cannot be taught as mathematics. There must be a convergence to a

conclusion with empirical data.

• Instruction should be designed to confront students’ misconceptions.

• Students tend to focus on deterministic causes and do not consider randomness as

a possibility.

• There is a growing alignment of mathematics learning with mathematics thinking.

• More scaffolding tools need to be developed to aid thinking processes.

• Statistics has the dual goals of developing both probabilistic and deterministic

thinking.

Statisticians’ Perspective

• The domain of statistics should be broadened to encapsulate the empirical enquiry

process from the problem formulation stage to the decision making stage.

• Quality management is based around understanding the theory of variation. The

attitude is that all variation is caused and should be identified and minimised.


• Quality management has produced many papers on statistical thinking.

• In epidemiology many judgement criteria are used for causal inference.

• Causation is an important driving force in applied statistics.

• Random variation is subject to reinterpretation.

• Statistical thinking involves understanding variation and a construction of

interconnected ideas about determinism and indeterminism.

• Statistical thinking is an independent intellectual method.

• Statistics is not mathematics. It has its own characteristic modes of thinking.

• Students should experience statistical thinking through dealing with real world

problems. Statistics education should focus on analytical studies rather than

enumerative studies.

Current Theoretical Models for Stochastic Thinking

• A detailed conceptual model of stochastic thinking has been developed by Scholz,

from investigating student thinking on closed tasks and from a probabilistic

stance.

• Shaughnessy has characterised stochastic conceptions from a probabilistic stance.

• The SOLO taxonomy model appears to assess the development of student

statistical thinking.

• Four thinking modes used by students in statistical investigations have been

identified.

• The epistemological triangle as used by Steinbring for stochastics may help to

develop statistical thinking.

• Theories of instruction for mathematical problem solving and, by implication,

statistical problem solving, are inadequate.


This literature review has revealed a multiplicity of perspectives on the learning, teaching

and practice of statistics. If the viewpoint is taken that students learn more effectively

through ‘authentic’ tasks then I believe that there is a need for more research on student

and practitioner thinking and behaviour. The research would not only help to define the

characteristics of the discipline itself but also help to uncover the modes of thinking that

are inherently and uniquely statistical. Such research should ultimately benefit the

knowledge base of statistics teaching. In the next chapter I will describe the research process I used in a quest, at first, to develop statistical thinking in students and, ultimately, to define some characteristics of statistical thinking.