
Interacting with Computers vol 6 no 1 (1994) 3-22

Learning to use a spreadsheet by doing and by watching

Michael P. Kerr and Stephen J. Payne*

An important practical question is: how should instruction for computer skills be designed to facilitate effective learning? The reported study examines the instructional efficacy of animated demonstrations within active and passive learning contexts of teaching basic spreadsheet skills. Four content-matched instructional regimes were compared: the commercially available tutorial (a 'scenario machine'), an animated demonstration of this tutorial being used, and problem-solving supported by either the user manual or a set of task-specific demonstrations. Acquired spreadsheet skills were then tested on a standard task. Results indicate a clear learning advantage of problem-solving over prompted interaction (the scenario machine).

The study suggests two distinctive roles that animations could exploit within computer instruction. Simply watching an animated demonstration can provide a useful introduction to complex interfaces; additionally, animations can be an effective 'example following' resource for more active problem-solving.

Keywords: human-computer interaction, training, animated demonstrations, problem solving

Instruction for computing skills

The acquisition of computing skills is an important activity in many professional lives. In a range of work environments, new users are confronted with rapidly advancing technology and are expected to learn the latest application quickly. Despite advances in user interface design, state-of-the-art systems challenge learners, and require from them a substantial investment of effort. Thus, an important practical problem is how to design instructional materials that facilitate effective learning.

One might expect a good deal of direction on such issues from the general

MRC/ESRC Social & Applied Psychology Unit, Dept. of Psychology, University of Sheffield, Sheffield S10 2TN, UK. *School of Psychology, University of Wales, Cardiff, PO Box 901, Cardiff CF1 3YG, UK

Paper received: April 1993; revised October 1993

0953-5438/94/010003-20 © 1994 Butterworth-Heinemann Ltd


field of instruction and training, and indeed some useful guidance can be gleaned, particularly from work on instruction for procedural cognitive skills (see Patrick (1992) for a recent review). However, computer systems additionally pose some unique problems, and some unique opportunities, so that a specialised research literature has developed.

This paper explores two themes that have been developed within the specialised literature. The first, borrowed from the mainstream, relates to the possible relative benefits of active over passive learning. The second, more specialist theme concerns the potential applicability of animated demonstrations. By 'animated demonstrations' we mean software that generates a sequence of screens so that the computer system display appears as if some task were being performed by a skilled user. We report an experiment that pits active problem-solving against more passive instructional regimes, evaluating the effectiveness of animated demonstrations within both contexts.

Active learning of computer systems

Amongst the most visible and influential specialised work on instruction in computer skills is the 'minimalist' project of Carroll and colleagues (Carroll, 1990). The term 'minimalist' denotes a broad instructional philosophy, in which the design of instructional materials seeks to interfere with the learner's own purposes and motivations as little as possible. In the case of computer manuals, such a minimalist philosophy leads to documents that are minimal in a second, cruder, sense - they have fewer pages than commercial variants - but the philosophy has also led to the development of instructional techniques that involve extensions to user interfaces (see e.g. Singley et al., 1991).

Among these innovations is the 'training wheels interface' (Carroll and Carrithers, 1984), in which some of the functionality of a system is hidden from novices, so as to restrict their explorations, and lessen the possibility of them floundering in blind alleys, unable to return to a system state that they recognise. The limiting case of a training wheels interface is one in which only a single action is available to learners at any one time (all other actions lead to feedback and error-correction, of some kind). Such an interface hardly allows 'exploration', of course, but it does enable an approach to strictly predetermined problems that is perhaps more 'active' than the provision of scripted solutions for the learner to read and then apply. Carroll and Kay (1988) labelled such an instructional interface a 'scenario machine', and explored variations in the detailed way the machine responds to user actions that are not on the prescribed path. The most successful variant, they discovered, did not give the learner feedback, but rather behaved just as if the learner had entered the correct command.

A comparison of these three cases of minimal instruction - minimal manuals, training wheels, and scenario machines - opens interesting questions concerning active versus passive learning. Just what aspects of the minimalist regimes are leading to their reported success, relative to typical commercial training manuals? With a minimal manual, the designed incompleteness of the instructions forces the user to explore the system actively, making guesses and learning from mistakes. With the training wheels interface,



the worst costs of an exploratory strategy are avoided, and with the scenario machine, mistaken guessing costs the learner virtually nothing (nothing at all in Carroll and Kay's preferred version). The success of all these techniques confirms that active learning, involving interaction with the target machine, is more effective than reading instructions, but why? Is it because the user is free to set his or her own goals? (If so, then we might expect the scenario machine to be the least successful method, but no direct comparisons among these techniques have been made.) Or is it that the learner has to attempt to solve problems him or herself, actively choosing or inferring the next command, rather than merely reading and applying potted solutions? Or is it simply that the learner gains more exposure to the interface - seeing real screens, pressing real buttons - while learning task procedures? From the psychological literature on learning and instruction, one might predict and begin to explain any or all of these effects. But which experiences are really making a difference in the context of learning to use a computer system?

In two experiments on the acquisition of spreadsheet skills, Charney et al. have begun to separate these issues, and open them to more systematic investigation. Their first pertinent study (Charney and Reder, 1986) compared learning spreadsheet commands by reading the manual, copying example solutions (i.e. entering at the interface a textually-described method) or problem-solving. The main finding was that performance when tested was reliably better when training contained problem-solving. Copying example solutions proved no better than simply reading the solutions in a training manual. When example solutions were combined with problem-solving, for a particular command, performance was not any better than pure problem-solving.

Later, using a similar methodology, Charney et al. (1990) investigated the relative benefits of acquiring spreadsheet skills through learner-initiated versus experimenter-supplied goals, and within the latter condition, active practice at selecting and applying procedures (i.e. problem-solving) versus copying example solutions (repeating one of the comparisons in the first experiment).

Thus an 'exploratory learning' group experimented with the commands at will, setting their own goals and exploring freely, with only a descriptive manual for support (i.e. a manual containing instructions about commands, but no example methods), while an 'interactive instruction' group worked on problems presented in a training manual. For the interactive group, the training problems for half the commands were presented with explicit solutions that subjects typed in verbatim, whereas problems for the remaining commands were presented without solutions, and subjects solved them by actively selecting and applying procedures (and afterwards received feedback). Learning commands by problem-solving led to longer training times, but also faster and more accurate test performance than solution-copying or exploration conditions. Exploration took longer than solution-copying at training, but was no better at test.

As with any empirical investigation, some specific properties of Charney et al.'s experimental method constrain its proper interpretation. First, the comparison between problem-solving and method-following is, in both experiments, a within-subjects comparison. This means that each subject is provided with solution-methods for some of the tasks. It may be that these given methods provide a platform of secure knowledge from which the acquisition of skills through problem-solving can proceed. This is made more plausible by the high degree of between-task consistency in modern user interfaces, so that solutions to new problems can in part be transferred from known methods. Consequently there is no guarantee from Charney et al.'s experiments that problem-solving would be the best instructional regime in isolation, i.e. in between-subjects designs.

Second, the problem-solving condition in both Charney et al.'s experiments provided subjects with complete feedback in the form of correct method-solutions for any problem that they failed to solve correctly. Again, this is a perfectly sensible experimental design decision, but nevertheless limits the practical generalisation from the findings: perhaps problem-solving is only an effective instructional technique if subjects have the safety net of correct answers for reference, should their problem-solving fail.

In summary, the work of Charney et al. suggests that learning by problem-solving, with given goals, may be an effective training schedule for novice users of spreadsheets. Problem-solving is more effective than more passive example-copying and than more unconstrained exploration. But we cannot yet be certain about the conditions under which this generalisation holds. In particular, we do not know whether problem-solving will still be effective in relatively unstructured conditions, where learners have neither guided practice with other parts of the system, nor correct solutions for failed problems.

Animated demonstrations

The idea of teaching procedural skills by demonstration is not new: it is a staple of apprenticeship training, a paradigm that has recently been held up as some kind of educational ideal (Collins et al., 1987). In the instructional literature, several studies have been reported that evaluate the use of filmed demonstrations to teach procedures (e.g. Baggett, 1979; 1987). Animated demonstrations share some of the features of such instructional films, but are also distinct, in that they do not show a human user (instead, the system displays changes as if some invisible user were operating the system). In pure form, animations contain no sound, and no linguistic descriptions, other than those that appear on the system's display.

There is very little experimental evaluation of the efficacy of animated demonstrations as instruction for computer skills, but this has not hindered their increasing popularity. Several systems have been described in the research literature or exist in the commercial market. Research examples include: programming by example (Duisberg, 1988); NLS-SCHOLAR (Grignetti et al., 1975) which uses artificial intelligence techniques to teach text-editing skills through demonstrations; and CADHELP (Cullingford et al., 1982). Commercial examples include the 'getting started tours' that are currently packaged with Apple Macintosh systems, and the 'demo disks' that accompany the instructional manual for the Wingz Version 1.0 spreadsheet application (Informix Software).



Two aspects of animated demonstrations are particularly relevant in the light of the above discussion of active learning of computing skills. First, an animated demonstration exposes the user to sequences of screens. If this is one of the most important features that underlies the success of active instruction, then animated demonstrations might share this success. Second, as noted by Lewis et al. (1989), it is difficult to imagine an animated demonstration being readily available for consultation during problem-solving. With textual instructions, example-following can take place through closely intertwined, step-by-step consultation and copying. With animated demonstrations, a sequence of actions must be viewed, and remembered, before it can be tried out. In principle it may be possible to limit this sequence to a single action and screen-response, but in practice the consult-attempt cycle will be lengthened. In the light of the negative findings for textual instruction-following, this may be a positive advantage in certain situations.

In the face of these optimistic a priori prospects, the few published studies of animated demonstrations offer a mixed evaluation. The most negative finding is that of Palmiter et al. (1991), who compared animations with informationally equivalent text within a didactic task-by-task instructional programme for teaching elementary HyperCard interactions. Subjects who watched animations were quicker to reach criterion performance during training, i.e. subjects were more able to correctly enact a method having just viewed a demonstration than having just read a description. However, one day later, subjects who had been instructed with animations showed poorer retention of the task methods. Palmiter et al. infer that the animation subjects achieved good training performance by superficially mimicking the demonstrated method, whereas subjects who read textual instructions were forced to process the instructional material more deeply to achieve criterion performance.

It is worth noting that subjects' use of textual instructions in this experiment was constrained, for the purpose of comparison with animations, to a read-then-act sequence, as opposed to the usual intertwined consultation-action cycle. If our argument is correct, this use of text, although limiting, may lead to better retention than classical 'method copying'. Even so, in a close replication of the study, which used the MacDraw graphics editor as a target system, Waterson and O'Malley (1992) discovered an advantage for animations over text both in training and retention.

Payne et al. (1992) investigated the instructional potential of ‘pure’ (i.e. no commentary or supporting documentation) animated demonstrations (silent video recording) as an aid to exploratory learning of the MacDraw graphics editor. Learning advantages over no instruction were found, and this readily implemented and flexible method of instruction proved to be as effective as text-based instruction.

Together, these studies confirm the a priori analysis that animated demon- strations have interesting instructional potential, but warn that this potential may be constrained by properties of the tasks being instructed, and by the instructional regime within which the animations are embedded.



Introduction to the experiment

The study reported below attempts to offer further evidence regarding the relative importance of the separate features of active learning, and the potential of animated demonstrations. Clearly, there are more open empirical questions in this domain than can be addressed in a single study. Our approach is therefore to accept pragmatic constraints on the development of instructional regimes, and to base our experiment around commercially developed instructional materials. We compare four instructional regimes for teaching the basic functions and uses of the Microsoft Excel spreadsheet application (version 2.2; Microsoft Corporation, 1989).

The instructional regimes include, and are all derived from, the commercially provided interactive tutorial (Microsoft Excel Tour - A HyperCard based Tutorial, Microsoft Corporation, 1989). One of the regimes also uses the commercial user manual. The experiment compares four training regimes, each of which we consider to be a genuine practical alternative for introductory instruction in spreadsheet skills; and each of which is potentially excellent, given the current state of knowledge - there are no ‘straw regimes’. The four training regimes are approximately matched in terms of their content: under each regime, subjects watch or perform (or have the opportunity to perform) exactly the same set of spreadsheet tasks, in the same order.

• Regime 1: Interacting with a commercial scenario machine (SM). The 'interactive tutorial' supplied with Excel can be characterised as a scenario machine. In the terms of Carroll and Kay (1988) it is a mixture of two types: 'Prompting feedback' for some tasks, 'Feedback' for other tasks.

• Regime 2: Watching a scenario-machine demonstration (SM-Demo), i.e. a pure demonstration of the scenario machine being used without error.

• Regime 3: Problem-solving using a set of task-specific animated demonstrations (PS-Demo).

• Regime 4: Problem-solving using a commercial manual (PS-Manual).

Comparisons among different subsets of these 4 regimes address the relative importance of different features of active learning. By comparing SM with SM-Demo, we can examine the importance of interaction, relative to mere exposure to screen-states. Watchers of the SM-Demo will be exposed to exactly the same information as users of SM, but without having to act. By comparing PS-Demo with SM and SM-Demo we can examine the benefits of having to encode and then repeat task-methods, relative to being prompted at each step. Users of the PS-Demo will have access to almost exactly the same demonstrations as viewers of SM-Demo, but the demonstrations are partitioned task-by-task, and must be applied (copied) as well as viewed. By comparing PS-Demo with PS-Manual we can test the benefits of having to search and interpret textual materials that are imperfectly matched to the task-at-hand. The two PS groups have to solve exactly the same problems, so that if their performances were perfect they would perform the same sequence of interactions as the SM group. But the PS-Manual group have to derive relevant information for themselves, from the complete commercial manual, whereas the PS-Demo group can choose to watch demonstrations that show exactly how to solve each problem.

We recognise that none of these comparisons is perfectly controlled: the study is something of a compromise between a practical evaluation (as mentioned, all the regimes are practical alternatives, and rely on commercially produced materials) and a controlled psychological experiment. Such a compromise is typical, perhaps, of most empirical investigations of training regimes. We believe that compromise is appropriate in view of the rich nature of real-world computer-learning experience and the urgent practical requirements of good instructional designs. We trust that our efforts to control the content of instructional regimes, and our use of entirely between-subject manipulations nevertheless afford reasonably secure inferences about the relative merits of the different instructional styles.

After undergoing instruction under one of the four regimes, each subject's acquired competence is assessed with a standard task: that used by Baxter and Oatley (1991). This allows some additional comparison of our results with those derived from the quite different instructional methods in that study.

Method

Design

A simple between-subjects design was used in which four groups of subjects underwent different training followed by the same standardised test task.

Subjects

Thirty-six subjects (19 males and 17 females), who had no previous experience of spreadsheet applications, volunteered to take part in the experiment for payment. Nine subjects were assigned to each of the four training conditions according to their order of arrival at the experimental laboratory. Before starting the experiment, subjects were asked to indicate approximately their level of experience with computer systems. In each of the four groups there was a spread of experience, with most subjects in each group having regular Macintosh word-processing experience but little other exposure to computers.

Training materials

Instruction in all training conditions was presented to subjects through the HyperCard application (Version 2, Apple Macintosh Inc., 1991), which allowed the formation of computer-based instructional conditions that users could negotiate without any experimenter interference.

Scenario machine condition (SM)

This condition employed 3 of the 4 sections of the commercially available tutorial for Microsoft Excel. The sections used were Introduction (including instructions on how to use the tutorial), Worksheets (subsections: 'What is a worksheet?' and 'Using a worksheet') and Charts ('What is a chart?' and 'Using a chart'). The section on databases was omitted. Subjects negotiated the sections through a mix of mouse (i.e. clicking and dragging and releasing) and keyboard



(i.e. typing) operations. An introductory HyperCard card was added, which instructed subjects to complete the sections in the following order: Introduction (requiring 30 mouse operations over about 5 minutes), What is a worksheet? (20 mouse operations; 4-5 mins approx.), Using a worksheet (50 mouse and 16 keyboard operations; 20-25 mins approx.), What is a chart? (20 mouse operations; 4-5 mins approx.) and Using a chart (50 mouse and 3 keyboard operations; 10-15 mins approx.).

Scenario machine demonstration condition (SM-Demo)

The tutorial sections on Worksheets and Charts were used. The Introduction was unnecessary because SM-Demo subjects did not need to work the tutorial. Demonstrations were constructed using the Media Tracks 1.0 software application (Farallon Computing, 1990). Demonstrations took the form of a screen recording of an 'expert user' completing a subsection of the tutorial. A HyperCard stack instructed subjects to view the four demonstrations in the order: What is a worksheet? (4 mins 22 secs), Using a worksheet (22 mins 29 secs), What is a chart? (4 mins 09 secs), Using a chart (12 mins 37 secs), and allowed subjects to choose each of these demonstrations, so that progression through the separate sections of the instruction was self-paced. (In practice, subjects did not pause between demonstrations.)

Problem-solving conditions

Subjects in these conditions watched introductory demonstrations and solved problems in the following schedule:

• Watch demonstration 'What is a worksheet?';

• Attempt to solve 6 problems, corresponding to the 'Using a worksheet' tutorial;

• Watch 'What is a chart?' demonstration;

• Attempt to solve 4 problems, corresponding to the 'Using a chart' tutorial.

The content of the tutorial was divided into separate problems by judgement. The complexity of each problem was kept roughly equal.

Subjects attempted problems one at a time, in fixed order (corresponding to the order in the tutorials), by selecting a 'try next problem' button from a HyperCard stack, and turning to the next page in a booklet. For example, Figure 1 is a screen dump of how Chart problem 4 was presented to subjects in the PS-Demo condition.

Selecting each problem opened Excel in the appropriate start state. The start state and corresponding goal state for Chart problem 4 are illustrated by Figures 2 and 3 respectively.

Similarly, Figure 4 illustrates how Worksheet problem 2 was presented, while Figures 5 and 6 indicate the appropriate start and goal states for this particular problem.

Each page in the booklet was also headed by the problem description, and a screen dump of the associated goal state. Figure 7 lists the two complete sets of problems employed. Subjects were instructed to attempt each problem in order, but to abandon any problem that they failed to solve within 5 minutes. However, this second instruction was not enforced, so that some subjects chose to spend rather longer working on some problems.

Figure 1. Screen dump of how chart problem 4 was presented (PS-Demo)

In one of the problem-solving conditions (PS-Manual), subjects had access to the Microsoft Excel User manual throughout problem-solving. The second condition (PS-Demo) provided subjects with two libraries of demonstrations (10 for worksheets, 7 for charts), showing features, procedures and functions associated with spreadsheet use, which could be viewed during problem solving. The demonstrations were accessed from an array of named buttons that appeared on every card from which problems were accessed. While worksheet problems were being done, only worksheet demonstrations were available, and likewise for charts (see Figure 1 for an example). Each demonstration corresponded to a meaningful task unit, defined as tasks that were given separate names in the standard user manual (these names were used for the corresponding buttons). Consequently, full instruction for some problems would require more than one demonstration (of the 10 problems, 5 had 1 relevant demonstration, 3 had 2 relevant demonstrations, 1 had 3 relevant demonstrations and 1 had 4 relevant demonstrations). Figure 8 shows the full set of demonstrations, as they appeared to users on the HyperCard stack.



Figure 2. Start state for chart problem 4

Test material

Subjects in all groups performed the same test once training was completed. The test was a standard task taken from Baxter & Oatley (1991) (see Figure 9). The criteria they used to construct the task were deemed relevant and important for this study. Namely:

• an easy to administer, short, self-contained task needing minimal experimenter instruction;

• covering key features about spreadsheet usage: the ability to manipulate and change both data and formulae by, for example, cutting, copying and pasting, and the ability to present data in the appropriate format, e.g., in chart/table form, using formatting options, etc.;

• having an unambiguous scoring system.

The test comprises a simple balance sheet that subjects must input into a new spreadsheet and a list of calculations and operations (see Figure 9) that subjects must perform. One task, changing the formatting of figures, was omitted from Baxter and Oatley’s version of the task. (Excel’s default setting is the one required to produce the target worksheet).
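To give a concrete flavour of the operations the test exercises (entering data, then building formulae such as a column total), a SUM-style calculation can be modelled in miniature. This is an illustrative sketch only: the cell values, layout and range below are hypothetical and are not the Baxter and Oatley balance sheet.

```python
# Toy model of spreadsheet cells and a SUM-style formula.
# NOTE: the cell contents are HYPOTHETICAL, not the actual test material.

cells = {"B2": 120.0, "B3": 80.0, "B4": 45.0}  # made-up expenditure figures

def sum_range(cells, col, first_row, last_row):
    """Emulate a formula like =SUM(B2:B4): total one column's cells."""
    return sum(cells.get(f"{col}{r}", 0.0)
               for r in range(first_row, last_row + 1))

# Like typing =SUM(B2:B4) into cell B5:
cells["B5"] = sum_range(cells, "B", 2, 4)
print(cells["B5"])  # -> 245.0
```

Cutting, copying and pasting formulae, as required by the test, amount to recomputing such totals over different ranges.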



Figure 3. Goal state for chart problem 4

Procedure

Subjects participated in the experiment one at a time. Each subject was introduced to the experiment, and told they would be learning to use a spreadsheet, and then tested. Subjects were then left to work through their particular training schedule without the experimenter being present. After subjects completed training, the experimenter (who was watching subjects on a remote monitor, with their knowledge) removed any training materials and presented subjects with the test. Subjects were allowed 15 minutes to complete the test, at which point they were stopped. All sessions were video-recorded for later analysis.

Results

Scoring for the test followed the rubric of Baxter and Oatley (1991). Points were allocated to subtasks, giving a maximum score of 22 (see Figure 10).

In order to compute training scores for subjects in the two problem-solving conditions, each problem was divided into stages and criteria based on Baxter and Oatley (1991) (see Figures 11 and 12). The maximum training score for worksheet problems was 15 and for chart problems was 17. Table 1 summarises the training and test data for all subjects.

Figure 4. Screen dump of how worksheet problem 2 was presented (PS-Demo)

Training

Training time for the SM-Demo group was fixed at 35.1 minutes. Training times in the other 3 groups varied according to subjects' behaviour. As Table 1 shows,

Table 1. Training and test results

                          SM       SM-Demo   PS-Demo   PS-Manual
Training
  Mean time (mins)        35.174   35.115    48.031    59.385
  SD                       6.42     0        17.769    14.200
  Mean score (max = 32)    N/A      N/A      21.556    18.222
  SD                       N/A      N/A       4.157     6.320
Test
  Mean time (mins)        15.000   13.572    12.870    11.479
  SD                       0        2.895     2.870     2.929
  Mean score (max = 22)    6.556    7.778    12.00     13.00
  SD                       2.895    6.22      5.916     5.339



Figure 5. Start state for worksheet problem 2

the mean time for the SM group was very closely matched to the SM-Demo time, with a fairly small variance, although two subjects took over 40 minutes. In the problem-solving conditions mean times and variances were much higher. An ANOVA was conducted for the 3 groups that were free to vary. This indicated a significant effect of training regime on training time (F(2,24) = 7.1, p < 0.005). Post hoc pairwise comparisons revealed that the PS-Manual group took significantly longer over training than the SM group, but the PS-Demo group was not significantly different from either.
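The one-way ANOVA used here can be sketched in a few lines of Python. The sketch is illustrative only: the individual training times below are invented (the paper reports only group means and SDs), so only the F-ratio computation, not the numbers, mirrors the reported analysis. Three groups of nine subjects give df = 2 and 24, matching the F(2,24) in the text.

```python
# One-way ANOVA sketch in pure Python. NOTE: the three groups of training
# times below are HYPOTHETICAL; the paper reports only means and SDs.

def f_oneway(*groups):
    """Return the F statistic for a one-way (between-subjects) ANOVA."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-groups sum of squares: group size times squared mean deviation.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Within-groups sum of squares: deviations from each group's own mean.
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1             # k - 1
    df_within = len(all_vals) - len(groups)  # N - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical training times (minutes), 9 subjects per group -> df = (2, 24).
sm        = [30, 33, 35, 36, 34, 41, 43, 32, 31]
ps_demo   = [40, 55, 62, 38, 45, 70, 44, 48, 30]
ps_manual = [55, 60, 75, 48, 66, 52, 58, 63, 57]
print(f_oneway(sm, ps_demo, ps_manual))
```

An F ratio exceeding the critical value for (2, 24) degrees of freedom at the chosen alpha would, as in the paper, indicate a reliable effect of training regime on training time.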

The training scores of subjects in the two problem-solving conditions were not significantly different (F(1,16) = 1.75, p > 0.05). As can be seen from Table 1, subjects were not successful at all training problems (the highest score in the PS-Demo group was 27, and in PS-Manual was 25).

Test
None of the subjects in the SM group completed the test within the allowed time of 15 minutes. Similarly, seven subjects in SM-Demo, five subjects in PS-Demo and three subjects in PS-Manual failed to complete the test. As a consequence, statistical analysis of test times is meaningless.

An analysis of variance revealed there to be a significant difference in test



Figure 6. Goal state for worksheet problem 2

score between the four training groups (F(3,32)=3.3; p<0.05). It was found that subjects who solved problems in training, with either the manual (PS-Manual) or with the library of demonstrations (PS-Demo), scored significantly better than those who underwent the scenario machine (SM) training (both Duncan's p<0.05). No other pairwise comparisons reached significance.

From visual inspection of the means, it appears that both problem-solving conditions achieve very similar levels of success at test, as do both regimes based on the scenario machine. Subjects' scores were pooled into these two groups and a further ANOVA was conducted. The difference between the PS and SM groups was significant (F(1,34)=9.8; p<0.005).
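The pooled comparison above is an ordinary one-way ANOVA on two groups. As a minimal sketch of the statistic involved (the scores below are invented for illustration, not the experiment's data), the F ratio is the between-group mean square over the within-group mean square:

```python
def one_way_anova(*groups):
    """F ratio for a one-way ANOVA: between-group over within-group mean square."""
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total number of observations
    grand = sum(sum(g) for g in groups) / n        # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df1, df2 = k - 1, n - k
    return (ss_between / df1) / (ss_within / df2), df1, df2

# Illustrative (invented) pooled test scores for PS and SM groups:
ps = [12, 13, 12, 14, 13]
sm = [6, 7, 6, 8, 7]
f, df1, df2 = one_way_anova(ps, sm)
print(f, df1, df2)
```

With two groups the F test is equivalent to a two-sample t test (F = t squared); the resulting F is then compared against the F distribution with (df1, df2) degrees of freedom.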

Training-test relationships
In order to examine the relationship between training and testing for each group, several correlations were computed.

In both problem-solving conditions subjects who performed well, in terms of their scores on the training problems, also tended to perform well on the test (PS-Manual: r=0.856; PS-Demo: r=0.752; df=7; crit. r=0.75; p<0.01).
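The coefficients reported here are standard Pearson product-moment correlations. A minimal sketch of the computation (the training-score/test-score pairs below are invented to show a strong positive relationship, and are not taken from the experiment):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

# Invented training-score / test-score pairs:
training = [20, 22, 25, 27, 30]
test = [8, 9, 11, 12, 14]
r = pearson_r(training, test)
print(round(r, 3))
```

With df = n - 2 = 7, an obtained r is significant at p<0.01 only if it exceeds the critical value of 0.75, which is the criterion applied in the text above.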

No significant relationship between training time and test score was found for subjects in the SM group (r=0.266), nor for the two problem-solving groups



Worksheet problems

1. Enter the data, both text and numbers in exactly the same format as

shown in Fig 1: Goal A.

2. Format the figures in column C to match Fig 2: Goal B.

3. Enter a formula to calculate Gross Sales

(i.e. Units Sold x Unit Price) to match Fig 3: Goal C.

4. Format the entire D column to match Fig 4: Goal D.

5. Use an addition function to arrive at a Total Sales figure

shown in Fig 5: Goal E.

6. Alter the number of sandwiches sold to 7200 and title the worksheet in

the format shown in Fig 6: Goal F.

Chart problems

1. Using the appropriate data from the starting spreadsheet create a titled

and labelled chart as shown, Fig 1: Goal A. Save this chart as Chart B.

2. From the File menu open Chart A. Then copy from worksheet 2A the two

sets of flight figures (i.e. from Chicago to.....) to Chart A so that the

resulting chart matches Fig 2: Goal B.

3. Create a chart as shown in Fig 3: Goal C, depicting flight total ticket

sales for the four routes.

4. Change the chart to a percentage pie, with legend as shown in

Fig 4: Goal D.

Figure 7. Worksheet and chart problems

(a) Worksheet demos
Changing column width (20.3)
Changing cell entry (24.2)
Entering data (58.3)
Entering numbers (29.2)
Range selection (11.4)
Column selection (11.8)
Formatting: Font (52.3)
Formatting: Number (31.1)
Formula creation & use (68.4)
Function creation & use (38.5)

(b) Chart demos
Adding a legend (15.9)
Attaching text (34.1)
Creating a chart (45.2)
Multiple selection (44.4)
Changing chart type (34.1)
Adding data: worksheet to chart (44)
Saving & printing a chart (68.7)

Figure 8. The two sets of demonstrations available (duration in seconds)



Standard task: Put the following balance sheet into a new spreadsheet “exactly” as

shown. Then carry out the operations (a) to (d).

15 mins are allowed for this task

Forecast for first quarter

                 January   February   March
Gross profit     10000     12000      11000
Expenses:
  Wages          -         -          -
  Overheads      1000      1000       1000
  Sundries       465       324        254
Net profit       -         -          -

INSTRUCTIONS:

(a) Using a formula calculate wages at 10% of gross profit.

(b) Net profit is equal to gross profit less expenses

(i.e. wages, overheads and sundries).

(c) Increase wages to be 15% of gross profit.

(d) Create a percentage pie chart with a legend, showing wages,

overheads, sundries and net profit.

Figure 9. Standard task
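The arithmetic that the standard task asks of subjects can be sketched as follows. The figures are transcribed from the task sheet (March's gross profit is hard to read in the scan and is taken as 11000), and the function names are ours, introduced purely for illustration; subjects, of course, had to express these calculations as spreadsheet formulas.

```python
# Sketch of the standard task's arithmetic (Figure 9). Values are from the
# task sheet; March's gross profit is reconstructed from a garbled scan.

gross_profit = {"January": 10000, "February": 12000, "March": 11000}
overheads = {"January": 1000, "February": 1000, "March": 1000}
sundries = {"January": 465, "February": 324, "March": 254}

def wages(month, rate=0.10):
    """Instruction (a): wages as a percentage of gross profit (10% initially)."""
    return gross_profit[month] * rate

def net_profit(month, rate=0.10):
    """Instruction (b): gross profit less expenses (wages, overheads, sundries)."""
    return gross_profit[month] - (wages(month, rate)
                                  + overheads[month] + sundries[month])

print(wages("January"))             # 1000.0
print(net_profit("January"))        # 7535.0
print(net_profit("January", 0.15))  # instruction (c), wages at 15%: 7035.0
```

Instruction (d), the pie chart of wages, overheads, sundries and net profit, has no arithmetic of its own; it simply plots the four components computed above for a chosen month.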

(PS-Demo: r=-0.53; PS-Manual: r=0.49). Note that in the PS-Demo group, the (non-significant) trend is for subjects who spend less time in training to perform better at test.

Discussion

In the study of spreadsheet learning reported by Baxter and Oatley (1991), ‘experienced’ and ‘naive’ subjects learned to use the Excel spreadsheet through undirected exploration of the system, supported by the commercial training manual. ‘Naive’ subjects are directly comparable to the subjects in the current experiment, in that they had no previous experience of spreadsheets. Baxter and Oatley allowed these subjects 90 minutes of ‘self-training’. On average they achieved test scores of about 7 (which often included 2 points for reformatting figures in the item that was dropped from our version of the test).

Performance of subjects in all experimental groups compares very favourably with this standard. Training times in all cases were substantially less than 90 minutes, and test scores were close to, or considerably better than 7. This between-experiments comparison lends support to the claim that all training regimes studied above are realistic practical alternatives. Furthermore, the



Stage   Core operation                               Score
A       Entering title                               1
B       Entering column headings                     1
C       Entering row headings                        1
D       Entering figures                             2
E       Entering formula to calculate wages          2
F       Copying wages formula across cells           2
G       Entering formula to calculate net profit     2
H       Copying net profit formula across cells      2
I       Revising wages formula                       2
J       Copying revised wages formula                2
K       Selection of appropriate data for chart      2
L       Creation of chart                            1
M       Changing chart from bar to pie               2
                                           Max = 22

Stage criteria
All stages: Maximum score for completion of operation; score 0 for incorrect or non-completion of operation.
E, G & I: Score 1 for successful entry of formula containing minor errors, e.g. the format/entry technique of the formula is correct though it may contain arithmetical errors that result in erroneous values.
K: Score 1 for selecting inappropriate information for chart, e.g. inclusion of gross profit, omission of or partial selection of information.
F, H & J: Score 1 for errors in completion of copying operation, e.g. formula is copied across only 1 cell instead of two, or for entering formula individually into each cell.
Score 1 for inappropriate format.

Figure 10. Test scoring and criteria

relatively weak performance of Baxter and Oatley's subjects can be seen as confirming the finding of Charney et al. (1990), that unguided exploration is a suboptimal form of active learning.

The comparisons between experimental groups allow tentative conclusions regarding the issues raised in the introduction. First, the null comparison between SM and SM-Demo suggests that one of the major advantages of scenario machine types of instruction is the exposure to system displays. The comparison has limited statistical power of course, and null results must be



Problem   Operation                         Score
1a        Entering data                     1
1b        Changing column width             1
2         Formatting figures                2
3a        Entering formula                  2
3b        Copying formula down column       2
4         Formatting figures                2
5         Entering addition function        2
6a        Changing value of cell entry      1
6b        Adding a title to the worksheet   2
                                 Max = 15

Problem criteria
2, 4 & 6b: Score 1 for inappropriate format.
3a & 5: Score 1 for successful entry of formula containing minor errors, e.g. the format/entry technique of the formula is correct though it may contain arithmetical errors that result in erroneous values.
3b: Score 1 for errors in completion of copying operation, e.g. formula is copied down only 1 cell, or for entering formula individually into each cell.

Figure 11. Worksheet training, scoring and criteria

treated conservatively, but despite these caveats the lack of any evident effect for interacting with the scenario machine over merely watching the corresponding demonstration is quite striking.

Next, consider the advantage of the problem-solving groups over the SM groups. This finding extends the conclusions of Charney et al. (i.e. that working given problems is an effective way to learn spreadsheet tasks) to training regimes in which subjects are not given the correct answer to any problem on which they fail, and in which problem-solving, with free access to an information source, is the only method of instruction.

As in the Charney et al. experiments, a possible confound in this key comparison is that some of the PS subjects actually spend longer in training than do the SM subjects. In the current study, however, only the PS-Manual group spent reliably longer in training. Training times in both PS groups varied a great deal (not surprisingly, perhaps, considering the freedom under which subjects worked). It is worth reiterating that there was no sign of a reliable correlation between training time and test performance for either of the problem-solving groups, and that for the PS-Demo group the trend was for the correlation to be negative (i.e. faster training times led to better test performance). None of these findings eliminate the confound, but together they



Problem   Operation                                 Score
1a        Selection of appropriate data for chart   2
1b        Creating a chart                          1
1c        Entering chart title                      1
1d        Adding a legend                           1
1e        Saving the chart                          1
2a        Selection of appropriate data             2
2b        Copying the data                          1
2c        Pasting the data                          1
2d        Opening the chart                         1
3a        Multiple selection of appropriate data    2
3b        Creating a chart                          1
4a        Changing chart from column to pie         2
4b        Adding a legend                           1
                                         Max = 17

Problem criteria
1a, 2a & 3a: Score 1 point for selection of incorrect data.
4a: Score 1 point for incorrect chart type.

Figure 12. Chart training, scoring and criteria

strongly suggest that increased training time is not the only factor in the success of the PS groups.

Finally, we believe that our study shows that animated demonstrations do have potential uses in the training of computer skills. Indeed, this study suggests two somewhat separate roles for animated demonstrations. The null comparison of the SM-Demo to the SM condition, together with the fact that SM-Demo learners, with a very brief training time, achieved equivalent competence to learners in Baxter and Oatley's (1991) study, confirms the conclusion of Payne et al. (1992) that merely watching an animated demonstration can provide a useful introduction to complex interfaces. The success of the PS-Demo group shows that within a problem-solving context, libraries of animations can be used to work training problems without necessarily reproducing the poor learning outcomes of 'example following' that have been reported in other contexts (e.g. Charney et al., 1990; Palmiter et al., 1991).

References

Apple Computer Inc. (1991) HyperCard Application Program Version B-2.1

Baggett, P. (1979) 'Structurally equivalent stories in movie and text and the effect of the medium on recall' J. Verbal Learning and Verbal Behaviour 18, 333-356

Baggett, P. (1987) 'Learning a procedure from multimedia instructions: the effects of film and practice' Appl. Cognitive Psychol. 1, 183-195

Baxter, I. and Oatley, K. (1991) 'Measuring the learnability of spreadsheets in inexperienced users and those with previous spreadsheet experience' Behav. Inf. Technol. 10, 6, 475-490

Carroll, J.M. (1984) 'Minimalist training' Datamation

Carroll, J.M. (1990) The Nurnberg Funnel: Designing Minimalist Instruction for Practical Computer Skill MIT Press

Carroll, J.M. and Carrithers, C. (1984) 'Blocking learner error states in a training wheels system' Human Factors 26, 377-389

Carroll, J.M. and Kay, D.S. (1988) 'Prompting, feedback & error correction in the design of a scenario machine' Int. J. Man-Mach. Studies 28, 11-27

Charney, D.H. and Reder, L.M. (1986) 'Designing interactive tutorials for computer users' Human-Computer Interaction 2, 297-342

Charney, D.H., Reder, L.M. and Kusbit, G.W. (1990) 'Goal setting and procedure selection in acquiring computer skills: a comparison of tutorials, problem-solving & learner exploration' Cognition & Instruction 7, 4, 323-342

Collins, A., Brown, J.S. and Newman, S. (1987) 'Cognitive apprenticeship: teaching the craft of reading, writing, and mathematics' in Resnick, L.B. (ed.) Cognition and Instruction Lawrence Erlbaum

Cullingford, R.E., Krueger, M.W., Selfridge, M. and Bienkowski, M.A. (1982) 'Automated explanations as a component of a computer-aided design system' IEEE Trans. Systems, Man and Cybernetics 12, 168-181

Duisberg, R.A. (1988) 'Animation using temporal constraints: an overview of the animus system' Human-Computer Interaction 3, 275-307

Farallon Computing Inc. (1990) MediaTracks Application Program

Grignetti, M.C., Hausman, C. and Gould, L. (1975) 'An "intelligent" on-line assistant and tutor: NLS-SCHOLAR' Proc. 1975 Computer Conf. AFIPS Press, 775-781

Lewis, C., Hair, D.C. and Schonenberg, V. (1989) 'Generalization, consistency and control' Proc. CHI 1989 ACM Press, 1-5

Microsoft Corporation (1989) Microsoft Excel Complete Spreadsheet with Business Graphics and Database Version 2.2 (user manual)

Palmiter, S., Elkerton, J. and Baggett, P. (1991) 'Animated demonstrations versus written instructions for learning procedural tasks: a preliminary investigation' Int. J. Man-Mach. Studies 34, 687-701

Patrick, J. (1992) Training: Research and Practice Academic Press

Payne, S.J., Chesworth, L. and Hill, F. (1992) 'Animated demonstrations for exploratory learners' Interacting with Computers 4, 1, 3-22

Singley, M.K., Carroll, J.M. and Alpert, S.R. (1991) 'Psychological design rationale for an intelligent tutoring system for Smalltalk' in Koenemann-Belliveau, J., Moher, T.G. and Robertson, S.P. (eds) Empirical Studies of Programmers: Fourth Workshop Ablex, 106-209

Waterson, P. and O'Malley, C. (1992) 'Using animated demonstrations to teach graphics skills' Proc. HCI'92 Elsevier
