


The ESP Journal Vol. 1, No. 2, Spring 1981

Accountability in ESP Programs* Ronald Mackay

A number of factors are ensuring that the theory and practice of specific purpose language teaching continue to develop and evolve. One of the major factors is economic in origin, bringing with it the need for ESP program designers to answer to financing bodies as to the effectiveness of their programs. Enthusiasm and positive claims may not be enough to ensure the continued support for existing programs and the launching of new ones by budget-troubled administrators. Evidence of success may become the end product required of the design team, as opposed to merely the prepared materials and the verbal assurance that they are superior to what preceded them.

The demand for accountability requires that the ESP specialist add evaluation techniques to his repertoire of skills. It is important that the evaluator does not confuse research and evaluation. The former tests inferences based on observation and phrased as hypotheses; the latter, i.e., evaluation, has as its purpose the provision of those in authority with information which can be used in making decisions about improving or modifying the program. A case study demonstrates an approach to determining the effectiveness of a specific ESP-type program and provides a comprehensive framework within which practicable ESP evaluations can be conceived and carried out.

Introduction

To attempt a coherent picture of special purpose language teaching as it is developing today is to encounter a paradox. On the one hand, it would appear to be going from strength to strength. New course materials abound. New journals reporting ESP/EST activities have appeared. New avenues of research have been opened up. Entire sections of conferences are devoted to special purpose language teaching and new ESP programs. On the other hand, and from a different perspective, ESP would seem to be encountering certain difficulties. Many new materials being published today are insignificantly different from some which appeared ten years ago. Few and far between are teacher-training courses which include an ESP component. (Indeed, some major ESP projects prefer to hire teachers of science who have some ELT experience or training.) Discourse analysts appear to be making relatively little headway in providing workable models for use with written texts. ESP-type tests are rare. And it may even be the case that subject-based ESP courses are declining in favour of study-skills type

*This is a revised version of a paper presented at the first TESOL Summer Institute, held in 1979 at the University of California, Los Angeles.



courses, for economic reasons if not out of disillusionment with ESP.

How can this paradox be explained? How is it that a field of activity can appear to be so innovative and successful from one standpoint and yet to be experiencing troubles from another?

One explanation lies in the fact that the swift and early improvements and developments in ESL in the late 60’s and early 70’s (e.g., second language acquisition studies, advances in testing procedures, the relevance of sociolinguistic studies, and syllabus planning techniques) generated expectations for further improvements equally rapid and equally innovative. These further improvements, however, have not been forthcoming as rapidly or with as little pain as had been anticipated. And this leads to the second and complementary reason for the paradox. I believe that applied linguistics has expanded so fast over the last couple of decades, employing the findings of linguistics, psychology, sociology and education to help elucidate problems facing ELT, including ESP, that it has caught up with these so-called “underlying disciplines”.

For the problems that now face the applied linguist working in ESP, there is often no ready-researched body of information upon which to draw. On the contrary, we applied linguists now often find, on turning to our colleagues in psychology, sociology, education or linguistics for information we need to answer a question, that the very same question is being posed by these colleagues and that they are far from finding a solution. There is no longer a large reserve of information on which to draw. One scholar epitomized the situation when he said that “studying language in the classroom is not really applied linguistics; it is really basic research. Progress in understanding language in the classroom is progress in linguistic theory.” This assertion might not inaccurately be extended to cover very many of the activities which are loosely grouped together under the title “applied linguistics”. For example, it might be said that studying the structure of scientific texts, developing ways of evaluating ESP programs, creating tests for students in training, determining the difficulties encountered by learners, organizing curricula, or virtually any of the other activities in which an ESP specialist might find himself engaged necessarily involves him in empirical research. Even new paradigms for research are being created because existing ones do not seem to provide adequate means for answering some of our most pressing questions.

The euphoria of the innovative phase in ESP programs has died down, leaving practitioners working on ironing out some of the very real difficulties encountered at every phase of ESP course planning and implementation. At the same time those in positions where decisions have to be made regarding the future of such projects, have quietly but insistently been demanding answers to such questions as

- Do these ESP/EST programs work?
- Are they more effective than previous programs aimed at general language proficiency?
- If so, in what ways are they more effective?
- Are there any ways in which they are less effective?
- Can the expense be justified?
- Should we spend money on continuous quality control of ESP courses?
- Is there any evidence that syllabus planners are performing at least as well now, in terms of serving their clients’ needs, as they were prior to the ESP epoch?
- Are there any unintended or unforeseen outcomes resulting from the use of any given ESP program?

It is within the context of the current search for answers to these and related questions that the previously unquestioned principles of ESP are being scrutinized, and it is this scrutiny which both highlights the problems facing the field and provides continuing impetus for their solution.

In trying to determine the directions in which the field of ESP is developing and the nature of the obstacles which are facing it, we will concentrate on this issue of accountability, an issue which has been raised by those who are responsible for the funding of new ESP projects and for the continuation of support for already established ones. Accountability requires appropriate instruments for measuring the effectiveness of instructional programs; this will be dealt with under the heading of Program Evaluation. The process of evaluation of a program in turn highlights the need for greater precision in the framing of the teaching and learning objectives; this question is dealt with under the heading of Specifying Course Objectives.

Program Evaluation

By the mid-1970’s there was a large body of literature dealing with what ESP was (Strevens 1977, MacMillan 1971, Widdowson 1971); how it differed from general purpose language teaching; and guidelines for the analysis of scientific and technical discourse (Selinker, Trimble, and Vroman 1972, Allen and Widdowson 1978, Mackay and Mountford 1978), for syllabus development (Jones and Roe 1976), and for the creation and description of exercise types (Allen and Widdowson 1978, Mackay and Klassen 1975). However, seldom mentioned were either empirical evaluation studies of ESP programs or descriptive procedures for going about conducting evaluations of these programs. In other words, although the creative phase of ESP program production was well-documented, reports of the effectiveness of these programs were virtually nonexistent. Recently, however, a useful guide to evaluation has appeared (Bachman 1981).

What may have contributed to the timidity exhibited regarding program evaluation were the great comparative studies carried out in the USA and the UK to compare the effectiveness of different approaches to the teaching of ESL (e.g., Scherer and Wertheimer 1964, Smith 1971). The manifest lack of success of these studies in determining the relative effectiveness of the outcomes of different programs appears to have had the effect of convincing applied linguists that they should steer clear of such complicated and difficult-to-control activities. In fact, an extensive search in 1975 for guidance on conducting an evaluation of a special purpose second language program produced only three papers of any real relevance to our needs. One was the case study of the evaluation of a nonvocational home study Russian language course for native speakers of English (Rawson-Jones 1973). The second was a paper by Pilliner (1974)1 based not on first-hand experience of the appraisal of ESP programs, but on Scriven’s distinction between formative and summative evaluation (Scriven 1967) and Stake’s model for the evaluation of any educational curriculum (Stake 1967). The third reference directly related to second language program appraisal turned up by the search was the very extensive and detailed work by Hayes, Lambert and Tucker (1967), which attempts an exhaustive inventory of factors which are believed to influence the effectiveness of any second language program.

None of these papers, however, provides a well-wrought paradigm which might be adopted for the purpose of ESP program evaluation, and our attempts at that time to study the effectiveness of our new program in the National Autonomous University of Mexico were somewhat primitive (Mackay 1981). However, very recently we were again in the position of being required to conduct an evaluation of a special purpose program and I would like to illustrate the more highly developed framework which we devised for that purpose.2

A large privately owned ESP language institute known as Language Orientation on Technology, Incorporated (LOOT for short) had developed an occupationally oriented program for a chain of petrochemical plants on the Bindi Baji oil fields.3 These fields spread over territories administered by the governments of three separate countries. The plants would eventually employ 2,300 technical workers in equal proportions from the three countries concerned, each of which has a very different national language. For that reason, and because many of the key personnel for at least the first five years would be English speaking, it was decided that the working language of the plants would be English. Government agencies in each of the countries were responsible for financing the training of personnel in North America and were the intermediaries between LOOT and the petrochemical plants in all administrative matters concerning the students.

1It is interesting to note here that this paper, presented by Albert Pilliner of Edinburgh University in 1974, had been invited by George Perren, Director of the Centre for Information on Language Teaching and Research, as one of the contributions to a CILTR conference on the teaching of languages to adults for special purposes. Perren seems to have unerringly identified or predicted some of the major problems and growth areas of second language teaching for the last two decades.

2I am indebted to M. Dosquet’s MA thesis, Un Modèle d’Évaluation de Programmes d’Études de Langues qui sont dans une Phase de Mise à l’Essai, TESL Centre, Concordia University, Montreal, 1981, for the perspective employed in this evaluation.

3While it is based on a program evaluation which was actually carried out, the case study as reported here is, of course, fictitious.

The technical English instruction was given by LOOT teachers in a large North American petrochemical plant where the students were being trained in technical matters directly related to their work functions in the Bindi Baji petrochemical complex. The mandate given to the team of evaluators by LOOT, Inc. was to appraise the entire operation, and all doors were opened to facilitate the speedy completion of the task. Two months were allotted for the conducting of the appraisal and the writing of the final report. The evaluation was conducted during the last eight weeks of a five-month course.

The first task of our team was to break down the object of appraisal, i.e., the entire project, into relatively discrete component parts. These appear in Figure 1 in the columns numbered 1 to 13. We will examine these one by one.

1. LOOT, Inc. Policy Rationale. The general policy adopted by LOOT, Inc. in its attempts to provide the requested service to its clients. This was little more than a general rationale expressed at board level for the move from general language teaching into EST. However, this rationale was the framework within which everything else had to be accommodated, and so we felt that it was necessary to examine its adequacy.

2. Administrative Procedures. The administrative procedures adopted at all levels made up the second component. These ranged from the creation of a steering committee to provide support to the EST program director, to the procedures adopted for the reporting of grades to the client governments. Again, since decisions at the administrative level affect what can happen at all other levels, it was considered necessary to examine these.

3. LOOT EST Program Objectives. These included the objectives of the program as stated for consumption by the funding governments, and the intermediate objectives (“facilitating objectives”, as they were called) and terminal objectives, which constituted part of the package supplied to the teachers.

4. Materials. The materials included all the texts, tapes, etc., used in the ESL classrooms.

5. Techniques. The classroom procedures employed by the teachers.

6. Test Instruments. All testing procedures employed in the program - placement tests, progress tests, and final tests - were examined.

7. Student Product. The scores obtained by the students on each of the tests, and other overt behaviour attributable to the course of instruction, such as observable improvement in communication with the technical subject teachers.

8. Student Attitude and Behaviour. Questions associated with this component probed the students’ feelings about and reactions to the course of instruction, the teachers, etc.

FIGURE 1. EVALUATION MATRIX SHOWING PROGRAM COMPONENTS (columns 1-13) AND DATA SOURCES (rows 1-10). Crosses in cells indicate that data was sought on that component from that particular source.

9. LOOT Teacher: Appropriateness for Program. An examination of the teacher’s qualifications, previous experience, and in-class performance in an attempt to determine if the most appropriate kind of teacher was being employed to staff the program.

10. LOOT Teacher: Integration into Training Program. The ESL teacher’s role in the overall training program - technical as well as language - which was being offered to the student. One of the purposes was to examine the extent to which there was integration of the language teacher into what was, basically, a technical instructional environment and how much and what kind of cooperation and interaction there was between the ESL teacher and the technical instructor.

11. LOOT Teacher-Training Sessions. In an attempt to assess the efficacy of the in-service seminars provided for the ESL teachers, the content, duration, and frequency of the training sessions were examined.

12. Physical Conditions and Equipment. Here the focus was on the environment in which the ESL instruction had to take place (noisy technical workshops, classrooms, and instrument rooms), and on the educational technology available to the teachers, such as duplication facilities, audiovisual equipment and the like.

13. LOOT Classroom Interaction. This component was different in nature from all the previous ones in that it focussed on the dynamics of the teaching/learning situation during actual instruction, involving the interaction between teacher, student, and materials, and the results of that interaction in terms of the language being produced in class.

We felt that such a framework avoided the pitfall into which evaluation studies frequently fall, namely, to focus exclusively on student product. A given type or quality of student product is, no doubt, the major goal of all ESL instructional programs. That product, however, is the result of the interaction of very many factors - the factors which we feel that our “components” dimension of the appraisal framework encompassed. A thorough evaluation should describe the strengths and weaknesses of all the components of any given program, not merely record the performance of the recipient of the instruction.

In rows 1-10 of the evaluation matrix are listed all the sources from which data was sought. A cross in the cells where the horizontal and vertical lines meet indicates the source from which data on any particular component was sought. For example, information on the instructional materials was sought from the EST program director, the steering committee, English-speaking plant foremen, representatives of the government agencies sponsoring the program, the technical department heads in sections where LOOT classes were given, the LOOT teachers, the students, and technical subject teachers. To take another component, information on the extent to which LOOT teachers integrated into the instructional environment of the petrochemical plant was sought only from the government agencies, the technical department heads, the LOOT teachers themselves, and the technical subject teachers. In other words, information on any given component of the program was gathered from all appropriate sources, and every source was consulted about each of the components by which it was directly and/or indirectly affected. Some of the data were quantitative, e.g., the test scores constituting Component 7, Student Product. The rest of the data were qualitative.
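The matrix just described amounts to a simple two-way lookup between components and sources. As a minimal sketch, it might be represented as follows; the component and source names are taken from the text, but only the two rows of crosses spelled out above are reproduced, so a real matrix would carry all thirteen components and ten sources from Figure 1.

```python
# Sketch of the Figure 1 evaluation matrix as a lookup structure.
# Only the two component rows described in the text are filled in here;
# the remaining cells are illustrative omissions, not the actual matrix.

EVALUATION_MATRIX = {
    # component name -> set of sources from which data on it was sought
    "Materials": {
        "EST program director", "Steering committee",
        "English-speaking plant foremen", "Government agencies",
        "Technical department heads", "LOOT teachers",
        "Students", "Technical subject teachers",
    },
    "LOOT Teacher: Integration into Training Program": {
        "Government agencies", "Technical department heads",
        "LOOT teachers", "Technical subject teachers",
    },
}

def sources_for(component):
    """All sources consulted about one program component (one column)."""
    return sorted(EVALUATION_MATRIX[component])

def components_for(source):
    """All components a given source was consulted about (one row)."""
    return sorted(c for c, srcs in EVALUATION_MATRIX.items()
                  if source in srcs)
```

Reading the structure both ways mirrors how the matrix was used: column-wise to plan data collection for a component, row-wise to assemble everything a single source should be asked about.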

In designing our evaluation matrix, we took into consideration the need not only to try to measure and interpret the effects of the program on the students as well as on all those teachers who might be affected by it, but also the need to examine four other aspects of the LOOT project. These were (i) the planning decisions taken to determine the general and specific objectives of the program, (ii) the selection of all the instructional materials used, (iii) the procedures employed to attain the objectives of the program, and (iv) the administrative support provided to the project as a whole. In so doing, we felt that we would provide adequate information to LOOT, Inc. on which to base decisions about any aspect of their entire project. In other words, the explicit purpose of the appraisal was not so much to assert merely that the project was satisfactory or unsatisfactory, but to provide those responsible for its future as detailed an account as possible of every factor which might contribute to the project’s success or lack of success. They would then be in a position to make decisions, which might affect any aspect of the program, on the basis of comprehensive and objectively gathered information.

The evaluation matrix completed, the next task was to develop the data-gathering instruments. In addition to the LOOT-designed tests to provide information on Component 7, Student Product, there were three types of instrument. The first consisted of a pool of approximately 150 questions written to gather information on each of the project’s components, as given in columns 1-13 in Figure 1. The pool of questions was selectively drawn upon in order to produce questionnaires appropriate for use with each of the sources consulted, as listed in rows 1-10 in Figure 1. The advantage of this approach lies in the direct comparisons which can be made between the responses of one source and another on the same question. For example, if the same question dealing with the appropriateness of certain of the instructional materials used was posed to the LOOT teachers and the plant foremen, the students and the technical subject teachers, their responses could be compared directly. Differently worded questions to each of these sources about any one component would also have resulted in four pieces of information, but no direct comparison would have been possible.
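The question-pool procedure above can be sketched in a few lines: each question in the pool is tagged with the component it probes and the sources it is addressed to, questionnaires are then assembled per source, and identical wording makes answers directly comparable across sources. The two sample questions and their tags below are invented for illustration; only the overall mechanism comes from the text.

```python
# Sketch of the question-pool approach: one pool, per-source
# questionnaires, direct cross-source comparison on shared questions.
# The specific questions and tags here are hypothetical examples.

QUESTION_POOL = [
    {"id": "Q12",
     "component": "Materials",
     "text": "Are the instructional materials appropriate to the "
             "students' future work tasks?",
     "sources": {"LOOT teachers", "Plant foremen",
                 "Students", "Technical subject teachers"}},
    {"id": "Q47",
     "component": "Administrative Procedures",
     "text": "Did the grade-reporting procedures cause delays?",
     "sources": {"EST program director", "Government agencies"}},
]

def questionnaire_for(source):
    """Draw from the pool every question addressed to this source."""
    return [q for q in QUESTION_POOL if source in q["sources"]]

def compare_responses(question_id, responses):
    """Line up answers to one identically worded question across sources.
    `responses` maps each source to a dict of {question_id: answer}."""
    return {src: answers[question_id]
            for src, answers in responses.items()
            if question_id in answers}
```

Because every source answering Q12 saw exactly the same wording, `compare_responses("Q12", ...)` yields answers that can be set side by side; differently worded questions would have produced the same number of answers but no such comparison.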

The second instrument consisted of a 30-minute videotape made in each of the 30 LOOT teachers’ classes. These represented random samples of the kind of instruction the students were exposed to, and the kinds of activities performed and the kinds of language generated in the classes. Up to four of these were shown to the Bindi Baji plant foremen and the technical subject teachers immediately prior to their being required to complete their questionnaires. The purpose behind this procedure was to elicit comments on the appropriateness of the instructional tasks and the language used in class by teachers and students.

Thirdly, observation sheets for the LOOT EST classes were prepared. These were employed by observers, themselves experienced teachers, who sat in on LOOT classes in pairs. In this way, all teaching and learning activities were observed by two individuals at different times.

The preparation of the classroom observation sheets is a delicate matter. Examination of existing observation sheets used by departments of education, teacher-training schools, and university departments convinced us of their value but made us determined to create our own. Classroom observation is essential to furnish the evaluator with information about assumed important process variables that characterize the work in the classroom. It allows us to see how time is divided between teacher and student talk, the extent to which students actively participate in the class, which elements of the materials and techniques increased participation and interest, whether all or only some of the students were active, etc. Observational categories appropriate to the type of program being taught are essential and no existing observation sheets were entirely relevant. To create our observation sheets, our procedure was to interview all LOOT teachers prior to their classes being observed, in order to obtain information about their perceptions of the nature and purpose of their classroom activity. On the basis of the information gathered, a classroom observation sheet was drawn up to collect data under the following headings, in a checklist format:

1. Lesson Plans and Lesson Objectives
2. Group Homogeneity
3. Classroom Organization
4. Teaching and Learning Activities
5. Teacher Feedback on Student Performance
6. Teacher Attention to Students
7. Rapport between Teacher and Student
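An observation sheet under these seven headings, filled in independently by each of the two observers who sat in on a class, can be sketched as a small record structure. The headings are those listed above; the record fields and the idea of keying sheets by class and observer are assumptions made for illustration.

```python
# Sketch of one classroom observation sheet in checklist format.
# Each class was observed by two people at different times, so each
# (class, observer) pair gets its own sheet. Field names are assumed.

HEADINGS = [
    "Lesson Plans and Lesson Objectives",
    "Group Homogeneity",
    "Classroom Organization",
    "Teaching and Learning Activities",
    "Teacher Feedback on Student Performance",
    "Teacher Attention to Students",
    "Rapport between Teacher and Student",
]

def blank_sheet(class_id, observer):
    """One observation sheet: every heading starts unrecorded (None)."""
    return {"class": class_id, "observer": observer,
            "checks": {h: None for h in HEADINGS}}

# Example: one observer's sheet for a hypothetical class "EST-07".
sheet = blank_sheet("EST-07", "observer-A")
sheet["checks"]["Group Homogeneity"] = True  # observed and checked off
```

Keeping the two observers' sheets separate until analysis lets their judgments of the same class be compared, in the same spirit as the cross-source question comparisons.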

That, then, is a brief outline of the components of the program, the sources consulted, and the instruments used in order to carry out the appraisal.

It is the use of this type of very comprehensive framework - the crosses in the cells indicate data gathered - which allowed us to go some way towards answering the kind of question which we, as appraisers, had refined from those which the administrators had been asking:

1. Has the ESP program attained its stated objectives?
2. Has it been helped or hampered by the administrative procedures in the process?
3. Do the students display the kinds of behavioural (including linguistic) changes which were planned?
4. Are the teachers performing as they were intended to perform?
5. Are the instructional materials and the techniques employed generating the kinds of discourse they were designed to generate?
6. Can the students perform effectively in their work or study situation, in the real world, after completing the course of instruction?
7. Are the clients (in this case, the governments of the three oil-producing countries) satisfied?

It should be noted at this point that we had opted for a primarily qualitative evaluation. Our data, other than student scores on tests, were judgmental; there was no attempt at any sophisticated statistical analysis, and we were able to point to trends and assumed relationships rather than identify clear causal relationships. I think that such an approach can be defended on the grounds that, had we limited the appraisal to admit quantitative data only, we would have ignored many important variables, such as the quality of the administrative procedures, the program objectives, the techniques employed, etc. Well-created and carefully worded questionnaires and intelligent judgments, in this case, produced a broader and more comprehensive picture of the LOOT program than would have been possible had we adopted a so-called “hard” evaluation model. However, it is important to mention that the procedures we employed to gather the qualitative data were as objective and systematic as we could make them. A “soft” evaluation need imply no less rigour in its procedures than a “hard” evaluation. Evaluations employing quantitative techniques depend for their excellence upon the investigator’s decisions as to which variables to control, which to manipulate, and which to ignore. Those involving qualitative techniques rely upon the adequacy and validity of the investigator’s description and the depth of analysis employed. In other words, both types involve a similar level of decision making, and it is at these decision-making points that error can creep in. The subsequent objectivity of the statistical design, measurement, and data analysis in the quantitative type of study may only lend a spurious authority to the results if prior decisions were inadequate or invalid.

The net result of the demand for accountability for ESP programs is twofold. First of all, there is a strong reinforcement of the consciousness of the utilitarian principles upon which ESP programs were originally founded. The fact that there is more to the program than classroom activity stands out. Secondly, it is forcing a reappraisal of the kinds of procedures we have been using to define needs, specify objectives, prepare materials, develop techniques, and measure attainment. This stimulus is necessary if we are to avoid confusing the day-to-day routines we follow in implementing ESP programs with purpose. All of us who have had responsibility for running ESP programs know the very large amount of energy that has to be devoted just to keep existing programs going. Sometimes there is just not the time to enquire into the purpose of these activities once the excitement of the creative stage is over. The temptation is great to define success by whether we maintain the impetus of existing programs, without regard to circumstances such as changing student needs or teacher proficiency. The demand for evaluation brings to the fore the need to maintain at a conscious level our sense of long-range purpose and to constantly enquire into the meaning and impact of the associated activities. In particular, the need for more precision in the framing of the objectives of ESP courses is brought to attention and this is the second area we will deal with.

Specifying Course Objectives

Special purpose language programs originally differed from general language programs along one dimension only: that is, the subject matter used to exemplify the linguistic content of the syllabus. Whereas a few banal topics and situations such as railway stations, cafeterias, ballgames, and the like were entirely lacking, early ESP courses used the principles of flight or the life story of Marie Curie to teach exactly the same grammatical points in virtually the same sequence.

Gradually, additional dimensions were introduced to specify objectives. In addition to subject matter, finer distinctions were drawn so that science, for example, might be broken down into its component areas, which might themselves be further subdivided. The addresser-addressee relationship was specified. The so-called four basic language skills were broken down into functional subskills such as summarizing or abstracting or note-taking, as opposed to merely writing, and talking on the telephone or delivering a paper, as opposed to merely talking.

With the increase in the dimensions available to specify language-related tasks, it has become a feature of ESP courses to define goals in terms of the role relationships of participants, the setting, the domain, the medium, the skills, and the language functions to be performed and/or understood, instead of defining goals solely in grammatical terms. However, instead of these additional dimensions being employed to complement grammatically defined goals, in some cases any indication of the specific linguistic features associated with the communicative task aimed at is totally lacking. Indeed, the greatest problem facing many ESP materials developers may have become the specification of the linguistic elements to be included for the realization of the communicative activity cited as the objective. A kind of reversal of the original problem has been taking place. Whereas early in the development of ESP teaching dissatisfaction had arisen from the fact that syllabuses were described in virtually grammatical terms alone, it would seem that the difficulty has now become the selection of the code elements for the apparently more easily selected communicative acts. Selinker had identified the need to write communicative and grammatical descriptions as early as 1969: the need to write a rhetorical grammar to map the linguistic elements onto the expository techniques of science, is how he described it then. But ten years later we are still without a rhetorical grammar for any specific field of language use, and apparently none is in sight.

One of the ways in which this state of affairs has been handled by some ESP course designers has been simply to avoid the question altogether by taking goal specification to the level of behavioural activity and no further. ESP syllabuses based on this approach consist of lists of skills, acts, and functions but lack an inventory of the code elements entirely. To support this approach, arguments are invoked associated with current interest in the noted discrepancies between the input a learner receives and his own performance, the distinction between acquiring and learning, linguistic competence and communicative competence, and the importance of authentic materials. Hence, ESP teachers have sometimes faced students with little more than a handful of articles cut from newspapers and journals, and goals or objectives expressed as generally as:

to teach the student to summarize the main ideas in a text or

the student will learn to provide information about the quality and price of a product.

Occasionally the syllabus designer has left the specification of the linguistic elements, and the sequence in which they will be taught, entirely to the teacher’s discretion. Sometimes the results have been satisfactory, but not unnaturally, this has sometimes caused distress on the part of both the teacher and the students. The teacher often finds it difficult to know where he is going with the syllabus, and the student finds it hard to know where he has been. Establishing clear divisions between proficiency levels within a program becomes a problem. If the student eventually must take a proficiency test such as the TOEFL, Michigan, CELT, or other hurdle used for admission to university or the professions, the dissatisfaction is exacerbated.

In the case of the LOOT program previously referred to, the instructional objectives had not, in all cases, included inventories of the linguistic items which could appropriately be used to realize the functions/speech acts being taught. At the regular in-service training sessions held four hours per week, however, the teachers had voiced their dissatisfaction. The lack was remedied, but it is nevertheless interesting to note the various strategies the teachers adopted to deal with inadequately stated objectives. Some tried by themselves to identify the code elements appropriate to the speech acts: they either consulted with the technical teachers (even to the extent of sitting in on some of the classes), or they sought references such as Leech and Svartvik’s A Communicative Grammar of English (1975), the Council of Europe’s Threshold Level (van Ek 1975), or an even more comprehensive reference book by Alexander, Stannard Allen, Close, and O’Neill, English Grammatical Structure (1975). Others communicated their lack of confidence in a vaguely formulated functional-type syllabus to the students and created an atmosphere in which surreptitious abandonment of the functional materials in favour of a very traditional grammatical syllabus became their solution. Still others adopted a “grammar-is-a-dirty-word-anyway” approach and categorically refused to handle discrete points of grammar, taking a highly idiosyncratic view of communicative competence in which conformity to grammatical rules became almost entirely unimportant compared with the student’s success in making the teacher understand, even though most of his fellow students could not. This latter approach tended to maximize fluency and minimize the demand for accuracy. In the case of one class, a “classroom dialect” began to emerge, comprehensible to the teacher and students in that class, but unintelligible to the technical subject teachers and the other students.

The demand for accountability is tending to underline the necessity of being able to specify, with some precision, the level of linguistic sophistication which the learner is to aim at in the performance of language-related tasks. Attempting to follow an ESP syllabus without any reference at all to linguistic accuracy and level of mastery of appropriate code elements can be a highly frustrating experience for ESL teachers and for their students. Indeed, such syllabi are not unlike those prepared for L1 learners. However, whereas a general level of linguistic competence is assumed to exist in the case of L1 learners, no such body of knowledge or “expectancy grammar” exists in the case of L2 learners.

Probably the most rigorous attempt so far to provide a precise set of procedures for the design of ESP courses is that offered by Munby in his Communicative Syllabus Design (1978). However, as in any rigorous activity, Munby’s approach requires of the syllabus designer time, patience, effort, and a level of informedness which may dull the enthusiasm of the planner used to making decisions without first ensuring the availability of an adequate information base.

There is, then, a reappraisal of teaching objectives being undertaken by many ESP course designers. More rigorous attempts are being made to provide, in addition to topic, function, and speech act, the appropriate linguistic encoding for functionally defined syllabuses, on the basis of either informed intuition, observation of the target discourse, or selection from one of the semantic-type grammars now available.

Conclusion

We have presented here a picture of the development of the field of ESP in which the demand for accountability is seen to be an important determining factor. Responding to this demand, which has largely come from outside the field itself, we are forced to take a much closer and more critical look at the results achieved by ESP programs. This scrutiny will greatly benefit and strengthen our programs. It will necessarily lead to the development of better instruments of evaluation and to more precise formulation of program objectives, especially the formal linguistic objectives. Adequate solutions to the difficulties currently being encountered in these areas will require further basic research. Applied linguists who are interested in ESP may have to accept the responsibility for undertaking much of this research themselves; however, perhaps significant insights will be gained through collaboration with colleagues in related disciplines, who are themselves only beginning to research these same questions.

REFERENCES

Alexander, L. G., W. Stannard Allen, R. A. Close, and R. J. O’Neill. 1975. English Grammatical Structure. London: Longman Group Limited.

Allen, J. P. B. and H. G. Widdowson. 1978. Teaching the Communicative Use of English. English for Specific Purposes, 56-77. R. Mackay and A. Mountford (Eds.). London: Longman Group Limited.

Bachman, L. F. 1981. Formative Evaluation in Specific Purpose Program Development. Languages for Specific Purposes: Program Design and Evaluation. R. Mackay and J. Palmer (Eds.). Rowley, Massachusetts: Newbury House Publishers.

Bosquet, Maryse. 1981. Un Modèle d’Évaluation de Programmes d’Études de Langues qui sont dans une Phase de Mise à l’Essai. Unpublished MA thesis, Concordia University, Montreal.

Hayes, Alfred S., Wallace E. Lambert, and G. Richard Tucker. 1967. Evaluation of Foreign Language Teaching. Foreign Language Annals 1,1:22-44.

Jones, K. and P. Roe. 1976. Problems in Designing Programmes in English for Science and Technology. Teaching English for Science and Technology, 18-35. J. C. Richards (Ed.). Singapore: Singapore University Press.

Leech, G. and J. Svartvik. 1975. A Communicative Grammar of English. London: Longman Group Limited.

Macmillan, M. 1971. Teaching English to Scientists of Other Languages: Sense or Sensibility? Science and Technology in a Second Language. CILT Reports and Papers No. 7, 19-30. G. Perren (Ed.). London: Centre for Information on Language Teaching and Research.

Mackay, R. 1981. Developing a Reading Curriculum for ESP. English for Academic and Technical Purposes: Studies in Honor of Louis Trimble, 134-144. L. Selinker, E. Tarone, and V. Hanzeli (Eds.). Rowley, Massachusetts: Newbury House Publishers.

and B. Klassen. 1975. Practical Steps towards the Classification of Reading and Comprehension Exercise Types. Edutec, Sección Especial, Lenguas para Objetivos Específicos No. 2 (July 1975), 44-56.

and A. Mountford (Eds.). 1978. English for Specific Purposes: A Case Study Approach. London: Longman Group Limited.


Munby, J. 1978. Communicative Syllabus Design. Cambridge: Cambridge University Press.

Pilliner, A. E. G. 1974. The Evaluation of Programmes. Teaching Languages to Adults for Special Purposes. CILT Reports and Papers No. 11, 39-46. G. Perren (Ed.). London: Centre for Information on Language Teaching and Research.

Rawson-Jones, K. 1973. Problems of Developing and Evaluating a Home-Study Language Course. Modern Language Teaching to Adults: Language for Special Purposes, 71-86. M. de Grève, M. Gorosch, C. G. Sandulescu, and F. van Passel (Eds.). Paris: Didier.

Scherer, G. A. C. and Wertheimer, M. 1964. A Psycholinguistic Experiment in Foreign-Language Teaching. New York: McGraw-Hill.

Scriven, M. 1967. The Methodology of Evaluation. Perspectives in Curriculum Evaluation. AERA Monograph Series on Curriculum Development No. 1, 39-89. R. E. Stake (Ed.). Chicago: Rand McNally.

Selinker, L., L. Trimble, and R. Vroman. 1972. Working Papers in English for Science and Technology. Seattle: Office of Engineering Research, University of Washington.

Smith, Philip D., Jr. 1971. A Comparison of the Cognitive and Audio-Lingual Approaches to Foreign Language Instruction: The Pennsylvania Foreign Language Project. Philadelphia, Pennsylvania: Center for Curriculum Development.

Stake, R. E. 1967. The Countenance of Educational Evaluation. Teachers College Record 68:523-540.

Strevens, P. 1977. Special-Purpose Language Learning: A Perspective. Language Teaching & Linguistics: Abstracts 10,3:145-163.

van Ek, J. A. 1975. The Threshold Level. Strasbourg: Council of Europe.

Widdowson, H. G. 1971. The Teaching of Rhetoric to Students of Science and Technology. Science and Technology in a Second Language. CILT Reports and Papers No. 7, 31-40. G. Perren (Ed.). London: Centre for Information on Language Teaching and Research.

Ronald Mackay received formal training in applied linguistics at the University of Edinburgh and has gained practical experience in most aspects of the field while working and visiting in Scotland, England, Sweden, Denmark, Romania, Bulgaria, Czechoslovakia, Poland, Spain, Morocco, Israel, Singapore, Mexico, the United States, and Canada, including work with Inuit in the Canadian Arctic.

Mackay’s interests are reflected in his many publications in the areas of special purpose language teaching, curriculum design and evaluation, second language reading, and teacher training. He has recently collaborated on two works for Newbury House Publishers: Languages for Specific Purposes: Program Design and Evaluation (R. Mackay and J. Palmer, Eds., 1981) and Respond in Writing (R. Mackay and A. Cumming, forthcoming in 1982).

He is currently Associate Professor of Applied Linguistics at Concordia University, where he also holds the post of Director of ESL Credit and ESL Proficiency Testing.