
    Assessing Writing 8 (2002) 17–30

    An assessment of ESL writing placement assessment

    Deborah Crusan

    Department of English Language and Literatures, Wright State University, 3640 Colonel Glenn Highway, Dayton, OH 45435, USA

    Abstract

    What are the best assessment practices for ESL and international students who must be placed into first-year composition courses at both two- and four-year institutions in the United States? In this article, I examine this issue in three ways. First, I enumerate the stances in the literature; following that, I submit results of preliminary research which questions modes of writing assessment and their relationship to final grades in composition classes. Finally, I present results of an internet search of one set of prominent American institutions' placement practices. From these data, it might be inferred that we should use multiple instruments to place ESL students into composition classes rather than resorting to a multiple-choice test (especially a standardized instrument) as the sole means of determining placement.

    My primary purposes in this paper are to argue that second language composition specialists need to examine our placement practices and aim for a reconciliation of these practices with our classroom pedagogies. Further, if we are not involved in placement decisions at our various institutions, we must strive to be included in important decisions concerning our students and to be advocates for second language writers.
    © 2002 Elsevier Science Inc. All rights reserved.

    Keywords: Assessment; Reconciliation; Second language

    Tel.: +1-937-775-2846. E-mail address: [email protected] (D. Crusan).

    1075-2935/02/$ – see front matter © 2002 Elsevier Science Inc. All rights reserved. PII: S1075-2935(02)00028-4

    Writing assessment has always been problematic. The importance of responsible assessment practices cannot be overstated, particularly when the assessment measures influence critical decisions such as placement into composition classes, an act laden with pedagogical, ethical, political, psychometric, and financial implications. In fact, many researchers hold that any project to assess individual students' writing abilities, over time or in a single instance, is "fraught with philosophical, ethical, and methodological peril" (Bullock, 1991, p. 195). Further, White (1985) asserts, "most testing of writing is poorly done, destructive to the goals of teaching, and improperly used" (p. 2).

    In the United States, most writing assessment 25 years ago used indirect measures of writing ability, specifically multiple-choice tests. This procedure was used because direct writing measures, in which students actually produce text, were considered unreliable due to inconsistencies in scoring by independent raters (Gordon, 1987; Greenberg, 1992; Harrington, 1998). Potentially confounding factors for the use of essays as assessment included the lack of benchmark papers, the lack of clear and consistent outcomes criteria, and the lack of quantitative measurements for inter-rater reliability.

    However, in the past 25 years, the US practice of indirectly assessing writing ability has been criticized from many academic quarters (Bullock, 1991; Conlan, 1986; Moss, 1994; Stiggins, 1982; White, 1986, 1990). Experts cite problems with validity (Bachman & Palmer, 1996), ethics (CCCC Committee on Assessment, 1995; Courts & McInerney, 1993; Spolsky, 1997; White, 1994a), efficiency (Williamson, 1994), the inability to measure skills teachers consider important (Hamp-Lyons, 2001), and myriad other concerns (Hamp-Lyons, 1996; Shor, 1997).

    Some researchers (Hamp-Lyons, 1990; Huot, 1990a) have claimed that indirect assessment of writing by means of a multiple-choice instrument is now a thing of the past. Hamp-Lyons (1990) states: "20 years or so ago, many if not most people in North America (to a lesser extent in Great Britain and Australia) believed that writing could be validly tested by an indirect test of writing. As we enter the 1990s, however, they have not only been defeated but also chased from the battlefield" (p. 69). Unfortunately, even at the current time, the opposite may be closer to the truth. White (1996) cautions: "Anyone with a role in writing assessment must keep in mind the multiple-choice specter that hovers just offstage; no stake has ever been driven through its heart to keep it in its coffin, however much it may be wounded" (p. 109).

    In this paper, I review recent literature defining placement testing. I also define direct and indirect assessment of writing, and I cite inherent problems with both direct and indirect methods. Further, I summarize published placement practices at Big Ten universities, confirming the assertion that, despite the contentions of many in the composition assessment community, indirect testing as the sole measure of placement of ESL students into writing courses is alive and well at some of the most prestigious universities in America. Finally, I summarize a study conducted at The Pennsylvania State University which compares student performance on an essay and a multiple-choice grammar test and correlates them with students' final grades. I argue that, collectively, these studies offer evidence that, while we know what we should do, we often do not do it. Our reasons are legion: cost, speed, practicality and efficiency, validity, and reliability.


    1. What is a placement test?

    Leki (1991) defines a placement exam as a test that pairs a student with an appropriate course. At two- and four-year institutions in the United States, placement tests are used to measure readiness for college-level course work and help faculty decide whether incoming students should enroll in remedial or introductory courses (Patkowski, 1991). They are not proficiency tests, but rather tools which allow academic advisers to place students in the level of coursework best suited to their preparation and skills. Institutions regularly employ scores obtained from a variety of tests (ACT, SAT, Test of English as a Foreign Language (TOEFL), locally designed essay tests, locally designed measures of the sub-skills of writing) in order to make admission and placement decisions for students, including students for whom English is a second language.

    2. Direct and indirect testing

    An essay test is a direct assessment of writing ability, an integrative test which has an emphasis on communication, authenticity, and context. It attempts to test knowledge of language as a whole, not the individual components of language. Although the constraints of timed essays have been well noted in the literature (Crowley, 1995; Shor, 1997), the advantage of essays is that they are able to gauge the ability of students to identify and analyze problems, to identify audience and purpose, and to argue, describe, and define, skills that are valued in composition classes in the United States (Crusan & Cornett, 2002; Ferretti, 2001; White, 1994b). Indirect assessments are also used to assess writing ability. Discrete-point tests are constructed on the assumption that language can be broken down into its component parts and each of those parts adequately tested. Those components are "basically the four skills (listening, speaking, reading, and writing), the various hierarchical units of language (phonology/graphology, morphology, lexicon, syntax) within each skill, and the sub-categories within those units" (Brown, 1994, p. 262). An example of an indirect test would be a multiple-choice proficiency test divided into grammar, reading comprehension, vocabulary, and writing which attempts to isolate and evaluate knowledge of specific components of language.

    Discussion in the literature abounds regarding the many problems surrounding writing assessment: reliability and validity issues, rater training, holistic scoring, and whether indirect or direct methods should be used for placement, proficiency, and achievement; however, little quantitative evidence exists to support the positions taken in the literature (Hamp-Lyons, 1997). Polio (1998) points out that relatively few studies address actual empirical research into ethical assessment for placement of ESL composition students. Matsuda (1998) identifies an even more serious problem: that despite the growth of the ESL population, "there has not been a corresponding increase in the amount of attention given to ESL students in many writing programs" (p. 99). Huot (1994) found a similar situation in mainstream composition, commenting, "It is alarming to note the dearth of research and theory in writing assessment and the lack of qualified personnel who direct writing placement programs" (p. 61).

    While there may be university administrators who believe in the value (mainly in terms of efficiency and cost effectiveness) of indirect measures, over the past few decades few writing specialists have publicly argued this, in print or at conferences (Crusan, 2002a). Some writing assessment professionals operate on the assumption that using at least a one-shot writing sample, coupled with a multiple-choice instrument, is the most prevalent placement procedure used in US institutions. Haswell (1998) has been unusual in recognizing the possibility of university use of a multiple-choice test as the sole instrument for placement into writing courses. Other than Haswell, it seems that neither native English speaker (L1) nor non-native English speaker (L2) composition specialists are concerned about finding quantitative evidence to settle the question of whether a direct or indirect measure is better at assessing writing in the context of college placement. The tacit assumption is that the debate over the merits of direct versus indirect testing has been resolved and the issue buried. Unfortunately, this issue has many lives, as demonstrated by the proportion of institutions still using an indirect test as their primary mode of assessment for placement into writing courses.

    In a study of 100 private and public colleges and universities in the United States selected for their "potentially significant second language populations" (Williams, 1995, p. 158), the researcher found that 32% of institutions reported the use of an institutionally administered indirect test; 19% used a standardized test combined with an essay; 23% an essay alone; and 26% relied on TOEFL scores for placement in ESL composition courses. If we combine the percentages of those reporting use of an indirect measure alone (the institutional indirect test and the TOEFL), we find that 58% of the institutions in her study used an indirect measure as the sole means to assess writing for placement.

    More recently, Lewiecki-Wilson, Sommers, and Tassoni (2000) stated, "On the grounds of expediency, many two-year institutions have turned to computerized editing tests such as COMPASS for placing entering students into writing courses, even though such tests do not directly measure writing" (p. 165). COMPASS is a computerized assessment in which students find and correct errors in essays on a computer screen. According to ACT (2002), the Writing Skills Placement Test helps to determine placement into either entry-level or developmental writing classes and assesses students' knowledge and skills in eight writing skills areas: usage/mechanics, rhetorical skills, punctuation, basic grammar and usage, sentence structure, strategy, organization, and style.

    Extending Yancey's (1999) metaphor likening the history of writing assessment to three waves, from objective tests to holistically scored essays to portfolio assessment, Lewiecki-Wilson et al. suggest that there is, especially at two-year institutions, evidence of a new undertow, "a backward-moving current" (p. 166), returning to indirect, objective testing for placing students.

    Writing assessment often seems to be carried out with little thought for who is being tested and how these persons might be affected by the results of the tests. I feel strongly that, as educators, we must be more knowledgeable about assessment and help administrators and colleagues realize the pedagogical, social, and political implications of the tests we administer to our students. Drawing on Haswell's (1998) contention that "most schools use indirect testing through a standardized exam" (p. 136), in the remainder of this paper, I explore the contradiction between what composition specialists believe about assessment for placement and what is actually done. As I will outline, indirect measures of writing ability continue to be accepted as the sole means of writing placement at many academic institutions in the United States.

    3. Problems with the tests - or - who says which is best and why

    Researchers (Bachman, 1990; Bailey, 1998; Conlan, 1986; White, 1985) have documented problems inherent in using a multiple-choice test as an assessment of writing ability. Some have found that students, when taking multiple-choice tests, have a penchant for guessing, especially those who are not well prepared to take the examination or those who are "more impulsive than reflective in test-taking behavior" (Henning, 1987, p. 32). The flaw, it seems, is using a grammar test as the only means to measure writing ability. Grammar is one of the components of language and is generally more appropriately tested through its actual use in one or more of the four skills (reading, writing, speaking, and listening) rather than in isolation (Bailey, 1998). A score on a single multiple-choice instrument may not provide sufficient depth of information.

    ESL writers might well be marginalized by indirect tests. Haswell (1998) asserts: "Indirect testing may be less valid for international students since they often show a larger gap between the skill and habits needed to write extended essays and the facility with the kind of surface features measured in test items" (p. 136). Some ESL students, because they generally have memorized grammar rules very well, tend to score very high on tests like the TOEFL. Consequently, it has been claimed that multiple-choice tests overpredict minority students' performance on essay tests (Bailey, 1998; O'Malley & Valdez Pierce, 1996).

    However, direct assessment of writing ability poses a different set of challenges. While numerous writing experts value the perceived gain in authenticity that timed writing tests represent in assessing writing competence, others find fault with their reliability and validity as measures of writing ability (McNenny, 2001). Many ESL writers, unsure of the structures of the language, have great difficulty producing fluent written discourse, especially in timed situations. This, coupled with a simple "lack of motor fluency in writing English script" (Leki, 1991, p. 55), causes the second language writer to write much more slowly than a native speaker, producing fewer words over a longer period of time. Hence, timed writings might impose a handicap on many ESL students.

    For any student, another problem presented by timed writing tests is that they are given under artificial conditions: students must compose on an assigned topic, and they are most often not allowed to access any reference material such as dictionaries. Many students find it difficult to write cold on a topic they might never have seen before and perhaps care nothing about or, even worse, know nothing about (Huot, 1990a; Leki, 1991). Further, in the timed essay test there is no time for revision or any other process approaches (Wolcott & Legg, 1998).

    The relative merits of both direct and indirect measures were carefully scrutinized by Breland (1977), whose study concluded that a multiple-choice test combined with an essay can be the most educationally sound solution to administrative problems in placement. Breland (1983) later presented evidence of the validity of direct assessment over and above other available measures; however, he cites cost and reliability as two obstacles and recommends consideration of automated textual analysis, as well as production and reporting of more than a single score, as ways to improve direct assessment. Breland, Camp, Jones, Morris, and Rock (1987) studied the differences between essay and multiple-choice instruments for assessing writing and once more suggested the multiple-choice/essay test combination in an effort to reduce the cost of assessment and increase its reliability and validity. Hudson (1982) drew similar conclusions and recommended either a holistically or analytically scored essay combined with a carefully chosen objective test.

    In an effort to discover "how prevalent placement testing was and what forms it took" (p. 49), Huot (1994) surveyed 1,080 institutions, both two- and four-year US colleges and universities on the MLA list of English Chairpersons. He calls this initial study "tentative and exploratory" (p. 59) and encourages more research in this area. He found that only one-half of the institutions surveyed used any kind of direct assessment for placement of students into composition classes. In other words, almost 50% of institutions were still using some form of indirect assessment, a problem lamented by a number of composition theorists (Huot, 1990b; Shor, 1997; Tedick, 1990; White, 1990). Further, 54% of the institutions that reported using some form of direct assessment employed writing criteria "developed outside the university" (p. 60).

    3.1. Placement in the ESL context

    Because Huot's data did not include ESL program data, I was led to gather my own data as well as to conduct an internet search of US institutions of higher education, both two- and four-year institutions. Preliminary findings are reported in the following sections.

    First, I compared a grammar test to an essay test to determine which was a better predictor of final grade in an ESL composition course. Six composition instructors and 124 ESL students took part in a study at The Pennsylvania State University. The students were both basic and freshman level writers enrolled in sheltered (ESL only) classes. They ranged in age from 18 to 40; 65% were male and 35% female, and they represented 26 different languages; however, Spanish (25%) and Chinese (21%) were most prevalent, followed by Korean (12%). The students were of varying levels of writing ability and were initially placed in classes by means of a score on a university-administered indirect assessment taken just prior to enrollment as freshmen.

    There were two independent variables in the study. The first was a direct measure written on the first day of class on one of three assigned topics. The second independent variable, an indirect measure based on disclosed copies of the TOEFL, included 40 questions and was divided into four sub-skills: word order, error identification, vocabulary/expression, and reading comprehension. The reliability of this measure was .95. This test provided objective measure scores in lieu of the university-administered placement test scores, which, despite repeated requests, were not made available for this study. The dependent variable, final grade, was reported to me by composition teachers at the end of the semester.

    The six instructors were trained and used the Test of Written English (TWE) scoring guide to score the essays because it was initially designed to measure writing proficiency rather than growth in the skill (Educational Testing Service, 1996). In case of a discrepancy of more than two points, we used a third rater's score to adjudicate. The inter-rater reliability was .92. Two English as a Second Language instructors hand-scored the grammar tests.
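    The scoring and adjudication procedure just described amounts to a small decision rule. The Python sketch below shows one way it could be operationalized; it is an illustration only, since the article does not say how the two retained ratings were combined (averaging is assumed here), and the function name and example values are hypothetical rather than taken from the study.

        from typing import Optional

        def adjudicate(rater1: float, rater2: float,
                       rater3: Optional[float] = None,
                       max_discrepancy: float = 2.0) -> float:
            """Combine two essay ratings; fall back to a third rater.

            Assumption: if the first two raters differ by no more than
            max_discrepancy points, their mean is used; otherwise the
            third rater's score decides (the combination rule is not
            stated in the article).
            """
            if abs(rater1 - rater2) <= max_discrepancy:
                return (rater1 + rater2) / 2.0
            if rater3 is None:
                raise ValueError("Discrepancy exceeds threshold; third rating required.")
            return rater3

        # Hypothetical examples, not data from the study:
        print(adjudicate(4.0, 5.0))       # close agreement -> 4.5
        print(adjudicate(2.0, 5.0, 4.0))  # discrepancy > 2 points -> 4.0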

    The most interesting finding in the data concerns the correlations between the dependent and independent variables. Both the direct and indirect assessments correlated positively and significantly with final grade, although the placement essay had a stronger correlation with final grade. The correlation between the grammar test and final grade was the lowest but still significant (.190, P < .01). The correlation between the placement essay and final grade was higher and more significant (.260, P < .001).

    It is important to note here that scholars have issued caveats about employing final grades as variables. Armstrong (1995) argues that any model that uses final grades as the basis for the validity of a placement would likely fail to account for a significant source of variation: instructor grading variability. Further, studies of the presence of grade inflation (see Compton & Metheny, 2000; Edwards, 2000; Nagle, 1998; Zirkel, 1999 for several perspectives on this matter) present reasons for care and circumspection when utilizing such data.

    Possibly of more significance, both statistically and pragmatically, is the correlation between the two independent variables, the grammar test and the placement essay (.327, P < .0001), supporting the use of a combination of direct and indirect measures as a means to place our ESL writers into composition courses. Despite the fact that this correlation is derived from a relatively small sample, it supports the use of multiple assessment instruments, rather than one or the other, in the context of ESL student placement.
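    For readers who wish to see how correlations of this kind are computed, the fragment below uses a standard statistical routine (scipy.stats.pearsonr) on randomly generated stand-in scores. The data are invented for illustration and are not the Penn State data; only the correlation values quoted in the comments come from the study itself.

        # Illustrative only: random stand-in scores, not the study's data.
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        n = 124  # number of ESL students in the study

        grammar = rng.integers(10, 41, size=n)           # 40-item indirect (grammar) test
        essay = rng.uniform(1.0, 6.0, size=n).round(1)   # TWE-scale placement essay score
        final_grade = rng.uniform(0.0, 4.0, size=n)      # end-of-semester course grade

        for label, (x, y) in [("grammar vs. final grade", (grammar, final_grade)),
                              ("essay vs. final grade", (essay, final_grade)),
                              ("grammar vs. essay", (grammar, essay))]:
            r, p = pearsonr(x, y)
            print(f"{label}: r = {r:.3f}, p = {p:.4f}")

        # The article reports r = .190 (P < .01), r = .260 (P < .001), and
        # r = .327 (P < .0001) for these three pairs, respectively.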

    The second data set represents findings from institutional websites at some of the largest universities in the US on the instruments used for initial placement of ESL students.

    [Table 1: ESL writing placement instruments at Big Ten universities, compiled from institutional websites; the table itself is not recoverable from this copy.]

    Table 1 clearly shows that at least three of the largest and most prestigious universities (Penn State, Purdue, and Wisconsin) currently use only a multiple-choice instrument to place ESL students into composition courses.

    In separate internet searches, I found that a great many two-year colleges, which are often the educational entry point for many immigrant ESL students, particularly Hispanics (Rendon & Nora, 1994), continue to use indirect instruments as the sole means of placement into composition courses.

    3.2. Placement and its importance for ESL students

    Assessment for placement cannot be trivialized and weighs heavily on writing program administrators (Silva, 1994). So important is placement that the CCCC Committee on Second Language Writing, in its Statement on Second-Language Writing and Writers (2001), states: "Decisions regarding the placement of second language writers into writing courses should be based on students' writing proficiency and not based solely on the scores from standardized tests of general language proficiency or of spoken language proficiency. Instead, scores from the direct assessment of students' writing proficiency should be used, and multiple writing samples should be consulted whenever possible" (pp. 670–671).

    Armstrong (2001) recommends that institutions' placement tests align with course content. Likewise, Bachman and Palmer (1996) and Moss (1994) advise that writing assessments should be linked with curricula. Students should be tested in a manner similar to the work of the course: in other words, if the work of the course is writing, then students should write as a means of placement. I believe that a test is inappropriate unless it leads to the best available treatment or placement for students (Crusan, 2002b).

    It has been suggested that "assessment defines goals and expresses values more clearly than do any number of mission statements" (White, Lutz, & Kamusikiri, 1996, p. 9). If a university's definition of writing ability differs from that of the composition community, its goals will differ as well. When a university defines writing ability as product rather than a non-linear, recursive process, it is justified in assessing writing ability indirectly. However, whether such assessment is meaningful is a separate question. Continually, experts such as Ferris and Hedgcock (1998) point out that an indirect measure might attempt to evaluate writing ability by testing verbal reasoning, error recognition, or grammatical accuracy, all of which are related to writing performance in some way "but only indirectly because they do not involve the act of composing" (p. 229). The view that student mastery of individual, bite-size, discrete-point rules can lead to full mastery of the language is dangerous if it informs curriculum and leads to decontextualized writing pedagogy (Leki, 1991).

    Institutions may cling to indirect assessment techniques because they have a narrow definition of writing ability. Institutions using only indirect assessment tools for ESL placement may consider writing to be simply a set of sub-skills such as vocabulary knowledge, error recognition, and mechanics. If this is the case, it is not surprising that an indirect assessment is used. However, the assessment of writing is complex (White, 1994a), and that complexity should not be reduced to one score on one measure (Bailey, 1998; Belanoff, 1991; Carlson & Bridgeman, 1986; Haswell & Wyche-Smith, 1994; Huot, 1994, 1996). Many institutions cite cost and ease of administration as justification for continuing to assess ESL students solely with a discrete, indirect measure. Such defense of indirect assessment, even in the face of information which indicates that what is being done may not be for the greater good of the students involved, is suspect. Is ease of administration and cost effectiveness for the university enough to justify probable financial and academic hardships to students? Rose (1989) considers testing to be the supreme irony. He states that "the very means we use to determine students' needs and the various remedial procedures that derive from them can wreak profound harm on our children, usually, but by no means only, those who are already behind the economic and political eight ball" (p. 127).

    A number of researchers have argued that teachers should be involved in testing (Bachman & Palmer, 1996; Hamp-Lyons, 1996; White, 1996). Kroll and Reid (1994) contend that test developers should be those who teach the students who will be tested, since these teachers would be better equipped to know what their students can do, how they can do it, and what they need, and can, therefore, better assess their students' writing. Moreover, White (1996) strongly advocates that composition faculty become involved in assessment outside the classroom; he asserts that the measurement community regards the writing community as "largely irrelevant to serious measurement" (p. 102). However, he also argues that informed teachers can convince those in power to use and develop better types of writing assessment procedures once they have begun to discover that assessment knowledge is power.

    Discussing the necessity for all teachers to address the needs of ESL students, Zamel (1995) calls upon faculty to be border-crossers, "blurring the borders within what was once a fairly well-defined and stable academic community" (p. 519). In much the same way, I call for more compositionists to become involved with the writing placement processes in their respective institutions, to cross the border into the world of assessment and struggle to achieve "a vision of active learning, creative thinking, and a much needed blend of skills with imagination" (White, 1996, p. 111), thereby supporting the educational needs of our students. How many of us could carry on an informed dialogue if asked about the writing placement test procedures at our colleges and universities? Again, knowledge is power.

    The form of assessment we choose is always important because, "as a rhetorical act, assessment speaks to students, faculty, the wider institution, and the community about the values and practices of college writing" (Lewiecki-Wilson et al., 2000, p. 183). This article's purpose was to encourage ESL and composition specialists to explore how we might better serve those we teach and test. In an age where we seem to be cycling back into criticism (White, 2001), we must be cautious that our students do not get lost or, worse, disappear from the academic scene altogether.


    As teachers, we should re-evaluate our commitment to ESL students (Williams, 1995) by becoming more involved in the assessments that affect their lives, helping our ESL students to carve out a legitimate space for themselves at the university, a promise made to them by universities when touting much publicized diversity programs.

    References

    American College Test, Inc., COMPASS/ESL. (2002). Writing placement skills. Retrieved June 4, 2002, from the Website: http://www.act.org/compass/sample/writing.html

    Armstrong, W. B. (1995, May). Validating placement tests in the community college: The role of test scores, biographical data, and grading concerns. Paper presented at the 35th Annual Forum of the Association for Institutional Research, Boston, MA.

    Armstrong, W. B. (2001). Pre-enrollment placement testing and curricular content: Correspondence or misalignment. Abstract retrieved May 26, 2002, from: ERIC database.

    Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford, UK: Oxford University Press.

    Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford, UK: Oxford University Press.

    Bailey, K. M. (1998). Learning about language assessment: Dilemmas, decisions, and directions. Boston: Heinle & Heinle.

    Belanoff, P. (1991). The myths of assessment. Journal of Basic Writing, 1, 54–66.

    Breland, H. M. (1977). Can multiple-choice tests measure writing skills? College Board Review, 103, 32–33.

    Breland, H. M. (1983). The direct assessment of writing skill: A measurement review (College Board Report No. 83-6, ETS RR No. 83-32). New York: College Entrance Examination Board.

    Breland, H., Camp, R., Jones, R., Morris, M., & Rock, D. (1987). Assessing writing skill (Research Monograph No. 11). New York: College Entrance Examination Board.

    Brown, H. D. (1994). Principles of language learning and teaching (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.

    Bullock, R. (1991). Autonomy and community in the evaluation of writing. In: R. Bullock & J. Trimbur (Eds.), The politics of writing instruction: Postsecondary. Portsmouth, NH: Boynton/Cook.

    Carlson, S., & Bridgeman, B. (1986). Testing ESL student writers. In: K. L. Greenberg, H. S. Wiener, & R. A. Donovan (Eds.), Writing assessment: Issues and strategies. New York: Longman.

    CCCC Committee on Assessment. (1995). Writing assessment: A position statement. College Composition and Communication, 46, 430–437.

    CCCC Committee on Second Language Writing. (2001). CCCC statement on second-language writing and writers. College Composition and Communication, 52 (4), 669–674.

    Compton, D. M., & Metheny, B. (2000). An assessment of grade inflation in higher education. Perceptual and Motor Skills, 90, 527–536.

    Conlan, G. (1986). Objective measures of writing ability. In: K. L. Greenberg, H. S. Wiener, & R. A. Donovan (Eds.), Writing assessment: Issues and strategies. New York: Longman.

    Courts, P. L., & McInerney, K. H. (1993). Assessment in higher education: Politics, pedagogy, and portfolios. Westport, CT: Praeger.

    Crowley, S. (1995). Composition's ethic of service, the universal requirement, and the discourse of student need. Journal of Composition Theory, 15 (2), 227–239.

    Crusan, D. (2002a). The marginalization of ESL students through placement exams. Paper presented at the 36th Annual TESOL Convention and Exhibition, Salt Lake City, UT, April 2002.

    Crusan, D. (2002b). The quagmire of assessment for placement: Talking out of both sides of our mouths. Manuscript submitted for publication.


    Crusan, D., & Cornett, C. (2002). The cart before the horse: Teaching assessment criteria before writing. The International Journal for Teachers of English Writing Skills, 9, 20–33.

    Educational Testing Service. (1996). TOEFL: Test of written English guide (4th ed.). Princeton, NJ: Author.

    Edwards, C. H. (2000). Grade inflation: The effects on educational quality and personal well being. Education, 120, 538–546.

    Ferretti, E. (2001). Just a little higher education: Teaching working-class women on the vocational track. In: B. Alford & K. Kroll (Eds.), The politics of writing in the two-year college (pp. 1–18). Portsmouth, NH: Heinemann.

    Ferris, D., & Hedgcock, J. (1998). Teaching ESL composition: Purpose, process, and practice. Mahwah, NJ: Lawrence Erlbaum.

    Gordon, B. L. (1987). Another look: Standardized tests for placement in college composition courses. Writing Program Administration, 10 (3), 29–38.

    Greenberg, K. (1992). Validity and reliability issues in direct assessment of writing. Writing Program Administration, 16, 7–22.

    Hamp-Lyons, L. (1990). Second language writing: Assessment issues. In: B. Kroll (Ed.), Second language writing: Research insights for the classroom. Cambridge: Cambridge University Press.

    Hamp-Lyons, L. (1996). The challenges of second-language writing assessment. In: E. M. White, W. D. Lutz, & S. Kamusikiri (Eds.), Assessment of writing: Politics, policies, practices. New York: The Modern Language Association of America.

    Hamp-Lyons, L. (1997). Exploring bias in essay tests. In: C. Severino, J. C. Guerra, & J. E. Butler (Eds.), Writing in multicultural settings. New York: The Modern Language Association of America.

    Hamp-Lyons, L. (2001). Fourth generation writing assessment. In: T. Silva & P. K. Matsuda (Eds.), On second language writing (pp. 117–127). Mahwah, NJ: Lawrence Erlbaum.

    Harrington, S. (1998). New visions of authority in placement test rating. Writing Program Administration, 22 (1/2), 53–84.

    Haswell, R. H. (1998). Searching for Kiyoko: Bettering mandatory ESL writing placement. Journal of Second Language Writing, 7, 133–174.

    Haswell, R. H., & Wyche-Smith, S. (1994). Adventuring into writing assessment. College Composition and Communication, 45, 220–236.

    Henning, G. H. (1987). A guide to language testing: Development, evaluation, research. Cambridge, MA: Newbury House.

    Hudson, S. A. (1982). An empirical investigation of direct and indirect measures of writing. Report of the 1980–81 Georgia Competency Based Education Writing Assessment Project, 1981. ERIC: ED #205993.

    Huot, B. (1990a). The literature of direct writing assessment: Major concerns and prevailing trends. Review of Educational Research, 60, 237–263.

    Huot, B. (1990b). Reliability, validity, and holistic scoring: What we know and what we need to know. College Composition and Communication, 41, 201–213.

    Huot, B. (1994). A survey of college and university writing placement practices. Writing Program Administration, 17, 49–65.

    Huot, B. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47, 549–566.

    Indiana University, South Bend, IN, Office of International Affairs. (2002). The English as a second language (ESL) placement test. Retrieved June 5, 2002, from the Website: http://www.iusb.edu/abridger/test.htm

    Kroll, B., & Reid, J. (1994). Guidelines for designing writing prompts: Clarifications, caveats, and cautions. Journal of Second Language Writing, 3, 231–255.

    Leki, I. (1991). A new approach to advanced ESL placement testing. Writing Program Administration, 14 (3), 53–68.

    Lewiecki-Wilson, C., Sommers, J., & Tassoni, J. P. (2000). Rhetoric and the writer's profile: Problematizing directed self-placement. Assessing Writing, 7, 165–183.


    Matsuda, P. K. (1998). Situating ESL writing in a cross-disciplinary context. Written Communication, 15, 99–121.

    McNenny, G. (2001). Writing instruction and the post-remedial university: Setting the scene for the mainstreaming debate in basic writing. In: G. McNenny & S. H. Fitzgerald (Eds.), Mainstreaming basic writers: Politics and pedagogies of access (pp. 1–15). Mahwah, NJ: Lawrence Erlbaum.

    Michigan State University, Lansing, MI, MSU Testing Office. (2002). English language proficiency exams. Retrieved June 5, 2002, from the Website: http://www.couns.msu.edu/testing/tests.htm

    Moss, P. A. (1994). Validity in high stakes writing assessment: Problems and possibilities. Assessing Writing, 1, 109–128.

    Nagle, B. (1998). A proposal for dealing with grade inflation: The relative performance index. Journal of Education for Business, 74, 40–43.

    Northwestern University, Evanston, IL, Writing Program. (2002). English proficiency. Retrieved June 5, 2002, from the Website: http://www.cas.northwestern.edu/handbook/IV.html#IV.A.2.

    O'Malley, J. M., & Valdez Pierce, L. (1996). Authentic assessment for English language learners: Practical approaches for teachers. New York: Addison-Wesley.

    Patkowski, M. S. (1991). Basic skills tests and academic success of ESL college students. TESOL Quarterly, 25, 735–738.

    Polio, C. (1998). Examining the written product in L2 writing research: A taxonomy of measures and analyses. Paper presented at the Symposium on Second Language Writing, Purdue University, West Lafayette, IN, September 1998.

    Rendon, L. I., & Nora, A. (1994). A synthesis and application of research on Hispanic students in community colleges. In: J. L. Ratcliff, S. Schwarz, & L. H. Ebbers (Eds.), Community colleges (2nd ed.). Needham Heights, MA: Simon & Schuster.

    Rose, M. (1989). Lives on the boundary: A moving account of the struggles and achievements of America's educationally underprepared. New York: Penguin Books.

    Shor, I. (1997). Our apartheid: Writing instruction and inequality. Journal of Basic Writing, 16, 91–104.

    Silva, T. (1994). An examination of writing program administrators' options for the placement of ESL students in first year writing classes. Writing Program Administration, 18 (1/2), 37–43.

    Spolsky, B. (1997). The ethics of gatekeeping tests: What have we learned in a hundred years? Language Testing, 14, 242–247.

    Stiggins, R. J. (1982). A comparison of direct and indirect writing assessment methods. Research in the Teaching of English, 16, 101–114.

    Tedick, D. J. (1990). ESL writing assessment: Subject-matter knowledge and its impact on performance. English for Specific Purposes, 9, 123–143.

    The Ohio State University, Columbus, OH, English as a Second Language Composition Program. (2002). Placement. Retrieved June 5, 2002, from the Website: http://www.esl.ohio-state.edu/Comp/Placement Information.html

    The Pennsylvania State University, University Park, PA, First-Year Testing, Counseling and Advising Program (FTCAP). (2002). English test. Retrieved June 5, 2002, from the Website: http://www.psu.edu/dus/ftcap/ftcoverv.htm

    University of Illinois, Urbana-Champaign, IL, Division of English as an International Language. (2002). The ESL placement test (EPT). Retrieved June 5, 2002, from the Website: http://www.deil.uiuc.edu/esl.service/EPT.html

    University of Iowa, Iowa City, IA, The English as a Second Language Program. (2002). English as a second language credit classes. Retrieved June 5, 2002, from the Website: http://www.uiowa.edu/%7Eiiepesl/ESL/eslindex.html

    University of Michigan, Ann Arbor, MI, English Language Institute. (2002). The Academic English Evaluation (AEE) Schedule and Information Sheet for Winter, Spring, and Summer Terms, 2003. Retrieved June 5, 2002, from the Website: http://websvcs.itd.umich.edu/eli-bin/main

    University of Minnesota, Minneapolis, MN. (2002). Test scores. Retrieved June 5, 2002, from the Website: http://www1.umn.edu/twincities/


    University of Wisconsin, Madison, WI, Center for Placement Testing. (2002). English placement test. Retrieved June 5, 2002, from the Website: http://wiscinfo.doit.wisc.edu/exams/english placement test.htm

    White, E. M. (1985). Teaching and assessing writing: Recent advances in understanding, evaluating, and improving student performance. San Francisco: Jossey-Bass.

    White, E. M. (1986). Pitfalls in the testing of writing. In: K. L. Greenberg, H. S. Wiener, & R. A. Donovan (Eds.), Writing assessment: Issues and strategies. New York: Longman.

    White, E. M. (1990). Language and reality in writing assessment. College Composition and Communication, 41, 187–200.

    White, E. M. (1994a). Issues and problems in writing assessment. Assessing Writing, 1, 11–27.

    White, E. M. (1994b). Teaching and assessing writing (2nd ed.). San Francisco: Jossey-Bass.

    White, E. M. (1996). Writing assessment beyond the classroom. In: L. Z. Bloom, D. D. Daiker, & E. M. White (Eds.), Composition in the twenty-first century: Crisis and change (pp. 101–111). Carbondale, IL: Southern Illinois University Press.

    White, E. M. (2001). Revisiting the importance of placement and basic studies: Evidence of success. In: G. McNenny & S. H. Fitzgerald (Eds.), Mainstreaming basic writers: Politics and pedagogies of access (pp. 19–28). Mahwah, NJ: Lawrence Erlbaum.

    White, E. M., Lutz, W. D., & Kamusikiri, S. (1996). Assessment of writing: Politics, policies, practices. New York: The Modern Language Association of America.

    Williams, J. (1995). ESL composition program administration in the United States. Journal of Second Language Writing, 4, 157–179.

    Williamson, M. (1994). The worship of efficiency: Untangling theoretical and practical considerations in writing assessment. Assessing Writing, 1, 147–193.

    Wolcott, W., & Legg, S. M. (1998). An overview of writing assessment: Theory, research, and practice. Urbana, IL: National Council of Teachers of English.

    Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50, 483–503.

    Zamel, V. (1995). Strangers in academia: The experiences of faculty and ESL students across the curriculum. College Composition and Communication, 46, 506–521.

    Zirkel, P. A. (1999). Grade inflation: A leadership opportunity for schools of education? Teachers College Record, 101, 247–260.
