THE INTERNATIONAL ADULT LITERACY SURVEY: WHAT DOES IT REALLY MEASURE?

MARY HAMILTON and DAVID BARTON

Abstract – The paper evaluates the work of the International Adult Literacy Survey as reported in OECD 1997. It assesses its contribution to understanding literacy in terms of the perspective of the New Literacy Studies. It outlines this perspective as a basis for a critique that is mostly concerned with the validity of the test. Three criticisms of the survey are made: that it provides only a partial picture of literacy; that culture is treated as bias; and that the test items do not represent the real-life items as claimed. Finally, the paper concludes with an overall evaluation of what the IALS achieves in terms of its own aims.

Zusammenfassung – Dieser Artikel bewertet die Arbeit der internationalen Befragung über die Lese- und Schreibfähigkeit erwachsener Personen (International Adult Literacy Survey) nach dem in der OECD 1997 veröffentlichten Bericht. Er beurteilt ihren Beitrag zum Verständnis der Lese- und Schreibfähigkeit aus der Perspektive der zu diesem Thema neu durchgeführten Studien. Die Autoren skizzieren diese Perspektive als Grundlage einer Kritik, die sich zum großen Teil auf die Gültigkeit des Tests bezieht. Die Studie wird in drei Aspekten kritisiert: sie gebe lediglich ein unvollständiges Bild der Lese- und Schreibfähigkeiten wieder, die kulturellen Faktoren würden lediglich als Abweichungen behandelt, und die Einzelheiten des Tests würden nicht diejenigen des realen Lebens – wie ursprünglich beabsichtigt – wiedergeben. Der Artikel endet mit einer allgemeinen Bewertung der durch die IALS erzielten Erfolge in Bezug auf ihre eigenen Zielsetzungen.

Résumé – Cet article évalue le travail réalisé par L’Enquête internationale sur l’alphabétisation des adultes présentée par l’OCDE en 1997. Il apprécie sa contribution à définir la littératie dans l’optique des nouvelles études dans ce domaine. Il souligne dans cette optique le fondement d’une critique principalement soucieuse de la validité du test. Il émet trois avertissements à l’égard de cette enquête: elle dresse un tableau incomplet des capacités de lecture et d’écriture; les facteurs culturels y sont considérés comme des écarts; enfin, les unités du test ne reflètent pas les détails de la vie quotidienne, comme revendiqué par les concepteurs. L’article conclut par une estimation globale des résultats de l’EIAA par rapport à ses propres objectifs.

Resumen – Este artículo evalúa el trabajo de la encuesta internacional sobre la alfabetización de personas adultas International Adult Literacy Survey, según el informe publicado en la OCDE en 1997. Aprecia su contribución a la comprensión de la alfabetización desde la óptica de los nuevos estudios realizados sobre este tema. Los autores esbozan esta perspectiva como base de una crítica que se ocupa en gran parte de la validez del test. Se han realizado tres críticas del estudio: que solamente proporciona una imagen incompleta de las capacidades de lectura y escritura; que los factores culturales se consideran como si fuesen meras discrepancias; y que los detalles del test no reflejan los detalles de la vida real, tal como se pretendía. Finalmente, el trabajo concluye con una evaluación general de los logros alcanzados por la IALS en cuanto a sus propios objetivos.

International Review of Education – Internationale Zeitschrift für Erziehungswissenschaft – Revue Internationale de l’Education 46(5): 377–389, 2000. © 2000 Kluwer Academic Publishers. Printed in the Netherlands.

Evaluating the IALS research from the perspective of the New Literacy Studies

The International Adult Literacy Survey (IALS) is part of a continuing tradition of attempts to measure literacy levels by means of surveys, and to produce international comparisons. Such research is driven by the search for universals in the relationships between literacy, education and prosperity which can be used to further the goal of global development. UNESCO began this process of developing international definitions and statistical measures of literacy (see Barton and Hamilton 1990; Jones 1990). The IALS study is organised by the Organisation for Economic Co-operation and Development (OECD) in partnership with national statistical research agencies in Canada and the USA. The three overall aims of the study are to produce meaningful comparisons between countries, to understand the relationship between literacy and economic indicators of wealth and well-being, and to inform and influence policy decisions. Its findings are now integrated into a set of key statistical indicators issued by international policy makers (see OECD/CERI 1997).

Because this research is both commissioned by and targeted at policy-makers internationally, it has considerable power to shape the vision that funds literacy programmes around the world. It is therefore worth scrutinising this research in detail. In this paper we examine the latest report of this research, Literacy Skills for the Knowledge Society (OECD 1997), and evaluate its claims from the point of view of the social view of literacy within which we work.

Two kinds of data were collected in the survey: as well as a test, respondents were interviewed about their background and their uses of literacy at home and work. This latest report repeats some information from the earlier report which covered results from seven countries (OECD/Statistics Canada 1995). It integrates findings from the second wave of the survey, presenting findings from twelve countries in all, and reflects a wider set of concerns to do with lifelong learning. The findings in the report are organised into four main sections. The first section presents basic statistical distributions, looking at literacy in relation to age, education, social background and gender. The second section is concerned with correlations between educational attainment, literacy scores and level of earnings. The relationships of literacy to health, crime, welfare assistance and community participation are briefly discussed. Section 3 discusses literacy learning across the lifespan and presents data from the background interview about literacy practices at home and work. The final section, entitled Readiness to learn, discusses participation rates in adult education and training and relates them to occupational status and educational attainment.

When evaluating a study of this kind, we can accept the methodological paradigm within which it works and critique it in terms of its own logic and aims (as in Levine 1998; Goldstein 1998). We can also look at what can be achieved with this kind of methodology in relation to other possible approaches, comparing the advantages and limitations of each. Whilst the IALS approach to literacy is an influential one with a long tradition behind it, it is not the only one and it cannot be accepted uncritically. There are different research communities involved with literacy with differing perspectives, who do not necessarily draw on one another’s work.

Those involved in the IALS research are testers and statisticians, committed to quantitative methodologies. They work with a model of literacy that treats it as a set of information-processing cognitive skills and deals primarily with reading. The central part of the study is the test they have developed. They are keen to avoid proxy measures of literacy, where a variable such as years of schooling or educational qualifications is used to stand in as a measure of literacy; consequently, they have tried to develop tests based on real-life literacy materials. Recognising the complexity of literacy, they have also wanted to avoid a single measure of literacy and to include numeracy, design and layout as part of literacy. While their solution of a model of literacy based on three statistically-derived dimensions is inadequate in our terms, as we discuss below, we agree with their call for approaches which confront the complexity of literacy.

We work within what has become known as the New Literacy Studies (as in Gee 1990; Street 1996; Barton 1994; Barton and Hamilton 1998), using an approach which sets out from a different point – starting from the local, everyday experience of literacy in particular communities of practice. Our approach is based upon a belief that literacy only has meaning within its particular context of social practice and does not transfer unproblematically across contexts; there are different literacy practices in different domains of social life, such as education, religion, workplaces, public services, families and community activities; they change over time, and these different literacies are supported and shaped by the different institutions and social relationships. Detailed studies of particular situations can be revealing about these differences, and in turn these help reveal the broader meanings, values and uses that literacy has for people in their day-to-day lives. We would argue that any research that purports to increase our understanding of literacy in society must take account of these meanings, values and uses – and indeed they are the source of the ideas which statisticians use to interpret their findings. Our evaluation of the IALS then starts from this perspective. (See also the critiques by Graff 1996; Street 1996; and the response by Jones 1997.)

Before we offer this evaluation, however, there is one important point about the degree of confidence we can place in the IALS findings, which can be found in the technical reports of the research. An independent evaluation of the methodology, carried out at the request of France, revealed significant methodological flaws. It concluded that:

. . . all comparative analyses across countries should be interpreted with due caution. In particular we recommend against the publication of comparisons of overall national literacy levels. We consider any rankings of countries based on such comparisons to be of dubious value given the methodological weaknesses . . . (Kalton, Lyberg and Rempp 1998: 4; see also Goldstein 1998).

The survey authors were advised to present only the within-country patterns in the report. In the text of the report they have largely kept to this; however, findings are also presented comparatively for the twelve countries by means of bar charts. No information is given about the statistical significance of differences between countries or other groups compared, so readers are left to judge for themselves; the bar charts visually represent differences between countries and, indeed, media reports (such as the Guardian 12 Sept. 1997; Independent 12 Sept. 1997) have seized upon these differences as being important ones.

Having issued this general warning about how to read the survey findings, in the rest of this paper we will concentrate on issues of validity: does the survey really measure what it set out to measure? We make three criticisms of the survey: that it provides only a partial picture of literacy; that culture is treated as bias; and that the tests are not the real-life items they claim to be. Finally, we offer an overall evaluation of what the IALS achieves in terms of its own aims.

A partial picture

Our first critique of the IALS is that it provides only a partial picture of literacy but claims definitively to represent all of literacy. We argue that it recognises only a limited and simplistic view of what literacy is in the lives of the millions of people covered by the survey, and that this partial view is part of the logic of the survey methodology, rather than something that could be improved upon.

In order to understand this argument, it is important to grasp the underlying model of literacy on which the survey is based. The authors claim that it represents a move forward in theorising literacy, in that it moves away from a single dimension of literacy, distinguishing instead three dimensions of literacy (prose, document and quantitative) and five levels on each of these dimensions. It also used a background interview to collect data about adults’ uses of literacy, especially in relation to employment. However, from the point of view of those working within the New Literacy Studies, the approach taken in the survey test is still based on a model of literacy that ignores, rather than contributes to, new understandings of the role of literacy in society.

The definition of literacy given in the introduction to the report (p. 14) is more subtle than in earlier reports (such as OECD 1992; 1995). It puts forward a functional and socially situated view of literacy as “a particular skill”, namely “the ability to understand and employ printed information in daily activities at home, at work and in the community – to achieve one’s goals and to develop one’s knowledge and potential.” The authors go on to say that, in fact, this is not a single, simple skill, but a broad set of information-processing competencies that points to “the multiplicity of skills that constitute literacy in advanced industrialised societies.”

This definition recognises the embedded nature of literacy in everyday life and the variety and complexity of literacy activities that people engage with in the countries studied. Nevertheless, as Street (1996) and Levine (1998) argue, this espoused definition only pays lip service to a social practice account and is at odds with the approach actually taken in operationalising the study. In the study it is assumed that the meaning of literacy is contained in the text items in interaction with the formally described information-processing features of the task required by the test. Under this model, these meanings should be culturally invariant, and features of the wider contexts in which such texts would normally be used are of no importance. The use of the term skills signals the model of cognitive competencies on which the survey is based, and the use of the term advanced industrialised societies suggests something about the value given to westernised, market economies. As Street (1996) points out, the term practices is used in a weak sense to mean activities or tasks.

Cross-cultural testing is widely recognised as a complex and difficult undertaking. To date there is little detailed information available about local variations and cultural patterns of literacy in the countries surveyed. In the past two decades work in the New Literacy Studies has begun the painstaking work of untangling the social, historical and cross-cultural dimensions of literacy (for example Graff 1979; Heath 1983; Scribner and Cole 1981; Street 1993; Barton and Ivanic 1991; Barton 1994). This work appears to be totally ignored by the survey designers and authors. The IALS study fails to acknowledge the extensive work and debates around the complexities of literacy in social practice and the problematic nature of cross-cultural comparisons. It makes sweeping claims about causality from descriptive correlations and differences. It claims to represent the whole picture in an “objective” fashion, disallowing the possibility that alternative research paradigms may be essential in order to interpret and elucidate statistical trends and correlations which can, by themselves, be only suggestive. As a result, the IALS rests on an impoverished view of the roles of literacy in society.


Culture treated as error

Surveys in the tradition of the IALS are based on standardising assumptions which mean that culture is inevitably seen as a problem: it becomes a distracting variable whose influence must be minimised in order to eliminate test bias. From the IALS perspective, the search for validity involves identifying a common cultural core of test items which elicit a similar pattern of response across all cultures and language groups. Within this tradition it is common sense that any literacy practice not recognised beyond a particular cultural group cannot be used to generate test items for a cross-cultural study, since this would constitute cultural bias. However, from the New Literacy Studies perspective, which sees literacy as constituted by its cultural context, the IALS methodology throws the baby of literacy out with the bath water of culture. The search for cultural neutrality directs attention away from the very features that are most essential for an understanding of literacy and its dynamic within everyday life.

Throughout the report the IALS authors struggle with what they view as the “problem” of culture, and they provide many examples of where it intrudes in the test design and translation across countries. It is clear that even within their own terms they have not always succeeded in assuring equivalence of items across countries. The boundaries they draw between different countries are frequently based on simplistic and unexamined assumptions. For example, national boundaries are usually taken as defining homogeneity of culture, except where more than one language is officially recognised, as in Canada, Switzerland and Belgium. These are political and bureaucratic distinctions that stand in a complex relationship with lived cultures. There is an English-language North American version, covering the United States and English-speaking Canada, and a version covering England, Scotland and Wales, whereas Northern Ireland had its own version. It is not obvious what theory of cultural differences would account for these decisions. Is the multilingual nature of the United States to be ignored? Do the United States and Canada have one homogeneous culture, with fewer differences than those between England and Northern Ireland?

In generating test items, the starting points were real-life texts such as bus timetables, advertisements and consumer instructions. These then underwent various transformations to turn them into test items. The final test used 35 texts, each one serving as the basis for several question items. Each individual was tested on a maximum of 15 of these texts. Changes to the item pool were strongly controlled from the United States Educational Testing Service. There was a constant selection process: items that did not give consistent response patterns across different populations were discarded, and others were subtly changed to turn them into more reliable test items. Each step makes the test item less life-like. Any test item which showed cross-cultural differences was dispensed with, so that the remaining items represent a common core of literacy tasks that are relevant to all the countries surveyed and which elicit the same distribution of responses from populations in all these countries.

The US and the British versions of the IALS test can serve as a good example of this process of creating items. Comparing the two versions is instructive, as there is a common language between the two countries and some cultural differences, but there is greater cultural similarity than between many pairs of countries included in the survey; therefore cross-cultural issues of item choice and translation could be expected to be least problematic (see also the IALS’ own investigations of translation issues, Binkley and Pignal 1998). We have looked at the British version of the test. On first inspection of the 35 texts, the strong US bias is immediately apparent. On page 19 of the report it is stated that the majority of the items originate in the US. We would argue that most of the others are obviously international or still follow a US style.1

At a superficial level the texts have been translated into British English: so that diaper has become nappy, and odor is spelt odour, but the fact that the word odour is not commonly used in the UK in reference to household smells is ignored. The title Recreational Swimming Facility is used in one text, a phrase not commonly used in the UK; this presumably means sports centre. Looking more closely, there are US ways of using language and US conventions of design and layout. The bus timetable, for instance, follows the twelve-hour clock with a.m. and p.m. The morning is written in normal font and the afternoon in bold. This is fine, it is comprehensible and it may seem innocuous. Nevertheless, these are US conventions; in Britain bus timetables normally use a twenty-four-hour clock, and morning and afternoon buses are not given a different font. Font differences are usually used to distinguish through services from ones where a change of bus is required. These are small points, but they are indicative of how the seemingly culture-free bus timetable may in fact be quite a different text in the two countries and be clearly perceived by respondents as originating outside of their own culture.

Tests and reality: what is being tested?

If these are the texts, we also need to examine the practices. Continuing the example above, what do people do with bus timetables in their lives? What role do bus timetables have in their social activities? We know of no research which has actually investigated this, but the consistent message from our ethnographic work is that the uses of texts are not obvious: one cannot read off from a text, nor from the intentions of the text-producers, how a text is actually used (Barton and Hamilton 1998). This is a simple point, but it is a crucial one for understanding the nature of literacy. It is also important for evaluating the reality of the test situation. Use of texts is represented by the test items, the actual questions people are asked in relation to the texts; these may bear no relation to people’s actual everyday practices. The activity of answering test questions based on a timetable may bear no relation to people’s day-to-day activities, even if they do travel by bus and use timetables. Once a real-life text such as a bus timetable is wrenched out of its real-life context, it ceases to be a timetable and becomes a test item. We have dealt with one text only, but we believe these issues pervade all the test items.

The removal of texts from their original cultural context, the subtle transformations to make them acceptable across populations and the embedding of them in a set of information-processing tasks transform them into test items. As a result, these test items have a remote and indeterminate relationship with the original literacy practices from which the texts were taken. They have been changed by being recontextualised. Tests of this sort are still proxy measures of literacy, the problem that the test designers were trying to escape from; they are not real-life tasks, they are abstracted. In addition, the tests are designed to ensure a broad spread of responses across an arbitrarily fixed set of five levels. This involves allocating a significant proportion of people to each of the levels, including the lowest; this may bear no relation to the distribution of everyday tasks people perform in their lives. We reiterate: the levels have been invented statistically; they are not based on people’s actual lived practices. The data do not demonstrate that many people have low levels of literacy or that there is a broad range of skills within Britain: these distributions are invented by the statistical models. Further, in the survey people’s own judgements of their everyday literacy competence (not reported in this later volume) were much more positive than the test scores would suggest they should be. The interpretation of this favoured by the authors of the 1995 OECD report is that people are deluded about their own abilities, but an alternative view is that the test is measuring something other than everyday literacy practices.

Because of our own concerns, we were particularly interested in what the interview data can tell us more directly about the practices in which people engage. These data are reported in sections 2 and 3 of the report. On close inspection we found that these sections rely a great deal on previously collected data to make their arguments. For example, most of the section that deals with the social benefits of literacy is not based on direct information from the survey. Arguments are made about relationships between literacy and crime, welfare assistance and health, but no direct evidence for them is presented from the survey. Where there are data which can be compared with other published research, we would be cautious about relying on the IALS data, and we would reiterate warnings about avoiding comparisons across countries. For example, the British survey data on community participation suggest that around 30% of the population volunteer in community organisations, the lowest figure of all the countries studied. Detailed studies of voluntary activity in the UK consistently report a rate of volunteering of over 50% (Davis Smith 1998; Elsdon 1995; Percy 1988), a very different figure from that of the IALS. As one explanation of the discrepancy between these results, our own experience (reported in Barton and Hamilton 1998) suggests that a single interview with just one or two questions on any topic, as used in the IALS, can only scratch the surface of an individual’s literacy practices.

Other data from the background interview are presented in Section 3, where findings about people’s reported literacy practices at home and work are discussed under the theme of literacy learning across the lifespan. In line with current policy preoccupations, the authors of the report were concerned to address issues of lifelong learning and to argue that a broad cultural approach to improving literacy is needed. To sustain this argument, there is much reliance on data from elsewhere. To give one example, there can be no direct evidence from a snapshot survey like this that literacy deteriorates if it is not used; but this argument is made, and it is used to justify the need for lifelong learning. Another example of the need to bring in data from elsewhere is that socio-economic background data were very limited for respondents in the IALS study. Yet in order to support arguments about the importance of intergenerational effects (that is, that the level of parental education can predict children’s literacy achievements), the authors rely on data from other surveys.

Taking into account the comments above, what sense can be made of the findings from this survey? A carefully designed test has been given to a sample of the population of Britain; a similar test has been given to samples of people in different countries. There appear to be some patterns in the results. It is reasonable then to ask what these items might possibly measure. The test is constructed in the same way as, and feels like, an intelligence test or a language test. From inspection of the content of the test items, they appear to represent texts that would be familiar to people who participate in a US-based international culture, based on consumer goods, common activities and news issues that circulate within the ambit of global media and markets. We argue therefore that the IALS test offers yet another proxy measure for literacy, and we suggest that what the test is really measuring is an artificially constructed test literacy, sampling a transnational culture and tapping people’s participation in the global economy. This is a literacy, but it is not literacy.

The power of enactive research

Having made several criticisms of the validity of the test, we can now return to what the research reveals, and we examine this in terms of the goals the authors originally set themselves:

1. To produce meaningful comparisons between countries. The verdict can only be that the survey has failed in this aim. There are many reasons, but the failure to achieve an adequately standardised sample and procedure across countries in itself weakens the cross-cultural comparisons to the point that independent consultants advised the authors not to publish international comparisons.

2. To understand the relationship between literacy and economic indicators of wealth and well-being. Within countries we believe there is a limited success in this aim. As long as we recognise that these are still proxy measures for literacy, there are suggestive findings, especially those gained from the interview about practices. Some of the within-country patterns provide ideas for further exploration using more sensitive and ethnographic methods: for example, patterns of findings related to gender, participation in voluntary activities, and work-related practices are worth investigating further. For a reader from the UK, many of the findings confirm relationships found in other survey research (such as Hamilton and Stasinopoulos 1987; Bynner 1997), but perhaps similar evidence is not available elsewhere in the other countries included in the survey.

We have argued that the understandings we can derive from the survey are limited in two ways. Firstly, in order to get test items that work across countries the survey designers have had to restrict themselves to a narrow band of transnational texts which tap into only a fraction of real literacy practices and particularly underrepresent writing. These texts are then removed from their context of use and fitted into a traditional psychometric testing model which defines levels of difficulty and is based on a snapshot view of performance. Secondly, the findings are over-interpreted, using existing explanatory frameworks which are based on previously defined policy goals. It is clear from this that the policy is driving the research and not the other way round.

3. To inform and influence policy decisions. The IALS research may well be effective in this aim. We have argued above that the survey claims too much and that the findings are over-interpreted. The danger, however, is that this research is framed by such powerful institutions, including the OECD, uses up such a large part of the budget available for researching literacy issues, and is so closely geared to current policy agendas, that it will have effects far beyond its merits. The IALS reports will be used to justify policy interventions in individual countries, based on its estimates of the percentage of the population with “very poor” literacy. (We have already collected newspaper cuttings and reports from Canada, New Zealand, Ireland and the UK suggesting that this is happening.) It is also likely that the dimensions and levels of literacy it has defined will be integrated into national assessment systems (as appears to be happening with the Ontario Learning Outcome framework).

Despite its pronounced emphasis on the employment-related aspects of literacy, in many ways the policy vision behind this report shares the aims of the humanistic agenda of UNESCO as expressed in the declaration of the Fifth International Conference on Adult Education (see summary declaration, 13, 29–31). The IALS report argues for the importance of lifelong learning in achieving a more equitable distribution of literacy within and across different countries, the importance of developing a general cultural environment of literacy, and the need for cross-sectoral policy initiatives in literacy.

We would describe the IALS research as enactive research, meaning that it is designed to rationalise and support policy decisions that have already been made outside of the research arena. Much government-sponsored educational research in Britain is enactive in this sense, and there are increasing pressures for research to support policy. What may be confusing for literacy advocates is that we may largely find the ends of these policies benevolent, while at the same time finding the human capital rationalisations for these policy goals distasteful or plain wrong. Both OECD policy and the research uncritically support the new work-order vision of global capitalism and encourage people to see this as a fixture around which we need to adjust our lives and national policies, rather than as something which literacy might help shape according to a more humanitarian agenda. Rather than producing new knowledge and understandings about literacy, this research contributes to the development of a transnational culture; this is part of the vision of the new global capitalism (see Gee et al. 1996).

Large-scale surveys may be the only kind of research to which policy-makers will listen, and this research may succeed in releasing more secure funding for adult literacy programmes. Such an outcome will occur, however, at the cost of genuine understanding of the range of meanings and the power of literacy in people’s lives. The methodology and findings of such research are contentious. Such research keeps alive the myth that prosperity necessarily follows from literacy (see Graff 1979), and it may unreasonably elevate expectations of what literacy can achieve in economic terms. For these reasons, it is essential to complement the IALS approach with other research paradigms in order to inform policy-makers about literacy and its role in society, to meet people’s own demands for literacy, and to inform and develop an effective ABE practice.

Note

1. If there is such a US bias in the test, one might ask why the US does not score significantly higher than Britain on the test; but that would be to ignore the many other potential sources of variation which make this test such a difficult one to interpret, including cultural differences within countries as well as between them.

References

Barton, D. 1994. Literacy: An Introduction to the Ecology of Written Language. Oxford: Blackwell.

Barton, D. and Hamilton, M. 1990. Literacy Research in Industrialised Countries: Trends and Prospects. UIE Reports 2.

———. 1998. Local Literacies: Reading and Writing in One Community. London: Routledge.

Barton, D. and Ivanic, R., eds. 1991. Writing in the Community. London: Sage.


Binkley, M. and Pignal, J. 1998. An analysis of items with different parameters across countries. In: Murray, S., Kirsch, I. and Jenkins, L., eds., Adult Literacy in OECD Countries: Technical Report on the First International Adult Literacy Survey. Washington: US Dept of Education, National Center for Education Statistics (NCES 98-053), 143–160.

Bynner, J. and Parsons, S. 1997. It Doesn’t Get Any Better: The Impact of Poor Basic Skills on the Lives of 37-year-olds. London: The Basic Skills Agency.

CONFINTEA. Adult Education: The Hamburg Declaration: The Agenda for the Future. Fifth International Conference on Adult Education, 14–18 July 1997. UNESCO.

Davis Smith, J. 1998. The 1997 National Survey of Volunteering. National Centre for Volunteering/University of East London.

Elsdon, K. 1995. Voluntary Organisations: Citizenship, Learning and Change. Leicester: NIACE.

Gee, J. 1990. Social Linguistics and Literacies: Ideology in Discourses. Second edition. London: Falmer Press.

Gee, J., Hull, G. and Lankshear, C. 1996. The New Work Order: Behind the Language of the New Capitalism. London: Allen & Unwin.

Goldstein, H. 1998. Models for Reality: New Approaches to the Understanding of Educational Processes. Professorial Lecture, University of London, Institute of Education.

Graff, H. J. 1979. The Literacy Myth: Literacy and Social Structure in the 19th Century City. New York: Academic Press.

Graff, H. 1996. The persisting power and costs of the literacy myth. Literacy Across the Curriculum 12(2): 4–5. Montréal: Centre for Literacy.

Hamilton, M. and Stasinopoulos, M. 1987. Literacy, Numeracy and Adults: Evidence from the National Child Development Study. London: Adult Literacy and Basic Skills Unit.

Jones, P. W. 1990. UNESCO and the politics of global literacy. Comparative Education Review 34(1): 41–60.

Jones, S. 1997. Ending the myth of the ‘literacy myth’. Literacy Across the Curriculum 12(4): 10–17. Montréal: Centre for Literacy.

Kalton, G., Lyberg, L. and Rempp, J.-M. 1998. Review of methodology. In: Murray, S., Kirsch, I. and Jenkins, L., eds., Adult Literacy in OECD Countries: Technical Report on the First International Adult Literacy Survey. Appendix A.

Levine, K. 1998. Definitional and Methodological Problems in the Cross-National Measurement of Adult Literacy: The Case of the IALS. Written Language and Literacy 1(1): 41–62.

Murray, S., Kirsch, I. and Jenkins, L. 1998. Adult Literacy in OECD Countries: Technical Report on the First International Adult Literacy Survey. Washington: US Dept of Education, National Center for Education Statistics (NCES 98-053).

OECD. 1997. Literacy Skills for the Knowledge Society. Paris: OECD.

OECD/CERI. 1992. Adult Illiteracy and Economic Performance. Paris: OECD.

———. 1997. Education Policy Analysis 1997. Paris: OECD.


OECD/Statistics Canada. 1995. Literacy, Economy and Society: Results of the First International Adult Literacy Survey. Paris: OECD.

Ontario Ministry of Education and Training. 1998. Working with Learning Outcomes. Ontario Learning Outcome framework.

Percy, K. A. 1988. Learning in Voluntary Organisations. Leicester: Unit for the Development of Continuing Education, Leicester University.

Scribner, S. and Cole, M. 1981. The Psychology of Literacy. Cambridge, MA: Harvard University Press.

Street, B., ed. 1993. Cross-cultural Approaches to Literacy. Cambridge: Cambridge University Press.

———. 1995. Social Literacies: Critical Approaches to Literacy in Development, Ethnography and Education. London: Longman.

———. 1996. Literacy, economy and society. Literacy Across the Curriculum 12(3): 8–15. Montréal: Centre for Literacy.

The authors

Mary Hamilton works in the Department of Educational Research, Lancaster University. She specialises in adult basic education, informal adult learning processes, policy issues in adult basic education, and media representations of educational issues. She is also a founder member of the Literacy Research Group at Lancaster.

Contact address: Prof. Mary Hamilton, Department of Educational Research, Lancaster University, Lancaster LA1 4YL, UK.

David Barton teaches in the Department of Linguistics at Lancaster University and is interested in everyday uses of reading and writing. He is the author of Literacy: An Introduction to the Ecology of Written Language (Blackwell 1994). He is a founder member of the Literacy Research Group at Lancaster University, UK, and has collaborated with Mary Hamilton on many publications about literacy, including Worlds of Literacy (Multilingual Matters 1994); Local Literacies: Reading and Writing in One Community (Routledge 1998); and Situated Literacies (Routledge 2000).

Contact address: Prof. David Barton, Department of Linguistics and Modern English Language, Lancaster University, Lancaster LA1 4YT, UK.
