
LITERACY AS NUMBERS Researching the Politics and Practices of International Literacy Assessment

Edited by Mary Hamilton, Bryan Maddox and Camilla Addey


University Printing House, Cambridge CB2 8BS, United Kingdom

Cambridge University Press is part of the University of Cambridge.

It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

Information on this title: education.cambridge.org/

© Cambridge University Press 2015

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2015

Printed in the United Kingdom by Printondemand-worldwide, Peterborough

A catalogue record for this publication is available from the British Library

ISBN 978-1-107-52517-7 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate. Information regarding prices, travel timetables, and other factual information given in this work is correct at the time of first printing but Cambridge University Press does not guarantee the accuracy of such information thereafter.


CONTENTS

Notes on contributors v
Series Editors’ Preface ix
Foreword Gita Steiner-Khamsi xi
Introduction Mary Hamilton, Bryan Maddox, Camilla Addey xiii

PART ONE DEFINITIONS AND CONCEPTUALISATIONS

Chapter 1 Assembling a Sociology of Numbers Radhika Gorur 1

Chapter 2 New Literacisation, Curricular Isomorphism and the OECD’s PISA Sam Sellar and Bob Lingard 17

Chapter 3 Transnational Education Policy-making: International Assessments and the Formation of a New Institutional Order Sotiria Grek 35

Chapter 4 Interpreting International Surveys of Adult Skills: Methodological and Policy-related Issues Jeff Evans 53

PART TWO PROCESSES, EFFECTS AND PRACTICES

Chapter 5 Disentangling Policy Intentions, Educational Practice and the Discourse of Quantification: Accounting for the Policy of “Payment by Results” in Nineteenth-Century England Gemma Moss 75

Chapter 6 Adding New Numbers to the Literacy Narrative: Using PIAAC Data to Focus on Literacy Practices JD Carpentieri 93

Chapter 7 How Feasible is it to Develop a Culturally Sensitive Large-scale, Standardised Assessment of Literacy Skills? César Guadalupe 111


Chapter 8 Inside the Assessment Machine: The Life and Times of a Test Item Bryan Maddox 129

Chapter 9 Participating in International Literacy Assessments in Lao PDR and Mongolia: A Global Ritual of Belonging Camilla Addey 147

Chapter 10 Towards a Global Model in Education? International Student Literacy Assessments and their Impact on Policies and Institutions Tonia Bieber, Kerstin Martens, Dennis Niemann and Janna Teltemann 165

Chapter 11 From an International Adult Literacy Assessment to the Classroom: How Test Development Methods are Transposed into Curriculum Christine Pinsent-Johnson 187

Chapter 12 Counting ‘What you Want Them to Want’: Psychometrics and Social Policy in Ontario Tannis Atkinson 207


NOTES ON CONTRIBUTORS

Camilla Addey is a researcher in international educational assessments and global educational policy. She recently completed her PhD, which focused on the rationales for participation in international literacy assessments in Mongolia and Laos. Her current research enquires into PISA for Development from a governance perspective in lower- and middle-income countries. Her research has established International Assessment Studies as a field of enquiry. Dr Addey previously worked at UNESCO in the Literacy and Non-Formal Education section and taught English as a foreign language at the British Council in Rome and Paris. She is one of the directors of the Laboratory of International Assessment Studies. She is author of Readers and Non-Readers.

Tannis Atkinson completed her PhD at OISE, University of Toronto, after several decades’ experience in adult literacy including as editor of the journal Literacies. Currently a Postdoctoral Fellow in the Department of Sociology and Legal Studies at the University of Waterloo, her research focuses on the governing effects of literacy statistics in Canada, particularly how they are mobilised in social policies, and on how educators both comply with and resist policy imperatives. She is currently working on a book tentatively titled Obliged to Read: Literacy, Coercion and Advanced Liberalism in Canada.

Tonia Bieber is Postdoctoral Fellow in the Kolleg-Forschergruppe project ‘The Transformative Power of Europe’ at the Freie Universität Berlin. Previously she was a Senior Researcher in the research project ‘Internationalization of Education Policy’ within the TranState Research Center 597 ‘Transformations of the State’ at the University of Bremen. Specialising in international relations and comparative public policy, she has published widely in the field of European integration and internationalisation processes in social policy, especially education policy, in Western democracies. In particular, she is interested in policy diffusion and convergence research, as well as empirical research methods in this field. Tonia holds a PhD in Political Sciences from the University of Bremen and Jacobs University Bremen.


JD Carpentieri is a Lecturer in Adult Education at the Institute of Education, London, where he conducts research for the NRDC (National Research and Development Centre for Adult Literacy and Numeracy) and teaches a Masters module on international adult literacy policy. In addition to research and lecturing, he has contributed to a number of policy forums. This includes serving as Rapporteur for the European Union High Level Group of Experts on Literacy, a pan-European expert group charged with investigating and improving literacy policy.

Jeff Evans is Emeritus Reader in Adults’ Mathematical Learning in the School of Science and Technology, Middlesex University, in London. His research interests include adult numeracy; mathematical thinking and emotion; images of mathematics in popular culture; and public understanding of statistics. He has a lifelong commitment to numeracy, mathematics and statistics teaching to adults, with a focus on the range of methodologies used in social and educational research. From 2008 to 2013, he was a member of the Numeracy Expert Group, which was responsible for the design of the items used to measure adult numeracy in PIAAC. His recent activity includes talks, webinars and articles aiming to consider the relation of international surveys of adults to alternative policy discourses, and to facilitate access to, and critical engagement with, the results by researchers, practitioners and policy-makers.

Radhika Gorur is a Senior Research Fellow at the Victoria Institute, Victoria University, Australia. Her research has sought, broadly, to understand how some ideas and practices cohere, stabilise, gain momentum and make their way in the world. Her current research focuses on the ways in which numbers – particularly international comparative data – are being produced, validated, contested and used in contemporary education policy. Her research is driven by an impulse to engage in productive critique, going beyond deconstruction to create arenas in which diverse policy actors can engage in seeking ways to move forward. She uses assemblage and other concepts from Science and Technology Studies and Actor-Network Theory as the main analytical and methodological approaches in her research. She is one of the directors of the Laboratory of International Assessment Studies.

Sotiria Grek is a Lecturer in Social Policy at the School of Social and Political Science, University of Edinburgh. She works in the area of Europeanisation of education policy and governance with a particular focus on transnational policy learning, knowledge and governance. She has recently co-authored (with Martin Lawn) Europeanising Education: Governing a New Policy Space (Symposium, 2012) and co-edited (with Joakim Lindgren) Governing by Inspection (Routledge, 2015). She is currently writing a monograph on ‘Educating Europe: EU Government, Knowledge and Legitimation’ to be published by Routledge in 2015.

César Guadalupe completed his Education Doctorate and MA in Social and Political Thought at Sussex University. Dr Guadalupe is a lecturer and researcher at the Universidad del Pacífico (Peru) and non-Resident Fellow at the Brookings Institution (USA). Between 1992 and 2012 he worked on establishing connections between policy questions and research design, and between research results and decision-making processes in civil service institutions both in his home country (Peru) and at UNESCO. From 2007 to 2012 he led UNESCO’s Literacy Assessment and Monitoring Programme (LAMP), conducting field tests in eight countries (ten languages) as well as four full implementations of the programme.

Mary Hamilton is Professor of Adult Learning and Literacy in the Department of Educational Research at Lancaster University. She is Associate Director of the Lancaster Literacy Research Centre and a founder member of the Research and Practice in Adult Literacy group. Her current research is in literacy policy and governance, socio-material theory, academic literacies, digital technologies and change. Her most recent book is Literacy and the Politics of Representation published by Routledge in 2012. Her co-authored publications include Local Literacies (with David Barton); More Powerful Literacies (with Lynn Tett and Jim Crowther); and Changing Faces of Adult Literacy, Language and Numeracy: A Critical History of Policy and Practice (with Yvonne Hillier). She is one of the directors of the Laboratory of International Assessment Studies.

Bob Lingard is a Professorial Research Fellow in the School of Education at the University of Queensland in Brisbane, Australia, and is Fellow of the Academy of Social Sciences in Australia. He researches sociology of education and education policy. He is an editor of the journal Discourse: Studies in the Cultural Politics of Education, and co-editor, with Greg Dimitriadis, of the Routledge New York book series Key Ideas in Education. His most recent book (2014) is Politics, Policies and Pedagogies in Education (Routledge).

Bryan Maddox is a Senior Lecturer in Education and International Development at the University of East Anglia. He specialises in ethnographic and mixed methods research on globalised literacy assessments and the literacy practices of non-schooled adults. He has conducted ethnographic research on literacy assessment in Nepal, Bangladesh, Mongolia and Slovenia. With Esposito and Kebede he combined methods from ethnography and economics to develop new measures of functional adult literacy assessment and the assessment of literacy values. His ethnographies of assessment provide accounts of testing situations, and how standardised tests travel and are received across diverse cultural settings. His recent research collaborations ‘inside the assessment machine’ combine ethnographic accounts of assessment practice with large-scale psychometric data. He is one of the directors of the Laboratory of International Assessment Studies.

Kerstin Martens is Associate Professor of International Relations at the University of Bremen, Germany. Her research interests include theories of international relations, international organisations, global governance, and global public policy, in particular education and social policy. She heads the research project ‘Internationalisation of Education Policy’ located at the University of Bremen. She is co-editor of several books, including Internationalization of Education Policy? (Palgrave Macmillan, 2014) and Education in Political Science: Discovering a Neglected Field (Routledge, 2009). She holds a PhD in Social and Political Sciences from the European University Institute, Florence, Italy.

Gemma Moss is Professor of Education at the University of Bristol. Her main research interests include literacy policy; gender and literacy; the shifting relationships between policy-makers, practitioners and stakeholders that are re-shaping the literacy curriculum; and the use of research evidence to support policy and practice. She specialises in the use of qualitative methods in policy evaluation, and innovative mixed methods research designs. She has recently co-edited a Special Issue of the journal Comparative Education with Harvey Goldstein. Other publications include ‘Policy and the Search for Explanations for the Gender Gap in Literacy Attainment’ and Literacy and Gender: Researching Texts, Contexts and Readers.

Dennis Niemann is Research Fellow in the project ‘Internationalization of Education Policy’ within the TranState Research Center at the University of Bremen. His research interests include the internationalisation of education policy and the role of international organisations in global governance. In his recently completed PhD thesis, he analysed the soft governance influence of international organisations on domestic policy-making using the example of the OECD’s PISA study and its impact on German education policy. He has published on recent internationalisation processes in education policy, with a special focus on secondary and higher education reforms in Germany.

Christine Pinsent-Johnson recently completed her PhD at the Faculty of Education of the University of Ottawa. She carried out a comprehensive analysis of the curricular and policy changes instituted at both the federal and provincial levels using the OECD’s international adult literacy assessment. She also has approximately two decades of experience working in adult literacy programmes in Ontario, Canada. She is currently leading a study to further examine assessment practices in Ontario adult literacy programmes, which have been shaped by OECD testing methods, in order to build on initial findings from her doctoral research that indicate there are unevenly distributed impacts on learners, educators and access to meaningful and relevant educational opportunities.

Sam Sellar is a Postdoctoral Research Fellow in the School of Education at the University of Queensland. Dr Sellar is currently working on research projects investigating the measurement of subjective well-being and non-cognitive skills in large-scale assessments, the development of new accountabilities in schooling and the aspirations of young people. Sam is Associate Editor of Critical Studies in Education and Discourse: Studies in the Cultural Politics of Education.

Janna Teltemann is a Senior Researcher in the project ‘Internationalization of Education Policy’ within the TranState Research Center at the University of Bremen. She holds a PhD in Sociology from the University of Bremen. Her research interests are quantitative methods, migration, integration and education. In her PhD project she analysed the impact of institutions on the educational achievement of immigrants with data from the OECD PISA Study. She has published several papers on determinants of educational inequality, results and methodological implications of the PISA Study as well as on OECD activities in the field of migration.


SERIES EDITORS’ PREFACE

The manifold dimensions of the field of teacher education are increasingly attracting the attention of researchers, educators, classroom practitioners and policymakers, while awareness has also emerged of the blurred boundaries between these categories of stakeholders in the discipline. One notable feature of contemporary theory, research and practice in this field is consensus on the value of exploring the diversity of international experience for understanding the dynamics of educational development and the desired outcomes of teaching and learning. A second salient feature has been the view that theory and policy development in this field need to be evidence-driven and attentive to diversity of experience. Our aim in this series is to give space to in-depth examination and critical discussion of educational development in context with a particular focus on the role of the teacher and of teacher education. While significant, disparate studies have appeared in relation to specific areas of enquiry and activity, the Cambridge Education Research Series provides a platform for contributing to international debate by publishing within one overarching series monographs and edited collections by leading and emerging authors tackling innovative thinking, practice and research in education.

The series consists of three strands of publication representing three fundamental perspectives. The Teacher Education strand focuses on a range of issues and contexts and provides a re-examination of aspects of national and international teacher education systems or analysis of contextual examples of innovative practice in initial and continuing teacher education programmes in different national settings. The International Education Reform strand examines the global and country-specific moves to reform education and particularly teacher development, which is now widely acknowledged as central to educational systems development. Books published in the Language Education strand address the multilingual context of education in different national and international settings, critically examining among other phenomena the first, second and foreign language ambitions of different national settings and innovative classroom pedagogies and language teacher education approaches that take account of linguistic diversity.

Literacy as Numbers is a timely critical analysis of the current dominant political reliance on international comparative measures of literacy based largely on quantifiable evidence in the context of school and adult education. As the apparent paradox in the title suggests, the contributors to this volume focus on the prevailing ideology of literacy assessment and provide a critical but balanced evaluation of this perspective while challenging readers, educationalists and policy-makers to reconsider the educational and socio-political assumptions of global measurements of literacy education. As such the book fits very well within the framework of this series.

Michael Evans and Colleen McLaughlin


FOREWORD

Gita Steiner-Khamsi (Teachers College, Columbia University, New York)

As the use of numbers has become ubiquitous in the educational sector, the study of knowledge-based regulation, governance by numbers, or evidence-based policy planning has boomed over the past two decades. Many scholars in the social sciences, comparative education and in policy studies have scratched at the façade of precision, rationality or objectivity associated with these terms and uncovered a process that is steeped in political agendas and financial gains. They have analysed how indicators and statistics are produced or manufactured as a new policy tool to mobilise social agreement and political support as well as financial resources. Before our eyes, international surveys from the 1980s and 1990s have taken on, in the form of PISA, TIMSS, IALS, PIAAC and other international surveys, a monumental role in agenda-setting and policy formulation. Nowadays the demand for such surveys is so great that they are administered in short sequence, recurrently, and in an ever-increasing number of subjects, grade levels and countries – in fact so much so that we are starting to see critique and resistance emerging through scholarly publications and the media.

This book moves beyond describing, analysing and criticising the success story of international measurement and surveys. The focus of several chapters is on measurement as a performative act, that is, on a new mode of regulation that produces new truths, new ways of seeing and new realities. It is an all-encompassing mode of regulation that permeates every domain in the education system, from pedagogy and curriculum to governance and education finance. As examined by Michel Foucault, the 19th-century obsession with measurement was a project of the modern nation-state and an attempt to count, scrutinise and ‘normalise’ its citizens. Analogously, the 21st-century preoccupation with measurement in education is a project of global governance, constituted by internationally oriented governments and multilateral organisations that use the ‘semantics of globalisation’ to generate reform pressure on national educational systems. Without any doubt, the declared purpose of international surveys is lesson-drawing, emulation or policy borrowing from league leaders. Whether selective import really occurs, why it does or does not happen, under what circumstances, and how the ‘traveling reform’ is translated in the new context are different matters altogether and in fact are objects of intense intellectual investigation among policy borrowing and lending researchers. Nevertheless, there is pressure on national policy actors to borrow ‘best practices’, ‘world class education’ or ‘international standards’; all broadly defined and elusive terms with a tendency for inflationary usage.

Every standardisation implies de-contextualisation or, in terms of educational systems, a process of de-territorialisation, de-nationalisation and globalisation. Reform packages may be catapulted from one corner of the world to the other given the comparability of systems construed through international indicators and tests. Unsurprisingly, then, international measurements, surveys and standards have become good (for) business and for international organisations because the same curriculum, textbook, student assessment, teacher education module may be sold (global education industry) or disseminated (international organisations), respectively, to educational systems as varied as Qatar, Mongolia and Indonesia. Among other factors, it is this great interest in the homogenisation or standardisation of education that kicked off the perpetuum mobile of measurement and keeps the test machinery in motion, enabling OECD and IEA-type international surveys to expand into new areas and territories as evidenced most recently in the creation of ‘PISA for Development’.

The co-editors of this timely volume have gathered a group of noted social researchers, policy analysts, psychometricians, statisticians and comparativists to reflect on this numerical turn in education. They bring to bear a powerful array of theories and empirical data to examine in depth how international assessment practices are reconfiguring our knowledge of literacy in policy and practice around the world.

Gita Steiner-Khamsi

(Teachers College, Columbia University, New York)


INTRODUCTION

Mary Hamilton (Lancaster University), Bryan Maddox (University of East Anglia), Camilla Addey (Laboratory of International Assessment Studies)

The contemporary scale of international assessments of literacy makes them a rich field for scholarly enquiry. Transnational organisations, testing agencies, and regional and national governments invest heavily to produce internationally comparable statistics on literacy through standardised assessments such as the Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA) and the Programme for the International Assessment of Adult Competencies (PIAAC). The scope of these assessments is rapidly widening. While North American and European institutions developed and promoted these measurement programmes as policy-informing instruments for higher income countries, they increasingly incorporate low- and middle-income countries in their frames of reference and comparison (see appendix for a brief historical overview).

The growth of international assessments is not driven by international organisations and high-income countries alone. These networks of comparative measurement involve processes of globalised policy-making, policy borrowing and comparison, whose dynamics and implications are only just beginning to be appreciated and understood (Steiner-Khamsi 2003; Olssen et al. 2004; Zajda 2005; Lawn 2013). It has been argued that the growth of international literacy assessments is a global response to neoliberal demands for ‘a measure of fluctuating human capital stocks’ (Sellar and Lingard 2013, 195), a way to allocate dominant values and to stimulate global performance competitiveness (Rizvi and Lingard 2009). The findings from international surveys have also been described as a way of ensuring mutual accountability and democratic transparency, despite the patently non-transparent nature of many of the programmes themselves (Nóvoa and Yariv-Mashal 2003).


Acts of encoding literacy as internationally comparable numbers thus raise profound questions about the ambitions, power and rationality of large-scale assessment regimes, their institutional and technical characteristics, their role in ‘governance by data’ (Fenwick et al. 2014), and associated questions about the validity of cross-cultural comparisons, institutional accountability and transparency. The products of large-scale assessment programmes operate as particularly influential artefacts, in which numbers are mobilised to make public impacts in the mass media and to substantiate policy agendas. Considerable work and resources, much of it invisible to the public, go into the production of these numbers and sustaining their credibility and standing in the public imagination. Enquiring into these invisible practices leads to questions of assessment methodology and the validity of cross-cultural comparison, which are important emerging areas of enquiry dealt with by contributors to this book.

This volume considers and examines in detail the commensurative practices (Espeland and Stevens 2008) that transform diverse literacy practices and competencies into measurable facts, and explores their policy implications. Numerical measures of literacy are not essential for describing outcomes for individuals and populations. The emancipatory and moral discourses of literacy invoking human rights and religious principles that motivate educators in many countries may rely on different yardsticks of success (Hamilton 2012, 4–5), but the international numbers are compelling, especially given the social power that is currently mobilised behind developing and promoting them.

The numbers generated through tests and surveys rest upon assumed models of literacy. The model embedded in the OECD’s programme is an information-processing theory of functional literacy (Murray et al. 1998; OECD 2012). The model encompasses a broad set of skills (mainly assessed through reading comprehension) that are used to define threshold levels of competence. This view contrasts with an alternative relational view of literacy as part of everyday situated practices (Barton 2007; Brandt 2009; Cope and Kalantzis 2000; Street and Lefstein 2007; Reder 2009). This perspective on literacy offers a strong challenge to literacy survey measurement since it assumes it is not possible to lift individual performance in literacy from the context and social relations that constitute it without fundamentally changing its meaning.

One of the powerful aspects of literacy as numbers is that the evidence produced through quantification seems to offer certainty and closure on issues of what literacy is, and who it is for. However, debates about the nature of literacy and how to account for the diversity of everyday practices are far from resolved, as can be seen from the different assumptions outlined above and discussed in this volume. In fact, these debates are more fascinating and challenging than ever before. The meanings and practices of contemporary literacy are woven into increasingly complex and rapidly moving mixtures of languages and cultures, named by some as ‘superdiversity’ (Blommaert and Rampton 2011). Literacies are migrating into the new ‘virtual’ spaces created by digital technologies (Barton and Lee 2013; Selfe and Hawisher 2004). In these processes, the nature of literacy is being transformed in unpredicted and as yet unclassified ways. Nevertheless, the diverse character of literacy practices is transformed by assessment experts into a rational set of competencies set out in assessment frameworks, enabling teachers to focus on the technical business of addressing the ‘skills gap’ of the millions of adults and students deemed to be underperforming.

The contributors to this book start from an engagement with this complexity to examine a set of key themes covering the politics and practices of literacy measurement. All are concerned with how to respond to and understand the rise of literacy as numbers and its effects on policy and practice. One response is to view such measurement regimes as a growth of ignorance, since test construction inevitably reduces diverse socially and institutionally situated literacy practices, languages and scripts into a set of comparable numbers. Certain aspects of literacy practices (typically vernacular reading and all forms of writing) are excluded from standardised tests, simply because they are difficult to test and compare. Similarly, some uses of literacy are excluded on the grounds of avoiding test bias and undue controversy (see Educational Testing Service 2009). Literacy assessment regimes have also prompted robust critique on the basis of their ideological agendas, for example in the way that assessment frameworks, test items and survey reports support neoliberal competitiveness agendas (Darville 1999). Similar criticism might be made about how ‘data shock’ acts as a potent resource to make claims on resources and to legitimise radical policy change (see, for example, Waldow 2009).

In editing this book, however, it has become clear to us that a rejectionist critique offers an inadequate response to a complex and influential sociological phenomenon (see Gorur, this volume). Large-scale literacy assessment regimes demand to be theorised, researched and understood. We have to understand the ontologies of assessment regimes, to develop intimate knowledge of their technical and institutional characteristics, and how they operate in processes of national and transnational governance. Such knowledge is clearly required for robust academic encounters with assessment regimes, and to support informed policy and practitioner engagement.

The chapters in this collection benefit from having been presented and discussed at a thematically designed international symposium, Literacy as Numbers, held in London in June 2013, which brought together leading academics in this rapidly developing field, along with representatives from key policy and literacy assessment institutions, to reflect on large-scale literacy assessment regimes and their methods as a significant topic for academic enquiry. The chapters in this collection use a variety of ethnographic, documentary and historical research methods to gain insights into contemporary practice and to demonstrate the complexity of local interpretations and responses. They are theoretically rich, drawing on critical policy studies and theories of literacy as social practice. Themes of globalisation and Europeanisation in educational policy are explored using world institutional theories (Meyer 2010). Socio-material theories (Latour 2005; Fenwick et al. 2011; Denis and Pontille 2013) are particularly useful for the core purpose of the book, enabling authors to trace the networks, flows of ideas, information and resources, to follow the enrolment or assembly of national agents in international alliances and delve into the intricate and often invisible processes of translation whereby ‘matters of concern’ are transformed into ‘matters of fact’ (Latour 2004). Dorothy Smith’s institutional ethnography (Smith 2005) is used to illuminate the ways in which international assessments come to act as regulatory frames for literacy practices.

A distinctive feature of this book is that many chapters involve investigations inside assessment regimes, whether through archival research, interviews with policy makers or through ethnographic observation. This extends the reach of critical policy discussion to include themes that were previously considered off-limits as the domain of technical specialists, such as the development and use of test items, the development of protocols and the politics of test adaptation in cross-cultural assessment, and the work of policy actors. The chapters in this collection also discuss the implications of the statistical procedures involved in large-scale assessments, notably the demands and character of Item Response Theory (IRT).

The book marks the emergence of ‘International Assessment Studies’ as a field of enquiry (Addey 2014) at an historical moment when assessment regimes are reaching out in ambitious acts of big data generation and globalisation, integrating and presenting data on an international scale and from across the life-course. The vitality of this emerging field can be seen from the variety of publications, research projects, fellowships, networks, seminars and conferences gathering around it. International Assessment Studies incorporate themes such as the sociology of measurement, the politics of assessment regimes, policy networks, actors and impacts in order to analyse the conceptual and methodological challenges and affordances of large-scale psychometric data. They also consider the implications of globalisation as assessment regimes attempt the difficult (some would say, impossible) task of recognising and integrating diverse cultures, contexts and relational literacy practices into international assessment (see Sellar and Lingard, this volume).

These challenges are particularly pertinent to the UNESCO ‘Literacy Assessment and Monitoring Programme’ (LAMP)¹ (Guadalupe and Cardoso 2011) and more recently to the OECD’s initiative on ‘PISA for Development’ (P4D), which reach out to greater diversity (Bloem 2013). These frontier projects illustrate the complexity of international assessments involving multiple foci and levels of analysis – from questions of international goals, networks and relations, to the intimate and technical details of test-item development, adaptation and performance.

LITERACY ASSESSMENT ACROSS THE LIFE-COURSE

Global policy discourses increasingly make connections between initial education, employment and adult skills. Agencies such as the OECD are concerned to bring the literacy practices and learning opportunities of adults into measurable relation with school-based knowledge, and the recent development of PIAAC realises this ambition. Hence this book approaches literacy across the life-course. Rather than taking a more limited perspective of either child or adult literacy, it covers surveys of literacy in childhood and youth as well as in adult life.

Adult literacy is presented in the book as a contested territory, which is currently being subjected to new forms of codification and institutionalisation, parallel to the ways in which children’s literacy has long been organised. This makes it a particularly important arena for exploring the processes whereby the diversity of everyday experiences and practices gives way to an ordered field of measurement (see Pinsent-Johnson, this volume). We can observe how the field of adult literacy is being re-positioned in the discourse of large-scale assessments, and trace the links that are established with themes such as employability, citizenship and opportunity. For many countries this is a subtle but important shift as lifelong learning is institutionally reframed within assessment discourse and practice (see Grek, this volume). Adults are re-entering policy discourse within the narrower frame of reference of human resource development and competitiveness.

The OECD PIAAC assessment exerts its technology to frame adult skills in relation to employability, much as the World Bank incorporates adults into ‘learning for all’ (World Bank Group 2011), but with a clear orientation to the labour market. Assessment practices are at the heart of this discursive framing and policy positioning. For adult literacy practitioners, this involves a significant shift in power relations and in the way that values and resources are distributed. In many contexts where standardised assessments are used, individual adults and their tutors are no longer at liberty to appraise their own abilities and make decisions on the content and goals of their literacy learning (principles of andragogy that have been cherished by many adult literacy programmes). This shift in the locus of control is described in this book as sobering for adult literacy advocates who have sought to obtain new resources for adult literacy programmes in response to the survey results (see Atkinson, this volume).

Several of the chapters offer a more optimistic perspective: namely, that for those who are able to access large-scale assessment and to understand its statistical methods, assessment data can provide evidence and insights that inform policy and support evidence-based claims on resources (see, for example, Evans, this volume). This includes analysing particular age cohorts, locating them in specific epochs of educational policy and provision, and drawing connections between the literacy practices of adults and their assessment performance (Carpentieri, this volume). As testing agencies make data sets available online, secondary data analysis allows for greater understanding of the comparative knowledge produced, and how this is mediated by the content of test items, statistical procedures and the decisions of testing agencies (e.g. decisions about sampling, weighting, and the setting of levels and thresholds). Contributors to the book highlight both the opportunities afforded by large-scale data, and the complex demands that it makes on educationalists, policy-makers, the media and civil society organisations (see Bieber et al., this volume).

REFRAMING LITERACY

A distinctive theme of this book is the way that researchers and practitioners – those who are creating and conducting assessments, interpreting them for policy and research purposes, or dealing with their uses and consequences in educational practice – grapple with the demands and validity of reframing literacy as numbers. The project of reframing literacy around globalised regimes of standardised literacy assessment is clearly still in progress. While the testing agencies, acting as ‘centres of calculation’ in the sense described by Latour and Woolgar (1979), are motivated to present assessment programmes and their methods as unproblematic and routinised procedures, it appears to us that their black boxes are not yet sealed. In these processes of innovation and expansion, debates are necessarily raised about the efficacy, merits and validity of transnational assessment programmes, about their new institutional architecture and their governance and accountability. The insider approach taken by contributors to this book reveals that such debates are happening at all levels of the production and use of the survey findings, though much of the discussion remains inaccessible to the public gaze.

The chapters in this book also approach literacy assessments and statistical knowledge as forms of technology with historical roots and entanglements (see Moss, this volume). The worlds of politics and statistics have always been closely connected (Hacking 1990; Porter 1996). From the 1800s the institutional framing of literacy as numbers (for example in the form of census data and marriage registers) took place around a dichotomy of literacy and illiteracy that was used to produce powerful discourses in the processes of governance (Graff 1979; Vincent 2014; Collins and Blot 2003). Those literacy surveys highlight the Janus-faced nature of many literacy statistics. Good data is a fundamental support to the administration and politics of liberal democracies, yet it also offers unprecedented powers to control and define – as in colonial administrations (Mitchell 2002). Literacy assessments have clearly supported repressive state technologies of governance, negative ideological representations and attempts to legitimise inequalities (see Lin and Martin 2005; Maddox 2007). As Street argues, the category of ‘illiteracy’ has long been associated with forms of prejudice and has supported ‘great divide’ theories of literacy and development (Street 1995). On the other hand, literacy statistics provide a resource for humanistic and social justice projects to identify and challenge distributional inequalities (e.g. Sen 2003), and this was the rationale used by UNESCO in the second half of the twentieth century to collect self-reported literacy statistics from governments across the world (Jones 1990).

The clear inadequacy of the dichotomous paradigm, and a dissatisfaction with the self-report measures associated with it, led to the rise in the 1990s of new psychometric programmes of literacy assessment such as those used in the National Adult Literacy Survey (NALS) in the United States and the International Adult Literacy Survey (IALS) – the methodological precursors to today’s large-scale assessment programmes (see appendix). These new approaches, developed by the US-based Educational Testing Service (ETS) and Statistics Canada, rejected dichotomous models of literacy and replaced them with the notion of a continuum of literacy abilities, externally defined by ‘objective’ measurement which combined advances in psychometrics, reading theory and large-scale assessment with household survey methodologies (Thorn 2009, 5).

The IALS offered telling insights into the perception of the potentials and challenges of large-scale psychometric approaches to literacy assessment, as it produced its own categories of innovators, champions and rejectionists. Critics argued that the IALS framed literacy in narrow terms around agendas of economic competitiveness, rather than those of human development or social justice (e.g. see Darville 1999; Hamilton and Barton 2000; Blum et al. 2001). However, IALS and its successor programmes gained international influence and momentum, as governments sought to use IALS data strategically (and with some success) in order to promote increased resources for adult literacy programmes and to innovate in nationally organised assessment programmes (Moser 1999; St Clair et al. 2010).

The present collection can be viewed as a new wave of responses, as researchers and practitioners develop new frameworks for critique and scholarly engagement, and examine the extent to which evolving assessment technologies can be used as resources to promote contrasting ideological projects and agendas. It contributes to a growing literature on the globalisation of assessment practice and its impacts on educational policy borrowing and convergence (e.g. Lingard and Ozga 2007; Jakobi and Martens 2010; Steiner-Khamsi and Waldow 2011; Lawn 2013). Two notable collections focus on PISA (Meyer and Benavot 2013; Pereyra et al. 2011) and other publications deal specifically with Europeanisation and its intersection with global agendas (see Lawn and Grek 2012; Dale and Robertson 2009). While publications to date address issues of governance and the role of data and standardising measures, this volume is the first to focus specifically on literacy and its central role in the global assessment project.

THE POWER OF NUMBERS

In 1991, Nikolas Rose put forward the concept of ‘policy as numbers’ to indicate the increased reliance on numbers. Numbers have come to be seen as the most objective and scientific form of information, with an ‘intrinsic force of persuasion’ (Pons and Van Zanten 2007, 112) within policy processes. Rose further argues that citizens become complicit in measuring themselves against others, developing what he terms ‘the calculating self’ (Rose 1998).

Since then, there has been extensive scholarly debate on the role of numbers in policy processes and global educational policy discourse. This trend has been discussed in terms of the rise of an audit society (Power 1999), ‘governance by ranking and rating’ (Lehmkuhl 2005), ‘governance by comparison’ (Martens 2007), ‘governance by numbers’ (Grek 2009) and ‘governance by data’ (Hamilton 2012). All these are ways of describing governments’ growing reliance on international statistical indicators to inform and frame ‘evidence-based’ literacy policy and to justify policy change (Rizvi and Lingard 2009). Fenwick et al. (2014) go further, to argue that these number-based technologies and the comparative knowledge they produce have not only changed forms of educational governance but have themselves become a process of governing. This highlights a shift in scholarly attention to an interest in how large-scale assessments are themselves governed and held accountable.

How can we understand the particular power that numbers hold over the public imagination? What is it about numbers that makes this form of symbolic representation so useful to projects of social ordering in the policy sphere? The work of social semioticians such as Lemke (1995), Van Leeuwen (2008) and O’Halloran (2008) can illuminate the processes involved (see discussion in Hamilton 2012, 33–40).

Firstly, numbers help to create the categories and classification systems that fix and align points within the flux of social activity (Bowker and Star 2000). They set clear though often arbitrary and spuriously accurate boundaries around our knowledge and experience of reading and writing. These categories can be manipulated and ordered into levels, and they generate new languages and shorthands for talking about literacy (such as ‘IALS Level 3’ or ‘Document Literacy’). The appearance of accuracy, and the way in which the arbitrary and fuzzy categories that numbers rest upon become naturalised in discourse, are powerful assets for policy and research.

Secondly, numbers enable us to deal with things and people as decontextualised instances of classes and groups rather than as embodied individuals, thus enabling them to be counted and measured for audit purposes. Discourse analysts like Van Leeuwen understand such mechanisms of depersonalisation to be very important in ordering social life (Van Leeuwen 2008, 47).

The creation of clear-cut classifications and mechanisms for allocating people to groups and categories in turn enables comparisons to be made more easily across incomparable spaces of time and place. People can be ordered in hierarchical levels, and causal relationships can be made between literacy and other quantifiable variables, such as income, age and educational qualifications. International networks and new technologies make such comparisons and relationships easier and quicker. Dense and succinct cross-referencing can be made between statements of relationship and their referents, which are often each arrived at through complex processes of definition and argument. These processes are encoded in the numbers but also, as O’Halloran puts it, these operations are black-boxed for ease of access (O’Halloran 2008).

Numbers are thus particularly useful for aligning national and international policy, for standardising qualifications in the global marketplace and, in the process, legitimising what come to be seen as purely technical facts and marginalising other ways of imagining literacy. Numbers facilitate a particular way of imagining literacy as a thing, a commodity or a resource that can be owned by an individual, exchanged and given a value in the educational marketplace. In turn, this view of literacy as a commodity appears to distance it from moral values and personal experience while embedding the values of the market within it, thus turning literacy into a key variable in a human resource model of educational progress.

AN EMERGING RESEARCH AGENDA

Since technologies of assessment and modes of representing literacy as numbers are now such pervasive features of the policy landscape, discussions about how such data is produced, for what purpose and under what systems of transparency and accountability must also become commonplace and integrated into the workings of democratic institutions. Such themes go beyond conventional concerns about the validity of assessments to speak to wider concerns with power, resource flows and the accountability of the state and of transnational organisations.

This collection is intended as a substantial contribution to such discussions and as a springboard for future research. The contributions we outline in the following sections illuminate the amount of (often invisible) work that goes on behind the scenes in producing the tests and the policies. They offer ample evidence that the challenges faced in this area are matters of value as well as technical matters, and that asymmetric power relations suffuse the field. They begin to reveal the politics of reception as international test results travel through national policies and practice, often taken up in ways that were never intended by the test-designers and translated into dilemmas for pedagogical practice. All these themes urgently demand further investigation.

OVERVIEW OF CHAPTERS

The book is organised into two main parts, one presenting general framings and definitions that underpin discussions about literacy as numbers, and the second exploring, through specific and insider examples, the processes of producing the international tests and how they impact on policy and practice.

The first section of the book, Part 1, ‘Definitions and Conceptualisations’, begins with Chapter 1, Assembling a Sociology of Numbers, by Radhika Gorur. Gorur argues that while existing sociological critiques of quantification are legitimate, they are, in Latour’s term, ‘running out of steam’. Merely showing that numbers are reductive, that they are political, that they are misused and that they are an instrument of governance is not enough. The chapter suggests that a move towards a sociology of numbers in education, inspired by the conceptual tools and methodologies of Science and Technology Studies (STS), may provide useful ways to develop new forms of critique. Such a sociology would involve moving from a representational to a performative idiom, and empirically tracing the socio-material histories and lives of numbers, focusing both on the processes by which they translate the world, and the ways in which they make their way through the world. Gorur invites a collective exploration of the forms that such a sociology might take, and of the scope it might offer for productive interference in policy processes.

In Chapter 2, New Literacisation, Curricular Isomorphism and the OECD’s PISA, Sam Sellar and Bob Lingard problematise the underlying model of literacy used in international surveys. They elaborate a concept of ‘literacisation’ and place this within an analysis of the growing significance of education within the OECD’s policy work and within global educational governance. They draw on the approaches of Common World Educational Culture and the Globally Structured Agenda for Education in order to understand the underpinning assumptions and effects of international tests such as PISA. The authors discuss definitions of literacy in relation to the theory of multiliteracies, arguing that the increasingly broad and amorphous definition of literacy used in PISA contributes to the intensification of the human capital framing of global educational policy. It does this by enabling international tests to encompass more diverse aspects of education and to strengthen an underlying assumption of curricular isomorphism in international contexts that is at odds with a rhetoric of situated relevance.

In Chapter 3, Transnational Education Policy-making: International Assessments and the Formation of a New Institutional Order, Sotiria Grek takes the development of the Programme for the International Assessment of Adult Competencies (PIAAC) as a specific case through which the extent of policy learning and teaching between two significant international players, the European Community and the OECD, may be scrutinised and evaluated. The chapter discusses the processes of problematisation and normalisation of the notions of ‘skills and competences’ by the two organisations and examines the ways both concepts have turned into a significant policy problem, in need of soft governance through new data, standards and new policy solutions. It focuses on the nature of the problem, its contours, characteristics and shifting qualities. It discusses the ways that policy problems can be transformed into public issues with all-pervasive and all-inclusive effects. Grek suggests that in order to understand the ‘problem’, one has to move behind it, since the very process of its creation already carries the seeds of its solution.

In Chapter 4, Interpreting International Surveys of Adult Skills: Methodological and Policy-related Issues, Jeff Evans takes a detailed and critical view of the production of numbers in international assessments. The chapter follows in the tradition established by Blum, Goldstein and Guérin-Pace (2001) with its review of the statistical methods used in PIAAC, and associated questions of validity. Evans uses a discussion of numeracy statistics in PIAAC to provide methodological insights into how PIAAC data should and should not be used, and what we can legitimately infer from the assessment results. His critical and optimistic perspective suggests that by struggling with design and methodological issues, researchers, practitioners and policy-makers are able to make more of survey data as powerful knowledge, and to develop new and empowering agendas in applied research.

Part 2 of the book, ‘Processes, Effects and Practices’, begins with an historical perspective on literacy and data by Gemma Moss. In Chapter 5, Disentangling Policy Intentions, Educational Practice and the Discourse of Quantification: Accounting for the Policy of ‘Payment by Results’ in Nineteenth-century England, Moss focuses on the collection and use of literacy attainment data in the 1860s, and the dilemmas and uncertainties that the data created in policy and in practice. The case raises questions about how we generalise about quantitative practice and the uses to which it can be put in education. By considering how numbers were mobilised, displayed and interpreted in policy and in practice in the 1860s, a more nuanced account of the role numerical data play in the formation of educational discourse is proposed.

In Chapter 6, Adding New Numbers to the Literacy Narrative: Using PIAAC Data to Focus on Literacy Practices, JD Carpentieri takes the context of recent adult literacy policy in the UK, specifically the English Skills for Life initiative, to show how quantitative data produced by international assessments is used and misused by politicians. He shows the limited success of narrowly defined and evaluated literacy programmes and argues that, to date, quantitative data about literacy proficiency has had only limited success in guiding interventions in adult literacy. He suggests that practice engagement theory (which considers the everyday uses of reading as well as skills) can offer a sounder basis for making policy decisions in this field. Background data on reading practices collected by PIAAC offer good empirical, quantitative evidence for the usefulness of this approach and could be the basis of a robust pragmatic argument acceptable to policy-makers.

Chapters 7, 8 and 9 focus on the UNESCO Literacy Assessment and Monitoring Programme (LAMP) to discuss issues of cross-cultural validity and the motivations of policy actors. César Guadalupe (Chapter 7) examines an increasingly central question in globalised programmes of assessment: How Feasible is it to Develop a Culturally Sensitive Large-scale, Standardised Assessment of Literacy Skills? His account describes his role as director of LAMP as the LAMP team struggled to reconcile the use of standardised assessment items, many of which are derived from a North American experience, with their use in countries that are geographically and culturally distant from such origins. As Guadalupe’s chapter illustrates, the institutional and managerial commitment to culturally sensitive assessment of literacy goes at least some way to resolving such tensions (e.g. protocols for test-item production and adaptation).

In Chapter 8, Inside the Assessment Machine: The Life and Times of a Test Item, Bryan Maddox continues the discussion of cross-cultural validity. The chapter uses Actor-Network Theory (ANT) and Science and Technology Studies (STS) to examine the role of test items in LAMP. Maddox argues that the study and critique of large-scale assessment programmes from the outside provides only partial insights into their character. His insider perspective on LAMP offers an intimate ethnographic account of the production of statistical knowledge and the challenges of cross-cultural testing. The chapter tells the story of the test item – from its initial development, to the production of statistical data. We are introduced to various characters along the way – the ‘framework’, Mongolian camels, Item Response Theory and statistical artefacts.

In Chapter 9, Participating in International Literacy Assessments in Lao PDR and Mongolia: A Global Ritual of Belonging, Camilla Addey explores what lies behind the growth of international assessments in lower- and middle-income countries, as part of international educational policy processes. Although there is consensus that the emergence of international assessments is in part a response to the recent shift towards policy as numbers and a shift towards international educational goals measured by international performance benchmarks, the research presented here suggests the rationales of lower- and middle-income countries’ participation are more complex. Using Actor-Network Theory to analyse a multiple qualitative case study of LAMP in Lao PDR and Mongolia, this chapter argues that countries benefit from both scandalising and glorifying (Steiner-Khamsi 2003) through international assessment numbers, and from ‘a global ritual of belonging’. The complex picture that emerges from the data contributes to our understanding of the politics of data reception generally as well as illuminating how international assessment data shape (or do not shape) policy processes and enter governance in lower- and middle-income countries.

In different ways, the final three chapters explore the effects of international assessments on policy and practice and the challenges these pose for those implementing educational reforms.

In Chapter 10, Towards a Global Model in Education? International Student Literacy Assessments and their Impact on Policies and Institutions, Tonia Bieber and her colleagues demonstrate the heterogeneity of school policy reforms across different countries in response to PISA results and OECD guidance. The authors juxtapose careful analyses of two case-study countries, Germany and Switzerland, with descriptive quantitative data across all participating countries to show the variety of changes in school policies between 2000 and 2012 relating to increased school autonomy, accountability and educational standards framed in terms of literacy.

In Chapter 11, From an International Adult Literacy Assessment to the Classroom: How Test Development Methods are Transposed into Pedagogy, Christine Pinsent-Johnson uses the theoretical tools of institutional ethnography to analyse the ways in which international testing methodologies act as regulatory frames for understanding adult literacy. She shows how the use of tests is being inappropriately extended from their original purpose as summary benchmark statements for adult populations to screening tests of individual capability and curriculum frameworks.

In Chapter 12, Counting ‘What you Want them to Want’: Psychometrics and Social Policy in Ontario, Tannis Atkinson uses governmentality analytics to examine the statistical indicators of adult literacy promoted by the OECD and first employed in the International Adult Literacy Survey (IALS). She argues that using numerical operations to dissect interactions with text and to describe the capacities of entire populations represents a new way of knowing and acting upon ‘adult literacy’. Drawing on empirical data from one jurisdiction in Canada – Ontario – she considers how constituting literacy as a labour market problem has individualised responsibility for structural changes in the economy and naturalised gendered and racialised inequalities. Atkinson outlines how policies based on rendering literacy calculable in this way are coercing and punishing those who are poor or unemployed; she also shares findings about how the emphasis on ‘employability’ is diminishing teaching and learning. The chapter’s conclusion urges researchers to attend to the dilemmas and dangers produced when literacy is offered as the simple, calculable solution to complex social and macroeconomic problems.

NOTES

1 On LAMP, see chapters by Addey, Guadalupe and Maddox in this volume.

REFERENCES

Addey, C. (2014). ‘Why do Countries Join International Literacy Assessments? An Actor-Network Theory Analysis with Case Studies from Lao PDR and Mongolia’. PhD thesis, School of Education and Lifelong Learning, University of East Anglia, Norwich.

Barton, D. (2007). Literacy: An Introduction to the Ecology of Written Language (second edition). Oxford: Wiley-Blackwell.

Barton, D. and Lee, C. (2013). Language Online: Investigating Digital Texts and Practices. London: Routledge.

Bloem, S. (2013). PISA in Low and Middle Income Countries. Paris: OECD Publishing.

Blommaert, J. and Rampton, B. (2011). ‘Language and Superdiversity’. Diversities 13 (2).

Blum, A., Goldstein, H. and Guérin-Pace, F. (2001). ‘International Adult Literacy Survey (IALS): An Analysis of International Comparisons of Adult Literacy’. Assessment in Education: Principles, Policy and Practice 8 (2), 225–46.

Bowker, G.C. and Star, S.L. (2000). Sorting Things Out: Classification and its Consequences. Cambridge, MA: MIT Press.

Brandt, D. (2009). Literacy and Learning: Reflections on Writing, Reading, and Society. Chichester: John Wiley & Sons.

Collins, J. and Blot, R.K. (2003). Literacy and Literacies: Texts, Power, and Identity. Cambridge University Press.

Cope, B. and Kalantzis, M., eds (2000). Multiliteracies: Literacy Learning and the Design of Social Futures. London: Routledge.

Dale, R. and Robertson, S. (2009). Globalisation and Europeanisation in Education. Oxford: Symposium Books.

Darville, R. (1999). ‘Knowledges of Adult Literacy: Surveying for Competitiveness’. International Journal of Educational Development 19 (4–5), 273–85.

Denis, J. and Pontille, D. (2013). ‘Material Ordering and the Care of Things’. International Journal of Urban and Regional Research 37 (3), 1035–52.

Educational Testing Service (2009). ‘ETS Guidelines for Fairness Review of Assessments’. Princeton, NJ: ETS.

Espeland, W.N. and Stevens, M.L. (2008). ‘A Sociology of Quantification’. European Journal of Sociology 49 (3), 401–36.

Esposito, L., Kebede, B. and Maddox, B. (2014). ‘The Value of Literacy Practices’. Compare: A Journal of Comparative and International Education (forthcoming), 1–18.

Fenwick, T., Edwards, R. and Sawchuk, P. (2011). Emerging Approaches to Educational Research: Tracing the Socio-Material. London: Routledge.

Fenwick, T., Mangez, E., & Ozga, J. (2014). Governing Knowledge. Comparison, Knowledge-Based Technologies and Expertise in the Regulation of Education. London: Routledge.

Graff, H.J. (1979). The Literacy Myth: Literacy and Social Structure in the Nineteenth-Century City. New York: Academic Press.

Grek, S. (2009). ‘Governing by Numbers: The PISA “Effect” in Europe’. Journal of Education Policy 24 (1), 23–37.

Guadalupe, C. and Cardoso, M. (2011). ‘Measuring the Continuum of Literacy Skills among Adults: Educational Testing and the LAMP Experience’. International Review of Education 57 (1–2), 199–217.

Hacking, I. (1990). The Taming of Chance (vol. 17). Cambridge University Press.

Hamilton, M. (2012). Literacy and the Politics of Representation. London: Routledge.

— (2013). ‘Imagining Literacy through Numbers in an Era of Globalised Social Statistics’. Knowledge and Numbers in Education. Seminar Series: Literacy and the Power of Numbers. London: Institute of Education.

Hamilton, M. and Barton, D. (2000). ‘The International Adult Literacy Survey: What Does it Really Measure?’ International Review of Education 46 (5), 377–89.

Jakobi, A.P. and Martens, K. (2010). ‘Introduction: The OECD as an Actor in International Politics’. In A.P. Jakobi and K. Martens, eds, Mechanisms of OECD Governance: International Incentives for National Policy-Making. Oxford University Press, 163–79.

Jones, P.W. (1990). ‘UNESCO and the Politics of Global Literacy’. Comparative Education Review 41–60.

Latour, B. (2004). ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern’. Critical Inquiry 30 (2), 225–48.

— (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.

Latour, B. and Woolgar, S. (1979). Laboratory Life: The Construction of Scientific Facts. London: Sage Library of Social Research.

Mary Hamilton, Bryan Maddox, Camilla Addey


Lawn, M., ed. (2013). The Rise of Data in Education Systems: Collection, Visualisation and Use. Oxford: Symposium Books.

Lawn, M. and Grek, S. (2012). Europeanizing Education: Governing a New Policy Space. Oxford: Symposium Books.

Lehmkuhl, D. (2005). ‘Governance by Rating and Ranking’. In Annual Meeting of the International Studies Association (ISA), Honolulu, 2–6.

Lemke, J. (1995). Textual Politics: Discourse and Social Dynamics. London: Taylor and Francis.

Lin, A.M. and Martin, P.W., eds (2005). Decolonisation, Globalisation: Language-in-Education Policy and Practice (3). Clevedon: Multilingual Matters.

Lingard, B. and Ozga, J., eds (2007). The RoutledgeFalmer Reader in Education Policy and Politics. London: Routledge.

Maddox, B. (2007). ‘Secular and Koranic Literacies in South Asia: From Colonisation to Contemporary Practice’. International Journal of Educational Development 27, 661–8.

Martens, K. (2007). ‘How to Become an Influential Actor: The “Comparative Turn” in OECD Education Policy’. In K. Martens, A. Rusconi and K. Leuze, New Arenas in Education Governance. New York: Palgrave Macmillan.

Meyer, H.D. and Benavot, A., eds (2013). PISA, Power, and Policy: The Emergence of Global Educational Governance. Oxford: Symposium Books.

Meyer, J.W. (2010). ‘World Society, Institutional Theories, and the Actor’. Annual Review of Sociology 36, 1–20.

Mitchell, T. (2002). Rule of Experts: Egypt, Techno-Politics, Modernity. Berkeley and Los Angeles: University of California Press.

Moser, S.C. (1999). Improving Literacy and Numeracy: A Fresh Start. London: DfEE Publications.

Murray, T.S., Kirsch, I.S. and Jenkins, L.B. (1998). Adult Literacy in OECD Countries: Technical Report on the First International Adult Literacy Survey. Washington, DC: US Government Printing Office. www.files.eric.ed.gov/fulltext/ED445117.pdf (retrieved December 2014).

Nóvoa, A. and Yariv-Mashal, T. (2003). ‘Comparative Research in Education: A Mode of Governance or a Historical Journey?’ Comparative Education 39 (4), 423–38.

OECD (2012). Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD.

— (2013). OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. Paris: OECD.

O’Halloran, K.L. (2008). Mathematical Discourse: Language, Symbolism and Visual Images. London and New York: Continuum.

Olssen, M., Codd, J.A. and O’Neill, A.M. (2004). Education Policy: Globalization, Citizenship and Democracy. London: Sage.

Pereyra, M.A., Kotthoff, H.G. and Cowen, R. (2011). PISA under Examination. Rotterdam: Sense Publishers.

Pons, X. and Van Zanten, A. (2007). ‘Knowledge Circulation, Regulation and Governance’. Knowledge and Policy in Education and Health Sectors. Literature Review, part 6 (June). Louvain: EU Research Project. www.knowandpol.eu/IMG/pdf/lr.tr.pons_vanzanten.eng.pdf (retrieved December 2014).

Porter, T.M. (1995). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton University Press.


Power, M. (1999). The Audit Society: Rituals of Verification. Oxford University Press.

Reder, S. (2009). ‘Scaling up and Moving in: Connecting Social Practices Views to Policies and Programs in Adult Education’. Literacy and Numeracy Studies 16 (2), 35–50.

Rizvi, F. and Lingard, B. (2009). ‘The OECD and Global Shifts in Education Policy’. International Handbook of Comparative Education. Springer Netherlands, 437–53.

Rose, N. (1991). ‘Governing by Numbers: Figuring out Democracy’. Accounting, Organizations and Society 16 (7), 673–92.

— (1998). Inventing Our Selves: Psychology, Power, and Personhood. Cambridge: Cambridge University Press.

Selfe, C.L. and Hawisher, G.E. (2004). Literate Lives in the Information Age: Narratives of Literacy from the United States. London: Routledge.

Sellar, S. and Lingard, B. (2013). ‘PISA and the Expanding Role of the OECD in Global Education Governance’. In H.D. Meyer and A. Benavot, eds, PISA, Power, and Policy: The Emergence of Global Educational Governance. Wallingford, UK: Symposium Books.

Sen, A. (2003). ‘Reflections on Literacy’. In C. Robinson, ed., Literacy as Freedom. Paris: UNESCO, 20–30.

Smith, D.E. (2005). Institutional Ethnography: A Sociology for People. Walnut Creek: AltaMira Press.

St Clair, R., Maclachlan, K. and Tett, L. (2010). Scottish Survey of Adult Literacies 2009: Research Findings. Edinburgh: Scottish Government.

Steiner-Khamsi, G. (2003). ‘The Politics of League Tables’. JSSE – Journal of Social Science Education 2 (1). www.jsse.org/index.php/jsse/article/view/470 (retrieved December 2014).

Steiner-Khamsi, G., and Waldow, F., eds (2011). Policy Borrowing and Lending in Education. London: Routledge.

Street, B. (1995). Social Literacies: Critical Approaches to Literacy in Development, Ethnography and Education. London: Longman.

Street, B. and Lefstein, A. (2007). Literacy: An Advanced Resource Book for Students. London: Routledge.

Thorn, W. (2009). ‘International Adult Literacy and Basic Skills Surveys in the OECD Region’. OECD Education Working Papers 26. Paris: OECD.

Van Leeuwen, T. (2008). Discourse and Practice: New Tools for Critical Discourse Analysis. Oxford: Oxford University Press.

Vincent, D. (2014). ‘The Invention of Counting: The Statistical Measurement of Literacy in Nineteenth-Century England’. Comparative Education 50 (3), 266–81.

Waldow, F. (2009). ‘What PISA Did and Did Not Do: Germany after the “PISA-shock”’. European Educational Research Journal 8 (3), 476–83.

World Bank Group (2011). Learning for All: Investing in People’s Knowledge and Skills to Promote Development. Washington, DC: World Bank, Education Strategy 2020.

Zajda, J.I., ed. (2005). International Handbook on Globalisation, Education and Policy Research: Global Pedagogies and Policies. London: Springer.


APPENDIX:

A Brief History of International Assessments of Literacy

This brief historical overview offers some details of the main international programmes of literacy assessment discussed in this book, along with some references for further reading. It is not intended to be a comprehensive summary of all international programmes, but a reference for those who are not familiar with the assessments discussed in the chapters, indicating their institutional origins and trajectories.

UNESCO was the first international organisation to develop comparative statistics of literacy across the world. From the 1950s onward, it collected self-reported figures from governments with the aim of promoting national development. In 1999, UNESCO’s Institute for Statistics was established in Montreal, Canada. UNESCO has always been aware of the limitations of figures and has sought more reliable measures (UNESCO/UIS 2008). Since the 1970s the United States and Canada have been engaged in developing their own national literacy assessments, and these have informed the subsequent development of international assessment programmes, including assessment methodologies and frameworks. Functional reading, writing, numeracy and problem-solving were measured by the US Office of Education with the Adult Performance Level Study (APL) in the mid-1970s (Northcutt 1974). The APL was followed by the Young Adult Literacy Survey (YALS) in 1986 (Kirsch and Jungeblut 1986) and the National Adult Literacy Survey (Kirsch et al. 1993).

A few influential agencies administer and lead developments in the international educational assessment programmes. The most influential of these organisations include the US-based Educational Testing Service (ETS), the Organization for Economic Cooperation and Development (OECD) and the International Association for the Evaluation of Educational Achievement (IEA). These organisations and their consortia partners have become global enterprises collaborating with an ever-increasing number of national governments.

The International Adult Literacy Survey (IALS) was developed by the OECD to compare prose, document and quantitative literacy skills of 16 to 65 year olds across 22 countries between 1994 and 1998 (OECD 1997). The OECD went on to develop the IALS further to include a wider range of skills and improved assessment methods, renaming it the Adult Literacy and Life Skills Survey (ALL). The ALL measured prose and document literacy, but also numeracy and problem-solving skills. It was implemented between 2002 and 2006 and carried out in 12 countries (Desjardins et al. 2005).

Shortly after the IALS, the OECD developed its well-known Programme for International Student Assessment, PISA. Since 1997, PISA has been carried out in an increasing number of countries every three years to measure the ability of 15 year olds to apply everyday skills and competences. It was designed and developed within a policy framework to meet the needs of policy actors and has become a widely used tool for national educational policy.

The OECD Programme for the International Assessment of Adult Competencies (PIAAC) evolved from IALS and ALL to align assessment of school-based populations with adult skills (Schleicher 2008). Known also as the Survey of Adult Skills, it includes measures of individual problem-solving skills in technology-rich environments (ICT and internet skills), a combination of prose and document literacy, reading components and numeracy. Using a background questionnaire, PIAAC relates test scores to socioeconomic background and educational qualifications, individual persistence and self-discipline, social and cultural engagement, political efficacy and social trust, as part of a more complex understanding of human capital and its potential. PIAAC was implemented in 24 countries (two-thirds of which were European) in its first round in 2008–12 (results published in 2013). A second round of PIAAC is already under way, with the OECD calling for middle- and low-income countries to join.

Other international assessments that play a central role in the global phenomenon of international assessments are the Trends in International Mathematics and Science Study (TIMSS; for fourth and eighth grade students) and the Progress in International Reading Literacy Study (PIRLS; a fourth grade literacy test), which are international curriculum-based tests for students in school. Both are administered by the IEA. In 2011, PIRLS was implemented by 49 countries and 9 benchmarking participants, whilst TIMSS was implemented by 63 countries and 14 benchmarking participants.

The European Union has become an increasingly active player in the OECD’s international assessment activities as well as developing its own regional assessments, such as the Common European Framework of Reference for Languages (EU 2006). The EU’s interest is to harmonise qualifications in order to shape a flexible workforce for international markets.

Alongside these international assessment programmes, UNESCO’s Institute for Statistics has since 2003 implemented the Literacy Assessment and Monitoring Programme (LAMP), which aims to be a context-sensitive international assessment programme, capable of highlighting the literacy continuum at lower levels. LAMP has now been followed by PISA for Development (P4D), a programme with a similar intention of increasing its data value for policy in lower- and middle-income countries, by enhancing its methods and instruments to be more context-relevant, whilst allowing countries to be benchmarked on the main PISA scale. The appearance of P4D (see Bloem 2013) can be understood within the framework of UNESCO’s goals for its global educational agenda, Education For All (UNESCO 2000), which have been criticised as being narrowly measured using educational access benchmarks.

Being a widely valued form of educational measurement, international assessments will enter the post-2015 education development framework, thus increasing their reach by penetrating into all national educational systems.

REFERENCES

Bloem, S. (2013). PISA in Low and Middle Income Countries. Paris: OECD.

Desjardins, R., Murray, T.S. and Tuijnman, A.C. (2005). Learning a Living: First Results of the Adult Literacy and Life Skills Survey. Paris: OECD.

European Union (2006). Key Competences for Lifelong Learning: A European Framework. Education and Culture DG: Lifelong Learning Programme. http://eurlex.europa.eu/LexUriServ/site/en/oj/2006/l_394/l_39420061230en00100018.pdf.

Kirsch, I., Jungeblut, A., Jenkins, L. and Kolstad, A. (1993). Adult Literacy in America: A First Look at the Findings of the National Adult Literacy Survey (NCES 93275). Washington, DC: US Department of Education.

Kirsch, I.S. and Jungeblut, A. (1986). Literacy: Profiles of America’s Young Adults. Final Report. Princeton, NJ: National Assessment of Educational Progress, Educational Testing Service.


Northcutt, N. (1974). ‘Functional Literacy for Adults: A Status Report of the Adult Performance Level Study’. Paper presented at the Annual Meeting of the International Reading Association (19th; New Orleans; 1–4 May 1974). Available as ERIC Document ED 091 672.

OECD (1997). Literacy Skills for the Knowledge Society. Paris: OECD.

Schleicher, A. (2008). ‘PIAAC: A New Strategy for Assessing Adult Competencies’. International Review of Education. DOI 10.1007/s11159-008-9105-0.

UNESCO (2000). The Dakar Framework for Action: Education for All: Meeting Our Collective Commitments: Including Six Regional Frameworks for Action. Paris: UNESCO.

UNESCO/UIS (2008). International Literacy Statistics: A Review of Concepts, Methodology and Current Data. Montreal: UNESCO/UIS. www.uis.unesco.org (retrieved October 2014).


PART ONE: Definitions and Conceptualisations


1 ASSEMBLING A SOCIOLOGY OF NUMBERS

Radhika Gorur (The Victoria Institute, Victoria University, Australia)

INTRODUCTION

That numbers are now ubiquitous in education is beyond dispute. Large-scale surveys of literacies now routinely produce rankings, benchmarks and standards that participate in significant ways in national policies. Not only are they routinely used and, many would argue, misused, they appear to have colonised our collective imaginations, reducing the scope of policy issues by rendering them into variations of positions on league tables and comparative accounts. Emanating from a few ‘centres of calculation’ such as UNESCO and the OECD, numbers have become the international language of literacy, with such programmes as Education for All and the Programme for International Student Assessment (PISA), which are almost global in scope.

The alarming ubiquity and influence of numbers as measures of literacy have been accompanied by strong critique of the use of numbers in policy. These arguments can be summarised as follows: (a) quantification cannot capture the complexity of education and is inherently reductive; (b) numbers are becoming hegemonic and marginalising other ways of knowing; (c) numbers are a technology of governmentality and should be resisted; and (d) numbers are being misused in policy and should be viewed with suspicion.

In this chapter, I shall elaborate some of these criticisms, and argue that whilst all of these criticisms are legitimate and useful, they are not sufficient. Taking a cue from scholars in Science and Technology Studies, I suggest that measurements of literacy are performative – i.e. world-making – processes (Gorur 2014a; Gorur 2014b; Scott 1998; Woolgar 1991). I explore the consequences of such an understanding for critique, and propose that a ‘sociology of measurement’ (Gorur 2014a; Gorur 2014b; Espeland and Stevens 2008; Woolgar 1991) is required to adequately and effectively critique the ever-expanding field of literacy measurements.

To craft my argument, I will focus on the Programme for International Student Assessment (PISA), the flagship international literacy comparison of the OECD. I will also draw upon a range of interviews with PISA and OECD officials and measurement experts that I have conducted across several related projects over the last six years. Whilst PISA makes an ideal ‘case study’, I suggest that the argument I develop here holds for critique of numbers in general, and applies not only to other literacy assessments, but more widely to the current translations of the world into numbers across a range of social policy terrains.

PISA presents itself as a worthy focus of attention because of its extraordinary influence. With its high-profile media coverage and its international ‘league tables’, it epitomises the growth of ‘literacy as numbers’. Initially developed for the rich countries’ club of the Organization for Economic Cooperation and Development (OECD), PISA now has more non-member participants than member nations in its surveys. There are now extensions of PISA in the form of PISA for Schools to assess literacies comparatively at the school level, as well as plans for PISA for Development to assess literacies in developing nations (see Guadalupe, Maddox and Addey, this volume). So powerful has PISA become, and so threatening is its growing influence, that in April 2014 a group of high-powered academics (including Stephen Ball, Noam Chomsky, Robin Alexander and Henry Giroux) and parents, principals and others wrote an open letter1 to OECD’s Deputy Director of Education, Andreas Schleicher, to bring to his attention the negative consequences of PISA and to suggest that the OECD immediately make some changes to mitigate the worst of its effects.

That so many thoughtful people are becoming so alarmed as to write such a letter is a testament to the influence of PISA. It is hard to believe that the first PISA survey was done as recently as 2000, not only because it is now so widely used to inform and justify policy, but also because it has so easily displaced other forms of assessing, evaluating and understanding education systems. Before PISA came along to tell us that Germany had a shocking system and Finland had a great one, many countries, including Canada and Finland, looked to Germany to learn from it. As one well-informed policy expert said to me in an interview:

Canadians used to be constantly going to Germany to study them so we could copy their system! I went to this meeting in Berlin in 2001, where their Federal Minister got up and said, ‘Well, we need to learn from Canada, because they’re doing so much better’, and I wanted to yell out, ‘Give us our money back for all the trips we’ve made!’ Finland has got new hotels to accommodate all the PISA visitors. And they were looking to Germany before the PISA results came out. Everyone was going to the US all the time – no-one goes to the US any more to see how they do schooling ... no-one thinks that the US is the international model for how to do schooling. (Interview transcript: senior policy official)

Whatever criteria Canada and Finland had used over the years to conclude that the Germans had something to teach them about education were instantly trumped when the PISA literacy numbers were announced. Other evaluations become mere intuitions or vague feelings – easily dislodged with ‘That’s not what the numbers say!’ Indeed, I would say that currently numbers appear to be speaking louder than those who are critical of them.

PISA was developed explicitly for the purpose of informing policy. So PISA results are presented in specific ways with the purpose of speaking to policy-makers – framing policy issues, describing the parameters of relevance, and offering solutions. The logic of its comparisons is that systems that are not doing as well can learn from those that are. Analyses that accompany these PISA reports contain interpretations and suggestions that particular practices in policy, governance and schooling would contribute to higher rankings.

With PISA results being tied to economic benefit to nations (Hanushek and Woessmann 2012), the release of PISA rankings is eagerly awaited, the anticipation mounting as the results are released at the same instant around the world. Each release is accompanied by a media blitz. This has meant that the results have to be further simplified into bites that are suitable for newspapers and the television. As one Australian policy and measurement expert said in an interview with me in 2009:

When the results come out, there is quite a flurry of excitement. I was on Channel 7, 9, 10 and ABC and Sky News – all on the same day! But the media likes to hone in on a story and this time they just focused on ‘reading results having gone down’. Politicians too want to do this – they want to think in the short term – only for so long as they remain in that portfolio. (Interview transcript, senior measurement expert)

Moreover, officials from the OECD are also often advisors to government and have the ear of highly placed politicians and policy-makers in various countries, and are able to spread their policy message quite effectively. The fact that Andreas Schleicher is a German did a lot to make PISA important in Germany; ‘he was all over the press in Germany because he could speak German,’ as one interviewee explained. More recently, explaining the rise of the importance of PISA in the US, a senior policy expert said that not only do the highest officials from the OECD, including the Secretary-General, Angel Gurria, come to the US to release PISA results and make public presentations with a large media presence, but Schleicher also meets with members of the Congress and with people in state governments and other influential policy-making bodies, spreading PISA’s policy messages. For these reasons, PISA makes a wonderful exemplar of the spread and influence of ‘literacy as numbers’ in contemporary education policy.

CRITIQUING ‘LITERACY AS NUMBERS’

The growing takeover of the education policy space by numbers has not gone unnoticed by critics in education, policy and sociology. Not only numbers, but also the ideological climate of neoliberalism and the accompanying market orientation within which numericisation has been flourishing, has come under the critical gaze of education and policy commentators. Most of the critique can be clubbed under the umbrella of ‘debunking’ numbers in one way or another, using a variety of arguments. The reductionism argument is that literacy assessments and the rankings that emerge from such surveys fail to capture the complexity of classrooms, schools and other learning contexts. Therefore, it is argued, literacy assessments, which are single-event snapshots, make poor proxies for ‘student performance’ or, worse still, ‘outcomes’ of educational systems. Not only do they reduce learning within a particular ‘literacy’ to the ability to respond to a small range of questions, but they also only include a small number of subjects. This is in part because many valued aspects of education cannot easily be assessed internationally in a comparative way. As one PISA official explained:

Reading, science and maths are there largely because we can do it. We can build a common set of things that are valued across the countries and we have the technology for assessing them. So there are other things like problem-solving or civics and citizenship – that kind of thing where there would just be so much more difficulty in developing agreement about what should be assessed. And then there are other things like teamwork and things like that. I just don’t know how you’d assess them in any kind of standardised way … So you are reduced to things that can be assessed. They’ve tried writing – but … the cross-cultural language effect seems too big to be comparable. So the things we assess are a combination of the things we value and the things we can do – I think it sends an odd message about science, perhaps, but I don’t think anyone would argue about literacy and numeracy. (Senior PISA official, cited in Gorur 2011)

Moreover, large-scale, sample-based surveys are expensive and so necessarily brief. They are limited in the breadth of their coverage. Not all aspects of even the limited range of literacies, let alone the schooling experience of students, can be captured in such surveys, however well they are designed and conducted, and however carefully they are interpreted. The inclusion of diverse cultures necessarily ignores differences to produce commensurate entities. Indeed, it is only by giving up the ambition to know things in depth that such assessments can become large-scale (Gorur 2011; Latour 1999).

While the reductionism argument is important to make because policy-makers appear not to realise the limits of the warrant of numbers, I would argue that reductionism, in itself, is not necessarily problematic. Nor is reductionism merely a product of ignorance or poor science. In my exploration of the history of indicators that underpin PISA (Gorur 2014b), and in the construction of PISA itself (Gorur 2011; Gorur 2014a), I have found that the laborious processes that culminate in the PISA surveys are not trivial exercises but are developed through lengthy negotiations, with a complex understanding of the challenges to adequacy and fitness for purpose of these assessments. Moreover, it is only through having reductionist inscriptions that certain patterns become visible and certain actions are imaginable and made possible. Scott explains how a certain ‘tunnel vision’ can be useful in producing certain types of knowledge:

Certain forms of knowledge and control require a narrowing of vision. The great advantage of such tunnel vision is that it brings into sharp focus certain limited aspects of an otherwise far more complex and unwieldy reality. This very simplification, in turn, makes the phenomenon at the center of the field of vision more legible and hence more susceptible to careful measurement and calculation. Combined with similar observations, an overall, aggregate, synoptic view of a selective reality is achieved, making possible a high degree of schematic knowledge, control, and manipulation. (Scott 1998, 11)

Similarly, Latour (1987) has also talked of the value of such reductions. Numeric inscriptions, like the maps of the early explorers, can be transported back and forth without losing fidelity, and can provide the information required to develop strategies and focus energies and other resources in ways that are not possible without such maps to guide action. Maps are an excellent illustration of the usefulness of reductionist renderings. Maps are useful because they are reductionist, because they inscribe a less complex version of the world. Moreover, maps make it possible to plan and strategise from a distance, without requiring physical presence on site to understand the terrain. Whilst local, detailed and complex knowledge is important for certain decisions, others require only the broad-brush picture. Of course, maps developed for one purpose may be useless for another, and maps that are ridden with errors can lead their users down the wrong path. But reductionism, per se, is not a reason to dismiss numbers as useless for the purposes of research or governance.

A second reason for the discomfort with numbers (and quantification in general) is that they are becoming hegemonic, displacing other forms of knowledge. Porter explains that even an awareness that numbers have serious limitations does not diminish the preference for them over other ways of knowing:

Critics of quantification in the natural sciences as well as in the social and humanistic fields have often felt that reliance on numbers simply evades the deep and important issues. Even where this is so, an objective method may be esteemed more highly than a profound one. (Porter 1995, 5)

Numbers – measurements and calculations – have a ‘cold, hard facts’ feel to them that is difficult to argue against; they close off spaces of debate (Barry and Slater 2002; Hammersley 2001). Other evaluations become mere intuitions or vague feelings. As one policy expert said:

Governments have become ravenous for information and evidence. A few may still rely on gut instincts, astrological charts or yesterday’s focus groups. But most recognise that their success – in the sense of achieving objectives and retaining the confidence of the public – now depends on much more systematic use of knowledge than it did in the past. (Mulgan 2005, 215, my emphases)

Non-numeric evidence and qualitative research based on such methods as ‘yesterday’s focus groups’ now occupy the same status as ‘gut instincts and astrological charts’. Part of the reason for this privileging of numbers is the persistence of the division between ‘quantitative’ and ‘qualitative’ methodologies, and a serious lack of collaborative engagement of scholars across these research traditions. This division is reinforced by institutional requirements, where researchers have to declare themselves as belonging to one camp or the other, and strengthened by journals often dedicated to one or other type of research. As a result, neither side trusts or understands the other. Even the category of ‘mixed method’ reinforces that the two methods are fundamentally different from each other. The privileging of ‘quantitative research’ in education is supported by several ‘clearing houses’ that filter out research that does not conform to the requirements of quantitative methodologies.

That such a division is unjustified has been successfully argued by several theorists (see, for example, Gorard and Taylor 2004). There is no ‘quantitative’ work that does not involve qualitative decisions in terms of models, assumptions, choices of what to count, or what value or weight to place on different factors – ‘quantitative research’ is not just a matter of counting and measuring in some ‘objective’ way, but a science infused with all manner of decisions based on common sense, intuition and professional judgement. These processes, and the historical stabilisation of methodologies, have been described in detail (in, for example, Gorur 2014a; Desrosières 1998; Porter 1995). And ‘qualitative research’ could never report any findings without resorting to some sense of how much, how many, when, for how long, how frequently and so on – all quantitative judgements. Despite this, mutual suspicion with regard to methodologies and understandings persists, leading to a lack of engagement with – and even dismissal of – each other’s research. ‘Quantitative’ work is often dismissed as ‘positivist’, and numbers are debunked on the basis that they are not objective but political. Indeed, I have myself worked in this arena of ‘revealing’ that numbers are political and not a pure ‘view from nowhere’ (Haraway 1988) (see, for example, Gorur 2011). But the point is this: no knowledge is ‘objective’, and no knowledge is detached from the methodologies, assumptions and world views that underpin it, or the vast networks of institutional and social structures that support it; neither the research seen as ‘quantitative’ nor that classified as ‘qualitative’ is immune to this limitation. So a dismissal of numbers on the basis that they are not ‘objective’ or ‘apolitical’ requires a corresponding dismissal of ‘qualitative’ knowledge on the same basis.2

A third objection to the widespread use of numbers in education has been on the grounds of their contribution to the surveillance mechanisms of the heightened governmentality that is the signature of our times. Numbers of all manner seek to shine a stark and unforgiving light on the effectiveness of schools and teachers and the progress of students (or lack thereof). However, some of the critique argues that audits and accountability are instruments of governmentality, and therefore to be resisted. I suggest that accountability in itself is desirable – even essential – for good government, as the effects of its absence in corrupt nations amply demonstrate. Democratic societies that expect governments to take responsibility for organising institutions so that they are fair and run ethically and efficiently will rightly expect adequate accountability measures to be in place. So protesting accountability itself is not only futile but would be dangerous if successful. This kind of advocacy would throw the baby out with the bath water and is, moreover, unlikely to be taken seriously by policy-makers.

While the ‘governmentality’ argument was attractive and even useful perhaps a decade or so back, particularly in the face of growing neoliberalism, it has also been a distraction, serving to marginalise ‘qualitative researchers’ even further. It has achieved little through the challenges it has posed; indeed, over the last couple of decades, the education world has become yet more obsessed with the very things the critics of governmentality sought to resist. Such critique, perhaps, has run out of steam, to borrow Latour’s (2004) expression.

The misuse of numbers – deliberate or otherwise – by transnational organisations such as the OECD and by national and other governments and institutions has been another line of critique. There is much merit in this, though perhaps it would be more effective to wage this battle in the popular press than in scholarly journals. That politicians cherry-pick numbers that suit them, choose numbers according to convenience, suppress unfavourable numbers, and seek numbers to support already-made decisions is now well known. However, every instance of this needs to be challenged, because decisions on literacy have far-reaching consequences for societies.

To summarise so far: some of the critique of numbers has focused on the inability of numbers to capture and represent the ‘true picture’ of the state of literacy. Other critiques have focused on the politics of interpretation and use, often claiming that policy-makers are either ignorant of the limitations of numbers, or are deliberately obtuse, in order to serve their own agendas. These criticisms are to a large extent irrelevant, to my mind. Merely showing that numbers are reductive is not enough – it is precisely because they reduce, and because they can express a large amount of information in a parsimonious way, that they are useful in governance. To argue that they are political rather than neutral is to overlook that all forms of knowledge are political. Similarly, if the argument is that numbers are misused, the same can be said of non-numeric forms of knowledge. To merely object on the grounds that they are an instrument of governance is not enough; governance is an inherent part of all formal societies, and statistics is an invaluable tool of good governance as well. Espeland and Stevens point out that:

Measurement can help us see complicated things in ways that make it possible to intervene in them productively (consider measures of global warming); but measurement also can narrow our appraisal of value and relevance to what can be measured easily, at the expense of other ways of knowing (consider how education became years of schooling in American sociology). (Espeland and Stevens 2008, 432)

To critique literacy as numbers more rigorously and usefully, I suggest, it is important to understand their performativity: that is, their productive role in knowledge creation and governance.

ASSEMBLING A SOCIOLOGY OF NUMBERS

Scholars in STS have demonstrated that measurement is not merely a descriptive exercise, but a productive one (Knorr Cetina 1999). Thinking in this performative idiom (Pickering 1995) requires that we understand literacy measurements as ‘world-making’ practices. The performative or productive aspects of measurement and its nuances need to be carefully understood if we are to engage with literacy measurements seriously and to develop useful critique.

The most readily understood ‘world-making’ quality of measurement is that as soon as measurements are instituted, they begin to act on the world (Gorur 2014b). PISA, for example, has set in motion large-scale changes in Germany, Poland and other countries. In the US, the Race to the Top scheme is seen as influenced by PISA (Alexander 2014). In Australia, the desire to rank in the ‘top five’ in PISA is now inscribed in the Education Act of 2013 (Gorur and Wu 2014). It has been demonstrated that in some countries, students are spending an inordinate amount of time training for high-stakes accountability measurements of literacy, at the expense of other types of learning (Berliner 2011). There are reports of schools and teachers gaming the system or downright cheating – suggesting that certain children stay home on days when literacy assessments are being conducted, so as not to adversely affect the school’s average performance, and even changing students’ responses on the test (Polesel, Rice and Dulfer 2014). Literacy comparisons are changing policies and practices on a very consequential scale. So whilst the intent of measurement might simply be to represent an existing situation or ‘reality’, the act of measurement sets in motion a series of changes to that situation, thus changing the very reality it purports to measure.

There are more subtle and deeper impacts as well. Surveys and indicators present new patterns, produce new understandings of the world and highlight new problems to battle. They redirect attention and resources. Scaled up at national and global levels, they create a collective imaginary that requires, and is reified through, specific sets of routines and practices (cf. Appadurai 1996; Taylor 2004; Hamilton 2012). Using a variety of frameworks, tables, graphs and diagrams, technologies of quantification order and clarify the chaotic jumble of the world, acting as technologies of visibility, accountability and control (Miller 2005). For example, in the aftermath of World War II, UNESCO began the task of statistically mapping literacy globally, measuring levels of literacy around the world; new patterns began to emerge, marking out regions as objects of inquiry, defining nations in terms of deficit, causing alarm and creating funding priorities. Advancing ‘the common welfare of mankind’, as per UNESCO’s mission, demanded that the welfare of mankind be understood in commonly held terms. Statistical ‘norms’, as Desrosières (1998) pointed out, soon begin to suggest desired conditions, create new aspirations and even provoke new moral norms.

Numbers translate realities into their abstract versions, and these abstractions can then be imposed on realities themselves. Scott (1998) has presented an excellent example of this process through his historical tracing of scientific forestry practices in Prussia. Efficient and scientific forest managers recognised the fiscal value of timber, and their measurements ignored the rest of the forest – the shrubs with their medicinal offerings, the biodiversity which made forests resilient, the insects and other life forms which were necessary for enriching the soil and so on. As Scott tellingly observed, this translation of ‘nature’ into ‘natural resource’ began to create, in government tables and charts, pictures of an abstract forest that was devoid of biodiversity and that ignored both the multiple uses of the forest to the communities around it and the complexity of the forest ecology. Eventually, the Germans began to impose the abstract forests of their audit books onto reality itself: land was cleared and forests began to be planted with single species of the same age in neat rows so that they could be inspected, monitored and harvested with great ease. Neat and uniform forests became the new aesthetic of forestry.3 Verran (2010) provides a fine-grained analysis of the performativity of numbers in her study of the enumeration of Australian waters. She finds that numbers are used in indexical and symbolic ways to represent ‘the ecological health of Australia’s creeks and rivers, lakes and billabongs’, and that a shift to an iconic use of numbers makes possible the constitution of a ‘water market’.

But it is not just that measurement changes the world once it is performed; importantly, the world has first to be changed in order that the measurement becomes possible. As Ted Porter put it:


Society must be remade before it can be the object of quantification. Categories of people and things must be defined; measures must be interchangeable; land and commodities must be conceived as represented by an equivalent in money. There is much of what Weber called rationalization in this and a good deal of centralization. (Porter 1994, cited in Scott 1998, 22)

No counting is possible before categorisations, labelling, demarcations and other forms of interference with the world. The creation of the International Standard Classification of Education (ISCED) was critical to the development of comparative international literacy assessment (Smyth 2008). A vast bureaucracy, a network of scholars, developments in statistics (and a system of peer review and publication to validate the developments), complex surveys, intense negotiations, national mandates to sanction funds – all this had to be in place in order that international literacy measurements could be performed (Gorur 2014b). As extensive as it is, the infrastructure that supports the routine generation of numbers today is mostly invisible to those outside the specialist field, and so it is seldom subjected to critique.

Understanding the performativity of numbers and quantification has particular implications for the critique of literacy measurements and comparisons. First, research attention is drawn to the ‘instrumental’ nature of measurement – the many translations that need to occur in order to render literacy measurable. This means understanding measurement not as a single event, but as an historical achievement. Consequently the entire collective or assemblage that makes such measurements possible becomes the focus of empirical tracing and critique. Understanding that a vast hinterland is required to support the stabilisation of literacy assessment broadens the scope of enquiry. The indicators that underpin the measurements, the structures that support the surveys, the software that translates the data, the expert committees that set up the test items – all these become implicated in the assemblages of measurement. The list of actors and processes involved becomes greatly expanded (see, for example, Gorur 2011; Gorur 2014a). Consequently the spaces for what Law (2002) calls ‘interference’ are also increased. Processes that were once closed off as black boxes of a technical nature become available again as matters that can be debated – Latour (2004) calls this the conversion of ‘matters of fact’ into ‘matters of concern’. Moreover, seeing ‘facts’ not so much as a representation of nature made through ‘hard science’ but as a performance made possible through complex social processes expands the range of those who can participate in these debates (for a detailed description of how this might occur, see Callon, Lascoumes and Barthe 2009).


A performative understanding focuses our attention on practices – on the ways in which worlds are brought into being through assemblages of a variety of heterogeneous actors. It would necessitate plunging into phenomena and exploring them from the inside. It would mean entering number-producing machineries, and learning from those involved in actively producing and using numbers, rather than merely attempting to influence or ‘teach’ them from the position of all-knowing outsiders (Latour 2005). Once we thoroughly understand measurement as performative, we could begin to ask: what types of world are being made through these practices? How are these particular renderings of calculability translating the world? In what ways are the particular practices being achieved? Which actors have needed to come together, and in what ways have they been assembled? What assemblages are required to hold these calculations in place and render them intelligible? What effects do these calculative assemblages have on those they render calculable? Once we become interested in these questions, we can start to open up the black boxes of measurement and comparison, and thus provide many sites of interference. Advancing the notion of a ‘sociology of quantification’, Espeland and Stevens argue that:

a sociology of quantification should recognize the effort and coordination that quantification requires; the tendency of quantification to remake what it measures; the capacity of quantification to channel social behavior; the polyvalent authority of claims made with quantitative measures; and the art and artifice of numerical expression. (Espeland and Stevens 2008, 431)

Scholars in STS have engaged with the notion of performativity in a variety of ways, and we (sociologists of education) might perhaps find some suggestions there on how to rethink and reposition our own work. Stengers (2011) suggests that we give up pretending that comparative processes are disinterested and objective, so that we can engage in debating them collectively and involve ourselves in their construction. She advocates an ‘active and interested comparison ... made by a collective sharing the same matter of concern, privileging what can be associated with new questions and experimentally challenging consequences’ (Stengers 2011, 5). This approach inherently problematises ‘methods’ themselves and opens up their role in processes and practices.

In the performative idiom, the unit of analysis is the collective, the assemblage, since actors are always intimately tied to the networks that produce them. This is a critical point – it will move us away from pointing fingers at particular policy-makers or particular assessments (such as PISA) and towards an understanding of how collectives or assemblages support and spread the practices of number-making.

Such landmark STS works as Laboratory Life (Latour and Woolgar 1979) and Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Shapin and Schaffer 1989), as well as the work of Porter (2003; 1995), Scott (1998), Desrosières (1998), Woolgar (1991) and Thévenot (2009), provide the inspiration and guidance for a sociology of measurement. These studies challenge deep-seated dichotomies such as nature and culture, and show that mundane practices play as crucial a role in producing worlds as do major and momentous events. They show the mechanisms by which knowledge comes to be validated and made credible, and elaborate the assemblages that are required for this to be achieved. These types of studies also dissolve divides between science, politics and society, as well as between quantitative and qualitative types of research.

STS methodologies such as actor-network theory (ANT) are sometimes accused of shying away from issues of power. This is a misplaced accusation; there is a deep commitment in these methodologies to examining the ways in which power is mobilised and maintained. Beyond the traditional associations of discourse with power, ANT researchers ferret out the power-making technologies of mundane practices, such as the particular ways in which actors are classified and separated, the specific institutional practices that produce and reify differences, and the ways in which practices get entangled in other networks to form difficult-to-reverse worlds. An expanded sociology of numbers that recognises the instrumental and performative nature of numericisation paves the way for critique as a moral undertaking, since critique itself would also be regarded as performative.

What I am suggesting, then, is this: a move towards a study of processes and practices with a view to understanding not just what but how numbers are being produced, mobilised, circulated, consumed and contested – and how the character of calculability is being imposed in specific settings. This means going beyond saying that numbers are political, precarious, misused and so on to showing how this is being done, and thereby opening up the possibility that they could be done otherwise; an engagement – a participation – with the processes and practices of making numbers, rather than a distant critique that simply denounces numbers and describes their deleterious effects on education; and interdisciplinary engagements between statisticians, measurement and assessment scholars, historians, sociologists, anthropologists and policy scholars as part of a research collective. But I am pressing also for another type of engagement – one that requires the giving up of a comfortable and safe position of apparent non-normativity. I am suggesting that critics need to engage not only in examining the effects of particular world-making practices, but also in seeking ways to make better worlds, overcoming the current shyness among critics in education about being ‘normative’. Doing so would require great commitment, as it would expose us to the same kind of critique that we ourselves have previously offered. Nevertheless, this, I contend, is the need of the hour. As Latour puts it, critique is not just about disbelieving everything or problematising everything, leaving us in a world where knowledge of any kind is viewed with suspicion and cynicism:

The critic is not the one who debunks, but the one who assembles. The critic is not the one who lifts the rugs from under the feet of the naive believers, but the one who offers the participants arenas in which to gather. (Latour 2004, 246)

It is time to move our sociologies beyond the traditional approaches of deconstruction and debunking. In a policy climate where it has become difficult to speak back to the current reliance on numbers, it is necessary for us to leave the comfort of our distance, to work empirically, and to engage with experts in other epistemic communities to find other, better ways of doing numbers.

NOTES

1 www.oecdpisaletter.org

2 Indeed, ‘deconstruction’ suffers the same fate – critique can be ‘deconstructed’ just as effectively as that which it seeks to deconstruct, as Latour (2004) has argued so convincingly.

3 This inability to see the woods for the trees is a cautionary tale. The Germans found that their standardised forests could no longer provide the villages with kindling and small game or medicines and fruit. The lack of biodiversity meant that forests became very vulnerable to pests. Moreover, single-species planting meant that the trees all consumed the same set of nutrients – and with no biodiversity these nutrients were not being replenished, affecting the growth of future generations of forests. Measures then had to be instituted to compensate for this impoverished model of forest.

REFERENCES

Alexander, R. (2014). ‘Visions of Education, Roads to Reform: PISA, the Global Race and the Cambridge Primary Review’. Speech given on 4 February at University of Malmö, Sweden.

Appadurai, A. (1996). Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press.


Barry, A. and Slater, D. (2002). ‘Introduction: The Technological Economy’. Economy and Society 31 (2), 175–93.

Berliner, D. (2011). ‘Rational Responses to High Stakes Testing: The Case of Curriculum Narrowing and the Harm that Follows’. Cambridge Journal of Education 41 (3), 287–302.

Callon, M., Lascoumes, P. and Barthe, Y. (2009). Acting in an Uncertain World: An Essay on Technical Democracy. Cambridge, MA, and London: MIT Press.

Desrosières, A. (1998). The Politics of Large Numbers: A History of Statistical Reasoning, trans. C. Naish. Cambridge, MA, and London: Harvard University Press.

Espeland, W.N. and Stevens, M. (2008). ‘A Sociology of Quantification’. Archives of European Sociology XLIX (3), 401–36.

Gorard, S. and Taylor, C. (2004). Combining Methods in Educational and Social Research. Berkshire, UK: Open University Press.

Gorur, R. (2011). ‘ANT on the PISA Trail: Following the Statistical Pursuit of Certainty’. Educational Philosophy and Theory 43 (S1), 76–93.

— (2014a). ‘Towards a Sociology of Measurement Technologies in Education Policy’. European Educational Research Journal 13 (1), 58–72.

— (2014b). ‘Producing Calculable Worlds: Education at a Glance’. Discourse: Studies in the Cultural Politics of Education, Special Issue on Policy Enactments, Assemblage and Agency in Educational Policy Contexts (online). doi: 10.1080/01596306.2015.974942.

Gorur, R. and Koyama, J.P. (2013). ‘The Struggle to Technicise in Education Policy’. Australian Educational Researcher 40, 633–48.

Gorur, R. and Wu, M. (2014). ‘Leaning Too Far? PISA, Policy and Australia’s “Top Five” Ambitions’. Discourse: Studies in the Cultural Politics of Education. doi: 10.1080/01596306.2014.930020 (retrieved July 2014).

Hamilton, M. (2012). Literacy and the Politics of Representation. Abingdon, UK: Routledge.

Hammersley, M. (2001). ‘Some Questions about Evidence-based Practice in Education’. Paper presented at the Annual Conference of the British Educational Research Association, University of Leeds, England (retrieved September 2001).

Hanushek, E.A. and Woessmann, L. (2012). ‘Do Better Schools Lead to More Growth? Cognitive Skills, Economic Outcomes, and Causation’. Journal of Economic Growth 17 (4), 267–321.

Haraway, D.J. (1988). ‘Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective’. Feminist Studies 14 (3), 575–99.

Knorr Cetina, K. (1999). Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.

Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.

— (1999). Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA, and London, UK: Harvard University Press.

— (2004). ‘Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern’. Critical Inquiry 30 (Winter 2004), 225–48.

— (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Latour, B. and Woolgar, S. (1979). Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press.
