Page 1: An accountability model for initial teacher education

This article was downloaded by: [North Dakota State University] on: 10 December 2014, at: 07:46. Publisher: Routledge. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Journal of Education for Teaching: International research and pedagogy. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/cjet20

An accountability model for initial teacher education

Larry Ludlow, Emilie Mitescu, Joseph Pedulla, Marilyn Cochran-Smith, Mac Cannady, Sarah Enterline & Stephanie Chappe

Department of Educational Research, Measurement, and Evaluation, Lynch School of Education, Boston College, Chestnut Hill, MA, USA

Published online: 27 Sep 2010.

To cite this article: Larry Ludlow, Emilie Mitescu, Joseph Pedulla, Marilyn Cochran-Smith, Mac Cannady, Sarah Enterline & Stephanie Chappe (2010) An accountability model for initial teacher education, Journal of Education for Teaching: International research and pedagogy, 36:4, 353-368

To link to this article: http://dx.doi.org/10.1080/02607476.2010.513843



Journal of Education for Teaching, Vol. 36, No. 4, November 2010, 353–368

ISSN 0260-7476 print/ISSN 1360-0540 online. © 2010 Taylor & Francis. DOI: 10.1080/02607476.2010.513843. http://www.informaworld.com

An accountability model for initial teacher education

Larry Ludlow*, Emilie Mitescu, Joseph Pedulla, Marilyn Cochran-Smith, Mac Cannady, Sarah Enterline and Stephanie Chappe

Department of Educational Research, Measurement, and Evaluation, Lynch School of Education, Boston College, Chestnut Hill, MA, USA

(Received 19 March 2010; final version received 16 June 2010)

The pressure for accountability in higher education is extremely high. Some advocate accountability systems that use standardised measures of student learning and non-cognitive outcomes; others argue that locally developed measures provide a better fit with the unique mission of institutions. We first describe a general 'proof of possibility' accountability model for initial teacher education that relies upon locally developed, programme-specific assessments. We then illustrate how such a model may respond to claims made by an institution, demonstrate student learning, and inform programmatic changes.

Keywords: accountability; longitudinal; teacher education; surveys

In the USA, accountability has been a buzzword for more than 30 years, both in the higher education policy discourse at the national level and at the institutional and programme-specific levels. Most recently at the national level, there have been calls for institutions of higher education to provide evidence of student learning as well as non-cognitive outcomes in the form of standardised assessments of student achievement and cross-institutional surveys of student engagement (US Department of Education 2006). In addition, at the higher education programme level, particularly in university-based teacher preparation, there have been calls for greater emphasis on outcomes, improved data systems, and increased use of evidence to support claims and guide policy decisions (Allen 2003; Cochran-Smith 2005; Wineburg 2006). At all levels, this has led to a continuing debate about who should take control of accountability in higher education, what kinds of evidence are appropriate for accountability purposes, how different forms of evidence should be used, and how the competing purposes of internal and external accountability can be reconciled.

This article analyses efforts at one institution to respond to demands for higher education accountability through the development and implementation of an institution-specific, programme-level model of assessment and accountability. We begin with a discussion of the larger debate about higher education accountability in the USA, including critique of the federally-recommended use of standardised assessments for accountability purposes and consideration of accountability issues in initial teacher education specifically. Based on this critique, and on our own efforts over six years to develop an accountability and assessment system, we propose a model with four key elements. We show how these four elements formed the basis of the accountability system employed at the Boston College Lynch School of Education to support

*Corresponding author. Email: [email protected]


claims made by the institution, demonstrate student learning, and inform programmatic changes. Finally, we discuss how local accountability systems can inform larger debates about higher education accountability and how other institutions of higher education can use the model described here to develop their own accountability/assessment systems and meet their specific goals.

The debate about higher education accountability in the USA

In the context of the larger test-based accountability movement in the USA (e.g., Elmore 2002; Linn 2003; Sirotnik 2004), the then US Secretary of Education, Margaret Spellings, released the 2006 report of the Spellings Commission on Higher Education, titled A test of leadership: Charting the future of US higher education (US Department of Education 2006). Signalling a turning point in the discourse about accountability at the level of higher education (Malandra 2008), the Spellings Commission argued that, due in part to its 'remarkable absence of accountability mechanisms' (US Department of Education 2006, x), US higher education needed 'to improve in dramatic ways' (ix). The Commission framed accountability in terms of measuring 'meaningful' learning outcomes, and proposed that institutions use metrics that lead to the improvement of teaching and learning and that have the capacity to compare the outcomes of one institution with those of the next. To meet these ends, the Commission recommended the use of standardised assessments, such as the Collegiate Learning Assessment (CLA), which measures student learning outcomes across post-secondary institutions, and the National Survey of Student Engagement (NSSE), which serves as 'a proxy for the value and quality of [students'] undergraduate experience' (23).

Meanwhile, some higher education spokespersons argued that, rather than allowing federal and other governing bodies to dictate which evidence should be used, accountability should be left to the higher education community itself. For example, in its principles of educational accountability, the Association of American Colleges and Universities (AACU), along with the Council for Higher Education Accreditation (2008), argued that it is the responsibility of individual institutions to achieve excellence and collect evidence relevant to the outcomes that are meaningful to that institution. Along similar lines, Lee Shulman (2007), then President of the Carnegie Foundation for the Advancement of Teaching, called for the higher education community to take control of the 'narrative' of accountability.

Some universities and associations have taken pro-active measures. For example, in 2005, the Council of Independent Colleges' Collegiate Learning Assessment Consortium (CIC/CLA Consortium) began administering the CLA to freshmen and seniors at 33 liberal arts colleges (Ekman and Pelletier 2008). This had some success, such as increased use of empirical evidence in making policy decisions, but also created challenges, such as obtaining full student cooperation and participation in the assessments on a voluntary basis and the involvement of faculty who were initially wary of using standardised assessment results to inform programmatic decisions.

Furthermore, some scholars questioned whether the implementation of standardised assessments in institutions of higher education aligns with, or contradicts, the goals of improving student learning (e.g., Education Commission of the States 1998; Frye 1999; Labi 2007). Others were concerned about whether standardised assessments can measure learning across institutions that have different missions, different


student populations, and different resources (e.g., Bollag 2006; Eubanks 2006; Garcia and Pacheco 1992; Schagen and Hutchinson 2007). It is clear that differing student populations, resources, missions and themes could make comparisons across higher education institutions problematic, with the potential for distorted inferences about particular institutions. At the same time, a 'one-size-fits-all' approach could reduce the scope of what is taught across institutions, each with diverse missions and goals. As noted by Richard Shavelson (2007, 28), one of the developers of the CLA:

If the learning outcomes of higher education are narrowly measured, as cost, capacity, and convenience would dictate, we risk narrowing the missions, subject matter taught, and diversity of the American system of higher education.

In contrast to standardised assessments, which may be distant from the missions, goals, and objectives of individual institutions, some proponents have advocated locally developed measures with the potential to more accurately represent the specific institutional outcomes of higher education, given their proximity to what is assessed (Allen and Bresciani 2003). Despite recommendations for local measures, there has been almost no discussion about what such measures would look like in practice, how they would be analysed, or how analyses could be used to reconcile the competing needs of external and internal accountability.

Accountability in US teacher education

Within the larger context of the accountability movement and the debates about higher education accountability described above, there has been great emphasis on accountability regarding university-based initial teacher education in particular. A strong emphasis on accountability is part of what Cochran-Smith (2005; Cochran-Smith and the Boston College Evidence Team 2009) called 'the new teacher education', which has emerged in the USA since the late 1990s. Cochran-Smith (2005) points out that, prior to the mid 1990s, the emphasis in initial teacher education was not on evidence or outcomes, but on process, particularly how teacher candidates learned to teach, how their beliefs and attitudes changed over time, what the knowledge base for effective teaching was, and what social and organisational contexts supported their learning. The shift in initial teacher education was part of a much larger sea change in how we think about educational accountability (Cuban 2004).

With regard to initial teacher education, there is now heavy emphasis in the USA on both external and internal accountability. The clearest examples of the push for external accountability are the federal reporting requirements that went into effect in 1998 following the re-authorisation of the Higher Education Act. There are also many state-wide data systems, either under construction or in effect, that link student data with data about teacher effectiveness and initial teacher education programmes. In terms of internal accountability, the accreditation processes of the National Council for the Accreditation of Teacher Education (NCATE) and the Teacher Education Accreditation Council (TEAC) now require that institutions provide 'evidence' (Williams, Mitchell, and Leibbrand 2003, xiii) of teachers' knowledge and performance.

In a way that is parallel to the calls for change in higher education more generally, the point of shifting accountability from external policy to internal practice in initial teacher education is to build the capacity within programmes to assess progress and effectiveness and also to generate knowledge that can be used both in local


programmes and more broadly. What this has meant in the USA is that, across the country, more and more of the people engaged in initial teacher education are also engaged in assembling evidence about their practices and their graduates. This is partly to satisfy their evaluators, but it is also to see whether programmes are measuring up to their own standards for excellent teaching. However, as Wineburg (2006) argued, based on a survey about general evidence-gathering practices among the higher education institutions that prepare most of the nation's teachers in the USA, many institutions 'appear[ed] to be unable to organise and interpret data in ways that would provide an effective response to outside mandates' (56).

This article sets out to respond to many of the issues raised in discussions about higher education accountability in general and initial teacher education accountability in particular. In the next section of this article, we present an accountability model for initial teacher education.

An institution-specific, programme-level accountability model for initial teacher education

Boston College (BC) has approximately 15,000 undergraduate and graduate students, with the Lynch School of Education preparing 250–270 undergraduate and graduate teacher candidates per year. Its mission includes an explicit commitment to preparing teachers to teach for social justice by focusing on teachers' learning and students' learning. In addition to methods courses and practica that link theories, research, and practice, teacher candidates at Boston College take courses in the social contexts and purposes of education, teaching students with diverse needs (including courses in bilingualism and diverse learners), and human learning/development. All candidates have at least one teaching placement in a school with a diverse population, and elementary education teacher candidates complete a fieldwork project with bilingual students. The capstone inquiry project requires candidates to pose a question about the impact of their teaching on pupils' learning, collect multiple data sources, and interpret these in terms of guidelines for practice and commitments to social justice. 'Pupil learning' is used here to differentiate between the learning of teacher candidates who participated in our study and their K–12 students. For further information about the specific aspects of the programme, see www.bc.edu/schools/lsoe.

In 2003, as part of the Boston College Teachers for a New Era (TNE) project (see Ludlow et al. 2008 for further details), an initiative funded primarily by the Carnegie Corporation, an interdisciplinary evidence team (ET) of researchers and teacher education practitioners began systematically studying and assessing initial teacher education at the institutional level. Grounded in the experience of that group (e.g., Cochran-Smith and the Boston College Evidence Team 2009; Cochran-Smith et al. in press; Ludlow et al. 2008) and consistent with many of the current recommendations regarding higher education accountability, we identified four key structural components of an institution-specific accountability model that also speak to many of the demands of outside mandates. The four components of the BC accountability model for initial teacher education are: (1) a conceptual framework in which to locate a complementary portfolio of multiple studies that assess relevant processes and outcomes; (2) the involvement of faculty and relevant stakeholders in order to change the culture of decision making and interpretation; (3) measurements and assessments that reflect the missions, goals, and values of the programme and the institution; and


(4) the integration of the results of various measures and assessments into a comprehensive data system linked to other databases.

The development of a conceptual framework and portfolio of studies

When Boston College began its TNE work, the Evidence Team reviewed the history and status of research on initial teacher education, value-added models of educational assessment, and, more generally, what Kennedy (1999) called 'the problem of evidence in teacher education'. The team quickly acknowledged the difficulty and complexity of linking teacher preparation with the eventual achievement of K–12 pupils. We describe our conceptual framework (Figure 1) as follows:

[The framework] represents the core aspects of teacher preparation and learning to teach that the ET concluded would have to be taken into account to understand teacher education's impact: the characteristics of entering teacher candidates; how these characteristics interact with the learning opportunities available in the programme; how teacher candidates experience and make sense of these opportunities; whether and how teacher candidates/graduates actually use what they learn in classrooms and schools (including teachers' strategies, interpretive frameworks, and ways of relating to students and others); desired school outcomes, including pupils' academic, social, and civic learning as well as teacher retention and teaching for social justice; and how all of these are embedded within varying institutional, school, social, cultural, and accountability contexts and influenced by the differing conditions in which teachers work. (Cochran-Smith and the Boston College Evidence Team 2009, 460)

Working from this framework, the team developed the evidence portfolio of instruments and studies that is represented in Figure 2. In developing this portfolio,

Figure 1. A conceptual framework for assessing teacher education. Created by Marilyn Cochran-Smith and the Boston College TNE Evidence Team in 2004 and used with permission.


the team was guided by its conclusion that no single measure or study could completely capture the processes and outcomes of initial teacher education. Thus we developed multiple quantitative, qualitative, and mixed-methods studies. Each of these was designed to investigate specific, yet complementary and overlapping, aspects of initial teacher education. Our assumption here was that these multiple studies and assessments would collectively represent a more complete picture of teacher education than would any single study or assessment.

Figure 3. Boston College TNE data links.

Our evidence portfolio has seven major projects: (1) a series of surveys examining teacher candidates'/graduates' perceptions, experiences, beliefs, and reported practices; (2) a set of instruments that conceptualise and measure learning to teach for social justice as an outcome of teacher education; (3) qualitative case studies examining relationships among candidates' entry characteristics, learning in the programme, classroom practices, pupils' learning, and social justice; (4) two analyses, drawing on longitudinal databases from (1) and (3) above, designed to identify key interrelationships between teacher development and teacher retention; (5) cross-sectional and value-added assessment of the impact of BC graduates on pupils' test performance; (6) comparison of graduates' classroom practices and pupils' performance on content tests for teachers from BC and from an alternate pathway into teaching in the same school district; and (7) a mixed-methods study of teacher candidates' ability to raise questions, document pupils' learning, and interpret and alter classroom practice using classroom-based inquiry. Each of these studies was

Figure 2. Boston College TNE Evidence Portfolio. Cochran-Smith and the Boston College TNE Evidence Team (2009). Reproduced with kind permission from Sage © 2009.


designed to investigate one or more relationships outlined in the conceptual framework in Figure 1.

Involvement of faculty and relevant stakeholders in culture change

The second element of an initial teacher education accountability model is the involvement of faculty and relevant stakeholders in order to change the culture of interpretation, deliberation, and decision making about local curriculum, policy, and practice. It is important to note that the theory of action underlying culture change is dramatically different from the theory underlying initiatives to bring about change through formal requirements or policy mandates alone.

At Boston College, it was our goal to help create a data-rich environment that would support the development of a 'culture of evidence' in which the results of the quantitative, qualitative, and mixed-methods measures and assessments could inform decisions about initial teacher education (Cochran-Smith and the Boston College Evidence Team 2009). For example, throughout the survey development process, teacher education faculty provided feedback on early drafts of the instruments, indicating which items were of particular interest, as well as which items were not relevant to the programme. In addition, a number of 'data workshops' were conducted for various groups associated with BC's teacher education programmes, namely education faculty and administrators, arts and sciences faculty and administrators who were collaborating on teacher education initiatives, and school-based teachers and administrators from partner schools. In these workshops, the intention was not simply to present survey data, but to create a context in which data could be jointly examined, interpreted, questioned, further analysed, and connected to other evidence, continuing experience, and the larger goals and commitments of the programme.

Our experience at BC over six years suggests that actually changing institutional culture is much easier said than done, and there are several critical factors that constrain the possibilities. In higher education, finding the right balance in decision making is a considerable challenge and can be a constraint that works against creating a culture of evidence. As we discovered:

Some discussions about the evidence-based education movement use the term 'decisions driven by evidence' as a kind of mantra about how educational institutions ought to be changed. But we have found that there is a difference between a culture where evidence 'drives' decisions and a culture where evidence 'informs' decisions. The former suggests a narrow, almost empiricist focus, and a linear, uncomplicated conception of the relationship between evidence and policy/practice. On the other hand, the latter acknowledges that evidence alone can never tell us what to do. Rather, evidence always has to be interpreted. (Cochran-Smith and the Boston College Evidence Team 2009, 466)

Along similar lines, Phillips (2007, 395) suggests that 'evidence is made, by way of an argument that links together a number of disparate premises to form a case in support of some theory or policy … the very same pieces of evidence can be used for different purposes'. The important idea with regard to creating a culture of evidence is acknowledging that how we interpret evidence is mediated by the availability of resources, our priorities and values, and by the trade-offs involved in selecting one direction over another. At the same time, all of these are shaped by the larger social, historical, and institutional contexts within which decisions are embedded.


Measures that reflect the values of the institution and the initial teacher education programme

The third component of the initial teacher education model of accountability we propose is the use of measures and assessments that are either directly related to, or at least consistent with, the mission and values of the institution and programme. Richard Hersh (n.d.), of the Collegiate Learning Assessment project, argues that the old educational saw, '[W]e value what we measure rather than measure what we value', turns out to be true far more often than it should be, because measures are chosen for their availability, low costs, and minimal administrative demands, rather than because they truly get at the higher-order learning required for twenty-first-century work and citizenship.

Creating an accountability system with ends, means, and measures conceptually and methodologically linked to one another requires clarity about all of these in the first place. It also requires recognition of the fact that higher education (whether in engineering or initial teacher education, the health sciences or the liberal arts) always has to do with values and ethical questions, and not simply with assembling good empirical evidence about fixed goals.

The stated mission of the Boston College Lynch School of Education (LSOE) (2009, 4) is 'to improve the human condition through education … to expand the human imagination, and to make the world more just'. More specifically, the initial teacher education programme has five operating themes, with the first described as overarching the other four: promoting social justice; constructing knowledge; inquiring into practice; meeting the needs of diverse learners; and collaborating with others. Individually and collectively, the measures and assessments that became part of our accountability system were intended to provide information about whether or not the programme was effective in terms of its own values and goals.

In order to stay true to our own mission and values, we wanted to construct 'learning to teach for social justice' as a complex but 'assess-able' outcome of teacher preparation. We did this by developing a set of 'just measures', or tools, instruments, protocols, and studies that document and measure aspects of learning to teach for social justice as an outcome of initial teacher education (Cochran-Smith et al., in press). Here, we assumed that learning to teach for social justice was a complex matter and that there was no single best measure for assessing it as an outcome of initial teacher education. Rather, each assessment had one or more criteria designed to get at this outcome. Each assessment by itself provided a valuable but only partial picture of teaching for social justice as an outcome. Taken together, however, these instruments and studies contributed to a richer understanding of the processes and impacts of an initial teacher education programme with a stated social justice agenda.

For example, the Learning to Teach for Social Justice-Beliefs scale is part of a suite of five surveys administered at specific points in time from entry into the programme until three years after graduation. This measure focuses exclusively on teacher candidates' and graduates' beliefs and perceptions, but it does not address classroom practice or content and pedagogical knowledge. Similarly, the Teacher Assessment/Pupil Learning protocol is a measure that highlights the intellectual quality of the learning opportunities teachers provide and their pupils' learning in response to those opportunities, but it does not account for teachers' beliefs, relationships with parents and colleagues, or advocacy for pupils. Each of the measures of teaching for


social justice had to be understood as a piece within the larger portfolio and had to be considered in terms of the trade-offs it involved (for example, which aspects of learning to teach for social justice were illuminated and which were left in the dark, as well as what the strengths and limitations were in scope, ease of administration, complexity, measurement, and generation of useful information).

Integration of multiple measures and assessments into a data system

Any institution-specific, initial teacher education assessment system that has all of the components we are describing here generates large amounts of quantitative and qualitative data for various cohorts of students, at different points in time and over time. Without a data management system, these quickly become confusing, frustrating, and unmanageable. Thus the fourth and final component of our accountability model is a mechanism for organising, integrating, and systematically managing the results of multiple measures and assessments collected over time. A mechanism of this kind is essential to efforts to create a culture of evidence, so that multiple stakeholders have ready access to well-organised data in forms that are easily retrievable and flexible enough to be used to respond to different kinds of questions, concerns, and interests.

At Boston College, for example, our suite of five surveys assesses non-cognitive outcomes of teacher education, such as perceived preparedness to teach; teacher candidate engagement and overall satisfaction with experiences in the teacher education programme; beliefs about teaching for social justice; knowledge about culture, language, learning, and schooling; and commitment to teaching across the professional lifespan and promoting all pupils' learning. The suite includes an entry survey, administered to all undergraduate and graduate teacher candidates at entry to the programme; an exit survey, administered at graduation; and one-, two-, and three-year-out surveys, administered to graduates one, two, and three years after graduation, respectively (Ludlow et al. 2008). To date, the surveys have been administered to multiple cohorts of approximately 250 teacher candidates and graduates per year from 2004 to 2010. Across 23 survey administrations, response rates have exceeded 90% for the entry and exit surveys, 65% for the one- and two-year-out surveys, and 60% for the three-year-out surveys. Across administrations, more than 2500 teacher candidates and graduates have completed one or more of these surveys. The approximately 60–100 items within each survey consist of unique items relevant to particular points in time, as well as common items that appear across all surveys.

In addition, we have extensive and multiple forms of qualitative data that follow selected case study candidates from entry into the programme through the third or fourth year of teaching. The case studies rely heavily on interviews and classroom observations, but they also include teachers' assessments and assignments, their pupils' work in response to those assignments, and the teachers' programme work, including inquiry projects. We also have information from university databases that includes teacher candidates' standardised test scores, grade point averages, and practicum placements.

To manage these data, we developed the Teacher Education System of Assessment (TESA) represented in Figure 3. TESA is a 'relational' database system of interrelated tables that hold a variety of information types. The primary tables contain dictionaries, or master files, of survey administrations, linking the specific items used in each survey with the particular people who participated in each administration. Other tables contain qualitative case study data, data downloaded from the Boston College Data Warehouse, admission and registration data from the LSOE Admissions Office, and placement and licensure data from the LSOE Practicum Office. FileMaker Pro v9.03 was used to develop the TESA system because of its flexibility with these various data systems and because of its versatile end-user reporting functions.
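The relational structure described above — master tables of administrations and items, with link tables tying items and respondents to each administration — can be sketched in miniature. This is an illustrative sketch only: the table names, columns, and data below are assumptions for demonstration, not the actual FileMaker Pro schema of TESA.

```python
import sqlite3

# Minimal relational sketch in the spirit of TESA: dictionary (master)
# tables for survey administrations and items, a link table recording
# which items appeared in which administration, and a responses table.
# All names and values are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE administrations (admin_id INTEGER PRIMARY KEY,
                              survey TEXT, year INTEGER);
CREATE TABLE items (item_id INTEGER PRIMARY KEY, text TEXT);
CREATE TABLE admin_items (admin_id INTEGER, item_id INTEGER);
CREATE TABLE responses (admin_id INTEGER, person_id INTEGER,
                        item_id INTEGER, rating INTEGER);
""")
cur.execute("INSERT INTO administrations VALUES (1, 'entry', 2005)")
cur.execute("INSERT INTO administrations VALUES (2, 'exit', 2009)")
cur.execute("INSERT INTO items VALUES (10, 'Prepared to teach for social justice')")
cur.executemany("INSERT INTO admin_items VALUES (?, ?)", [(1, 10), (2, 10)])
cur.executemany("INSERT INTO responses VALUES (?, ?, ?, ?)",
                [(1, 501, 10, 3), (2, 501, 10, 5)])

# A join across tables answers longitudinal questions, e.g. how one
# person's rating on a common item changed between entry and exit.
rows = cur.execute("""
    SELECT a.survey, r.rating
    FROM responses r JOIN administrations a ON r.admin_id = a.admin_id
    WHERE r.person_id = 501 AND r.item_id = 10
    ORDER BY a.year
""").fetchall()
print(rows)  # [('entry', 3), ('exit', 5)]
```

The design point is the one the article makes: because items and people are linked to administrations rather than duplicated per survey, the same common item can be tracked across every administration in which it appears.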

Finally, it is important to note that manuals were created and are routinely updated to document how to: (1) administer the surveys; (2) enter data into TESA; (3) perform routine statistical procedures; (4) generate routine reports for each survey administration; and (5) compile the survey results into a single continuous record extending over multiple years. The maintenance of the TESA system has become institutionalised and is now part of the responsibility of the LSOE data manager. Continuing data collection, analysis, and report generation are now the responsibility of the LSOE director of assessment.
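Step (5) above — merging separate survey administrations into one continuous per-person record — can be sketched as follows. The field names, wave labels, and data are hypothetical assumptions for illustration, not actual TESA records or procedures.

```python
# Illustrative sketch of compiling separate survey waves into a single
# continuous record per person. Keys and values are invented.
entry_2005 = {"id_501": {"ltsj_mean": 3.2}, "id_502": {"ltsj_mean": 3.5}}
exit_2009 = {"id_501": {"ltsj_mean": 4.1}}  # id_502 did not respond at exit

def compile_longitudinal(waves):
    """Merge {person: record} dicts from several waves into one record
    per person, prefixing each field with its wave label."""
    combined = {}
    for wave_label, wave in waves.items():
        for person, record in wave.items():
            row = combined.setdefault(person, {})
            for field, value in record.items():
                row[f"{wave_label}_{field}"] = value
    return combined

record = compile_longitudinal({"entry": entry_2005, "exit": exit_2009})
# A non-respondent at a later wave simply lacks those fields, rather
# than blocking the merge -- important given the declining response
# rates for the year-out surveys noted above.
print(record["id_501"])  # {'entry_ltsj_mean': 3.2, 'exit_ltsj_mean': 4.1}
```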

Accountability in initial teacher education: programme improvement

As we have noted, there are many debates about the purposes of accountability in higher education generally and in initial teacher education more specifically. Despite these debates, three major purposes are commonly discussed: improving students' learning, supporting the claims institutions make about programmes and graduates, and informing continuous programme improvement (Allen and Bresciani 2003; Braun 2009; Malandra 2008; Shulman 2007). In this section of the article, we consider each of these purposes and illustrate how the accountability model we described above supports continuous programme improvement in keeping with each of these purposes.

Figure 3. Boston College TNE data links.


Improving students’ learning

One of the things we learned from our experiences developing the initial teacher education accountability model is that a systematic internal accountability system, based on local but rigorous measures, can also be used to meet the expectations of external evaluators while remaining flexible enough to address the institution's unique programmatic objectives, and thus also inform continued internal improvement. For example, as part of a recent university-wide regional accreditation review, BC was asked to submit a report that focused on the institution's approaches to the assessment of student learning at the undergraduate and graduate levels that went beyond regular course grading. In response to this request, we used TESA's database, analytic procedures, and report generation capabilities to compile summaries of the diverse measures and assessments used in the department's internal accountability system to assess students' learning. These included a variety of data, such as: students' scores on licensure examinations and standardised admissions assessments; scores on the programme's capstone inquiry project; rates of completion of courses and programmes; data from the surveys; ratings on a capstone Pre-Service Performance Assessment-Plus (PPA+); and analyses from a series of qualitative case studies. Taken together, these measures and assessments provided rich and comprehensive evidence of students' learning in what the higher education accountability discourse refers to as both cognitive and non-cognitive areas, thus meeting both internal and external accountability demands.

A second example of accountability for student learning speaks more to internal purposes and involves the surveys administered to students at different points during the programme and after graduation. As noted earlier, embedded in the entry, exit, and one-year-out surveys is a 12-item Likert-response Learning to Teach for Social Justice-Beliefs (LTSJ-B) scale (Ludlow, Enterline, and Cochran-Smith 2008; Enterline et al. 2008). This scale was developed to reflect the social justice goals and commitments of the programme. Rasch model psychometric analyses demonstrated that the LTSJ-B scale could detect changes in beliefs related to teaching for social justice over time and across cohorts of teacher candidates. Further statistical analyses revealed both that the mean scale scores of exiting teacher candidates and first-year teachers exceeded those of entering teacher candidates, and that there were greater differences between entering and exiting scores for successive cohorts of teacher candidates, concomitant with a more explicit emphasis on social justice in the programme. According to one teacher education faculty member, through the development of the LTSJ-B scale the teacher education faculty 'clarified the department's whole idea about what it means to do teacher education for social justice' (BC TNE Survey Team 2008). These findings enabled the teacher education faculty to clearly document learning across time in the form of increased commitment to teaching for social justice.
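The kind of cohort comparison described above can be sketched in simplified form. To be clear about what this is not: the article's analyses used Rasch measurement, whereas the sketch below uses raw Likert-style means, and all of the numbers are invented for illustration.

```python
from statistics import mean

# Invented scale scores for two cohorts at entry and exit. The actual
# study used Rasch-scaled LTSJ-B scores, not raw means like these.
scores = {
    ("2005", "entry"): [3.1, 3.4, 3.0, 3.3],
    ("2005", "exit"):  [3.6, 3.9, 3.5, 3.8],
    ("2008", "entry"): [3.2, 3.3, 3.1, 3.4],
    ("2008", "exit"):  [4.1, 4.3, 4.0, 4.2],
}

def entry_exit_gain(cohort):
    """Mean exit score minus mean entry score for one cohort."""
    return mean(scores[(cohort, "exit")]) - mean(scores[(cohort, "entry")])

# A larger entry-to-exit gain for the later cohort would be consistent
# with the article's finding that successive cohorts, taught under a
# more explicit social-justice emphasis, showed greater differences.
print(round(entry_exit_gain("2005"), 2))  # 0.5
print(round(entry_exit_gain("2008"), 2))  # 0.9
```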

Along similar lines, another theme of the teacher education programme is to prepare teachers to meet the needs of all students, including students who have been historically marginalised and those who come from a variety of backgrounds. A 10-item Teaching Diverse Learners (TDL) scale on the exit and one-year-out surveys reflects this focus of the programme. From 2005 to 2010, the mean score on the TDL scale increased significantly. Moreover, analyses at the item level found a significant increase on the specific item pertaining to preparing teacher candidates to work with students with different linguistic backgrounds. These increases suggest that, across time, graduating teacher candidates felt more prepared specifically to teach English language learners (ELLs) and bilingual students. More importantly, they suggest that greater programmatic emphasis on these issues was effective in fostering students' learning.

Our point here is to make the case that local and institution-specific measures of students' learning can function quite effectively to document learning over time. The same kind of information can be used to respond to both external and internal accountability demands and to foster continuous programme improvement.

Supporting claims

A second major purpose of initial teacher education accountability is to support the claims that faculties and institutions make about their programmes and graduates. We use BC's recent national accreditation experience to demonstrate how a local accountability system can be used to support institutional claims. To earn national accreditation from the Teacher Education Accreditation Council (TEAC), teacher education programmes must provide reliable and valid evidence for claims made about the programme, teacher candidates currently enrolled, and graduates who have completed the programme (Murray 2005). At Boston College, teacher education faculty and administrators developed six claims about the programme and about candidates' and graduates' knowledge, skills, commitments, and performance. To support these claims, we drew on a variety of evidence sources from the original evidence portfolio and from other sources. For example, to support the claim that teacher candidates and graduates 'believe in and are committed to teaching for social justice, defined as improving the learning of all pupils and enhancing their life chances', we drew on survey data from two scales, interviews and observations in the qualitative case studies, and performance scores from candidates' inquiry projects.

The use of multiple sources of evidence to support a programme's claims is an important characteristic of the TEAC accreditation process. The evidence the BC programme had developed was more varied and extensive than what was required, and the TESA system provided easy and linked access to this information. The BC programme's use and organisation of its multiple data sources has subsequently been highlighted as an exemplar by TEAC in its presentations and discussions with other initial teacher education programmes.

Informing programmatic change

The third purpose of an initial teacher education accountability model is to inform programmatic improvements, including policies and practices regarding everything from admissions, programme completion, and clinical experiences to curriculum, resource allocation, and faculty assignments. In our work at BC, we consciously took an exploratory and local approach, asking questions for which there were no a priori answers. We regarded this exploratory and internal approach as standing in marked contrast to the confirmatory approach often involved in accreditations, where the goal is to verify compliance with external standards and there is little room for identifying actual internal problems or posing genuine and situation-specific questions that might inform changes in curriculum or programme structures. Below we illustrate how we used assessment evidence to inform programmatic change through a process of continuous feedback.

The 10 questions on the programme surveys that make up the 'inquiry' scale ask teacher candidates to assess how well prepared they are to engage in inquiry in their classrooms, including reflecting on and evaluating theories of teaching, seeking and using feedback to guide instruction, and applying recent research in education. Overall, from 2005 through 2010, most respondents rated their preparedness in this area as 'good' or 'excellent'. However, there was a seeming contradiction between this level of perceived preparedness on the 10 items of the inquiry scale and a separate survey item rating the 'inquiry seminar', which was required of all students and was intended to be part of the capstone student teaching experience. Across several years, teacher candidates' ratings of the inquiry seminar were between 'fair' and 'good'. This finding suggests that teacher candidates were critical of the inquiry seminar while positive about their preparation to actually engage in inquiry in their classrooms. Based on this evidence and a separate in-depth content analysis of a sample of inquiry projects, faculty are currently revising the scope and sequence of the inquiry seminar as well as the requirements and evaluation rubric for the inquiry project. As programmatic changes are made, survey data and scoring data from the inquiry project will provide an indication of the impact of these changes and inform further revisions to the programme.

Conclusion

This paper has described, and illustrated with key examples, a locally developed, programme-specific initial teacher education accountability model. We have shown that this local accountability model can also be used to meet external accountability demands, such as those made by national professional accreditors or regional university accreditors. The principles underlying this model emphasise multiple measures and assessments that are conceptually linked by an over-arching framework consistent with the values, mission, and goals of faculty, administrators, and other stakeholders, and that are empirically linked by a robust data management system. Taken together, these make for a practical system of assessments, analysis, and data management that can be used to track student learning, provide evidence for programme and institutional claims, and contribute to a system of continuous improvement.

We conclude with four main points. First, it is important to reiterate that the faculty and administration assumed the task of measuring what is locally valued by developing and implementing a variety of process and outcome measures. This was part of a concurrent movement to build a culture of evidence in the LSOE and the Teacher Education programme. As we learned, changing the culture of teacher education, so that decisions are made in part on the basis of evidence, is complex and multi-layered. It has been a continuous and challenging process over the course of the past six years.

Second, the database management system has evolved, and continues to evolve. It provides an unprecedented opportunity for programme faculty and graduate students to assess the experiences and learning of teacher candidates from the day they enter the programme to three years after graduation. These possibilities include linking teacher candidates' and graduates' beliefs about and commitment to teaching for social justice with their reported practices longitudinally, from the time of entry into the programme through the first three years of teaching, as well as further integrating and linking TESA with databases outside of BC, such as those that house state standardised assessments of graduates' pupils' learning.

Third, although many of the assessments and instruments we describe here were locally developed, they are applicable to teacher education programmes outside of LSOE. The surveys, in particular, were designed to assess initial teacher education at Boston College, but they have broader applicability and are of interest to other national and international teacher education programmes as well. For example, preliminary comparative validity analyses suggest that the LTSJ-B scale is invariant across teacher education programmes at Boston College, Saint Patrick's College (Dublin, Ireland), the University of Auckland (Auckland, New Zealand), and the University of Puerto Rico (San Juan, Puerto Rico, where the scale has been translated into Spanish) (Ludlow et al. 2010).

Finally, we want to suggest that the programme-specific accountability model with the four components we have elaborated on in this article is potentially applicable to any higher education institution or programme seeking to respond to internal and external accountability demands. In particular, from a policy perspective, our work suggests that it is possible to design and implement a powerful accountability system, based on local measures, that meets the demands of external evaluators and auditors while remaining flexible enough to address an institution's unique programmatic objectives.

The level of effort is high but warranted, especially if the alternative is a standardised, generic assessment used alone. The return for this effort is a system that is tied specifically to the institution and is responsive both to its internal needs and to external demands for accountability. Of course, the ability to construct the kind of system we describe here depends on individual institutional resources, especially the availability and expertise of personnel, the time and energy required, and the need for a long-term commitment to developing, honing, and using the system for continuous improvement. Furthermore, longitudinal projects present their own unique challenges, and not every challenge to the implementation and success of the system can be foreseen (Ludlow et al. 2010). Even institutions with very limited resources, however, can develop simple systems in keeping with our major principles: measures that reflect local mission and values, multiple measures over time, and the enlisting of faculty and other stakeholders as partners in the process.

Acknowledgements

More people were involved in the research covered here than are specified as authors; the research would not have been possible without them all. Boston College's 'ET' is part of the Teachers for a New Era national teacher education initiative. The team includes BC faculty members Marilyn Cochran-Smith (chair), Alan Kafka, Larry Ludlow, Pat McQuillan, Joe Pedulla, and Jerry Pine; administrators Jane Carter, Sarah Enterline, Jeff Gilligan, and Fran Loftus; and graduate students Joan Barnatt, Robert Baroz, Mac Cannady, Deborah Parker Cantor, Stephanie Chappe, Lisa D'Souza, Ann Marie Gleeson, Apryl Holder, Jiefang Hu, Cindy Jong, Kara Mitchell, Tracy McMahon, Emilie Mitescu, Aubrey Scheopner, Karen Shakman, Yves Salomon-Fernandez, and Diana Terrell. Although there were a few changes over the years, the composition of the team has been remarkably stable.


References

Allen, J., and M.J. Bresciani. 2003. On the transparency of assessment results. Change January/February: 21–3.

Allen, M. 2003. Eight questions on teacher preparation: What does the research say? Denver, CO: Education Commission of the States.

Association of American Colleges and Universities and the Council for Higher Education Accreditation. 2008. New leadership for student learning and accountability: A statement of principles, commitments to action. Washington, DC: Association of American Colleges and Universities and the Council for Higher Education Accreditation.

BC TNE Survey Team. 2008. Results from the teacher education faculty questionnaire. Unpublished report.

Bollag, B. 2006. Making an art form of assessment. The Chronicle of Higher Education 53, no. 10: A8–A10.

Boston College Lynch School of Education, Department of Teacher Education, Special Education and Curriculum and Instruction. 2009. Inquiry brief, representing all initial license teacher educator programme options. Submitted to the Teacher Education Accreditation Council.

Braun, H. 2009. Five minutes with the next president. Paper presented at the Boston College Lynch School of Education Endowed Chairs Colloquium, 25 March.

Cochran-Smith, M. 2005. The new teacher education: For better or for worse? Educational Researcher 34, no. 6: 3–17.

Cochran-Smith, M., and the Boston College Evidence Team. 2009. Reculturing teacher education: Inquiry, evidence, and action. Journal of Teacher Education 60, no. 5: 458–68.

Cochran-Smith, M., E. Mitescu, K. Shakman, and the Boston College Evidence Team. In press. Just measures: Social justice as a teacher education outcome. Teacher Education and Practice.

Cuban, L. 2004. Looking through the rearview mirror at school accountability. In Holding accountability accountable, ed. K. Sirotnik, 18–34. New York: Teachers College Press.

Education Commission of the States. 1998. The progress of education reform, 1998. Denver, CO: Education Commission of the States.

Ekman, R., and S. Pelletier. 2008. Assessing student learning: A work in progress. Change July/August. http://www.changemag.org/Archives/Back%20Issues/July-August%202008/full-assessing-student-learning.html.

Elmore, R. 2002. The testing trap. Harvard Magazine 105, no. 1: 35.

Enterline, S., M. Cochran-Smith, L.H. Ludlow, and E. Mitescu. 2008. Learning to teach for social justice: Measuring change in the beliefs of teacher candidates. The New Educator 4: 267–90.

Eubanks, D. 2006. The problem with standardized assessment: There are other, better ways than high-stakes testing to hold institutions accountable for making good on the promises of higher education. http://www.allbusiness.com/company-activities-management/financial/7934973-1.html.

Frye, R. 1999. Assessment, accountability, and student learning outcomes. Dialogue 2: 1–12.

Garcia, A.E., and J.M. Pacheco. 1992. A student outcomes model for community colleges: Measuring institutional effectiveness. Paper presented at the North Central Association of Colleges and Schools Commission, Chicago, 21–24 March.

Hersh, R.H. n.d. Teaching to a test worth teaching to in college and high school. http://www.cae.org/content/pdf/teaching_to_a_test_worth_teaching_to_reformat_.pdf.

Kennedy, M. 1999. The problem of evidence in teacher education. In The role of the university in the preparation of teachers, ed. R. Roth, 87–107. Philadelphia: Falmer Press.

Labi, A. 2007. International assessment effort raises concerns among education groups. The Chronicle of Higher Education 54, no. 5: A31–A.

Linn, R. 2003. Accountability: Responsibility and reasonable expectations. Educational Researcher 32: 3–13.

Ludlow, L., S. Enterline, and M. Cochran-Smith. 2008. Learning to teach for social justice: An application of Rasch measurement principles. Measurement and Evaluation in Counseling and Development 40, no. 4: 194–214.

Ludlow, L.H., S. Enterline, M. O'Leary, F. Ell, V. Bonilla, and M. Cochran-Smith. 2010. Learning to teach for social justice-beliefs (LTSJ-B): An international construct invariance study. Paper presented at the American Educational Research Association annual meeting, May 3, in Denver, CO.

Ludlow, L.H., J. Pedulla, M. Cannady, E. Mitescu, S. Chappe, S. Enterline, F. Loftus, D. Cantor, and T. McMahon. 2010. Methodological challenges in conducting longitudinal multi-cohort teacher retention analyses. Paper presented at the American Educational Research Association annual meeting, April 30, in Denver, CO.

Ludlow, L., J. Pedulla, S. Enterline, M. Cochran-Smith, F. Loftus, Y. Salomon-Fernandez, and E. Mitescu. 2008. From students to teachers: Using surveys to build a culture of evidence and inquiry. European Journal of Teacher Education 31, no. 4: 1–19.

Malandra, G.H. 2008. Accountability and learning assessment in the future of higher education. On the Horizon 16, no. 2: 57–71.

Murray, F.B. 2005. On building a unified system of accreditation in teacher education. Journal of Teacher Education 56, no. 3: 307–17.

Phillips, D.C. 2007. Adding complexity: Philosophical perspectives on the relationship between evidence and policy. In Evidence and decision making: The 106th yearbook of the National Society for the Study of Education, ed. P. Moss, 376–402. Malden, MA: Blackwell.

Schagen, I., and D. Hutchinson. 2007. Comparisons between PISA and TIMSS: We could be the man with two watches. Education Journal 101: 34–5.

Shavelson, R.J. 2007. Assessing student learning responsibly: From history to an audacious proposal. Change January/February. http://www.changemag.org/Archives/Back%20Issues/January-February%202007/abstract-assessing-responsibly.html.

Shulman, L.S. 2007. Counting and recounting: Assessment and the quest for accountability. Change January/February. http://www.changemag.org/Archives/Back%20Issues/January-February%202007/full-counting-recounting.html.

Sirotnik, K. 2004. Holding accountability accountable. New York: Teachers College Press.

USA Department of Education. 2006. A test of leadership: Charting the future of U.S. higher education. Washington, DC: USA Department of Education.

Williams, B., A. Mitchell, and T. Leibbrand, eds. 2003. Navigating change: Preparing for a performance-based accreditation preview. Washington, DC: National Council for the Accreditation of Teacher Education.

Wineburg, M. 2006. Evidence in teacher preparation: Establishing a framework for accountability. Journal of Teacher Education 57, no. 1: 51–64.
