Transcript
Page 1: Attitudes towards school self-evaluation

Studies in Educational Evaluation 35 (2009) 21–28

Attitudes towards school self-evaluation

Jan Vanhoof *, Peter Van Petegem, Sven De Maeyer

Antwerp University, Institute of Education and Information Sciences, Universiteitsplein 1, 2610 Wilrijk, Belgium

Abstract

Research reveals that a positive attitude towards self-evaluation is a pre-condition which favours successful school self-evaluation. This article describes how self-evaluation is regarded in schools and investigates whether school characteristics can explain differences in the attitude of individuals. We report on a survey study conducted among 2716 school principals and teachers in 96 schools. Our research shows that respondents expressed themselves more positively with regard to the possible results of self-evaluation than with regard to the self-evaluation process itself. We also found that school principals exhibit a more positive attitude than teachers. Multi-level analyses demonstrate that the attitude towards self-evaluation is related to the characteristics of the broader functioning of the school where the respondent works (such as school culture and whether or not the school concerned meets the criteria of a professional learning community).

© 2009 Elsevier Ltd. All rights reserved.


Introduction

Schools are increasingly being asked to shoulder a greater proportion of the responsibility for developing and guaranteeing educational quality, which involves, among other things, their being expected to engage in self-evaluation. This means that they are required to arrive at an appraisal of their current functioning (strengths and weaknesses) as a point of departure for a plan or vision for the future. Self-evaluation is a procedure which is initiated and carried out by the school in order to describe and evaluate its own functioning (Blok, Sleegers, & Karsten, 2005, p. 3). In the last decade self-evaluation formats have become – or are in the process of becoming – a commonplace activity in many schools.

An analysis of the literature on self-evaluation reveals that expectations with regard to results are to a large extent positive, although there is, as yet, limited evidence to support this positive picture. Despite the fact that the introduction of self-evaluation is widely applauded, there are serious question marks about the quality of self-evaluations as they are currently practised. This raises the issue as to how far self-evaluations are being implemented in a manner which will yield worthwhile results and how differences in the quality of self-evaluations can be explained. The attitude towards self-evaluation is often suggested as a crucial factor in this. Self-evaluation can only work if team members are positively disposed towards it (McBeath, 1999). Thus, one pre-condition which favours a worthwhile self-evaluation is

* Corresponding author. Tel.: +32 32204143; fax: +32 32204998.

E-mail address: [email protected] (J. Vanhoof).

0191-491X/$ – see front matter © 2009 Elsevier Ltd. All rights reserved.

doi:10.1016/j.stueduc.2009.01.004

achieving an awareness that self-evaluation is a meaningful and fruitful activity. ‘All things considered, self-evaluation is not something to be embarked on lightly’ (van Aanholt & Buis, 1990, p. 19).

There are indications that attitudes towards self-evaluation are generally not positive and it would appear that there is insufficient awareness in schools of the objectives and usefulness of self-evaluation (Schildkamp, 2007). The fact that schools in Flanders mostly have experience with self-evaluations that are imposed on them by government would also be likely to contribute to self-evaluation being seen more as an obligation – the principal objective of which is ‘being compliant’ (i.e. meeting one’s statutory and regulatory obligations) – rather than as a tool for improving the school’s functioning as an educational institution (Van Petegem, Verhoeven, Buvens, & Vanhoof, 2005). There is also evidence of a lack of openness within school teams and an unwillingness on the part of schools to look critically at their own performance. It would seem, therefore, that staff are often not mentally ready for carrying out a self-evaluation. Moreover, it is further apparent that, in many schools, identifying and confronting problems, questions, doubts, etc., and discussing these openly is by no means standard practice (Schildkamp, 2007). Evaluation and self-evaluation are still all too often seen as something threatening. Teachers still jealously guard their autonomy in the classroom and regard evaluation (self- or external) as a form of social control (MVG, 2006; Van Petegem et al., 2005). A positive attitude – which is the necessary foundation for implementing self-evaluation – is thus often absent. Among other things, the perceived onerousness of self-evaluation appears to play an innovation-inhibiting role. The same studies also suggest that school principals and teachers share the same resistance towards the added paperwork that a self-evaluation brings with it (Van Petegem et al., 2005). Nevertheless, the authors claim that head teachers are generally more positively disposed towards self-evaluation and that they are more convinced of the usefulness of self-evaluation activities than are their teachers.

The attitude of principals towards self-evaluation is generally positive. Thus, head teachers in the Netherlands, for example, regard self-evaluation as a useful and instructive undertaking. They take the view that self-evaluation yields a reliable picture (Blok et al., 2005) and see it as a viable activity, although admitting that it takes up a lot of time. McBeath, Meuret, Schratz, and Jakobsen (1999) arrive at similar conclusions in their evaluation of a project in which 100 European schools carried out a self-evaluation. There are also indications that the attitude of head teachers is markedly more positive than that of teachers. Van Petegem et al. (2005, p. 288) state, for example, that ‘the attitude of teachers vis-à-vis self-evaluation is often quite dismissive. School heads are by and large more positively disposed and are more convinced of the usefulness of self-evaluation activities’. These findings coincide with an ongoing trend whereby carrying out self-evaluation is becoming an increasingly important component of the quality assurance system used in education. In this context, knowledge and understanding of attitudes towards self-evaluation is crucial, not least because of the existing empirical evidence pointing to the link between conceptions and behaviour (Kellgren and Wood, 1986).

Another question which needs to be asked concerns the school characteristics which are related to attitudes towards self-evaluation. The impact on school policy of organizational effectiveness (Quinn and Rohrbaugh, 1983) and of school characteristics which might suggest the existence of a professional learning community has repeatedly been demonstrated (Griffith, 2003; Levine & Lezotte, 1990; Mortimore, Sammons, Stoll, Lewis & Ecob, 1988; Sammons, Hillman & Mortimore, 1995). The expectation is thus that school characteristics – such as encouragement by the head teacher – can influence attitudes towards self-evaluation in both a positive and a negative sense (Saunders, 2000). Although attitudes towards self-evaluation have already been the subject of research (Blok et al., 2005; Devos et al., 1999; McBeath et al., 1999), the link with school functioning has not yet been explicitly investigated. This link is, however, of considerable importance in a policy context in which schools are expected to design and execute their own self-evaluation initiatives on an autonomous basis. This expectation could place very great demands on schools with certain characteristics. Lack of evidence regarding the impact of school characteristics on attitudes towards self-evaluation makes it difficult to assess exactly what can be expected of schools in this area. Kyriakydes and Campbell (2004, p. 32) have expressed the view that ‘the field of school self-evaluation is in an early stage of development’.

All of this suggests that a considerable investment still needs to be made in expanding the knowledge base regarding attitudes towards self-evaluation and how the attitudes of team members and head teachers are related to the characteristics of the schools in which they work. This article sets out to make a contribution in this direction by formulating an answer to the following research questions: (1) What is the attitude of school principals and teachers towards self-evaluation and what are the differences in this regard? and (2) What school characteristics can explain the differences observed in this attitude?

We expect that the average attitude of all respondents towards self-evaluation will be positive to a limited degree. However, this general picture conceals a large variation. We expect, for example, that head teachers will express themselves more positively with regard to self-evaluation than teachers. Equally, we expect that the null hypothesis which states that there are no differences between respondents and schools with respect to this attitude will have to be rejected. We also expect that the attitude of the respondents towards self-evaluation will be more positive in accordance with the extent to which schools (a) are more organizationally effective, in the view of those respondents, and (b) more closely correspond to the idea of a professional learning community. In the course of this article we will clarify these concepts from a theoretical point of view and discuss their relevance.

Theoretical framework and operationalizations

The attitude of team members towards self-evaluation has to be situated and understood in the context in which this attitude arises. That context is in reality very broad and thus needs to be defined and demarcated. In order to examine relevant school-related characteristics, we used two lines of approach: school culture and the extent to which the school concerned can be characterized as a ‘professional learning community’. These school-related characteristics constitute the independent variables in our model. The dependent variable is the attitude towards self-evaluation. The different variables are explained below. We will discuss the psychometric characteristics of the tools used in the section ‘Instruments and school clustering’.

Dependent variable: attitudes towards self-evaluation

This study focuses on attitudes towards self-evaluation. We set out to describe the attitude of head teachers and team members and to explain the differences which exist. Specifically, we are interested in their personal attitudes vis-à-vis self-evaluation. An attitude indicates how positively or negatively an individual stands with regard to a particular issue (Petty and Wegener, 1998). Given that self-evaluations are complex phenomena, we looked at attitudes towards several different aspects. In order to permit comparison with the findings of other research, we operationalized attitudes towards self-evaluation using contrasting statements drawn from earlier studies (Blok et al., 2005; Meuret & Morlaix, 2003). These are, for example: self-evaluation ‘tells us nothing new’ versus ‘teaches us a great deal’; self-evaluation ‘takes up a lot of time’ versus ‘takes up very little extra time’; and self-evaluation ‘only involves a few people’ versus ‘involves everybody’. The statements elicit respondents’ views concerning the purpose and value of self-evaluation; its viability and complexity; the expected effects; the systematic nature of self-evaluations; and the extent to which they are perceived as revealing useful insights.

Independent variables

In setting out the objectives of the study above we referred to school culture and to school characteristics which constitute the hallmarks of a professional learning community. These are the independent variables which we have adopted in order to look at differences in school functioning. Our decision to opt for these two lines of approach was based on evidence which suggests that they are related to various school process and product characteristics, and on the opportunities they offer in terms of examining differences between schools. Given that research into self-evaluations in schools is still a largely uncharted territory, connecting our research to these established lines of approach seemed to us to be a logical choice.

School culture: organizational effectiveness of the school

In order to examine school culture we used the effectiveness perspectives identified by Quinn and Rohrbaugh (1983). Quinn and Rohrbaugh (1983, p. 369) developed a methodology for ascertaining the effectiveness of an organization using four effectiveness perspectives, which in turn are determined by two independent dimensions. The first dimension concerns the organizational focus of the entity (in this case, the school) and ranges from an internal focus on the people in the organization to an external focus on the organization itself. The second dimension represents the contrast between stability and control on the one hand, and flexibility and change on the other. These two dimensions result in four quadrants. The four perspectives are: the ‘Human relations model’ (creating close-knit teams/meeting groups in order to achieve openness, collaboration, loyalty, involvement and motivation); the ‘Open system model’ (modifying objectives in good time to achieve optimal suitability with respect to the context); the ‘Internal process model’ (creating clear rules in order to achieve control and stability); and the ‘Rational goal model’ (ensuring that activities are appropriately geared towards the achievement of productivity and efficiency).

The above model is also referred to as the ‘competing values model’, as the dimensions in the model seem at first sight to convey contradictory messages. For example, we want schools to be controllable and stable organizations, but we also expect adaptability and flexibility. The different quadrants can thus better be approached as a cohesive whole. The four quadrants – and the values which underlie them – are very illuminating as a means of understanding the attitude and behaviour of people in schools. Furthermore, although the different characteristics of the model may well seem to contradict each other, they are not mutually exclusive (Quinn & Rohrbaugh, 1983, p. 374). What makes these quadrants so instructive is the principle that all four must apply to at least a minimum extent in order to be able to regard a school as an effective organization (Quinn, Faerman, Thompson, & McGrath, 2003). It is up to schools to find a suitable balance between the dimensions. The desired balance cannot be established on a uniform basis (Maslowski, 2001). In the analysis we will work with clusters of schools (school profiles) in order to adequately reflect possible variations in the balance. The operationalization of the effectiveness perspectives is based on the ‘School Culture Inventory’ (Maslowski, 2001). Respondents are asked to what extent they are of the opinion that other people in their school regard a given set of values as important. For the internal process model, for example, the following values are included: stability, continuity, formal approach, consistency, control, regulation and security.

The school as a professional learning community

Operating an effective school policy is now less and less seen as a process of incorporating knowledge developed elsewhere into the specific approach adopted in the individual school concerned. Nonaka and Takeuchi (1995), for example, take the view that implementing innovation and policy in organizations depends on the capacity to create knowledge inside the organization. An entity’s capacity to develop and apply knowledge at all levels is increasingly regarded as a core organizational competency. With regard to professional development in schools there is, in other words, a change of emphasis from instruction (processing knowledge) towards learning (creating knowledge) (Donnenberg, 1999; Patriotta, 2004; Wenger, 1998). The most successful organizations – irrespective of whether these are profit or non-profit organizations – appear to be able to link the personal development of the individuals who make up the organization to an improved effectiveness at organizational level (Senge, 2001). In the context of research into self-evaluations it is also argued that connections should be sought with the underlying principles of a professional learning community (Devos et al., 1999, p. 38). In order to explain differences in attitudes towards self-evaluation, we specifically focused – within the concept of a ‘professional learning community’ – on learning activities and learning outcomes in schools and on the attitude of team members with respect to their own learning.

It is also true of schools that the attitude of individuals towards their own learning is an important component of the organization. In order to investigate the conceptions regarding their own learning held by school staff we used a framework drawn from the socio-constructivistic theory of learning. The core concept behind this theory is that learning is a process which is active, constructive, cumulative, self-regulatory, target-oriented, context-related and social (Herman, Aschbacher, & Winters, 1992). In order to elicit respondents’ perceptions with regard to their own learning we again used contrasting statements. For example: ‘Learning is passively receiving information’ versus ‘Learning is thinking critically and making connections’; and ‘I don’t think that you get much from discussions with other people’ versus ‘You learn a lot from comparing your opinion with that of other people.’

Learning activities and learning results focus on the nature of the learning processes which take place in schools. In the present study we investigated variations in these areas using two lines of approach: the distinction between ‘single loop’ and ‘double loop’ learning (Argyris & Schon, 1978) on the one hand, and the theory of knowledge creation (Nonaka & Takeuchi, 1995) on the other. Argyris and Schon (1978, p. 2) believe that the way organizations (in this case schools) react to problems is indicative of the learning character of the organization. Learning in a learning organization is a dynamic process whereby not only is new knowledge added to the knowledge base of the organization (single loop learning), but dominant conceptions, norms, policy directions and objectives are also modified (double loop learning). Double loop learning is a crucial element in successful self-evaluations (Scheerens, 2004). In single loop learning, when something goes wrong, another strategy is sought which makes it possible to arrive at a solution, but this is done within the existing framework of objectives, values, plans and rules. The framework itself is not brought into question. In double loop learning, on the other hand, assumptions are challenged and (practical) theories are compared in order to come up with a new theory. This kind of learning may indeed result in a modification of the objectives, values, plans and rules of the organization. When collecting data for the present study, we asked the respondents to indicate how often people in their school reacted to problems in a given number of ways. Examples of such reactions are (single loop): doing nothing; failing to recognize the problem or minimalizing it; and analyzing the problem, but taking no action; and (double loop): gathering the different opinions of the various parties involved; collecting objective data in order to examine the problem; and reflecting critically on their own values and norms.

The final independent variable is based on the theory of knowledge creation (Nonaka & Takeuchi, 1995). Nonaka and Takeuchi (1995) take the view that, in the Western world, there is often an excessive emphasis on explicit learning in the form of, for example, education and courses, and that a greater value should be attached to ‘tacit’ learning, in which the use of experience and intuition plays an important role. They focus on four types of learning processes which, in their view, constitute the ‘motor’ of the knowledge creation process. For Nonaka and Takeuchi the conversion of knowledge is always a social process. It is therefore not a process which takes place in individuals, but rather a process between individuals. The model focuses on four types of knowledge conversion: ‘externalization’, which is the process by which personal knowledge is expressed in explicit concepts; ‘combination’, which is the process by which concepts are synthesized into a knowledge system; ‘internalization’, which is the process by which explicit knowledge comes to form part of personal knowledge; and ‘socialization’, which is the process by which experiences are shared and personal knowledge is created. In order to bring about organizational knowledge creation, the accumulation of personal knowledge at individual level must be transferred to other members of the organization through socialization, thereby creating a knowledge spiral (Nonaka & Takeuchi, 1995, p. 82). If this does not occur the existing knowledge base of the organization will not be expanded (Simons & Ruijters, 2001). According to Nonaka and Takeuchi (1995, p. 84) only an interaction between personal and explicit knowledge can create opportunities for innovation. They also describe organizational knowledge creation as a ‘continual and permanent interaction between personal and explicit knowledge’ and refer to an accompanying knowledge spiral. The theory of knowledge creation has been operationalized by examining to what extent processes in the four components of the knowledge spiral take place in schools (Chou & Tsai, 2004; von Krogh, Ichijo, & Nonaka, 2000). Examples of such items include: to what extent is it a customary practice in your school to ‘report on your own experiences’, to ‘receive coaching from experts’, to ‘put ideas which have been acquired on a training course into practice’ or ‘to observe other people at work’?

Methodology

This article reports on a survey study conducted among school principals, teachers and support staff examining their attitude towards self-evaluation and how they perceive the functioning of their school. The target population used in this study consisted of all those schools in Flanders (including both primary and secondary schools) which carried out an instrumentalized self-evaluation in the academic years 2004–2005 or 2005–2006. ‘Instrumentalized’ means that a written questionnaire was issued to teachers. This population cannot be clearly described and demarcated, as there is no database of Flemish schools which have conducted a self-evaluation using a questionnaire for teachers. Given that we were unable to take a sample in the classic meaning of the term, we applied a number of principles in looking for potential sample schools. One of these principles was to survey as many schools as possible with a view to reducing sample error (Creswell, 2005). A second principle was to try to reflect the expected variation in the target population, by which we mean, for example, variation in educational level, variation in school network, variation in tools used and variation in previous experiences with self-evaluation. Our aim was not to represent the variation in the target population (which is in any case unknown) proportionally, but, instead, to ensure that the cases in the sample

Table 1. Scale characteristics of the dependent and independent variables.

Scale                                      Number of items   N      CMIN
Attitude toward self-evaluation                  12         1313   174.0
EFP: Internal process model                       7         2042    30.2
EFP: Human relations model                        7         2042    31.9
EFP: Open system model                            7         2042    11.8
EFP: Rational goal model                          7         2042    17.1
Socio-constructivistic view on learning           6         2519    27.2
Single versus double loop learning               12         2056    53.6
Knowledge creation in the school                 13         2090   291

Table 2. Results of the cluster analysis of schools.

Cluster                                                 Number of schools     %
Cluster 1: ‘Strong flexible/weak control’                       8            8.3
Cluster 2: ‘Relatively weak flexible/average control’          49           51.1
Cluster 3: ‘Strong on all perspectives’                        39           40.6
Total                                                          96          100

represented differences in the population. For this reason, we contacted schools by a variety of different channels in order to invite them to take part (e.g. via training institutions, pedagogical supervision, governmental communication, etc.).

The sample consisted of 96 schools, of which 51 were primary schools and 45 were secondary schools. In the 96 schools taking part in the study, 2716 respondents were surveyed. The majority of the respondents were teachers: 85.1% of the respondents said that they spent most of their time teaching. Members of school management teams constituted a total of 7.3% of the sample: 3.9% of the respondents were head teachers or deputy heads, and 3.4% indicated that most of their time was taken up by management activities (policy support, technical consultancy, etc.). Finally, there was also a significant group of ancillary staff: 4.7% of the respondents were principally engaged in administrative or ICT-related support activities, and just under 3% were responsible for pedagogical or paramedical support at the schools studied.

Instruments and school clustering

Before going on to look at the actual results of our study we will report on the various preparatory analyses conducted relating to the development of scales and clusters of schools.

The scales employed were developed using confirmatory factor analyses at respondent level (SEM using AMOS). Table 1 sets out the results for the different scales (for more details see Vanhoof, 2007). The fit indicators in the table reveal ‘good’ to ‘very good’ scales. The Cronbach’s alpha values at respondent and school level (in brackets) also point in the same direction. Caution is only indicated with regard to the variable ‘socio-constructivistic vision’.
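The reliability check above rests on Cronbach's alpha. As an illustration only (the authors worked in AMOS; this is not their code), alpha for a respondents-by-items matrix of scores can be computed as:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Values near the 0.92 reported for the attitude scale indicate high internal consistency; the 0.60 for the socio-constructivistic scale is the reason caution is advised there.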

In order to include the impact of the effectiveness perspectives as a school characteristic in the analyses, schools were clustered on the basis of their average scores on these perspectives. The reason for this approach is that there are indications at school level of possible problems with regard to multi-collinearity, which is not the case at respondent level. A hierarchical clustering of the 96 schools resulted in a three-cluster solution (see Table 2). The three clusters which we have identified are ‘Strong on all perspectives’, ‘Relatively weak flexible/average control’ and ‘Strong flexible/weak control’. These groups represent 40.6%, 51.1% and 8.3% of the total number of schools, respectively.
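The clustering step can be sketched as follows. This is an illustrative reconstruction with simulated scores, not the study's data, and the linkage method (Ward) is an assumption, since the article does not specify it:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical school-level mean scores on the four effectiveness
# perspectives (rows = 96 schools; columns = human relations, internal
# process, open system, rational goal), on the 1-5 scale.
rng = np.random.default_rng(42)
scores = rng.uniform(3.0, 4.5, size=(96, 4))

# Standardize each perspective, build the hierarchy, cut at 3 clusters.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
tree = linkage(z, method="ward")        # Ward linkage is an assumption
labels = fcluster(tree, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])          # number of schools per cluster
```

Cutting the dendrogram at three clusters mirrors the three school profiles reported in Table 2.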

Table 1 (continued). Fit indices per scale (p, AGFI, CFI, RMSEA, p close) and Cronbach’s alpha at respondent (school) level.

Scale                                      p      AGFI   CFI    RMSEA   p close   Alpha
Attitude toward self-evaluation           0.00    0.96   0.99   0.05    0.37      0.92 (0.94)
EFP: Internal process model               0.00    0.98   1.0    0.05    0.48      0.82 (0.88)
EFP: Human relations model                0.00    0.98   1.0    0.04    0.80      0.93 (0.96)
EFP: Open system model                    0.02    0.99   1.0    0.03    0.93      0.92 (0.96)
EFP: Rational goal model                  0.01    0.99   1.0    0.03    0.91      0.89 (0.92)
Socio-constructivistic view on learning   0.00    0.99   0.99   0.04    0.22      0.60 (0.64)
Single versus double loop learning        0.06    0.98   0.99   0.01    1.0       0.92 (0.97)
Knowledge creation in the school          0.00    0.98   0.97   0.04    0.82      0.87 (0.91)

Table 2 (continued). Effectiveness perspectives (scale scores, ranging from 1 to 5) per cluster.

Cluster                                                 Human relations   Internal process   Rational goal   Open system
Cluster 1: ‘Strong flexible/weak control’                    4.34              3.69              3.43            3.88
Cluster 2: ‘Relatively weak flexible/average control’        4.04              3.91              3.82            3.57
Cluster 3: ‘Strong on all perspectives’                      4.42              4.11              3.92            3.95
Total                                                        4.22              3.97              3.83            3.75


Table 3. Frequencies (%), mean scores and standard deviations for the items of the ‘Attitudes towards self-evaluation’ scale (n = 2686).

Self-evaluation 1 2 3 4 5 6 Mean SD

. . . tells us nothing new/teaches us a great deal 4 8 13 31 29 17 4.24 1.29

. . . does not result in better teaching/results in better teaching 4 8 12 28 31 16 4.22 1.32

. . . takes up a lot of time/takes up very little extra time 11 22 23 24 16 5 3.27 1.38

. . . only involves a few people/involves everybody 6 16 20 24 21 12 3.75 1.43

. . . does not lead to better management/leads to better management 6 10 11 24 34 16 4.18 1.40

. . . is difficult to interpret/is easy to understand 7 14 22 30 21 6 3.64 1.31

. . . is not popular with the majority of team members/is popular 18 29 24 21 7 1 2.75 1.24

. . . is difficult to carry out/can be carried out relatively easily 6 17 24 27 19 7 3.56 1.33

. . . is not cost-effective/is cost-effective 8 12 22 29 21 6 3.62 1.34

. . . depends on chance/results in a reliable picture 4 9 20 33 28 5 3.89 1.18

. . . is subjective/is objective 10 19 22 27 19 4 3.39 1.35

. . . is a snapshot/represents our development correctly 8 14 18 30 26 6 3.69 1.34

Scale score ‘Attitude towards self-evaluation’ 3.68 0.95


Results

We will start by presenting a number of descriptive results and then go on to test the impact of the school- and respondent-related characteristics on attitudes towards self-evaluation. This will be done using multi-level analyses.

Descriptive results

Table 3 shows how the respondents stand in relation to self-evaluation. The frequency distribution shows the opinions of the various respondents on a continuum between two contrasting viewpoints. Respondents who score 1 tend towards the left-hand variant (for example: ‘self-evaluation is subjective’); respondents who score 6 tend towards the right-hand variant (for example: ‘self-evaluation is objective’). We found that, on average, respondents expressed themselves relatively positively with regard to self-evaluation. Around three quarters of the respondents subscribed to the view that one learns a lot from it and that it results in better teaching and better management. However, despite these positive expectations, self-evaluation is not popular. The question as to whether self-evaluation was popular among the majority of team members obtained the lowest average score of all (Av. = 2.75). It appears that this negative attitude cannot really be attributed to an unduly limited expectation with regard to results. In fact, the items which score low are those which relate to process characteristics. The respondents consistently indicate that self-evaluation takes up a lot of time (Av. = 3.27); that it is a subjective phenomenon (Av. = 3.39); and that it is difficult to carry out (Av. = 3.56). On average these items score below or at the midpoint of the evaluation scale (i.e. 3.5).
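The item statistics in Table 3 can be approximately recovered from the printed percentage distributions. A small check on the 'popularity' item (the percentages are rounded, which explains the small gap with the reported mean of 2.75 and SD of 1.24):

```python
import numpy as np

# Rounded % distribution for "is not popular ... / is popular" (scores 1-6),
# taken from Table 3.
scores = np.arange(1, 7)
pct = np.array([18, 29, 24, 21, 7, 1], dtype=float)

mean = (scores * pct).sum() / pct.sum()
sd = np.sqrt(((scores - mean) ** 2 * pct).sum() / pct.sum())
print(round(mean, 2), round(sd, 2))  # close to the reported 2.75 and 1.24
```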

Both the frequency distribution and the standard deviation of the average scores (see Table 3) reveal that there is considerable variation in the judgements expressed by the respondents for each item. The scale ‘attitude towards self-evaluation’ exhibits an average of 3.68 and a standard deviation of 0.95. This variation appears to be related to a number of background characteristics of schools and respondents, or at least this is what the multi-level analyses reveal.

Explanatory multi-level analyses

We used multi-level analyses in order to explain the differences in respondents’ answers, which enabled us to arrive at a decomposition of respondents’ answers with regard to their attitudes towards self-evaluation into an individual component and a school component. When interpreting the results yielded by this analysis, we considered the basic model, the gross model and the net model (see Table 4). In this table we give the estimate and standard error (in brackets) for each regression coefficient. Statistically significant effects are shown in bold type. Finally, with regard to interpretation it is also important to point out that all the variables in the multi-level models have been standardized.

The basic model reminds us that a multi-level analysis is indeed necessary. The variances at both school and respondent level are significantly different from zero. Seven percent of the total variance in the attitude of the respondents towards self-evaluation can be attributed to differences between schools. The remaining 93% must be attributed to differences between respondents within schools, which is an important finding in itself. The differences within schools are much greater than the differences between schools.
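The 7%/93% split is an intraclass correlation from the variance decomposition. As a hedged illustration (simulated data, not the study's, and a one-way ANOVA estimator rather than the authors' multi-level software):

```python
import numpy as np

# Simulate a balanced two-level structure: 96 schools with 28 respondents
# each (roughly matching the sample), with variance components chosen to
# mimic the reported split (0.07 between schools, 0.93 within).
rng = np.random.default_rng(7)
n_schools, n_per = 96, 28
school_effect = rng.normal(0.0, np.sqrt(0.07), n_schools)
y = school_effect[:, None] + rng.normal(0.0, np.sqrt(0.93), (n_schools, n_per))

# One-way ANOVA estimator of the between- and within-school variance.
ms_between = n_per * y.mean(axis=1).var(ddof=1)   # between-school mean square
ms_within = y.var(axis=1, ddof=1).mean()          # within-school mean square
var_between = max((ms_between - ms_within) / n_per, 0.0)
icc = var_between / (var_between + ms_within)     # school share of variance
print(round(icc, 3))
```

With these components the estimated ICC lands near 0.07, matching the interpretation that most variation sits between respondents within schools.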

The gross models represent the effect of each of the individual characteristics at school and respondent level without controlling for other characteristics. Independently of the question of what the differences found are to be attributed to, the gross models give us an idea of the differences between educational levels, between head teachers and team members, and between schools with and without support. We mainly found statistically significant effects for respondent-related characteristics. At school level we only find a difference between two school clusters with regard to the effectiveness perspectives: respondents from schools which score strongly on the different effectiveness perspectives score 0.32 SD higher on attitude towards self-evaluation than respondents from schools which score low on the management-orientated effectiveness perspectives. The other differences between these school clusters are not statistically significant. At respondent level it appears that it is primarily the position held in the school which is of most importance. Team members who are involved in managing the school concerned (i.e. head teachers, deputy head teachers and middle managers) score 0.59 SD higher than other team members (i.e. teachers, Equal Educational Opportunities scheme teachers and support staff). The attitude of respondents towards self-evaluation is also more positive if respondents have enjoyed some form of support; this means that the respondent concerned has, for example, recently consulted relevant professional literature or has attended a training course on self-evaluation. Finally, each of the school-related indicators included yields a statistically significant effect on the attitude towards self-evaluation. The strongest effects are found for the respondent's perception of the degree of in-depth learning and of knowledge creation in their school. For each difference of one standard deviation between respondents on these indicators, their attitude towards self-evaluation differs, on average, by 0.36 SD and 0.33 SD respectively. Another important finding is that several background characteristics of schools and respondents do not show a statistically significant relationship with the attitude of respondents towards self-evaluation. There are no statistically significant differences between the different educational levels
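Because all variables are standardized, such coefficients translate directly into score differences on the original six-point scale; a small arithmetic sketch using the scale statistics reported above (SD 0.95) and the gross-model coefficient for in-depth (double-loop) learning from Table 4:

```python
# Scale statistics reported in the text and a gross-model coefficient
# from Table 4 (perception of single vs. double loop learning).
attitude_sd = 0.95
beta_deep_learning = 0.359  # standardized coefficient (SD units)

# Expected attitude difference, in raw scale points, between two
# respondents one standard deviation apart on this predictor:
raw_diff = beta_deep_learning * attitude_sd
print(f"Expected difference: {raw_diff:.2f} scale points")  # about 0.34
```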

Table 4
Multi-level analyses 'Attitude towards self-evaluation'.

Fixed                                               Null model       Gross model      Net model
                                                    Estimate (SE)    Estimate (SE)    Estimate (SE)
Intercept                                           0.025 (0.034)                     −0.113 (0.039)

School-related predictors
  Primary (vs. secondary education)                                  0.124 (0.067)
  No self-evaluation experience (vs. SE experience)                  −0.026 (0.070)
  School size (number of teachers)                                   −0.049 (0.038)
  School culture cluster 1 (vs. cluster 2)                           −0.101 (0.129)
  School culture cluster 3 (vs. cluster 2)                           0.316 (0.062)    0.131 (0.057)

Respondent-related predictors
  School leader (vs. team member)                                    0.589 (0.072)    0.453 (0.068)
  Use of external support (vs. no use)                               0.213 (0.040)    0.108 (0.041)
  Age                                                                −0.028 (0.019)
  Perception of 'human relations'                                    0.171 (0.022)    −0.061 (0.029)
  Perception of 'internal process'                                   0.139 (0.022)
  Perception of 'open system'                                        0.192 (0.021)    0.057 (0.027)
  Perception of 'rational goal'                                      0.169 (0.021)    0.064 (0.025)
  Perception of own learning                                         0.274 (0.020)    0.154 (0.021)
  Perception of single vs. double loop learning                      0.359 (0.026)    0.210 (0.025)
  Perception of knowledge creation                                   0.326 (0.022)    0.170 (0.025)

Random
  Variance at school level                          0.071 (0.016)                     0.031 (0.010)
  Variance at respondent level                      0.936 (0.026)                     0.655 (0.022)
  R2 – reduction of variance at school level                                          30%
  R2 – reduction of variance at respondent level                                      32%

Model fit
  Deviance                                          7548                              4531
  Number of schools                                 96                                96
  Number of respondents                             2686                              1852

(primary and secondary education) or between the different school networks. The specific branch in which respondents in secondary schools mainly work (general, technical or vocational education), whether or not their school has experience with self-evaluation, and the size of the school show no relationship to their attitude towards self-evaluation. The same applies to the age of respondents.

Finally, we come to the net model. In interpreting the net model, we should start by pointing out that this model improves the predictive power by 30% at school level and by 32% at respondent level in comparison with the null model (the R2 values are calculated according to the method used by Snijders and Bosker (1999)). The first interpretative finding from the net model is that the position held by the respondent in the school concerned still appears to play a very important role after controlling for other characteristics. Independently of what they think about learning and how they perceive the school culture, members of the school management team score 0.45 SD higher than the other team members. The use of some form of support also exhibits a relationship with a more positive attitude towards self-evaluation. Finally, the net model reveals that the majority of school-related indicators have a statistically significant effect on the attitude towards self-evaluation even after controlling for other characteristics, although the regression coefficients are lower than in the gross model. The greatest impact is exerted by 'in-depth learning' and 'degree of knowledge creation'. It is worth noting that the perception of the importance of 'human relations' has a statistically significant negative effect after control: the more respondents are of the opinion that their school culture is characterized by a strong emphasis on this effectiveness perspective (i.e. internal flexibility), the less positive their attitude towards self-evaluation, once other school and respondent characteristics are controlled for.
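The respondent-level figure can be reproduced with the level-one R2 of Snijders and Bosker (1999), i.e. the proportional reduction in total (school plus respondent) variance between the null and the net model; a sketch using the Table 4 estimates:

```python
# Variance components from Table 4: null model vs. net model.
null_school, null_respondent = 0.071, 0.936
net_school, net_respondent = 0.031, 0.655

# Snijders & Bosker level-1 R^2: proportional reduction in total variance.
r2_level1 = 1 - (net_school + net_respondent) / (null_school + null_respondent)
print(f"Level-1 R-squared: {r2_level1:.0%}")  # about 32%
```

A companion level-two formula, which additionally involves the average number of respondents per school, is used for the school-level reduction.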

Conclusion

The descriptive results suggest that schools are reluctant to carry out self-evaluations. Self-evaluation is still widely regarded as something strange. It thus appears that modes of reflection – such as the self-evaluations that are the subject of the present article – which require a considerable openness to others and imply a degree of vulnerability, are still unpopular with team members. Despite this, however, there is a strong belief in the possibilities of self-evaluation and the respondents have positive expectations of it. There is a recognition that the process can produce valuable results, but the nature of the activities involved has the effect of putting people off initiating a self-evaluation, or even taking part in an externally imposed one. The causes of this wariness are thus to be found more with the process than with the product of self-evaluation. There is a perception, for example, that self-evaluation takes up a lot of time and that it is difficult to carry out. People thus believe in the potential power and value of self-evaluations, but the accompanying process causes many to hesitate when it comes to actually carrying one out. It is also possible that working practices and a lack of openness in many schools act as an obstacle to speaking freely about internal concerns and problems.

From our multi-level analyses of attitudes towards self-evaluation we can conclude that our theoretical expectations have largely been confirmed. The way in which respondents regard self-evaluation processes has not come out of thin air. The analyses clearly demonstrate that the attitude towards self-evaluation is linked to a considerable degree to the characteristics of the broader functioning of the schools concerned. As expected, the attitude of respondents is clearly more positive the more pronounced the school-related characteristics studied (organizational effectiveness and professional learning community) are. Only the limited net effect of 'human relations' points in another direction. We must not forget, however, that this effect is found after control for the other predictors in the net model. An excessive orientation towards 'human relations' might jeopardize the systematic character of the self-evaluation (Devos & Verhoeven, 2003). Furthermore, the research hypothesis on the differences between principals and teachers is also confirmed by the data. The other background characteristics of schools and respondents studied appear, however, not to be statistically significant. We also note the large variation between respondents within schools. The differences within schools are greater than the differences between schools, although it should be noted that the differences between schools are also statistically significant.

Discussion

The suggestion that in some schools team members have a positive attitude towards self-evaluation, whereas in other schools they have a negative attitude, requires a certain degree of qualification. Our study clearly demonstrates that there are also considerable differences within schools. With regard to 'attitudes towards self-evaluation', it appears that only 7% of the total variation in respondents' scores can be attributed to differences between schools. The remaining 93% must therefore be attributed to differences between respondents within schools. Thus the image of one school where everyone has a positive attitude and another where everyone has a negative attitude does not correspond to reality: there are, in fact, major differences between respondents in every school. These differences go further than just the distinction between head teachers and team members: there are also differences among team members themselves. This study shows that it is dangerous to think in terms of stereotypical images of schools and the people who work in them. Responding appropriately to the context when implementing self-evaluation processes therefore requires not just a school-related approach, but also a personalized approach.

Head teachers express themselves considerably more positively with regard to self-evaluation-related characteristics than team members. Our contention is that this must not be attributed – or at least not solely – to head teachers having a more positive view of what goes on in their school. It is not unthinkable that head teachers express themselves more positively both because they are more closely involved in the self-evaluation process and because they are in a position to determine certain aspects of the self-evaluation themselves, while ordinary teachers are not. They may well, for example, have a better view of the communication and participatory initiatives which are set up as part of the self-evaluation, or be better placed to make the link between the self-evaluation and the overall school vision (Devos & Verhoeven, 2003; Van Petegem et al., 2005). The school management team can, for example, set clear action points for themselves (and carry these out) – something which the other team members do not experience at first hand or of which they may not even be aware. A self-evaluation which is regarded as a success by the management team in a school may, therefore, be regarded by the other team members as a failure (Vanhoof, 2007). Consequently, attitudes towards self-evaluation on the part of the two groups can be influenced in different directions. The high scores of head teachers do not, therefore, necessarily have to be viewed as 'suspect', nor do they necessarily represent a statistical quirk (Watling & Arlow, 2002); they might accord perfectly with the reality of self-evaluation processes.

Attitudes towards self-evaluation are not something separate from the broader functioning of the school. This means that schools – if there is a desire to build in guarantees for successful self-evaluation – also need to score sufficiently high on these elements (Schildkamp, 2007). The school-related characteristics are, as it were, an indication of a school's 'readiness', of its being at the 'start position', or an indication that what might be called 'the basic competencies' which a school must possess in order to get the full benefit from self-evaluation are in place. Judging from the attitudes towards self-evaluation observed, it would appear that the pre-conditions required to conduct a successful self-evaluation had still not been sufficiently achieved in the schools studied. For this reason, schools should make reflection a strategic objective as part of the school's functioning. Carrying out self-evaluations of the type which we have studied is, in effect, a systematic form of reflection at team or school level. We therefore see reflection as a step towards – and a pre-condition to – the use of instrumentalized self-evaluations. Working on successful self-evaluations thus requires, first and foremost, a sufficiently developed reflective capacity (York-Barr, Sommers, Ghere, & Montie, 2001). Individuals, groups and organizations can only learn if they are prepared to reflect on their own practice (Mitchell & Sackney, 2000). However, the dominant culture in many schools is one of action rather than words, with little or no attention paid to reflection. This is often excused on the basis of a lack of time. We have found, however, that attitudes towards forms of reflection also act as an impediment to reflection. Self-evaluation is not a popular activity among many teachers because it is seen as a form of social control. This perception of self-evaluation as something threatening has also been observed elsewhere (Clift, Nuttall, & McCormick, 1989). There is often a lack of openness within teams and an unwillingness to look critically at one's own performance. A change of mentality is therefore also required. The attention paid to school culture and to the characteristics of professional learning communities in both educational practice and educational research is thus justified in the light of the findings of the present study.

At present, team members all too often get involved in a self-evaluation process because it is expected of them (Van Petegem et al., 2005). Self-evaluation is carried out in schools in the first place because it is seen as an external necessity by the school management team. This is a fundamentally different approach from carrying out a self-evaluation because it is perceived as an internal necessity. External necessities are 'shortcomings which are felt by others', whereas internal necessities are 'shortcomings which are felt and recognized by the people involved themselves'. This distinction is crucial in implementing innovatory initiatives such as carrying out self-evaluations. Obliging team members to take part in self-evaluation is what is known as the 'adoption approach': team members are to a large degree regarded as members of a rational organization charged with implementing what others deem to be important (and hence worthy of innovation). The process of implementation is thus in danger of receiving too little attention, and as a result innovations are in danger of ending up having scant connection with actual practice (Guthe, 1997). If the intention is to find a way of introducing self-evaluations in schools, this will require focusing attention on the interaction between the specific content of the innovation (as team members share a responsibility for quality assurance) and the characteristics of the individual team members. The perceptions of those involved, and in particular the significance which is given to self-evaluation, are of major importance for the success of this innovative process (Fullan, 1991). In the present study we have found an unwillingness to carry out self-evaluations. Staessens (1993, p. 127) makes the following observation: 'A school's reaction to a change can, in part, be understood with reference to the school's individual culture, or to be more precise, the gap between the school's existing values and norms and the values and norms on which the innovation is based.' This gap is currently still considerable. Furthermore, so long as innovatory initiatives such as self-evaluation are still perceived by many team members as something which has been imposed on them, and thus as something threatening, this gap is in danger of becoming even greater.

References

Argyris, C., & Schon, D. A. (1978). Organizational learning: A theory of action perspective. Reading: Addison-Wesley.
Blok, H., Sleegers, P., & Karsten, S. (2005). Schoolzelfevaluatie in het basisonderwijs; een terugblik op de zelfevaluatiefase van Ziezo [School self-evaluation in primary education]. Amsterdam: SCO-Kohnstamm Instituut.
Chou, S.-W., & Tsai, Y.-H. (2004). Knowledge creation: Individual and organizational perspectives. Journal of Information Science, 30(3), 205–218.
Clift, P. S., Nuttall, D. L., & McCormick, R. (Eds.). (1989). Studies in school self evaluation. London: The Falmer Press.
Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. New Jersey: Pearson Prentice Hall.
Devos, G., Verhoeven, J., Van den Broeck, H., Gheysen, A., Opbrouck, W., & Verbeeck, B. (1999). Interne zelfevaluatie-instrumenten voor kwaliteitszorg in het Secundair Onderwijs [Internal self-evaluation instruments for quality care in secondary education]. Leuven, KUL: Departement Sociologie.
Devos, G., & Verhoeven, J. (2003). School self-evaluation – conditions and caveats – The case of secondary schools. Educational Management & Administration, 31(4), 403–420.
Donnenberg, O. H. J. (1999). Nieuwe werkstructuren – nieuwe leervormen [New working structures – New types of learning]. In J. M. J. Dekker (Ed.), Opleiders in organisaties [Educators in organizations] (pp. 116–137). Deventer: Kluwer.
Fullan, M. (1991). The new meaning of educational change. London: Cassell.
Griffith, J. (2003). Schools as organizational models: Implications for examining school effectiveness. The Elementary School Journal, 104(1), 29–47.
Guthe, K. A. K. (1997). Indicatoren van schoolontwikkeling [Indicators for school development]. Enschede: OCTO, University of Twente.
Herman, L., Aschbacher, R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
Kellgren, C. A., & Wood, W. (1986). Access to attitude-relevant information in memory as a determinant of attitude-behaviour consistency. Journal of Experimental Social Psychology, 38.
Kyriakides, L., & Campbell, R. J. (2004). School self-evaluation and school improvement: A critique of values and procedures. Studies in Educational Evaluation, 30(1), 23–36.
Levine, D. U., & Lezotte, L. W. (1990). Unusually effective schools: A review and analysis of research and practice. Madison: National Centre for Effective Schools Research and Development.
Maslowski, R. (2001). School culture and school performance. Enschede: Twente University Press.
McBeath, J. (1999). Schools must speak for themselves. London: Routledge.
McBeath, J., Meuret, D., Schratz, M., & Jakobsen, L. B. (1999). Evaluating quality in school education. Brussels: European Commission.
Meuret, D., & Morlaix, S. (2003). Conditions of success of a school's self-evaluation: Some lessons of a European experience. School Effectiveness and School Improvement, 14(1), 53–71.
Mitchell, C., & Sackney, L. (2000). Profound improvement: Building capacity for a learning community. Lisse: Swets & Zeitlinger Publishers.
Mortimore, P., Sammons, P., Stoll, L., Lewis, D., & Ecob, R. (1988). School matters: The junior years. Somerset: Open Books.
MVG. (2006). Onderwijsspiegel Schooljaar 2004–2005. Verslag over de toestand van het onderwijs [Educational Mirror 2004–2005]. Brussels: Ministry of the Flemish Community.
Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford: Oxford University Press.
Patriotta, G. (2004). Organizational knowledge in the making: How firms create, use, and institutionalize knowledge. Oxford: Oxford University Press.
Petty, R. E., & Wegener, D. T. (1998). Attitude change: Multiple roles for persuasion variables. In D. Gilbert, S. Fiske, & G. Lindzey (Eds.), The handbook of social psychology. New York: McGraw-Hill.
Quinn, R., Faerman, S. R., Thompson, M. P., & McGrath, M. (2003). Becoming a master manager: A competency framework. New York: Wiley.
Quinn, R., & Rohrbaugh, J. (1983). A spatial model of effectiveness criteria: Towards a competing values approach to organizational analysis. Management Science, 29, 363–377.
Sammons, P., Hillman, J., & Mortimore, P. (1995). Key characteristics of effective schools: A review of school effectiveness research. London: Office for Standards in Education.
Saunders, L. (2000). Understanding schools' use of 'value added' data: The psychology and sociology of numbers. Research Papers in Education, 13(3), 241–258.
Scheerens, J. (2004). The evaluation culture. Studies in Educational Evaluation, 30(2), 105–124.
Schildkamp, K. (2007). The utilisation of a self-evaluation instrument for primary education. Enschede: PrintPartners Ipskamp.
Senge, P. (2001). Schools that learn: A fifth discipline fieldbook for educators, parents, and everyone who cares about education. New York: Doubleday.
Simons, P. R. J., & Ruijters, M. (2001). Learning professionals: Towards an integrated model. Paper presented at the biannual conference of the European Association for Research on Learning and Instruction, Fribourg.
Snijders, T., & Bosker, R. J. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modelling. London: Sage.
Staessens, K. (1993). Identification and description of professional culture in innovative schools. Qualitative Studies in Education, 6, 111–128.
Van Petegem, P., Verhoeven, J. C., Buvens, I., & Vanhoof, J. (2005). Zelfevaluatie en beleidseffectiviteit in Vlaamse scholen. Het gelijke onderwijskansenbeleid als casus [Self-evaluation and policy effectiveness of schools. A case study of the Flemish equal chances policy]. Ghent: Academia Press.
Vanhoof, J. (2007). Zelfevaluatie binnenstebuiten. Een onderzoek naar het verloop en de kwaliteit van zelfevaluaties in Vlaamse scholen [Self-Evaluation Inside Out. The Process and Quality of Self-Evaluation in Flemish Schools]. Antwerp: Antwerp University.
van Aanholt, T., & Buis, T. (1990). De school onder de loep [The school under a magnifying glass]. Culemborg: Educaboek.
von Krogh, G., Ichijo, K., & Nonaka, I. (2000). Enabling knowledge creation: How to unlock the mystery of tacit knowledge and release the power of innovation. Oxford: Oxford University Press.
Watling, R., & Arlow, M. (2002). Wishful thinking: Lessons from the internal and external evaluations of an innovatory education project in Northern Ireland. Evaluation & Research in Education, 16(3), 166–181.
Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press.
York-Barr, J., Sommers, W. A., Ghere, G. S., & Montie, J. (2001). Reflective practice: An action guide for educators. Thousand Oaks, CA: Corwin Press.

