SAMPLE LITERATURE REVIEW
Mgmt 430 MMMS 530 Research Paper on a Selected Aspect of Management
This review has been made available with permission for learning
purposes only. Do not quote. [email protected]
Literature review
Counting what Counts - Key Performance Indicators for Adult and
Community Education
Introduction
How do you evaluate organisations if they do not measure what they do?
This is the question faced by voluntary Adult and Community Education
(ACE) organisations¹.
In order to “justify public expenditure”, education policy has become
“increasingly driven by the need to measure outcomes” (Brindley, 2001,
138). In ACE providers, two major factors conspire to make the use of
evaluation troublesome. The outcomes of ACE fall into two broad areas.
Educational outcomes, such as the ability to speak English can be
measured through extant tests². But for some social outcomes, specific
tests do not exist. Jackson offers an Australian perspective on Adult
Migrant Education Programmes (AMEP), stating that “non-content
outcomes… [are] rarely a matter of serious debate, the recording of such
outcomes… [is] even viewed with suspicion” (1994, p. 59). Improved civic
engagement (for example the knowledge of how to access services such
as taking one’s children to the doctor) has repercussions in society and in
the next generation which ACE providers do not measure.
¹ A group of education providers wholly or partly funded by the Tertiary Education Commission, working in five ‘priority’ areas: targeting learners whose initial learning was not successful, raising foundation skills, encouraging lifelong learning, strengthening communities by meeting identified community learning needs, and strengthening social cohesion.
² The communicative competence test is one basis for measurement here (Savignon, 2000).
The already difficult challenge of developing the capability (suitable
processes, trained workforce) to measure these complex results is
exacerbated by the second factor: volunteering. Many New Zealand organisations draw on the willingness within communities to help fellow citizens for no pay. This is made possible by a shared belief in the benefits of the work (Zappalá, 2000; Derby, 2001). So can the expense and voluntary time
of developing and implementing complicated evaluation procedures be
justified, in the context of low-budget³, high-value-added⁴ activities such
as voluntary community education?
What are the requirements of a set of key performance indicators (KPIs)
that meet the needs of voluntary ACE?
This paper will survey management and education management literature
for the characteristics of a set of KPIs suitable for voluntary ACE. This
paper does not attempt to recommend specific KPIs for ACE; rather it
unpacks the philosophical underpinnings of why KPIs have evolved and
why different KPIs are chosen in organisations. The movement towards
more balanced measures in management (Kaplan and Norton, 1992,
1996a, 1996b) is argued to be of increasing interest to the ACE
sector, although many specific evaluation tools are still inapplicable to the
case of volunteering. The service industry will warrant attention, as it
stands considerably closer in core business to education than industrial
models do. The Balanced Scorecard (ibid.), already in use in some school settings, will also be reviewed.
This paper will argue that KPIs for voluntary ACE must strike the correct
balance between measuring financial and other management, and
evaluating outcomes for learners. I will propose that a set of KPIs must
focus on quality, not as a ‘determinant of outcomes’ (Ballantine and
Brignall, 1994), but as an outcome in itself. The measures chosen should be simple, and have an impact on members of the organisation suited to the realities of voluntary bodies. The work uncovers questions for further research.

³ The entire ACE funding pool for New Zealand is $24.9 million annually.
⁴ For a full discussion of the value added by volunteering, see the VAVA (Value Added by Voluntary Agencies) Report 2004, Price Waterhouse Coopers.
The case for balanced measures in management
From the industrial age on, the way organisations rated performance was
to consult the bookkeeper. The bottom line summarised achievement
(Kaplan and Norton, 1992). As long as this was the case, theories of
management provided little possibility for cross application to measuring
in providers of ACE.
During the 19th century, however, the ‘cooperative movement’ and ‘guild socialism’ (Bonner, 1961) were forces not specifically driven by organisational management practices, but they encapsulated ideas that gave rise to such organisations as the Workers Education Association, founded in New Zealand in 1915 and still providing voluntary ACE today (The WEA, 2005).
But fundamental changes occurred.
Technology started developing at a rate that necessitated greater and ongoing investment in equipment. Globalisation required industries to
be more adaptable. Growing environmental concerns became the problem
of business, increasing pressure on corporations. Even customers had
changed (Neely, 1998).
So companies adapted. One reaction was to become ‘learning
organisations’ (Kochan and Useem, 1992). They created corporate
missions that had a specific focus on the customer (Kaplan and Norton,
1992). Top-down management was challenged by a call for ‘a much
greater participation in the management of enterprises by all the
workforce’ (Heywood, 1989, p. xii). In the name of sustainability, many
started conducting social audits that assessed the societal and environmental outcomes of their operations.
The financials were no longer enough. Their ‘documented inadequacies’
included their ‘backward-looking focus’ and their ‘inability to reflect …
value-creating activities’ (Kaplan and Norton, 1992, p. 73). For
corporations to remain competitive, looking at the monthly balance sheet
no longer provided incentives for the necessary investments in
technology, community, environment or innovation (Neely, 1998). The
stage was set for a more balanced style of evaluation to enter, increasing
the possibility of cross application in broader contexts.
Can management tools help education questions?
Strategic management tools sprang up in response to the new needs.
Wang Corporation’s ‘SMART’ model (Strategic Measurement and Analysis
Reporting Technique) shapes a company’s strategies into a pyramid, and
then translates them into actions for each member of the organisation
(Lynch and Cross, 1991). This signalled that the success of an organisation
requires the involvement and good performance of all the staff (ibid). This
is in line with a customer (learner) focus more suitable to our case.
Its areas of measurement, however, have a strong production orientation
that would not be applicable to voluntary ACE. It was designed with the
production industry in mind, and its assumption of countable outputs is
not immediately transferable to softer outcomes.
An overview of similar strategic measurement tools in industry (Fitzgerald et al., 1991; Brown, 1996) made it apparent that the priorities of industry make these models inapplicable for our purposes. Answers were next sought in the service industry, with its stronger people orientation.
The service industry
In organisations where individual consumer acquisition and retention are
central to success, the inadequacies of financial measures were further
exaggerated (Fitzgerald et al., 1991), suggesting potential cross
application to our question.
In light of the service industry’s requirements, Warwick University created
the Results/Determinants matrix (Ballantine and Brignall, 1994). This
distinguishes between measuring outcomes and measuring what
determines the outcomes. These two categories break down into six
‘dimensions’. Notably, however, in its case ‘quality of service’ is defined as determining results in the financial and competitive arenas. Its assumption that quality of service is not an output in itself, but a determinant of competitive and financial results, restricts its suitability to profit-
oriented companies. It would not offer appropriate solutions for not-for-
profit organisations, where service provision is the major outcome.
A model is needed that focuses on quality. The EFQM (European
Foundation for Quality Management) model also distinguishes between
‘results’ and ‘enablers’ but includes ‘society’ in the results and ‘leadership’
as an enabler. This answers our previous question but throws up another.
Where the Results/Determinants matrix has six, EFQM has nine areas in
which to select and set several measures each. The time spent collecting
data in order to achieve meaningful information in all these evaluations
will make it complicated ‘beyond probable pay-off’ (Neely and Adams,
2000). This view was substantiated by Worthen, Sanders and Fitzpatrick:
One can hardly oppose using objectives and assessing their
attainment, but the use of dozens, or even hundreds of
objectives for each area of endeavour… amount[s] to a
monopolization of staff time and skills for a relatively small
payoff. (1997, p. 90)
The Higher Education Quality Council in London issued an even stronger
warning against over-evaluation, that ‘constant external review potentially diminishes quality’ (1995, p. 29).
In order to increase the probability of finding a simpler and more
appropriate tool, techniques already in practice in the education setting
were sought.
What they do at schools
Schools are a group of organisations that have always focused on
outcomes other than financial. Educational institutions have long known
that ‘the real test of good teaching is its effect on students’ (Higher
Education Quality Council, 1995, p. 100). For these organisations, rating
performance has never consisted of financial measures alone, and has
usually been approximated by student achievement (Education Review Office, 2003), as seen on standardised and other tests.
In comparison with the contemporary management tools already outlined, measuring (approximated) outcomes alone does not represent a balanced set of measures. A school with highly successful students could still be inefficient if, for example, it is mismanaging monies or staff.
Measurements need to encompass both outcome and process indicators,
and such tools are indeed used by education organisations.
The Balanced Scorecard
The Balanced Scorecard has been applied, initially by private teaching institutions more closely connected with the business sector (e.g. Berlitz Language Services Worldwide in 2000, personal experience), and later by members of state-run systems (for example Charlotte-Mecklenburg schools, 2005). Pioneered by Kaplan and Norton in the early nineties
(Kaplan and Norton, 1992) in a corporate context, it was later proposed for
schools by Somerset (Somerset, 1998) and others. It balances financial
evaluation with three other ‘perspectives’: customer (or learner), internal processes, and (organisational) learning and growth.
Although the Balanced Scorecard was originally developed for the business sector, its advocates see ‘no reason why it shouldn’t be used for charities or public sector organisations’ (Bourne and Bourne, 2000, p. 17). Its purported advantages
over traditional systems are seen in three major areas. The first is its
simplicity (as already outlined), followed by its applications to
organisational communication and finally its ability to influence behaviour
(Somerset, 1998).
To assess their appropriateness to our question, the latter two areas are considered briefly in light of some psychology literature.
The Balanced Scorecard and communication
Somerset asserted that ‘building a performance measurement system is
not about control; it is about communication’ (Somerset, 1998, p.2). If
high-level strategy is ‘decompose[d]’ into measures for actions at local
levels (Kaplan and Norton, 1992, p.75), each member of an organisation is
informed about how to contribute to achieving the overall mission.
Literature on communication, however, alerts us that being informed is not
equivalent to successful communication. To ensure that the message has
been understood would require communication in more than one direction,
and include mechanisms such as ‘perception checking’ and ‘feedback’
(Gudykunst, 1994).
An ACE organisation with a mission to provide English support to migrants
might inform a volunteer English tutor that his number of visits to the
English resource library is being tracked in an attempt to measure overall
performance. Without two-way communication about why this is
happening, there are risks of deterring the volunteer, making his voluntary
experience less satisfactory or in fact pressuring him to do more than he is
willing.
The Balanced Scorecard and behaviour
When setting up measures, managers are warned of the strong effects the
process will have on the behaviour of employees (Kaplan and Norton,
1992). Some traditional measurement systems “specify the particular
actions they want employees to take and then measure to see whether
the employees have in fact taken those actions. In that way, the systems
try to control behaviour” (ibid, p.79).
Within power relations, however (such as between a target setter and the worker charged with achieving the target), behaviour can better be influenced
through ‘competent authority’ than this form of ‘coercive authority’
(Wrong, 1995). Under Wrong’s model, the employee’s behaviour is best
influenced out of a belief in the authority’s superior competence (in this
case, to interpret strategy and set measures for actions accordingly).
Under the Balanced Scorecard regime, targets are not set to dictate
actions, but to influence behaviour seen to achieve the corporate vision
simply by ‘focusing attention on key areas’ (Bourne and Bourne, 2000,
p. 10). Some organisations will find the choice of competent authority preferable to coercive authority, favouring the Balanced Scorecard for this reason. But the concept of setting targets in order to influence behaviour
is fundamentally problematic in our case. It suggests that measures taken
will evaluate progress against goals and influence behaviour towards
achieving the vision of the organisation, rather than unobtrusively trying
to gain a picture of actual achievement and value produced by the
organisation.
Our case necessitates avoiding interference in the work of volunteers,
rather than influencing their behaviour towards more or different kinds of
work. The KPIs that reflect ‘actual’ work done (as opposed to measures
that aim to inspire achieving the vision and drive performance) must be as
simple and easy to measure as possible, and not adversely influence
volunteer behaviour.
A distinction arises between evaluation that aims to reflect the status quo, and measurement as an ‘instrument to pursue goals’ (Worthen et al., 1997,
p.22). While the Balanced Scorecard may be a useful tool for ACE
organisations in working towards their missions, it will not provide KPIs to
indicate the current value of their work.
Management, education, and back again
The waves of change in organisational management have both informed
and been informed by education literature. In his important work ‘Learning, Adaptability and Change: The Challenge for Education and Industry’ (1989), John Heywood applies the cognitive theory of how children learn to create lessons for organisations on being more adaptable.
He then relates his theories on learning organisations back to the school
setting with recommendations for educationalists. His work is a strong
example of how management and education thinking have grown towards
each other: schools have an increasing emphasis on effective
management while organisations try to learn.
Conclusions and further research
The applications of various performance measurement systems to
voluntary ACE seem reasonable efforts to satisfy public demands for
accountability of funds. However, research on the adaptations necessary
to fit the requirements of the sector in question is still required, and
research into measuring non-educational outcomes is also needed.
References
Ballantine, J. and Brignall, S. (1994). A Taxonomy of Performance Measurement Frameworks. Warwick: Warwick Business School Research Paper.
Bonner, Arnold (1961). British Co-operation. Manchester
Bourne, Mike and Bourne, Pippa, (2000). Understanding the Balanced
Scorecard in a week. London, Hodder and Stoughton
Brindley, Geoff (2001). Assessment. In R. Carter and D. Nunan (Eds). The
Cambridge guide to teaching English to speakers of other languages (Pp
137-143). Cambridge: Cambridge University Press.
Brown, M. G. (1996). Keeping Score: Using the Right Metrics to Drive World-Class Performance. New York: Quality Resources.
Charlotte-Mecklenburg schools (2005). District level balanced scorecard:
public document, accessed 14 July 2005,
http://www.cms.k12.nc.us/discover/goals/BSC.pdf
Derby, Mark (2001) Good work and no pay. Wellington, Steele Roberts
Education Review Office (2003). Evaluation Indicators for Education
Reviews in Schools, paper on the internet, accessed 10 July 2005,
http://www.ero.govt.nz/EdRevInfo/Schedrevs/SchoolEvaluationIndicators.htm
Fitzgerald, L., Johnston, R., Brignall, T. J., Silvestro, R. & Voss, C., (1991).
Performance Measurement in Service Businesses. London: The Chartered
Institute of Management Accountants.
Gudykunst, William B. (1994). Bridging Differences: Effective Intergroup Communication. California: Sage Publications.
Heywood, John (1989). Learning, Adaptability and Change. London, Paul
Chapman Publishing
Higher Education Quality Council of Britain, (1995). Managing for quality,
stories and strategies. London, Chameleon Press
Jackson, Elaine (1994). Non-language Outcomes in the Adult Migrant
English Population. Sydney: NCELTR.
Kaplan, R. S. and Norton, D. P., (1992), The Balanced Scorecard –
Measures that Drive Performance, Harvard Business Review, Vol. 70, No.
1, January / February, (71 – 79).
Kaplan, R. S. and Norton, D. P., (1996a), The Balanced Scorecard -
Translating Strategy into Action, Harvard Business School Press: Boston,
MA.
Kaplan, R. S. and Norton, D. P., (1996b), Linking the Balanced Scorecard to
Strategy, California Management Review, Vol. 39, No. 1, (53 – 79)
Kochan, T and Useem, M (1992). Transforming Organizations. New York:
Oxford University Press.
Lynch, R. L. and Cross, K. F. (1991). Measure Up – The Essential Guide to Measuring Business Performance. London: Mandarin.
Morrison, K (1998). Management theories for educational change.
California, Paul Chapman Publishing
Murphy, Joseph (1996). The privatisation of schooling, problems and
possibilities. California, Corwin Press
Neely, A. D. (1998). Performance Measurement: Why, What and How. London: Economist Books.
Neely, A. D. and Adams, C. A. (2000). Perspectives on Performance: The Performance Prism. Cranfield: Cranfield School of Management.
Price Waterhouse Coopers (2004). VAVA (Value Added by Voluntary
Agencies) Report: report on the internet, New Zealand Federation of
Voluntary Welfare Organisations Inc. Accessed May 2005,
http://www.nzfvwo.org.nz/files/file/VAVAO_overview_report.pdf
Savignon, S. (2000). Communicative language teaching. In M. Byram (Ed.),
Routledge encyclopaedia of language teaching and learning (Pp 124-129).
London and New York: Routledge.
Somerset, John (1998). Creating a balanced performance measurement
system for a school, report on the internet. Hall Chadwick, accessed 13
June 2005 http://www.hallchadwick.com.au/05_publications/ra_creating.pdf
WEA, the (2005). Telling our stories. Wellington, the WEA
Worthen, Blaine R., Sanders, James R. and Fitzpatrick, Jody L., (1997).
Programme evaluation, Alternative approaches and practical guidelines
(2nd Edition). New York, Longman
Wrong, Dennis H., (1995). Power, Its forms, bases and uses. New Jersey,
Transaction Publishers
Zappalá, Gianni (2000). How many people volunteer in Australia and why
do they do it? briefing paper on the internet. The Smith Family, Research
and Advocacy Briefing 4, accessed 17 July 2005
http://www.smithfamily.com.au/documents/Briefing_Paper_4_DA10F.pdf