On the Use and Abuse of Bibliometric Performance Indicators: A Critique of Hix's 'Global Ranking of Political Science Departments'




    THE PROFESSION

roland erne

Industrial Relations and Human Resources Group, UCD Business School, University College Dublin, Belfield, Dublin 4, Ireland. E-mail: [email protected]

    doi:10.1057/palgrave.eps.2210136

Abstract
Bibliometric measures, as provided by the Social Science Citation Index of the Institute for Scientific Information, certainly represent a useful tool for librarians and researchers. However, although library scientists have shown that the use of journal impact factors to evaluate the performance of academics is misleading, some authors continue to promote bibliometric metrics to assess the productivity of academic departments and even the entire European academic community. Taking an ambitious global ranking of political science departments as a reference, this article questions both the reliability and desirability of bibliometric performance indicators. The article concludes that the development of a panopticon-like audit culture in universities will not enhance their quality, but rather undermine the classical idea and purpose of the university.

Keywords bibliometric performance indicators; journal impact factor; political science; public policy instruments; research assessment exercise; university restructuring

QUANTIFYING QUALITY: ARE UNIVERSITY RANKINGS DESIRABLE?

If one defines the university according to Wilhelm von Humboldt's classic definition (Fehér, 2001), as a space of self-formation that is characterised by a unity of research and teaching, academic freedom, collegiality and intellectual and methodological pluralism, then its quality cannot be evaluated by quantitative university rankings.


While controlling governments, corporations and university administrators are certainly keen to establish cheap performance metrics 'allowing for mechanised annual updates' (Hix, 2004a: 293), it is much more difficult to understand why some academics are facilitating the rise of managerialist 'rituals of verification' in the university sector (Power, 1997; Shore and Wright, 2000).

Quantitative performance metrics, as they are used in industry as a technocratic tool to benchmark the performance of workers, do not play out to the advantage of academia as a whole. As emphasised by critical industrial relations and new public management research, such metrics entail perverse effects, especially regarding the performance of highly skilled employees in the public service (Clark and Newman, 1997: 80f; Crouch, 2003: 9). In contrast to non-skilled manual work, the quantity and quality of all aspects of work performed by highly skilled professionals cannot be monitored at the same time (Goldthorpe, 2000). As it will never be possible to design a performance measurement system that adequately captures all aspects of multi-faceted academic work,1 the introduction of quantitative performance metrics will not induce better performance of academics, but influence their distribution of effort, and of time and attention, among their different responsibilities. It follows that the introduction of global rankings of university departments does not represent a neutral tool to increase academic performance, but a powerful instrument that is pressuring academic departments across the world to align themselves to the criteria set by the constructors of the rankings' performance metrics. The introduction of controlling benchmarking techniques represents a fundamental shift from the management-by-commitment approach usually used in relation to highly skilled professionals, to the neo-Taylorist management-by-control approach traditionally used in relation to unskilled workers.

While any ranking allows the benchmarked entities to adopt different strategies to become good, any ranking assumes that there is only one way of actually being good. By selecting the metrics of his global ranking of political science departments, Hix (2004a, b) is assuming that there is a universally acceptable definition of the discipline's boundaries and a shared definition of the activities that distinguish the best academic departments. However, how can one sustain such a bold claim? Are we now living in an academic world that is dominated by 'one meta-language, and one intellectual style, beamed all over the universe' (Galtung, 1990: 111), or is Johan Galtung's plea for theoretical and methodological pluralism still valid? By stating that a global ranking of research outputs could be constructed in political science, Hix (2005) assumes that there is a universal meta-language in political science, which would permit us to measure the performance of all political science departments across the globe.


However, even if this were the case, the question remains whether one could actually construct corresponding performance measurement variables in any reliable way, given Albert Einstein's observation that 'everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted' (cited by Mattern, 2002: 22).

HIX'S RANKING: A RELIABLE PERFORMANCE INDICATOR?

University rankings are frequently based on a rather arbitrary selection of a narrow set of performance indicators and an arbitrary allocation of weightings to them (Turner, 2005). Hix's global ranking of political science departments is no exception. The ranking averages the position of departments on four sub-rankings (R1-R4), which, strikingly, are all influenced by one single measure. While the first ranking (R1) is based on the quantity of articles published by a member of an institution in the selected sixty-three main political science journals (measure A), the other three rankings are based on (see the sketch after this list):

- the product of measure A and Hix's journal impact factor (R2);
- measure A per faculty member (R3);
- the product of measure A and Hix's journal impact factor per faculty member (R4).
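To make the construction concrete, the following minimal sketch rebuilds the four sub-rankings and their composite for three invented departments. Only the mechanics (four sub-scores driven by the same article count, converted to rank positions and then averaged) follow the description of Hix's method given here; the department names and all figures are hypothetical.

```python
# Hypothetical data: (articles in the selected journals, impact-weighted articles, faculty size)
departments = {
    "Dept A": (120.0, 150.0, 60),
    "Dept B": (40.0, 70.0, 15),
    "Dept C": (35.0, 30.0, 10),
}

def rank(scores):
    """Return 1-based rank positions (1 = best) for a dict of scores."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: position + 1 for position, name in enumerate(ordered)}

# Four sub-rankings, all driven by the same underlying measure A (the article count).
r1 = rank({d: a for d, (a, w, n) in departments.items()})      # R1: measure A
r2 = rank({d: w for d, (a, w, n) in departments.items()})      # R2: A x impact factor
r3 = rank({d: a / n for d, (a, w, n) in departments.items()})  # R3: A per faculty member
r4 = rank({d: w / n for d, (a, w, n) in departments.items()})  # R4: (A x impact factor) per head

# Composite: the arithmetic mean of the four ordinal rank positions.
composite = {d: (r1[d] + r2[d] + r3[d] + r4[d]) / 4 for d in departments}
for dept, score in sorted(composite.items(), key=lambda kv: kv[1]):
    print(dept, score)
```

Even in this toy example the large department leads the two absolute sub-rankings while the small departments lead the per-capita ones, which is precisely the tension discussed in the following sections.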

Many flaws of this ranking exercise have already been discussed, and Hix (2005) himself recognises that his model includes several biases. It is indeed noteworthy that his ranking exercise is based on only one reliable measure (number of articles), one unreliable observation (faculty size) and at least five arbitrary decisions, namely:

(1) the way by which the composite, overall ranking is constructed,
(2) the selection of its four sub-rankings,
(3) the selection of its key indicator to measure academic performance,
(4) the selection of its sixty-three main political science journals,
(5) the use of its specific journal impact factor as a performance indicator.

In what follows, the article critically discusses these five decisions and their implications.

    A COMPOSITE RANKING BASED ON ERRONEOUS AVERAGING OF ORDINAL SCALES

By creating a single, easily grasped ranking of an institution, Hix is following many other university league tables, which are based on the averaging of the results from different sub-rankings. In mathematical terms, however, this process is indefensible, as the scales of the four sub-rankings do not allow the creation of a reliable composite ranking (Turner, 2005). While the first-placed department on the first sub-ranking (R1) was, for example, 61.06 articles ahead of the second, the distance between the 32nd and 33rd positions was only 0.17 articles (Hix, 2004a: 304). Hence, the creation of Hix's composite ranking is faulty, as it is based on the arithmetic average of ordinal scales (Benninghaus, 1994). The way by which the composite ranking is constructed is also questionable, as its four sub-rankings are based on variables that are not mutually exclusive. In fact, measure A influences all four sub-rankings.
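A brief numerical sketch of the problem follows. The department scores are invented; only the two gap sizes (61.06 and 0.17 articles) echo the figures quoted above. Converting scores to rank positions discards how far apart the departments really are, so averaging ordinal positions treats a 61-article lead and a 0.17-article lead as the same one-place difference.

```python
# Two hypothetical sub-rankings over three invented departments.
# On scale 1 the leader is far ahead; on scale 2 the departments are nearly tied.
scores_1 = {"X": 100.0, "Y": 38.94, "Z": 38.77}   # gaps of 61.06 and 0.17 articles
scores_2 = {"X": 10.0, "Y": 10.1, "Z": 10.2}      # gaps of 0.1 articles

def rank(scores):
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: position + 1 for position, name in enumerate(ordered)}

r1, r2 = rank(scores_1), rank(scores_2)

# Averaging the ordinal positions hides the fact that X dominates scale 1 by a huge margin:
composite = {d: (r1[d] + r2[d]) / 2 for d in scores_1}
print(composite)  # {'X': 2.0, 'Y': 2.0, 'Z': 2.0} - all three departments end up tied
```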

    A SELECTION OF SUB-RANKINGS THAT SIMPLY FAVOURS BIG INSTITUTIONS

The decision to base the overall, composite ranking on four sub-rankings is arbitrary for another reason. The performance indicators of any comparative benchmarking exercise should never be based on absolute, but on output-per-input measures. The number of published articles per institution makes no comparative sense if it is not related to the institution's resources, for example, its faculty size. It follows that the sub-rankings R1 and R2, which are based on absolute figures, are meaningless, as they lack any denominator.


Hence, the sub-rankings R1 and R2 do not measure the performance of the institutions, but just provide a biased proxy for their size and material resources.

It follows that Hix's ranking is biased in favour of big institutions, such as Harvard and Hix's London School of Economics (LSE). While LSE ranks second and fourth in the two denominator-less sub-rankings, it figures in the two sub-rankings that relate the number of publications to the institution's faculty size only at the 31st and 57th place, respectively. Moreover, the sub-rankings R1 and R2 also convey a distorted picture of the performance of European political science departments by comparison to their US counterparts. If one looks only at the sub-rankings that take faculty size into account, the productivity of the European political science departments is indeed much higher.

Hence, the worries of Gerald Schneider et al (2006), according to whom the European political science departments would be unproductive, are hardly justified. European political science departments hold, for instance, all top six positions in the sub-ranking R3 (Hix, 2004a: 304). If one takes into account the smaller resources that are available to European universities, European political science departments are much more productive than implied by their overall rank on the Hix index. Yet, one must also keep in mind that even the two sub-rankings (R3, R4) that take faculty size into account face major problems. In fact, universities often manipulate their faculty size to be able to report a higher productivity per head, as might be the case in Britain due to the Research Assessment Exercise (Bull and Espíndola, 2005) and in business schools across the world, in relation to the Financial Times and Economist rankings.

    AN ARBITRARY SELECTION OF ITS KEY PERFORMANCE INDICATOR

The choice of Hix's key performance measure, namely the quantity of articles published in mainstream journals, must also be criticised, as it does not capture all aspects of academic excellence. As emphasised by Bull and Espíndola (2005), Hix's ranking entails several biases, namely against political scientists with book publications, non-English publications and publications in non-mainstream political science journals. Furthermore, the quantitative focus of the indicators cannot capture the quality of academic publishing. On the contrary, because such indicators favour questionable publishing practices, such as salami-slice publishing and self-plagiarism (Mattern, 2002; Terrier, 2002), they are likely to be inversely related to quality.

In his response to critics, Hix concedes that the English-language-centricity of the method is difficult, and perhaps even impossible, to fix. Moreover, he effectively acknowledges the one-dimensionality of his league table by encouraging others to construct a book-publication-based ranking (Hix, 2005: 30). This response, however, is rather disingenuous, as he knows that analysing the content of books and the number of citations to particular book series is costly, since there is not a single database of book publications and book citations like the SSCI for journal publications (Hix, 2004a: 295). Hix also fails to address the salami-slice and self-plagiarism criticism in both the European Political Science and the Political Studies Review versions of his ranking exercise (Hix, 2004a, b).

AN ARBITRARY SELECTION OF THE SIXTY-THREE MAIN POLITICAL SCIENCE JOURNALS

The selection of the sixty-three main political science journals is also rather arbitrary. At the outset, Hix concluded that the political science and international relations journals included in the Institute for Scientific Information (ISI) Journal Citation Reports of the Social Science Citation Index (SSCI) do not represent an adequate sample of the main political science journals (Hix, 2004a: 296).


This observation is not surprising, as it corresponds to researchers' perceptions in other disciplines (Lewison, 2002). However, rather than acknowledging that it is hardly possible to reach a universally acceptable and reliable definition of the main political science journals, Hix engages in an arbitrary selection process, which eventually leads to a list of the sixty-three main political science journals.

This list excludes approximately 40 per cent of the ISI's own list of top-ranked political science journals,2 as they would, according to Hix, in fact be journals in other fields of social science, such as law, economics, geography, sociology, history, psychology, social policy, communications, philosophy or management (Hix, 2004a: 297). By tautologically defining a political science journal, for the sake of simplicity, as a journal that is (a) edited by a political scientist and (b) has a majority of political scientists on its editorial board (2004a: 297), Hix's list effectively supports a very narrow, subject-area-centred view of political science, which mirrors its institutionalisation in the British context. However, if one defines the discipline rather via its approach, as an Integrationswissenschaft, which according to the founders of West German political science after 1945 aims to analyse the political in all fields of society (Buchstein, 1992), then the exclusion of social policy, international political economy, political psychology, etc. from the remit of political science must be perceived as outrageous. As this integrative understanding of political science is much more common among critical academics, Hix's selection of journals effectively serves a particular political agenda. Moreover, the ranking's concentration on narrow, discipline-limited political science journals also encourages political scientists to play very safe with narrowly defined, easily evaluated projects. Ironically, this discourages interdisciplinary work right at a time when innovation is happening precisely at the borders between disciplines (Hollingsworth, 2000).

It is also noteworthy that Hix's list of the sixty-three main political science journals includes a non-peer-reviewed publication, namely Foreign Affairs. While one can indeed make a strong case in favour of academics who act as public intellectuals and publish not only in sparsely read academic journals but also in widely distributed current affairs periodicals (Lynch, 2006), it is nevertheless striking that the only non-peer-reviewed periodical included in Hix's list is published by an established US-American think tank.

In addition, the list of the chosen sixty-three journals excludes a third of the SSCI's political science journals, namely those that received in 2002 less than 100 citations from all the journals included in the SSCI (Hix, 2004a). However, this de-selection criterion clearly favours journals that publish many issues per year. Moreover, it is also rather astonishing to define the main political science journals in relation to the citations received from a sample, 98 per cent of which consists of non-political science journals. The author also does not explain why the ISI, which according to Hix himself has not provided a reliable list of political science journals, would be in a position to provide a reliable list of all relevant social science journals.


Not only the ISI's selection of political science journals, but the whole SSCI is biased, especially in favour of British and US journals, but also in favour of some specific sub-fields. However, this is a problem that cannot be fixed by arbitrarily adding a German, a Scandinavian and an Australian journal as well as six major sub-field journals to the list of the sixty-three major political science journals (Hix, 2004b: 297). Why should one only add the journal of the German political science association to the list and not a journal from another, maybe smaller but equally interesting country? Who defines what the major sub-fields of political science actually are? The lack of consideration for this question is even more egregious in the light of Hix's own failure to acknowledge any conflict of interest as associate editor of one of the added journals, namely European Union Politics.

HIX'S IMPACT FACTOR: A RELIABLE INDICATOR?

European Union Politics also benefited from the journal impact factor that has been especially created for the purpose of this ranking exercise, as the standard ISI impact factor would create a bias against recently established journals (Hix, 2004a: 299). While the ISI impact factor indeed involves many biases, the introduction of a corrective in Hix's impact factor formula does not represent a solution. The ISI impact factor measures are not only biased against new journals, but also against specific journal types. Per O. Seglen, for instance, has shown that high impact factors are likely in journals 'covering large areas of basic research with a rapidly expanding but short lived literature that use many references per article' (1997: 497).
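For orientation, the standard ISI journal impact factor that Hix's corrective takes as its starting point follows the familiar two-year formula; the notation below is mine, and Hix's own modified weighting is not reproduced here:

\mathrm{IF}_{j,y} = \frac{C_j(y-1) + C_j(y-2)}{N_j(y-1) + N_j(y-2)}

where C_j(t) is the number of citations received in year y, from the journals covered by the index, to the items that journal j published in year t, and N_j(t) is the number of citable items that j published in year t. Because both numerator and denominator count only what the index happens to cover, the coverage biases discussed in this section feed directly into the resulting figure.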

Brian D. Cameron (2005) presents the even more troubling findings of a study that analyses emerging publishing practices in the health sciences. Cameron shows that commercial publishers and academic editors are increasingly manipulating journal impact factors to enhance their journals' standing and profits. They are doing so by increasing the number of review articles, by encouraging controversial debates, by publishing serial articles and research that has in part already been published elsewhere, and by requiring the inclusion of specific references. In contrast, a journal that aims to get a high impact factor should avoid publishing primary research and case studies, as they are less likely to be cited (Cameron, 2005).

Furthermore, as Seglen discovered by studying citation rates in biomedical journals, the overall impact factor of a journal is only marginally related to each individual article, as articles in the most cited half of articles in a journal are cited ten times as often as the least cited half (Seglen, 1997: 497). As it would not seem reasonable to suggest that the editors and the reviewers of an academic journal apply different quality standards to different articles in their journal, Seglen's finding fundamentally questions the belief that impact factors are reliable measures to assess the quality of academic publications. High citation rates are not a measure of the appropriateness of the research direction of a paper or a journal, but only of its capacity to generate reactions within the research community. Whenever one uses bibliometric measures, one should therefore always be very aware of their Pied Piper of Hamelin effect (Kostoff, 1998).
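A small numerical illustration of Seglen's point may help; the citation counts below are invented, but their skew is calibrated to roughly mirror the most-cited-half versus least-cited-half ratio he reports. The journal-level mean, which is what an impact factor averages, says little about the typical article published in that journal.

```python
import statistics

# Hypothetical citation counts for ten articles in one journal over a two-year window.
citations = [20, 12, 8, 6, 5, 2, 1, 1, 1, 0]

ordered = sorted(citations, reverse=True)
most_cited_half = sum(ordered[:5])   # 51 citations
least_cited_half = sum(ordered[5:])  # 5 citations, i.e. roughly a tenfold difference

print("mean (journal-level figure):", statistics.mean(citations))   # 5.6
print("median article:", statistics.median(citations))              # 3.5
print("most vs least cited half:", most_cited_half, "vs", least_cited_half)
```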

    CONCLUSION

This article has argued that Hix's global ranking of political science departments is based on several arbitrary choices.


Yet, it would be wrong to assume that one could solve the problem by simply amending the proposed performance metrics. On the one hand, any auditing exercise is deeply shaped by its underlying political assumptions and definitions. It is well established that the introduction of new, accountancy-driven public policy instruments is not a neutral and apolitical technical practice (Power, 2003: 392). Metrics-based policy instruments also constitute a condensed form of knowledge about social control and ways of exercising it (Lascoumes and Le Galès, 2007). On the other hand, university rankings are in particular affected by arbitrary choices, as quantitative performance metrics will never be able to measure all aspects of multi-faceted academic work. As a result, university rankings produce perverse effects, independently of the objective pursued. By definition, rankings assume that there is only one way to be good. The rankings' inherent unitarist logic, however, can hardly be reconciled with the principles of academic freedom and methodological and intellectual pluralism. It follows that it will never be possible to produce an objective quantitative performance ranking of academic departments. Furthermore, any ranking exercise will always produce winners and losers, and the more the outcome of ranking exercises impacts upon the distribution of research funds, departmental resources and academic salaries, the less likely it will be to attain a consensus on how to measure the performance of political science departments.

For this very reason, it is not surprising that the same governmental, corporate and university administrators that are promoting quantitative university rankings are also at the forefront of another battle, which is currently shaking the university sector across Europe. Be it Oxford University, the Eidgenössische Technische Hochschule in Zurich or University College Dublin, in all three cases the university presidents and vice-chancellor have proposed, with more or less success, far-reaching restructurings of the university's governance structures, which aim to replace collegial by authoritarian decision-making mechanisms. Apparently, the promotion of quantitative performance metrics in higher education can hardly be reconciled with a classical understanding of the college, as a space in which we approach and respect each other as colleagues. While the creative destruction of collegiality within academia will certainly benefit some academics, such as the new corporate university presidents and their associates,3 it is very doubtful that such a process will enhance the quality of higher education.

The particular way by which institutions are held to account says much about dominant political and socio-economic values (Power, 1997). The same seems to be true in relation to the creation of global university rankings. In fact, the rise of such market-emulating mechanisms in the university sector is no coincidence. They mirror the growing marketisation of higher education in many corners of the world (Lynch, 2006). Indeed, according to UNESCO's International Ranking Expert Group, global university rankings above all serve to promote customer choice and to stimulate competition among higher education institutions (IREG, 2006).


As rankings aim to construct a new currency for the evaluation of academic excellence, the promotion of Hix's global university rankings should not be analysed as a mere technology transfer, but as a very political tool, with potentially far-reaching consequences for both academic life and the life of academics.

    ACKNOWLEDGEMENTS

The author would like to thank John Baker, Colin Crouch, Stefan Klein, Oscar Molina, Sabina Stan and Tobias Theiler for valuable comments on an earlier version of this paper. The usual disclaimer applies.

    Notes

1 For instance, according to Article 12 of the Irish Universities Act, 1997, the objectives of universities are: (a) to advance knowledge through teaching, scholarly research and scientific investigation, (b) to promote learning in its student body and in society generally, (c) to promote the cultural and social life of society, while fostering and respecting the diversity of the university's traditions, (d) to foster a capacity for independent critical thinking amongst its students, (e) to promote the official languages of the State, with special regard to the preservation, promotion and use of the Irish language and the preservation and promotion of the distinctive cultures of Ireland, (f) to support and contribute to the realisation of national economic and social development, (g) to educate, train and retrain higher level professional, technical and managerial personnel, (h) to promote the highest standards in, and quality of, teaching and research, (i) to disseminate the outcomes of its research in the general community, (j) to facilitate lifelong learning through the provision of adult and continuing education, and (k) to promote gender balance and equality of opportunity among students and employees of the university (Irish Statute Book, 2007).
2 Incidentally, the following journals, which all figured among the top twenty political science journals according to the ISI impact factor analysis in 2003, have not been included in Hix's list of the main political science journals: Political Geography (second ranked), Annual Review of Political Science (fifth), Public Opinion Quarterly (seventh), Political Psychology (eleventh), New Left Review (twelfth), Survival (thirteenth), Review of International Political Economy (seventeenth) and Policy and Politics (nineteenth).

3 It goes without saying that the €320,000 p.a. pay claim by the presidents of the seven Irish universities (University College Dublin (UCD), Trinity, Dublin City University, Cork, Maynooth, Galway and Limerick), representing a 55 per cent pay rise of up to €135,000 p.a., is corroding staff morale and the above-cited, egalitarian objectives of the Irish university sector. Incidentally, UCD president Hugh Brady justified his wage claim by suggesting that his role was now more akin to that of the corporate chief executive who must develop and drive strategically and position their business to grow (Gleeson, 2007). While the Irish universities have always had a majority of unelected non-academics on their governing bodies, it is noteworthy that a corresponding, corporate takeover of Oxford University dramatically failed in December 2006, as a clear majority of its academic staff rejected a set of controversial governance proposals in an internal referendum, despite huge governmental and corporate pressures (MacNamara, 2006).

References

Benninghaus, H. (1994) Einführung in die sozialwissenschaftliche Datenanalyse, München: R. Oldenbourg Verlag.
Buchstein, H. (1992) Politikwissenschaft und Demokratie. Wissenschaftskonzeption und Demokratietheorie sozialdemokratischer Nachkriegspolitologen in Berlin, Baden-Baden: Nomos.
Bull, M. and Espíndola, R. (2005) 'European universities in a global ranking of political science departments: a comment on Hix', European Political Science 4(1): 27-29.
Cameron, B.D. (2005) 'Trends in the usage of ISI bibliometric data: uses, abuses and implications', Libraries and the Academy 5(1): 105-125.
Clark, J. and Newman, J. (1997) The Managerial State, London: Sage.
Crouch, C. (2003) Commercialisation or Citizenship. Education Policy and the Futures of Public Services, London: Fabian Society.
Fehér, I. (2001) 'The Humboldtian idea of a university', Neohelicon 28(2): 33-37.
Galtung, J. (1990) 'Theory Formation in Social Research: A Plea for Pluralism', in E. Øyen (ed.) Comparative Methodology: Theory and Practice in International Social Research, London: Sage, pp. 96-112.
Gleeson, C. (2007) 'Because we're worth it!', College Tribune 20(6): 1, available at: http://www.ucd.ie/tribune, accessed 8 March 2007.
Goldthorpe, J.H. (2000) 'Social Class and the Differentiation of Employment Contracts', in J.H. Goldthorpe (ed.) On Sociology: Numbers, Narratives and the Integration of Research and Theory, Oxford: Oxford University Press, pp. 206-229.
Hix, S. (2004a) 'A global ranking of political science departments', Political Studies Review 2(3): 293-313.
Hix, S. (2004b) 'European universities in a global ranking of political science departments', European Political Science 3(2): 5-24.
Hix, S. (2005) 'European universities in a global ranking of political science departments: a reply to Bull and Espíndola', European Political Science 4(1): 30-32.
Hollingsworth, J.R. (2000) 'Doing institutional analysis: implications for the study of innovations', Review of International Political Economy 7(4): 595-644.
IREG International Ranking Expert Group (2006) 'Berlin Principles on Ranking of Higher Education Institutions', available at: http://www.cepes.ro/hed/meetings/berlin06/Berlin%20Principles.pdf, accessed 8 March 2007.
Irish Statute Book (2007) 'Universities Act, 1997', available at: http://www.irishstatutebook.ie/ZZA24Y1997.html, accessed 8 March 2007.
Kostoff, R.N. (1998) 'The use and misuse of citation analysis in research evaluation', Scientometrics 43(1): 27-43.
Lascoumes, P. and Le Galès, P. (2007) 'Introduction: understanding public policy through its instruments - from the nature of instruments to the sociology of public policy instrumentation', Governance 20(1): 1-21.
Lewison, G. (2002) 'Researchers' and users' perceptions of the relative standing of biomedical papers in different journals', Scientometrics 53(2): 229-240.
Lynch, K. (2006) 'Neo-liberalism and marketisation: the implications for higher education', European Educational Research Journal 5(1): 1-17.
MacNamara, W. (2006) 'Dons reject proposals for reforms of Oxford', Financial Times, 20 December, 4.
Mattern, F. (2002) 'Zur Evaluation der Informatik mittels bibliometrischer Analyse', Informatik Spektrum 25(1): 22-32.
Power, M. (1997) The Audit Society: Rituals of Verification, Oxford: Oxford University Press.
Power, M. (2003) 'Auditing and the production of legitimacy', Accounting, Organizations and Society 28: 379-394.
Schneider, G., Steunenberg, B., Holzinger, K. and Gleditsch, P. (2006) 'Why is European political science so unproductive and what should be done about this?', available at: http://www.uni-konstanz.de/FuF/Verwiss/GSchneider/downloads/papers/EPS.pdf, accessed 8 March 2007.
Seglen, P.O. (1997) 'Why the impact factor of journals should not be used for evaluating research', British Medical Journal 314: 497.
Shore, C. and Wright, S. (2000) 'Coercive Accountability: The Rise of Audit Culture in Higher Education', in M. Strathern (ed.) Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy, London: Routledge, pp. 59-89.
Terrier, J. (2002) 'Le processus d'autonomisation des universités suisses: principes et problèmes', Carnets de bord en sciences humaines 2(4): 13-21, available at: http://www.carnets-de-bord.ch, accessed 8 March 2007.
Turner, D. (2005) 'Benchmarking in universities: league tables revisited', Oxford Review of Education 31(3): 353-371.

    About the Author

Roland Erne (Ph.D., EUI; Dipl.-Pol. FU Berlin/Sciences Po Paris) is a tenured lecturer of international and comparative employment relations at University College Dublin. His teaching and research interests cover European integration, transnational democracy, trade unionism and the globalisation of work and employment. His forthcoming publications include Democratizing European Union Politics: Labor's Quest for a Transnational Democracy (Ithaca: Cornell University Press).
