
Evaluating the application of the multi-view spatial data infrastructure assessment framework

L. Grus a,*, J. Crompvoets a,b, A.K. Bregt a, B. van Loenen c, Tatiana Delgado Fernandez d and D. Vandenbroucke e

a Wageningen University, Centre for Geo-Information, The Netherlands; b Katholieke Universiteit Leuven, Public Management Institute, Leuven, Belgium; c Delft University of Technology, OTB Research Institute for Housing, Urban and Mobility Studies, the Netherlands; d National Commission of the SDI of the Republic of Cuba; e Katholieke Universiteit Leuven, Spatial Applications Division, Leuven, Belgium

*Corresponding author. Email: [email protected]

As a response to the growing interest in assessing Spatial Data Infrastructures (SDIs), the Multi-view SDI assessment framework has been proposed. The Multi-view SDI assessment framework combines multiple assessment approaches and methods with the aim of assessing many different aspects of SDI in a comprehensive and unbiased way. Despite the potential strengths of the framework, its complex design raises concerns about its usability and applicability for SDI assessment. In this article we evaluate the application of the Multi-view SDI assessment framework. In addition, we ask potential users of the framework to evaluate its applicability for assessing SDIs. The results show that the framework could be applied to 21 National SDIs. Evaluation of the application process reveals that the completeness of assessment data and the time needed to measure indicators depend strongly on the assessment methods used. It is recommended to use those methods that need less time to measure assessment indicators. The results also show that a significant part of the measurements could not be completed because respondents left survey questions unanswered, and that users tend to agree that the Multi-view SDI assessment framework is applicable for assessing SDIs.

Keywords: Spatial Data Infrastructures assessment; assessment framework; meta-evaluation

1. Introduction

Background

Over the last decade Spatial Data Infrastructures (SDIs) have become an important aspect of geo-information science and practice. Their significance has been demonstrated by numerous initiatives all over the world at global, regional, national and local levels (Groot 1997; Masser 1999, 2005, 2007; Rajabifard 2002; Crompvoets 2006; Onsrud 2007). Now that many SDIs have been established, it is increasingly important to judge whether the SDIs realise the intended objectives and benefits. SDI assessment can be performed for the following reasons: to obtain more knowledge about SDI functioning, to see if SDIs are on the intended track of development, to assist SDI development, or to check for accountability reasons (Georgiadou et al. 2006; Lance et al. 2006, 2009; Grus et al. 2007). SDI assessments have gradually become an integral part of SDI policies. For example, the European directive establishing an Infrastructure for Spatial Information in Europe (INSPIRE) requires monitoring and regular reporting on the progress of INSPIRE implementation and the use of the infrastructure (European Commission 2007, 2009). Another example is GIDEON – the policy document for establishing the national SDI in the Netherlands – which also sets formal requirements relating to monitoring and reporting (VROM 2008).

Many researchers have tried to develop and apply approaches to assess SDIs (see Crompvoets et al. 2008; Onsrud 1998; Masser 1999; Kok & van Loenen 2005; Steudler et al. 2004; Delgado Fernandez et al. 2005, 2007; Rodriguez-Pabon 2005; Crompvoets 2006; Vandenbroucke et al. 2008). All these attempts, however useful and valuable, either concentrate on one aspect of SDI (Kok & van Loenen 2005; Delgado Fernandez et al. 2005; Crompvoets 2006), are bounded by one region (Vandenbroucke et al. 2008), describe SDI development in a few particular countries (Onsrud 1998; Masser 1999), or are still conceptual in nature (Steudler et al. 2004; Rodriguez-Pabon 2005). In order to evaluate many SDI aspects simultaneously, a comprehensive assessment framework is needed. However, comprehensive assessment and evaluation of SDI initiatives is problematic due to their multi-faceted, complex and dynamic nature (De Man 2006; Grus et al. 2007).

Multi-view SDI assessment framework

As a response to the challenges of comprehensive SDI assessment, Grus et al. (2007) proposed a Multi-view SDI assessment framework. The framework acknowledges the complex, multi-faceted and dynamic character of SDIs. The rationale of the framework is based on theoretical research on the nature of SDI. It has been shown that SDIs behave and can be described as Complex Adaptive Systems (CAS) (Grus et al. 2010). This behaviour implies that the evaluation principles of CAS may also be used to evaluate SDIs. The principles of assessing Complex Adaptive Systems differ largely from the methods used for assessing less complex and more linear systems (Eoyang & Berkas 1998; Cillers 1998). For example, oversimplification should be avoided, as many vantage points as possible should be taken, and flexibility should be incorporated into an assessment framework. The Multi-view SDI assessment framework design is based on these principles.

Figure 1 presents the Multi-view SDI assessment framework. The framework consists of several assessment approaches. Each approach corresponds to at least one assessment purpose. Each assessment approach consists of a specific set of indicators and methods to measure them.

Figure 1. Multi-view SDI assessment framework adapted from Grus et al. (2007).

The first step of the Multi-view SDI assessment framework implementation is to determine the purpose of the SDI assessment. The framework identifies three purposes of assessing SDIs: (1) accountability – to measure the effects of SDIs; (2) knowledge – to get a better understanding of the SDI nature and functionalities; (3) development – to recommend changes in SDI functionality. For example, the Clearinghouse suitability approach is classified as developmental because it assesses the development of an SDI's data access facility (Crompvoets & Bregt 2008) and allows for determining the critical factors influencing its success (Crompvoets et al. 2004). In the next step the assessment approach(es) which best fit the determined purpose of the assessment should be chosen. In the case of the framework visualised in Figure 1, a set of existing SDI assessment approaches from various authors constitutes the framework. In the next step, the approaches are applied by measuring the values of their indicators. Finally, the evaluation of the SDI is performed by interpreting the indicator values.
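As a rough illustration of the first two steps, the selection logic can be thought of as a lookup from purpose to approaches. The Python sketch below is illustrative only: apart from the Clearinghouse suitability approach, which the text classifies as developmental, the purpose assignments shown are placeholders, and the authoritative classification is the one given in Figure 1 and Grus et al. (2007).

# Illustrative sketch of steps 1-2 of the framework: determine the
# assessment purpose, then select the approaches serving that purpose.
# Apart from Clearinghouse suitability (classified as developmental in
# the text), the purpose assignments below are placeholders.
PURPOSE_TO_APPROACHES = {
    "accountability": ["SDI-readiness"],            # placeholder assignment
    "knowledge": ["Organisational"],                # placeholder assignment
    "development": ["Clearinghouse suitability",
                    "INSPIRE State of Play"],       # second entry is a placeholder
}

def select_approaches(purpose: str) -> list[str]:
    """Return the assessment approaches that fit the chosen purpose."""
    return PURPOSE_TO_APPROACHES[purpose]

print(select_approaches("development"))
# Next steps: measure each selected approach's indicators, then
# interpret the indicator values.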

It has to be noted that, according to CAS theory, the application of all available assessment approaches would give the most complete and most truthful assessment of an SDI. In this way the SDI assessment is less biased and various SDI aspects are covered. However, it is also possible that the user selects and applies a selection or combination of assessment approaches which best fit the assessment purpose. For a detailed description of the whole framework, and of the nature and applicability of each of the framework's assessment approaches mentioned in Figure 1, we refer to the article by Grus et al. (2007).

The application of multi-method frameworks may be challenging for a number of reasons: (1) the high costs (in terms of time and resources) of multi-method studies; (2) the academic pressure to publish disciplinary rather than interdisciplinary [multi-method] research; (3) the lack of multi-method research knowledge; and (4) the incompatibility between methods (Gil-Garcia & Pardo 2006). Concerning the Multi-view SDI assessment framework, several potential difficulties with its application have also been stated (Grus et al. 2007). In particular, the time aspect of assessment is important because SDIs are considered to be dynamic and the values of indicators change over time. Another concern is the different level of data availability for the various assessment approaches. For some assessment approaches it might be difficult to collect all the needed data. For this reason the usability of some assessment approaches in the Multi-view SDI assessment framework may be limited.

Until now, the Multi-view SDI assessment framework has only been proposed as a concept. For the reasons and challenges mentioned above, the application of the framework needs to be evaluated before it can be used as an appropriate way of assessing SDIs. Evaluation of the application of the assessment framework is also crucial for its future usability, because stakeholders will only use the framework if they accept and trust it (Grus et al. 2007).

The objective of this paper is to evaluate the application of the Multi-view SDI assessment framework in terms of the application process and its applicability.

In the following section the concept of evaluating the application of the assessment framework is described. Next, the methodology used to conduct the evaluation is presented. Then the results of the Multi-view SDI assessment framework evaluation are presented and discussed. The final section describes the main conclusions and recommendations for further research and use of the framework.

In the remainder of the paper the name Multi-view SDI assessment framework will often be substituted by 'the framework'.

2. Evaluating the evaluations

Evaluation can be defined as a careful examination of a program, institution, organisation or policy with the purpose of learning about the particular entity studied (Walberg & Haertel 1990). The focus of evaluation is on understanding and improving the evaluated object and/or on summarising, describing and judging its outcomes. As Stufflebeam (2001) stresses, to achieve and sustain the status of professionalism, products, programs and services should be subjected to evaluation. The same pertains to evaluations themselves. In this context the Multi-view SDI assessment framework can also be the object of an evaluation. Evaluating the application of the framework has the purpose of assuring an acceptable level of quality, proving its validity, and earning credibility among the potential users of the framework. The concept of evaluating an evaluation is also named meta-evaluation. For the purpose of this paper, meta-evaluation will be treated as a special case of evaluation. It is also assumed that the different types of evaluations are valid for meta-evaluations as well.

Meta-evaluation has been defined by Scriven (1969, 1991) as a second-order evaluation, i.e. the evaluation of evaluations. The purpose of meta-evaluation is to document the strengths and weaknesses of evaluation processes with the purpose of improving evaluation practice (Hanssen et al. 2008). Meta-evaluation represents an ethical as well as a scientific obligation. From the ethical point of view it is necessary to know the strengths and weaknesses of the evaluation method, because evaluation results often serve as a basis for decisions which may impact whole societies. From a scientific point of view it is necessary to know, for instance, how closely the assessment results correspond to reality. Stufflebeam and Shinkfield (2007) distinguished two types of meta-evaluations: proactive and retroactive. A proactive meta-evaluation concentrates on not yet fully applied evaluations and aims to help evaluators focus, design, budget, carry out and improve sound evaluations. Retroactive meta-evaluations focus on completed evaluations and check whether the evaluation achieved the intended goals. The evaluation literature also labels proactive and retroactive meta-evaluations as formative and summative respectively (Scriven 1980; Stufflebeam & Shinkfield 2007). Hansen (2005) distinguished further models of evaluation that may also be relevant for classifying meta-evaluations: Result models, Explanatory process models, System models, Economic models, Actor models and Program theory models.

Due to the many possible types of meta-evaluation, the objective of this paper needs to be further clarified. The Multi-view SDI assessment framework is still in a process of development. The meta-evaluation described in this paper can therefore be classified as a proactive or formative type, because it is conducted during the framework's development and aims at improving it. For the purpose of this paper the authors also conducted a pilot application of the framework (see the Methodology section for details). The pilot application of the framework should be treated as a part of the process of its development. It has to be stressed that the pilot application differs from a potential real application of the framework. The purpose of the pilot application was only to evaluate the framework itself. Therefore the meta-evaluation of the framework might also be associated with Hansen's explanatory process model, which focuses on the nature of the application process of the object being evaluated.

Once the pilot application of the framework has been done it is also possible to evaluate the usefulness of the results of this pilot application. Therefore the focus of this paper is also on the users' assessment of the framework's applicability to assess SDIs. In that sense this meta-evaluation can be associated with Hansen's Actor model, which focuses on the stakeholders' criteria for the assessment.

3. Methodology

Figure 2 presents the methodology followed to evaluate the application of the Multi-view SDI assessment framework.

Figure 2. Methodology to evaluate the application of the Multi-view SDI assessment framework.

The application of the Multi-view SDI assessment framework (Box 1 in Figure 2) was the object of evaluation. To do so, the authors conducted a pilot application of the framework (Box 2). The pilot application was the basis for evaluating the application process against the criteria listed in Box 4. The results of the pilot application (Box 3) were the basis for evaluating the applicability of the framework to assess SDIs (Box 5). The detailed methodology to conduct the pilot application (Box 2), to evaluate the data collection time and completeness of data (Box 4), and to evaluate the applicability of the framework to assess SDIs (Box 5) is described below.

Pilot application of the Multi-view SDI assessment framework (Box 2)

In the first step the authors selected from the Multi-view SDI assessment framework those assessment approaches that could be fully applied, i.e. approaches whose indicators had already been tested in previous studies. Four assessment approaches met this criterion of applicability. The first, the Clearinghouse suitability approach (Crompvoets & Bregt 2008), assesses the status of the existing national spatial data clearinghouses around the world. It focuses on the systematic description of 15 clearinghouse characteristics. The second, the SDI-readiness approach (Delgado Fernandez et al. 2005), aims to measure the degree to which a country is prepared to deliver its geographical information to the community. The third, the INSPIRE State of Play approach, is an adapted version of an approach to assess the status of NSDIs in Europe (Vandenbroucke & Janssen 2005). The approach was adapted so that it can be applied not only in a European but also in an international context. This assessment approach is based on a number of SDI issues, such as organisational, legal and economic issues, spatial data, metadata, access to data and services, standards, and thematic environmental data. The fourth, the Organisational approach (Kok & van Loenen 2005), identifies, describes and compares the current status of the organisational aspects of NSDIs. This assessment approach is based on the following organisational components: leadership, vision, communication channels and the self-organising ability of the sector.

In the next step the authors selected the SDIs to which the framework was applied. The authors decided to focus on National SDIs for several reasons. National SDIs are easily identifiable due to the existence of a national vision, strategy and steering committees. Most countries also have a spatial data clearinghouse at the national level (Crompvoets 2006). In addition, National SDIs are considered to be the key level of the SDI hierarchy because, through their components, they have full impact on and relationships with the other levels of the SDI hierarchy (Rajabifard et al. 2003). Moreover, for practical reasons of conducting an evaluation, the authors could easily identify and contact National SDI coordinators and obtain the required data.

Based on an established network of contacts the authors contacted the 21 NSDI coordinators of the following countries: Argentina, Brazil, Canada, Chile, Colombia, Cuba, Denmark, Ecuador, Guyana, Jamaica, Malaysia, Mexico, Nepal, the Netherlands, Norway, Poland, Serbia, Spain, Sweden, Turkey and Uruguay. Because the choice of countries was mainly based on the existing network of the authors, it should be noted that the results of this work are strongly biased towards Europe and Latin America, with only a few other countries. NSDI coordinators were considered to be the most appropriate persons to contact for the following reasons: their perceptions can be treated as sensitive indicators of changes in SDI (Crompvoets et al. 2007); they have the broadest and most complete knowledge about the NSDIs of their countries; and they act on behalf of an authorised NSDI institution. In addition, the information that they provide can potentially be treated as an official statement of the authorities about the NSDI. The NSDI coordinators were contacted by e-mail.

The following step was to measure the indicators of the four selected assessment approaches. In order to measure the indicators' values we used three different methods: web survey, questionnaire, and statistical data. For the Clearinghouse suitability approach all needed indicator values could be obtained by surveying the websites of the clearinghouses (the web survey method). The web survey method is relatively easy and quick because of the easy access to the clearinghouse sites (Crompvoets et al. 2004). For collecting data for the SDI-readiness, Organisational and INSPIRE State of Play approaches, we used an interviewer-absent questionnaire, i.e. a questionnaire which is left with the respondent and picked up by the researcher when it is ready (Bernard 2006). The reason for using this type of questionnaire was the relative ease of obtaining data by one researcher from a number of respondents at relatively low cost. The questionnaire was sent to the coordinators of the 21 National SDIs. The questionnaire consisted of three parts, each relating to one approach, i.e. SDI-readiness, INSPIRE State of Play, and Organisational. The questionnaire used fixed-choice or open-ended types of questions. When necessary, a reminder was sent to the coordinator asking for the return of a filled-in questionnaire. Finally, the values of three indicators of the SDI-readiness approach, i.e. the Human capital index, Web measure index and Infrastructure index, were taken from the United Nations e-Government Survey 2008 (United Nations 2008).

Evaluating the data collection time and completeness of data (Box 4)

The strength of the Multi-view SDI assessment framework lies in the simultaneous use of several assessment approaches, which is supposed to generate more realistic assessment results than when the approaches are applied sequentially. Each approach, however, might use different methods to obtain assessment data. The time needed to obtain data for each assessment approach may vary depending on the method. This variation may considerably slow down the application process of the whole framework. The challenge of time has already been mentioned in Grus et al. (2007) as a potential obstacle and difficulty when applying the Multi-view SDI assessment framework. In addition, the timeframe for evaluating complex and dynamic phenomena such as SDIs should not be too long, because the values of these dynamic and complex phenomena may change rapidly (Eoyang & Berkas 1998). Therefore, the evaluation of SDI should be adjusted to this dynamic behaviour by measuring all indicators in the shortest possible time. For example, the monitoring of INSPIRE directive implementation and use requires yearly measurements of the indicators (European Commission 2009). Therefore, evaluating the data collection time needed to obtain data for different assessment approaches is crucial to judge whether the different assessment approaches of the Multi-view SDI assessment framework can be applied simultaneously and how much time is needed for each of them to provide the assessment data.

Assessing complex phenomena (such as SDIs; see Grus et al. 2010) can only be approached with complex resources (Cillers 1998). This implies that assessing SDIs requires much assessment data to be collected. These data may be of very different types and from different sources. This potential problem refers to the incompatibility-between-methods obstacle of multi-method studies (mentioned in the Introduction of this article; see the list of challenges by Gil-Garcia and Pardo 2006). For some assessment approaches data might be easily available, and for other approaches not. These differences may therefore impede the application of the framework, and data availability might be an obstacle to its efficient application.

As a result of the reasoning discussed above, the challenge of time and the challenge of data availability are regarded as the key potential obstacles to applying the Multi-view SDI assessment framework. Therefore these two challenges will be used as the main criteria for evaluating the application process of the framework. In order to evaluate these two challenges the authors formulated two indicators:

(1) data collection time – the time needed to measure the indicators' values;

(2) completeness of data – the number of indicators without value.

Evaluating the applicability of the Multi-view SDI assessment framework to assess SDIs (Box 5)

Evaluation of the applicability of the Multi-view SDI assessment framework is based on the pilot application. The authors used the results of the pilot application to write an NSDI assessment summary report (see Appendix 2) for each measured NSDI and sent it to the 21 National SDI coordinators. In order to evaluate the applicability of the framework, the 21 National SDI coordinators, the same who had earlier provided the assessment data for their NSDIs, were asked to look at the summary report and subsequently fill in a questionnaire (see Appendix 1). The coordinators were explicitly asked to concentrate on evaluating the applicability of the framework to evaluate NSDIs. They were also informed to treat the summary report only as an indication of the status and not as a real assessment of their NSDI, because the assessment was based on data from the previous year.

The statements in the questionnaire reflected two groups of criteria. The first group was derived from the claims about the strengths of the framework. These claims can be summarised in the following statement: by using the Multi-view SDI assessment framework a more objective, complete, bias-free and realistic SDI assessment can be obtained. The diversity of data obtained by applying multiple assessment approaches reflects the complexity of SDI and allows for indicating those SDI aspects which lag behind the others (Grus et al. 2007). The first four questions of the questionnaire (see Appendix 1) related to these claims.

The second group of criteria was derived from the meta-evaluation criteria standard for conducting evaluations (Stufflebeam 1974; The Joint Committee 1994). This standard contains 10 questions asking about the technical adequacy, utility and efficiency of assessment frameworks (see Appendix 3). Out of the 10 questions the authors used four, for the following reasons. Two questions, one about the validity and biasedness and another about the objectivity of the assessment framework, had already been covered by the questions related to the framework claims. The questions about the internal validity and scope of the assessment could not be answered because the framework's pilot application had no specific assessment purpose or scope. The questions about the relevance and pervasiveness of the findings to the intended audiences could not be answered because the pilot application had no specified SDI assessment audience.

4. Results and discussion

Table 1 presents the results of the Multi-view SDI assessment framework pilot application to assess 21 National SDIs. The results presented in Table 1 correspond with Box 3 in Figure 2.

Table 1. The results of the Multi-view SDI assessment framework application in 21 countries.

NSDI            Clearinghouse suitability   SDI Readiness   INSPIRE State of Play   Organisational
Argentina       43                          53              52                      50
Brazil*         38                          56              50                      75
Canada*         100                         63              74                      100
Chile*          50                          58              50                      75
Colombia        76                          67              76                      100
Cuba            60                          53              59                      75
Denmark         38                          65              59                      75
Ecuador*        34                          44              59                      75
Guyana*         0                           41              27                      50
Jamaica*        46                          58              65                      100
Malaysia*       67                          39              44                      50
Mexico*         75                          58              73                      75
Nepal*          49                          32              55                      50
Netherlands     79                          59              59                      75
Norway          77                          67              76                      75
Poland          36                          49              39                      50
Serbia*         0                           45              55                      50
Spain           100                         72              71                      75
Sweden*         50                          64              55                      100
Turkey*         0                           37              32                      50
Uruguay*        52                          55              52                      50

*NSDI measurement results contained missing data.

The columns show the assessment values of each NSDI using the four assessment approaches. The numbers for the Clearinghouse suitability approach were obtained by measuring the values of 15 National SDI Clearinghouse indicators by means of the web survey method. The summarised values of all indicators compose a Clearinghouse Suitability Index with values varying between 0 (minimum) and 1 (maximum). For the purpose of making the results of the four different approaches comparable, the Clearinghouse Suitability Index values are expressed in Table 1 as percentage values of the maximum possible score. For a detailed explanation of the methodology to measure the Clearinghouse Suitability Index we refer to Crompvoets (2006).

The numbers for the SDI-readiness approach were obtained by measuring 12 indicators grouped into five classes of factors relating to a country's SDI readiness: organisational factors, information factors, human resources, access network and technology, and financial factors. The SDI-readiness indicators presented in Table 1 were calculated from indicator values obtained via the interviewer-absent questionnaire and statistical data. The SDI-readiness index was calculated for each country according to the methodology described by Delgado Fernandez et al. (2005). SDI-readiness indices are expressed as a percentage of the maximum possible score.

The numbers for the INSPIRE State of Play were obtained by measuring 36 indicators relating to five SDI components (i.e. Legal Framework and Funding Mechanism, Spatial data, Metadata, Access and other Services, and Standards). The interviewer-absent questionnaire was used to measure the values of all indicators. The approach is based on the method used by the European Commission to monitor SDI initiatives in 32 European countries. For a detailed description of the approach we refer to Vandenbroucke et al. (2008). The results in Table 1 were also expressed as a percentage of the maximum possible score.

The numbers for the Organisational approach were obtained by measuring (by means of an open questionnaire) nine indicators covering four organisational components of SDI: (1) leadership, (2) inclusiveness and communication channels, (3) long-term vision or strategic plan, and (4) self-organising ability. The interpretation of the indicator values assigned each NSDI to one of four stages of SDI organisational development: stand-alone, exchange, intermediary, and network. For a detailed description of the approach we refer to Kok and van Loenen (2005). For the purpose of this study we translate the four stages of the Organisational approach into percentage values, where scores of 25 percent, 50 percent, 75 percent and 100 percent indicate the four stages respectively. It has to be noted, however, that for a real application of the framework this conversion might be too simplistic. The countries in Table 1 are sorted alphabetically.
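To make the four heterogeneous scores comparable, each raw value is expressed on a common 0-100 scale, as in Table 1. The following minimal Python sketch shows this normalisation under the conventions just described (an index in [0, 1] for Clearinghouse suitability, a percentage of the maximum possible score for SDI-readiness and INSPIRE State of Play, and 25/50/75/100 for the four organisational stages); the helper names and example inputs are ours, not part of the cited methodologies.

# Illustrative normalisation of the four assessment approaches onto the
# common 0-100 scale used in Table 1. The helper names are hypothetical;
# the scoring rules themselves come from the cited papers.

ORGANISATIONAL_STAGES = {"stand-alone": 1, "exchange": 2,
                         "intermediary": 3, "network": 4}

def clearinghouse_percent(suitability_index: float) -> float:
    """The Clearinghouse Suitability Index ranges from 0 to 1."""
    return 100.0 * suitability_index

def organisational_percent(stage: str) -> float:
    """Map the four organisational development stages to 25/50/75/100."""
    return 25.0 * ORGANISATIONAL_STAGES[stage]

def score_percent(indicator_sum: float, max_score: float) -> float:
    """Generic case (SDI-readiness, INSPIRE State of Play):
    percentage of the maximum possible score."""
    return 100.0 * indicator_sum / max_score

if __name__ == "__main__":
    # Hypothetical raw values for one NSDI:
    print(clearinghouse_percent(0.43))          # -> 43.0
    print(score_percent(19.1, 36.0))            # e.g. INSPIRE State of Play
    print(organisational_percent("exchange"))   # -> 50.0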

The results are shown to demonstrate that the Multi-view framework could be applied to assess 21 National SDIs using four different assessment approaches, i.e. Clearinghouse suitability, SDI-readiness, INSPIRE State of Play and Organisational. The presented results can only be treated as an illustration of the framework application. Further discussion and analysis of the assessment values of Table 1 is beyond the scope of this paper.

The following sub-sections discuss the results of evaluating the application of the Multi-view SDI assessment framework. The first sub-section discusses the results of evaluating the application process of the Multi-view SDI assessment framework. The second sub-section discusses the results of evaluating its applicability.

Evaluating the application process of the Multi-view SDI assessment framework

The results described and discussed in this section refer to Box 4 in Figure 2.

Data collection time

Table 2 shows the time that the National SDI coordinators needed to respond to the questionnaire. The results show that it was necessary to wait on average 51 days to receive a completed version of the questionnaire from an NSDI coordinator (see the mean value). In the case of Argentina the response time (205 days) was considerably longer than the response times of the other countries. In contrast, in the case of Serbia the response time was less than one day. Due to these outliers the median is a more robust parameter than the mean. The median for the presented sample equals 38 days.

Table 2. NSDI coordinators' response times.

NSDI              Sent         Received     Difference (days)
Argentina         2007-10-12   2008-05-07   205
Brazil            2007-10-12   2007-12-10   58
Canada            2008-02-29   2008-04-30   60
Chile             2007-10-12   2007-12-10   58
Colombia          2007-10-12   2007-11-20   38
Cuba              2007-10-12   2007-12-21   69
Denmark           2007-10-18   2007-11-22   34
Ecuador           2007-10-12   2007-11-16   34
Guyana            2008-02-28   2008-04-07   39
Jamaica           2008-02-28   2008-05-14   76
Malaysia          2008-01-17   2008-02-21   34
Mexico            2007-10-12   2008-01-29   107
Nepal             2007-10-24   2007-11-25   31
The Netherlands   2008-01-22   2008-01-30   8
Norway            2007-10-18   2007-11-25   37
Poland            2008-03-03   2008-04-16   43
Serbia            2008-07-07   2008-07-07   0
Spain             2007-10-12   2007-11-19   37
Sweden            2008-05-14   2008-07-14   60
Turkey            2008-05-07   2008-05-16   9
Uruguay           2007-10-12   2007-11-16   34

Mean              51 days
Median            38 days
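The effect of the outliers can be checked directly from the 'Difference' column of Table 2; the short Python fragment below (standard library only) reproduces the reported mean and median.

from statistics import mean, median

# Response times in days, taken from the 'Difference' column of Table 2.
response_days = [205, 58, 60, 58, 38, 69, 34, 34, 39, 76, 34,
                 107, 31, 8, 37, 43, 0, 37, 60, 9, 34]

print(round(mean(response_days)))   # 51, pulled upward by the 205-day outlier
print(median(response_days))        # 38, robust against the outliers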

The data collection time presented in Table 2 refers only to the questionnaire method of assessment data collection. For the other data collection methods the time needed to collect the necessary data for each country was much shorter. To collect the statistical data for the three indicators of the SDI-readiness assessment approach, only several minutes per country were necessary. To collect the values of the 15 indicators of the Clearinghouse suitability approach using the web survey method, on average 30 minutes were needed to find the details of a country's National Clearinghouse. Another 1.5 hours (on average) were needed to measure and classify the 15 clearinghouse characteristics.

The results show that the time of the application process depends highly on the method used to measure the indicators. The questionnaire method requires much more time to collect the needed assessment data in comparison with the web survey method and the method of collecting statistical data. In some countries it was observed that NSDI coordinators were very reluctant to provide any information via the internet. In order to receive their response it was necessary to send several e-mails or to find a commonly known person who could act as an intermediary between the NSDI coordinator and the authors. In addition, on average two reminders about the questionnaire had to be sent to each NSDI coordinator.

A mean of 51 (or a median of 38) days to assess 21 National SDIs can be regarded as a relatively long time to assess SDIs, considering their dynamics. On the other hand, the INSPIRE directive requires annual monitoring of SDIs, and an annual collection of the monitoring indicators seems to be a short enough period to capture all important changes within SDIs. Nevertheless, the time needed to collect the necessary assessment data should be reduced whenever possible. This can be done by substituting the questionnaire method of data collection with methods requiring less time to collect data. This is particularly important when large numbers of assessment approaches are applied simultaneously.

Completeness of data

Table 3 shows the numbers of missing data, i.e. missing indicators or answers to the questionnaire, for each assessment approach. The table shows only those NSDIs for which at least one missing datum was observed.

Table 3. The number of missing data for each assessment approach.

NSDI       Clearinghouse suitability   SDI Readiness        INSPIRE State of Play   Organisational
           (max 15 indicators)         (max 12 questions)   (max 36 questions)      (max 9 questions)
Canada     0                           0                    0                       2
Chile      0                           0                    4 (1)*                  0
Mexico     0                           0                    2                       0
Sweden     0                           4                    2                       0
Jamaica    0                           0                    1                       1
Ecuador    0                           0                    2                       0
Malaysia   0                           0                    3                       0
Uruguay    0                           0                    2                       0
Nepal      0                           0                    0                       1
Brazil     0                           0                    9                       0
Serbia     0                           0                    2                       0
Turkey     0                           0                    2 (1)*                  1
Guyana     0                           0                    11                      0
Sum        0                           4                    40                      5

* In the case of the INSPIRE State of Play, missing data could originate either from indicating the answer 'Not sufficient information is available for assessing' or from answers which were left empty. The numbers in brackets indicate missing data originating from answers which were left empty.

Regarding the Clearinghouse suitability assessment approach, the indicators were measured by use of a web survey, and all the data were collected for this approach. The part of the questionnaire regarding the SDI-readiness assessment approach contained four missing data; the missing data regarding this approach were recorded only for the Swedish SDI. The part of the questionnaire regarding the INSPIRE State of Play assessment approach contained 40 missing data, recorded for 11 NSDIs. Finally, the part of the questionnaire regarding the Organisational approach contained five missing data, recorded for four countries.

The results show that the application of the INSPIRE State of Play assessment approach resulted in a relatively high number of questionnaires containing missing data. This approach contained the optional answer 'not sufficient information is available for assessing'. The choice of this optional answer was treated as missing data because it indicates that the information needed to answer the question or to provide the indicator value is missing. This might explain the high number of missing data for this approach. In addition, the INSPIRE State of Play assessment approach, compared with the three other approaches, contained the highest number of questions, which increases the chance of missing data. Nevertheless, it has to be noted that only two out of the 40 missing data in this approach were due to unanswered questions. The remaining 38 missing values originated from indicating the option 'not sufficient information is available for assessing'. Therefore, in 38 cases the reason for the missing values was known. The three other assessment approaches, i.e. the Clearinghouse suitability, the SDI-readiness and the Organisational, did not have an option like the one in the INSPIRE State of Play approach. In this case an NSDI coordinator not having sufficient information for an assessment had two options: (1) not answer the question at all and leave it blank, or (2) give an answer based on subjective judgement or choose an answer randomly. Moreover, it can be suspected that respondents, not having an option to say that data are missing, are likely to indicate any answer rather than leave the question blank. To conclude, it seems beneficial to have in a questionnaire an option such as 'not sufficient information is available for assessing', because it increases the reliability of the answers (fewer random answers) and diminishes the chance that respondents will leave the answer blank without a clear reason.

According to Kent (2001) the easiest way to deal with an incomplete dataset is to exclude the missing data from the dataset. However, this exclusion might not be useful if the sample is too small and the number of missing data is too high. Another way is to present the records with the missing data but not use them in further analysis. Other methods focus on estimating the missing values using statistical methods. These methods vary from simply replacing the missing values with the variable mean to more advanced methods such as regression (Frane 1975). In Table 1 the authors chose to present all the results and to indicate those containing missing values with an asterisk (*). Because the interpretation and analysis of the assessment results is beyond the scope of this paper, the authors did not choose any method to handle the missing data. In the calculation of the assessment values presented in Table 1 the missing data were given a score of 0. It has to be stressed, however, that in a practical application of the Multi-view SDI assessment framework the missing data should be treated as no value, and one of the methods described above should be used to handle the no-value result.
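As a minimal sketch of the simplest estimation method mentioned above (replacing missing values with the variable mean; Frane 1975), the following Python fragment imputes the mean of the observed values; the indicator values used are invented for the example.

# Minimal mean-imputation sketch (Frane 1975): replace each missing
# indicator value (None) with the mean of the observed values.
# The indicator values below are invented for illustration.

def impute_mean(values):
    observed = [v for v in values if v is not None]
    fill = sum(observed) / len(observed)
    return [fill if v is None else v for v in values]

scores = [53, None, 65, 44, 58]       # one respondent left a question blank
print(impute_mean(scores))            # -> [53, 55.0, 65, 44, 58]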

The results show that the completeness of the assessment data might depend on the method used to collect them. In the Clearinghouse suitability approach, where all data were collected by means of a web survey, no missing data were recorded. The questionnaire method, in comparison with the web survey method, resulted in a number of missing values. In the SDI-readiness assessment approach the values of three indicators were obtained from statistical data originating from the United Nations e-Government Survey 2008 report; these statistical data were obtained for all sampled NSDIs.

Evaluating the applicability of the Multi-view SDI assessment framework

The results described in this section correspond with Box 5 of Figure 2. Out of the 21 questionnaires sent to the NSDI coordinators, 14 responses were received. Figure 3 shows the results of the applicability evaluation of the Multi-view SDI assessment framework. The figure shows eight charts corresponding to the eight statements presented in the questionnaire. The numbers in the charts indicate the numbers of respondents.

Figure 3. Evaluation results of the Multi-view SDI assessment framework.


The overall view of the results suggests a rather positive opinion among the respondents on the applicability of the Multi-view SDI assessment framework for assessing SDIs. The most frequent answer was 'Agree' (in seven out of eight statements).

The results confirm to a certain extent the claims that the use of the framework provides an accurate, consistent, more complete, realistic and objective picture of an SDI (statements 1, 2 and 3). In addition, the respondents provided the strongest support for statement 4, saying that the Multi-view SDI assessment framework application indicates those NSDI aspects which need to be improved and enhanced. This strong support demonstrates that the Multi-view SDI framework design allows for determining the weakest aspects of an SDI by looking from multiple views at the same time. The results also suggest that the framework can be applied to assess any other SDI (statement 7). This could mean that the framework is generic and its applicability is not limited to National SDIs: it can also be applied to assess regional or local SDIs.

Concerning the cost-effectiveness of the Multi-view SDI assessment framework (statement 6), most of the respondents indicated the answer 'neutral'. A possible reason for this result could be that the respondents did not have enough knowledge to answer this question. Most probably it was difficult for them to estimate the cost-effectiveness of the framework application because they had not applied it themselves. Statement 8 was a control statement to check the consistency of the respondents' answers. The statement asked for an opinion on whether the assessment results are verifiable, which is similar to statement 3. The similarity of the answers to both statements indicates the consistency of the answers. The analysis of the responses shows that only one respondent indicated 'disagree' for statement 3 while indicating 'agree' for statement 8. Therefore the results of the questionnaire can be treated as rather consistent.

5. Conclusions

The objective of this paper was to evaluate the application of the Multi-view SDI assessment framework. The focus was on the evaluation of (1) the application process of the framework and (2) the applicability of the framework to assess SDIs.

The results demonstrate that the Multi-view SDI assessment framework could be applied to assess National SDIs using four operational assessment approaches. NSDI assessment using more than one assessment approach gives an overview of NSDI performance from multiple viewpoints. This multi-view assessment can assist, for instance, SDI coordinators to identify the status of many SDI aspects and help them to decide where to allocate resources to enhance their SDIs. In addition, it was possible to apply the framework at the same time in 21 different NSDIs, which gives a good basis for cross-national NSDI assessment, comparison and benchmarking.

Two aspects of the framework’s appli-cation process were also evaluated: (1) datacollection time and (2) completeness of dataneeded for assessment. The results showthat the time needed for Multi-view SDIassessment framework application variessignificantly between the measurementmethods. Whenever possible it is recom-mended to use data collection methodswhich require less time. This might shortenthe SDI assessment process significantly,which is of particular importance whenassessing complex and dynamic phenomenasuch as SDIs. Moreover, indicating to thecoordinators a cut-off time for deliveringthe responses could potentially mobilisesome of them to provide the data by theindicated deadline. Also, the importance ofthe sender (e.g. European Commission vsresearcher) significantly influences the ea-gerness of the respondent to provide assess-ment data. The analysis of the completenessof the data shows that a part of theassessment data was missing. In a realapplication of the framework a decisionneeds to be made on the method forhandling them.

It has to be noted that the framework application relies heavily on the responses provided by SDI coordinators (except for the Clearinghouse suitability approach and part of the SDI-readiness approach). However, SDI coordinators might be biased towards providing politically correct answers, not necessarily reflecting the true status of their SDI. Also, the responses they provide reflect their own perception of the SDI status. In addition, the interpretation of the meaning of the questions might deviate between respondents. For future applications of the framework two recommendations are made. First, more than one respondent from each organisation with proven knowledge about the assessed SDI should be asked to provide data. Second, more assessment approaches that rely, for example, on statistical data, official reports or web surveys should be included in the Multi-view SDI assessment framework.

The evaluation of the application of the Multi-view SDI assessment framework described in this paper has been classified as proactive (formative), as an explanatory process model and as an actor model (concentrating on stakeholders). It has to be stressed that other types of evaluations are also necessary to fully evaluate the potential of the framework. For example, once the framework is applied it could be evaluated whether the assessment results meet specific users' requirements (the actor model concentrating on clients). In addition, economic models may be used to measure the costs and profitability of the framework. For instance, Hansen's (2005) evaluation model checking the framework's productivity, effectiveness and utility is an interesting option for further research. One other point worth mentioning is that the described application of the Multi-view SDI assessment framework did not have any specific purpose or audience interested in the SDI assessment. As a result, the relevance of the framework application could not be measured. The relevance of the assessment findings to specific audiences might, however, be a crucial criterion for assessing the utility of the framework.

In general, the authors would like to stress the role of meta-evaluation as an integral part of the process of evaluating SDIs. One of the reasons is that meta-evaluation allows for the identification of the actual problems faced during the SDI evaluation process, e.g. the long time needed to obtain assessment data with some of the data collection methods, or difficulties with obtaining assessment data at all. Timely knowledge about these problems allows appropriate actions to be taken, which consequently leads to improved quality of the SDI assessment framework and its application. In addition, involving the SDI coordinators in the meta-evaluation process provides knowledge about the applicability and usability of, and users' satisfaction with, the proposed assessment method. Moreover, this involvement also builds the coordinators' trust in the evaluator's work and may enhance the further cooperation of the two parties in improving the SDI assessment framework and consequently SDIs themselves.

Acknowledgements

The authors would like to acknowledge and thank the SDI coordinators who filled in the survey and provided us with information on their SDIs. The authors would also like to thank Space for Geo-information (RGI), a Dutch innovation programme, for providing the necessary resources to conduct this research.

References

Bernard, H.R. (2006) Research Methods in Anthropology: Qualitative and Quantitative Approaches (fourth edition). Altamira Press, Oxford.
Cillers, P. (1998) Complexity and Postmodernism: Understanding Complex Systems. Routledge, London.
Crompvoets, J., Bregt, A., Rajabifard, A., & Williamson, I. (2004) Assessing the worldwide developments of national spatial data clearinghouses. International Journal of Geographical Information Science, vol. 18, no. 7, pp. 665–689.

Crompvoets, J. (2006) National spatial dataclearinghouses. Worldwide development andimpact. Ph.D thesis, Wageningen Universi-teit, The Netherlands.

Crompvoets, J., Bree, F. de, Oort, P.A.J. van,Bregt, A.K., Wachowicz, M., Rajabifard, A.,& Williamson, I. (2007) Worldwide impactassessment of spatial data clearinghouses.URISA Journal, vol. 19, no. 1, pp. 23–32.

Crompvoets, J., & Bregt, A. (2008) Clearing-house suitability index. In: Crompvoets, J.,Rajabifard, A., van Loenen, B., &DelgadoFernandez, T. eds. A Multi-view Frameworkto Assess Spatial Data Infrastructures. Digi-tal Print Centre, The University of Mel-bourne, pp. 135–144.

Crompvoets, J., Rajabifard, A., van Loenen, B.,& Delgado Fernandez, T. eds. (2008) AMulti-view Framework to Assess Spatial DataInfrastructures. Digital Print Centre, TheUniversity of Melbourne.

Crompvoets, J., Rajabifard, A., van Loenen, B.,& Delgado Fernandez, T. (2008) Futuredirections for Spatial Data Infrastructureassessment. In: Crompvoets, J., Rajabifard,A., van Loenen, B., & Delgado Fernandez,T. eds. A Multi-view Framework to AssessSpatial Data Infarastructures. Digital PrintCentre, The University of Melbourne, pp.383–399.

Delgado Fernandez, T., Lance, K., Buck, M., &Onsrud, H.J. (2005)Assessing an SDI readi-ness index. Frompharaohs to geoinformatics.Proceedings of FIGWorking Week 2005 and8th International Conference on GlobalSpatial Data Infrastructure, April, Cairo.

Delgado Fernandez, T., & Crompvoets, J. (2007)Infraestructuras de Datos Espaciales en Iber-oamerica y el Caribe. IDICT, Habana.

De Man, W.H.E. (2006) Understanding SDI;complexity and institutionalization, Interna-tional Journal of Geographical InformationScience, vol. 20, no. 3, 329–343.

Eoyang, G.H., & Berkas, T.H.. (1998) Evalua-tion in a complex adaptive system. In:Lissac, & M., Gunz, H., eds. ManagingComplexity in Organizations. QuorumBooks, Westport, CT.

European Commission. (2007) Directive 2007/2/EC of the European Parliament and of theCouncil of 14 March 2007 establishing anInfrastructure for Spatial Information in theEuropean Community (INSPIRE).

European Commission. (2009) INSPIRE Mon-itoring Indicators – Guidelines Document.European Commission – Eurostat.

Frane, J.W. (1975) Some simple proceduresfor handling missing data in multivariateanalysis. Psychometrika, vol. 41, no. 3, pp.409–415.

Georgiadou, Y., Rodriguez-Pabon, O., & Lance,K.T. (2006) SDI and e-Governance: a questfor appropriate evaluation approaches. UR-ISA Journal: Journal of the Urban andRegional Information Systems Association,vol. 18, no. 2.

Gil-Garcia, J.R., & Pardo, T.A. (2006) Multi-method approaches to understanding thecomplexity of e-government. InternationalJournal of Computers, Systems and Signals,vol. 7, no. 2.

Groot, R. (1997) Spatial Data Infrastructure(SDI) for sustainable land management. ITCJournal, vol. 2, no. 4, pp. 287–294.

Grus, L., Crompvoets, J., & Bregt, A.K.(2007) Multi-view SDI assessment frame-work. International Journal of SpatialData Infrastructures Research, vol. 2,pp. 33–53. available at: http://ijsdir.jrc.ec.europa.eu/index.php/ijsdir/article/view/27/21

Grus, L., Crompvoets, J., & Bregt, A.K. (2010)Spatial Data Infrastructures as ComplexAdaptive Systems, International Journal ofGeographical Information Systems, vol. 24,no. 3, pp. 439–463.

Hansen, H.F. (2005) Choosing evaluation mod-els. A discussion on evaluation design.Evaluation, vol. 11, no. 4, pp. 447–462.

Hanssen, C.E., Lawrenz, F., & Dunet, D.O.(2008). Concurrent meta-evaluation: a cri-tique. American Journal of Evaluation, vol.29, no. 4, pp. 572–582.

Kent, R. (2001). Data Construction and DataAnalysis For Survey Research. New York:Palgrave.

Kok, B., & van Loenen, B. (2005) How to assessthe success of National Spatial Data Infra-structures? Computers, Environment andUrban Systems, vol. 29, pp. 699–717.

Lance, K.T., Georgiadou, Y., & Bregt, A. (2006)Understanding how and why practicionersevaluate SDI performance. InternationalJournal of Spatial Data Infrastructure Re-search, vol. 1, pp. 65–104.

Lance, K.T., Georgiadou, Y., & Bregt, A.K.(2009) Cross-agency coordination inthe shadow of hierarchy: ‘joining up’government geospatial information sys-tems. International Journal of GeographicalInformation Science, vol. 23, no. 2, pp.249–269.


Masser, I. (1999) All shapes and sizes: the first generation of national spatial data infrastructures. International Journal of Geographical Information Science, vol. 13, no. 1, pp. 67–84.

Masser, I. (2005) GIS Worlds: Creating Spatial Data Infrastructures. ESRI Press, Redlands, CA.

Masser, I. (2007) Building European Spatial Data Infrastructures. ESRI Press, Redlands, CA.

Onsrud, H.J. (1998) Compiled responses by questions for selected questions. Survey of national and regional spatial data infrastructure activity around the globe. Global Spatial Data Infrastructure. Available at: http://www.spatial.maine.edu/~onsrud/GSDI.htm

Onsrud, H.J. (2007) Research and Theory in Advancing Spatial Data Infrastructure Concepts. ESRI Press, Redlands, CA.

Rajabifard, A. (2002) Diffusion for Regional Spatial Data Infrastructures: particular reference to Asia and the Pacific. PhD thesis, The University of Melbourne, Melbourne.

Rajabifard, A., Feeney, M.E., & Williamson, I. (2003) Spatial data infrastructures: concept, nature and SDI hierarchy. In: Williamson, I., Rajabifard, A., & Feeney, M.E., eds. Developing Spatial Data Infrastructures: From Concept to Reality. Taylor & Francis, London, pp. 17–40.

Rodriguez-Pabon, O. (2005) Cadre théorique pour l’évaluation des infrastructures d’information géospatiale. Thèse de doctorat, Centre de recherche en géomatique, Département des sciences géomatiques, Université Laval, Canada.

Scriven, M. (1969) An introduction to meta-evaluation. Educational Products Report, vol. 2, pp. 36–38.

Scriven, M. (1980) The Logic of Evaluation. Edgepress, Inverness, CA.

Scriven, M. (1991) Evaluation Thesaurus. Sage, New York.

Steudler, D., Rajabifard, A., & Williamson, I. (2004) Evaluation of land administration systems. Land Use Policy, vol. 21, no. 4, pp. 371–380.

Stufflebeam, D. (1974) Metaevaluation. Occasional Paper, The Evaluation Centre, Western Michigan University. Available from: http://www.wmich.edu/evalctr/pubs/ops/ops03.pdf [accessed 2 November 2009].

Stufflebeam, D. (2001) The metaevaluation imperative. American Journal of Evaluation, vol. 22, no. 2, pp. 183–209.

Stufflebeam, D., & Shinkfield, A. (2007) Evaluation Theory, Models, & Applications. John Wiley, San Francisco.

United Nations (2008) United Nations e-Government Survey 2008. Department of Economic and Social Affairs. Available from: http://unpan1.un.org/intradoc/groups/public/documents/un/unpan028607.pdf [accessed 2 November 2009].

The Joint Committee (1994) Program Evaluation Standards. Sage, Thousand Oaks, CA.

Vandenbroucke, D., & Janssen, K. (2005) Spatial Data Infrastructures in Europe: State of Play Spring 2005. Summary report by Spatial Applications Division, K.U. Leuven R&D. Available from: http://inspire.jrc.ec.europa.eu/reports/statofplay2005/rpact05v42.pdf [accessed 18 April 2011].

Vandenbroucke, D., Janssen, K., & Van Orshoven, J. (2008) INSPIRE State of Play: generic approach to assess the status of NSDIs. In: Crompvoets, J., Rajabifard, A., van Loenen, B., & Delgado Fernandez, T., eds. A Multi-view Framework to Assess Spatial Data Infrastructures. Digital Print Centre, The University of Melbourne, pp. 145–172.

VROM (2008) GIDEON – Key Geo-information Facility for the Netherlands. Ministry of VROM, The Netherlands.

Walberg, H., & Haertel, G. (1990) The International Encyclopedia of Educational Evaluation. Pergamon Press, Toronto.


Appendix 1. A questionnaire to evaluate the applicability of the framework to assess National Spatial Data Infrastructures sent to the NSDI coordinators

Dear respondent,

The report that you received shows the results of your NSDI assessment with four different assessment views (different approaches, indicators, measurement methods). To evaluate the relevance of assessing SDIs simultaneously with several assessment views, we would like you to express your opinion about the following statements.

(1) The assessment results are accurate and consistent.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:

(2) The assessment results shown in the report allow for a more complete and realistic SDI assessment.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:

(3) The conclusions from the assessment are objective.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:

(4) The assessment results shown in the report help you decide which aspects of your SDI need enhancement.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:

(5) The assessment can be seen as valid and unbiased.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:

(6) The assessment is cost-effective in achieving the assessment results.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:

(7) The Multi-view SDI assessment framework can be used to assess any SDI.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:

(8) The assessment results are verifiable.
Fully disagree ¤   Disagree ¤   Neutral ¤   Agree ¤   Fully agree ¤
Your comments:
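For illustration only (this sketch is not part of the original questionnaire): responses to five-point items such as those above could be coded numerically and summarised per statement. The coding scheme, function name, and example data below are assumptions, written in Python.

```python
# Minimal sketch: code five-point Likert responses and summarise one statement.
# The 1-5 coding and the "agree" threshold (>= 4) are assumed conventions.

LIKERT_CODES = {
    "Fully disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Fully agree": 5,
}

def summarise(responses):
    """Return the mean coded score and the share of agreeing respondents."""
    codes = [LIKERT_CODES[r] for r in responses]
    mean = sum(codes) / len(codes)
    agree_share = sum(c >= 4 for c in codes) / len(codes)
    return mean, agree_share

# Example: seven coordinators answering statement (1).
mean, share = summarise(["Agree", "Agree", "Neutral", "Fully agree",
                         "Agree", "Disagree", "Agree"])
print(f"mean = {mean:.2f}, agreeing = {share:.0%}")  # mean = 3.71, agreeing = 71%
```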


Appendix 2. Extract (without data) from an NSDI assessment report sent to the NSDI coordinators

This report presents the results of the pilot application of the Multi-view SDI assessment framework (Grus et al. 2007). The Multi-view SDI assessment framework aims to assess one SDI from different perspectives (views).

The results below should be treated only as an indication of NSDI performance seen from four different perspectives; this is not an official NSDI performance report. The report is based on data from 2008, and the values of some indicators may have changed since then.

Assessment summary report: National Spatial Data Infrastructure (country name)

Assessment view 1: Data access facility assessment (clearinghouse)
(based on Crompvoets et al. 2004)
Total score of assessment view 1: (value in %)
Points of attention*: -

Assessment view 2: Assessment of organisational, legal, financial, metadata, data access and services, and standards aspects
(based on the INSPIRE State of Play approach; Vandenbroucke & Janssen 2005)
Total score of assessment view 2: (value in %)
Points of attention*: -

Assessment view 3: Organisational aspects, based on the organisational approach
(based on Kok & van Loenen 2005)
Total score of assessment view 3: (stage name) (four possible stages of SDI development, from the least to the most mature: Stand-alone, Exchange, Intermediary and Network)
Points of attention*: -

Assessment view 4: Assessment of your country's preparedness to embrace SDI
(based on Delgado Fernandez et al. 2005)
Total score of assessment view 4: (value in %)
Points of attention*: - Low level of funding by means of the application of policies regarding cost recovery

Short summary: In our ranking of 21 NSDIs assessed with the four different perspectives and methods of the Multi-view SDI assessment framework, the (country name) NSDI ranked in 1st place. In order to improve, please look at the points of attention. If you are interested in the detailed data and indicator values concerning this report and the ranking of the assessed NSDIs, please contact [email protected].

*Questions that have not been answered or marked as ‘Not sufficient information for assessment’ have been treated as 0 points.
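As an illustration of this scoring rule (a minimal sketch, not taken from the article; the function name and per-item point values are assumptions), a view score in per cent could be computed as follows, with unanswered items contributing 0 points:

```python
# Minimal sketch: compute a view score in %, treating missing answers as 0,
# as the report footnote above describes. Names and values are assumed.

def view_score_percent(answers, max_points_per_item=1.0):
    """Return a view score in %.

    `answers` maps item identifiers to awarded points, with None for items
    left unanswered or marked 'Not sufficient information for assessment'.
    """
    total = sum(0.0 if pts is None else pts for pts in answers.values())
    maximum = max_points_per_item * len(answers)
    return 100.0 * total / maximum if maximum else 0.0

# Example: two answered items and one missing item out of three.
print(view_score_percent({"q1": 1.0, "q2": 0.5, "q3": None}))  # 50.0
```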


Appendix 3. Meta-evaluation criteria used to evaluate the applicability of the Multi-view SDI assessment framework to assess NSDIs

I. Technical adequacy criteria
1. Internal validity: Does the assessment design unequivocally answer the question it was intended to answer?
2. External validity: Do the assessment results have the desired generalisability? Can the necessary extrapolations to other populations, other program conditions, and other times be safely made?
3. Reliability: Are the assessment data accurate and consistent?
4. Objectivity: Would other competent assessors agree on the conclusion of the assessment?

II. Utility criteria
1. Relevance: Are the findings relevant to the audiences of the assessment?
2. Importance: Have the most important and significant of the potentially relevant data been included in the assessment?
3. Scope: Does the assessment information have adequate scope?
4. Credibility: Do the audiences view the assessment as valid and unbiased?
5. Timeliness: Are the results provided to the audiences when they are needed?
6. Pervasiveness: Are the results disseminated to all of the intended audiences?

III. Efficiency criterion
1. Is the assessment cost-effective in achieving the assessment results?

Note the substitution of assessment as the object of the meta-evaluation. Assessment is not evaluation, but it may be considered an evaluation activity.

Note: Derived from Stufflebeam 1974.
