

An Evaluation of an Object-Oriented Paradigm for Land Use/Land Cover Classification

Rutherford V. Platt and Lauren Rapoza
Gettysburg College

The Professional Geographer, 60 (1) 2008, pages 1–14. © Copyright 2008 by Association of American Geographers. Initial submission, August 2006; revised submission, May and June 2007; final acceptance, June 2007. Published by Taylor & Francis, LLC.

Object-oriented image classification has tremendous potential to improve classification accuracies of land use and land cover (LULC), yet its benefits have only been minimally tested in peer-reviewed studies. We aim to quantify the benefits of an object-oriented method over a traditional pixel-based method for the mixed urban–suburban–agricultural landscape surrounding Gettysburg, Pennsylvania. To do so, we compared a traditional pixel-based classification using maximum likelihood to the object-oriented image classification paradigm embedded in eCognition Professional 4.0 software. This object-oriented paradigm has at least four components not typically used in pixel-based classification: (1) the segmentation procedure, (2) nearest neighbor classifier, (3) the integration of expert knowledge, and (4) feature space optimization. We evaluated each of these components individually to determine the source of any improvement in classification accuracy. We found that the combination of segmentation into image objects, the nearest neighbor classifier, and integration of expert knowledge yields substantially improved classification accuracy for the scene compared to a traditional pixel-based method. However, with the exception of feature space optimization, little or no improvement in classification accuracy is achieved by each of these strategies individually.

Key Words: image classification, land cover, land use, object-oriented.


Object-oriented image classification at its simplest level is the classification of homogeneous image primitives, or objects, rather than individual pixels. Object-oriented image classification has numerous potential advantages. If carefully derived, image objects are closely related to real-world objects. Once these objects are derived, topological relationships with other objects (e.g., adjacent to, contains, is contained by, etc.), statistical summaries of spectral and textural values, and shape characteristics can all be employed in the classification procedures (Benz et al. 2004). Despite many advances, object-oriented methods are still computationally intensive and the improvement in classification accuracy over traditional methods is not always clear.

The idea of object-based image analysis has been around since the early 1970s (de Kok, Schneider, and Ammer 1999), but implementation lagged due to lack of computing power. On a limited basis, specialized object-oriented software packages were employed in the 1980s to extract roads and other linear features (McKeown 1988; Quegan et al. 1988). These methods of analysis were difficult to employ and inefficient compared to pixel-based methods, which began to employ advanced techniques such as fuzzy sets, neural networks, and textural measurements. Since the 1990s, computing power has increased and high spatial resolution imagery has become common, prompting a new emphasis on object-oriented techniques (Franklin et al. 2003).

Recently, object-oriented image classification has been successfully used to identify logging and other forest management activities using Landsat ETM+ imagery (Flanders, Hall-Beyer, and Pereverzoff 2003); to map shrub encroachment using QuickBird imagery (Laliberte et al. 2004); to quantify landscape structure using imagery from Landsat ETM+, QuickBird, and aerial photography (Ivits et al. 2005); to map fuel types using Landsat TM and Ikonos imagery (Giakoumakis, Gitas, and San-Miguel 2002); and to detect changes in land use from imagery in the German national topographic and cartographic database (Walter 2004).

Few direct comparisons of object-oriented and traditional pixel-based methods have been published, and these primarily appear in conference proceedings. One study found that object-oriented methods yield similar classification accuracy to traditional methods, but that segmentation of pixels into objects makes the classification more “map-like” by reducing the number of small disconnected patches (Willhauck 2000). Another study compared four change detection methods: traditional postclassification, cross-correlation analysis, neural networks, and an object-oriented method (Civco et al. 2002). The study found that there was no single best way to perform change analysis, and suggested that future object-oriented methods based on “multitemporal objects” could improve on current results. A third study compared pixel-based and object-oriented land use classification for a scene in the Black Sea region of Turkey and found that the object-oriented classification method had a substantial advantage over the pixel-based parallelepiped, minimum distance, and maximum-likelihood classifiers (Oruc, Marangoz, and Buyuksalih 2004). A comparison of land cover classification in northern Australia found that object-oriented classification yielded improved classification accuracy, 78 percent versus 69.1 percent (Whiteside and Ahmad 2005). Overall, the literature suggests either that object-oriented methods have a real advantage or that they demonstrate little advantage but much promise.

These studies all use eCognition Professional 4.0 (recently renamed Definiens Professional), a specialized image classification software package that integrates hierarchical object-oriented image classification, fuzzy logic, and other strategies to improve classification accuracy. At the time of publication, eCognition was the most fully developed object-oriented classification software available. The object-oriented paradigm in eCognition has at least four components not typically used in traditional pixel-based classification methods: (1) the segmentation procedure, (2) nearest neighbor classifier, (3) the integration of expert knowledge, and (4) feature space optimization. We evaluate the object-oriented paradigm in eCognition for classification of an urban–suburban–agricultural landscape surrounding Gettysburg, Pennsylvania.

Our research questions are as follows: First, to what extent does the eCognition object-oriented paradigm increase land use/land cover (LULC) classification accuracy over a traditional pixel-based method for this scene? Second, how much of the increased accuracy, if any, is due to segmentation of image objects, the classifier, expert knowledge, or feature space optimization? This study is distinct from previous studies because it independently evaluates these four elements of the eCognition object-oriented paradigm to determine their effect on classification accuracy.

Methods

Study Area and Data

The study area is 148 km² in size, and located in a rural area northwest of the Baltimore and Washington, DC metropolitan areas (Figure 1). It encompasses the borough of Gettysburg, the Gettysburg National Military Park, and surrounding areas. The area is located in the southwest corner of the Newark-Gettysburg basin, which is situated between the South Mountain (an extension of the Blue Ridge Mountains) to the west and the Susquehanna River to the east. The area comprises extensive agriculture divided by narrow wooded bands and, increasingly, low-density residential development. We acquired a georectified IKONOS satellite image of the study area taken 25 July 2003. The image has a spatial resolution of four meters and has four spectral bands: blue (480.3 nm), green (550.7 nm), red (664.8 nm), and near infrared (805.0 nm).

Classification Models

The image was classified into seven LULC classes: forest, fallow, water, recreational grasses, commercial/industrial/transportation, cultivated, and residential (Table 1). These classes represent the most common and important land uses and covers in the area. We did not use a more detailed classification scheme such as the one used for the National Land Cover data set (NLCD; Vogelmann et al. 2001) because many of the NLCD classes are missing from the area (e.g., ice/snow, shrublands), rare (e.g., high-density development), or not distinguishable from a single-date image. Two to four image objects were selected as training samples for each class for use in the classification procedures described in the next section.

A total of eight image classifications were constructed to compare the object-oriented paradigm to a traditional classification (Table 2). Model 1 represents the best object-oriented model, model 2 represents the traditional pixel-based classification, and models 3 through 8 fall somewhere in between. We compared pairs of models that differed in one key respect: either in terms of analysis level (pixel or object), classifier (maximum likelihood or nearest neighbor), expert knowledge (used or not), or feature space optimization (used or not). These terms and the specific differences between models are discussed in the following sections.


Figure 1 Study area.



Table 1  Land use/land cover classes

Class                                   Description
Forest                                  Closed canopy forest
Fallow                                  Field with little or no vegetation
Water                                   Stream, lake, or pond
Recreational grasses                    Mowed grass in a suburban or urban context, might have scattered trees
Commercial/industrial/transportation    Parking lots, industrial sites, strip malls, and associated infrastructure
Cultivated                              Cropland, pasture, and unmowed grasses
Residential                             Single or multifamily housing


Analysis Level: Pixel versus Object

Traditional image classification methods classify individual pixels, whereas object-oriented classification methods classify homogeneous regions, or image objects. The process of aggregating pixels into image objects is known as image segmentation. For the models operating at the object level, the image used in this study was segmented using the fractal net evolution approach (FNEA), which is a multifractal approach implemented in the eCognition image processing software (Baatz and Schaepe 2000; Definiens 2003). FNEA is a pairwise clustering process that finds areas of minimum spectral and spatial heterogeneity given a set of scale, color, and shape parameters (Benz et al. 2004).

The size of the image objects is determined by the scale parameter, a unitless number related to the image resolution that describes the maximum allowable heterogeneity of image objects. As the scale parameter increases, the size of the image objects also increases (Benz et al. 2004). The color and shape parameters are weights between zero and one that determine the contribution of spectral heterogeneity (in this case red, green, blue, and near infrared) and shape heterogeneity to the overall heterogeneity that is to be minimized. The smoothness and compactness parameters are additional weights between zero and one that determine how shape heterogeneity is calculated.

Spectral heterogeneity is defined as the sum of standard deviations of each image band. Minimizing only spectral heterogeneity results in objects that are spectrally similar, but that might have fractally shaped borders or many branched segments (Definiens 2003). To address this issue, the segmentation process can also incorporate spatial heterogeneity in terms of compactness or smoothness. Compactness is defined as the ratio of the border length and the square root of the number of object pixels. Smoothness is defined as the ratio of the border length and the shortest possible border length derived from the bounding box of an image object (Definiens 2003).
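
For reference, the quantities involved in the segmentation can be written compactly. In our own notation, paraphrasing the definitions above and Baatz and Schaepe (2000) rather than reproducing the software documentation, the merging criterion has roughly the form

    \[
    h_{color} = \sum_{b} \sigma_{b}, \qquad
    h_{cmpct} = \frac{l}{\sqrt{n}}, \qquad
    h_{smooth} = \frac{l}{l_{box}},
    \]
    \[
    h_{shape} = w_{cmpct}\, h_{cmpct} + (1 - w_{cmpct})\, h_{smooth}, \qquad
    f = w_{color}\, h_{color} + (1 - w_{color})\, h_{shape},
    \]

where \(\sigma_b\) is the standard deviation of band \(b\) within an object, \(l\) is the object's border length, \(n\) is its number of pixels, \(l_{box}\) is the shortest border length derived from its bounding box, and \(f\) is the overall heterogeneity that the pairwise merging seeks to keep small; \(w_{color}\) and \(w_{cmpct}\) correspond to the color and compactness weights described above.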

The “best” compactness and smoothness parameters depend on the size and types of objects to be extracted. For example, an object representing an agricultural field would ideally have high smoothness and compactness, whereas an object representing a riparian area along a stream would ideally have low smoothness and compactness.

Table 2  Summary of classification models

Model  Analysis level  Classifier          Expert knowledge  Feature space
1      Object          Nearest neighbor    Yes               Spectral
2      Pixel           Maximum likelihood  No                Spectral
3      Object          Nearest neighbor    No                Spectral
4      Pixel           Nearest neighbor    No                Spectral
5      Object          Maximum likelihood  No                Spectral
6      Pixel           Nearest neighbor    Yes               Spectral
7      Object          Nearest neighbor    No                Optimized
8      Pixel           Nearest neighbor    No                Optimized


Figure 2 Before and after segmentation.


It is important to note that there is no such thing as optimal parameters for image segmentation (Benz et al. 2004). For pixel-level models, we used a scale parameter of one to force eCognition to operate as a pixel-based classifier. For object-level models, the normal procedure is simply to iteratively try different parameters until the resulting objects are appropriately sized and shaped for the particular task. After testing many possible parameters, we used the following: scale 50, color 0.7, shape 0.3, smoothness 0.5, compactness 0.5. The resulting objects closely corresponded to the boundaries of fields, woodlots, strip malls, and other elements of interest in the image (Figure 2).

Image Classifier

The models are divided according to which classifier they use: maximum likelihood or nearest neighbor. The maximum likelihood classifier calculates the probability that a pixel or object belongs to each class and then assigns the pixel or object to the class with the highest probability (Richards 1999). It is one of the most commonly used classifiers because of its simplicity and robustness (Platt and Goetz 2004). The Environment for Visualizing Images (ENVI) image processing software was used for maximum likelihood classification. The nearest neighbor classifier is a part of the eCognition object-oriented paradigm and assigns each object to the class closest to it in feature space.
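
As a rough illustration of how the two classifiers differ, the sketch below assigns object (or pixel) feature vectors with a one-nearest-neighbor rule and with a Gaussian maximum likelihood rule. The function names and the specific Gaussian formulation are ours; ENVI and eCognition implement more elaborate versions (eCognition's nearest neighbor, for example, returns fuzzy membership values), so this is only a sketch of the underlying logic.

    import numpy as np

    def nearest_neighbor_classify(samples, train_feats, train_labels):
        """Assign each sample the label of the closest training sample in feature space."""
        labels = []
        for x in samples:
            d = np.linalg.norm(train_feats - x, axis=1)  # Euclidean distance to every training sample
            labels.append(train_labels[np.argmin(d)])
        return np.array(labels)

    def maximum_likelihood_classify(samples, train_feats, train_labels):
        """Assign each sample to the class with the highest Gaussian log-likelihood."""
        stats = {}
        for c in np.unique(train_labels):
            feats = train_feats[train_labels == c]
            mu = feats.mean(axis=0)
            cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # regularize small samples
            stats[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        labels = []
        for x in samples:
            best_class, best_ll = None, -np.inf
            for c, (mu, cov_inv, logdet) in stats.items():
                diff = x - mu
                ll = -0.5 * (diff @ cov_inv @ diff + logdet)  # log-likelihood up to a constant
                if ll > best_ll:
                    best_class, best_ll = c, ll
            labels.append(best_class)
        return np.array(labels)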

Expert Knowledge

The models are also divided according to whether or not they employ expert knowledge (i.e., user-developed classification rules) in addition to the classifier (maximum likelihood or nearest neighbor). Traditional classification methods typically do not employ expert knowledge, whereas the eCognition object-oriented paradigm integrates membership functions with the nearest neighbor classifier. Membership functions allow objects to be classified using simple rules related to spectral, shape, and textural characteristics or to relationships between neighboring objects. For example, we might observe that recreational grasses are typically surrounded by developed areas like commercial/industrial/transportation or residential. Knowing this, a membership function can be constructed that assigns each object a value between zero and one depending on the percentage of that object that is surrounded by an object defined as developed (Figure 3). The closer this number is to one, the more likely the object will be classified as recreational grasses. For models that use expert knowledge, membership functions were defined for every class except for water (Table 3).


Figure 3 Example of a membership function.

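Figure 3 depicts such a function graphically. As a concrete sketch, the rule for the recreational grasses example above can be encoded as a simple ramp on the share of an object's border that touches developed objects. The function name and the threshold values below are ours and purely illustrative; the membership functions actually used in the study are summarized in Table 3.

    def developed_border_membership(shared_border, total_border, lower=0.25, upper=0.75):
        """Membership value that rises from 0 to 1 as the fraction of the object's border
        shared with developed objects climbs from `lower` to `upper` (illustrative thresholds)."""
        rel_border = shared_border / total_border
        if rel_border <= lower:
            return 0.0
        if rel_border >= upper:
            return 1.0
        return (rel_border - lower) / (upper - lower)  # linear ramp between the two thresholds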

Feature Space Optimization

Finally, models were divided into those that use feature space optimization and those that do not. Typically, classification is conducted using the spectral or textural bands chosen by the analyst. In the feature space optimization procedure, the user first specifies which feature space bands should be included in the analysis. For objects, the feature space bands might number in the hundreds and include spectral data, hierarchical data, shape data, and textural data. For pixels, the feature space is considerably smaller and is limited to spectral and textural data. Feature space optimization then calculates all the bands and sorts them based on how well they separate the classes.
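
The sketch below is a toy stand-in for this sorting step: it scores each candidate feature band by how far apart the two closest class means fall once the band is standardized, and ranks bands best first. The names and the scoring rule are ours and deliberately simplified; this is not the separability measure implemented in eCognition.

    import itertools
    import numpy as np

    def class_separation(feats, labels, band):
        """Distance between the two closest class means for a single standardized feature band."""
        f = feats[:, band]
        f = (f - f.mean()) / f.std()
        means = [f[labels == c].mean() for c in np.unique(labels)]
        return min(abs(a - b) for a, b in itertools.combinations(means, 2))

    def rank_bands(feats, labels):
        """Score every candidate band and return band indices sorted best first."""
        scores = [(class_separation(feats, labels, j), j) for j in range(feats.shape[1])]
        return [j for _, j in sorted(scores, reverse=True)]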

The thirty feature space bands that best separated the LULC classes were used to classify the image for models that used feature space optimization. The bands beyond these thirty did little to help separate classes and so were not used in the classification. Textural data were computed following Haralick, Shanmugam, and Dinstein (1973). Haralick's grey level co-occurrence matrix describes how different combinations of pixel values occur within an object (Definiens 2003). Haralick's grey level difference vector, another way to measure texture, is the sum of the diagonals of the grey level co-occurrence matrix.
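
For reference, the grey level co-occurrence matrix (GLCM) and grey level difference vector (GLDV) measures named in the lists below follow the standard Haralick definitions. Writing \(p(i,j)\) for the normalized frequency with which grey levels \(i\) and \(j\) occur in neighboring pixels of an object, the measures are, up to the normalization details of the software,

    \[
    \mu_i = \sum_{i,j} i\, p(i,j), \qquad
    \sigma_i^2 = \sum_{i,j} (i - \mu_i)^2\, p(i,j),
    \]
    \[
    \mathrm{Contrast} = \sum_{i,j} (i - j)^2\, p(i,j), \qquad
    \mathrm{Homogeneity} = \sum_{i,j} \frac{p(i,j)}{1 + (i - j)^2},
    \]
    \[
    \mathrm{Entropy} = -\sum_{i,j} p(i,j) \ln p(i,j), \qquad
    \mathrm{Correlation} = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j)\, p(i,j)}{\sigma_i \sigma_j},
    \]
    \[
    V(k) = \sum_{|i-j|=k} p(i,j), \qquad
    \mathrm{GLDV\ entropy} = -\sum_{k} V(k) \ln V(k).
    \]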

At the object level (model 7), the following six bands best separated the classes according to the feature space optimization:

• Grey level co-occurrence matrix correlation of band 3 (red). This measures the correlation between the values of neighboring pixels in the red band.

• Shape index: perimeter divided by four times the square root of the area of an object.

• Degree of skeleton branching: describes the highest order of branching of skeletons, defined as lines that connect the midpoints of a Delaunay triangulation of an object. High values indicate a complex geometrical structure of the object (Benz et al. 2004).

• Maximum pixel value of band 4 (near infrared).

• Grey level difference vector entropy of band 3 (red). This measures whether pixels have similar brightness levels in the red band.

• Grey level co-occurrence matrix entropy of band 1 (blue). This measures whether pixels have similar brightness levels in the blue band.

Table 3  Membership functions

Class                                   Factors promoting membership                        Factors limiting membership
Commercial/industrial/transportation    —                                                   Relative border to residential
Recreational grasses                    Relative border to residential or                   Relative border to residential or
                                        commercial/industrial/transportation                commercial/industrial/transportation
Residential                             Adjacent to recreational grasses,                   Relative border to commercial/industrial/
                                        low homogeneity                                     transportation, cultivated, or fallow
Cultivated                              Rectangular fit                                     Relative border to residential
Fallow                                  Rectangular fit                                     Relative border to residential
Forested                                —                                                   Relative border to residential
Water                                   —                                                   —



At the pixel level (model 8), the following six bands best separated the classes according to the feature space optimization:

• Grey level co-occurrence matrix mean of band 1 (blue). This measures the mean frequency of pixel values in combination with neighboring pixel values in the blue band.

• Grey level co-occurrence matrix contrast of band 4 (near infrared). This measures the amount of variation in the near infrared band within an object.

• Grey level co-occurrence matrix variance of band 4 (near infrared). Similar to grey level co-occurrence matrix contrast, this measures the dispersion of variation in the near infrared band within an object.

• Grey level co-occurrence matrix mean of band 4 (near infrared). This measures the mean frequency of pixel values in combination with neighboring pixel values in the near infrared band.

• Mean difference to neighbor, band 4 (near infrared). This measures the mean difference between a pixel value and its neighbor in the near infrared band.

• Grey level co-occurrence matrix homogeneity of band 1 (blue). This measures the degree to which the object displays a lack of variation in the blue band.

One possible issue with feature space optimization is overfitting: fitting a statistical model with too many parameters such that the model fits the training data much better than the validation data. To minimize overfitting, we excluded bands with nonnormalized units of length or area; objects of any size can belong to any class.

Classification Evaluation

To evaluate classification accuracy of the eight models, a random sample of 300 points was generated across the image in areas that are not training sites. This random sample was used to determine the relative proportion of each class within the image. Following Congalton and Green (1999), the random sample was supplemented by a stratified random sample of 250 points. This ensured that each class in the classified image contained at least thirty validation points. All 550 validation points were overlain on the original imagery and on 0.6 m georectified aerial photography taken in spring 2003. An image analyst identified the LULC class at the object level and the pixel level, always deferring to the IKONOS imagery for classes that might have changed between the times the two images were taken. A second image analyst revisited any points that were tagged as “questionable” by the first analyst. If the two analysts disagreed, the point was visited in the field by the first analyst to make a final determination of the class. A total of thirty-five points were visited in the field.
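
A minimal sketch of this two-stage sampling design is given below, assuming the classified image is available as a two-dimensional array of class codes. The function name, the random seed, and the top-up logic are our own simplifications of the Congalton and Green style design described above (in this simplified version, stratified points may occasionally duplicate random ones).

    import numpy as np

    rng = np.random.default_rng(0)  # illustrative seed

    def sample_validation_points(class_map, n_random=300, min_per_class=30, n_stratified=250):
        """Draw a simple random sample, then top up under-represented classes from a
        stratified pool until each class has at least `min_per_class` points."""
        rows, cols = class_map.shape
        idx = rng.choice(rows * cols, size=n_random, replace=False)
        points = list(zip(*np.unravel_index(idx, class_map.shape)))

        counts = {c: 0 for c in np.unique(class_map)}
        for r, c in points:
            counts[class_map[r, c]] += 1

        budget = n_stratified
        for cls, have in counts.items():
            candidates = np.argwhere(class_map == cls)
            take = min(max(0, min_per_class - have), budget, len(candidates))
            if take > 0:
                chosen = candidates[rng.choice(len(candidates), size=take, replace=False)]
                points.extend(map(tuple, chosen))
                budget -= take
        return points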

Using all 550 points, a confusion matrix was generated to compare the predicted LULC classes to the actual LULC classes. The diagonal of the matrix shows the number of objects or pixels where the predicted class is the same as the actual class, whereas the off-diagonal values show the number of objects or pixels where the predicted class is different from the actual class. For each model, user's accuracy (probability that a site in the classified image actually represents that class on the ground) and producer's accuracy (probability that a site on the ground was classified correctly) were calculated for each class. The average user's accuracy, average producer's accuracy, and weighted agreement (percent correct weighted by the actual occurrence in the landscape, as estimated by the 300-point random sample) were also reported.

Note that our accuracy reporting does not include Kappa, which attempts to correct for chance agreement, because this measure is difficult to interpret, has an arbitrary definition of chance agreement, and conflates different sources of error (Pontius 2000). Instead, we reported weighted disagreement due to location (percentage of objects or pixels incorrectly classified because the predicted location was incorrect) and weighted disagreement due to quantity (percentage of objects or pixels incorrectly classified because the predicted quantities of classes were incorrect). Both percentages are weighted by prevalence of the classes in the image.
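
These summary measures can be computed from a confusion matrix and the estimated class prevalences roughly as follows. This is our own sketch: it treats weighted agreement as producer's accuracy weighted by estimated landscape prevalence and splits total disagreement into quantity and location components in the spirit of Pontius (2000); the exact formulas used for the tables in this article may differ in detail.

    import numpy as np

    def accuracy_summary(confusion, prevalence):
        """confusion[i, j]: validation points of reference class i predicted as class j.
        prevalence[i]: estimated landscape proportion of class i (e.g., from the 300-point
        random sample). Returns average accuracies and prevalence-weighted (dis)agreement."""
        confusion = np.asarray(confusion, dtype=float)
        prevalence = np.asarray(prevalence, dtype=float)

        row_tot = confusion.sum(axis=1)   # reference totals
        col_tot = confusion.sum(axis=0)   # predicted totals
        diag = np.diag(confusion)

        producers = diag / row_tot        # per-class producer's accuracy
        users = diag / col_tot            # per-class user's accuracy

        # Rescale each reference row to its estimated share of the landscape, giving
        # an estimated population proportion matrix.
        p = confusion / row_tot[:, None] * prevalence[:, None]

        agreement = np.trace(p)                                        # prevalence-weighted percent correct
        quantity = 0.5 * np.abs(p.sum(axis=1) - p.sum(axis=0)).sum()   # disagreement in class quantities
        location = (1.0 - agreement) - quantity                        # remaining (allocation) disagreement

        return {
            "avg_users_acc": users.mean(),
            "avg_producers_acc": producers.mean(),
            "weighted_agreement": agreement,
            "quantity_disagreement": quantity,
            "location_disagreement": location,
        }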


Table 4  Confusion matrix, model 1

Model 1. Analysis level: object; classifier: nearest neighbor; expert knowledge: yes; feature space: spectral.
Average user's accuracy 72%; average producer's accuracy 71%; weighted agreement 78%; weighted disagreement (location) 13%; weighted disagreement (quantity) 9%.

Rows give the reference class and columns the predicted class; each row ends with its total and producer's accuracy, and the bottom row gives user's accuracy by class.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   94           0       4         1           0            8      0    107         88%
Cultivated             3         101       2        18          12            3      0    139         73%
Fallow                14           1      14         2           0            7      0     38         37%
Forested               0           4       1        85           6            5      0    101         84%
Recreational grass     3          18       0         7          54           17      0     99         55%
Residential            8           0       2         1           0           23      0     34         68%
Water                  1           0       0         0           0            2     28     31         90%
Total                123         124      23       114          72           65     28    549         71%
User's acc.          76%         81%     61%       75%         75%          35%   100%                72%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


Results and Discussion

Representing the eCognition object-oriented paradigm, model 1 (Table 4) operates on the object level, uses the nearest neighbor classifier, incorporates expert knowledge, and uses only the spectral feature space. The weighted agreement (percent correct weighted by prevalence of the class) was 78 percent, the highest of all the models (Table 4). The user's accuracy was over 70 percent for all classes except fallow and residential. Because residential is a rare class that is spectrally similar to many other classes, areas known to be residential were only 68 percent likely to be classified correctly. Producer's accuracy was over 70 percent for all classes except fallow, residential, and recreational grasses. Sites known to be fallow were correctly classified only 37 percent of the time because of the spectral similarity to commercial/industrial/transportation. The classes of commercial/industrial/transportation, cultivated, forested, and water were over 70 percent accurate from both the user's and producer's perspective. The weighted disagreement due to location is 13 percent and the weighted disagreement due to quantity is 9 percent, showing that neither type of error predominates.

Representing pixel-based image classification, model 2 operates on the pixel level, uses the maximum likelihood classifier, does not incorporate expert knowledge, and uses spectral feature space (Table 5). The weighted agreement is 64 percent, which is 14 percent lower than model 1. Model 2 has a lower classification accuracy than model 1 for every class except fallow, which has a higher user's and producer's accuracy than model 1. Whereas weighted disagreement due to quantity is similar between models 1 and 2 (9 percent versus 12 percent), the weighted disagreement due to location is clearly superior in model 1 compared to model 2 (13 percent versus 24 percent).

Analysis Level: Pixel versus Object

To evaluate the effect of analysis level we compared model 3 (Table 6) to model 4 (Table 7). Both models use the nearest neighbor classifier, omit expert knowledge, and use spectral feature space, but model 3 operates at the object level whereas model 4 operates at the pixel level. The weighted agreement is similar for both models: 60 percent for model 3 and 61 percent for model 4. Weighted disagreement due to location is identical (17 percent) and similar for weighted disagreement due to quantity (23 percent versus 22 percent). The accuracy within classes varies, however. These results suggest that classifying objects rather than pixels does not necessarily improve classification accuracy.

Classifier

To evaluate the effect of the classifier we compared model 3 (Table 6) to model 5 (Table 8). Both models operate at the object level, omit expert knowledge, and use spectral feature space. They differ only in terms of the classifier: model 3 uses the nearest neighbor classifier, whereas model 5 uses the maximum likelihood classifier. Model 3 has a lower weighted agreement than model 5 (60 percent versus 71 percent). Model 3 also has a similar weighted disagreement due to location (17 percent versus 18 percent) and a higher weighted disagreement due to quantity (23 percent versus 11 percent). This suggests that maximum likelihood does a better job in predicting the quantity of classes, but is comparable to nearest neighbor in predicting the location. However, because several less common classes (water and residential) are poorly classified in model 5, the average producer's accuracy for this model (58 percent) is lower than for model 3 (61 percent).


Table 5  Confusion matrix, model 2

Model 2. Analysis level: pixel; classifier: maximum likelihood; expert knowledge: no; feature space: spectral.
Average user's accuracy 63%; average producer's accuracy 53%; weighted agreement 64%; weighted disagreement (location) 24%; weighted disagreement (quantity) 12%.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   77           1       2         0           1           26      0    107         72%
Cultivated             0          64       8         0          51           16      0    139         46%
Fallow                 1           0      30         0           0            7      0     38         79%
Forested               0          10       2        53          26           10      0    101         52%
Recreational grass     1          26       3         7          42           20      0     99         42%
Residential           16           0       2         0           2           14      0     34         41%
Water                  7           0       0         0           0           12     12     31         39%
Total                102         101      47        60         122          105     12    549         53%
User's acc.          75%         63%     64%       88%         34%          13%   100%                63%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


Expert Knowledge

To evaluate the effect of expert knowledge integrated into membership functions, we compared model 1 (Table 4) to model 3 (Table 6) and model 6 (Table 9) to model 4 (Table 7). Within each pair, the models differ only in terms of whether they use membership functions in addition to the classifier (models 1 and 6 do, models 3 and 4 do not). The first pair of models operates at the object level and the second set operates at the pixel level.

At the object level, membership functions improve classification accuracy considerably. Model 1 is higher than model 3 in terms of weighted agreement (78 percent versus 60 percent), average user's accuracy (72 percent versus 62 percent), and average producer's accuracy (71 percent versus 61 percent). In particular, cultivated, residential, and recreational grasses are improved from both the user's and producer's perspective when membership functions are used. The classification accuracy of water remains the same because membership functions are not used in either model. Model 1 is superior to model 3 in terms of weighted disagreement due to location (13 percent versus 17 percent) and quantity (9 percent versus 23 percent). This shows that membership rules especially help predict the quantity of objects in each class.

Table 6  Confusion matrix, model 3

Model 3. Analysis level: object; classifier: nearest neighbor; expert knowledge: no; feature space: spectral.
Average user's accuracy 62%; average producer's accuracy 61%; weighted agreement 60%; weighted disagreement (location) 17%; weighted disagreement (quantity) 23%.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   62           0      24         0           0           21      0    107         58%
Cultivated             0          73       3         1          47           15      0    139         53%
Fallow                 1           2      32         0           0            3      0     38         84%
Forested               0          10       2        68          18            3      0    101         67%
Recreational grass     1          51       7         2          20           18      0     99         20%
Residential            3           1      11         0           0           19      0     34         56%
Water                  2           0       1         0           0            0     28     31         90%
Total                 66         137      80        71          85           79     28    549         61%
User's acc.          94%         53%     40%       96%         24%          24%   100%                62%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


Table 7  Confusion matrix, model 4

Model 4. Analysis level: pixel; classifier: nearest neighbor; expert knowledge: no; feature space: spectral.
Average user's accuracy 60%; average producer's accuracy 62%; weighted agreement 61%; weighted disagreement (location) 17%; weighted disagreement (quantity) 22%.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   71           3      11         1           0           19      2    107         66%
Cultivated             0          80      10         2          37           10      0    139         58%
Fallow                 1           3      31         0           0            3      0     38         82%
Forested               0          17       5        69           9            0      1    101         68%
Recreational grass     1          42       8        10          28           10      0     99         28%
Residential           10           0       9         1           0           14      0     34         41%
Water                  1           0       1         0           0            0     29     31         94%
Total                 84         145      75        83          74           56     32    549         62%
User's acc.          85%         55%     41%       83%         38%          25%    91%                60%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


At the pixel level, there is no improvement in classification accuracy when membership functions are used. Models 4 and 6 are similar in weighted agreement (61 percent versus 60 percent). Model 6 has a higher average user's accuracy than model 4 (73 percent versus 60 percent), but a lower average producer's accuracy (60 percent versus 62 percent). An important difference between the two models is the weighted disagreement due to location and quantity. For model 6, the weighted disagreement due to location is 7 percent, whereas the weighted disagreement due to quantity is 33 percent. Classes like residential and recreational grasses are severely underpredicted by model 6. In contrast, for model 4, the weighted disagreement due to location is 17 percent, whereas the weighted disagreement due to quantity is 22 percent. Compared to model 6, model 4 does a better job predicting the number of pixels in each class, but a worse job predicting the locations of these pixels. Overall the results show that these membership functions do not improve classification at the pixel level.

Feature Space Optimization

To evaluate the effect of feature space optimization, we compared model 3 (Table 6) to model 7 (Table 10) and model 4 (Table 7) to model 8 (Table 11). The first pair operates on the object level, whereas the second pair operates on the pixel level. Models 3 and 4 use spectral feature space only, whereas models 7 and 8 use feature space optimization. Feature space optimization benefits classification accuracy at both the object and pixel level. At the object level, model 7 has a weighted agreement of 71 percent, whereas model 3 has a weighted agreement of 60 percent. At the pixel level, model 8 has a weighted agreement of 67 percent, whereas model 4 has a weighted agreement of 61 percent. In neither case do the models outperform model 1, which uses expert knowledge but not feature space optimization.

Table 8  Confusion matrix, model 5

Model 5. Analysis level: object; classifier: maximum likelihood; expert knowledge: no; feature space: spectral.
Average user's accuracy 69%; average producer's accuracy 58%; weighted agreement 71%; weighted disagreement (location) 18%; weighted disagreement (quantity) 11%.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   84           0       1         0           0           22      0    107         79%
Cultivated             0          79      10         0          41            9      0    139         57%
Fallow                 1           0      31         0           0            6      0     38         82%
Forested               0           6       0        61          32            2      0    101         60%
Recreational grass     2          25       1         2          52           17      0     99         53%
Residential           18           1       2         0           0           13      0     34         38%
Water                 10           0       0         0           0           10     11     31         35%
Total                115         111      45        63          79          125     11    549         58%
User's acc.          73%         71%     69%       97%         66%          10%   100%                69%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


Table 9  Confusion matrix, model 6

Model 6. Analysis level: pixel; classifier: nearest neighbor; expert knowledge: yes; feature space: spectral.
Average user's accuracy 73%; average producer's accuracy 60%; weighted agreement 60%; weighted disagreement (location) 7%; weighted disagreement (quantity) 33%.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   88           4      12         1           0            0      2    107         82%
Cultivated             0         125      11         3           0            0      0    139         90%
Fallow                 3           3      32         0           0            0      0     38         84%
Forested               0          24       5        71           0            0      1    101         70%
Recreational grass     1          73      12        12           1            0      0     99          1%
Residential           20           0      13         1           0            0      0     34          0%
Water                  1           0       1         0           0            0     29     31         94%
Total                113         229      86        88           1            0     32    549         60%
User's acc.          78%         55%     37%       81%        100%           0%    91%                63%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


Limitations

This analysis has several limitations. First, the results should not automatically be extended to other contexts; they are partially a function of the spatial and spectral resolution of the image, the composition of the scene, the classification system, and the segmentation parameters. However, the results are consistent with the growing body of literature that shows an advantage for object-oriented classification methods (Willhauck 2000; Oruc, Marangoz, and Buyuksalih 2004; Whiteside and Ahmad 2005). Second, although we tested several aspects of the object-oriented image classification paradigm in eCognition, we cannot be sure that we maximized the software's potential. For example, the segmentation parameters and membership functions we used are not necessarily optimal.

A third limitation is that, despite the simple classification system, many of the classes might be difficult to distinguish spectrally. Part of the potential difficulty in separating these LULC classes is due to the spatial and spectral resolution of the imagery. High spatial resolution imagery, like the imagery used in this study, can be difficult to classify due to the high spectral heterogeneity within classes (Woodcock and Strahler 1987; Donnay 1999; Laliberte et al. 2004). Furthermore, multispectral IKONOS imagery contains only four spectral bands and thus has poorer spectral resolution than many other common multispectral sensors such as Landsat 7.

Table 10  Confusion matrix, model 7

Model 7. Analysis level: object; classifier: nearest neighbor; expert knowledge: no; feature space: optimized.
Average user's accuracy 57%; average producer's accuracy 56%; weighted agreement 71%; weighted disagreement (location) 20%; weighted disagreement (quantity) 9%.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   31           0      34         0           0           40      2    107         29%
Cultivated             0          72      24         8          22           10      3    139         52%
Fallow                 1           0      33         0           0            4      0     38         87%
Forested               0          11       5        47          19           19      0    101         47%
Recreational grass     2           6      22         0          23           43      3     99         23%
Residential            3           0       6         0           0           24      1     34         71%
Water                  2           0       1         0           0            1     27     31         87%
Total                 39          89     125        55          64          141     36    549         56%
User's acc.          79%         81%     26%       85%         36%          17%    75%                57%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


Table 11  Confusion matrix, model 8

Model 8. Analysis level: pixel; classifier: nearest neighbor; expert knowledge: no; feature space: optimized.
Average user's accuracy 59%; average producer's accuracy 63%; weighted agreement 67%; weighted disagreement (location) 22%; weighted disagreement (quantity) 11%.

                     CIT  Cultivated  Fallow  Forested  Rec. grass  Residential  Water  Total  Prod. acc.
CIT                   65           0       7         0           0           35      0    107         61%
Cultivated             0          71      16         4          40            8      0    139         51%
Fallow                 1           3      31         0           1            2      0     38         82%
Forested               0          15       4        61           8           13      0    101         60%
Recreational grass     1          18       8         6          26           40      0     99         26%
Residential            5           0       3         0           0           26      0     34         76%
Water                  1           0       0         0           0            4     26     31         84%
Total                 73         145      75        83          74           56     32    549         63%
User's acc.          89%         49%     41%       73%         35%          46%    81%                59%

Note: CIT = Commercial/industrial/transportation; Rec. grass = Recreational grass.


Conclusions

This study started with two research questions: (1) to what extent object-oriented image classification increases LULC classification accuracy over a traditional pixel-based method for this scene, and (2) how much of the increased accuracy, if any, is due to segmentation, the classifier, expert knowledge, or feature space optimization. The answer to the first question is clear: we found that the object-oriented paradigm implemented in eCognition yields a considerable improvement in classification accuracy over a traditional method for this scene. Weighted agreement of the best object-oriented method (model 1) was 78 percent, which is 14 percent higher than the model representing the traditional pixel-based classification (model 2). That said, the object-oriented paradigm is not a magic bullet: when classes overlap spectrally, as in this study, high classification accuracy is still difficult to achieve.

The answer to the second question is that the combination of the object analysis level, the nearest neighbor classifier, and expert knowledge yields the highest classification accuracy. Therefore, for the most part, we cannot attribute the advantage of the object-oriented paradigm to any one of these methods in isolation. For example, we found that the classification of image objects rather than pixels by itself yields no increase in weighted agreement (∼60 percent weighted agreement in each case for models 3 and 4). Furthermore, using the nearest neighbor classifier rather than the maximum likelihood classifier actually decreases weighted agreement (60 percent for model 3, 71 percent for model 5) and does a worse job predicting the quantity of classes (weighted disagreement of 23 percent for model 3 and 11 percent for model 5).

When paired with a nearest neighbor classifier, membership functions yielded a large improvement in classification accuracy at the object level (weighted agreement of 78 percent for model 1 compared to 60 percent for model 3). At the pixel level, there is no improvement in classification when membership functions are used (weighted agreement of ∼60 percent for both models 4 and 6). It is important to underscore that membership functions are not exclusive to object-oriented classification; they can be implemented for pixel-based classification as well. In this study, most of the membership functions are related to relative border and adjacency, which can be applied at both the pixel and object level. However, membership functions are more broadly applicable at the object level because certain variables commonly used in membership functions, such as shape, are only meaningful for objects. Feature space optimization led to an increase in classification accuracy at both the object and pixel level, but feature space optimization models still performed more poorly than model 1, which integrated expert knowledge.



The study shows that, for this image and classification system, the object-oriented paradigm improves classification accuracy considerably. Unlike previous studies, we investigated the source of this advantage and found that much of the benefit is derived from the ability to integrate expert knowledge through membership functions, which is only effective at the object level. To simplify the comparison with traditional image classification methods, we used a flat classification scheme and a single level of image objects. A next step in the study will be to develop a nested hierarchy of image objects and classify land cover at each level using information from subobjects (e.g., trees, grass) and superobjects (e.g., city park, forest). This would reduce direct comparability with pixel-based methods, but allow us to take full advantage of the multiresolution capabilities of eCognition and potentially further boost classification accuracy.

Literature Cited

Baatz, M., and A. Schaepe. 2000. Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII (Applied Geographic Information Processing XII), eds. J. Strobl, T. Blaschke, and G. Griesebner, 12–23. Heidelberg, Germany: Wichmann-Verlag.

Benz, U. C., P. Hofmann, G. Willhauck, I. Lingenfelder, and M. Heynen. 2004. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS Journal of Photogrammetry & Remote Sensing 58:239–58.

Civco, D. L., J. D. Hurd, E. H. Wilson, M. Song, and Z. Zhang. 2002. A comparison of land use and land cover change detection methods. Paper presented at the annual conference of the ASPRS-ACSM.

Congalton, R., and K. Green. 1999. Assessing the accuracy of remotely sensed data: Principles and practices. Boca Raton, FL: CRC/Lewis Press.

De Kok, R., T. Schneider, and U. Ammer. 1999. Object based classification and applications in the Alpine forest environment. In Fusion of sensor data, knowledge sources and algorithms: Proceedings of the joint ISPRS/EARSeL Workshop, 3–4 June 1999, Valladolid, Spain. International Archives of Photogrammetry and Remote Sensing 32 (7-4-3/W6).

Donnay, J. P. 1999. Use of remote sensing information in planning. In Geographical information and planning, eds. J. Stillwell, S. Geertman, and S. Openshaw, 242–60. Berlin: Springer-Verlag.

eCognition Professional, version 4.0. Definiens Imaging, Munich, Germany.

ENVI, version 4.1. ITT Visual Information Solutions, Boulder, CO.

Flanders, D., M. Hall-Beyer, and J. Pereverzoff. 2003. Preliminary evaluation of eCognition object-based software for cut block delineation and feature extraction. Canadian Journal of Remote Sensing 29 (4): 441–52.

Franklin, J., S. R. Phinn, C. E. Woodcock, and J. Rogan. 2003. Rationale and conceptual framework for classification approaches to assess forest resources and properties. In Methods and applications for remote sensing of forests: Concepts and case studies, eds. M. Wulder and S. E. Franklin. Kluwer.

Giakoumakis, M. N., I. Z. Gitas, and J. San-Miguel. 2002. Object-oriented classification modeling for fuel type mapping in the Mediterranean, using Landsat TM and IKONOS imagery: Preliminary results. In Forest fire research and wildfire safety, ed. X. Viegas. Rotterdam, The Netherlands: Millpress.

Haralick, R. M., K. Shanmugam, and I. Dinstein. 1973. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics 3:610–21.

Ivits, E., B. Koch, T. Blaschke, M. Jochum, and P. Adler. 2005. Landscape structure assessment with image grey-values and object-based classification at three spatial resolutions. International Journal of Remote Sensing 26 (14): 2975–93.

Laliberte, A. S., A. Rango, K. M. Havstad, J. F. Paris, R. F. Beck, R. McNeely, and A. L. Gonzalez. 2004. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sensing of Environment 93:198–210.

McKeown, D. 1988. Building knowledge-based systems for detecting man-made structure from remotely sensed imagery. Philosophical Transactions of the Royal Society of London, Series A: Mathematical and Physical Sciences 324 (1579): 423–35.

Oruc, M., A. M. Marangoz, and G. Buyuksalih. 2004. Comparison of pixel-based and object-oriented classification approaches using Landsat-7 ETM spectral bands. Paper presented at the conference of the ISPRS, Istanbul, Turkey.

Platt, R. V., and A. F. H. Goetz. 2004. A comparison of AVIRIS and synthetic Landsat data for land use classification at the urban fringe. Photogrammetric Engineering and Remote Sensing 70:813–19.


Pontius, R. G., Jr. 2000. Quantification error versus location error in comparison of categorical maps. Photogrammetric Engineering and Remote Sensing 66:1011–16.

Quegan, S., A. Rye, A. Hendry, J. Skingley, and C. Oddy. 1988. Automatic interpretation strategies for synthetic aperture radar images. Philosophical Transactions of the Royal Society of London, Series A: Mathematical and Physical Sciences 324 (1579): 409–20.

Richards, J. A. 1999. Remote sensing digital image analysis. New York: Springer-Verlag.

Vogelmann, J. E., S. M. Howard, L. Yang, C. R. Larson, B. K. Wylie, and J. N. Van Driel. 2001. Completion of the 1990s National Land Cover Data Set for the conterminous United States. Photogrammetric Engineering and Remote Sensing 67:650–62.

Walter, V. 2004. Object-based classification of remote sensing data for change detection. ISPRS Journal of Photogrammetry and Remote Sensing 58:225–38.

Whiteside, T., and W. Ahmad. 2005. A comparison of object-oriented and pixel-based classification methods for mapping land cover in northern Australia. In Proceedings of SSC2005 Spatial intelligence, innovation and praxis: The national biennial conference of the Spatial Sciences Institute. Melbourne, Australia: Spatial Sciences Institute.

Willhauck, G. 2000. Comparison of object-oriented classification techniques and standard image analysis for the use of change detection between SPOT multispectral satellite images and aerial photos. ISPRS XXXIII, Amsterdam.

Woodcock, C. E., and A. H. Strahler. 1987. The factor of scale in remote sensing. Remote Sensing of Environment 21:311–32.

RUTHERFORD V. PLATT is an Assistant Professor in the Department of Environmental Studies at Gettysburg College, Gettysburg, PA 17325. E-mail: [email protected]. His research interests include spatial models of human–environment interactions and emerging methods in remote sensing.

LAUREN RAPOZA is a graduate of Gettysburg College and is currently an outreach specialist at the Maryland Science Center. E-mail: [email protected].