

An improved algorithm for generating global window brightness temperatures from multiple satellite infrared imagery

Article

Published Version

Hodges, K. I., Chappell, D. W., Robinson, G. J. and Yang, G. (2000) An improved algorithm for generating global window brightness temperatures from multiple satellite infrared imagery. Journal of Atmospheric and Oceanic Technology, 17 (10). pp. 1296-1312. ISSN 1520-0426 doi: https://doi.org/10.1175/1520-0426(2000)017<1296:AIAFGG>2.0.CO;2 Available at http://centaur.reading.ac.uk/169/

It is advisable to refer to the publisher’s version if you intend to cite from the work. See Guidance on citing.

To link to this article DOI: http://dx.doi.org/10.1175/1520-0426(2000)017<1296:AIAFGG>2.0.CO;2

Publisher: American Meteorological Society


All outputs in CentAUR are protected by Intellectual Property Rights law, including copyright law. Copyright and IPR is retained by the creators or other copyright holders. Terms and conditions for use of this material are defined in the End User Agreement.

www.reading.ac.uk/centaur

CentAUR

Central Archive at the University of Reading

Reading’s research outputs online


© 2000 American Meteorological Society

An Improved Algorithm for Generating Global Window Brightness Temperatures from Multiple Satellite Infrared Imagery

K. I. HODGES, D. W. CHAPPELL, AND G. J. ROBINSON

Environmental Systems Science Center, University of Reading, Reading, United Kingdom

G. YANG

Centre for Global Atmospheric Modeling, University of Reading, Reading, United Kingdom

(Manuscript received 7 June 1999, in final form 7 January 2000)

ABSTRACT

An improved algorithm for the generation of gridded window brightness temperatures is presented. The primary data source is the International Satellite Cloud Climatology Project, level B3 data, covering the period from July 1983 to the present. The algorithm takes window brightness temperatures from multiple satellites, both geostationary and polar orbiting, which have already been navigated and normalized radiometrically to the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer, and generates 3-hourly global images on a 0.5° by 0.5° latitude–longitude grid. The gridding uses a hierarchical scheme based on spherical kernel estimators. As part of the gridding procedure, the geostationary data are corrected for limb effects using a simple empirical correction to the radiances, from which the corrected temperatures are computed. This is in addition to the application of satellite zenith angle weighting to downweight limb pixels in preference to nearer-nadir pixels. The polar orbiter data are windowed on the target time with temporal weighting to account for the noncontemporaneous nature of the data. Large regions of missing data are interpolated from adjacent processed images using a form of motion compensated interpolation based on the estimation of motion vectors using a hierarchical block matching scheme. Examples are shown of the various stages in the process. Also shown are examples of the usefulness of this type of data in GCM validation.

1. Introduction

The aim of this paper is to report a new algorithm for the generation of global window brightness temperatures (WBTs) from multiple infrared (IR) satellite imagery. This form of data is useful both for the validation of general circulation models (GCMs) and for performing quantitative climatological studies of convection over wide areas, for example the whole of the Tropics. Thus, this type of data can be very useful for exploring the properties, in terms of the distribution and variability, of organized convective systems from the mesoscale to the synoptic scale over seasonal to decadal periods. This is provided that a sufficiently long time series of this type of data can be generated at suitable spatial and temporal resolution. Of course we can perform regional studies using individual satellites, but generating global data at current forecast GCM resolutions allows comparisons to be made with the variability

Corresponding author address: Dr. K. I. Hodges, ESSC, Harry Pitt Building, 3 Earley Gate, Whiteknights, University of Reading, P.O. Box 238, Reading RG6 6AL, United Kingdom. E-mail: [email protected]

predicted by GCMs, as well as enabling climatological studies to be performed over wide areas.

Recent developments in operational forecasting have aimed to directly assimilate satellite radiance measurements from atmospheric sounders. This arguably might form the basis of directly comparing radiances from models and satellites via a forward model, taking into account viewing angle dependencies; this would have to be done on a per satellite basis. For example, the current European Centre for Medium-Range Weather Forecasts (ECMWF) code RTTOV (Saunders et al. 1999) is only configured to use the National Oceanic and Atmospheric Administration (NOAA) polar orbiting TIROS Operational Vertical Sounder instrument. However, such comparisons may depend on the sophistication of the underlying radiative transfer model used to perform the forward modeling and the interpolation required in space and time to the satellite observation position. For example, the RTTOV code uses a two-stream model with a plane parallel atmosphere with scattering effects neglected. While this may be a reasonable approximation for clear-sky or stratiform-cloud regions, what the satellite actually measures can in practice be the result of more complex situations in which scattering is important and for which a full 3D radiative transfer code is more appropriate. While this approach is feasible for validating models by comparison of absolute radiance values, a computationally simpler means of validating models can be achieved by comparing the predicted variability with the observed variability on a range of spatial and temporal scales by using the outgoing longwave radiation as a surrogate for WBT. Since most GCMs output this variable routinely there is no need to invoke the additional computational overhead of computing and comparing radiances, which are dependent on satellite and possibly the form of the forward model. Thus, meaningful comparisons can be made between models and observations on this basis, and this provides the motivation for the generation of global WBT.

Composite satellite imagery is routinely generated by weather forecast groups at a number of operational centers to aid in weather forecasting and, for example, to improve the model forecasts by "bogusing" systems in the forecast initialization process. The process of generating composites generally takes the form of "stitching" multiple IR satellite images together. The composite data are rarely referenced to a grid, which is useful for the types of studies previously mentioned, and even when they are, the methodology is rather crude, often just averaging pixel WBT over grid boxes, whose size may vary with latitude (latitude-dependent grid boxes, equal-area grids, are often used to take account of the areal distortion when projecting spherical data onto a cylindrical projection). Artifacts of the process are often apparent in these composite images, usually appearing as discontinuities in the image intensities. These discontinuities can be caused by poor or nonexistent intercalibration between different satellite series and/or limb-darkening effects, as well as by cloud shearing effects caused by cloud motion when the images are not from the same synoptic times.

An alternative to generating the composites, and then gridding the WBT, is to take the calibrated data from the required individual satellites and interpolate/smooth the data directly onto the grid. This has the benefit that data from the individual satellites can be corrected for calibration, navigation, and limb effects before the composite/gridding procedure is performed. Additionally, these corrections produce a better set of gridded data for determining the motion of mesoscale/synoptic systems in tracking studies.

Of course, to generate global WBT at a reasonable temporal sampling rate, that is, 1–3 h, there has to be sufficient satellite coverage over such a time period. This is obviously not a problem currently, since there is a profusion of operational weather satellites to which operational centers have access, and which provide nearly global coverage every 30 min. This is an enormous amount of data at full weather satellite IR resolutions (~4 km for geostationary and polar orbiting satellites). However, these data are not generally or systematically available outside the operational centers. It is also at a significantly higher resolution than current GCM resolutions. In order to generate a historical global WBT dataset for both climatological studies and GCM validation, similar data coverage is required over an extended period of time. Fortunately, the International Satellite Cloud Climatology Project (ISCCP) (Rossow et al. 1997) has been archiving and processing the available weather satellite imagery using data available from July 1983 to the present (excluding the Indian INSAT satellite, apart from a brief period from April 1988 to March 1989, with the result that there is poor synoptic data coverage in the Indian Ocean region). To reduce the amount of data for processing and archiving, the raw imagery is subsampled to a nominal spatial resolution of 30 km and temporal resolution of 3 h to produce the level-B3 imagery. These data are processed into cloud products by ISCCP, typically on an equal-area cylindrical grid of resolution ~2.5° at 3-hourly intervals. While these data are useful for large scale studies, they are of limited use for observing and studying smaller spatial scales, for example, mesoscale phenomena. The trend is for climate models to be run at ever higher spatial resolutions, depending on the simulation period. For example, the ECMWF reanalyses are available at resolutions ~1.0°, and free running models are not far behind. It therefore makes sense to provide data at a resolution that will be useful for validation for the foreseeable future. The fact that the WBT data are not on exactly the same grid as the model data is not a problem if we are concerned with studies of variability, as opposed to comparison of absolute values.

A first attempt at generating WBT data from the ISCCP B3 archive was performed by Tanaka et al. (1991), and was used for studies of the diurnal cycle throughout the Tropics by Salby et al. (1991), over a limited period of one year from July 1983 to June 1984. The usefulness of this type of data for studying tropical mesoscale activity was illustrated by Hodges (1998), using Salby's data. In order to conduct such analyses without having to worry about missing data, the production of a dataset with few data voids is important. However, some missing data are inevitable, because of failed satellites or corrupted input data, so the best that can be done is to minimize the occurrence of data voids. The grid chosen by Tanaka et al. was 512 × 512 (equivalent to 0.35° latitude × 0.7° longitude), driven by their desire to perform efficient spatial fast Fourier transform analyses. Their algorithm essentially averaged pixel values for the contributing satellites over grid boxes, with the pixel temperature values weighted by the cosine of the satellite zenith angle; that is, they downweight the limb pixels in comparison with near-nadir pixels. This anticipates the problem of limb darkening and changes in pixel footprint size but does not actually correct the IR values. The data taken for this spatial sampling were constrained to be within 1.5 h of the target synoptic time.


TABLE 1. Summary information of contributing satellites. Note that geostationary satellites were sometimes moved to cover for the failure of other satellites; e.g., METEOSAT-3 covered for the failure of GOES-6.

Satellite   Agency   Series         IR band          Subsatellite longitude
METEOSAT    ESA      3, 4, 5        10.5–12.5 µm     0°
GMS         JMA      1, 2, 3, 4     10.5–12.5 µm     140°E
                     5              10.3–11.4 µm
GOES-E      NOAA     5, 7           10.5–12.5 µm     75°W
                     8              10.2–11.2 µm
GOES-W      NOAA     6, 7           10.5–12.5 µm     135°W
                     9              10.2–11.2 µm
NOAA-AM     NOAA     8, 10, 12      10.3–11.3 µm     —
NOAA-PM     NOAA     7, 9, 11, 14   10.3–11.3 µm     —

Small voids in the grid were filled by spatially averaging over surrounding nonvoid grid values at two levels; larger voids in the grid were filled by simple linear temporal interpolation using gridded data from adjacent synoptic times (equivalent to 6-h separation). This is a fairly straightforward and simple algorithm for generating the global WBT data and will be used as a reference for the improved algorithm described here.

In this paper an improved algorithm for generating global WBT from the ISCCP B3 archive is described. The algorithm development forms part of the Cloud Archive User Service (CLAUS) project to generate a long-term historical archive of global WBT at resolutions suitable for both climate studies and GCM validation, and to use this data to perform these types of studies. The algorithm essentially follows the approach used by Tanaka et al. (1991), but differs in the techniques that have been used for the spatial gridding, limb-darkening correction, and temporal interpolation. The grid chosen for the study has a spacing of 0.5° latitude × 0.5° longitude (360 × 720) on a Plate Carree latitude–longitude projection. The choice of this grid is partially driven by the resolution of the available data and also by the requirements of GCM validation. However, the techniques can be applied just as well to higher-resolution data if required, and we hope to illustrate this in the future with full-resolution imagery. Also, the algorithm can be applied to other data, for example, the water vapor channel, although this channel was not available on all the satellite series until recently. The final product data are supplemented by quality data, which will also be described.

The paper continues with a description of the ISCCP B3 data, followed by a step-by-step description of the algorithm methodology and examples of the applicability of the data to GCM validation.

2. Description of the ISCCP B3 data

The ISCCP B3 data are generated by ISCCP from the raw full-resolution imagery by a process of subsampling, calibration, and navigation (Rossow et al. 1997). The B3 data are stored as 8-bit values (together with the relevant calibration tables). This can limit some applications due to a lack of accuracy, for example, identifying changes associated with the sea surface or low level stratocumulus. The contributing satellites consist of the geostationary satellites METEOSAT, GOES-E, GOES-W, GMS, and for a limited period, INSAT (this is not used by CLAUS because of the limited availability of the data, although this may be used in subsequent reprocessing), while the polar orbiting satellites are NOAA-AM and NOAA-PM. The full-resolution imagery is sampled in space and time for the geostationary satellites to produce imagery every 3 h at a nominal pixel spacing of 30 km. There is the issue of bias of the raw data, due to how each satellite instrument samples the Earth, as well as bias caused by the sampling scheme used to produce the B3 data, which still needs to be investigated. The effect of the spatial sampling, in contrast to spatial averaging, to produce the B3 data also still needs to be explored. For the polar orbiting satellites spatial sampling is performed to produce the same nominal pixel spacing as for the geostationary satellites. Each image is calibrated according to the available spectral channels; the IR channel is used in the CLAUS project, and normalized to the reference NOAA polar orbiter satellites for the channels common to the geostationary satellites (see Table 1 for the relevant channels). Although each image is navigated and corrected for navigation errors, residual problems with satellite intercalibration, image navigation, and bad scan lines are occasionally observed in the B3 data.

Each B3 image has available for each pixel the following: its geolocated (latitude–longitude) position; the satellite zenith angle (the angle between the satellite view direction of a point on the earth and the nadir view direction of the same point); a physical value for each spectral channel (an equivalent blackbody WBT for the IR channels); and flags for scan line quality, on/off planet pixels, and a land/water mask. Instead of using the calibration tables that come with each B3 image based on normalization to the Advanced Very High Resolution Radiometer (AVHRR) on the NOAA-7 polar orbiting satellite, the newer calibration tables based on normalization to the AVHRR on NOAA-9 are used (Brest et al. 1997). These new tables, available as version 1 of the ISCCP BT dataset, have been corrected by ISCCP for artifacts found in the older tables. In addition to their use for converting the IR channels of the various satellites to brightness temperatures, they are used to generate polynomial fits between radiance and temperature for use in the limb-darkening correction to be described later.

The ISCCP B3 data consist of all available spectral channels. For the CLAUS project the IR channels listed in Table 1 are extracted from the B3 data and stored in a simple packed format for use by the CLAUS program. The data extraction process is supplemented by quality controls to identify corrupt scan lines and bad pixel values.


TABLE 2. Statistics for maximum scan line length for GOES-6 for Jan 1984 as an example of detecting squashed images.

Scan length   No. of images
392           1
402           226
403           9
453           12

Bad scan lines are found using two methods: a run length search of scan lines to identify regions that have the identical pixel value and a check of the mean and variance along the scan line, so that any scan line that has a high or low value for the mean and a low variance is also excluded. This approach appears to remove nearly all the bad scan lines that are erroneously flagged as good by ISCCP. Occasionally, images are so badly corrupted that they are wholly unusable: these are identified using the percentage of bad scan lines value obtained from the B3 data. Another form of image corruption appears only for the geostationary satellites, in particular the GOES series. This manifests itself as an apparent bad image navigation. Examination of the images reveals that for some images the on-planet disk is apparently squashed in the vertical direction. Other types of bad navigation appear as vertical translations of a nonsquashed disk. When these data are gridded, this results in large vertical translations in what should be fixed features, such as continents. The squashed images are identified by examining the value for the maximum on-planet scan line length; this is larger than for good images, so that images are excluded if they exceed a scan line length threshold. For example, the GOES-6 satellite for January 1984 had the maximum scan line statistics shown in Table 2. The single image with length 392 is a partial image so should be accepted; the majority of images have a length of 402, but the images with lengths of 453 are bad images and should be excluded. Thus, an acceptance threshold of 20 from the value of 402 accepts the images with lengths 392, 402, and 403 and rejects those with value 453. The image translations are harder to detect automatically since they require the identification of surface features and the comparison of their ISCCP navigated position with their actual position. A manual quality control is used to identify them, but this is operator dependent. Ideally, these otherwise good images should be renavigated, which may possibly be done in subsequent algorithm improvements. The pixel on/off-planet flag is used to exclude all off-planet pixels. Thus, the resulting reformatted data should contain only data from good images/scan lines and only for on-planet pixels for the IR channel. However, other types of bad data occasionally slip through, in the form of bad images/scan lines that are not flagged and are not easily automatically detected as well as badly navigated images; these then require manual intervention.
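As a rough illustration of the two scan-line checks and the scan-length screening just described, the following sketch assumes the B3 image has already been unpacked into arrays of pixel values; the numerical thresholds (maximum run length, variance floor, mean bounds, and the tolerance of 20 on the nominal scan length) are illustrative assumptions rather than the operational CLAUS settings.

```python
import numpy as np

def bad_scan_line(line, max_run=50, var_floor=1.0, mean_bounds=(30, 250)):
    """Flag a scan line as bad: (i) a run-length search for long stretches of
    identical pixel values, and (ii) a suspiciously high or low mean combined
    with a low variance.  Thresholds are illustrative assumptions."""
    line = np.asarray(line, dtype=float)
    # (i) length of the longest run of identical consecutive values
    change = np.flatnonzero(np.diff(line) != 0)
    runs = np.diff(np.concatenate(([0], change + 1, [line.size])))
    if runs.max() >= max_run:
        return True
    # (ii) extreme mean together with little variance
    if line.var() < var_floor and not (mean_bounds[0] < line.mean() < mean_bounds[1]):
        return True
    return False

def squashed_image(max_scan_length, nominal_length=402, tolerance=20):
    """Reject a geostationary image whose maximum on-planet scan-line length
    exceeds the nominal value by more than the tolerance (cf. Table 2:
    lengths 392, 402, and 403 accepted; 453 rejected)."""
    return max_scan_length > nominal_length + tolerance
```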

3. Algorithm overview

The algorithm proceeds by first initializing a grid and a calibration table to convert temperatures to 8-bit integers for archiving in the chosen image format. The grid is chosen by specifying the number of grid points required in longitude and latitude for the Plate Carree projection (this is the only projection used at present, although any grid can be used). This was chosen as the most convenient for comparing with global GCM data. The method of performing the gridding is independent of the grid chosen, since a spherical nonparametric regression technique is used (Hodges 1996). The calibration table is used to scale the gridded WBT values so that the data can be stored in image format [the Portable Gray Map (PGM) format is used]. Valid values are scaled to the range (1–255), with the value 0 being used to indicate missing data values. The PGM format is also used for the quality data, with the quality values being packed into bit fields for each quality byte. Each grid point thus has a byte in the WBT image and a byte in each of the quality images; there are two quality images per WBT image.
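A minimal sketch of scaling valid brightness temperatures into the 1–255 range (with 0 reserved for missing data) and writing the result as a binary PGM image is given below; the temperature range used for the scaling is an assumed one, since the CLAUS calibration table itself is not reproduced here.

```python
import numpy as np

def temperature_to_byte(temp_k, t_min=170.0, t_max=340.0):
    """Scale gridded WBT values to the 1-255 range used in the archive images,
    reserving 0 for missing data (NaN in this sketch).  t_min/t_max are
    assumed bounds, not the actual CLAUS calibration table."""
    temp_k = np.asarray(temp_k, dtype=float)
    scaled = 1.0 + np.round((temp_k - t_min) / (t_max - t_min) * 254.0)
    scaled = np.clip(scaled, 1, 255)
    scaled[np.isnan(temp_k)] = 0          # 0 flags a missing grid value
    return scaled.astype(np.uint8)

def write_pgm(path, image):
    """Write an 8-bit image as a binary (P5) portable gray map."""
    height, width = image.shape
    with open(path, "wb") as f:
        f.write(f"P5\n{width} {height}\n255\n".encode("ascii"))
        f.write(image.tobytes())
```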

The algorithm continues by looping over the chosen time range, performing the spatial regression at each synoptic time at three smoothing levels, incorporating corrections for limb darkening for the geostationary satellites and temporal weighting for the polar orbiting satellites to account for the asynoptic nature of the polar orbiter data. Three smoothing levels were chosen as a compromise between processing speed and the requirement of producing as complete an image as possible from the spatial regression. For each synoptic time the required B3-reformatted data within 1.5 h of the target time are identified from the satellite header summaries obtained at the data extraction/reformat stage. This usually results in the identification of a single image for each geostationary satellite, unless more than one series is operational at the same time, for example, METEOSAT-3 and METEOSAT-4, with METEOSAT-3 being used as cover for the failed GOES-6. For the polar orbiting satellites there may be several images that overlap the temporal window.

The spatial regression is performed by looping over the data for each contributing satellite with only the data that fall within the ±1.5-h time window being used. This will include all of the data in a geostationary satellite image that coincides with the target time, but may only be a partial polar orbiter image, so only the data for scan line times that fall within the time window are used. For each satellite the algorithm loops over each of the selected B3 pixels, checking to ensure that it has a valid value and that it falls within a satellite zenith angle mask. The mask is used for the geostationary satellites to exclude the very extreme limb pixels which can be affected by variations in satellite navigation. Each B3 pixel is also corrected for limb-darkening effects if the current satellite is a geostationary satellite. No limb-darkening correction is currently performed for the polar orbiter data as the swath is relatively narrow compared with the geostationary satellite field of view. Also no correction is made for changes in pixel footprint size with the change in satellite zenith angle. Following the application of these checks and corrections to the B3 pixel the contribution to the grid estimates is computed for each smoothing level and the estimates for the regression and data density are updated. A crude approach to computing the contributions to the grid for each pixel would be to check every grid point for each pixel, a very time-consuming process. However, since the weighting kernels we are using are spatially local in nature, a much more efficient scheme has been developed which limits the amount of checking required to just the grid points local to the pixel position on the sphere.

Following the completion of the regression estimation, there are estimates available at three smoothing levels on the chosen grid. The estimate for the smoothing level which has the highest data density estimate is the one taken as the final gridpoint value. Ideally, a fully adaptive regression estimation should be used which varies the smoothing with the local data density. However, for this amount of data, this approach would be computationally prohibitive (a pilot density estimate is required; see Hodges 1996) as well as invalidating the efficient search method mentioned. Finally the gridded data are scaled to image format and written to file. The quality data are also written to file.

The final stage is to take the images obtained from the spatial regression and fill the remaining data voids using temporal interpolation between adjacent images. This is performed using a motion compensated interpolation method based on a hierarchical determination of the motion vectors, as opposed to the simple linear interpolation method used by Tanaka et al. (1991). However, both methods have been implemented for comparison purposes. The motion compensated interpolation is found to preserve the sharpness of features much better than the simple linear interpolation, although on average the differences are not very large between the two methods at the resolution being used. This is in part a consequence of the resolutions in space and time that the ISCCP B3 data provide. The final gridded data are written to file along with the updated quality information.

An important though difficult question to answer is what accuracy we require for validation studies. As the ISCCP B3 data are only 8 bit, this is already a limitation on the available accuracy and appears to preclude the study of the ocean surface or low-level stratocumulus cloud regions in studies so far conducted using data produced by the algorithm described here. However, examination of diurnal and semidiurnal activity and Hayashi spectra do not indicate any problems with the data produced by the algorithm, at least in the Tropics for cloud systems and land, so we conclude the algorithm is performing adequately for the required purpose of model validation at least. The usefulness of the data produced by the algorithm in other studies will only become apparent with time and use in a range of applications.

4. Spatial regression

The spatial regression is required to combine the ISCCP B3 data distributed along scan lines for multiple satellites onto the chosen grid. This is done using the reformatted data and performing a nonparametric regression onto the chosen grid. Since the ISCCP B3 data can be considered as data distributed on a unit sphere, that is, each ISCCP B3 pixel has a latitude–longitude coordinate, the method for gridding the B3 data is to use a spherical nonparametric regression estimator based on local spherical kernels (Hodges 1996). This requires the grid to be converted to three-dimensional Cartesian coordinates representing unit vectors pointing to the grid points on the unit sphere. The kernels essentially control the region of influence of each data point (B3 pixel position) and their contribution to an estimation point (grid point) via a function of the angular separation between a data point and a grid point. This approach is independent of the grid chosen, that is, any distribution of grid points can be selected and a regression estimate performed. Also, there is no bias in the estimate (although there may be in the source data) as the estimation is performed in spherical geometry, which obviates the need to introduce some form of correction, that is, latitudinal-dependent area weighting. It is not apparent that Tanaka et al. (1991) took this into account in their algorithm, since they used fixed angular grid boxes. The ISCCP C1 and C2 data (280-km grid) take this into account by using an equal area grid for their cloud products, although this does not take account of the change in shape of the sampling region, for example, a rectangular latitude–longitude box at the equator is not a rectangular box at higher latitudes on the earth.

The spatial regression is performed by looping over all the identified "good" pixels for each satellite. For each pixel the contributions to the regression and data density estimates are computed for three kernel widths controlled by the spherical kernel parameters. This is performed for those grid points local to the position of the pixel (this will be described shortly). These contributions are then used to update the regression and data density estimates for the three smoothing regimes. Before the updates are computed the pixel value is first compared against a temperature acceptance mask to exclude unlikely temperature values (the mask used by Tanaka et al. 1991 has been used), although it is very rare for a pixel value to be rejected, and then corrected for limb-darkening effects. The limb-darkening effects are caused by a mixture of radiative and geometric effects due to the view angle. For example, for a broken cloud field viewed at a large zenith angle, it is more likely that cold cloud tops are viewed than at nadir where there is a greater possibility of viewing the surface.


FIG. 1. CLAUS satellite zenith angle weighting function compared with the cos(θ) weighting function used by Tanaka et al. (1991).

TABLE 3. Kernel parameters for the three smoothing levels and their equivalent ABRs.

C           1.000038078   1.000152328   1.000342792
ABR (deg)   0.5           1.0           1.5

Also, for a large viewing angle the radiance at the top of the atmosphere will be decreased due to the longer pathlength through the atmosphere. This correction is required, in addition to the zenith angle weighting, to correct for artifacts which appear when polar orbiting data are adjacent to geostationary limb data. The zenith angle weighting has little effect unless the pixels from different satellites are mixed. This correction is also required for the temporal interpolation, where a missing geostationary satellite results in a data hole with apparent bright edges due to the limb effects adjacent to the interpolated hole. The correction for these effects will be discussed later.

The regression estimate is computed at grid point i as

$$
T_i = \frac{\sum_{j=1}^{N} T_j\, w_j^{(t)}\, w_j^{(z)}\, k_{i,j}^{(s)}}
           {\sum_{j=1}^{N} w_j^{(t)}\, w_j^{(z)}\, k_{i,j}^{(s)}},
\qquad (1)
$$

where T_i is the temperature estimate at the ith grid point, T_j is the calibrated temperature at the jth pixel with N the number of pixels, and w_j^(t) is the polar orbiter temporal weighting defined as

$$
w_j^{(t)} =
\begin{cases}
1, & \text{if geostationary} \\[4pt]
1 - \left|\dfrac{t - t_0}{h}\right|, & \text{if polar orbiter},\ |t - t_0| \le h,\ h \le 1.5\ \text{h}.
\end{cases}
\qquad (2)
$$

This progressively downweights polar orbiter pixels that are temporally farther away from the target synoptic time t_0 and thus provides tapering to the polar orbiter data.

The satellite zenith angle weighting w_j^(z) is given by the function

$$
w_j^{(z)} =
\begin{cases}
1 + 0.43429\,\ln[\cos(\theta_j)], & \text{if } \cos(\theta_j) \ge 0.1 \\
0, & \text{otherwise},
\end{cases}
\qquad (3)
$$

where θ_j is the satellite zenith angle for the jth pixel. This weighting function progressively downweights off-nadir pixels in favor of near-nadir pixels. Tanaka et al. (1991) also used a zenith angle weighting based on only cos(θ). The function we have used does not fall off as severely as cos(θ) for small θ but falls off more quickly for extreme limb values of θ. This is shown in Fig. 1 compared with cos(θ). The form of the function in expression (3) is chosen to reflect the functionality often used for limb darkening correction factors (see Minnis et al. 1991 and later discussion on limb-darkening correction). The cutoff value of cos(θ) = 0.1 is used to exclude the extreme limb pixels, which are the most affected by the extreme viewing angle and that might be viewing a mixture of on-planet and off-planet regions. Larger values can be specified on a per-geostationary-satellite basis to exclude bad pixels associated with navigation problems as discussed earlier (usually for GOES satellites in the ISCCP B3 data).
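The two pixel weights of Eqs. (2) and (3) are simple enough to state directly in code; the sketch below is a direct transcription and assumes the time offset is supplied in hours and the zenith angle in radians.

```python
import numpy as np

def temporal_weight(t_offset_hours, geostationary, half_window=1.5):
    """Eq. (2): geostationary pixels receive unit weight; polar orbiter pixels
    are tapered linearly to zero at the edge of the +/-1.5-h window."""
    if geostationary:
        return 1.0
    if abs(t_offset_hours) > half_window:
        return 0.0
    return 1.0 - abs(t_offset_hours) / half_window

def zenith_weight(zenith_angle_rad):
    """Eq. (3): 1 + 0.43429 ln[cos(theta)] for cos(theta) >= 0.1, 0 otherwise.
    0.43429 is approximately 1/ln(10), so the weight reaches zero exactly at
    the cos(theta) = 0.1 cutoff (cf. Fig. 1)."""
    mu = np.cos(zenith_angle_rad)
    if mu < 0.1:
        return 0.0
    return 1.0 + 0.43429 * np.log(mu)
```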

The spatial weighting function k_{i,j}^(s) is given by the linear spherical kernel function (Hodges 1996),

$$
k_{i,j}^{(s)} =
\begin{cases}
\dfrac{C}{2\pi(C - 1)}\,(C\,\mathbf{X}_i \cdot \mathbf{X}_j - 1), & C\,\mathbf{X}_i \cdot \mathbf{X}_j - 1 \ge 0 \\[4pt]
0, & \text{otherwise},
\end{cases}
\qquad (4)
$$

where C is the kernel smoothing parameter, which controls the bandwidth of the kernel; and X_i and X_j are the unit vectors representing the grid point and pixel position, respectively, on the sphere. Note that this function is also a probability density function since it integrates to unity on the sphere. The three smoothing levels are represented by a choice of three values for the parameter C; these are given in Table 3.


FIG. 2. Schematic of the method for determining the grid points local to a pixel position for fixed-width kernels.

The equivalent arc band radius (ABR) listed in Table 3 is obtained from the constraint C X_i · X_j − 1 ≥ 0.

Note these are global parameters; ideally we would like to use a fully adaptive scheme, in which case the parameter C becomes a local parameter dependent on the data density (Hodges 1996), but this is very expensive.

The spatial data density is given by

$$
f_i = \frac{1}{N} \sum_{j=1}^{N} k_{i,j}^{(s)},
\qquad (5)
$$

and is used to determine which estimate of the three smoothing levels to use.
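A sketch of the spherical kernel of Eq. (4), the arc band radius implied by its support, and the per-pixel update of the sums behind Eqs. (1) and (5) is given below for a single smoothing level; the data structures (plain per-grid-point arrays and a mapping from grid index to kernel value) are simplifying assumptions rather than the CLAUS implementation.

```python
import numpy as np

def spherical_kernel(x_grid, x_pixel, C):
    """Eq. (4): linear spherical kernel of Hodges (1996) for unit vectors
    x_grid and x_pixel; C controls the bandwidth (Table 3)."""
    s = C * np.dot(x_grid, x_pixel) - 1.0
    if s <= 0.0:
        return 0.0
    return C * s / (2.0 * np.pi * (C - 1.0))

def arc_band_radius_deg(C):
    """Support radius implied by C X_i . X_j - 1 >= 0; reproduces the ABR
    column of Table 3 (e.g. C = 1.000038078 gives approximately 0.5 deg)."""
    return np.degrees(np.arccos(1.0 / C))

def accumulate_pixel(pixel_temp, w_t, w_z, kernel_values, numer, denom, density):
    """Update the running sums of Eqs. (1) and (5) for one B3 pixel and one
    smoothing level.  kernel_values maps grid-point index -> kernel value."""
    for i, k in kernel_values.items():
        w = w_t * w_z * k
        numer[i] += w * pixel_temp     # numerator of Eq. (1)
        denom[i] += w                  # denominator of Eq. (1)
        density[i] += k                # unnormalized sum in Eq. (5)
```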

As mentioned earlier, rather than test each grid point for a contribution from the current pixel, only those grid points local to the pixel position are tested for a contribution. This is possible because constant width kernels are being used. The method for determining the grid points local to a pixel position is illustrated in Fig. 2.

For each latitude circle in the grid a longitude point is chosen, for example, point (b) in Fig. 2. The number of longitude nodes that can possibly fall within a fixed kernel centered on this point is computed by translating the kernel to point (c) in Fig. 2, and counting the longitude nodes along the latitude line XY in Fig. 2. This is done by computing the value of C X_1 · X_2 − 1: positive values indicate that the point X_2 is within the kernel support for a kernel centered at X_1. This ensures that all required grid points are captured for any pixel position. This process is performed once for all the lines of latitude in the grid for a hemisphere (the process is symmetric about the equator). Clearly, as we move to higher latitudes the number of longitude nodes that fall within the kernel support increases, so that near the poles all the longitude nodes are likely to fall within the kernel support. The number of latitude nodes within the kernel extent is constant and only needs computing once. This process is repeated for each kernel width used, that is, three in this case. The way these subgrid extents are used is to find the nearest grid point to the current pixel position; the subgrid extents then provide the local grid points that need testing for possible contributions to the WBT estimate and density estimate at each of these grid points.
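The subgrid-extent idea can be sketched as follows: for a kernel centered on a given latitude, count how many longitude steps along a neighboring latitude row still satisfy the support condition C X_1 · X_2 − 1 ≥ 0. The grid spacing and kernel parameter in the usage comment are the CLAUS values; everything else (function names, looping strategy) is an illustrative assumption.

```python
import numpy as np

def unit_vector(lat_deg, lon_deg):
    """Cartesian unit vector for a latitude-longitude position on the sphere."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def longitude_extent(lat_centre_deg, lat_row_deg, dlon_deg, C):
    """Number of longitude steps either side of a kernel centred at
    lat_centre_deg that still fall inside the kernel support along the
    latitude row lat_row_deg (the count grows towards the poles)."""
    centre = unit_vector(lat_centre_deg, 0.0)
    n = 0
    while (n + 1) * dlon_deg <= 180.0:
        candidate = unit_vector(lat_row_deg, (n + 1) * dlon_deg)
        if C * np.dot(centre, candidate) - 1.0 < 0.0:
            break
        n += 1
    return n

# Hypothetical usage for the 0.5-degree CLAUS grid and the widest kernel:
# extent = longitude_extent(60.0, 60.5, 0.5, 1.000342792)
```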

An example of the result for just the primary regression, that is, using the smallest regression kernel, is shown in Fig. 3, together with the primary data density, the result of applying the three levels of spatial regression (all scaled to 8-bit images) and its temperature calibrated form.

Notice in Fig. 3 that the result for the primary regression is a much more complete image than was obtained by Tanaka et al. (1991) for their first spatial stage. The final image, obtained using all three regression levels, is shown in Fig. 3c. This still has some regions with no data; these are subsequently filled using the temporal interpolation part of the algorithm described in section 6. Figure 3b shows the spatial probability density function (i.e., ignoring the temporal weighting of the polar orbiter data) scaled to an 8-bit image. This shows the complex way the satellite views overlap with each other and the patterns formed by the data sampling using spherical kernel estimators. Figure 3d shows the same data but calibrated to temperature.

5. Limb-darkening correction

The reasons for including a limb-darkening correction have already been discussed. Here the form of this correction is described. The approach of Tanaka et al. (1991) to the problem of limb effects is to weight the pixels by the cosine of the satellite zenith angle to reduce the contribution of the limb pixels relative to pixels closer to nadir. However, this only works when satellite views overlap: it has no effect when there is no overlap. The effect of limb darkening, when combining data from different satellite platforms, is often apparent when there is a polar orbiter swath adjacent to a geostationary limb. The geostationary limb shows much lower temperatures (brighter image values) than the adjacent polar orbiter, that is, there is a discontinuity in temperature. To correct for these effects for data from geostationary satellites there have been several previous studies, resulting in empirical limb correction functions for application to the radiances which correct the radiances to nadir view. However, these can never really fully take account of the geometric effects. The correction of the radiances to nadir is provided by the following expression:

$$
R(0) = \frac{R(\theta)}{\lambda(\theta)},
\qquad (6)
$$

where R(θ) is the radiance at a zenith angle θ, R(0) is the radiance at nadir, and λ(θ) is the limb correction function as a function of the satellite zenith angle. Minnis et al. (1991) derived an empirical function based on limited radiative transfer calculations (Minnis and Harrison 1984) in aerosol-free atmospheres for the spectral window 10.0–12.0 µm. The Minnis function is given by


FIG. 3. Examples at 1500 UTC 12 Jan 1984 of (a) applying the nonparametric regression at the primary level, scaled to 8-bit image format (light gray levels are cold, while dark gray levels are warm); (b) the probability data density, scaled to 8-bit image format (light gray levels are high densities, while dark gray levels are low densities); (c) the full regression, scaled to 8-bit image format; and (d) the full regression converted to temperature.


FIG. 4. Limb-darkening function estimated by the ratio of geostationary to polar orbiter standardized radiances. The lower curve represents the limb correction applied to the CLAUS data. The upper curve shows the limb-darkening function predicted by the radiative transfer calculations by Minnis et al. (1991).

$$
\lambda(\theta) =
\begin{cases}
1, & \text{if } \theta < 11^{\circ} \\
b + a\,\ln[\cos(\theta)], & \text{otherwise},
\end{cases}
\qquad (7)
$$

where a = 0.03247 and b = 1.000602. However, we have found that the response of this

function does not correct sufficiently for the limb-darkening effect. This can be attributed to the lack of any clouds in the radiative models from which the expression is derived, or the presence of aerosols that can result in more absorption. An alternative empirical means to determine the nature of the correction would be to compare the geostationary limb pixels with polar orbiter pixels suitably localized in time and space to that of the geostationary satellite. This will be discussed in more detail shortly.

A similar approach to determining the limb-darkening relationship was explored by Smith et al. (1994) using data from the Earth Radiation Budget Experiment (ERBE) scanning radiometer operating in along-track mode. This is a broad band instrument (5–50 µm) and allows the comparison to be made between limb views and nadir views of the same position on the earth with only a short time difference. The data were partitioned according to scene type, day/night and season, and the limb-darkening relationship derived for each of these scenarios. However, Smith et al. (1994) were interested in radiative fluxes, so their limb correction functions are normalized differently than in expression (6). However, for most scenes, irrespective of time of day or season, the relationship between limb and nadir data is remarkably similar, except for scenes with snow cover. This stable relationship between nadir and limb data is in part due to the size of the ERBE footprint, which is quite large and likely to contain some cloud, and also because the broad spectral band used will tend to smooth the differences due to cloud cover. Additionally, the averaging over a large number of observations will tend to smooth out any variation; the relationship may show more variation for a single pass. Smith et al. (1994) fit their results to a simple physical model, which resulted in a relationship dependent on special functions in the form of the exponential integral of order 4.

To explore the limb-darkening problem further, the gridded values of geostationary satellites were compared with those of the polar orbiters. The look angle of the polar orbiter imagery was restricted to θ < 36° to approximate the nadir view of scenes imaged on the limbs of the geostationary satellites. The polar orbiter scans were further restricted to times within half an hour from the geostationary image to reduce effects of temporal variability in clouds or land. To compare the derived limb-darkening functions with previous results, the gridded temperatures for both the polar orbiter and geostationary satellites were converted to a standardized radiance based on a common window profile. We chose a typical window profile for the GOES-E satellite based on the ISCCP calibration tables. We find that the results are not sensitive to the detailed form of this standardized window. For every grid location which has both geostationary and polar orbiter coverage (subject to the constraints listed above), the ratio R_gs/R_po of the geostationary to polar orbiter radiances is computed. The average value and standard deviation of these ratios for a given range dz of the cosine nadir view angle are then computed.
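The empirical limb-darkening estimate can be sketched as a simple binning of the radiance ratios by the cosine of the geostationary zenith angle; the matching of grid points seen by both satellites (and the restriction to near-nadir polar orbiter views within half an hour) is assumed to have been done already, and the bin layout is an assumption.

```python
import numpy as np

def limb_function_estimate(r_geo, r_polar, cos_zenith_geo, n_bins=20):
    """Bin the ratio R_gs/R_po of standardized radiances by the cosine of the
    geostationary zenith angle, returning bin centres, mean ratios, and their
    standard deviations (the quantities plotted in Fig. 4)."""
    r_geo, r_polar, mu = (np.asarray(a, dtype=float) for a in (r_geo, r_polar, cos_zenith_geo))
    ratio = r_geo / r_polar
    edges = np.linspace(0.1, 1.0, n_bins + 1)
    index = np.digitize(mu, edges) - 1
    centres, means, stds = [], [], []
    for b in range(n_bins):
        in_bin = ratio[index == b]
        if in_bin.size:
            centres.append(0.5 * (edges[b] + edges[b + 1]))
            means.append(in_bin.mean())
            stds.append(in_bin.std())
    return np.array(centres), np.array(means), np.array(stds)
```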

Figure 4 shows the limb-darkening profiles for three geostationary satellites over the period 1–16 January 1992. The limb functions for the GOES-7 and GMS-4 satellites are in good agreement with each other and are consistent with the broadband measurements of Smith et al. (1994). METEOSAT-4 appears to exhibit a greater amount of limb darkening for reasons which are not clear at present. One possibility that may account for these differences in limb behavior is that for these geostationary instruments there may be a danger of including the effects of CO2 absorption/emission at the longwave end of the transmission window. However, the spectral response functions of each of these satellites barely overlap the CO2 region, with the greatest overlap occurring for GMS-4 (according to the ISCCP documentation). What may be more relevant is the shape of the spectral response functions: this would probably require a radiative transfer study along the lines of the forward modeling described in the introduction to resolve this issue. Inspection of images of the spatial variation of R_gs/R_po reveals that much of the scatter associated with each data point in Fig. 4 arises from cloudy regions containing high spatial variability. Thus, the scatter may be caused by slight navigation errors, differing footprint sizes due to viewing geometry angle, projection effects of vertical cloud distribution, and/or temporal variability. The upper curve shows the radiative transfer predictions of Minnis et al. (1991) (a = 0.0325) discussed above. The empirical estimates of the limb function clearly have a faster fall-off with nadir angle than the Minnis function. The reason for this discrepancy, we suggest, is due to the neglect of aerosols and clouds in the theoretical calculation. The lower curve shows the limb correction [Eq. (7) with a = 0.09] adopted for the CLAUS dataset.


FIG. 5. Comparison of interpolated images of a Southern Hemisphere midlatitude storm using motion compensation (left) and straight temporal averaging (right). The true image is shown in the center panel. The features indicated by the arrows are discussed in the text.

This admittedly crude approach may lead to some artifacts remaining associated with limb darkening as a result of the limited study performed here. But there is a significant reduction in the number and magnitude of artifacts compared with the case of no correction. Ideally, the limb correction should be determined on a per-satellite basis and for different scene types, etc. This is a substantial task that we hope to tackle in the future.

The limb correction is applied in the following way. For each individual geostationary satellite image a quartic polynomial fit is produced for the radiance as a function of window brightness temperature using the calibration tables available in the ISCCP BT version 1 files. This is used to obtain the radiance, which is corrected for the limb effects by applying expression (6). This is then converted back to temperature by applying Newton's method to the quartic expression. The computational overhead is small, since the expression for the polynomial and limb correction function can be evaluated rapidly and the Newton scheme has a quadratic order of convergence. Also, we only need to store the quartic coefficients for each geostationary image, whereas using an alternative lookup table approach would require a significantly greater amount of storage (following the reformat stage the satellite calibration tables which might be used as lookup tables are no longer required).
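The application of the correction might be sketched as follows: evaluate the limb function of Eq. (7), fit a quartic polynomial for radiance as a function of brightness temperature from the calibration table, divide by the limb factor as in Eq. (6), and invert the quartic with Newton's method. The defaults below use the CLAUS choice a = 0.09 noted above; the nadir cutoff, the convergence tolerance, and the placeholder calibration arrays are assumptions.

```python
import numpy as np

def limb_factor(zenith_deg, a=0.09, b=1.000602, nadir_cutoff_deg=11.0):
    """Limb correction function of Eq. (7); a = 0.09 is the CLAUS value,
    a = 0.03247 the Minnis et al. (1991) value."""
    if zenith_deg < nadir_cutoff_deg:
        return 1.0
    return b + a * np.log(np.cos(np.radians(zenith_deg)))

def correct_pixel_temperature(temp_obs, zenith_deg, calib_temp, calib_rad):
    """Limb-correct one geostationary pixel: quartic fit of radiance against
    temperature from the calibration table (calib_temp, calib_rad stand in
    for the ISCCP BT table entries), correction of the radiance to nadir via
    Eq. (6), and inversion back to temperature with Newton's method."""
    coeffs = np.polyfit(calib_temp, calib_rad, 4)            # quartic R(T)
    poly = np.poly1d(coeffs)
    dpoly = poly.deriv()
    rad_nadir = poly(temp_obs) / limb_factor(zenith_deg)     # Eq. (6)
    temp = float(temp_obs)                                   # initial guess
    for _ in range(10):                                      # Newton iterations
        step = (poly(temp) - rad_nadir) / dpoly(temp)
        temp -= step
        if abs(step) < 1e-3:
            break
    return temp
```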

6. Temporal interpolation

Gaps in the spatially sampled images because of incomplete satellite coverage cause problems for applications which require complete data, such as storm tracking, scale-dependent analysis, and some GCM validation activities. For this reason, we use a temporal interpolation method to "fill in" the data holes to produce a spatially and temporally homogeneous dataset. The interpolation scheme is based on a cross-correlation method to estimate a field of cloud motion vectors over each data void using adjacent images in the series. Interpolation is then performed along the local velocity vector over the neighbouring image frames. As will be shown below, the motion compensation reduces blurring of moving cloud systems, especially in midlatitude regions, as compared with the simple linear interpolation method used by Tanaka et al. (1991), which neglects motion effects.

Central to this method is the estimation of cloud motion vectors. Much work has been done in this area as a means of deriving mesoscale and synoptic wind fields for use in forecasting and model validation. This work largely uses cross-correlation methods to search for highly correlated regions between pairs of successive satellite images. Straightforward application of cross-correlation methods, however, often produces irregular velocity fields containing nonphysical motion vectors. By thresholding the velocity field according to the correlation coefficient many (but not all) of the erroneous velocity vectors may be rejected, but this can lead to sparsely sampled velocity fields (e.g., Wahl and Simpson 1990). Attempts to produce spatially smooth fields often rely on iterative methods involving, for example, vector median filters (Simpson and Gobat 1994), energy minimization (Larsen et al. 1998), or fluid motion models (Hasler et al. 1998). However, these techniques tend to be computationally expensive, prohibiting their use for large datasets such as that here. In addition, not all these methods clearly demonstrate that the smoothed velocity field actually represents a better estimate of the true field. We have found that for the present dataset, confusion between multiple correlation maxima is a large contributor to the presence of erroneous velocity estimates. This is most likely due to a breakdown in the implicit assumption of the cross-correlation method, which holds that at small scales the image structure evolves entirely through rigid translation and not rotation, dilation, or differential brightness variations. The degree to which these assumptions are valid for cloud imagery depends on the spatial scale and temporal sampling of the image sequence. As, for example, the temporal sampling rate decreases, larger differences may occur in the cloud structure between subsequent images, violating these assumptions and resulting in a poor correlation match for the true displacement vector. If, in addition, a given cloud feature happens to resemble a neighbouring cloud system, then an erroneous correlation peak may appear, even though the corresponding displacement vector does not correspond to true physical motion.

In the present approach, we use a hierarchical method of selecting the final correlation peak. Initially, relatively large blocks are chosen which are designed to follow the motion of synoptic systems such as midlatitude storms or tropical cyclones. We find that for blocks with linear sizes of 20°, the correlation surfaces for the ISCCP B3 data often contain a single peak corresponding to the average motion at that spatial scale. Thus, confusion between multiple peaks rarely occurs on these scales. However, since the velocity field is underresolved at this scale, we divide each large block into four smaller blocks and recompute the correlation surface. The velocity estimate in the large block is used to select the local maximum in the correlation surface produced by each smaller block.


FIG. 6. An example of the final result of applying the algorithm at 2100 UTC 6 Jan 1992: (a) grayscale image, (b) temperature calibrated, (c) first quality field, and (d) second quality field.


FIG. 7. Difference temperature plot for the CLAUS algorithm and the Tanaka et al. (1991) algorithm for 0600 UTC 4 Jan 1984: CLAUS–Tanaka.

(In principle, a quad-tree algorithm could be used, so that the correlation surface need not be recomputed. The total computation time would therefore not be sensitive to the number of hierarchical levels used.) This method implicitly assumes that smaller-scale motions may be regarded as perturbations on a larger-scale flow. For midlatitude systems, squall lines, and tropical cyclones this appears to be a good assumption, with the method producing velocity fields that are substantially more coherent, and that result in smaller interpolation errors than those made from straightforward cross-correlation methods. However, for complex dynamics at small spatial scales, common to tropical convection, this assumption breaks down and the method provides little (if any) improvement over simple linear interpolation as used by Tanaka et al. (1991). Finally, we note that this method has little computational overhead, unlike other more sophisticated iterative techniques, which makes it ideal for large datasets such as CLAUS.

Implementation of this method proceeds as follows. Given a data void in the temperature image T_i^t at time t, a grid of contiguous blocks centered on the void is overlaid on the preceding image T_i^{t-1}. Each block is then translated over a portion of the image T_i^{t+1} to find the displacement vector v_f that produces the largest correlation coefficient. If any of the blocks contain incomplete data, that block is repositioned on the next frame, for example, from image T_i^{t-1} to T_i^{t-2}. As a quality control, the matching is then performed in reverse, where the grid of blocks is overlaid on image T_i^{t+1} and matched against the preceding frame T_i^{t-1} to produce the vector v_b. If the displacement vectors v_f and v_b agree to within a specified value δv and have correlation coefficients larger than a specified threshold R_0, the average of the vectors is used for the local velocity estimate. The vectors are rejected if |v_f − v_b| > δv or if both correlation coefficients fall below R_0. If only one vector satisfies r > R_0, then the corresponding vector is selected. Rejected velocity estimates are replaced by local averages of the velocity field. In the event that no neighboring velocity values are defined, the velocity estimate is set to zero. In this case, the interpolation reduces to the simple non-motion-compensated method. The resulting set of motion vectors are then "fed in" to the hierarchical scheme described above. At each stage in the hierarchy, the quality control just described is implemented.

Once the velocity field has been estimated, the temperature values T_i^t in the data void are interpolated according to

$$
T_i^{t} = \frac{\sum_{\Delta t} W(\Delta t)\, T_{i + \mathbf{v}_i \Delta t}^{t + \Delta t}}
               {\sum_{\Delta t} W(\Delta t)},
\qquad (8)
$$

Page 15: An improved algorithm for generating global window

1308 VOLUME 17J O U R N A L O F A T M O S P H E R I C A N D O C E A N I C T E C H N O L O G Y

TABLE 4. First quality byte structure. The spare bit fields are used when additional satellites are operating, e.g., an additional METEOSAT.

Bit   Satellite series
1     GMS
2     METEOSAT
3     GOES-E
4     GOES-W
5     NOAA-PM
6     NOAA-AM
7     Spare
8     Spare

TABLE 5. Second quality byte packing order.

Bit     8   7 6 5   4 3 2 1
Field   M   I       Z

TABLE 6. Values of the interpolation flags in the second quality byte.

Value   Regression/interpolation level
0       Primary spatial
1       Secondary spatial
2       Tertiary spatial
3       Primary temporal
4       Secondary temporal
5       Tertiary temporal
6       Reserved
7       Reserved

where v_i is the local cloud motion field, the summation runs over the two neighboring images on either side of the target image, and the temporal weighting function W(Δt) is temporally symmetric, with W(Δt = ±3 h) = 5W(Δt = ±6 h). Only those images that have spatially sampled data values contribute to the sum. The values of the weighting function were chosen so that the image contributions at times Δt = ±6 h are negligible when the Δt = ±3-h frames are present. When v_i is locally zero, the interpolated value is given by simple temporal averaging.
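Equation (8) for a single void grid point might be sketched as follows; the representation of the neighboring frames as a mapping from hour offset to an (image, valid-mask) pair, the 5:1 weight ratio, and the rounding of the advected position are assumptions consistent with the description above.

```python
import numpy as np

def fill_void_point(frames, velocity, iy, ix):
    """Motion-compensated interpolation of Eq. (8) at grid point (iy, ix).
    `frames` maps an offset in hours (-6, -3, +3, +6) to an (image, valid)
    pair; `velocity` is the local motion vector in grid points per hour."""
    numer, denom = 0.0, 0.0
    for dt, (image, valid) in frames.items():
        weight = 1.0 if abs(dt) == 3 else 0.2     # W(+/-3 h) = 5 W(+/-6 h)
        # advect the target position along the local motion vector
        jy = int(round(iy + velocity[0] * dt))
        jx = int(round(ix + velocity[1] * dt))
        if not (0 <= jy < image.shape[0] and 0 <= jx < image.shape[1]):
            continue
        if not valid[jy, jx]:
            continue
        numer += weight * image[jy, jx]
        denom += weight
    return numer / denom if denom > 0.0 else np.nan
```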

The left panel in Fig. 5 shows the results of the motion-compensated interpolation technique applied to a midlatitude storm in the Southern Hemisphere. The true image is shown in the center panel, while temporal interpolation without motion compensation was used to produce the image in the right panel. In this example, motion compensation reduces the amount of blurring due to cloud motion (see lower arrow), reducing the corresponding interpolation error by 30% from 6.5 to 4.6 K. Notice that the cloud band that appears doubled in the right panel (see upper arrow) is much better represented by the motion-compensated interpolation. An example of a complete image obtained from the spatial and temporal parts of the algorithm is shown in Fig. 6, along with its temperature scaled version and images of the two quality fields.

A comparison of Tanaka et al.'s (1991) results and the result of applying the full CLAUS algorithm is shown in Fig. 7. This figure shows that the algorithm reported here consistently has lower cloud-top temperatures. This may be due to using the more recent ISCCP calibration tables, but is more likely to be due to the difference in approach of the spatial sampling. As has already been mentioned, the distance weighting scheme fills more of the grid values in the primary pass than that of Tanaka et al. (1991). The Tanaka algorithm fills the remaining small holes by local averaging, which smooths out the temperatures, resulting in warmer cloud tops. Additionally, the box averaging used by Tanaka is a discrete process, while our distance weighting method is continuous, so there may be an element of misalignment in this comparison. Clearly evident is the effect of the limb-darkening correction for the geostationary satellites, which show warmer temperatures than Tanaka et al. (1991) around the geostationary limbs; the Indian Ocean region shows this most clearly, since there is only a very small overlap between METEOSAT and GMS there, and at this particular time there is no polar orbiter coverage for the region.

7. Quality indicators

As well as producing the gridded WBT data, a large amount of quality data is also produced, which is stored as two bytes per grid point. Although this amounts to twice as much data as the gridded data, this is justified because it provides users with as much information as possible to allow assessment of the quality of a grid point value based on a knowledge of how it was derived. The first byte contains the contributing satellite information, one bit per satellite and packed according to Table 4.
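As an illustration of how a user might decode the first quality byte, the following sketch applies the bit assignments of Table 4, under the assumption that bit 1 is the least significant bit of the stored byte; the function and dictionary names are ours.

# Bit assignments of the first quality byte (Table 4), assuming bit 1 is the
# least significant bit of the stored byte.
SATELLITE_BITS = {
    1: "GMS", 2: "METEOSAT", 3: "GOES-E", 4: "GOES-W",
    5: "NOAA-PM", 6: "NOAA-AM", 7: "Spare", 8: "Spare",
}

def contributing_satellites(byte1: int):
    """List the satellite series whose bit is set in the first quality byte."""
    return [name for bit, name in SATELLITE_BITS.items()
            if byte1 & (1 << (bit - 1))]

# A grid point built from METEOSAT (bit 2) and NOAA-PM (bit 5) data:
print(contributing_satellites(0b00010010))   # ['METEOSAT', 'NOAA-PM']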

The second byte (Table 5) contains a missing data value flag in the high end bit, with a value of zero for present and one for missing. The first 4 bits (low end) contain the average satellite zenith angle (Z) computed as

$$z_i = \frac{\sum_{j=1}^{N} z_j\, k^{(s)}_{i,j}}{\sum_{j=1}^{N} k^{(s)}_{i,j}}, \qquad (9)$$

where $z_j = \cos(\theta_j)$ is the cosine of satellite zenith angle for data point $j$ and $z_i$ is the mean cosine of satellite zenith angle computed from all contributing satellites to the $i$th grid point. The values of $z_i$ are scaled into the integer range 0–15. The remaining three bits (I) are used to indicate the means by which the grid point data has been derived, for example, primary regression, temporal interpolation, etc. The values take the range from 0 to 7, that is, they fit into 3 bits. The various values are listed in Table 6. The temporal values are defined to reflect the number of frames at times $\pm 3$ h from target which contribute to the interpolation, since they generally provide a more reliable estimate than the $\pm 6$-h images. The primary, secondary, and tertiary temporal interpolation values indicate that both, one, or neither of the $t_0 \pm 3$-h frames were used in the interpolation. When tertiary temporal interpolation is used, the estimated image value is solely dependent on the image frames 6 h from the target image, and thus represents the least accurate interpolation level. The reliability of the local image estimate therefore increases with decreasing value of the interpolation fields taken as a whole.
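A matching sketch for the second quality byte is given below, again assuming bit 1 is the least significant bit, so that Z occupies bits 1–4, I bits 5–7, and M bit 8 (Table 5). The linear scaling of the mean cosine into 0–15 is our assumption, since the exact scaling is not spelled out here, and the helper names are illustrative.

import numpy as np

def mean_zenith_cosine(z, k):
    """Kernel-weighted mean cosine of satellite zenith angle, as in Eq. (9)."""
    z, k = np.asarray(z, float), np.asarray(k, float)
    return float(np.sum(z * k) / np.sum(k))

def pack_quality_byte2(missing, interp_flag, z_mean):
    """Pack M (1 bit), I (3 bits, Table 6), and Z (4 bits) into one byte,
    scaling the mean cosine linearly into the integer range 0-15."""
    z_scaled = max(0, min(15, int(round(z_mean * 15))))
    return ((missing & 0x01) << 7) | ((interp_flag & 0x07) << 4) | z_scaled

def unpack_quality_byte2(byte2):
    """Inverse of pack_quality_byte2: return (missing, interp_flag, z_scaled)."""
    return (byte2 >> 7) & 0x01, (byte2 >> 4) & 0x07, byte2 & 0x0F

# Example: a present grid point, primary temporal interpolation (flag 3),
# mean zenith cosine of 0.8.
b = pack_quality_byte2(0, 3, mean_zenith_cosine([0.9, 0.7], [2.0, 2.0]))
print(unpack_quality_byte2(b))   # (0, 3, 12)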

The two quality bytes for each grid point are stored as images in two separate files. Color-coded examples of these are shown in Fig. 6; they clearly reveal the complex way in which the coverage of the individual satellites overlaps, as well as the distribution of methods by which the data have been combined.

8. Applications

The types of data that are produced by the algorithm described here can be used for a variety of climatological studies and GCM validation. Processing the whole of the ISCCP B3 archive will produce a dataset from July 1983 to the present, that is, ∼16 yr, at 3-hourly time steps. This will allow a range of climatological studies to be performed on cloud systems from mesoscale to supercloud cluster scales, for example, African squall lines, modulation of convection by easterly waves, diurnal variations, the Madden–Julian oscillation, and the effects of El Niño, to name but a few. Salby et al. (1991) used their limited period of data to generate time means, filtered variances, and power spectra. The CLAUS data will allow these simple statistics to be generated over a much longer period, to explore the natural variability of tropical cloud systems in particular. The motion and evolution of cloud systems can also be explored (Hodges 1998), although new techniques are currently being investigated to deal with differing cloud structures.

For GCM validation, the data can be used to explore whether models produce the correct diurnal cycle, an important diagnostic of the radiation scheme and convective parameterization, and whether the models are capable of simulating such processes as the Madden–Julian oscillation.

As an example, the usefulness of the CLAUS data is demonstrated by a limited study of the diurnal variation in the Tropics using a year's worth of data for 1992. The results of this study are shown in Fig. 8 for the diurnal amplitude and Fig. 9 for the diurnal phase (in terms of the local time) for the four seasons, obtained by performing a time spectral analysis at each grid point.
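As a sketch of the kind of spectral analysis involved, the routine below extracts the amplitude and phase of the 24-h harmonic from a 3-hourly time series at a single grid point. The seasonal stratification, any gap handling, and the conversion from the series' time base to local solar time (which requires the longitude) are omitted, and the function name is ours.

import numpy as np

def diurnal_harmonic(wbt, dt_hours=3.0):
    """Amplitude (K) and hour of maximum (0-24, relative to the series start)
    of the diurnal harmonic of a regularly sampled brightness-temperature
    series at one grid point. The series should span a whole number of days
    so that the 24-h cycle falls exactly on an FFT frequency bin."""
    wbt = np.asarray(wbt, dtype=float)
    n = wbt.size
    spec = np.fft.rfft(wbt - wbt.mean())
    # The diurnal harmonic completes one cycle per day, so its index equals
    # the number of days spanned by the series: n * dt / 24.
    k = int(round(n * dt_hours / 24.0))
    amplitude = 2.0 * np.abs(spec[k]) / n
    # spec[k] corresponds to a term amplitude*cos(2*pi*t/24 + phase), whose
    # maximum occurs at t = -phase * 24 / (2*pi) hours after the first sample.
    phase = np.angle(spec[k])
    hour_of_max = (-phase * 24.0 / (2.0 * np.pi)) % 24.0
    return amplitude, hour_of_max

# Example: a pure diurnal signal peaking 15 h after the first sample.
t = np.arange(0, 24 * 30, 3.0)                       # 30 days of 3-hourly samples
series = 280.0 + 10.0 * np.cos(2 * np.pi * (t - 15.0) / 24.0)
print(diurnal_harmonic(series))                      # ~(10.0, 15.0)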

The diurnal amplitude of the WBT in Fig. 8 shows that the maximum WBT amplitude at this frequency occurs over the land, as would be expected as a result of the low heat capacity of the land surface, which allows it to respond to solar heating and infrared cooling faster than the ocean. Regions over the land with lower amplitudes of the WBT indicate the main regions of convective activity. For example, over the African region of the Sahel and Congo for JJA there is a band of lower amplitudes from the Ethiopian highlands to the Guinea coast, which indicates the main region of convective activity for this period.

More general observations of the diurnal phase indicate that for clear-sky regions over the land the maximum WBT occurs in the early afternoon, after the peak in the solar heating, whereas for clear-sky regions over the ocean the maximum WBT occurs in the late afternoon due to the larger heat capacity of water, resulting in a slower temperature response to the solar heating. In convectively active regions over the land the maximum in WBT occurs in the morning, when the convective activity is decaying. Over the convectively active oceanic regions the maximum in WBT shows less organization, with a lot of variability, although around the land margins the phase is consistently in the late evening, possibly associated with the formation of stratiform clouds.

More extended studies will be performed along the lines described above once the full CLAUS dataset is completed.

9. Conclusions

A new algorithm has been presented for generating gridded WBT from multiple satellite images. Although the skeleton of the algorithm follows that of Tanaka et al. (1991), the individual elements of the algorithm use alternative techniques that are more comprehensive in their approach, for example, the use of spherical kernel estimators for the spatial regression, the introduction of a temporal weighting for the asynoptic polar orbiter data, a correction for limb darkening, and the use of motion-compensated temporal interpolation to fill data voids. This results in a reduction in many of the artifacts apparent in the year's worth of data processed by Tanaka et al. (1991). The CLAUS project will process the whole of the ISCCP B3 archive, amounting to about 16 years' worth of data. This will allow meaningful climatological studies to be performed as described in the previous section, as well as the validation of several aspects of GCM integrations.

Further improvements to the algorithm are ongoing, including studies of the limb-darkening problem: the present correction is empirical, but radiative transfer models will also be used in the future. Further work will also be performed to study the effects of bias in the source data and of the sampling of the raw data to B3 resolution. The efficacy of applying the algorithm to full-resolution data will also be explored.

FIG. 8. Diurnal amplitude for the four seasons: DJF, MAM, JJA, and SON.

FIG. 9. Diurnal phase in local time for the four seasons: DJF, MAM, JJA, and SON.

Acknowledgments. The data used for this study were obtained from the NASA Langley Research Center EOSDIS Distributed Active Archive Center. The CLAUS project is part funded by the European Commission under Contract ENV4-CT96-0356 (http://www.ceo.org/claus.html). The algorithm implementation and archive processing are being undertaken by the National Remote Sensing Centre (NRSC). The authors would also like to thank the reviewers for their helpful comments and for bringing related work to our attention, and also Professor Murray Salby for the supply of the data derived from the Tanaka algorithm.

REFERENCES

Brest, C. L., W. B. Rossow, and M. Roiter, 1997: Update of radiance calibrations for ISCCP. J. Atmos. Oceanic Technol., 14, 1091–1109.

Hasler, A. F., K. Palaniappan, C. Kambhummetu, P. Black, E. Uhlhorn, and D. Chesters, 1998: High-resolution wind fields within the inner core and eye of a mature tropical cyclone from GOES 1-min images. Bull. Amer. Meteor. Soc., 79, 2483–2496.

Hodges, K. I., 1996: Spherical nonparametric estimators applied to the UGAMP model integration for AMIP. Mon. Wea. Rev., 124, 2914–2932.

——, 1998: Feature-point detection using distance transforms: Application to tracking tropical convective complexes. Mon. Wea. Rev., 126, 785–795.

Larsen, R., K. Conradsen, and B. K. Ersboll, 1998: Estimation of dense image flow fields in fluids. IEEE Trans. Geosci. Remote Sens., 36, 256–264.

Minnis, P., and E. F. Harrison, 1984: Diurnal variability of regional cloud and clear-sky radiative parameters derived from GOES data. Part 1: Analysis method. J. Climate Appl. Meteor., 23, 993–1011.

——, D. F. Young, and E. F. Harrison, 1991: Examination of the relationship between outgoing infrared window and total longwave fluxes using satellite data. J. Climate, 4, 1114–1133.

Rossow, W. B., A. Walker, and M. Roiter, 1997: International Satellite Cloud Climatology Project (ISCCP)—Description of reduced resolution radiance data. WMO Tech. Document 58, 163 pp. [Available from ISCCP Global Processing Center, NASA Goddard Institute for Space Studies, New York, NY 10025.]

Salby, M. L., H. H. Hendon, K. Woodberry, and K. Tanaka, 1991: Analysis of global cloud imagery from multiple satellites. Bull. Amer. Meteor. Soc., 72, 467–480.

Saunders, R., M. Matricardi, and P. Brunel, 1999: An improved fast radiative model for assimilation of satellite radiance observations. Quart. J. Roy. Meteor. Soc., 125, 1407–1425.

Simpson, J. J., and J. I. Gobat, 1994: Robust velocity estimates, stream functions, and simulated Lagrangian drifters from sequential spacecraft data. IEEE Trans. Geosci. Remote Sens., 32, 479–493.

Smith, G. L., N. Manalo-Smith, and L. M. Avis, 1994: Limb-darkening models from along-track operation of the ERBE scanning radiometer. J. Appl. Meteor., 33, 74–84.

Tanaka, K., K. Woodberry, H. Hendon, and M. Salby, 1991: Assimilation of global cloud imagery from multiple satellites. J. Atmos. Oceanic Technol., 8, 613–626.

Wahl, D. D., and J. J. Simpson, 1990: Physical processes affecting the objective determination of near-surface velocity from satellite data. J. Geophys. Res., 95, 13 511–13 528.