ECMWF WWRP/WMO Workshop on QPF Verification - Prague, 14-16 May 2001

NWP precipitation forecasts: Validation and Value

Deterministic Forecasts

Probabilities of Precipitation

Value

Extreme Events

François Lalaurette, ECMWF

Deterministic Verification

Deterministic:

one cause (the weather today - the analysis),

one effect (the weather in n days - the forecast)

Verification of the forecast using observations

categorical (e.g. verify events when daily rainfall > 50mm)

continuous (needs a definition or norm for errors)

e.g. RMSE = sqrt( mean[ (RR_forecast − RR_obs)² ] ) (Root Mean Square Error)
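A minimal sketch of these two verification modes (the sample forecast/observed totals and the 50 mm threshold are invented for illustration):

```python
import numpy as np

# Invented 24h precipitation totals (mm) for a handful of forecast/observation pairs.
rr_forecast = np.array([0.0, 2.5, 12.0, 55.0, 8.0, 0.3])
rr_observed = np.array([0.2, 0.0, 20.0, 48.0, 6.0, 0.0])

# Continuous verification: bias (observation - forecast, as defined on the next slide) and RMSE.
bias = np.mean(rr_observed - rr_forecast)
rmse = np.sqrt(np.mean((rr_forecast - rr_observed) ** 2))

# Categorical verification: check the event "daily rainfall > 50 mm" in forecast and observations.
event_forecast = rr_forecast > 50.0
event_observed = rr_observed > 50.0

print(f"bias = {bias:+.2f} mm, RMSE = {rmse:.2f} mm")
print("event (>50 mm) forecast:", event_forecast, " observed:", event_observed)
```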

Deterministic Verification: Biases

Bias=mean(observation-forecast)

Diurnal cycle (too much convective rain by 12h, too little by 00h - local time)

[Bias time series; model changes marked: 3D-var, New Physics, New microphysics, T319, 4D-var, T511, 60 levels + new precipitation scheme]

Deterministic Verification: Bias maps

(DJF 2001)

Overestimation of orographic precipitation

Deterministic Verification: scatter plots

Error distribution

Deterministic Verification: Frequency Distribution

Small amounts of precipitation are much more frequent in the forecast than in SYNOP observations

[Figure annotations: % of days with < 0.1 mm = 39% and 58%]

Deterministic Verification: Heavy rainfall

Higher resolution has brought more realistic distributions of heavy rainfall

Deterministic Verification: Does all this make sense?

SYNOP observation catchment area (rain gauge) = O(10⁻¹ m²)

Model grid catchment area = O(1000 km²)

A large number of independent SYNOP observations per model grid box is required to assess the precipitation flux in a grid box.

High-resolution climatological data, O(10) stations per model grid box, are not exchanged in real time, but can be used for a posteriori verification.

Two studies recently explored the sensitivity of ECMWF verification to the upscaling of observations (Ghelli and Lalaurette, 2000, used data from Météo-France, while Cherubini et al. used data from MAP).

Deterministic verification: Super-observations

SYNOP data collected from the GTS

Climatological network (Météo-France)
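A minimal sketch of the upscaling idea behind such super-observations: average all high-density gauge reports falling inside each model grid box. The grid spacing, station coordinates and function name are illustrative assumptions, not the procedure used in the studies above.

```python
import numpy as np

def super_observations(lats, lons, values, dlat=1.0, dlon=1.0):
    """Average point observations into grid-box means ("super-observations").

    lats, lons, values: 1-D arrays of station latitude, longitude and 24h rainfall.
    dlat, dlon: assumed model grid spacing in degrees (illustrative).
    Returns a dict mapping (i_lat, i_lon) grid-box indices to the box-mean value.
    """
    boxes = {}
    for lat, lon, val in zip(lats, lons, values):
        key = (int(np.floor(lat / dlat)), int(np.floor(lon / dlon)))
        boxes.setdefault(key, []).append(val)
    # One super-observation per box: the mean of all gauges inside it.
    return {key: float(np.mean(v)) for key, v in boxes.items()}

# Toy climatological-network reports (lat, lon, 24h rainfall in mm); values invented.
lats = np.array([45.1, 45.3, 45.4, 46.2])
lons = np.array([2.1, 2.4, 2.2, 2.8])
rain = np.array([12.0, 18.0, 15.0, 3.0])
print(super_observations(lats, lons, rain))
```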

Deterministic verification: Super-observations (2)

The bias towards too many light rain events is to a large extent a representativity artifact

Probabilities of Precipitation

PoP can be derived following 2 strategies:

To derive the PDF from past error (conditional) statistics (MOS, Kalman Filter) e.g. using scatter diagrams

To transport a prescribed PDF for initial errors into the future (dynamical or “ensemble” approach)

• ECMWF runs 50 perturbed forecasts at T255L40 (+ 1 control)
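For the dynamical (ensemble) strategy, the PoP for a given event is simply the fraction of members forecasting it. A minimal sketch (member values are synthetic, 50 + 1 members as above; the 1 mm threshold is illustrative):

```python
import numpy as np

# Illustrative 24h precipitation (mm) from 50 perturbed members + 1 control.
rng = np.random.default_rng(0)
members = rng.gamma(shape=0.8, scale=4.0, size=51)

# PoP for the event "24h precipitation >= 1 mm": fraction of members above the threshold.
threshold = 1.0
pop = np.mean(members >= threshold)
print(f"PoP(RR >= {threshold} mm) = {pop:.2f}")
```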

Probabilities of Precipitation (EPSgram)

Forecast for Prague, Base time 10/5/2001 12UTC

Probabilistic Verification

What do we want to verify?

Whether probabilities are biased…

• e.g., when an event is forecast with probability 60%, it should verify 6 times out of 10 (no more, no less!)

– but by that criterion alone, always forecasting probability = climatological frequency would be a “perfect” forecast

… or whether the probabilistic product is useful

• compared, for example, with a single, deterministic forecast

Probabilistic Verification: 1) Reliability Diagrams
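A reliability diagram plots, for each bin of issued probability, the observed frequency of occurrence against the mean forecast probability; a perfectly reliable system lies on the diagonal. A minimal sketch of computing the diagram's points (the forecast/outcome sample is synthetic):

```python
import numpy as np

def reliability_points(prob_forecasts, outcomes, n_bins=10):
    """Return (mean forecast probability, observed frequency, count) per probability bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    points = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (prob_forecasts >= lo) & (prob_forecasts < hi)
        if lo == edges[-2]:                      # include p = 1 in the last bin
            in_bin |= prob_forecasts == 1.0
        if in_bin.any():
            points.append((prob_forecasts[in_bin].mean(),
                           outcomes[in_bin].mean(),
                           int(in_bin.sum())))
    return points

# Synthetic example: 1000 probability forecasts with outcomes drawn to be reliable by construction.
rng = np.random.default_rng(1)
p = rng.uniform(0.0, 1.0, 1000)
o = (rng.uniform(0.0, 1.0, 1000) < p).astype(float)
for mean_p, obs_freq, n in reliability_points(p, o):
    print(f"forecast {mean_p:.2f}  observed {obs_freq:.2f}  (n={n})")
```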

Probabilistic Verification: 4) Brier Scores

BS = (1/N) Σ (p − o)²

p is the probability forecast (relative number of EPS members forecasting the event)

o is the verification (= 1 if the event occurred, = 0 otherwise)

the Brier score varies from 0 (perfect, deterministic forecast) to 1 (perfectly wrong, deterministic forecast)

the Brier Skill Score measures the relative performance with respect to the climate (for which p=pc, the relative frequency of occurrence in the long term climate)

BSS = 1 − (BS / BS_c)
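A minimal sketch of both scores, using the convention above (o = 1 when the event occurred); the probabilities and outcomes are invented, and the sample frequency stands in for the long-term climate frequency p_c:

```python
import numpy as np

def brier_score(p, o):
    """BS = (1/N) * sum((p - o)^2), with o = 1 if the event occurred, 0 otherwise."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

# Illustrative EPS probabilities (fraction of members forecasting the event) and outcomes.
p = np.array([0.1, 0.6, 0.9, 0.2, 0.0, 0.4])
o = np.array([0,   1,   1,   0,   0,   1  ])

bs = brier_score(p, o)
p_climate = o.mean()                        # sample frequency; the long-term climate frequency in practice
bs_climate = brier_score(np.full_like(p, p_climate), o)
bss = 1.0 - bs / bs_climate                 # BSS = 1 - BS / BS_c
print(f"BS = {bs:.3f}, BS_c = {bs_climate:.3f}, BSS = {bss:.3f}")
```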

Proba. Verification: Brier Skill Scores Time Series

[Brier Skill Score time series; EPS changes marked: Rnorm + bug fix, T255, Rnorm, Stochastic Physics, 60 levels + new precipitation]

Forecast Value: Brier Scores partition

The BS can be split into the sample climate uncertainty, the forecast reliability (BS_REL), and the forecast resolution (BS_RSL):

resolution tells how informative the probabilistic forecast is; it varies from zero, for a system in which all forecast probabilities verify with the same frequency of occurrence, up to the sample uncertainty, for a system in which the frequency of verifying occurrences takes only the values 0 or 100% (such a system perfectly resolves occurring from non-occurring events);

reliability tells how close the frequencies of observed occurrences are to the forecast probabilities (on average, when an event is forecast with probability p, it should occur with frequency p);

uncertainty varies from 0 to 0.25 and indicates how close to 50% the occurrence of the event was during the sample period (uncertainty is 0.25 when the event is split equally into occurrence and non-occurrence).
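In the standard (Murphy) partition, grouping the N cases by issued probability gives BS = BS_REL - BS_RSL + UNC, where BS_REL measures the squared distance between each issued probability and the observed frequency in its group, BS_RSL the squared distance between the group frequencies and the overall frequency, and UNC = s(1 − s) with s the overall frequency of occurrence. A minimal sketch (the forecast probabilities are assumed to take a finite set of values, as EPS member fractions do; the data are invented):

```python
import numpy as np

def brier_decomposition(p, o):
    """Murphy partition of the Brier score: BS = reliability - resolution + uncertainty."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n, obar = len(o), o.mean()
    rel = res = 0.0
    for pk in np.unique(p):                     # group cases issued with the same probability
        sel = p == pk
        nk, ok_bar = sel.sum(), o[sel].mean()
        rel += nk * (pk - ok_bar) ** 2          # reliability term (smaller is better)
        res += nk * (ok_bar - obar) ** 2        # resolution term (larger is better)
    unc = obar * (1.0 - obar)                   # sample uncertainty, at most 0.25
    return rel / n, res / n, unc

p = np.array([0.1, 0.6, 0.9, 0.2, 0.0, 0.4, 0.6, 0.9])
o = np.array([0,   1,   1,   0,   0,   1,   0,   1  ])
rel, res, unc = brier_decomposition(p, o)
print(f"REL={rel:.3f}  RSL={res:.3f}  UNC={unc:.3f}  BS={rel - res + unc:.3f}")
```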

Forecast Value: Categorical Forecasts

Categorical forecast - Step 1: event definition

e.g.: will rain exceed 10mm over the 24h period H+72/H+96?

Step 2: gather verification data

H=number of good forecasts of the event occurring

M=number of misses (no-forecast but the event occurred)

F=number of false alarms (yes-forecast of a no-event)

Z=number of good forecasts of a no-event

False Alarm Rate=F/(F+Z)

Hit Rate=H/(H+M)
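A minimal sketch of steps 1-2 and the two rates, counting the four contingency-table entries over paired forecast/observed totals (the pairs and the 10 mm threshold follow the example above but are invented):

```python
import numpy as np

def contingency_scores(forecast_event, observed_event):
    """Count H, M, F, Z and return the Hit Rate and False Alarm Rate."""
    f = np.asarray(forecast_event, bool)
    o = np.asarray(observed_event, bool)
    H = np.sum(f & o)          # event forecast and observed
    M = np.sum(~f & o)         # missed events
    F = np.sum(f & ~o)         # false alarms
    Z = np.sum(~f & ~o)        # correct forecasts of a no-event
    return H / (H + M), F / (F + Z)

# Step 1: event = "rain > 10 mm over the 24h period H+72/H+96" (invented sample).
rr_fc  = np.array([2.0, 15.0, 30.0, 0.0, 12.0, 8.0])
rr_obs = np.array([0.0, 12.0,  5.0, 0.0, 20.0, 11.0])
hr, far = contingency_scores(rr_fc > 10.0, rr_obs > 10.0)
print(f"Hit Rate = {hr:.2f}, False Alarm Rate = {far:.2f}")
```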

Value of Probabilistic Categorical Forecasts: Relative Operating Characteristics (ROC)

Forecasts of the event can be made at different probability levels

(10%, 20%, etc…)

[ROC curve built from the probability thresholds P > 0, P > 10%, P > 20%, ...]
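Each probability level yields one (False Alarm Rate, Hit Rate) point, and sweeping the level traces the ROC curve. A minimal sketch, reusing the contingency counts defined earlier on synthetic probabilities and outcomes:

```python
import numpy as np

def roc_points(prob_forecasts, outcomes, levels=(0.0, 0.1, 0.2, 0.4, 0.6, 0.8)):
    """(FAR, HR) pairs obtained by issuing a categorical "yes" forecast when p > level."""
    p = np.asarray(prob_forecasts, float)
    o = np.asarray(outcomes, bool)
    points = []
    for level in levels:
        f = p > level
        H, M = np.sum(f & o), np.sum(~f & o)
        F, Z = np.sum(f & ~o), np.sum(~f & ~o)
        points.append((F / (F + Z), H / (H + M)))
    return points

# Synthetic probabilities and imperfectly related outcomes, for illustration only.
rng = np.random.default_rng(2)
p = rng.uniform(0.0, 1.0, 500)
o = rng.uniform(0.0, 1.0, 500) < p ** 2
for level, (far, hr) in zip((0.0, 0.1, 0.2, 0.4, 0.6, 0.8), roc_points(p, o)):
    print(f"P > {level:.0%}: FAR = {far:.2f}, HR = {hr:.2f}")
```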

Categorical Forecast Economic Value (Richardson, 2000)

A cost/loss ratio (C/L) decision model can be based on several decision-making strategies:

1. Take preventive action (with cost C) on a systematic basis;

2. Never take action (and therefore face the loss L whenever the event occurs);

3. Take action when the event is forecast by the meteorological model;

4. Take action when the event occurs (this strategy assumes the availability of a perfect forecast).

Categorical Forecast Economic Value (Richardson, 2000)

Strategies 1 and 2 can be combined into a climatological baseline:

always take action if the cost/loss ratio is smaller than the climatological frequency of occurrence of the event, and take no action otherwise.

The economic value of the meteorological forecast is then computed as the reduction of the expense made possible by the use of the meteorological forecast:
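Following Richardson (2000), the mean expense of each strategy can be written from the relative frequencies of hits, misses and false alarms, and the value is the fraction of the climate-to-perfect expense reduction achieved by using the forecast. A minimal sketch (the frequencies, cost and loss are invented):

```python
def economic_value(h, m, f, cost, loss):
    """Relative economic value V = (E_climate - E_forecast) / (E_climate - E_perfect).

    h, m, f: relative frequencies of hits, misses and false alarms (per forecast case).
    cost: cost C of taking preventive action; loss: loss L when an unprotected event occurs.
    """
    s = h + m                                   # climatological frequency of the event
    e_forecast = (h + f) * cost + m * loss      # act on every "yes" forecast
    e_climate = min(cost, s * loss)             # best of strategies 1 and 2
    e_perfect = s * cost                        # act only when the event will occur
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Invented example: event frequency 10%, hits 8%, misses 2%, false alarms 5%, C/L = 0.1.
print(f"V = {economic_value(h=0.08, m=0.02, f=0.05, cost=1.0, loss=10.0):.2f}")
```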

Refinements of the EPS verification procedures

Address the skill over smaller areas (need to gather several event categories - CRPS)

Specifically target extreme events (need climatological data)

Refine the references (“Poor Man's Ensembles”)

It has been shown that the ensemble forecast of Z500 is not more skilful than cheaper alternatives (distributions of errors over the previous year and/or multi-model ensembles) (Atger, 1999; Ziemann, 2000)

The ensemble's maximum skill seems to be achieved in abnormal situations

Extreme Events: Recent examples

A) November French Floods (12-13/11/1999)

[Map of observed precipitation totals; colour scale in inches: 5, 10, 15, 20, 25; scale bar: 150 km]

Extreme Events: November floods

[Figures: TL319 precipitation accumulated 72-96h (shading: [40, 80] mm and >80 mm; scale bar 1100 km) and TL159 EPS probability of precipitation >20 mm (0.8”) (shading: >5%, >35%, >65%)]

Extreme Events: November floods

Verification against SYNOP data

[Panels for probability thresholds p > 0%, p > 10%, p > 20%]

Extreme Events: An EPS Climate

3 years (January 1997 to December 1999)

constant horizontal resolution (TL159)

Monthly basis, valid. 12UTC

Europe lat/lon grid (0.5° x 0.5° - oversampling)

T2m, Precip (24, 120, 240h acc.), 10m-wind speed

50 members (D5+D10) + Control (D0, D5+D10)

around 10,000 events per month

post-processing is fully non-parametric (archived values are all 100 percentiles plus the 1‰ and 999‰ quantiles)
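A minimal sketch of that non-parametric step: pool the EPS values available for one location, month and parameter, and archive their percentiles together with the 1‰ and 999‰ quantiles (the pooled sample here is synthetic):

```python
import numpy as np

# Synthetic pool of ~10,000 EPS precipitation values for one grid point, month and parameter.
rng = np.random.default_rng(3)
pooled = rng.gamma(shape=0.7, scale=5.0, size=10_000)

# Non-parametric climate: percentiles 1..99 plus the 0.1% and 99.9% quantiles.
levels = np.concatenate(([0.1], np.arange(1, 100), [99.9]))
climate_quantiles = np.percentile(pooled, levels)
for q, value in zip(levels[[0, 1, 50, 99, 100]], climate_quantiles[[0, 1, 50, 99, 100]]):
    print(f"{q}%: {value:.1f} mm")
```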

Extreme Events: An EPS Climate (2)

Extreme Events: EPS Climate (November)

24h rain rates exceeded with frequency:

1% and 1‰

Extreme Events: Proposals

A better definition of events worth plotting

e.g.: Number of EPS members forecasting values of 10m-wind speeds exceeding the 99% threshold in the “EPS Climate”

A non-parametric “Extreme Forecast Index”?

Based on how far the EPS distribution is from the Climate distribution

Extreme Events: Extreme Forecast Index

By re-scaling using the climate distribution, we can create a dimensionless, signed measure:

EFI = 3 sgn( ∫₀¹ [p − p_EPS(x_c(p))] dp ) · ∫₀¹ [p − p_EPS(x_c(p))]² dp

where x_c(p) is the climate quantile at probability p and p_EPS is the EPS cumulative distribution

The Extreme Forecast Index is:

0% when forecasting the climate distribution,

25% for a deterministic forecast of the median,

100% for a deterministic forecast of an extreme

A CRPS-like distance between distributions:

CRPS_lim = ∫ [p_EPS(x) − p_C(x)]² dx
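A minimal sketch of the index as written above: evaluate the EPS cumulative distribution at the climate quantiles x_c(p), integrate (p − p_EPS(x_c(p)))² over p, attach the sign of the unsquared integral and scale by 3 so that the climate gives 0%, a deterministic forecast of the climate median 25% and a deterministic extreme 100%. The climate sample, the discretisation and all names are illustrative assumptions:

```python
import numpy as np

def extreme_forecast_index(eps_values, climate_quantile, n=999):
    """Signed, dimensionless EFI comparing an EPS distribution with the climate distribution.

    eps_values: 1-D array of ensemble member values for one location and parameter.
    climate_quantile: function p -> x_c(p), the climate quantile at probability p.
    """
    p = (np.arange(n) + 0.5) / n                  # probability levels spanning (0, 1)
    x_c = climate_quantile(p)                     # climate quantiles at those levels
    # p_EPS evaluated at the climate quantiles: fraction of members below x_c(p).
    p_eps = np.array([np.mean(eps_values <= x) for x in x_c])
    diff = p - p_eps
    sign = np.sign(np.mean(diff))                 # which tail the EPS distribution leans towards
    return 3.0 * sign * np.mean(diff ** 2)        # 0 for climate, 0.25 for median, 1 for an extreme

# Illustrative check: synthetic climate (exponential) and an EPS shifted far into the wet tail.
rng = np.random.default_rng(4)
climate_sample = rng.exponential(scale=5.0, size=50_000)
eps = rng.exponential(scale=5.0, size=51) + 20.0
print(f"EFI = {extreme_forecast_index(eps, lambda p: np.quantile(climate_sample, p)):.2f}")
```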

Extreme Events: EFI Maps for November Floods

Extreme Events: Verification issues

The proposal is to extend the products from physical parameters (e.g. amounts of precipitation) to forecasts of climatological quantiles (e.g. today's forecast is for a precipitation event that did not occur more than once in 100 cases in our February climatology)

Need local climatologies to rescale the observed values

What to do with major model changes?

Summary

Data currently exchanged on the GTS (SYNOP) can only address very crude measures of precipitation forecast performance (biases), or performance on scales much broader than those resolved by the model (e.g. hydrological basins)

High-resolution networks are needed to upscale the data from local scales to model grid scales;

Ensemble forecasts have shown some skill in assessing the probabilities of occurrence in the medium range; an optimum combination of dynamical and statistical PoP remains to be achieved

Summary (2)

The value of probability forecasts, compared to purely deterministic precipitation forecasts, is easy to establish

Some indication of extreme events can be found in the model's direct output... provided it is interpreted from a model perspective

A framework for the verification of these extreme-event forecasts has been established, but requires gathering long climatological records from a range of stations