GIFS-TIGGE Working Group, Autumn 2005: IPO update
David Burridge, Manager, IPO
- Science Plan peer-reviewed
- Implementation Plan 2004-2015 approved
- Approved annual expenditure of $1.2M through a trust fund
- First Symposium (Montreal, 2004)
- Regional Committees: RA II, RA IV, RA VI
- First TIGGE workshop (report issued)
- Three TIGGE Data Centres: phase 1 developments at CMA, ECMWF & NCAR
THORPEX within the WMO structure

[Organisation chart: World Meteorological Congress and WMO Executive Council at the top; CAS, CBS, other Technical Commissions, JSC/WCRP and the Regional Associations; SSC/WWRP and WGNE; the THORPEX ICSC with its Science Advisory Board, Technical Advisory Board, Stakeholders' Advisory Panel, Regional Committees, Executive Board and International Programme Office; working groups for the core research sub-programmes: Predictability and Dynamical Processes, Data Assimilation and Observing Strategies, Observing Systems, Societal and Economic Applications, GIFS-TIGGE (THORPEX Interactive Grand Global Ensemble), Data Policy and Management, and Major Campaigns Coordination. The chart is arranged into oversight, management and delivery layers.]
Boards, working groups and key groups and meetings (continued)
- The EB met in Geneva in September 2005
- MEDEX (4 October 2005)
- GEOSS (links developed; support for THORPEX included in plans)
- The GIFS-TIGGE WG is meeting in Boulder on 15 & 16 November 2005
- A technical plan for the first phase of the TIGGE databases is under development and will be discussed at the Boulder meeting
- Southern Hemisphere Planning Meeting in Melbourne (late November 2005)
- ICSC 5 meeting in Melbourne (November/December 2005)
- SEA WG 1 (January 2006)
- A THORPEX/WCRP meeting on the MJO, organised by Julia Slingo, Mel Shapiro and the PDP WG, will be held at ICTP (Trieste, 13-17 March 2006)
- A "kick-off" workshop for all working groups will be held at the University of Reading (20-22 March 2006), followed by meetings of the SAB and the TAB (23-24 March 2006)
THORPEX Interactive Grand Global Ensemble (TIGGE)
- Framework for international collaboration in the development and testing of ensemble prediction systems
- Resource for THORPEX research projects
- Component of THORPEX Forecast Demonstration Projects (FDPs)
- A prototype of a future Global Interactive Forecast System
- Initially, develop a database of available ensembles, collected in near-real time
- Co-ordinate research using this multi-model ensemble data, including interactive aspects
THORPEX Regional Campaigns
- ATReC (2003): many groups are actively working with the data; a summary of current views will be available for the ICSC meeting in Melbourne
- European ETReC: D-PHASE (MAP) and COPS, supported by the European regional committee
- Stronger links are being developed with AMMA for observing-system experiments, modelling and predictability, and societal and economic applications
- Links with TWP-ICE are being investigated
- Pacific Asian Regional Campaign (PARC, 2008): tropical cyclone tracks, extra-tropical transitions, tropical warm-pool physics and downstream propagation (links to the Beijing Olympics and IPY)
- Winter Olympics (2010)
- Tropical convection (2012)
TIGGE Implementation Meeting - Report
ECMWF, 9-10 November 2005
List of products: single level

Key: "inst" = instantaneous; "6_h" = accumulated over the previous 6 hours; "acc_st" = accumulated from the start of the forecast.

Parameter | Level | Unit | Output frequency | Comment
Mean sea level pressure | MSL | Pa | 6h | inst
Surface pressure | surface | Pa | 6h | inst
10m U-velocity | 10m | m s**-1 | 6h | inst
10m V-velocity | 10m | m s**-1 | 6h | inst
2m temperature | 2m | K | 6h | inst
2m dew point temperature | 2m | K | 6h | inst
2m max temperature | 2m | K | 6h | 6_h
2m min temperature | 2m | K | 6h | 6_h
Total precipitation (liquid + frozen) | surface | m | 6h | acc_st
Snow fall | surface | m of water equivalent | 6h | acc_st
Snow depth | surface | m of water equivalent | 6h | inst
List of products: single level fields (continued)

Key: "inst" = instantaneous; "acc_st" = accumulated from the start of the forecast. Orography and land-sea mask are to be provided for the Control at each output step.

Parameter | Level | Unit | Output frequency | Comment
Total cloud cover | surface | 0-100% | 6h | inst
Total column water | surface | kg m**-2 | 6h | inst
Surface latent heat flux | surface | W m**-2 s | 6h | acc_st
Surface sensible heat flux | surface | W m**-2 s | 6h | acc_st
Surface solar radiation | surface | W m**-2 s | 6h | acc_st
Surface thermal radiation | surface | W m**-2 s | 6h | acc_st
Sunshine duration | surface | s | 6h | acc_st
Convective available potential energy | surface | J kg**-1 | 6h | inst
Orography (geopotential height at the surface) | surface | m | - | inst
Land-sea mask | surface | 0-1 | - | inst
List of products: upper air fields
Parameter | Unit | Output frequency | Comment
Temperature | K | 6h | inst
Geopotential height | m | 6h | inst
U-velocity | m s**-1 | 6h | inst
V-velocity | m s**-1 | 6h | inst
Specific humidity | kg kg**-1 | 6h | inst

5 parameters on 9 pressure levels, i.e. 45 fields. The 9 levels are 1000, 925, 850, 700, 600, 500, 300, 250 and 200 hPa.
TIGGE Phase 1 – A summary
- 3 Archive Centres: CMA, NCAR, ECMWF
- 8 Data Providers (?): NCEP, ECMWF, UKMO, JMA, BMRC, CPTEC, KMA, MSC
- Each Archive Centre will receive data from all the Data Providers, in near real time
- Users will be able to get the same data from any of the Archive Centres
- No extra resources! Use existing infrastructure
Definition of a core dataset

A field is uniquely identified within the TIGGE dataset by the following tuple: (base date, base time, time step, origin centre, ensemble number, level, parameter).
When using fields to create a “grand ensemble”, i.e. when considering all members from several origin centres as a super ensemble, we must make sure that they share the same values for the tuple:
(base date, base time, time step, level, parameter)
Therefore, a core dataset must be defined in terms of these attributes
All Data Providers must adhere to the core dataset definition.
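The distinction between the two tuples can be sketched in code: dropping the origin centre and ensemble number from the identifying key pools members from all centres into one super ensemble. All field values and centre abbreviations below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical field records, each tagged with the identifying tuple from the
# slide: (base date, base time, time step, origin centre, ensemble number,
# level, parameter).
fields = [
    ("20050901", "1200", 24, "ecmwf", 1, "850hPa", "t"),
    ("20050901", "1200", 24, "ncep",  1, "850hPa", "t"),
    ("20050901", "1200", 24, "ecmwf", 2, "850hPa", "t"),
    ("20050901", "1200", 48, "ecmwf", 1, "850hPa", "t"),
]

def grand_ensemble_groups(records):
    """Pool members from all origin centres that share the core key
    (base date, base time, time step, level, parameter)."""
    groups = defaultdict(list)
    for date, time, step, origin, member, level, param in records:
        groups[(date, time, step, level, param)].append((origin, member))
    return dict(groups)

groups = grand_ensemble_groups(fields)
# The 24h/850hPa/t key collects three members from two origin centres;
# the 48h field falls into a separate group.
```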
Network bandwidth

- Is a major risk
- Bilateral connections need testing: between Archive Centres, and Data Provider to Archive Centres
- Requires a lot of tuning/tweaking: get vs. put, number of parallel streams, buffer sizes, TCP window sizes
- Target aggregate bandwidth: 100 GB per 12 hours, which gives a chance to resend everything on the same day
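As a back-of-envelope check, the 100 GB per 12 hours target translates to a modest sustained rate (taking 1 GB = 10**9 bytes, an assumption about the units on the slide):

```python
# Sustained rate implied by the target: 100 GB every 12 hours.
target_bytes = 100 * 10**9          # assuming decimal gigabytes
window_seconds = 12 * 3600

bytes_per_second = target_bytes / window_seconds
megabits_per_second = bytes_per_second * 8 / 10**6

print(f"{bytes_per_second / 10**6:.2f} MB/s")  # ~2.31 MB/s
print(f"{megabits_per_second:.1f} Mbit/s")     # ~18.5 Mbit/s
```

So each bilateral link needs to sustain roughly 18-19 Mbit/s aggregate, which explains why the tuning of parallel streams and TCP window sizes matters on long-latency international paths.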
Aggregate rate using FTP get (client - server)

[Figure: aggregate transfer rate in KBytes/second against the number of parallel streams (5 to 100) for the links ncar-ecmwf, ncar-cma, cma-ncar, cma-ecmwf, ecmwf-ncar and ecmwf-cma; the "100 GB in 12 hours" target is marked.]
Aggregate rate using FTP put (client - server)

[Figure: aggregate transfer rate in KBytes/second against the number of parallel streams (5 to 100) for the links ncar-ecmwf, ncar-cma, cma-ncar, cma-ecmwf, ecmwf-ncar and ecmwf-cma; the "100 GB in 12 hours" target is marked.]
WMO file naming convention

Files to be exchanged must follow the WMO naming convention:

Z_TIGGE_C_CCCC_yyyyMMddhhmmss_SSSS_LL_VVVV.bin[.compression]

where:
CCCC: originating centre
yyyyMMdd: base date of the run
hhmmss: base time of the run (UTC)
SSSS: forecast time-step in hours (0006, …)
LL: type of level (sl: single level, …)
VVVV: version (0001 for operational)
compression: optional suffix (e.g. .gz for gzip)
WMO file naming convention: example

Time-step 24, from NCEP (CCCC = KWBC), for the run of 1 September 2005 at 12Z, operational version, single-level parameters, all ensemble members including the control, no compression:

Z_TIGGE_C_KWBC_20050901120000_0024_sl_0001.bin
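A small helper can compose and validate names of this shape. The regex and function below are illustrative, not part of the convention itself; only the literal pattern from the slides is taken as given.

```python
import re

# Pattern from the slides: Z_TIGGE_C_CCCC_yyyyMMddhhmmss_SSSS_LL_VVVV.bin[.compression]
NAME_RE = re.compile(
    r"^Z_TIGGE_C_(?P<centre>[A-Z0-9]{4})_(?P<base>\d{14})_"
    r"(?P<step>\d{4})_(?P<level_type>[a-z]{2})_(?P<version>\d{4})\.bin"
    r"(?P<compression>\.[a-z0-9]+)?$"
)

def tigge_name(centre, base, step_hours, level_type="sl", version=1, suffix=""):
    """Build a file name; base is yyyyMMddhhmmss, suffix e.g. '.gz' or ''."""
    return (f"Z_TIGGE_C_{centre}_{base}_{step_hours:04d}_"
            f"{level_type}_{version:04d}.bin{suffix}")

name = tigge_name("KWBC", "20050901120000", 24)
match = NAME_RE.match(name)
# Reproduces the example: Z_TIGGE_C_KWBC_20050901120000_0024_sl_0001.bin
```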
File structure
- Files will contain exclusively GRIB messages: no record padding, no supplementary headers or trailers
- Non-compliant files will not be accepted by the Archive Centres
- It is acknowledged that this will add development work at each centre, but it will make data management tractable at the Archive Centres and will be beneficial to the users
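One consequence of the "GRIB messages only, no padding" rule is that a compliance check reduces to walking the file message by message. The sketch below does this for GRIB2, whose standard indicator section starts with "GRIB", has the edition number in octet 8 and the total message length as an 8-byte big-endian integer in octets 9-16, with each message ending in "7777"; the function itself is an illustration of how an Archive Centre might validate incoming files, not their actual code.

```python
def scan_grib2(buf: bytes):
    """Walk a buffer of back-to-back GRIB2 messages; return (offset, length)
    for each message, raising on any non-GRIB bytes or missing terminator."""
    offsets = []
    pos = 0
    while pos < len(buf):
        if buf[pos:pos + 4] != b"GRIB":
            raise ValueError(f"non-GRIB bytes at offset {pos}")
        # Octets 9-16 of the indicator section: total message length.
        total = int.from_bytes(buf[pos + 8:pos + 16], "big")
        if buf[pos + total - 4:pos + total] != b"7777":
            raise ValueError(f"message at offset {pos} lacks 7777 terminator")
        offsets.append((pos, total))
        pos += total
    return offsets

# Three synthetic 20-byte messages: indicator section (16 bytes, edition 2)
# followed immediately by the "7777" end section. Real messages would carry
# the data sections in between.
msg = b"GRIB\x00\x00\x00\x02" + (20).to_bytes(8, "big") + b"7777"
spans = scan_grib2(msg * 3)
```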
Metadata and naming conventions
- Used to create a homogeneous catalogue for user data discovery
- Used to specify retrieval requests
- Most of the Data Providers have their own ways of naming variables, and use different abbreviations and GRIB code tables
- The TIGGE partners will have to agree on a common vocabulary of terms: use existing names defined by WMO, and see the work done by the group that created the Climate and Forecast (CF) metadata convention
User access: registration

- The TIGGE dataset is available to the research community
- A "registration authority" must be set up to verify each registration request and establish whether it is linked to a genuine research activity
- Access to valid data is not permitted, apart from specific field experiments. "Valid data" is defined as: base date + base time + time step + 24 hours >= now
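The validity rule is simple enough to state as a function: a field stays embargoed until 24 hours after its verifying time. The helper below is a sketch of that rule, not part of any TIGGE software; the dates are invented.

```python
from datetime import datetime, timedelta, timezone

def is_valid_data(base: datetime, step_hours: int, now: datetime) -> bool:
    """Slide rule: a field is 'valid data' (and hence restricted) while
    base date/time + time step + 24 hours >= now."""
    return base + timedelta(hours=step_hours + 24) >= now

base = datetime(2005, 9, 1, 12, tzinfo=timezone.utc)

# A 24h forecast from the 1 September 12Z run verifies at 2 September 12Z
# and stays restricted until 3 September 12Z.
ref = datetime(2005, 9, 3, 12, tzinfo=timezone.utc)
```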
Data retrieval
ECMWF will utilise the MARS system
NCAR will build upon its Research Data Archive and Community Data Portal.
CMA is still developing its data delivery system.
Over time and with additional project support, it is expected that there will be opportunities to further unify the user interface by leveraging developments from the WMO Information System (WIS) effort.
Data retrieval: field order

Fields will be returned to the users in the following order:

(outer loop) DateTime, Step, Origin, Ensemble member, Level (for multi-level variables), Parameter (inner loop)

This allows users to process fields from different origins as if they were members of the same ensemble.
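The ordering amounts to a fixed loop nesting, which `itertools.product` expresses directly since it varies the rightmost factor fastest. The request values below are invented for illustration.

```python
from itertools import product

# Hypothetical small request; product() mirrors the delivery order, with
# DateTime as the outermost loop and Parameter as the innermost.
datetimes = ["2005090112"]
steps = [24, 48]
origins = ["ecmwf", "ncep"]
members = [0, 1]
levels = ["500hPa"]
params = ["z", "t"]

order = list(product(datetimes, steps, origins, members, levels, params))
# First two fields: same datetime/step/origin/member/level, parameters z then t.
```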
Data retrieval
Because users' requests will generate high data volumes, the Archive Centres will prioritise and limit requests according to size. This ensures that no single user can monopolise the system by submitting an unreasonable request. Very large requests may require delivery on tape media.
Implementation plan: GRIB2
ECMWF will consult with NCEP and WMO in order to make sure we agree on the proper encoding of the fields in GRIB2.
ECMWF will provide a sample model output to the Data Providers.
ECMWF will provide a series of example programs to create these files.
These tools may have to be adapted by Data Providers in order to handle their own data and metadata mapping.