
Rutherford Appleton Laboratory

OEM retrievals with IASI, AMSU and MHS

PM1 Telecon

9 April 2014

R. Siddans, D. Gerber (RAL)

Agenda

• 15:00 Review KO minutes / actions

• 15:10 Task 1: summary of literature review

• 15:25 Task 1: Analysis of AMSU+MHS observation errors based on FM simulations

• 15:40 Tasks 2 & 3: summary of results, including:

• Comparison of RAL and Eum ODV results

• Comparison of IR and MWIR retrievals over land and sea

• 16:30 Plans for remaining tasks

• 16:45 Discussion

• date for next meeting

• 17:30 close

Actions from KO

Task 1: Literature Review

• Overview of literature presented on AMSU data processing

• Different methods used to analyse the measurements (and errors)

• Presentation of the most significant results

• Conclusions for our own study

Different Data Processing Methods

Linear Regression Algorithms: A “heuristic” relation between scene brightness temperature and humidity is exploited for selected channels. No error treatment, so less useful as a source of information.

Physical Methods (i.e. OEM): Finding the most likely state within the boundaries of measurement errors and climatological variability. Requires a solid assessment of all errors, hence a good source of information.

Neural Networks: “Black-box” handling of measurement/instrument errors in the training of the network, so no explicit error quantification.
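The OEM approach above amounts to minimising a cost function that balances fit to the measurements (weighted by the observation covariance Sy) against departure from the a priori (weighted by Sa). A minimal linear sketch follows; the matrices are illustrative toys, not the actual IASI/AMSU configuration:

```python
import numpy as np

def oem_retrieve(y, K, Sy, xa, Sa):
    """Maximum a posteriori state for a linear forward model y = K x + noise,
    given observation covariance Sy and a priori (xa, Sa)."""
    Sy_inv = np.linalg.inv(Sy)
    Sa_inv = np.linalg.inv(Sa)
    # Posterior (retrieval error) covariance and gain matrix
    S_hat = np.linalg.inv(K.T @ Sy_inv @ K + Sa_inv)
    G = S_hat @ K.T @ Sy_inv
    x_hat = xa + G @ (y - K @ xa)
    A = G @ K  # averaging kernel
    return x_hat, S_hat, A

# Toy example: 3-level state, 2 channels
K = np.array([[1.0, 0.5, 0.1],
              [0.2, 1.0, 0.5]])
Sy = 0.04 * np.eye(2)   # 0.2 K NEBT, uncorrelated
Sa = np.eye(3)          # unit a priori variance
xa = np.zeros(3)
x_true = np.array([1.0, -0.5, 0.2])
y = K @ x_true          # noise-free simulated measurement

x_hat, S_hat, A = oem_retrieve(y, K, Sy, xa, Sa)
```

For a linear forward model the minimum has this closed form; in the real scheme the forward model is non-linear and the solution is iterated.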

Summary of Literature and Relevant Findings

Reference          Method                      Findings relevant to our study
Mitra 2010         Neural network              Comparison with IAPP (OEM)
Olsen 2008         Regression                  Lv1 mods for degraded channel 4
John 2008          OEM                         Full error covariance matrix for T retrieval
Houshangpur 2005   Regression                  N/A
Jimenez 2005       Neural network              Review of past AMSU H2O measurements
Chou 2004          1D-Var                      NEBT separate for all beam angles
Susskind 2003      OEM                         Cloud clearing algorithm
McKague 2003       OEM of simulated meas.      NEBT (updated)
Atkinson 2001      Bi-monthly trending tests   NEBT, freq. stability, abs. accuracy
McKague 2001       OEM of simulated meas.      NEBT, a priori errors
Rosenkranz 2001    OEM with improved FG        Measurement errors
Wu 2001            OEM / RTTOV statistics      Total measurement errors
Li 2000            Regression                  NEBT, calibration accuracy, bias correction
Susskind 1998      OEM                         Cloud handling
Vangasse 1996      Instr. characterisation     Linearity, antenna patterns
Eyre 1989          OEM                         MSU NEBT and FM errors (historic)

Overview of AMSU Random Errors from Literature

Sources (all values in K): Chou 2004 (Rnd. & Obs. Err.), McKague 2003 (NEBT), McKague 2001 (NEBT), Atkinson 2001 (NEBT), Wu 2001 (RTTOV/Obs.), Rosenkranz 2001 ("Sensitivity"), Li 2000 (NEBT), Vangasse 1996 (NEBT, Eng. Md.). Values per channel are given in the source's column order; the final two columns are the Mean Value and Std. Dev.

Ch  1: 0.30, 0.30, 5.00, 0.20, 0.30        Mean 1.83  Std 2.74
Ch  2: 0.30, 0.30, 5.00, 0.27, 0.30        Mean 1.86  Std 2.72
Ch  3: 0.40, 0.40, 5.00, 0.22, 0.40        Mean 1.87  Std 2.71
Ch  4: 1.52, 0.25, 0.25, 4.00, 0.15, 0.25  Mean 1.47  Std 2.19
Ch  5: 0.82, 0.25, 0.25, 1.80, 0.15, 0.25  Mean 0.73  Std 0.93
Ch  6: 0.34, 0.25, 0.25, 1.20, 0.13, 0.25  Mean 0.53  Std 0.59
Ch  7: 0.32, 0.25, 0.25, 1.50, 0.14, 0.25  Mean 0.63  Std 0.76
Ch  8: 0.47, 0.25, 0.25, 1.20, 0.14, 0.25  Mean 0.53  Std 0.58
Ch  9: 0.61, 0.25, 0.25, 1.60, 0.20, 0.25  Mean 0.68  Std 0.79
Ch 10: 0.77, 0.40, 0.40, 1.70, 0.22, 0.40  Mean 0.77  Std 0.81
Ch 11: 0.40, 0.40, 1.80, 0.24, 0.40        Mean 0.81  Std 0.86
Ch 12: 2.58, 0.60, 0.60, 2.20, 0.35, 0.60  Mean 1.05  Std 1.00
Ch 13: 3.58, 0.80, 0.80, 3.50, 0.47, 0.80  Mean 1.59  Std 1.66
Ch 14: 1.20, 1.20, 3.90, 0.78, 1.20        Mean 1.96  Std 1.69
Ch 15: 0.50, 0.50, 5.00, 0.11, 0.50        Mean 1.87  Std 2.72
Ch 16: 2.00, 0.80, 1.00, 5.00, 0.37, 0.42  Mean 1.70  Std 2.22
Ch 17: 2.00, 0.80, 1.00, 5.00, 0.84, 0.88  Mean 1.93  Std 2.05
Ch 18: 2.00, 0.80, 1.10, 4.50, 1.06, 1.14  Mean 1.95  Std 1.70
Ch 19: 2.00, 0.80, 1.00, 3.70, 0.70, 0.77  Mean 1.54  Std 1.44
Ch 20: 2.00, 0.80, 1.20, 4.40, 0.60, 0.70  Mean 1.73  Std 1.80

Comparison of MetOffice Sy vs. Literature


Overview of AMSU Systematic Errors from Literature

Sources: Atkinson 2001 (Freq. Stab., MHz; Abs. Accur., K; Pre-Sept 1999 bias), Li 2000 (Freq. Stab., MHz; Cal. Acc., K), Chou 2004 (Random Err., System. Err., Est. Obs. Err., all K), Olsen 2008 (Post-Aug 2007), Mitra 2010 (Temp. anomaly). Values per channel are given in the source's column order.

Ch  1: 10, 2
Ch  2: 10, 2
Ch  3: 10, 1.5
Ch  4: 5, 1.5, 0.38, -0.38, 1.14 (NEDT)
Ch  5: 5, 1.5, 0.28, -0.44, 0.54
Ch  6: 5, 1.5, 0.21, -0.23, 0.13
Ch  7: 5, 1.5, 0.24, 0.01, 0.08 (Temp.)
Ch  8: 10, 1.5, 0.37, 0.35, 0.1
Ch  9: 0.5, 1.5, 0.51, 0.07, 0.1
Ch 10: 0.5, 1.5, 0.4, -2.25, 0.37
Ch 11: 1.2, 1.5
Ch 12: 1.2, 1.5, 1.42, -9.72, 1.16
Ch 13: 0.5, 1.5, 1.69, -11.3, 1.89
Ch 14: 0.5, 1.5
Ch 15: 50, 2
Ch 16: 100.0, 1 (Bias)
Ch 17: 100.0, 1
Ch 18: 50.0, 1
Ch 19: 70.0, 1 (Bias)
Ch 20: 70.0, 1 (Bias)

Some Specific Findings

• Atkinson 2001: There was a 40 K bias in some AMSU-B channels pre September 1999 (data transmitters).

• Wu 2001: RTTOV statistics compared to observations indicate random errors (and biases) far larger than pure NEBT.

• Chou 2004: Standard deviation of error is different for off-nadir views than for nadir view. The sign of the difference is channel dependent!

• Olsen 2008: Channel 4 (post Aug 2007 NEBT increase) no information on surface or atmosphere – use for cloud flagging only.

• Mitra 2010: Temperature anomaly in channel 7 (They exploit it to detect cyclones).

• Generally NEBT was higher at the start (Atkinson 2001) and higher towards the end (McKague 2001, 2003).

Some Specific Findings

• Atkinson 2001: Slight gain drop and NEBT increase in Chs.18 & 20. Thermal oscillation of Ch.16 in early 1999, also Temp anomaly of Ch.17.

• Li 2000, Rosenkranz 2001: Critical dependence on first-guess profile (iterative pre-selection). Geomagnetic field correction to Ch.14.

• Eyre 1989: Retrieval more affected by correlations in background error covariance matrix than observation error.

Conclusions

• NEBT values in literature roughly consistent. Increased numbers (in some channels) for later publications.

• Some channels require bias correction (corrected in latest version of Lv1b data).

• Some channels have intermittent problems (abnormal bias or NEBT), so select dates accordingly.

• Most recent data of NEBT consistent with Met Office “diagnosed error”.

• All records of total measurement error from NWP analysis consistent with Met Office “operational error”.

Testing RAL implementation: RAL vs Eumetsat RT simulations

RAL vs Eumetsat Initial Cost function

Estimation of AMSU+MHS errors: Simulations from PWLR

Observation – simulations (PWLR)

Observation - simulations

Observation - simulations

Observation – simulations (IASI)

Observation – simulations (after bias correction and retrieval)

Observation – simulations (x-track dependence, from PWLR)

Observation – simulations (x-track dependence, from IASI retrieval)

Observation – simulations after MW bias correction

Observation covariance derived from MW residuals from IASI retrieval
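The derivation of an observation covariance from MW residuals can be sketched as the sample covariance of bias-corrected obs-minus-sim departures. The departures below are synthetic stand-ins for the real obs − sim residuals from the IASI retrieval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative departures: n_scenes x n_channels obs-minus-sim residuals
# (synthetic here; in practice these come from the IASI retrieval sims).
n_scenes, n_chan = 500, 5
departures = rng.normal(0.0, 0.3, size=(n_scenes, n_chan))

# Bias correction: remove the per-channel mean departure
bias = departures.mean(axis=0)
corrected = departures - bias

# Empirical observation covariance (channels x channels)
Sy = corrected.T @ corrected / (n_scenes - 1)

# Uncorrelated variant keeps only the diagonal (same diagonals, no correlations)
Sy_diag = np.diag(np.diag(Sy))
```

The correlated and diagonal-only forms of Sy correspond to the two retrieval options compared in Tasks 2 & 3.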

Task 2 & 3

• Retrievals run over both sea (T2) and land (T3)

• All 3 days (17 April, 17 July, 17 October 2013)

• IR-only retrievals compared to Eumetsat ODV

• Differences small compared to noise and mainly related to the different convergence approach, which affects scenes where the final cost is high (deserts, sea ice)

• MWIR retrieval run with 2 options for Sy

• Correlated (as previous slide)

• Uncorrelated (same diagonals)

• Linear simulations also performed for 4 sample scenes to assess information content

• Additional case of 0.2K NEBT (uncorrelated)

• Approximate perfect knowledge of MWIR emissivity

Summary of DOFS

Summary from Linear Simulations

• Using the derived observation errors, AMSU+MHS add 2 degrees of freedom to temperature and about half a degree of freedom to water vapour.

• Effects on ozone are negligible.

• Neglecting off-diagonals reduces DOFS on temperature and water vapour by about 0.1 (a small effect).

• For temperature, the improvements are related mainly to the stratosphere, though some improvement is also noticeable in the troposphere, especially over the ocean (where the assumed measurement covariance is relatively low).

• For water vapour improvements are mainly related to the upper troposphere, and penetrate to relatively low altitudes in the mid-latitudes.

• Assuming 0.2 K NEBT errors to apply to all channels adds an additional degree of freedom to temperature and an additional half a degree of freedom to water vapour, in some cases considerably sharpening the near-surface averaging kernel.
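The DOFS figures quoted above come from linear analysis of the averaging kernel (DOFS = trace A). A sketch of the calculation for the correlated, diagonal-only and 0.2 K NEBT cases follows; the Jacobian and covariances are illustrative, not the real FM outputs:

```python
import numpy as np

def dofs(K, Sy, Sa):
    """Degrees of freedom for signal: trace of the averaging kernel."""
    Sy_inv = np.linalg.inv(Sy)
    A = np.linalg.inv(K.T @ Sy_inv @ K + np.linalg.inv(Sa)) @ K.T @ Sy_inv @ K
    return np.trace(A)

rng = np.random.default_rng(1)
n_chan, n_state = 6, 10
K = rng.normal(size=(n_chan, n_state))  # illustrative Jacobian
Sa = np.eye(n_state)                    # unit a priori covariance

# Correlated observation covariance (diagonal plus mild channel correlation)
C = 0.3 * np.ones((n_chan, n_chan)) + 0.7 * np.eye(n_chan)
Sy_corr = 0.5**2 * C
Sy_diag = np.diag(np.diag(Sy_corr))     # same diagonals, correlations dropped
Sy_02K = 0.2**2 * np.eye(n_chan)        # optimistic 0.2 K NEBT case

d_corr = dofs(K, Sy_corr, Sa)
d_diag = dofs(K, Sy_diag, Sa)
d_02 = dofs(K, Sy_02K, Sa)
```

With smaller assumed errors (the 0.2 K case) the kernel moves closer to the identity and the DOFS increase, matching the qualitative behaviour reported above.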

Assessment of full Retrieval

• Based on comparing retrieval to analysis (ANA), Eumetsat retrieval (ODV), PWLR and analysis smoothed by the averaging kernel (ANA_AK):

x’ = a + A ( t - a )

where a is the a priori profile from the PWLR, t is the assumed "true" profile, and A is the retrieval averaging kernel.

• Profiles smoothed/sampled to grid more closely matching expected vertical resolution (than 101 level RTTOV grid):

• Temperature: 0, 1, 2, 3, 4, 6, 8, 10, 12, 14, 17, 20, 24, 30, 35, 40, 50 km.

• Water vapour: 0, 1, 2, 3, 4, 6, 8, 10, 12, 14, 17, 20 km.

• Ozone: 0, 6, 12, 18, 24, 30, 40 km.

The grid is defined relative to the surface pressure / z*.
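The smoothing equation x' = a + A (t − a) can be applied directly; a minimal sketch with idealised profiles and an idealised diagonal kernel (the real A is a full matrix from the retrieval):

```python
import numpy as np

def smooth_by_ak(t, a, A):
    """Smooth a reference ("true") profile t by the retrieval averaging
    kernel A about the a priori a: x' = a + A (t - a)."""
    return a + A @ (t - a)

n = 4
a = np.array([280.0, 260.0, 240.0, 220.0])  # a priori profile (K)
t = np.array([282.0, 258.0, 241.0, 219.0])  # "true" analysis profile (K)
A = 0.8 * np.eye(n)                         # idealised averaging kernel

x_smoothed = smooth_by_ak(t, a, A)
```

With A = I the smoothed profile equals the analysis; with A = 0 it collapses to the a priori, which is why comparisons against ANA_AK isolate retrieval error from the limits of vertical sensitivity.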

Mid-lat land full retrieval: Measurements and residuals: IR only

Mid-lat land full retrieval: Measurements and residuals: MWIR

Mid-lat land full retrieval:

Profile comparisons:

IR only

Mid-lat land full retrieval:

Profile comparisons:

MWIR

Mid-lat ocean full retrieval: Measurements and residuals: IR only

Mid-lat ocean full retrieval: Measurements and residuals: MWIR

Mid-lat ocean full retrieval:

Profile comparisons:

IR only

Mid-lat ocean full retrieval:

Profile comparisons:

MWIR

Cost function + number of iterations: IR only

Cost function + number of iterations: MWIR

IR vs MWIR: Temperature

IR vs MWIR: Water vapour

Latitude dependence: MWIR

Latitude dependence: IR only

View dependence: MWIR only

View dependence: IR only


Summary from full retrievals

• Differences between (RAL) retrievals and (Eumetsat) ODV are generally very small, particularly compared to the estimated retrieval error

• Remaining differences probably due to convergence approach

• Ice surfaces remain problematic in MW due to the difficulty of defining the surface emissivity. For now we focus on results at tropical and mid-latitudes (60°S to 60°N).

• Desert surfaces problematic in IR – may be affecting derivation of MW bias correction + error covariance over land (?)

• Including AMSU+MHS generally reduces estimated errors (as in linear simulations), but slightly degrades comparison with analysis

• The apparent degradation in performance in terms of agreement with analysis, accounting for kernels, is largely independent of viewing angle, latitude, and whether observations are over land or sea.

• Including or not off-diagonals in the AMSU+MHS observation covariance has a minor effect

• Benefit of AMSU+MHS in OE will be more important in cloudy scenes

Task 2: OEM (MWIR/Metop-B) over ocean, clear-sky

• Implement the IASI product processing facility (PPF) settings (as provided by Eumetsat) in our IASI retrieval scheme and verify that the scheme produces consistent results.

• Apply this scheme to the Eumetsat selected days of IASI and AMSU/MHS cloud-free data over ocean, to generate results for IR only and MW+IR (MWIR).

• These will be evaluated using the diagnostics:

• PWLR, OEM(IR), OEM(MWIR) cf reference profiles (ECMWF analysis)

• vertical profiles of bias, standard deviation

• histograms and scatter plots for selected pressure levels

• maps of departures

• Water vapour shall be analysed in terms of mixing ratio and relative humidity.

• DOFS, AKs, fit residuals of OEM(IR) cf OEM(MWIR)

• Eumetsat masking to be used, though could refer to our own IASI cloud flagging if cloud-related issues suspected

• Compile and discuss results in DFR delivered prior to PM1

Task 2+3: Next steps

• Current MW obs covariance over land affected by degradation in window-channel obs-sim std. dev. from IASI retrieval compared to PWLR

• Repeat analysis using sims based on PWLR and/or analysis

• Check if current apparent degradation using MW in comparison to ANA_AK is due to improvement in sensitivity or “real” degradation in performance (compare IR-only also to ANA_AK for MWIR)

• Check if use of other cloud flags change stats (currently using cloud fraction < 0.01)

• Other suggestions ?

Task 4: OEM (MWIR/Metop-B) over land, clear-sky, with variable emissivity

• Extend state vector to include land emissivity

• State represented by principal components

• Already implemented in RAL IASI Ozone scheme based on UW/CIMS principal components

• Will consider if RAL channel selection has advantages for retrieving emissivity (was based on info content for emissivity)

• Potential for correlations between MW and IR emissivities to be investigated

• Repeat analysis of Task 2 to assess results with emissivity fitted

• Results included in DFR produced before PM2
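Representing emissivity by principal components keeps the state vector small. A sketch of the projection and reconstruction follows; the random orthonormal basis is a stand-in for the actual trained principal components, not the UW/CIMS set itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical emissivity PC basis: in the real scheme this comes from a
# principal-component decomposition of an emissivity database; here it is
# a random orthonormal stand-in (columns are orthonormal after QR).
n_wavenumbers, n_pc = 50, 3
basis, _ = np.linalg.qr(rng.normal(size=(n_wavenumbers, n_pc)))
mean_emis = np.full(n_wavenumbers, 0.95)

def emissivity_from_state(pc_coeffs):
    """Reconstruct an emissivity spectrum from the PC coefficients held
    in the retrieval state vector."""
    return mean_emis + basis @ pc_coeffs

def state_from_emissivity(emis):
    """Project an emissivity spectrum onto the PC coefficients."""
    return basis.T @ (emis - mean_emis)

# Round trip for a spectrum lying in the PC subspace
coeffs = np.array([0.01, -0.005, 0.002])
emis = emissivity_from_state(coeffs)
recovered = state_from_emissivity(emis)
```

Only the few PC coefficients enter the state vector, so the retrieval fits emissivity without adding one element per channel.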

Task 5:OEM (MWIR/Metop-B) in partially or fully cloudy IFOVs

• Two retrieval configurations will be assessed, building on the optimum retrieval configuration following Task 4:

1. IASI L2 cloud information (provided by Eumetsat) is used to identify cloud and thereby select a sub-set of IR channels assumed insensitive to a given cloud, using the approach of McNally and Watts.

• Cloud contaminated channels ignored by “inflating” Sy.

2. Cloud is retrieved, represented (in IASI) as a black body with given area fraction and pressure (using RTTOV's cloud modelling).

• Adapted retrieval applied over land and sea

• Analysis of task 2 repeated to assess scheme

• Land and sea separately

• As function of cloud fraction, pressure
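Configuration 1 above de-weights cloud-contaminated channels by inflating their entries in Sy, so they carry essentially no weight in the retrieval. A sketch, with an illustrative cloud mask and inflation factor (not the actual PPF settings):

```python
import numpy as np

def inflate_sy(Sy, cloudy_channels, factor=1e6):
    """Return a copy of Sy with the cloudy channels' variances inflated
    (and their correlations zeroed), effectively ignoring those channels."""
    Sy2 = Sy.copy()
    for c in cloudy_channels:
        Sy2[c, :] = 0.0
        Sy2[:, c] = 0.0
        Sy2[c, c] = Sy[c, c] * factor
    return Sy2

# 5-channel toy covariance: 0.2 K uncorrelated noise
Sy = 0.04 * np.eye(5)

# Channels 1 and 3 flagged as cloud-contaminated (illustrative mask)
Sy_infl = inflate_sy(Sy, cloudy_channels=[1, 3])
```

Because the OEM weights each channel by the inverse of its variance, inflating a diagonal entry by ~1e6 drives that channel's contribution to the cost function towards zero without changing the channel selection machinery.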

Task 6: Retrievals with one or more missing AMSU channels

• Results will be analysed with a view to drawing conclusions re performance of Metop-A vs Metop-B and for the combination with/without

• channel 7

• channels 3,7 and 8.

• DFR updated prior to PM3

Task 7: Reporting and consolidation of data sets

• Final report delivered (consolidated version of DFR produced at end of each task)

• Organise and document the processed output files (hdf5 or netcdf?)

• Organise and document the analysis outputs (hdf5?)

• Statistics, AKs, DOFS, etc.

CFIs

Schedule

Event               Location        Deliverables                              Planned Date
Kick-Off Meeting    Teleconference                                            9 December 2013
PM 1                Teleconference  Report on Tasks 1-2                       1 April 2014
PM 2                EUMETSAT        Report on Tasks 3-4                       24 June 2014
PM 3                Teleconference  Report on Tasks 5-6                       11 November 2014
Final Presentation  EUMETSAT        Final Report, Datasets and Presentation   9 December 2014

Meetings / deliverables
