
AOAC Official Methods of AnalysisSM

Darryl Sullivan, Covance Laboratories and Past President, AOAC INTERNATIONAL

On March 28, 2011, the AOAC INTERNATIONAL Board of Directors approved an alternative path to achieve an Official Method (Official First Action status) for methods selected and reviewed using the AOAC volunteer consensus standards development processes.

How this change came about…

Rationale for Change

• AOAC’s ability to validate fully collaboratively studied methods had been in steady decline; only three were approved in 2010.

• AOAC was repeatedly disappointing customers and communities who needed methods to solve problems.

• AOAC already had a reputation for being slow and old, with cumbersome processes, and was potentially facing a decline in confidence in its brand.

Rationale for Change

• AOAC has evolved and now acts as a problem solver through a broader process of consensus building and standards development.

• AOAC is trying to meet community/customer needs by gathering the world’s authorities to articulate and evaluate method needs through expertise and judgment.

• AOAC decided to find a way to give proper weight to the confidence we have in the judgment and collective knowledge of our experts.

• AOAC decided to align our brand and method output closer to our proven standards development processes.

The “new” or “alternate” path…

Rationale for Change

How it Works – At a Glance

Funded Stakeholder Panel

Working Groups to establish Standard Method Performance Requirements (SMPRs)

Expert Review Panels to adopt methods as Official First Action based upon performance against SMPRs

How it Works: The Details

Expert Review Panels

Must be supported by a relevant stakeholder body

Membership is carefully managed and properly vetted by the AOAC Official Methods Board

Holds transparent public meetings only

Remains in force to monitor methods as long as the method is in First Action status.

How it Works: The Details
Official First Action Status Decision

A method adopted by the ERP must perform adequately against the SMPR set forth by the stakeholders

The method becomes Official First Action on the date the ERP decision is made.

How it Works: The Details
Official First Action Status Decision

Methods to be drafted into AOAC format by a knowledgeable AOAC staff member or designee in collaboration with the ERP and the method author.

A report of the decision, complete with the ERP report regarding the decision including scientific background (references etc.), to be published concurrently with the method in traditional AOAC publication venues

How it Works: The Details

Transition to Final Action Status

The ERP will monitor performance and data submitted for two years

Further data indicative of adequate method reproducibility performance to be collected. Data may be collected via a collaborative study or by proficiency or other testing data of similar magnitude

How it Works: The Details

Transition to Final Action Status

Removed from Official First Action and the OMA if no evidence of method use or no data indicative of adequate method reproducibility is forthcoming

The ERP makes recommendations to the Official Methods Board (OMB)

OMB renders decision on transition to Final Action Status

Expected Benefits

More Official Methods of Analysis generated

We can provide solutions faster and take full advantage of the collective expertise of AOAC members

Methods can be put into regular use right away – generating more useable data to evaluate performance

Expected Benefits

The OMA can be more flexible – if a method is not performing up to ERP expectations, it is removed

The transition from First to Final Action becomes more meaningful and dynamic, and more credence is given to Final Action methods.

Time for Change

AOAC will convene its first Expert Review Panel charged with adopting Official First Action Methods of Analysis this afternoon.

Many more ERPs will follow, developing out of the many stakeholder activities going on at AOAC

Questions and Comments?

Thank you!

Standard Method Performance Requirements [SMPR] Guideline

Gaithersburg, Maryland, USA

Thursday June 30, 2011

Background

• First written in 2010.

• First used for the Endocrine Disruptor Compounds (EDC) project in 2010.

• Reviewed and revised by Official Methods Board in 2010.

• Still in review, but also in use.

Background

• Resulted when AOAC staff started a project to create a standard SMPR format.

• It was realized that:

• If the SMPR format required certain parameters (i.e. “recovery”) then a definition was needed.

• If we defined parameters then we needed to offer guidance on how to collect data.


• If we offered guidance on how to collect data then we needed to provide guidance on acceptance criteria.

• If we offer guidance on acceptance criteria then we needed to explain the concept.

• So 2 pages turned into 13 pages, with 14 pages of appendices.

But . . . it is a single, comprehensive document with good information all in one place for many different kinds of methods.

Philosophical Direction

• An attempt to bring together several different AOAC technical documents:

• OMB Guidelines

• Microbiology Methods Guideline (Appd. X)

• AOAC Single Laboratory Guideline

• BTAM Guideline

• Best Practices for Microbiological Method (BPMM) Validation

Philosophical Direction

• An attempt to bring together several different types of methods:

• Chemistry

• Microbiology

• Qualitative

• Quantitative

• Identity

Components of the Guideline

1. SMPR Format

2. Recommended Performance Requirement Parameters

3. Definitions

4. Recommendations for Evaluations

5. Explanations

6. Appendices

Architecture of Performance Requirements in SMPR Guideline

Classification of methods

Quantitative / Qualitative

Main component / trace (contaminant)

Identification method.

Type of data

single laboratory

independent

collaborative study

Big Notes!

• No distinction made between microbiology and chemistry!

• Not intended to require SLV → independent lab → collaborative

• Identification methods separated from qualitative methods.

Classifications of Methods

[Table: methods are cross-classified by column – Quantitative Method (main component(1)), Quantitative Method (trace or contaminant(2)), Qualitative Method (main component(1)), Qualitative Method (trace or contaminant(2)), Identification Method – and by the type of data in each row: single laboratory validation, independent, and collaborative study. The recoverable entries for the qualitative and identification columns are:]

Single laboratory validation:
– Qualitative Method (main component): Reference Method Comparison; Inclusivity/Selectivity; Exclusivity/Specificity; Environmental Interference; Laboratory Variance; Bias; Probability of Detection
– Qualitative Method (trace or contaminant): Reference Method Comparison; Inclusivity/Selectivity; Exclusivity/Specificity; Environmental Interference; Laboratory Variance; Bias; Probability of Detection (POD) at the AMDL
– Identification Method: Reference Method Comparison; Inclusivity/Selectivity; Exclusivity/Specificity; Precision; Environmental Interference; Bias

Independent: TBD(5); Probability of Detection (POD) at the AMDL; Bias

Collaborative Study: POD(0); POD(c); Laboratory Probability of Detection

Parameters

• Reference Method Comparison
• Inclusivity/Selectivity
• Exclusivity/Specificity
• Environmental Interference
• Laboratory Variance
• Bias
• Probability of Detection (POD)

Inclusivity/Selectivity

• Definition: Strains or isolates or variants of the target agent(s) that the method can detect.

• Recommendation: Analyze one test portion containing a specified concentration of one inclusivity panel member. More replicates can be used.

Exclusivity/Specificity

• Definition: Strains or isolates or variants of the target agent(s) that the method must not detect.

• Recommendation: Analyze one test portion containing a specified concentration of one exclusivity panel member. More replicates can be used.

Bias

• Definition: The difference between the expectation of the test results and an accepted reference value. Bias is the total systematic error as contrasted to random error. There may be one or more systematic error components contributing to the bias.

• No recommendations.

Probability of Detection (POD)

• Definition: The proportion of positive analytical outcomes for a qualitative method for a given matrix at a given analyte level or concentration. Already discussed.

Probability of Detection (POD)

• Recommendations: Determine the desired Probability of Detection at a critical concentration. Consult Table 7 to determine the number of test portions required to demonstrate the desired Probability of Detection.
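Table 7 is not reproduced in these slides. As an illustration only, the number of all-positive test portions needed to demonstrate a desired POD at a given confidence can be sketched with a simple binomial calculation (the function name and the 95% default are assumptions, not part of the guideline):

```python
import math

def portions_for_pod(target_pod, confidence=0.95):
    """Smallest n such that, if the true POD were below target_pod,
    observing a positive on every one of n portions would have
    probability below 1 - confidence.

    Illustrative only; the official numbers come from Table 7.
    """
    # P(all n portions positive | POD = p) = p**n; require p**n <= 1 - confidence
    return math.ceil(math.log(1 - confidence) / math.log(target_pod))

# e.g. demonstrating POD >= 0.95 with 95% confidence needs 59 portions
n = portions_for_pod(0.95)
```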

No definitions or recommendations

• Reference Method Comparison
• Environmental Interference
• Laboratory Variance

Summary

• SMPR summarized a variety of AOAC guidelines.

• SMPR is comprehensive, but not detailed.

• SMPR includes chemistry and microbiology; quantitative and qualitative.

• Work in progress.

Chi-Square Statistics in Method Validation

ISPAM Microbiology and Chemistry Working Groups 30 June, 2011

Dan Tholen, M.S.

Chi-Square and Related Issues

• Different designs
• Statistical estimators vs. statistical tests
• McNemar Chi-Square test in ISO 16140
• Related estimators and tests
• Relationship to POD and dPOD

Estimators vs. Hypothesis Tests

Estimators provide best estimates of parameters of interest, based on design
– Accuracy, Sensitivity, Specificity, POD, etc.

Hypothesis tests provide advice on whether differences in estimators could have occurred by chance
– Assumes a statistical distribution
– Requires consideration of Type 1 and 2 error
– Requires a decision level

Estimators vs. Hypothesis Tests

Estimators should be accompanied by confidence intervals that show a range of values that could be the ‘correct’ value
– Similar to measurement uncertainty

Hypothesis tests usually come with a ‘reject’ or ‘do not reject’ decision, and perhaps with a ‘p value’, which is the likelihood of the evidence if H0 is true
– Not as informative as a C.I. or MU

Chi-Square Analysis

Recommended in ISO 16140, and in proposed CD 16140:2011
– And in protocols influenced by ISO 16140

Used for testing equivalence of methods by looking at discordant results

A very powerful technique, often based on only a few discordances out of hundreds of agreements (that is, small differences between methods can be significant)

Comparative Accuracy - 16140

• Separate study for several categories of food (up to 5 categories)
• Select at least 3 types of food from each category
• Select at least 20 samples representative of each type
– Independent samples, not replicates
– Ideally 10 negative, 10 positive
• Test each sample with both methods

Two Way Designs - 16140

Unpaired: from the same sample, but separate test portions

Paired: from the same sample and a shared first step in the enrichment procedure
– From the same enrichment medium (microbiology)
– From the same extraction (chemistry)

2x2 Layout – Paired (and ISO 16140 unpaired)

                          Method B (Reference)
                          Present    Absent    Total
Method A      Present        a          b       NA+
(Alternative) Absent         c          d       NA-
              Total         NB+        NB-       N

Estimators - ISO 16140 Paired

• Relative accuracy: AC = (a+d)/N
• Relative specificity: SP = d/NB-
• Relative sensitivity: SE = a/NB+
• Sensitivity (alternative): SEalt = (a+b)/(a+b+c)
• Sensitivity (reference): SEref = (a+c)/(a+b+c)
• Alternative method results confirmed for reference method negatives (cells b & d)
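A minimal sketch of these paired estimators from the 2x2 cell counts a, b, c, d in the layout above (the function and key names are illustrative, not ISO 16140 nomenclature):

```python
def paired_estimators(a, b, c, d):
    """ISO 16140 paired-design estimators from the 2x2 counts:
    a = both positive, b = alt+/ref-, c = alt-/ref+, d = both negative."""
    n = a + b + c + d
    return {
        "relative_accuracy":    (a + d) / n,            # AC = (a+d)/N
        "relative_specificity": d / (b + d),            # SP = d/NB-
        "relative_sensitivity": a / (a + c),            # SE = a/NB+
        "sensitivity_alt":      (a + b) / (a + b + c),  # SEalt
        "sensitivity_ref":      (a + c) / (a + b + c),  # SEref
    }
```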

Estimators – 16140 Unpaired

• Relative accuracy: AC = (a+d)/N
• Relative specificity: SP = d/NB-
• Relative sensitivity: SE = a/NB+
– These estimators are the same as for paired

All alternative method results are confirmed; estimators are listed separately for confirmed and unconfirmed results

Chi-Square Test – ISO 16140

• Only the McNemar test is discussed: χ² = (b − c)² / (b + c), with 1 degree of freedom

• Considers only discordant results

• Other estimators are not tested; McNemar is considered the most sensitive test to rule out differences due to random error

• Requires a minimum size of b and c (b + c > 22)

• Often the exact (Binomial) test is used for small samples, or in all cases

Chi-Square Test – ISO 16140

Test for a significant difference in proportion of positives
– P+A = (a+b)/N
– P+B = (a+c)/N

Since P+A and P+B both use a, the proportions are correlated

The most sensitive test is for whether b and c are statistically different
– Binomial, p = 0.5, n = b + c
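A small sketch of the McNemar statistic and its exact binomial counterpart, using only the discordant counts b and c (function names are illustrative; the exact test here doubles the lower tail, one of several common conventions):

```python
from math import comb

def mcnemar_chi_square(b, c):
    """McNemar statistic on the discordant cells of a paired 2x2 table:
    chi2 = (b - c)**2 / (b + c), 1 degree of freedom."""
    return (b - c) ** 2 / (b + c)

def exact_binomial_p(b, c):
    """Two-sided exact (binomial) McNemar test: under H0 the b
    discordants among n = b + c are Binomial(n, 0.5)."""
    n, k = b + c, min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```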

Equivalence with POD Concept

• P+A: PODA = (a+b)/N
• P+B: PODB = (a+c)/N
• P+A − P+B = dPOD
• A test of dPOD using the Binomial (for H0: dPOD = 0) is the same as the McNemar test
– For large and small numbers of tests
– Both paired and unpaired
– For single-lab or multi-laboratory studies

Note on Nordval

• Nordval (May 2010), for qualitative chemistry
• Uses the same 2x2 layout, for paired data
• Does not use Chi-Square; uses Kappa, a measure of agreement that “corrects” for random agreement
• The extent to which agreement exceeds chance agreement is a measure of concordance
• Nordval recommends agreement > 80%

Note on Nordval

Uses Kappa, a measure of agreement that “corrects” for random agreement
– That is, if methods A and B are totally unrelated, there is a likelihood that they will agree on a lot of results, just by chance
– e.g., if A reports 80% positive and B reports 70% positive, then we expect them to agree on positive results 56% of the time, just by chance (0.8 × 0.7 = 0.56)
– So the extent to which the methods agree in excess of that chance level is a measure of concordance
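A sketch of the Kappa idea from paired 2x2 counts. Note a nuance: Cohen's kappa conventionally counts chance agreement on both joint positives and joint negatives, whereas the slide's 0.8 × 0.7 = 0.56 arithmetic covers joint positives only; this sketch uses the conventional form:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from paired 2x2 counts (a=++, b=+-, c=-+, d=--):
    observed agreement in excess of chance, rescaled so 1 = perfect."""
    n = a + b + c + d
    observed = (a + d) / n
    # chance agreement from each method's marginal positive rate,
    # counting both joint-positive and joint-negative agreement
    p_a, p_b = (a + b) / n, (a + c) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)
```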

Classic Unpaired Design

• When two methods are used on unpaired samples
• For example, drug studies on “treatment” and “placebo” groups
• This is a classic design, not done in method validation studies
• Mentioned in ISO 16140, but not used

2x2 Layout – Unpaired (not used in ISO 16140)

            Result
Method      Present    Absent    Total
Method A       e          f        NA
Method B       g          h        NB
Total         N+         N-        N

Estimators

• Assume random samples from the same population, randomly assigned to A or B
• Proportion Positive A = e/NA (= PODA)
• Proportion Negative A = f/NA
• Proportion Positive B = g/NB (= PODB)
• Proportion Negative B = h/NB
• Accuracy, sensitivity, and specificity are not defined unless all results are confirmed

Chi-Square and POD

• P+A − P+B = dPOD
• The Chi-Square test is the same as a Binomial test of the null hypothesis H0: dPOD = 0

Chi-Square Test

• Checks only for differences between observed and expected numbers of results in each cell

• “Expected” is based on random assignment of subjects to A or B, so we expect the same proportion of positives in A and B (and the same proportion of negatives)

• “Expected” is calculated from the marginal frequencies
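The expected-from-marginals calculation can be sketched as follows, with cells e, f, g, h named as in the unpaired 2x2 layout above (function name is illustrative):

```python
def chi_square_2x2(e, f, g, h):
    """Pearson chi-square for the unpaired 2x2 layout:
    rows = methods A (e, f) and B (g, h); columns = present/absent.
    Expected counts come from the marginal frequencies."""
    n = e + f + g + h
    observed = [e, f, g, h]
    expected = [
        (e + f) * (e + g) / n,  # A, present
        (e + f) * (f + h) / n,  # A, absent
        (g + h) * (e + g) / n,  # B, present
        (g + h) * (f + h) / n,  # B, absent
    ]
    return sum((o - x) ** 2 / x for o, x in zip(observed, expected))
```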

Estimators – POD Concept

• P+A: PODA = e/NA
• P+B: PODB = g/NB
• P+A − P+B = dPOD
• The Chi-Square test is the same as a Binomial test of the null hypothesis H0: dPOD = 0

Thank you

For the Validation of Qualitative Methods

Paul Wehling June 30, 2011


Qualitative (Binary) Methods

Methods that are restricted to 2 possible outcomes:
• Positive or Negative
• Pass or Fail
• Heads or Tails
• 1 or 0
• Yes or No
• Presence or Absence
• Identified or Not Identified

POD Parameter – Probability of Detection

General – designed to be used by any qualitative (binary) method:
• Microbiological
• Chemical
• Bio Threat Agent Methods
• Botanical Identification
• Allergens

POD Parameter

• A method parameter that describes and predicts method behavior
• Probability of Detection, or POD: the probability of getting a positive result at a given concentration of analyte
• POD is a function of concentration

POD Curve [figure]

POD

• A simple descriptive statistic that describes the method performance at a given concentration.

• It is a calculation of the proportion of observed positive outcomes per total trials.

• This simple statistic is inherent in all other systems, such as Chi-Square, LOD, RLOD.

• The “POD Concept” is only new in that it recognizes POD as a key parameter and plots a graph of POD vs. concentration.

WHY PLOT POD?

• Plots of POD curves are intended to assist method users in selecting the best method for an intended use.

• Understanding the POD curve is crucial for interpretation of results.

• The POD curve can be an indicator of the “usefulness” of the method.

• If POD were constant across all concentrations, the method would not be useful.

VALIDATION

The task of validating a qualitative (binary) method is characterizing the POD curve at critical concentration points:

1. Make up a series of test materials at concentrations of interest.

2. Analyze with replication.

3. Calculate the proportion of positive responses at the concentrations.

4. Plot observed proportions as a POD curve by concentration.
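The calculation in step 3 can be sketched as follows, with hypothetical 0/1 outcomes at three concentrations (data are invented for illustration):

```python
def pod_by_concentration(results):
    """Proportion of positive outcomes (coded 1) per concentration,
    from replicated 0/1 results."""
    return {conc: sum(outcomes) / len(outcomes)
            for conc, outcomes in results.items()}

# hypothetical replicated results at three concentrations of interest
curve = pod_by_concentration({
    0.0: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
    0.5: [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    2.0: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
})
```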

Example POD Response Curve [figure]

A BIT ABOUT POD

• POD is a combination of sensitivity, specificity, false positives, false negatives.

• Where did they all go?

“Where’s my False Negative?”

[Figure: POD response vs. concentration (ppm). At 1 ppm, POD(1 ppm) is the “sensitivity at 1 ppm” and 1 − POD(1 ppm) is the “false negative at 1 ppm”; at zero concentration, 1 − POD(0) is the “specificity” and POD(0) is the “false positive”.]

Some Statistics

To do classical collaborative statistics, code the results:
• “Positive” = 1
• “Negative” = 0

Use the AOAC calculations from quantitative statistics to estimate:
• Mean = POD
• Reproducibility Standard Deviation
• Repeatability Standard Deviation
• Laboratory Standard Deviation

POD = Mean

          LAB1   LAB2   LAB3
Trial1     1      1      0
Trial2     0      1      1
Trial3     1      1      1
Trial4     0      0      0
Trial5     1      0      0
Trial6     1      0      1
Trial7     0      0      1
Trial8     0      1      0
Trial9     1      1      0
Trial10    0      1      0
Mean      0.5    0.6    0.4     LPOD = 0.50
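The table's calculation, reproduced as a sketch (the 0/1 outcomes are copied from the table above; variable names are illustrative):

```python
# 0/1 outcomes from the table: 10 trials in each of 3 labs
labs = {
    "LAB1": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "LAB2": [1, 1, 1, 0, 0, 0, 0, 1, 1, 1],
    "LAB3": [0, 1, 1, 0, 0, 1, 1, 0, 0, 0],
}

# per-lab POD = mean of the coded 0/1 results
pod = {lab: sum(r) / len(r) for lab, r in labs.items()}

# LPOD = mean POD across the laboratories
lpod = sum(pod.values()) / len(pod)
```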


Analogous Parameters

Method Attribute                           Quantitative Parameter   Quantitative Estimate   Qualitative Parameter   Qualitative Estimate
General Mean or Expectation                Mean, μ                  x̄                      Mean, POD               POD (or LPOD)
Repeatability Variance                     σr²                      sr²                     σr²                     sr²
Reproducibility Variance                   σR²                      sR²                     σR²                     sR²
Laboratory Variance                        σL²                      sL²                     σL²                     sL²
Expected difference between Two Methods*   Bias, B = μ1 − μ2        x̄1 − x̄2               dPOD                    POD1 − POD2

Difference Between Methods – dPOD

• Compare any two methods by comparing POD values at a given concentration.

• Difference by subtraction: dPOD = PODc − PODr

• dPOD is always dependent on concentration
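A sketch of dPOD at one concentration from two series of trials. The Wald 95% interval shown treats the two series as independent and is illustrative only, not the interval method prescribed by AOAC:

```python
import math

def dpod(pos_c, n_c, pos_r, n_r):
    """dPOD = POD(candidate) - POD(reference) at one concentration.

    Returns the point estimate and an illustrative Wald 95% interval
    (assumes the two series of trials are independent)."""
    p_c, p_r = pos_c / n_c, pos_r / n_r
    d = p_c - p_r
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_r * (1 - p_r) / n_r)
    return d, (d - 1.96 * se, d + 1.96 * se)
```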


dPOD Curve vs Concentration

[Figure: dPOD(c = 0.5) = −0.30; dPOD(2) = −0.27; dPOD(3.5) = −0.10]

Big Ideas

• Combine sensitivity, specificity, false positive, false negative into one parameter – Probability of Detection, or ‘POD’

• Graph POD vs. concentration with confidence intervals

• Compare methods by difference of POD at the same concentration

• Use the classic statistical model and descriptive stats for quantitative methods as the tool for calculating qualitative stats

POD Concept

• Works for single-lab and multi-lab experiments.

• Works for paired and unpaired designs.

• Provides harmonization across qualitative/quantitative methodologies.

• Does comparisons and hypothesis tests via confidence interval analysis – equivalent to chi-squared tests.

• The POD curve plots mean response and uncertainty on the same graph.

Copyright 2011 by Robert A LaBudde

Qualitative Method Validation Studies for Quantal Data: LOD, dPOD, PRE = RLOD and ω

Robert A LaBudde, BS, MS, PhD, ChDipl ACAFS, PAS

AOAC Statistical Advisor

Least Cost Formulations, Ltd.

Old Dominion University


Summary

• Example POD vs. Concentration curves

• Ideal POD vs. Concentration curve

• Transition range models

• Method performance requirements I

• Method performance requirements II

• Limit of Detection (‘LOD’)

• The ‘Concentration Fallacy’ for micro


Summary (cont’d)

• Method Performance Parameters III

• Difference in POD: dPOD(C,R)

• Odds ratio ω

• Poisson Efficiency Ratio (‘PRE’)

• Relative LOD (‘RLOD’)

• Examples: Micro


Summary (cont’d.)

• Non-micro methods

• Choice of metamer

• Warning for micro studies

• Conclusions & Recommendations

Example POD vs. Concentration curves [figure]

Ideal Response vs. Concentration curve

• POD = ‘Probability of Detection’ = # Positive / # Trials = mean of 0-or-1 data

• The ideal test method gives POD = 0 at Concentration = 0, and POD = 1 for all concentrations > 0.

• For real methods, there is a transition from POD = 0 to POD = 1 over a range of Concentration.

Transition range models

• The true shape of the transition curve depends upon the underlying model of what happens in the test method.

• Symmetric distribution threshold crossing: Probit and Logit (historically these have been most commonly used).

• Asymmetric distribution threshold crossing: can be concave or convex in shape.

• ‘Hormesis’: drop-off at high concentrations.

Transition range models (cont’d)

• There are a dozen or more possible model forms in common use.

• Choice of a model form is subjective and subject to controversy.

• Some curves are convex, some concave, some symmetric.

• Logit and probit are traditionally used as middle ground when the true shape is unknown.

Transition range models (cont’d)

• The advantage over individual POD values may be improved precision by pooling across concentrations.

• If the model form is incorrect, precision may be worse than with individual POD values.

• Generally requires that Concentration be known accurately.

Method Performance Requirements I: Confirmation

• At the most basic level, a qualitative method is meant to discriminate between the presence and absence of an analyte.

• At zero concentration, POD < PODmax with 95% confidence. (Control the false positive fraction.)

• At moderate concentration, POD > PODmin with 95% confidence. (Control the false negative fraction.)

• Attainment of these two requirements validates the method as a ‘confirmation’ or ‘identification’ method in testing.

Method Performance Requirements I (cont’d)

• No real method, despite claims, has POD = 0 at zero concentration or POD = 1.0 even at high concentration, due to various error sources, including the human in the loop.

• One method is better than another if it has lower POD (FPF) at zero concentration and higher POD (lower FNF) at moderate concentration.

Method Performance Requirements II: Transition region

• The ‘I’ set of requirements does not speak to the transition range of the POD vs. Concentration curve.

• A method which satisfies the PODmin performance requirement at a lower Concentration is ‘better’ than another method that does so at a higher Concentration.

• A method which has POD < 1 may still be useful in repeated testing if no better method is available (e.g., outbreak investigations for micro).

Limit of Detection ‘LOD’

• One way commonly used to characterize a method in the transitional range is to estimate the concentration at which a particular POD is attained.

• ‘LOD50’: Concentration for which POD = 0.50

• ‘LOD90’: Concentration for which POD = 0.90

• Various techniques for estimation, including non-parametric ones such as linear interpolation and Spearman-Kärber (POD-based), or assumed models.

• Requires several points (at least 2, preferably more) in the transitional or ‘fractional’ range.

• Requires accurately known concentrations!
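The linear-interpolation route mentioned above can be sketched as follows (illustrative function; assumes observed (concentration, POD) points sorted by concentration, and a monotone stretch that brackets the target):

```python
def lod_by_interpolation(points, target_pod=0.5):
    """Nonparametric LODx: linearly interpolate between the two observed
    (concentration, POD) points that bracket the target POD.

    `points` is a list of (concentration, pod) pairs sorted by
    concentration; returns the interpolated concentration."""
    for (c0, p0), (c1, p1) in zip(points, points[1:]):
        if p0 <= target_pod <= p1:
            return c0 + (target_pod - p0) * (c1 - c0) / (p1 - p0)
    raise ValueError("target POD not bracketed by the data")
```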

LOD50 [figure]

The ‘Concentration Fallacy’ for Micro Methods

• The transition region for qualitative methods for micro testing typically occurs below 10 CFU/test portion and so cannot be quantified by plate count methods, particularly with other flora present. Instead, an ‘MPN’ method is used, based on a reference qualitative method.

• So Concentration is determined from POD (not vice versa), and typically has large error limits (e.g., ±60% or much worse). POD is known more accurately than Concentration.

• Models fitting POD using Concentration as a predictor are invalid.

• LOD50 will be imprecise and unknown to a multiplicative factor (bias) due to clumping of cells.

Micro: 1-Hit-Poisson Model [figure]

Method Performance Requirements III: Comparison of Methods

• Two methods which both satisfy the ‘I’ requirements equally can only be discriminated if one or the other has data in its transition range.

• There are a number of measures of effect in common use to compare a candidate method ‘C’ to a reference method ‘R’ (or any two methods), based on measured POD values at different concentrations in the transition range (i.e., the fractional POD range).

Difference in POD: dPOD(C,R)

• The most basic comparison between a reference method ‘R’ and a candidate method ‘C’ is the difference in their POD values at a fixed concentration: dPOD(C,R) = POD(C) − POD(R)

• Non-constant for different concentrations.

• The expected difference in # positives is easily estimated as n × dPOD(C,R).

• Requires no assumptions; applicable in all cases.

Odds ratio ω

• The most common measure of effect used to compare two binary methods in scientific research is the ‘odds ratio’, ω:

ω = {POD(R) / [1 − POD(R)]} / {POD(C) / [1 − POD(C)]}

• If a Logit model is appropriate, the odds ratio is constant across concentrations.
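The slide's odds ratio as a one-line sketch (function name is illustrative; the reference method's odds are in the numerator, as written above):

```python
def odds_ratio(pod_r, pod_c):
    """Odds ratio omega as defined on the slide: odds of the reference
    method over odds of the candidate method at one concentration."""
    return (pod_r / (1 - pod_r)) / (pod_c / (1 - pod_c))
```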


Poisson Relative Efficiency ‘PRE’ or ‘R’

• LaBudde, R.A. (2006). Statistical analysis of interlaboratory validation studies. X. Poisson-plot and Poisson relative efficiency to compare the dose-response curves of two presence-absence methods. TR239. Least Cost Formulations, Ltd., Virginia Beach, VA.

R = ln[1 − POD(C)] / ln[1 − POD(R)] = γR / γC

where the one-hit Poisson model (‘1HPM’) is assumed to hold.

PRE (cont’d)

• If both the reference and candidate methods obey the 1HPM with cluster sizes γR and γC, respectively, then R = γR / γC is the ratio of the two cluster sizes. If the reference method is ‘better’ (has a lower γ or LOD50), then R < 1.0.

• If the 1HPM is valid, R will be constant across different concentrations.

• Generally applicable to micro studies only.
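A sketch of PRE (= R) from two POD values at the same concentration, under the 1HPM assumption stated above (function name is illustrative; both POD values must lie strictly between 0 and 1):

```python
import math

def poisson_relative_efficiency(pod_c, pod_r):
    """PRE = R = ln(1 - POD(C)) / ln(1 - POD(R)), assuming the one-hit
    Poisson model holds for both methods; R < 1 when the reference
    method is the better one (lower LOD50)."""
    return math.log(1 - pod_c) / math.log(1 - pod_r)
```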


Relative LOD ‘RLOD’

• Anon. (2008). ISO 16140.

• If a 1HPM assumption is made for the mathematical form of POD, and log(Concentration) is used as the metamer in the model, a ‘complementary log-log’ model results.

• For the complementary log-log model, RLOD = R, and γR and γC are the factor coefficients for the ‘Method’ term in the regression model.

• ‘RLOD’ is the same value as ‘PRE’ = ‘R’.

R vs. ω vs. POD

• If POD(C) and POD(R) are both small: R ≈ ω ≈ POD(C) / POD(R)

• If POD(C) and POD(R) are both large: R ≈ ω ≈ POD(C) / POD(R)

• If POD(C) ≈ POD(R): R ≈ ω ≈ POD(C) / POD(R) ≈ 1

• They differ more otherwise.

Examples: Micro

Analyte          Matrix               Study     Level  MPN(R)  dPOD(C,R)  ω(C,R)  R(C,R)
Salmonella       Raw ground turkey    Unpaired  H      3.66    -0.04       0.38    0.75
Salmonella       Raw ground beef #1   Unpaired  H      2.18    -0.13       0.40    0.65
                                                H      2.33    -0.10       0.45    0.70
Salmonella       Raw ground beef #2   Unpaired  H      2.11    -0.11       0.47    0.70
                                                L      0.58    -0.23       0.34    0.41
Salmonella       Dried whole egg      Paired    H      2.58    -0.05       0.59    0.82
                                                L      1.05    -0.03       0.88    0.92
Salmonella       Milk chocolate #2    Paired    H      1.55    -0.03       0.84    0.91
                                                L      0.48    -0.03       0.88    0.90
Salmonella       Dry dog food         Paired    H      1.10    -0.03       0.86    0.91
                                                L      0.27    -0.02       0.91    0.92
E. coli O157:H7  Raw ground beef      Unpaired  H      3.18     0.72      74.41   11.80
                                                L      0.84     0.46      11.61    7.93

Method Performance Requirements III

Possible performance requirements:

• |dPOD(C,R)| < dPODmax with 95% confidence

• ω(R,C) > ω0 with 95% confidence

• R(C,R) > Rmin with 95% confidence

Non-Micro Methods

• Toxins, residues, chemicals, allergens, botanicals.

• There is a large literature associated with POD vs. Concentration modeling and fitting for toxicology.

• Complementary log-log is not typically a good match for non-micro methods; logit and probit have typically been used successfully.

• None of the standard regression models work for botanical identification methods, where complex thresholding occurs.

Choice of metamer

• Most models use either Concentration directly or log(Concentration) as a predictor.

• The transform of Concentration to a new independent variable is called the ‘metamer’ of Concentration.

• It should be noted that linear models using Concentration as the metamer and linear models using log(Concentration) as the metamer cannot both be correct.

Warning for micro studies

• In the case of micro methods, cells are indivisible and CFUs finitely divisible, so sampling error dominates at low concentrations. Method differences are obscured if the methods have low LOD50’s.

• The 1HPM or complementary log-log model will appear to fit the data very well, and this is fallacious because Concentration is determined assuming the 1HPM in the MPN method. Micro study modeling should not use numerical values of Concentration!

Conclusions

• Models involving Concentration are flawed at inception for micro methods. (This applies to Method Performance Requirements III in the transition region and to LOD50.)

• Serial dilution should be considered as a possible way to achieve a POD-independent Concentration estimate.

• Many alternative models exist, each with its own history and literature.

• PRE (aka RLOD or R) depends on the assumption that the 1HPM is correct, which it often is not.

Conclusions (cont’d)

• LOD50 requires Concentration be known reasonably accurately, which is problematic for micro.

• LOD50 can be nonparametrically estimated from POD, with no model assumptions.

• The choice of statistic used to characterize the transition region of POD vs. Concentration should be made based on the scientific validity of the model assumptions and the ease and usefulness of interpretation of the result.

• Chemical-based methods have accurate Concentrations; micro studies do not.

Conclusions (cont’d)

• Use of the wrong model form may give poorer results than using the POD vs. Concentration curve directly and comparing methods by POD difference (‘dPOD’).

• Model forms that work for one analyte or matrix may not be appropriate for another, even in the same scientific method area.

• Nonparametric methods (POD included) that are free of distribution and model assumptions are preferred to unvalidated model assumptions.

Working Group on Statistics

• Provide advisory guidance to Micro and Chem Working groups on aspects of statistical methodologies.

• Advise on strengths, weaknesses and applicability of various models.

• Advise on power of various validation experiment designs.

• Look for potential areas of agreement and encourage flow of ideas across Chem/Micro working groups.

Working Group on Statistics

• Develop scientific consensus on the best statistical techniques to use for validating qualitative methods.

Microbiological Harmonization June 29/30, 2011

Comparison of Method Validation Schemes

Worldwide Validation Schemes

• ISO 16140: internationally accepted standard for microbiological method validation
• AOAC Microbiology Guidelines
• Health Canada Part 4
• NordVal (essentially ISO 16140 without the collaborative study)
• FDA Draft Guidelines
• USDA/FSIS Draft Guidelines

Comparison of Elements

• Comprehensive table constructed
• Six schemes compared
• Qualitative: 30 topics
• Quantitative: 19 topics
• Initial effort on Qualitative
• 5 of 6 schemes are either new or under revision

Areas of Divergence

• Microbiology Working Group (WG) had 2 teleconferences

• Identified the 5 most significant areas of divergence among the 6 schemes.

• Nominated a Project Group (PG) with a representative from each organization

• ISO/NordVal both use 16140 so only 1 representative

Significant Topics

• From the 30 topics, 5 were chosen as most critical:
• Reference method choice
• Food/Sample Matrix Applicability Table: selection of food/category
• # of levels / # of samples / sample size / # of laboratories: method comparison and collaborative
• Definition of fractional positive recovery
• Data analysis (chi square, RLOD, LOD, POD) and performance parameters reported

Today

• 8-page comparative summary table prepared for the 5 significant topics

• PG will hold inaugural meeting to share ideas on harmonization

Comparison Table, Page 1 of 8
Schemes compared: ISO 16140 | AOAC OMA | Health Canada | NordVal | FDA Draft | USDA/FSIS Draft

QUALITATIVE methods: source documents

- ISO 16140: Doc N 1199 (ISO CD 16140-2) PIV C2011-04-06; pending revision of Part 2.
- AOAC OMA: Draft revision document dated 3/24/11.
- Health Canada: Draft Part 4, dated March 2011.
- NordVal: Protocol for the validation of alternative microbiological methods, March 2009.
- FDA Draft: FDA’s Qualitative Microbiology Methods Validation (ORA-LAB.7 version 1.2), pending revision (proposed revision marked in red).
- USDA/FSIS Draft: Disclaimer: the use of the term “validation” is not intended to have any application to the implementation of 9 CFR 417.4(a)(1) on initial validation of HACCP plans. The Draft FSIS Guidelines deal exclusively with the evaluation of pathogen test kit methods.

Pre-Collaborative Phase(s): Reference Method

- ISO 16140: Defined in ISO 16140-1. First priority is an ISO method, second priority a CEN method; if neither exists, third priority is another recognized method. Note: the definition is still under discussion at ISO level, to open it up to non-ISO/CEN methods. (PIV)
- AOAC OMA: Can be various pre-existing recognized analytical methods, e.g. AOAC OMA, ISO, FDA BAM, FSIS MLG, and Health Canada. If no appropriate reference exists, “NA” can be indicated in the POD summary tables.
- Health Canada: Acceptable references are published by HC (Part 1). May include methods from methods organizations such as AOAC, BAM, APHA, ICMSF, IDF, ISO, etc. Where no reference exists, the MMC assesses on a case-by-case basis.
- NordVal: ISO, CEN, NMKL, BAM, etc. It is up to the applicant; however, as the microbiological criteria in EU regulation EC 2073/2005 cite EN ISO methods, these are most frequently used.
- FDA Draft: Must be BAM, unless there is no BAM reference method. If there is no BAM reference method but there is a nationally/internationally recognized reference method, then FSIS MLG, AOAC, ISO, and Health Canada are all potential reference methods. APHA, ICMSF, and IDF methods may also be used.
- USDA/FSIS Draft: For FSIS-regulated products, the current FSIS method, found in the Microbiology Laboratory Guidebook (MLG), is the most appropriate reference cultural method for validating methods used by FSIS-regulated establishments. FDA BAM, or methods referenced by ISO or Codex Alimentarius, may be appropriate. Non-cultural methods are applicable in some circumstances.

Comparison Table, Page 2 of 8

Selection of food

- ISO 16140: RA: 5 categories for all-foods applications, 3 food types per category (see below); feed, environmental samples, and primary production samples (PIV) are additional categories. RLOD: same, except 1 food type per category (if possible a different food type).
- AOAC OMA: SLV: all claimed matrices must be included in the study; in other words, there are no defined categories and an “all foods” claim is not applicable. An environmental-surfaces claim requires 3-7 different surfaces (the number required is under review (RF)). IV: at least 1 matrix that was tested in the SLV; for every 5 foods claimed, 1 food matrix must be included.
- Health Canada: 5 categories for all-foods applications, 3 food types per category (Table 1). Environmental samples is an additional category.
- NordVal: RA, SE, SP, Kappa: 5 categories for all-foods applications, 3 food types per category (see below); feed and environmental samples are additional categories. LOD: same, except 1 food type per category (if possible a different food type).
- FDA Draft: The selection of foods is determined by FDA’s regulatory needs.
- USDA/FSIS Draft: Matrices commonly sampled in FSIS-regulated establishments: meat, poultry, and egg products, and environmental samples (sponges, swabs, brines). All claimed matrices must be included in the study. Contains a proposal to create matrix categories based on intrinsic properties. An “All Foods” claim is not applicable.

Comparison Table, Page 3 of 8

Food category/type/item

- ISO 16140: Each food type can be made up of various relevant food items; Annex B provides guidance (not mandatory). These are then grouped together to meet the sample number requirement of a food type, i.e. 20 samples. This is to allow for the use of naturally contaminated samples. (BL)
- AOAC OMA: Only one single food item is accepted to meet the sample size requirement of a food type, i.e. 20 replicates.
- Health Canada: Each food type can be made up of various relevant food items (Table 1). CLARIFICATION NEEDED: can these be grouped together to meet the sample number requirement of a food type, in this case 20 samples? Yes, they can be grouped together to meet the sample number requirement. This notion was introduced to allow for heterogeneity within a food type: products in a type may vary greatly in origin, composition, preparation processes, and natural background, and all those small variabilities could influence the detectability of the target organism. (I.I.)
- NordVal: Each food type can be made up of various relevant food items; NordVal’s homepage (www.nmkl.org) provides a list of food categories. These are then grouped together to meet the sample number requirement of a food type, i.e. 20 samples.
- USDA/FSIS Draft: Currently, foods are validated individually and there are no category claims. There are no “All Foods” claims.

Comparison Table, Page 4 of 8

No. of levels/samples

- ISO 16140: RA: 20 samples per food type or 60 samples per category. RLOD: 3 levels: negative controls = 5 samples; 1 level (theoretical LOD, with fractional positive results (BL)) = 20 samples; another level = at least 5 samples.
- AOAC OMA: SLV and IV: 3 levels: negative controls = 5 samples; 1 level with fractional positive results = 20 samples; another (high) level = 5 or 20 samples (under review (RF)).
- Health Canada: 3 levels: negative controls = 5 samples; 1 level with fractional positive results = 20 samples; another level up to 1 log higher = 20 samples.
- NordVal: RA: 20 samples per food type or 60 samples per category. LOD: 3 levels: negative controls = 5 samples; 1 level (theoretical LOD) = 20 samples; another level = at least 5 samples.
- FDA Draft: Level 1: 6 replicates/level, single level. Level 2: 6 replicates/level, 1 inoculated level + 1 uninoculated level (5 replicates). Level 3: 10 replicates/level, 1 inoculated level + 1 uninoculated level (5 replicates). Level 4: 20 replicates/level, 1 inoculated level + 1 uninoculated level (5 replicates). It is proposed that each of the 4 levels use 20 replicate test portions and that all levels have a negative control.
- USDA/FSIS Draft: For each matrix and analyte: 1) minimum 60 samples inoculated at a fractional recovery level per alternative and reference method; 2) 5-10 uninoculated samples per alternative and reference method.

Sample size

- ISO 16140: Undefined. MicroVal: specified in the reference method; another (larger) sample size is allowed but is specified in the certificate. (PIV)
- AOAC OMA: Standard is 25 g or 25 mL, unless the reference method specifies a larger sample size.
- Health Canada: 25 g, but larger sample sizes are permitted. Sample size must be the same for the alternate and reference methods; consult the MMC if testing composite samples.
- NordVal: Undefined.
- FDA Draft: 25 g unless otherwise specified.
- USDA/FSIS Draft: Application dependent. Portions should not be made larger without validation. Validation study conclusions from larger portions are applicable to smaller portions.

Fractional positive

- ISO 16140: Can be achieved by either the alternate or the reference method. Samples should not be all positive or all negative; the ideal is 10 positive and 10 negative (50%), but any fractional result is acceptable.
- AOAC OMA: Can be achieved by either the alternate or the reference method. Proportion of positives 25% to 75%, ideal approximately 50% (10% to 90% is under review (RF)).
- Health Canada: Can be achieved by either the alternate or the reference method. Proportion of positives 25% to 75%.
- NordVal: Can be achieved by either the alternate or the reference method. Samples should not be all positive or all negative; the ideal is 10 positive and 10 negative (50%), but any fractional result is acceptable.
- FDA Draft: Yes; one or both methods must give 40-90% positive results. It is proposed that the percentage of positive results be changed to 25-75%.
- USDA/FSIS Draft: Defined as a range of 20-80% confirmed positive results using the reference method.
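The fractional-positive windows quoted above differ by scheme; a small lookup like the following makes the comparison concrete. The dictionary and function name are illustrative, and only the four schemes that state an explicit numeric range are included.

```python
# Allowed "fractional positive" ranges quoted in the comparison above,
# as proportions of positive replicates at the fractional level.
FRACTIONAL_RANGES = {
    "AOAC OMA": (0.25, 0.75),       # ideal is approximately 50%
    "Health Canada": (0.25, 0.75),
    "FDA Draft": (0.40, 0.90),      # proposed change: 25-75%
    "USDA/FSIS": (0.20, 0.80),      # confirmed by the reference method
}

def is_fractional(positives, total, scheme):
    """True if positives out of total replicates fall inside the scheme's
    accepted fractional-positive window."""
    low, high = FRACTIONAL_RANGES[scheme]
    return low <= positives / total <= high
```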

Comparison Table, Page 5 of 8

Results analysis and criteria

- ISO 16140: RA: by type and by category; relative accuracy AC, relative specificity SP, relative sensitivity SE; first by unconfirmed results, again by confirmed results. McNemar test as criterion (for paired and unpaired), with caveats, i.e. really not suitable for unpaired, and results should “never be interpreted by only the McNemar test”. RLOD: by category; LOD of the alternate method divided by LOD of the reference. For paired samples, no lower limit, but the alternate LOD might not be > 2 times the reference LOD. For unpaired samples, no lower limit; the alternate LOD might not be > 3 times the reference LOD. (In the ISO/CD 16140-2 version, no acceptability limit appears to be settled for unpaired samples; it is only specified for paired samples. BL) The values of 2 (paired) and 3 (unpaired) are still tentative. (PIV)
- AOAC OMA: By level and by matrix; by POD (probability of detection) with a 95% confidence interval, for the alternate and the reference, for presumptive and confirmed results; then by the difference between alternate POD and reference POD. The confidence interval must contain zero for the methods to be considered not different at 95% confidence. Chi square is not required but “interesting”.
- Health Canada: Method equivalence: POD; one-tailed POD 95% confidence interval (I.I.). Performance parameters: by level and by food, but only calculated for those that passed POD successfully. For unpaired: performance parameters compare presumptive vs. confirmed results of the alternate method (not the reference method results); specificity is based on presumptive results; sensitivity is based on final (confirmed) results; equivalence of the alternate method and the reference can only be determined by the number of true positives in both sets, done by the POD method. For paired: use “absolute” results, where the reference can have false negatives. Criteria: sensitivity 98%, specificity 90.4%, false negative rate < 2%, false positive rate ≤ 9.6%, efficacy 94%. The LOD must be comparable to or exceed the lower LOD of the reference.
- NordVal: RA: by type and by category; relative accuracy AC, relative specificity SP, relative sensitivity SE, Kappa; first by unconfirmed results, again by confirmed results. Criteria: SE ≥ 95%, Kappa ≤ 0.80. LOD: fit for purpose.
- FDA Draft: By level/individual experiment for each matrix. Per AOAC Microbiology guidelines, McNemar chi-square statistics are used. Performed for each matrix.
- USDA/FSIS Draft: Unpaired study: one-sided chi-square test with alpha = 0.05; criterion: indistinguishable or better performance than the reference method. Paired study: evaluate sensitivity with a minimum of 29 confirmed positive results. Zero false-negative results from 29 confirmed positives would be consistent with a test sensitivity of at least 90%, and zero false-negative results from 50 confirmed positives would be consistent with a sensitivity of at least 94%. Criterion: none proposed.
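Several schemes above rely on the McNemar test for paired studies. Its statistic uses only the discordant results (positive by one method, negative by the other); a minimal sketch, with an illustrative function name:

```python
def mcnemar_chi2(alt_pos_ref_neg, alt_neg_ref_pos):
    """McNemar chi-square statistic for a paired qualitative comparison.
    Only discordant pairs (the two methods disagree) enter the statistic;
    values above 3.84 indicate a difference at alpha = 0.05 (1 df)."""
    b, c = alt_pos_ref_neg, alt_neg_ref_pos
    if b + c == 0:
        return 0.0        # full agreement: no evidence of a difference
    return (b - c) ** 2 / (b + c)
```

As the ISO caveat notes, this test is unsuitable for unpaired designs and should never be the only basis for interpretation.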

Comparison Table, Page 6 of 8

Inter-laboratory Study

- USDA/FSIS Draft: Applicable to alternative methods with a major modification, defined as any significant change in the design or the component reagents of a screening test, for example the introduction of a new antibody or oligonucleotide primer. Follow the guidance provided by the AOAC INTERNATIONAL Official Methods of Analysis Program.

Minimum no. of valid data sets/collaborators

- ISO 16140: 10, defined as individuals working independently using different sets of samples, from a minimum of 5 different organizations, including the organizing lab and different locations of the same company.
- AOAC OMA: 10 valid lab data sets required; specifies that 12 labs should start.
- Health Canada: Minimum of 8 labs reporting valid data; labs should be accredited per 17025 or demonstrate that they function under an equivalent quality system.
- NordVal: 10, defined as individuals working independently using different sets of samples, from a minimum of 5 different organizations, including the organizing lab and different locations of the same company.
- FDA Draft: 2 for a Level 2 study, 3 for a Level 3 study, and 10 for a Level 4 study.

Sample size

- ISO 16140: NA; defined by the protocol of the reference method. (PIV)
- AOAC OMA: Standard is 25 g or 25 mL, unless the reference method specifies a larger sample size.
- Health Canada: CLARIFICATION NEEDED: consistent with pre-collaborative? Sample size is 25 g unless otherwise specified by the method or a larger size is needed (to achieve enhanced detectability, for regulatory purposes, or for compositing). (I.I.)
- NordVal: NA.
- FDA Draft: 25 g unless otherwise specified.

Comparison Table, Page 7 of 8

Number of foods

- ISO 16140: 1 relevant food item, inoculated with the target, using a challenging enrichment protocol.
- AOAC OMA: 1.
- Health Canada: At least 1.
- NordVal: 1 relevant food item, inoculated with the target, using a challenging enrichment protocol.
- FDA Draft: One or more.

Number of levels

- ISO 16140: 3: a negative control, one level that produces fractional positives, and another level.
- AOAC OMA: 3: a negative control, one level that produces fractional positives, and another level.
- Health Canada: 3: a negative control, one level that produces fractional positives, and another level about 10 times greater than the detection level.
- NordVal: 3: a negative control, one level that produces fractional positives, and another level.
- FDA Draft: 2 for a Level 2 study (1 inoculated and 1 uninoculated); 3 for Levels 3 and 4 (high, low, and uninoculated).

Number of replicates

- ISO 16140: 8 per level of contamination; minimum of 48 results per collaborator (8 replicates x 3 levels x 2 methods); minimum of 480 results (48 from each collaborator; 240 per method) for statistical analysis.
- AOAC OMA: 12 per level of contamination; 72 results per collaborator (12 replicates x 3 levels x 2 methods); minimum of 720 results (360 per method) for statistical analysis.
- Health Canada: 8 per level; minimum of 24 results per collaborator (8 x 3 levels) per method.
- NordVal: 8 laboratories, 3 levels in duplicate (8 labs x 3 levels x 2 replicates x 2 methods).
- FDA Draft: 6.

Confirmation

- ISO 16140: For paired studies, only confirm the +Alt/-Ref results; for unpaired, confirm all enrichments.
- AOAC OMA: Matched or unmatched, confirm all samples.
- Health Canada: Confirm all samples.
- NordVal: For paired studies, only confirm the +Alt/-Ref results; for unpaired, confirm all enrichments.
- FDA Draft: Yes.
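The replicate arithmetic quoted above (e.g. ISO's 8 replicates x 3 levels x 2 methods = 48 results per collaborator, 480 in total from 10 collaborators) can be checked with a small helper; the function name and signature are illustrative.

```python
def collaborative_counts(collaborators, replicates, levels, methods=2):
    """Results generated by a collaborative study design.

    Returns (results per collaborator, total results, results per method),
    mirroring the figures quoted in the comparison above."""
    per_collaborator = replicates * levels * methods
    total = per_collaborator * collaborators
    return per_collaborator, total, total // methods
```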

Comparison Table, Page 8 of 8

Comparisons

- ISO 16140: Analyzed two ways: 1) unconfirmed alternate method results vs. confirmed reference; 2) confirmed alternate method results vs. confirmed reference.
- AOAC OMA: By level and by matrix, analyzed and reported separately.
- Health Canada: CLARIFICATION NEEDED: consistent with pre-collaborative? By level and by matrix, all results confirmed; confirmed alternate method results vs. reference. (I.I.)
- USDA/FSIS Draft: Alternative to reference method (if available).

Parameters calculated

- ISO 16140: Specificity (only for negative controls); sensitivity (only for inoculated levels); relative accuracy (% of agreements); RLOD of the different participants. (BL)
- AOAC OMA: Cross-laboratory probability of detection (LPOD); difference between alternate LPOD and reference LPOD.
- Health Canada: CLARIFICATION NEEDED: consistent with pre-collaborative? Yes: POD and dPOD are determined for each matrix-level; all dPOD data are then used to assess the comparative performance of both methods. All 5 method parameters (specificity, selectivity, FP, FN, and method efficacy) are calculated in one of two ways, depending on whether the samples are paired or unpaired. (I.I.)
- NordVal: Relative specificity, relative sensitivity, relative accuracy, Kappa.
- USDA/FSIS Draft: Per AOAC guidelines: sensitivity, specificity, false negative, and false positive rates.

Interpretation

- ISO 16140: McNemar test (chi square). RLOD is for information only: an analysis-of-deviance test assesses the laboratory effect on RLOD, then the acceptability of the global RLOD value. (BL)
- AOAC OMA: If the confidence interval of dLPOD does not contain zero, the difference is statistically significant.
- Health Canada: CLARIFICATION NEEDED: consistent with pre-collaborative? Yes: dPOD is one-tailed, and the method parameter requirements must be met. (I.I.)
- NordVal: Criteria: SE ≥ 95%, Kappa ≤ 0.80 [LOD: fit for its purpose].
- USDA/FSIS Draft: Per AOAC guidelines, McNemar chi-square statistics.
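The LPOD and dLPOD quantities in the AOAC column can be sketched as below. Pooling replicates across labs is a deliberate simplification for illustration: the actual AOAC procedure also accounts for among-laboratory variance when constructing the dLPOD confidence interval.

```python
def lpod(lab_counts):
    """Cross-laboratory probability of detection (LPOD) as the pooled
    fraction of positive replicates over all labs; each entry in
    lab_counts is (positives, replicates) for one laboratory."""
    positives = sum(x for x, n in lab_counts)
    total = sum(n for _, n in lab_counts)
    return positives / total

def dlpod(alt_lab_counts, ref_lab_counts):
    """Difference between candidate and reference LPOD. Per the AOAC rule
    quoted above, the difference is judged significant only if its
    confidence interval (not computed here) excludes zero."""
    return lpod(alt_lab_counts) - lpod(ref_lab_counts)
```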

AOAC ISPAM Small Group Micro Working Groups Meeting Agenda PRE-DECISIONAL Page 1

International Stakeholder Panel on Alternative Methods

Microbiology Working Group for Harmonized Matrix Comparison

Thursday, June 30, 2011 at 1:00pm – 3:00pm

Twinbrook HILTON WASHINGTON D.C./ROCKVILLE EXECUTIVE MEETING CENTER

DRAFT MATRIX TABLES FROM:

EN ISO 16140:2008 – NORDVAL – AOAC BPMM – AOAC BPMM WORKING GROUP MATRIX EXTENSION DRAFT

ISPAM

1) EN ISO 16140:2008 E

2) NORDVAL MATRIX TABLES

a. Salmonella

b. Listeria

c. Campylobacter

d. E.coli O157

3) Annex A AOAC OMA Microbiological Guidelines

4) Appendix B – BPMM AOAC Microbiological Working Group for Matrix Extension

EN ISO 16140:2008 (E)

Matrix for Salmonella (NORDVAL)

Matrix group / Matrix / Food examples

1. Meat 1.1 Raw red meat Minced meat, offal

1.2 Raw white meat Chicken, turkey, duck

1.3 Raw smoked salted products Bacon

1.4 Heat treated products Sliced meat and poultry products

1.5 Fermented products Salami

2. Fish 2.1 Raw fish and shellfish Raw two-shelled molluscs, raw shrimps

2.3 Heat treated fish products and shellfish Heat treated shrimps

3. Milk 3.1 Milk Raw milk

3.5 Desserts, ice-cream Ice-cream

3.6 Dry milk products Milk powder

4. Eggs 4.1 Raw egg Whole egg

4.2 Egg products Manufactured egg

4.3 Dried products Dried whole eggs

5. Vegetable products 5.1 Raw vegetables Sprouts

5.2 Dried products Spices

5.4 Fatty products Chocolate, mayonnaise salads

7. Environment tests 7.1 Environment tests Swab tests

8. Feed

8.1 Animal feed Meat bone meal, fish meal, fish food

9. Animal faeces

10. Miscellaneous


Matrix for Listeria (NORDVAL)

Matrix group / Matrix / Food examples

1. Meat 1.1 Raw red meat Minced meat (tartar-type)

1.3 Raw smoked salted meat-products Bacon, smoked filet

1.4 Heat treated products Sliced meat and poultry products

1.5 Fermented products Salami

2. Fish 2.1 Raw fish, shellfish and fish products Cold smoked salmon

2.3 Heat treated fish products Heat treated shrimps

3. Milk 3.1 Milk Raw milk

3.4.1 Firm cheese Yellow cheese

3.4.2 Soft cheese Mould cheese

3.5 Desserts, ice-cream Ice-cream

4. Eggs 4.1 Raw egg Whole egg

5. Vegetable products 5.1 Raw vegetables Cut salads, sprouts

7. Environment tests 7.1 Environment tests Swab tests, Cleaning water

10. Miscellaneous

Matrix for Campylobacter (NORDVAL)

Matrix group Matrix Food Examples

1. Meat 1.1 Raw red meat Minced meat, offal

1.2 Raw white meat Chicken, turkey, duck

1.3 Raw smoked salted products Sliced smoked turkey meat

1.4 Heat treated products Sliced poultry meat

2. Fish 2.1 Raw fish and shellfish Raw two-shelled molluscs, raw shrimps

3. Milk 3.1 Milk Raw milk

4. Eggs 4.1 Raw egg Whole egg

4.2 Egg products Manufactured eggs

5. Vegetable products 5.1 Raw vegetables Sprouts

7. Environment tests 7.1 Environment tests Swab tests

9. Animal faeces

10. Miscellaneous

Matrix for E. coli O157 (NORDVAL)

Matrix group / Matrix / Food examples

1. Meat 1.1 Raw red meat Minced meat, cut meat, offal

1.3 Raw smoked salted meat-products Bacon, smoked filet

1.4 Heat treated products, ready-to-eat smoked products Sliced meat and poultry products, smoked turkey filet

1.5 Fermented products Salami

3. Milk 3.1 Milk Raw milk

3.2 Sour milk products Yoghurt with fruit

3.4 Cheese Mould cheese

3.5 Desserts/ice-cream Ice cream

5. Vegetable products 5.1 Raw vegetables Cut salads, sprouts

5.4 Fatty products Mayonnaise salads

7. Environment tests 7.1 Environment tests Swab tests

9. Animal feces

10. Miscellaneous