What can we learn from data? A comparison of direct, indirect and observational evidence on treatment efficacy. 2nd Workshop 'Consensus working group on the use of evidence in economic decision models', Department of Health Sciences, University of Leicester, September 26, 2005. Tony Ades, Debbi Caldwell, MRC Health Services Research Collaboration, Bristol


Page 1

What can we learn from data? A comparison of direct, indirect and observational

evidence on treatment efficacy

2nd Workshop

'Consensus working group on the use of evidence

in economic decision models'
Department of Health Sciences, University of Leicester

September 26, 2005

Tony Ades, Debbi Caldwell

MRC Health Services Research Collaboration, Bristol

Page 2

Outline of presentation

Introduction: learning about parameters …
Fixed effect models
• Direct data, indirect data
• Observational data: one new study, a meta-analysis of observational data
Random effect models
What to learn about: mean, variance, new or old groups?
• Direct and indirect data in RE models
• Observational evidence
• … and surrogate end-points

Page 3

Why might this be useful?

1. A "standard" systematic review is carried out. Has all the relevant data been included?

2. Data is relevant if it reduces uncertainty … how effective might different kinds of data be in reducing uncertainty?

3. Synthesis agenda => Research prioritisation agenda.

Why collect more data if you don’t know what you can learn from it ?

4. …. a scientific basis for “Hierarchy of evidence” ?

Page 4

Data tells us nothing unless there is a model

1. You must have something to learn about: a parameter

2. If you know what you are going to learn about, you must know how much you already know about it: a prior distribution

3. There must be a relationship between what the data estimates and the parameter: a model

Page 5

… it’s partly a language thing

1. Need to distinguish between data, parameters and estimates. Terms like ‘Log Odds Ratio’ tend to get used as if these were all the same thing.

2. Meta-analysis gives a “summary”.

Summary of what? …data, literature, estimates?

No “summary” without a model …

3. “evidence” => MODEL => “medicine”

Page 6

FIXED EFFECT MODEL

LOR parameter δ. Its prior distribution: δ ~ N(0, σ0²)

LOR data from an RCT: Y, with standard error S

The model: Y ~ N(δ, S²)

FE : data estimates exactly the parameter we want.

Uncertainty in the prior: σ0

Uncertainty in the data: S
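For reference, the posterior implied by this model is the textbook normal–normal update (spelled out here; it is not written on the slide):

```latex
% prior \delta \sim N(0, \sigma_0^2), data Y \sim N(\delta, S^2)
\delta \mid Y \;\sim\; N\!\left(\frac{Y/S^{2}}{1/\sigma_0^{2} + 1/S^{2}},\;
\left(\frac{1}{\sigma_0^{2}} + \frac{1}{S^{2}}\right)^{-1}\right)
```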

Page 7

Method

RCT gives DIRECT information on the parameter of interest.

Strategy: how much can Indirect Comparisons, Observational Data, etc tell us about the parameter …

…. COMPARED TO the same amount of direct RCT evidence.

Use standard deviation S as a measure of “information”

Page 8

Scale of the day : Log Odds Ratios

1. LORs for Treatment Effects usually well within the range –1 to +1

… corresponding to Odds Ratios 0.4 to 2.5

2. And usually in range -0.5 to +0.5

… corresponding to OR : 0.6 to 1.65

3. We need to think of uncertainty on this scale.

values of σ0 or S > 1 are HIGH, < 0.25 LOW

Page 9

…the more you know, the less there is to learn

1. If prior uncertainty is large (σ0 high), posterior uncertainty is dominated by the amount of new data – ie by S

2. If prior uncertainty is already low, only a large amount of new data (S low) will make a difference.

Page 10

Effect of additional direct data on posterior uncertainty

[Figure: posterior SD (y-axis, 0 to 0.5) against the SD of the additional data (x-axis, 0 to 1.2), one curve each for prior SD = 0.5, 0.25 and 0.1]
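A minimal sketch of the calculation behind this figure, assuming the standard normal–normal update above (function and variable names are mine, not the deck's):

```python
import numpy as np

def posterior_sd(prior_sd, data_sd):
    """Posterior SD when a N(delta, data_sd**2) estimate updates a N(0, prior_sd**2) prior."""
    precision = 1.0 / prior_sd**2 + 1.0 / data_sd**2
    return np.sqrt(1.0 / precision)

data_sds = np.arange(0.05, 1.25, 0.05)
for prior_sd in (0.5, 0.25, 0.1):
    post = posterior_sd(prior_sd, data_sds)
    print(f"prior SD {prior_sd}: posterior SD from {post.min():.3f} to {post.max():.3f}")
```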

Page 11

Indirect RCT evidence on parameter δAB

Target parameter δAB ~ some prior

Model for Indirect evidence:

YAC ~ N(δAC, SAC²), YBC ~ N(δBC, SBC²)

δBC = δAC − δAB

IF SAC = SBC = S, then the indirect evidence is equivalent to direct data with SD = √2·S ≈ 1.414 S
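Why √2: the indirect estimate of δAB is YAC − YBC, and its variance is the sum of the two variances (a standard result, written out here rather than taken from the slides):

```latex
\hat{\delta}_{AB}^{\,\text{ind}} = Y_{AC} - Y_{BC}, \qquad
\operatorname{Var}\!\left(\hat{\delta}_{AB}^{\,\text{ind}}\right) = S_{AC}^{2} + S_{BC}^{2}
\;\xrightarrow{\;S_{AC}=S_{BC}=S\;}\; 2S^{2},
\qquad \text{SD} = \sqrt{2}\,S \approx 1.414\,S .
```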

Page 12

… the weakest link

BUT, the contribution of indirect comparisons depends on the weakest link:

If SAC is high (weak evidence on AC), the contribution to δAB is small, no matter how much evidence there is on BC (ie no matter how low SBC)

…. don’t do big literature search on BC, if you know there is little evidence on AC (unless you are also interested in treatment C !!!)

Page 13

The weakest link: SD of equivalent direct AB evidence based on AC and BC evidence

[Figure: SD of the equivalent direct AB data (y-axis, 1 to 4) against SAC (x-axis, 1 to 2.5), one curve each for SBC = 0.1, 0.5, 0.75, 1, 1.5 and 2.5]

Page 14

Multiple indirect comparisons

Debbi Caldwell’s presentation:

Contribution to δAB relative to a direct RCT of size S:

via YAC, YBC – ONE indirect comparator => 1.414 S
and YAD, YBD – TWO => 1.00 S
… etc – THREE => 0.82 S
… etc – FOUR => 0.71 S
… etc – FIVE => 0.63 S
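A small check of these numbers, assuming k independent indirect routes of equal precision are pooled by inverse-variance weighting (my reading of the slide, with my own notation):

```python
import math

S = 1.0  # SD of a direct AB trial of reference size
for k in range(1, 6):
    # each indirect route has variance 2 * S**2; k independent routes are pooled
    equivalent_sd = math.sqrt(2.0 * S**2 / k)
    print(f"{k} indirect comparator(s): equivalent direct SD = {equivalent_sd:.2f} S")
```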

Page 15

Observational data: one new study

Observational data is biased: it does not give us a direct estimate of δ. Instead:

YOBS ~ N(δ + b, SOBS²)

… in any given case we don't know how big the bias b is, or its direction … "unpredictability"

Let's have a distribution .. perhaps .. b ~ N(B, σB²) …

to describe our views about b?

Page 16

Prior distribution for bias

b ~ N(B, σB²) …

1. As a 'first cut', suppose B = 0 is our "best guess"

2. For σB … how small / big might the bias be?

an OR of 1.1 either way seems rather optimistic

an OR of 1.6 either way seems rather pessimistic

3. … assume these represent 95% credible limits on the amount of bias …

(in a “typical” single Observational study)

…for example

Page 17

Uncertainty in the extent of bias

[Figure: prior distribution for σB on the LOR scale (x-axis, 0 to 0.6), with log(1.1) and log(1.6) marked]

In which: the bias is on average 0.28 on the log scale (either way)
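A quick check of the numbers on this slide (my arithmetic, assuming natural logs):

```python
import math

lo, hi = math.log(1.1), math.log(1.6)   # 0.095 and 0.470 on the LOR scale
print(f"log(1.1) = {lo:.3f}, log(1.6) = {hi:.3f}, midpoint = {(lo + hi) / 2:.2f}")
```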

Page 18

Direct RCT data equivalent of Observational studies

[Figure: RCT-equivalent SD (y-axis, 0 to 1.1) against the SD of the observational data, SOBS (x-axis, 0 to 1), with one curve per bias prior: B = 0; Min 1.1, Max 1.6; Min 1.15, Max 1.9]

Page 19

Some shortcomings of this analysis

1. Assumes that the bias is not related to the true δ. Maybe a larger δ => a larger bias?

… this could be modelled too ..

2. What is our belief about the “AVERAGE BIAS in OBSERVATIONAL STUDIES” :

B ~ N(M, σExp-B²) ..

B = 0 would mean: M = 0, and σExp-B = 0 … No!

Page 20

A more reasonable view of the “average bias”

B ~ N(M, σExp-B²) ..

• The consensus seems to be that observational studies tend to exaggerate effects, ie M>0

• No problem: if we knew M exactly, we could adjust!…

• The problem is we don't … ie σExp-B > 0.

Page 21

Summary : the single observational study

Must include : uncertainty in the study bias, and uncertainty in the expectation of bias effects – and the size of the Obs study:

YOBS ~ N(δ + b, SOBS²)

b ~ N(B, σB²)

B ~ N(M, σExp-B²)

=> YOBS ~ N(δ + M, SOBS² + σB² + σExp-B²)
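A sketch of what this marginal variance implies for the "RCT-equivalent" SD of a single observational study (the values for σB and σExp-B below are illustrative, not taken from the slides):

```python
import math

def rct_equivalent_sd(s_obs, sigma_b, sigma_exp_b):
    """SD of direct RCT data carrying the same information as one observational study."""
    return math.sqrt(s_obs**2 + sigma_b**2 + sigma_exp_b**2)

# illustrative prior: sizeable bias uncertainty plus some uncertainty in the average bias
print(round(rct_equivalent_sd(s_obs=0.2, sigma_b=0.24, sigma_exp_b=0.1), 3))   # ~0.33
```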

Page 22

Meta-analysis of Observational Studies (1)

With ONE observational study

b ~ N(B, σB²) is interpreted as uncertainty in the bias

With several studies, j = 1, 2 … J:

bj ~ N(B, σB²) is interpreted as between-study variation in the bias

BUT, the values of B, σB are the same … Variation => predictive uncertainty

Page 23

Variation and Uncertainty

Uncertainty is a state of mind. It can be reduced – by collecting more data.

Variation is a fact about objects, people, studies, estimates … It cannot be reduced

Predictive uncertainty that arises from variation cannot be reduced

Page 24

Meta-analysis of Observational Studies (2)

A random effect Observational meta-analysis would be

YOBS-j ~ N(δ + bj, SOBS-j²)   data from study j

bj ~ N(B, σB²)   between-study variation in bias

B ~ N(M, σExp-B²)   uncertainty regarding the "expected" bias

=> YOBS-j ~ N(δ + M, SOBS-j² + σB²)

The mean of the M-A is a biased estimate of the target parameter δ, biased by M … easily corrected …

So the M-A (if large!) avoids the large uncertainty σB and replaces it with the smaller uncertainty σExp-B
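A sketch of why a large observational meta-analysis is left with roughly σExp-B: pooling J studies of equal size by simple averaging (my simplification, not stated on the slide), the per-study terms shrink with J but the uncertainty in the average bias does not:

```python
import math

def pooled_sd(s_obs, sigma_b, sigma_exp_b, n_studies):
    """Approximate SD of the pooled mean of an observational meta-analysis,
    assuming n_studies equally sized studies averaged with equal weights."""
    per_study = (s_obs**2 + sigma_b**2) / n_studies   # averages away as J grows
    return math.sqrt(per_study + sigma_exp_b**2)       # uncertainty about M remains

for J in (1, 5, 20, 100):
    print(J, round(pooled_sd(0.2, 0.24, 0.1, J), 3))   # tends towards sigma_exp_b = 0.1
```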

Page 25

How uncertain are we about M?

Can set some limits on uncertainty regarding

“Average Bias” M

If studies suggest that, eg, the average bias is an OR of 0.9, how uncertain is this .. . ?

Credible limits 0.75 to 1.05 ? … etc.

… or carry out a huge meta-meta-analysis and obtain a posterior distribution for B ~ N(M, σExp-B²)

Page 26

“Fixed effect” parameter: Summary

1. Indirect comparisons – “weakest link” effect, but large uncertainty reduction possible with >1 comparator ..

2. Observational data from single study very weak … between-study variation AND uncertainty in “average bias”.

3. The estimated mean from a random effect Observational meta-analysis more useful: ONLY uncertainty in average bias to worry about….

Page 27

Random Treatment Effect Models

Every RCT j = 1, 2 … J estimates a different parameter δj

Yj ~ N(δj, Sj²)   the studies and their sampling error

δj ~ N(δRE, σRE²)   variation in the true effects, from a common RE distribution

δRE ~ N(0, σ0²)   uncertainty in the mean

σRE ~ ??   uncertainty in the between-trials variation

Page 28

What do we want to learn more about ?

(a) The mean effect: δRE

(b) The between-study variance: σRE²

(c) The LOR δj – in a patient group / protocol studied before

(d) The LOR δJ+1 – in a new patient group / protocol from the same distribution

PROBLEM: the estimated RE mean is an 'unbiased' estimate …

… but what is it an estimate of ???

Page 29

What can we learn from one new RCT?

(a) Not much about the mean effect δRE

unless we can assume σRE – the between-studies variation – is very low

(b) Not much about the between-study variance σRE²

(c) LOR j : Efficacy in a patient group / protocol studied before .... Then back to a Fixed Effect model for that group/protocol … (split or lump?)

Page 30

What does an RE model tell us about the parameter of interest

Given an RE distribution, ie δRE and σRE², we can work out

• What we can say about efficacy in a new group

• How much data on parameter δj tells us about δk

(data on one group, but need info on another)
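A sketch of the first of these predictive calculations, under the normal hierarchy above and ignoring uncertainty in σRE itself (the numbers are illustrative, not from the deck):

```python
import math

def new_group_sd(posterior_mean_sd, sigma_re):
    """SD for the effect in a new group J+1: uncertainty about the RE mean plus
    the between-study variation itself (uncertainty in sigma_re ignored)."""
    return math.sqrt(posterior_mean_sd**2 + sigma_re**2)

# e.g. the RE mean known to within SD 0.1, between-study SD 0.2
print(round(new_group_sd(0.1, 0.2), 3))   # 0.224: the variation term dominates
```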

Page 31

What can we learn from observational studies, given an RE model ?

1. Difficult: is each observational study giving us a biased estimate of some δj, or is it averaging over many δj and estimating a δRE?

…but there is no guarantee it's the "same" δRE as in an RCT meta-analysis

2. At BEST (if there are many very large studies) the mean from an observational MA is an estimate of (δRE + M)

The only problem (still): the uncertainty σExp-B about M

Page 32

What can we learn from Indirect Comparisons in a Random Effect context?

An MTC RE meta-analysis provides unbiased information on the mean treatment effect δAB via AC and BC

– δAB is informed by the AC and BC information, just as in the fixed effect case

- Same “weakest link” effect

… added bonus: far more information on σRE

Page 33

What can we learn from Surrogate end-points

1. “Validated” surrogate end-points are rated high in the hierarchy of evidence ….

2. Validation, however, usually within trial

Page 34

Page 35

What can we learn from Surrogate end-points? The Daniels & Hughes model.

Tj ~ N(δj, ST,j²)   data on the True End Point, study j

Zj ~ N(γj, SZ,j²)   data on the Surrogate End Point

δj = α + β·γj   regression relating the true TEP effect to the true SEP effect

If we knew the regression parameters α and β, then information on the SEP would be as good as information on the TEP

… but we DON'T

(… also, this is a RE model – one study does not say much)
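A sketch of the prediction step this model allows, with illustrative (not deck-supplied) values for the regression parameters and their uncertainty:

```python
import math

def predicted_true_effect(gamma, alpha, beta, sd_alpha, sd_beta, sd_gamma):
    """Approximate mean and SD of the true end-point effect delta = alpha + beta * gamma,
    treating alpha, beta and the surrogate effect gamma as independent and using a
    first-order (delta-method) variance -- a simplification."""
    mean = alpha + beta * gamma
    var = sd_alpha**2 + (gamma * sd_beta)**2 + (beta * sd_gamma)**2
    return mean, math.sqrt(var)

print(predicted_true_effect(gamma=-0.4, alpha=0.0, beta=0.8,
                            sd_alpha=0.05, sd_beta=0.1, sd_gamma=0.1))
```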

Page 36

What do we really know about the regression slope ?

1. Regression of T against S in untreated cohort studies motivated the surrogacy concept – plenty of data on α and β … uncertainty small

2. But people insist we can learn about α and β only from RCTs … back to uncertainty again!

3. BUT, then they also want to assume α and β are identical regardless of treatment

… flip back to unrealistic certainty !

Page 37

[Figure: plot with 'placebo' and 'active' series]

Page 38

Surrogacy summary …

1. What is a realistic level of certainty in projecting from surrogate evidence to clinical end-points ?

2. Careful analyses of data are required – in every case:
• CD4+ cell count
• bone mineral density
• blood pressure
• cholesterol, etc …

Page 39

A Research Agenda - Observational Studies

1. Models for bias: additive, multiplicative, combined

2. Evidence-based estimates for the between-studies variation in bias, σB

3. An evidence-based distribution for B is needed => values for the average bias M and the uncertainty σExp-B²

4. Do B and σB depend on the type of study (case-control, cohort, register), or on the condition?

Page 40

A Research Agenda - RCTs

1. Evidence-based estimates of Between-studies variation in treatment effect

σRE = 0 quite possible, but σRE > 0.25 unlikely => 95% of ORs within 0.6 – 1.65 around their median?

2. The prior distribution δ ~ N(0, σ0²)

Why begin with complete ignorance about δ, ie σ0 > 100 …

… LORs > 3 (either way) are VERY rare; σ0 = 0.55?
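A quick check of the OR ranges implied by these SDs (my arithmetic, assuming normality on the LOR scale):

```python
import math

for sd in (0.25, 0.55):
    lo, hi = math.exp(-1.96 * sd), math.exp(1.96 * sd)
    print(f"SD = {sd} on the LOR scale: 95% OR range {lo:.2f} to {hi:.2f}")
```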

Page 41

Any data is ‘relevant’ if it reduces uncertainty

…. depends on the model that relates it to the parameter of interest

Towards a Hierarchy of RELEVANCE ???

… work in progress