When can RCTs and observational intervention studies mislead us and what can we do about it?
When and why have observational intervention studies sometimes misled us?
Well-executed RCTs are considered the gold standard, but sometimes observational studies or non-randomised intervention studies must be relied on. However, there are examples where observational studies have systematically shown results later contradicted by large RCTs. The two most frequently cited examples concern the relationship between hormone replacement therapy (HRT) or vitamin intake and coronary heart disease (CHD). In both cases, several large observational studies demonstrated reduced risks for CHD. However, subsequent RCTs did not find any risk-lowering effects of these interventions. In a recent comment in the Lancet, Jan Vandenbroucke explores possible explanations for the conflicting messages between observational studies and RCTs on the effects of HRT on CHD (4). He claims that the main reason for the discrepancies was rooted in the timing of HRT and not in differences in study design. This seems a likely explanation, although both in the case of HRT and from a more general viewpoint, we argue that socioeconomic factors are the most neglected aspect of observational studies.
The problem with observational studies is generally inadequate control for confounders. Women receiving HRT are generally healthier and better educated than those who are not. Lawlor et al. showed that the reduction in relative risk for CHD disappeared when socioeconomic status was controlled for (5). The same holds true for the relative risk of CHD after intake of antioxidants or vitamins: the risk reduction disappeared when adjustment for socioeconomic indicators (SEI) was included (6).
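The disappearance of the risk reduction after adjustment is the classic signature of confounding by socioeconomic status. A minimal Python sketch can illustrate the mechanism; the stratum weights, HRT uptake and CHD risks below are invented for illustration and are not data from the cited studies. HRT is constructed to have no effect on CHD, yet the crude comparison suggests protection:

```python
# Hypothetical illustration: socioeconomic status (SES) confounds the
# apparent HRT-CHD association. All numbers are invented; within each
# SES stratum, CHD risk is identical for HRT users and non-users.
strata = {
    "high_ses": {"weight": 0.5, "p_hrt": 0.6, "p_chd": 0.05},
    "low_ses":  {"weight": 0.5, "p_hrt": 0.2, "p_chd": 0.15},
}

def crude_risk_ratio(strata):
    """CHD risk in HRT users vs non-users, ignoring SES."""
    chd_u = exp_u = chd_n = exp_n = 0.0
    for s in strata.values():
        users = s["weight"] * s["p_hrt"]
        nonusers = s["weight"] * (1 - s["p_hrt"])
        exp_u += users
        exp_n += nonusers
        chd_u += users * s["p_chd"]      # same stratum risk for users...
        chd_n += nonusers * s["p_chd"]   # ...and for non-users
    return (chd_u / exp_u) / (chd_n / exp_n)

def stratum_risk_ratio(s):
    """Within one SES stratum the risk ratio is exactly 1 by construction."""
    return s["p_chd"] / s["p_chd"]

print(round(crude_risk_ratio(strata), 2))      # ~0.64: spurious 'protection'
print(stratum_risk_ratio(strata["high_ses"]))  # 1.0 after controlling for SES
```

Because well-educated women are both more likely to use HRT and less likely to develop CHD, the crude risk ratio is pulled below 1 even though the treatment does nothing, which is exactly the pattern Lawlor et al. demonstrated.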
The importance of the relationship between socioeconomic factors and health or the utilisation of health care is well documented. For example, responding to increasing concern about persisting and widening inequities in health, WHO established the Commission on Social Determinants of Health (7). Access to care shows large socioeconomic differences and is often not distributed according to need. In Sweden, drugs for HRT and erectile dysfunction (sildenafil) are dispensed more frequently to well-educated than to low-educated women and men (8). The same study showed that most drugs for ischaemic heart disease were, in accordance with need, dispensed more often to low-educated people. The relatively expensive angiotensin receptor blockers were an exception and had higher odds of being dispensed to well-educated people. Well-educated men in Sweden also had better access to planned coronary revascularisation than other men with the same need (9).
Worldwide, numerous studies show inequalities in access to health care, and it seems obvious that observational intervention studies have to control for SEI, particularly when analysing preventive, symptom-relieving or non-acute interventions. Well-educated people in comfortable financial and social circumstances are generally more knowledgeable about medical options and new technologies and are in a better position to advocate their own interests. If costs are high, they have the necessary means to defray them.
This knowledge raises the question of the extent to which observational intervention studies use SEI for confounding control. We performed a review of all observational intervention studies published by four prestigious journals (Lancet, NEJM, BMJ and JAMA) in 2006 (10). Our search strategy yielded 67 studies.
In the absence of well-executed randomised controlled trials (RCTs), evidence on the effects of interventions must rely on observational studies. In some cases, such as the analysis of adverse events, observational intervention studies have advantages over RCTs. It has been shown that RCTs and observational studies often arrive at similar results (1,2), even if some disagree and argue that observational studies overestimate the effects (3). By analysing case studies where observational studies and RCTs have misled us, we can gain methodological insights and new perspectives on how to combine the best of different designs.
PERSPECTIVE
© 2009 Blackwell Publishing Ltd. Int J Clin Pract, November 2009, 63(11), 1562–1564. doi: 10.1111/j.1742-1241.2009.02202.x
We then excluded before–after studies, studies lacking a control group, aetiological studies and studies adjusting for SEI in general. We reviewed 29 studies, only eight (27.6%) of which adjusted for socioeconomic factors. Bearing in mind the rigorous quality-control policies of these four journals, this figure most likely overestimates the proportion of published observational studies in general that adjust for SEI.
Several international groups, including STROBE (11) and Cochrane (12), have started to develop guidelines for assessing non-randomised intervention studies. They stress the risks of selection bias but do not focus on what we regard as the most important consideration of all: socioeconomic factors.
This omission of SEI in observational intervention studies calls for action. Considering the importance of socioeconomic factors for health and equity in access to care, as well as the accompanying risk of selection bias, the results suggest that there is substantial potential for improving the quality of observational studies.
When and why have RCTs sometimes misled us?
There are examples where RCTs, and especially meta-analyses of small RCTs, have misled us. Aprotinin to reduce blood loss during coronary artery bypass surgery was approved by the US Food and Drug Administration (FDA) as early as 1993 and was used very commonly thereafter. In late 2007, aprotinin was withdrawn from the market after early termination of a large RCT (the BART study) showing excess mortality for patients receiving aprotinin compared with lysine analogues (13). Before BART, several meta-analyses of RCTs had shown no indication of an excess risk of death, or had even shown a reduced risk (14–16), while several observational studies had shown excess mortality risks and an increased risk of renal dysfunction (17–19). Two questions arise: why did the FDA disregard well-conducted observational studies, and why did all these meta-analyses fail to detect the excess mortality risks?
The findings of the observational studies by Mangano (17), Schneeweiss (18) and Shaw (19) were based on prospective cohorts of nearly 93,000 patients. They all showed statistically significant mortality odds ratios. The study by Mangano (17) controlled for background characteristics such as age, gender, socioeconomic status, geographical region and medical history. The study by Schneeweiss (18) showed an odds ratio of 1.64 (95% CI 1.50–1.78) adjusted for 41 background characteristics. Both of these studies showed a dose–response relationship, with higher mortality for higher doses of aprotinin. In our view, this is compelling evidence for the presence of adverse effects. We suspect that the FDA did not take action because the observed risk of adverse effects with aprotinin was not supported by meta-analyses of RCTs.
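The adjusted odds ratio and confidence interval reported by Schneeweiss follow the standard calculation for a 2x2 table (with a Wald interval on the log scale). As a sketch of that arithmetic only, the counts below are invented and are not the study's data:

```python
import math

# Standard odds ratio with a Wald 95% CI on the log scale.
# The 2x2 counts are hypothetical, chosen only to show the calculation:
#                   died   survived
# aprotinin          a        b
# lysine analogue    c        d
a, b, c, d = 120, 1880, 75, 1925

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note how the width of the interval is driven by the cell counts: with tens of thousands of patients, as in the cohorts above, the interval becomes narrow enough to exclude 1 decisively.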
Could we then trust the results of these meta-analyses? We made an in-depth analysis of a Cochrane report reviewing antifibrinolytic use for minimising perioperative allogeneic blood transfusion (20). The report included assessments of both the benefits and the risks of aprotinin vs. placebo or other treatment options. The 52 studies included in the meta-analysis of the Cochrane report were reviewed according to whether an objective to study mortality was formulated in advance, whether the follow-up method or time was specified, and whether the study had the statistical power to show any effect.
One of the main purposes of meta-analysis is to summarise data from small studies to obtain more robust estimates of effects. However, this is appropriate only if the small RCTs are well designed and well conducted. The Cochrane report restricted the analysis to RCTs, but the largest study should not have been included, given that it was a prospective observational study. None of the RCTs had sufficient statistical power to detect differences in mortality; most had fewer than 100 patients. Only seven of the 51 RCTs had mortality as one of their stated objectives, and very few described the follow-up method or time. This clearly indicates the investigators' lack of focus on mortality and adverse events. However, this does not mean a priori that the studies lacked quality, although it does indicate that less time was spent on this part of the study design.
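The power problem is easy to quantify with the standard two-proportion sample-size formula. The baseline and comparator mortality rates below are assumptions for illustration, not figures from the trials, but they show why studies of fewer than 100 patients cannot detect a mortality difference:

```python
import math

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Patients per arm needed to detect p1 vs p2 mortality
    (standard two-proportion formula, two-sided test)."""
    z_a = 1.96    # two-sided alpha = 0.05
    z_b = 0.8416  # power = 0.80
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Assumed example: detecting a rise in mortality from 2.5% to 4%
print(n_per_arm(0.025, 0.04))  # thousands of patients per arm
```

Even this fairly large relative increase in mortality (a 60% excess) requires on the order of two thousand patients per arm, roughly twenty times the size of the typical trial in the Cochrane review.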
The fact that most studies completely lacked a description and specification of the follow-up method or time should raise more serious concerns. Many studies did not specify whether deaths occurred during surgery, during the hospital stay or within a specified period of time. It is questionable whether it is appropriate to perform a meta-analysis with very different follow-up times, or when most of the studies did not specify follow-up time at all. Mortality is not an adverse effect that occurs during hospitalisation only; severe complications may lead to death long after discharge from hospital.
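It is worth recalling what such a meta-analysis actually computes. Fixed-effect inverse-variance pooling of log odds ratios is the basic machinery; the per-study estimates below are invented to show that even after pooling, a handful of small trials leaves a confidence interval far too wide to detect or exclude a meaningful excess mortality:

```python
import math

# Fixed-effect inverse-variance pooling of log odds ratios.
# The per-study ORs and standard errors are hypothetical, sized like
# the small trials discussed above (large SEs from few deaths).
studies = [  # (odds_ratio, se_of_log_or)
    (0.8, 0.60), (1.2, 0.55), (0.9, 0.70), (1.1, 0.50),
]

weights = [1 / se**2 for _, se in studies]  # weight = 1 / variance
pooled_log_or = sum(w * math.log(or_)
                    for (or_, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
lo = math.exp(pooled_log_or - 1.96 * se_pooled)
hi = math.exp(pooled_log_or + 1.96 * se_pooled)
print(f"pooled OR = {math.exp(pooled_log_or):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

In this sketch, the pooled estimate sits near 1 with an interval that cannot rule out an excess mortality of the size the observational studies later reported; and the pooling step silently assumes the studies measured mortality over comparable follow-up, which the review above shows was rarely specified.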
It is doubtful whether small studies should be included in meta-analyses if they did not have the purpose of studying the specified outcome and if the follow-up method or time is not adequately described. The aprotinin case shows overconfidence in small RCTs of inferior quality compared with well-conducted observational studies. In retrospect, it seems clear that aprotinin would have been withdrawn from the market by the company earlier if the FDA and others had taken well-conducted observational studies more seriously.
Conclusions
What are the general lessons learnt from these case studies? First, we should critically analyse all studies irrespective of study design. Second, in the case of observational intervention studies, much effort must be made to control for confounders, especially socioeconomic factors. Third, a meta-analysis should only be conducted if the included studies had an objective to study the outcome in focus and if the follow-up methods have been adequately described.
Author contributions
MR initiated the study and wrote the first draft of this article; JL performed the literature search; and all authors contributed to the planning and subsequent revisions of the paper.
Disclosures
All authors are employed at the Swedish Council on Technology Assessment in Health Care (SBU). We declare no competing interests.
M. Rosen, S. Axelsson, J. Lindblom
The Swedish Council on Technology Assessment in Health Care (SBU), Box 5650, SE-114 86 Stockholm, Sweden
E-mail: [email protected]
References
1 Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000; 342: 1887–92.
2 Benson K, Hartz AJ. A comparison of observational studies and randomized controlled trials. N Engl J Med 2000; 342: 1878–86.
3 Kunz R, Vist GE, Oxman AD. Randomisation to protect against selection bias in healthcare trials. Cochrane Database Syst Rev 2007, Issue 2. Art. No.: MR000012. DOI: 10.1002/14651858.MR000012.pub2.
4 Vandenbroucke JP. The HRT controversy: observational studies and RCTs fall in line. Lancet 2009; 373: 1233–5.
5 Lawlor DA, Davey Smith G, Ebrahim S. Socioeconomic position and hormone replacement therapy use: explaining the discrepancy in evidence from observational and randomized controlled trials. Am J Public Health 2004; 94: 2149–54.
6 Lawlor DA, Davey Smith G, Brucksdorfer KR, Kundu D, Ebrahim S. Those confounded vitamins: what can we learn from the differences between observational versus randomised trial evidence? Lancet 2004; 363: 1724–7.
7 Commission on Social Determinants of Health. CSDH Final Report: Closing the Gap in a Generation: Health Equity Through Action on the Social Determinants of Health. Geneva: World Health Organisation, 2008.
8 Ringback Weitoft G, Rosen M, Ericsson O, Ljung R. Education and drug use in Sweden – a nationwide register-based study. Pharmacoepidemiol Drug Saf 2008; 17: 1020–8.
9 Haglund B, Koster M, Rosen M. Inequality in access to coronary revascularisation in Sweden. Scand Cardiovasc J 2004; 38: 334–9.
10 Rosen M, Axelsson S, Lindblom J. Observational studies versus RCTs: what about socioeconomic factors? Lancet 2009; 373: 2026.
11 von Elm E, Egger M, Altman DG, Pocock SJ, Vandenbroucke JP. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ 2007; 335: 806–8.
12 Reeves BC, Deeks JJ, Higgins JPT, Wells GA. Chapter 13: Including non-randomized studies. In: Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions. Chichester, UK: John Wiley & Sons, 2008: 391–432.
13 Fergusson DA, Hebert PC, Mazer CD et al. A comparison of aprotinin and lysine analogues in high-risk cardiac surgery. N Engl J Med 2008; 358: 2319–31.
14 Henry DA, Carless PA, Moxey AJ et al. Anti-fibrinolytic use for minimising perioperative allogeneic blood transfusion. Cochrane Database Syst Rev 2007, Issue 4. Art. No.: CD001886. DOI: 10.1002/14651858.CD001886.pub2.
15 Levi M, Cromheecke ME, de Jonge E et al. Pharmacological strategies to decrease excessive blood loss in cardiac surgery: a meta-analysis of clinically relevant endpoints. Lancet 1999; 354: 1940–7.
16 Sedrakyan A, Treasure T, Elefteriades JA. Effect of aprotinin on clinical outcomes in coronary artery bypass graft surgery: a systematic review and meta-analysis of randomized clinical trials. J Thorac Cardiovasc Surg 2004; 128: 442–8.
17 Mangano DT, Tudor IC, Dietzel C. The risk associated with aprotinin in cardiac surgery. N Engl J Med 2006; 354: 353–65.
18 Schneeweiss S, Seeger JD, Landon J, Walker AM. Aprotinin during coronary-artery bypass grafting and risk of death. N Engl J Med 2008; 358: 771–83.
19 Shaw AD, Stafford-Smith M, White WD et al. The effect of aprotinin on outcome after coronary-artery bypass grafting. N Engl J Med 2008; 358: 784–93.
20 Rosen M. The aprotinin saga and the risks of conducting meta-analyses on small randomised controlled trials – a critique of a Cochrane review. BMC Health Serv Res 2009; 9: 34. doi: 10.1186/1472-6963-9-34.