Research Design & Bias
Dr. Amy Grant
Health Policy Researcher
Maritime SPOR Support Unit
February 20th, 2018
NSHA Research Education Series
Overview
• Types of Study Designs
– Forming a research question
– Levels of evidence
– Strengths/weaknesses, considering bias in design
• Bias
– Specific considerations from study purpose to analysis
– Hierarchy of evidence
– Risk of bias assessment tools
– Reporting guidelines
Choosing a design
• How do you begin a research project?
– What is the first piece of information you need to
decide upon/refine?
• Design informed by:
• Research question(s)
• Budget
• Feasibility
• Methodology
• …….
Data Analysis!
FINER criteria
• Feasible: Cost, time, resources, sample size!
• Interesting: interest fades, ensure you can see the
study through to completion.
• Novel: new technique/methodology may improve
findings.
• Ethical: consider blinding and exposure/access to
treatment.
• Relevant: consider stakeholders, trust in results.
Case study
• Look at one particular “case” (e.g. patient with a rare
phenomenon)
Case series study
• Look at several cases of a particular phenomenon to
determine similarities/differences/predictors
Case & Case series studies
Strengths
• Unique: can explore rare
conditions; complex detail; may
identify new research directions.
Weaknesses
• Not generalizable
• Bias: selection; subjectivity from
researcher & participant
Cross-sectional study
• Look at individuals and outcomes at one point in
time.
– Can compare across several cohorts (i.e. ages 18-25, 26-32, 33-40).
– Consider cohort effects.
Cross-sectional Study
Strengths
• Feasibility: fast, “inexpensive”,
no attrition.
• Could be first step in a cohort
study.
Weaknesses
• Observation ≠ Causality.
• Bias: selection; one point in time
measurement.
Case-control study
• Retrospectively match “cases” (i.e. disease) with
“controls” (i.e. no disease) to compare the
prevalence of risk factors between the groups.
Case-control study
Strengths
• Unique: good for studying rare
outcomes.
• Feasibility: quick, not costly.
• Hypothesis generating.
• Bias: can confirm who has
outcome of interest.
Weaknesses
• Only one outcome/disease can
be studied.
• Cannot establish risk or
prevalence.
• Bias: retrospective; limited
information available; recall
bias; administrative data
limitations.
Cohort Study
• Follows a group of people over time (prospective or
retrospective), collecting information on predictor
variables and measuring outcomes (presence or
absence of variable of interest).
Cohort Study
Strengths
• Unique: assesses incidence,
potential causes.
• Bias: less with prospective
design.
– Why? Recall!
Weaknesses
• Strength of conclusion: causal
inference difficult to show.
– Confounding variables.
– Testing and generating hypotheses.
• Feasibility: expense, time,
sample size.
– Inefficient for studying rare
outcomes.
Randomized Controlled Trial
• Involves randomizing patients to an “experimental” (i.e. treatment) or “control” group, then applying the intervention to observe its effect.
Types of RCTs
Parallel, between-groups (“classic”):
• Intervention group: receives the intervention to be tested.
• Control group: receives no active treatment, preferably a placebo or comparison treatment.
• Compare outcomes between groups, over time.
Types of RCTs
Active control:
• Intervention group: receives the intervention to be tested.
• Control group: receives an active treatment, i.e. “standard of care”.
• Compare outcomes between groups, over time.
Types of RCTs
Non-inferiority/Equivalence trial:
• Intervention group: receives the new treatment.
• Control group: receives a proven effective treatment.
• Compare outcomes between groups, over time.
Types of RCTs
• …. And the list goes on:
– Cluster randomization
– Adaptive design
– Nonrandomized between-group design
– Within-group design
– Cross-over design
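Random assignment itself is simple to sketch. A minimal Python example of permuted-block randomization for a parallel two-arm trial (block size, seed, and arm labels are illustrative, not from the slides):

```python
import random

def block_randomize(n_participants, block_size=4, seed=42):
    """Assign participants to 'intervention' or 'control' using
    permuted blocks, keeping arm sizes balanced within each block."""
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_participants:
        # each block contains equal numbers of both arms, shuffled
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        arms.extend(block)
    return arms[:n_participants]

assignments = block_randomize(20)
print(assignments.count("intervention"), assignments.count("control"))  # 10 10
```

Blocking guards against long runs of one arm, so the groups stay comparable in size even if the trial stops early.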
RCT
Strengths
• Rigour – “gold standard”
• Random assignment
– Equivalent groups
• Prospective
• Cause – effect relationship.
• Minimizes bias
Weaknesses
• Feasibility: expensive, time-consuming, ethical?!
• Potential harm to participants.
• Bias: big-pharma, issues with
randomization (errors, blinding,
analysis).
RCTs
Gold standard, but not infallible!
Example
• Treatment: enteral nutrition therapy, systemic
steroids
• Outcome: disease activity, quality of life, …
• Considerations around treatment choice?
• Side effects/consequences (medical, social, etc.)
• Considerations around outcome?
• Power/sample size
• Validity of outcome
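The power/sample-size consideration above can be sketched numerically. A minimal Python sketch using the normal-approximation formula for comparing two proportions (the 40% vs. 60% response rates are hypothetical, not from the slides):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided test
    comparing two proportions (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting an improvement in response from 40% to 60%
print(n_per_group(0.40, 0.60))
```

Smaller expected differences inflate the denominator's effect rapidly, which is often what makes an otherwise attractive question infeasible.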
Example: Types of questions
• Do patients receiving enteral nutrition therapy have better
quality of life 3 months after starting therapy than patients
receiving treatment with systemic steroids?
• Is there a relationship between type of treatment and
quality of life?
• Does enteral nutrition therapy increase quality of life
compared to patients receiving treatment with systemic
steroids?
Example:
Balancing Optimal vs. Feasible
• Best research design?
– What evidence do we already have?
– What do we want to know? What would add evidence
around the topic?
– What is feasible?
• Sample size? Budget? Time? Resources? Follow-through?
• Balance what you want to know against what you need to know.
Best study design
• Thoughts? Considerations?
• GIGO
– Garbage in, Garbage Out.
Validity = Bias?
• External validity = generalizability (i.e. relevancy) of results beyond the study sample.
• Internal validity = observed effects/differences reflect true relationships within the study, free of systematic error.
(Figure: nested circles — sample drawn from target population, drawn from population.)
Bias
• Bias = departure from internal validity,
systematic error.
– Random error vs. systematic error.
– Systematic error threatens internal validity (selection bias, information bias, confounding) and external validity.
What is “Risk of Bias”?
• Risk of bias is the likelihood that features of the study design or conduct will give misleading results.
• What are some ways to examine bias?
Focus on Bias
• How we conduct research:
1. Study purpose
2. Study design
3. Methods
4. Analysis
5. Limitations
Study Design: Levels of Evidence
Hierarchy of evidence alone
• “Level alone should not be used to grade evidence”
– Definitions of levels vary between hierarchies.
– Can lead to anomalous rankings.
• Useful for finding the best available, relevant evidence.
BMJ, Vol 328, 2004
Study Design:
Risk of Bias Assessment Tools
• If using a formal assessment:
– Pick a validated Tool!
Type of study — Example tools:
• Systematic reviews: AMSTAR 2; Downs and Black Checklist
• RCTs: Cochrane Risk of Bias 2.0 Tool; SIGN checklist
• Prognostic (risk prediction; incidence/prevalence): PROBAST; JBI checklist for prevalence studies
• Diagnostic: QUADAS-2
• Qualitative: JBI Checklist for qualitative studies
Cochrane: Higgins et al, BMJ 2011
• Selection bias
• Performance bias
• Detection bias
• Attrition bias
• Reporting bias
• Other bias
Cochrane: example
BMJ Open, 8(3), 2018
Reporting: Best-practice guidelines
• Systematic reviews of trials
– PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
• RCTs
– CONSORT: Consolidated Standards of Reporting Trials
• Case control/Cohort studies
– STROBE: Strengthening the Reporting of Observational Studies in Epidemiology
Methods: Reliability & Validity
• Why are these concepts important to bias?
• Reliability = consistency in measurement.
– If outcome is captured differently across studies, difficult
to draw conclusions/synthesize information.
– Examples: Test-retest, Internal consistency
• Validity = accuracy in measurement.
– Are you assessing the right construct?
– Examples: Face, Convergent.
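Test-retest reliability is often quantified as the correlation between two administrations of the same scale. A small Python sketch with hypothetical scores (the data are invented for illustration):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation — a simple test-retest reliability
    estimate when x and y are the same scale given twice."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# hypothetical scores from two administrations of the same scale
time1 = [12, 15, 11, 18, 14, 16]
time2 = [13, 14, 11, 17, 15, 16]
print(round(pearson_r(time1, time2), 2))  # a high r means scores are stable
```

Low values here would signal large random measurement error, which widens confidence intervals and dilutes observed effects.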
Methods: Benefits to using existing scales
• Validity and reliability
– Poor validity = systematic biases
– Poor reliability = large random errors
Methods: How to get the best data
• Consider:
– Labour, skill, speed, costs, accuracy
Mode: Interview / Mail / Phone / E-mail
• Potential social desirability bias: High / Low / Medium / Low
• Potential interviewer error: High / Low / Medium / Low
Analysis
• Plan ahead!
• Don’t go fishing!
• Examples:
– Observational studies: Confounding by indication.
– RCTs: intention to treat, per protocol analysis.
Analysis: Propensity Scoring
Littnerova, S. (2013). Why to use propensity score in observational studies? Case study based on data from the Czech clinical database AHEAD 2006-09. Cor et Vasa, 55(4), 383-390.
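The idea behind propensity scoring can be sketched in a few lines: model the probability of receiving treatment from observed covariates, then match or weight on that probability. A stdlib-only Python sketch with simulated data (one covariate, a gradient-descent logistic fit; all names and data are illustrative, not from the cited paper):

```python
import math
import random

def fit_propensity(x, treated, lr=0.1, epochs=2000):
    """Fit a one-covariate logistic model P(treated | x) by gradient
    descent; the fitted probabilities are the propensity scores."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, ti in zip(x, treated):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
            g0 += (p - ti)           # gradient of log-loss w.r.t. intercept
            g1 += (p - ti) * xi      # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return [1 / (1 + math.exp(-(b0 + b1 * xi))) for xi in x]

# toy data: older patients are more likely to receive the treatment
rng = random.Random(1)
age = [rng.uniform(40, 80) for _ in range(200)]
treated = [1 if rng.random() < (a - 40) / 40 else 0 for a in age]
scores = fit_propensity([(a - 60) / 10 for a in age], treated)
# matching or weighting on `scores` then balances age across groups
```

In practice a statistical package would fit the model with many covariates; the point is that comparing patients with similar scores mimics the balance randomization would have provided.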
Analysis: ITT vs. Per protocol analysis
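The distinction can be illustrated with toy data: intention-to-treat (ITT) analyzes participants as randomized, while per-protocol keeps only those who actually received their assigned treatment (records and outcomes below are invented):

```python
# each record: (assigned arm, arm actually received, outcome 1 = improved)
trial = [
    ("tx", "tx", 1), ("tx", "tx", 1), ("tx", "ctrl", 0),  # one crossover
    ("ctrl", "ctrl", 0), ("ctrl", "ctrl", 1), ("ctrl", "tx", 1),
]

def rate(records, arm, by):
    """Proportion improved in `arm`: grouped by assignment (ITT),
    or restricted to adherers (per protocol)."""
    if by == "itt":          # analyze as randomized, ignore crossovers
        group = [o for a, r, o in records if a == arm]
    else:                    # per protocol: only those who adhered
        group = [o for a, r, o in records if a == arm and r == arm]
    return sum(group) / len(group)

print(rate(trial, "tx", "itt"))  # 2/3: crossover counted against tx
print(rate(trial, "tx", "pp"))   # 1.0: crossover dropped
```

ITT preserves the benefits of randomization; per-protocol estimates the effect under full adherence but reintroduces selection bias, which is why trials often report both.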
Limitations
• Acknowledge limitations of study:
– Administrative data
– Measurement
– Sample – size, representativeness
Thank you!