Critical Appraisal of Journal Articles
Kate Keetch, PhD, Research Development Specialist
Samar Hejazi, PhD, Epidemiologist
October 25, 2017 1pm-4pm Hemlock Room, Central City
Department of Evaluation and Research Services Contact Information
Susan Chunick
Director
604.587.4681
Sara O’Shaughnessy, PhD
Research Ethics Coordinator
604.587.4436
Caroline Shaker
Program Assistant
604.587.4628
Sonia Singh, MD
Program Medical Director
604.541.5830
Kate Keetch, PhD
Research Development
Specialist
604.587.4637
Samar Hejazi, PhD
Epidemiologist
604.587.4438
Camille Viray
Education & Communications Coordinator
604.587.4413
Sarah Flann
Research Ethics Coordinator (Harmonized Studies)
604.587.7855
Anat Feldman, PhD
Contracts and Business
Development Specialist
604.587.4445
Department of Evaluation and Research Services Contact Information
Allison Lambert
Library Technician
604.520.4755
Sarah Gleeson-Noyes
Librarian (maternity leave)
604.585.5666
x772467
Michelle Purdon
Library Services
Manager
604.851.4700
x 646832
Anita Thompson
Library Technician
604.851.4700
x 646832
Marisa MacDonald
Librarian
604.795.4141
x614437
Ashley Lavoie
Library Technician
604.585.5666
x774510
Brooke Ballantyne Scott
Librarian
604.520.4755
Julie Mason
Librarian
604.412.6255
Vinny Gibson
Librarian
604.585.5666
x774511
DERS Website: http://fraserhealth.ca/research
Key Features:
• How to get started
• How we can help
• Description of services and contacts
• Tons of resources!
• Research Toolkit
• Clinical Research Start-up Toolkit
• Research Study Database
FH Research Study Database
http://researchdb.fraserhealth.ca/ersweb/
FH Methodology Unit - How can we help?
Research Development Specialist
Pre- and post-grant administration:
• Conducting a targeted search for funding opportunities
• Identifying a research team
• Facilitating letters of intent and proposal development in collaboration with researchers
• Formulating the research budget and resources
• Understanding FH and funding agency requirements regarding preparation of specific documents
• Administration of funding awards
FH Methodology Unit - How can we help?
Epidemiologist
Protocol Development Services:
• Refine ideas into a researchable question
• Refine project objectives, questions & hypotheses
• Develop study methods: study design, sample size calculation, analysis plan
Statistical Analysis Services:
• Database design, data analysis and interpretation of results
Project Dissemination Services:
• Posters, PowerPoint presentations & manuscript development
Objectives
Increase awareness of issues around quality and reporting of research evidence
Increase awareness of common research designs in health care
Use research tools that facilitate appraisal of research reports
What is your experience with critical appraisal and comfort with reading articles?
Critical Appraisal – What is it?
The assessment of evidence by systematically reviewing its relevance, validity and results to specific situations (Chambers, R., 1998)
A balanced assessment of strengths of research against its weaknesses. It is a skill that needs to be practiced.
Critical Appraisal – What it is NOT
• Negative dismissal of a piece of research
• Assessment of results alone
• Based entirely on detailed statistical analysis
• An exact science (not always a right or wrong answer)
• Undertaken only by research experts
Critical Appraisal – Why do it?
To be able to understand and assess the quality and applicability of research
• Published research may not be reliable/valid
• Published research may not be relevant
• To identify gaps in knowledge
Approaches to Critical Appraisal
• Common sense
• General appraisal tools
• Design-specific appraisal tools

Appraisal tools help you focus on the important parts of the article.
Common Sense
Group activity 1:
– Read the scenario described in the handout and answer the questions
– Report back with your comments
Wakefield et al., 1998 Lancet 351 (9103): 637–41
• Reported on observation of bowel symptoms in 12 children with ASD.
• Suggested possible link with MMR vaccine.
• Called for suspension of triple MMR vaccine.
• Based on retrospective recall by parents on time of behavioural symptom onset and MMR administration.
• Ileocolonoscopy and lumbar puncture conducted without ethical approval.
• Decline in vaccines in UK, rise in measles and two child deaths.
See Evidence for the frontline article
Lessons Learned
• Never rely on only one study
• Temptation to pick and choose which evidence to use

Instead: What is the trend in the literature?
Components of a Research Article
• Abstract
• Introduction
• Methods
• Analysis and Results
• Discussion
Assessing Bias is Key to Appraisal
Main concerns - Quality
– Validity: strength and accuracy of findings
• Internal validity – avoidance of systematic error/bias
– Example: baseline characteristics of groups are the same except for the intervention/exposure
• External validity – generalizability/applicability of findings
– Example: baseline characteristics of the study sample are similar to the general population
– Reliability: reproducibility of findings
General Appraisal
Bias
– Any systematic error in the study that leads to distortion of the results (validity)
– Can occur at any stage of research
– Can occur with all types of designs
– Usually a matter of degree of bias (i.e., minimizing)
– Different forms of bias specific to study design
Different forms of bias
Most common types:
– Selection: groups compared are different, sample unrepresentative of population
– Measurement: using a tool that is not valid, recalling past events, design and question not matched
– Intervention: no standardization of procedures, different settings used, groups not treated the same (aside from intervention)
Traditional Criteria for Judging Quantitative Research
Alternative Criteria for Judging Qualitative Research
Internal validity Credibility
External validity Transferability
Reliability Dependability
Objectivity Confirmability (Can findings be confirmed by others?)
Judging the soundness of qualitative and quantitative research
Guba, E.G., and Lincoln, Y. S. (1981) Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco, CA: Jossey-Bass.
Levels of Evidence
[Figure: hierarchy of study designs ranked by quality of evidence]
General Appraisal – Key Questions
• Was the study original?
• Whom is the study about?
• Was the design of the study sensible?
• Was systematic bias avoided or minimized?
• Was the study large enough, continued for long enough, to make results credible?
Greenhalgh, T. (1997). How to read a paper: Assessing the methodological quality of published papers. BMJ.
General Appraisal – Introduction
• Is (are) the reason(s) for the study clearly stated?
• Is the review of the literature comprehensive?
• Was there reporting bias by the authors?
http://www.dorset-pct.nhs.uk/documents/the_organisation/archive/south_and_east_dorset/policies/29ClinicalEffectivenessStrategy.pdf
General Appraisal – Methods Section
Appropriateness of study design
– Linkage to study question
– What has already been published
– Is the primary outcome appropriate (e.g., length of follow-up)?
– Appropriate data collection tools (validated?)
– Ethical considerations
Is the sample appropriate?
– Adequate description of sampling method
– Sample size justification
Recruitment methods? Randomization procedures? Blinded assessment?
General Appraisal - Quantitative Research Methods Section
• Are the measurements / data collection likely to be reliable and valid?
• Is there a description of the statistical methods used?
General Appraisal – Results Section
• Did the study proceed as planned?
• Were findings described adequately?
• Are calculations correct?
• Was statistical significance tested (quantitative research)?
General Appraisal - Conclusion/Discussion Section
• What do the main findings mean?
• How are the hypotheses interpreted (quantitative research)?
• Are the conclusions justified?
• How do the findings compare with what others have found?
• Application of findings
• Limitations
General Appraisal Tools
General appraisal tools can be used to assess the following:
Is the study trustworthy?
– Is the research question focused?
– Was the method detailed?
– How was it conducted, e.g. randomisation, blinding, recruitment and follow up?
General Appraisal Tools
What are the results?
– How were data collected and analysed?
– Are they reported to be significant?
Will the results apply to my situation?
General Appraisal – Can I use this research?
Relevance
Application
Craig & Smyth (2002). The evidence-based practice manual for nurses, p. 117.
General Appraisal using tool
Group activity 2
– Use General Appraisal Checklist tool to critically appraise Chocolate study
– Try to answer as many questions as possible
– Report back
Break!
Appraisal of Specific Study Designs
Quantitative
• Analyzes numbers
• Explanation, prediction
• Test theories
• Known variables
• Larger sample
• Standardized instruments
• Deductive

Qualitative
• Analyzes words
• Explanation, description
• Build theories
• Unknown variables
• Smaller sample
• Observations, interviews
• Inductive
Crash Course in Qualitative Research Designs
Qualitative Research
An exploratory approach
– To understand meaning through description
– Experiences, perceptions, feelings, motives
– Narratives, phenomenologies, ethnographies, grounded theory, case studies
– Not frequency
Possible components of Qualitative articles
Frameworks
– Ethnography
– Case studies
– Phenomenology
– Grounded Theory

Sampling
– Convenience
– Purposeful
– Theoretical
– Snowball

Methods
– Observation
– Interview
– Focus Group
– Written Text

Analysis
– Content analysis
– Thematic analysis
Critical Appraisal of Qualitative Research
• Purpose
• Literature review
• Study design
• Sampling
• Data collection
• Procedural rigour
• Analytical rigour
• Auditability
• Theoretical connections
• Trustworthiness
• Conclusions and Implications
Critical Review Form – Qualitative Studies (Version 2.0). Letts, L., Wilkins, S., Law, M., Stewart, D., Bosch, J., & Westmorland, M. (2007). McMaster University.
Qualitative Appraisal
Group activity 3
– Read and appraise qualitative study using qualitative appraisal tool
– Report back
Crash Course in Quantitative Research Designs
Quantitative Studies
• Randomized Controlled Trial
• Cohort
• Case-control
• Case Study
• Single Case Design
• Before and After
• Cross-Sectional
Randomized Controlled Trial
• Experimental studies where effects of an intervention are assessed by collecting data before and after the intervention has taken place
• Used to compare one group of participants who receive the intervention with a control group who do not
• Investigator randomly assigns participants into treatment group and “control” group
– Makes groups “comparable”
Blinding of participants and/or researchers
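As an illustration of the random-assignment step, here is a minimal block-randomization sketch in Python (the function name, block size, and arm labels are assumptions for this example, not part of any FH procedure):

```python
import random

def block_randomize(participant_ids, block_size=4, seed=42):
    """Assign participants to 'treatment' or 'control' in balanced blocks.

    Block randomization keeps the two arms the same size within each
    block, so the groups stay comparable even if enrolment stops early.
    """
    rng = random.Random(seed)  # fixed seed only so the demo is reproducible
    allocation = {}
    for start in range(0, len(participant_ids), block_size):
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)  # random order within the block
        for pid, arm in zip(participant_ids[start:start + block_size], block):
            allocation[pid] = arm
    return allocation

alloc = block_randomize([f"P{i:02d}" for i in range(8)])
```

In a real trial the allocation sequence would be generated and concealed by someone independent of recruitment, which is the allocation-concealment safeguard raised in the bias slides.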
Randomized Controlled Trial- Bias
Contamination: intervention and control in contact or control receive treatment accidentally
Co-intervention: participant receiving other treatment at the same time
Timing of intervention: interventions occurring over long versus short time periods (maturation in children)
Site effects/characteristics: hospital size; homes versus hospital units
Selection bias: minimized through use of probability sampling strategies
Randomized Controlled Trial
Performance bias: participants’ knowledge of their group assignment; inadequate standardization of procedures
Assessment bias:
– Knowledge of group assignment leads to over- or under-estimation of the effects of the intervention
• Concealment of group allocation from the clinicians, researchers, patients, data collectors and others
– Differences in the assessment of outcomes between different assessors
• Standardize methods of assessment across the groups
Attrition bias: completeness of follow-up & systematic differences in withdrawals
Cohort Study Design
Observational, analytic study
A defined group of people (the cohort) who do not have the disease or outcome of interest is followed over time
Starting point is exposure
– Compare outcomes of those exposed or not exposed (e.g., to drug, intervention) to disease/study outcome
Designed retrospectively or prospectively; prospective designs provide higher quality data
Cohort Study
[Diagram: starting with exposure status, the cohort splits into exposed and not exposed groups, which are followed over time to see who develops the disease/study outcome]
Cohort Study - Biases
Selection bias: groups differ in important ways other than intervention/exposure
– Comparable in age, sex, SES, presence of comorbidities
– Transparency of procedures used to select the comparison group
Loss to follow up: described and appropriately handled
– All participants accounted for
– Attrition analyses carried out: explore systematic differences between those who remained in the study and those who dropped out
– Reasons for drop-out described
– Drop-out over 20% affects validity
Recall bias in retrospective studies
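To make the 20% drop-out rule of thumb concrete, a short sketch (the function names and threshold default are illustrative, not from the slides):

```python
def attrition_rate(n_enrolled, n_completed):
    """Fraction of enrolled participants lost to follow-up."""
    return 1 - n_completed / n_enrolled

def attrition_flag(n_enrolled, n_completed, threshold=0.20):
    """True when drop-out exceeds the common 20% validity threshold."""
    return attrition_rate(n_enrolled, n_completed) > threshold

# Example: 200 enrolled, 150 completed -> 25% drop-out, which exceeds 20%
print(attrition_flag(200, 150))  # True
```

The flag is only a screening aid; whether attrition actually biases results depends on whether those who dropped out differ systematically from those who remained, which is what an attrition analysis examines.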
Case-Control Study
Purpose is to establish association between risk factors/disease and exposure
Starting point is outcome/disease
– Compare people with a specific outcome of interest (cases) to people from the same population without that disease or outcome (controls)
Used to investigate rare diseases or when little is known about associations
Care in the selection of cases and controls, defining the disease, risk factors, identifying confounders
Case-Control Study
[Diagram: starting with outcome status, cases and controls are identified, then their exposure histories are compared to assess the relationship]
Case-Control Study - Bias
• The misidentification of when the individual becomes a “case”
• Prevalence-incidence bias: any risk factors resulting in death are less represented
– e.g., survival disadvantage of those with emphysema and smoking
• Selection bias: cases & controls differ in important ways other than exposure
– Controls should be selected from the same population at risk
– Controls should not have the disease
– Investigators blinded to case/control status
• Validity of retrospective data:
– Recall bias
– Data collected for other purposes
Descriptive Studies: Cross-sectional
Observational, descriptive study: no hypothesis
Also referred to as prevalence studies
Snapshot of the phenomena at a specified point in time or over a short period
– Exposures and outcomes are measured at the same time
– Cannot establish temporal relation (causality)
Surveys, questionnaires commonly used
Descriptive Studies- Biases
Sample selection bias: convenience sampling less representative of the general population (use multistage sampling methods)
Self-reporting bias/social desirability
Recall bias
Measurement bias: use of surveys to collect data
– Adopt standardised and validated methods/tools
– Use objective measures
Descriptive Studies- Biases
Temporal bias: consider when the study is looking at causal relationships – the sequence of exposure and outcome is uncertain
Confounding bias: consider when comparing groups
– Example: high incidence of autism in certain neighbourhoods & school locations
Appraising a Quantitative Article - Sample size
• Calculation described and justified
• Parameters used to calculate size are described (e.g., mean difference; standard deviations; CI)
• Calculations compatible with design and analyses
• Adjustment for follow-up in longitudinal studies
• Underpowered studies may lead to Type II error: false negative
• Overpowered studies may lead to Type I error: false positive
Appropriateness of analysis: see statistical decision tree (will email out)
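As a concrete example of how the parameters listed above feed a sample size calculation, here is a normal-approximation sketch for comparing two means (the function name and default values are illustrative assumptions, not an FH tool):

```python
import math
from statistics import NormalDist

def n_per_group(mean_difference, sd, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of means.

    Uses the standard normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2
    Too small an n (underpowering) raises the risk of a Type II error.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    n = 2 * ((z_alpha + z_beta) * sd / mean_difference) ** 2
    return math.ceil(n)

# Detecting a 5-point difference with SD 10, alpha 0.05, power 0.80:
print(n_per_group(5, 10))  # 63 participants per group
```

Note how the required n scales with the square of sd/delta: halving the detectable difference quadruples the sample size, which is why the parameters must be reported and justified.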
Appraisal of Quantitative Research - Results
Drop-outs
– Reasons for drop-outs described?
– How are drop-outs handled in the analysis?
Intention to treat analysis
– Based on the initial treatment assignment and not on actual treatment received or deviation from protocol
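A minimal sketch of the difference between intention-to-treat and per-protocol grouping (the record fields and helper name are assumptions for illustration):

```python
def mean_outcome_by_group(records, group_key):
    """Average the outcome within each group defined by group_key."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r["outcome"])
    return {g: sum(vals) / len(vals) for g, vals in groups.items()}

# One participant assigned to treatment crossed over to control.
records = [
    {"assigned": "treatment", "received": "treatment", "outcome": 12.0},
    {"assigned": "treatment", "received": "control",   "outcome": 6.0},
    {"assigned": "control",   "received": "control",   "outcome": 5.0},
    {"assigned": "control",   "received": "control",   "outcome": 7.0},
]

itt = mean_outcome_by_group(records, "assigned")           # intention-to-treat
per_protocol = mean_outcome_by_group(records, "received")  # actual treatment
```

Analysing by initial assignment (ITT) preserves the comparability created by randomization; analysing by treatment actually received can reintroduce the very selection effects randomization was meant to remove.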
Grading of Recommendations Assessment, Development and Evaluation (GRADE)
“An internationally recognized system for rating quality of evidence and grading strength of recommendations for clinical practice guidelines that examine alternative management strategies or interventions, which may include no intervention or current best management”*
*Guyatt, G. et al. GRADE guidelines: 1. Introduction - GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology 64 (2011) 383-394.
A 5 step framework for assessing/judging quality of a body of research evidence
Adopted by FHA to standardize the CDSTs processes/development
Grading of Recommendations Assessment, Development and Evaluation (GRADE)
Step 1: Assign a priori ranking
Step 2: Downgrade/Upgrade
Step 3: Assign final grade
(Repeat Steps 1 to 3 for each critical outcome of interest)
Step 4: Consider factors affecting recommendations
Step 5: Make recommendation
Background: GRADE Approach
Randomized controlled trials: RATE HIGH
Observational studies: RATE LOW
Downgrade for:
– Risk of bias – lack of: (a) clearly randomized allocation sequence, (b) blinding, (c) allocation concealment, (d) adherence to intention-to-treat analysis, (e) validated outcome measures; AND (f) trial is cut short, (g) large losses to follow up, (h) outcome reporting is selective
– Inconsistency
– Indirectness
– Imprecision
– Publication bias
Upgrade for:
– Large consistent effect
– Dose response
– Confounders only reducing size of effect
Final quality grades: High, Moderate, Low, Very low
Factors affecting recommendations:
– Balance of desirable & undesirable effects
– Cost-effectiveness
– Patient preferences
Strength of recommendation: Strong for using; Weak for using; Strong against using; Weak against using
Adapted from: Goldet and Howick, Journal of Evidence-based Medicine, 6:50-54, 2013.
GRADE explicitly separates the process of assessing the quality of a body of evidence FROM the process of making recommendations based on those assessments.
GRADE Resources
Large Group Activity 4
Examples of problematic descriptions of methods
Worth Investigating
• Critical Appraisal Checklists
http://mande.co.uk/blog/wp-content/uploads/2011/09/2009-Methodology-checklist-qualitative-studies-NICE.pdf
http://www.cebm.net/index.aspx?o=1157
http://onlinelibrary.wiley.com/doi/10.1002/9780470987407.app2/pdf
• Critical Appraisal Guidelines (GRADE)
http://www.gradeworkinggroup.org/