
9th EVIDENCE–BASED CLINICAL PRACTICE WORKSHOP

For Clinical Decision Making and Health Management

IDENTIFY – APPRAISE – INTERPRET – ELABORATE – IMPLEMENT

AMIL LIFESCIENCE

Suzana Alves da Silva Peter Wyer

Rio de Janeiro, 2015


Section II

CRITICAL APPRAISAL OF SCIENTIFIC LITERATURE

Silva, S. A. and Wyer, P. 9th Workshop on Evidence-Based Clinical Practice for Clinical Decision Making and Health Management. Rio de Janeiro. Sylabbus; 2015: 1-104


4 PRINCIPLES OF CRITICAL APPRAISAL

Methodological validity is the main component of critical appraisal of the literature and includes analysis of the methodological limitations of different types of study. We will use the principle of beginning, middle, and end as a timeframe for analyzing each stage of study conduct: patient selection and allocation at the beginning, assessment of individual outcomes while the study progresses, and analysis of pooled results after the study finishes, as shown in Figure 2.

Figure 2. Main items to evaluate when appraising the validity of a study in search of possible sources of bias, within the timeframe of beginning, middle, and end: patient selection and allocation after the study starts, assessment of individual outcomes while the study progresses, and analysis of pooled results after the study finishes.


5 RANDOMIZED CONTROLLED TRIALS

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 6: 67-86.

5.1 OBJECTIVES

- To demonstrate how to critically appraise a randomized controlled trial.
- To demonstrate how to interpret its results.
- To demonstrate how to apply its results for decision making.


5.2 MAIN TOPICS

5.2.1 Adequacy of randomization

- Concealment
- Stratified randomization
- Balance of prognostic factors

5.2.2 How to deal with trials that were not blinded?

- Blinding levels
- Co-interventions

5.2.3 How to assess the adequacy of outcome assessment?

5.2.4 How to deal with loss to follow-up?

- Sensitivity analysis

5.2.5 Why intention-to-treat and not per-protocol analysis?

5.2.6 Why should randomized trials not be stopped early for benefit?

5.2.7 When to believe in subgroup analyses?

5.2.8 Be careful with combined endpoints!


5.3 CHECKLIST

Is the study valid?

At the beginning, did the intervention and control groups start with the same prognosis?
- Were patients appropriately randomized?
- Was randomization concealed?
- Were patients in the study groups similar with respect to known prognostic factors?
- Is there a rationale for adopting a cluster design (threat of contamination of some interventions, or is it the only feasible method of conducting the trial)?
- Were the effects of clustering incorporated into the sample size calculations (intracluster correlation coefficient)?

In the middle, was prognostic balance maintained as the study progressed?
- To what extent was the study blinded (patients, caregivers, data collectors, outcome assessors, investigators, and statisticians)?
- If the study was not sufficiently blinded, were the groups balanced with regard to co-interventions and frequency of follow-up?
- Were outcomes assessed appropriately? What methods were used to enhance the quality of measurements (e.g., multiple observations, training of assessors)?

At the end, were the groups prognostically balanced at the study's completion?
- Was follow-up complete?
- Were patients analyzed in the groups to which they were randomized?
- Was the trial stopped early?
- Was the analysis by intention to treat?
- Were the effects of clustering incorporated into the analysis?

Interpretation of the clinically relevant outcomes

- What are the magnitude and precision of the results related to the relevant outcomes?

Applicability of the results

- Was the study PICO question similar to my PICO question?
- Were all important outcomes considered? If a composite endpoint was used, how does it relate to the important outcomes?
- Is the intervention feasible?
- Are the likely intervention benefits worth the potential harms and costs?
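The magnitude-and-precision question above can be made concrete with a small worked example. The sketch below uses hypothetical 2×2 counts (not from any real trial) to compute the relative risk, absolute risk reduction, number needed to treat, and a normal-approximation 95% confidence interval for the risk difference:

```python
import math

# Hypothetical RCT results (illustrative numbers only):
# 15/200 events in the intervention arm, 30/200 in the control arm.
events_tx, n_tx = 15, 200
events_ctl, n_ctl = 30, 200

risk_tx = events_tx / n_tx      # 0.075
risk_ctl = events_ctl / n_ctl   # 0.15

rr = risk_tx / risk_ctl         # relative risk = 0.50
arr = risk_ctl - risk_tx        # absolute risk reduction = 0.075
nnt = 1 / arr                   # number needed to treat ~ 13

# 95% confidence interval for the ARR (normal approximation)
se = math.sqrt(risk_tx * (1 - risk_tx) / n_tx + risk_ctl * (1 - risk_ctl) / n_ctl)
ci = (arr - 1.96 * se, arr + 1.96 * se)

print(f"RR = {rr:.2f}, ARR = {arr:.3f}, NNT = {nnt:.0f}")
print(f"95% CI for ARR: {ci[0]:.3f} to {ci[1]:.3f}")
```

A confidence interval that excludes no effect (ARR = 0) indicates a statistically significant result; here the interval runs from about 0.014 to 0.137, so the estimated NNT of about 13 could plausibly range from roughly 7 to 74 (the reciprocals of the interval bounds).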


6 RANDOMIZED CONTROLLED TRIALS OF NONINFERIORITY

The Users' Guides to the Medical Literature. JAMA 2012; 308(24): 2605-2611. PDF available in the supplementary material on the website.

6.1 OBJECTIVES

- To demonstrate how to critically appraise a randomized controlled trial of noninferiority.
- To demonstrate how to interpret its results.
- To demonstrate how to apply its results for decision making.


6.2 MAIN TOPICS

6.2.1 How to interpret noninferiority margins?

6.2.2 Why per-protocol instead of intention-to-treat analysis?

6.3 CHECKLIST

Is the study valid?

- Did the investigators guard against an unwarranted conclusion of noninferiority by enrolling an appropriate spectrum of patients?
- Was the randomization process adequate? Was randomization concealed?
- Were patients in the study groups similar with respect to known prognostic factors?
- To what extent was the study blinded (patients, caregivers, data collectors, outcome assessors, investigators, and statisticians)?
- If the study was not sufficiently blinded, were the groups balanced with regard to co-interventions and frequency of follow-up?
- What methods were used to enhance the quality of outcome assessment (multiple observations, training of assessors)?
- Did the investigators guard against an unwarranted conclusion of noninferiority by preserving the effect of the standard treatment?
- Was follow-up complete?
- Was follow-up sufficiently long?
- Did the investigators analyze patients according to the treatment they received (per protocol), as well as according to the groups to which they were assigned (intention to treat)?
- Was the trial stopped early?

Interpretation of the clinically relevant outcomes

- Was the noninferiority margin established in an appropriate manner?
- How large and how precise was the intervention effect?

Applicability of the results

- Was the study question similar to my PICO question?
- Were all important outcomes considered? If a composite endpoint was used, how does it relate to the important outcomes?
- Is the intervention feasible?
- Are the likely advantages of the novel treatment worth the potential harms and costs?
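The role of the noninferiority margin can be illustrated with a small sketch. All numbers below are hypothetical; the point is that noninferiority is judged against the confidence interval, not the point estimate. The new treatment is declared noninferior only if the entire interval for the risk difference lies below the prespecified margin:

```python
import math

# Hypothetical noninferiority comparison (illustrative numbers only):
# new treatment 22/200 events, standard treatment 20/200 events,
# prespecified noninferiority margin for the risk difference: +0.05.
events_new, n_new = 22, 200
events_std, n_std = 20, 200
margin = 0.05

risk_new = events_new / n_new   # 0.11
risk_std = events_std / n_std   # 0.10
diff = risk_new - risk_std      # positive means the new treatment is worse

# Normal-approximation standard error and the upper 95% confidence bound
se = math.sqrt(risk_new * (1 - risk_new) / n_new + risk_std * (1 - risk_std) / n_std)
upper = diff + 1.96 * se

# Noninferiority holds only if the whole interval stays below the margin.
noninferior = upper < margin
print(f"risk difference = {diff:.3f}, upper bound = {upper:.3f}, noninferior: {noninferior}")
```

Here the observed risk difference (0.01) is well below the margin (0.05), but the upper confidence bound (about 0.07) crosses it, so noninferiority is not established. This is also why per-protocol analysis matters in this design: an intention-to-treat analysis diluted by non-adherence tends to shrink differences between groups and can spuriously favor a conclusion of noninferiority.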


7 CASE-CONTROL STUDIES FOR CAUSAL RELATIONSHIP

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 12: 363-381.

7.1 OBJECTIVES

- To demonstrate how to critically appraise a case-control study assessing a causal relationship between exposure to an intervention or to an environmental or behavioural factor and an outcome of interest.
- To demonstrate how to interpret the results of a case-control study of causal relationship.
- To demonstrate how to apply its results for decision making.

Studies of this type usually answer questions under the domains of therapy and harm.


7.2 CHECKLIST

Is the study valid?

At the beginning, during patient selection and inclusion
- Were cases and controls similar with respect to the indication or circumstances that would lead to exposure? How were cases and controls matched?

In the middle, while the study progressed
- Did assessment guarantee that the circumstances and methods for determining exposure were similar for cases and controls?

At the end, during results analysis
- Did statistical adjustment control for imbalances of prognostic factors and inequalities of exposure?

Interpretation of the clinically relevant outcomes

- What was the association between the exposure and the relevant outcomes?

Applicability of the results

- Was the study question similar to my PICO question?
- Is the exposure similar to what might occur in my population?
- What is the magnitude of the risk?
- Are there any benefits known to be associated with the exposure?


8 COHORT STUDIES FOR CAUSAL RELATIONSHIP

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 12: 363-381.

8.1 OBJECTIVES

- To demonstrate how to critically appraise a cohort study assessing the causal relationship between exposure to an intervention or to a behavioural or environmental factor and an outcome of interest.
- To demonstrate how to interpret the results of a cohort study of causal relationship.
- To demonstrate how to apply its results for decision making.


8.2 CHECKLIST

Is the study valid?

At the beginning, at inclusion
- Were patients similar with respect to prognostic factors known to be associated with the outcome?

In the middle, during study execution
- Did assessment guarantee that the circumstances and methods for detecting the outcome were similar among patients?

At the end, during results analysis
- Did statistical adjustment control for imbalances of prognostic factors and inequalities of exposure?
- Was follow-up sufficiently complete?

Interpretation of the clinically relevant outcomes

- How strong and how precise was the association between exposure and outcome?

Applicability of the results

- Was the study question similar to my PICO question?
- Was follow-up sufficiently long?
- Is the exposure similar to what might occur in my context?
- What is the magnitude of the risk?
- Are there any benefits known to be associated with the exposure?


9 COHORT STUDIES FOR PROGNOSIS

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 18: 509-520.

9.1 OBJECTIVES

- To demonstrate how to critically appraise a cohort study assessing the likelihood of occurrence of an outcome over time.
- To demonstrate how to interpret the results of a cohort study for prognostic assessment.
- To demonstrate how to apply its results for decision making.


9.2 CHECKLIST

Is the study valid?

At the beginning, aside from the exposure of interest, did the exposed and control groups start with the same risk for the outcome?
- Was the sample of patients representative?
- Were patients sufficiently homogeneous with respect to prognostic risk?

In the middle, was prognostic balance maintained as the study progressed?
- Were outcome criteria objective and unbiased?

At the end, aside from the exposure of interest, did the exposed and control groups finish with the same risk for the outcome?
- Was follow-up sufficiently complete?

Interpretation of the clinically relevant outcomes

- How likely are the relevant outcomes over time? How precise are the estimates of likelihood?

Applicability of the results

- Was the study question similar to my PICO question?
- Was follow-up sufficiently long?
- Can I use the results in managing patients in my setting?


10 PREDICTION RULES FOR PROGNOSIS

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 17: 491-505.

10.1 OBJECTIVES

- To introduce the phases of development of a clinical prediction rule: derivation, validation, and impact assessment.
- To demonstrate how to critically appraise a clinical prediction rule for prognosis.
- To demonstrate how to apply its results for decision making.


10.2 CHECKLIST

Is the study valid?

At the beginning, during the derivation of the rule, usually performed through the retrospective use of statistical techniques on an original database (Level IV rule):
- Were all important predictors included in the derivation process?
- Were all important predictors present in a significant proportion of the study population?
- Does the rule make clinical sense?
- How well did the study design used to derive the rule guard against bias?

In the middle, during the validation of the rule, which includes prospective studies on several populations different from the one used to derive it (Level II rule) or is restricted to a single population (Level III rule):
- Has the rule undergone at least one prospective validation in a population separate from the derivation set?
- Has the rule shown accuracy either in one large prospective multicenter study including a broad spectrum of patients and clinicians, or in several smaller settings that differ from one another?
- Were patients chosen in an unbiased fashion, and do they represent a wide spectrum of disease severity?
- Was there a blinded assessment of the outcome event (or was the outcome all-cause mortality) for all patients?
- Was there an explicit and accurate interpretation of the predictor variables and the actual rule without knowledge of the outcome? If the study was prospective, was patient assessment adequate?
- How well did the study design used to validate the rule guard against bias?

At the end, during the impact validation of the rule, which includes at least one prospective validation in a different population plus one impact analysis, along with a demonstration of change in clinician behavior with beneficial consequences (Level I rule):
- Has the rule undergone at least one impact analysis that demonstrates a change in clinician behavior with beneficial consequences?
- How well did the study design used to evaluate the utility of the rule guard against bias?

Interpretation of the clinically relevant outcomes

- What are the overall results of the performance or impact validation of the rule, and how precise are they?

Applicability of the results

- Was the study question similar to my PICO question?
- Were all important outcomes considered?
- Can the rule be used in a wide variety of settings with confidence that it can change clinician behavior, facilitate patient decision making, improve patient outcomes, or reduce costs? If the impact of the rule on patient-important outcomes is uncertain, can clinicians at least use the rule with confidence in its performance? May it be applied in various settings, or should it be restricted to patients very similar to those included in the study?
- Are the likely benefits worth the potential costs and risks?
- What is the overall quality of the evidence, taking into consideration risk of bias, magnitude of effect, precision, consistency of results across studies, and indirectness?
- Is the implementation of the rule feasible?


11 CROSS-SECTIONAL STUDIES FOR DIAGNOSIS PERFORMANCE

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 16: 419-438.

11.1 OBJECTIVES

- To demonstrate how to critically appraise a cross-sectional study assessing the performance of a diagnostic test, screening test, or prediction rule.
- To demonstrate how to interpret the results of a cross-sectional study of performance.
- To demonstrate how to apply its results for decision making.

11.2 INTRODUCTION

Properly determining the accuracy of a test (sensitivity, specificity, and likelihood ratios) depends on an appropriate methodology for evaluating the test, following standards that minimize the possibility of bias in studies of this nature, which generally use a cross-sectional design. These standards were previously defined in the STARD protocol.44-46


According to Jaeschke et al.,47 the possibility of bias in cross-sectional designs evaluating the accuracy of diagnostic tests is minimized if:

1) the patients included in the analysis fall within the same spectrum of diagnostic uncertainty;

2) all patients included in the analysis undergo, independently, both the test under evaluation and the reference standard (or the criteria that define the definitive diagnosis);

3) the final results of both the test under evaluation and the reference standard (or of the criteria that define the definitive diagnosis) are interpreted in a paired fashion by independent observers, each blinded to the result of the other test.

Respecting these three items in the methodology for analyzing the accuracy of a diagnostic test minimizes systematic error, preventing the test's sensitivity, specificity, and resulting likelihood ratios from being underestimated, overestimated, or erroneously calculated for the population in question.

This form is intended to help you critically appraise cross-sectional studies evaluating the performance of diagnostic interventions.
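The three standards above concern how sensitivity, specificity, and likelihood ratios are measured; the quantities themselves come from a simple 2×2 table. The sketch below, with hypothetical counts, computes them and shows how a likelihood ratio converts a pretest probability into a posttest probability via odds:

```python
# Hypothetical 2x2 table for a diagnostic test against a reference standard
# (illustrative numbers only):
tp, fp = 90, 30    # test positive: with disease / without disease
fn, tn = 10, 170   # test negative: with disease / without disease

sensitivity = tp / (tp + fn)               # 0.90
specificity = tn / (tn + fp)               # 0.85
lr_pos = sensitivity / (1 - specificity)   # LR+ = 6.0
lr_neg = (1 - sensitivity) / specificity   # LR- ~ 0.12

# Applying LR+ to a pretest probability of 30% via odds:
pretest = 0.30
pretest_odds = pretest / (1 - pretest)
posttest_odds = pretest_odds * lr_pos
posttest = posttest_odds / (1 + posttest_odds)

print(f"sens = {sensitivity:.2f}, spec = {specificity:.2f}")
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, posttest probability = {posttest:.2f}")
```

With LR+ = 6.0, a pretest probability of 30% rises to 72% after a positive test; the symmetric calculation with LR− (about 0.12) would lower it after a negative test. The biases described above distort exactly these numbers, which is why the checklist that follows probes each of the three standards.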


11.3 CHECKLIST

Is the study valid?

At the beginning, during patient selection and inclusion
- Did participating patients present a diagnostic dilemma?

In the middle, during study execution
- Did the investigators compare the test to an appropriate, independent reference standard?
- Did the investigators apply the same reference standard to all patients, regardless of the results of the test under investigation ("work-up bias")?

At the end, during results analysis
- Were those interpreting the test and the reference standard blind to the other's results?
- If follow-up was part of the criterion standard, was it complete?

Interpretation of the clinically relevant outcomes

- What are the magnitude and precision of the results related to the relevant outcomes?

Applicability of the results

- Will the reproducibility of the test result and its interpretation be satisfactory in my clinical setting?
- Is the study question similar to my PICO question?
- Will the test results change my management strategy?
- Will patients be better off as a result of the test?


12 CROSS-SECTIONAL STUDIES FOR DIFFERENTIAL DIAGNOSIS

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 15: 407-417.

12.1 OBJECTIVES

- To demonstrate how to critically appraise a cross-sectional study assessing differential diagnoses.
- To demonstrate how to interpret the results of studies of differential diagnosis.
- To demonstrate how to apply its results for decision making.


12.2 CHECKLIST

Is the study valid?

At the beginning, during patient selection and inclusion
- Did the investigators enroll the right patients? (Was the patient sample representative of those with the clinical problem?)

In the middle, during study execution
- Was the definitive diagnostic process credible? (Sufficiently comprehensive, applied to all patients, with explicit criteria for diagnoses, reproducible diagnostic labels, and few patients left undiagnosed?)

At the end, during results analysis
- For initially undiagnosed patients, was follow-up sufficiently long?
- Was follow-up complete?

Interpretation of the clinically relevant outcomes

- What are the results related to the main etiologies?

Applicability of the results

- Are the study patients similar to those in my own practice?
- Is it unlikely that the disease possibilities or probabilities have changed since this evidence was gathered?


13 CLINICAL PREDICTION RULES FOR DIAGNOSIS

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 17.4: 491-505.

13.1 OBJECTIVES

- To introduce the phases of development of a prediction rule for diagnosis: derivation, validation, and impact assessment.
- To demonstrate how to critically appraise a prediction rule for diagnosis.
- To demonstrate how to interpret the results of a prediction rule for diagnosis to support decision-making processes.


13.2 CHECKLIST

Is the study valid?

At the beginning, during the derivation of the rule, usually performed through the retrospective use of statistical techniques on an original database (Level IV rule):
- Were all important predictors included in the derivation process?
- Were all important predictors present in a significant proportion of the study population?
- Does the rule make clinical sense?
- How well did the study design used to derive the rule guard against bias?

In the middle, during the validation of the rule, which includes prospective studies on several populations different from the one used to derive it (Level II rule) or is restricted to a single population (Level III rule):
- Has the rule undergone at least one prospective validation in a population separate from the derivation set?
- Has the rule shown accuracy either in one large prospective multicenter study including a broad spectrum of patients and clinicians, or in validations in several smaller settings that differ from one another?
- Were patients chosen in an unbiased fashion, and do they represent the population with diagnostic uncertainty?
- Was there a blinded assessment of the criterion standard for all patients?
- Was there an explicit and accurate interpretation of the predictor variables and the actual rule without knowledge of the outcome? If the study was prospective, was patient assessment adequate?
- How well did the study design used to validate the performance of the rule guard against bias?

At the end, during the impact validation of the rule, which must include at least one prospective validation in a different population plus one impact analysis, along with a demonstration of change in clinician behavior with beneficial consequences (Level I rule):
- Was the impact of the rule on clinical practice tested?
- How well did the study design used to evaluate the utility of the rule guard against bias?

Interpretation of the clinically relevant outcomes

- Are the results found in the validation of the rule consistent with those found in its derivation?
- What are the magnitude and precision of the results related to the relevant outcomes?

Applicability of the results

- Was the study question similar to my PICO question?
- Were all important outcomes considered?
- Can the rule be used in a wide variety of settings with confidence that it can change clinician behavior, facilitate patient decision making, improve patient outcomes, or reduce costs? If the impact of the rule on patient-important outcomes is uncertain, can clinicians at least use the rule with confidence in its performance?
- May it be applied in various settings, or should it be restricted to patients very similar to those included in the study?
- Are the likely benefits worth the potential costs and risks?
- What is the overall quality of the evidence, taking into consideration risk of bias, magnitude of effect, precision, consistency of results across studies, and indirectness?
- Is the implementation of the rule feasible?


14 SYSTEMATIC REVIEWS

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 19: 523-542 / 20: 543-593.

14.1 OBJECTIVES

- To demonstrate how to critically appraise a systematic review.
- To demonstrate how to interpret its results.
- To demonstrate how to apply its results for decision making.

14.2 CHECKLIST

Is the study valid?

At the beginning, during patient selection and inclusion
- Did the review explicitly address a sensible clinical question?
- Were the primary studies of high methodological quality?

In the middle, during study execution
- Was the search for relevant studies detailed and exhaustive?
- Were the selection and assessment of studies and the data collection reproducible?

At the end, during results analysis
- Were the results similar from study to study?

Interpretation of the clinically relevant outcomes

- What are the magnitude and precision of the results related to the relevant outcomes?

Applicability of the results

- Were all the relevant outcomes considered?
- Are the effects attributed to subgroups reliable?
- What is the overall quality of the evidence?
- Are the benefits worth the costs and potential risks?


15 CLINICAL DECISION ANALYSES

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 21: 597-615.

15.1 OBJECTIVES

- To demonstrate how to critically appraise a decision analysis.
- To demonstrate how to interpret its results.
- To demonstrate how to apply its results for decision making.

15.2 CHECKLIST

Is the study valid?

At the beginning, during patient selection and inclusion
- Were all important strategies and outcomes included?

In the middle, during study execution
- Were the utilities obtained in an explicit and sensible way from credible sources?
- Was an explicit and sensible process used to identify, select, and combine the evidence into probabilities?

At the end, during results analysis
- Was the potential impact of any uncertainty in the evidence determined?

What are the recommendations?

- In the baseline analysis, does one strategy result in a clinically important gain for patients? If not, does it result in a "toss-up"?

Applicability of the results

- Are the estimates of likelihood coherent with the clinical characteristics of my setting?
- Do the utilities reflect how my patient would value the outcomes of the decision?


16 ECONOMIC ANALYSES

The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 22.1: 619-642.

16.1 OBJECTIVES

- To demonstrate how to critically appraise an economic analysis.
- To demonstrate how to interpret its results.
- To demonstrate how to apply its results for decision making.

Silva, S. A. and Wyer, P. 9th Workshop on Evidence-Based Clinical Practice for Clinical Decision Making and Health Management. Rio de Janeiro. Sylabbus; 2015: 1-104

65

16.2 CHECKLIST

Is the study valid?

At the beginning, during patient selection and inclusion
- Did the investigators state their conflicts of interest?
- Did the study explicitly address a sensible clinical question, posed in answerable form?
- Did the investigators adopt a sufficiently broad viewpoint?

In the middle, during study execution
- Is there a systematic review and/or summary of evidence linking options to outcomes for each relevant question?
- Were the selected studies of high methodological quality?
- Were costs and consequences measured accurately?

At the end, during results analysis
- Are the results reported separately for relevant patient subgroups?

Interpretation of the clinically relevant outcomes

- What were the incremental costs and effects of each strategy?
- Do incremental costs and effects differ between subgroups, i.e., is there any difference between the subgroups' outcomes and costs, and was this difference considered in the analysis?
- How much does allowance for uncertainty change the results?

Applicability of the results

- Did the investigators report all issues of concern to patients? Could my patients expect similar health outcomes?
- Are the treatment benefits worth the harms and the costs?
- Can I expect similar costs in my setting?


17 HEALTH TECHNOLOGY ASSESSMENTS

Based on the INAHTA guidelines (http://www.inahta.org/HTA/Checklist) and on The Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 21: 597-618 / 22.4: 679-702.

17.1 OBJECTIVES

- To demonstrate how to critically appraise a health technology assessment.
- To demonstrate how to interpret its results.
- To demonstrate how to apply its results for decision making.


17.2 CHECK LIST

Is the study valid?

At the beginning, during patient selection and inclusion
- Was there a statement regarding conflict of interest and sponsorship?
- Did the assessment explicitly address sensible questions (PICO table) for all pertinent action domains (therapy, diagnosis, prognosis and harm)?
- Was there a systematic and explicit approach to evaluate the quality of the evidence and to grade the strength of the recommendations?

In the middle, during study execution
- Was an explicit and sensible process used to identify, select and combine evidence?
- Was the search for relevant studies detailed and exhaustive?
- Were the primary studies of high methodological quality?
- Are there systematic reviews of evidence that estimate the relative effect of management options on relevant outcomes?
- Were economic analyses and social implications addressed?

At the end, during results analysis
- Were the results similar from study to study?
- Was there a description of the evidence?
- Was there a short summary that can be understood by a non-technical reader, and were practical, clinically important recommendations made?
- Were the findings of the assessment discussed, and were the conclusions clearly stated?

Interpretation of relevant outcomes

- What are the magnitude and precision of the results related to the relevant outcomes?
- What is the impact of uncertainty associated with the evidence and values used?
- Was there an interpretation of the assessment results?

Applicability of the results

- What was the scope or perspective of the assessment (government, consumers, payers)?
- Was there a report regarding the feasibility of and access to the technology?
- Are the recommendations applicable to your patients?
- How strong are these recommendations?
- Were there suggestions for further action?


Section IV

DEVELOPING POLICIES AND RECOMMENDATIONS USING GRADE


20 THE GRADE APPROACH FOR DEVELOPING RECOMMENDATIONS

The User's Guides To The Medical Literature: A Manual for Evidence-Based Clinical Practice, second edition. 22.4:679-702.

20.1 OBJECTIVES

- To demonstrate how to assess the quality of the evidence available to answer each relevant question.
- To demonstrate how to develop the strength of a recommendation.
- To discuss the relationship between quality of evidence and strength of recommendation.

Can there be a strong recommendation in the presence of low-quality evidence?

20.2 ELABORATION OF WELL-STRUCTURED CLINICAL QUESTIONS AND IDENTIFICATION OF THE RELEVANT OUTCOMES

The formulation of structured questions should consider the perspectives of all stakeholders involved in the policy. These questions must clearly state the population of interest of the guideline, the interventions and their alternatives, and the outcomes relevant to the patient population and to the health system.
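For readers who track questions electronically, a structured question can be held in a simple record. This is an illustrative sketch only; the class and field names below are our own, based on the PICO mnemonic, and are not part of any GRADE tool:

```python
from dataclasses import dataclass


@dataclass
class StructuredQuestion:
    """A PICO-style structured question (illustrative sketch)."""
    population: str        # P: patient population of interest
    intervention: str      # I: intervention under consideration
    comparison: str        # C: alternative(s) to the intervention
    outcomes: list[str]    # O: outcomes relevant to patients and the health system


# Hypothetical example: a question on inhaled steroids in asthma
q = StructuredQuestion(
    population="Adults with persistent asthma",
    intervention="Inhaled corticosteroids",
    comparison="Placebo or short-acting bronchodilators alone",
    outcomes=["exacerbation rate", "symptom control", "adverse effects"],
)
print(q.population)  # Adults with persistent asthma
```

Writing the question down in this explicit form makes it easier to check that every systematic review and evidence table addresses the same population, comparison, and outcomes.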

20.3 SYSTEMATIC REVIEW OF LITERATURE

A systematic review should be performed for each structured question drafted and selected as relevant by the guideline development team, with clear criteria for which types of studies to include in or exclude from the review. More information on the methods for preparing a systematic review can be found in the Cochrane Handbook.

20.4 HOW TO ASSESS THE QUALITY OF THE EVIDENCE?

The assessment of the quality of evidence concerns the evaluation of the body of evidence found32 that is capable of answering the structured questions that were formulated. The aspects that should be considered in this assessment are described in Table 9. Quantification of the magnitude and precision of the results, a dose-response gradient, and the presence of confounders that could attenuate the benefit of the intervention can increase the quality of the evidence.


Table 9: Assessment of the quality of the evidence

Study design used: Systematic review of individual studies, or individual studies (randomized controlled trial, cohort study, cross-sectional study, case-control study). Systematic reviews of randomized studies, or randomized studies, are at the top of the hierarchy for assessing the utility of different interventions. The overall quality of evidence starts high with these types of study.

Methodological validity: According to the GRADE criteria, validity can be classified as high, moderate, low or very low according to the risk of bias of the study. Overall quality decreases if there is a risk of bias.

Publication bias: Studies with positive results are published more often than studies with negative results. If there is publication bias, the overall quality of the evidence decreases.

Inconsistency: If results are inconsistent across different studies, the overall quality of the evidence decreases.

Indirectness: If there is a direct relationship between the study PICO and the question of interest, the quality of evidence increases. In the case of Early Goal-Directed Therapy, for instance, there was no study directly comparing volume resuscitation guided by SvcO2 with resuscitation guided by laboratory and clinical parameters. The recommendation in favor of using SvcO2 to guide therapy in this situation is based on indirect evidence from studies that used this technique as part of their research protocol. This fact decreases the quality of the evidence that supports the recommendation.

Imprecision: If the confidence interval crosses the line of no effect based on the minimal important difference that was established, the quality of the evidence decreases.

Magnitude of effect: If the magnitude of the intervention's effect is large and the confidence interval for this effect is narrow, the overall quality of the evidence increases.

Confounders: If there are potential confounders decreasing the magnitude of effect, the quality of evidence increases.

Dose-response gradient: If there is a dose-response effect, the quality of evidence also increases.

Conclusion: evidence high (A), moderate (B), low (C) or very low (D).
Adapted from GRADE32
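The upgrade/downgrade logic of Table 9 can be sketched as a small scoring routine. This is a didactic simplification, not an official GRADE tool; the function name and the one-level-per-factor rule are assumptions made for illustration:

```python
def grade_quality(study_design, downgrades=0, upgrades=0):
    """Sketch of GRADE rating: start from the study design, then move
    down one level per serious concern (risk of bias, inconsistency,
    indirectness, imprecision, publication bias) and up one level per
    strengthening factor (large effect, dose-response gradient,
    confounders working against the observed effect)."""
    levels = ["very low", "low", "moderate", "high"]  # D, C, B, A
    start = 3 if study_design == "randomized" else 1  # RCTs start high, observational low
    score = max(0, min(3, start - downgrades + upgrades))
    return levels[score]


# A randomized body of evidence downgraded one level for imprecision:
print(grade_quality("randomized", downgrades=1))      # moderate
# An observational body of evidence upgraded for a large effect:
print(grade_quality("observational", upgrades=1))     # moderate
```

The point of the sketch is that the starting level depends on design, while the final level depends on the balance of concerns and strengthening factors, never falling below very low or rising above high.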

20.5 HOW TO DEVELOP A RECOMMENDATION?

In the GRADE approach, the elaboration of a recommendation considers, besides the overall quality of the evidence (design, risk of bias, inconsistency, indirectness, publication bias and imprecision), the balance between risks and benefits, values and preferences, and the need for resource allocation. These four items taken together determine the strength of the recommendation (Table 10).


Table 10: Clinical considerations for the strength of recommendation in the GRADE approach.

Quality of the evidence
- In favor: Many high-quality randomised trials have shown the benefit of inhaled steroids in asthma.
- Against: Only case series have examined the utility of pleurodesis in pneumothorax.
- Conclusion: A, B, C or D?

Do the benefits overcome the risks?
- In favor: Aspirin in myocardial infarction reduces mortality with minimal toxicity, inconvenience, and cost.
- Against: Warfarin in low-risk patients with atrial fibrillation results in a small stroke reduction but an increased bleeding risk and substantial inconvenience.
- Conclusion: Yes / Maybe / Maybe not / No

Are patients' values and preferences in accordance with the recommendation?
- In favor: Young patients with lymphoma will invariably place a higher value on the life-prolonging effects of chemotherapy than on treatment toxicity.
- Against: Older patients with lymphoma may not place a higher value on the life-prolonging effects of chemotherapy than on treatment toxicity.
- Conclusion: Yes / Maybe / Maybe not / No

Is the resource allocation clearly justifiable?
- In favor: The low cost of aspirin as prophylaxis against stroke in patients with transient ischemic attacks.
- Against: The high cost of clopidogrel and of combination dipyridamole and aspirin as prophylaxis against stroke in patients with transient ischaemic attacks.
- Conclusion: Yes / Maybe / Maybe not / No

Recommendation: In favor? Against?
Strength of recommendation: Strong? Weak?

Adapted from GRADE30
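As a rough sketch of how the four considerations in Table 10 might combine (a deliberate simplification written for this syllabus, not a GRADE rule; real panels weigh these judgments qualitatively, and a strong recommendation can occasionally rest on lower-quality evidence):

```python
def recommendation_strength(quality, net_benefit_clear,
                            values_consistent, resources_justifiable):
    """Didactic sketch: a recommendation tends to be strong when the
    evidence is at least moderate, the benefits clearly outweigh the
    harms, values and preferences are consistent across patients, and
    the resource use is clearly justifiable. Major uncertainty on any
    factor pushes toward a weak (conditional) recommendation."""
    if (net_benefit_clear and values_consistent and resources_justifiable
            and quality in ("high", "moderate")):
        return "strong"
    return "weak"


# Aspirin after myocardial infarction: high-quality evidence, clear net
# benefit, consistent values, low cost -> strong recommendation.
print(recommendation_strength("high", True, True, True))   # strong
# Warfarin in low-risk atrial fibrillation: benefit/harm balance unclear.
print(recommendation_strength("high", False, True, True))  # weak
```

The examples in the comments reuse the aspirin and warfarin scenarios from Table 10.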

The establishment of the strength of a recommendation is a complex process and should not be done by one or two individuals but by a panel composed of, at minimum, specialists, methodologists and clinicians, all of whom have access to the summary of the body of evidence that has been selected and analyzed in a systematic way by the guideline development group. Consensus should also be reached in a transparent and systematic way using a validated process such as the Delphi method.

In the GRADE approach, the summary of evidence is produced by the guideline development group in table format using the GRADEpro tool (http://www.gradeworkinggroup.org/toolbox/index.htm).

More information on the methods for developing a guideline and making recommendations can be obtained from the Institute of Medicine report "Clinical Practice Guidelines We Can Trust". These items are all available under "Supplements" on the SIMPLE website.


20.6 CHECKLIST

Quality of evidence

Outcome:
Outcome importance: ① ② ③ ④ ⑤ ⑥ ⑦ ⑧ ⑨
Study type: Systematic review / Individual study

Study design: the quality of evidence is initially assessed according to the study design and then loses or gains points according to the criteria below.
- Randomized controlled trial: starts High
- Observational study: starts Low

Rating scale:
⊕⊕⊕⊕ High (A)
⊕⊕⊕ Moderate (B)
⊕⊕ Low (C)
⊕ Very low (D)

Methods (notes / footnote):
- Validity: 0 / -① / -②
- Imprecision: 0 / -① / -②
- Inconsistency: 0 / -① / -②
- Indirectness: 0 / -① / -②
- Publication bias: 0 / -① / -②

Results (notes / footnote):
- Magnitude of effect: 0 / +① / +②
- Confounders: 0 / +①
- Dose-response effect: 0 / +①
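The score sheet can be walked through numerically. Below is a hypothetical worked example for this syllabus: the point deductions follow the sheet, and the symbol and letter mapping follows the rating scale shown with it.

```python
# Worked example of the score sheet: a randomized body of evidence with
# one point deducted for imprecision and one for inconsistency.
LABELS = {
    4: ("⊕⊕⊕⊕", "High", "A"),
    3: ("⊕⊕⊕", "Moderate", "B"),
    2: ("⊕⊕", "Low", "C"),
    1: ("⊕", "Very low", "D"),
}

points = 4                       # randomized controlled trial starts High
points -= 1                      # imprecision: -1
points -= 1                      # inconsistency: -1
points = max(1, min(4, points))  # clamp to the Very low..High range

symbols, label, letter = LABELS[points]
print(symbols, label, letter)    # ⊕⊕ Low C
```

Two serious concerns therefore move the body of evidence from High (A) down to Low (C) on the sheet.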


Strength of recommendation

Possible conclusions: strong recommendation in favor of the intervention (⇑⇑); weak recommendation in favor of the intervention (⇑?); weak recommendation against (⇓?); strong recommendation against (⇓⇓).

Balance between desirable and undesirable effects: the larger the difference between the desirable and undesirable effects, the higher the likelihood that a strong recommendation is warranted. The narrower the gradient, the higher the likelihood that a weak recommendation is warranted.

Quality of evidence: the higher the quality of evidence, the higher the likelihood that a strong recommendation is warranted.

Values and preferences: the more values and preferences vary, or the greater the uncertainty in values and preferences, the higher the likelihood that a weak recommendation is warranted.

Resource allocation: the higher the costs of an intervention, that is, the greater the resources consumed, the lower the likelihood that a strong recommendation is warranted.


References

1. Tan SY, Uyehara P. William Osler (1849-1919): medical educator and humanist. Singapore Medical Journal. 2009;50(11):1048-1049.
2. Sackett DL. Teaching critical appraisal. Journal of General Internal Medicine. 1990;5(3):272.
3. Sackett DL, Wennberg JE. Choosing the best research design for each question. BMJ. 1997;315(7123):1636.
4. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420-2425.
5. Guyatt G. Evidence-based medicine (editorial). ACP Journal Club. Annals of Internal Medicine. 1991;114(Suppl 2):A16.
6. Haynes RB. What kind of evidence is it that Evidence-Based Medicine advocates want health care providers and consumers to pay attention to? BMC Health Services Research. 2002;2:3.
7. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71-72.
8. Tanenbaum SJ. What physicians know. The New England Journal of Medicine. 1993;329(17):1268-1271.
9. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. 1996. Clinical Orthopaedics and Related Research. 2007;455:3-5.
10. Onate-Ocana LF, Ochoa-Carrillo FJ. The GRADE system for classification of the level of evidence and grade of recommendations in clinical guideline reports. Cirugia y Cirujanos. 2009;77(5):417-419.
11. Schunemann HJ, Best D, Vist G, Oxman AD. Letters, numbers, symbols and words: how to communicate grades of evidence and recommendations. CMAJ. 2003;169(7):677-680.
12. Glasziou P, Vandenbroucke JP, Chalmers I. Assessing the quality of research. BMJ. 2004;328(7430):39-41.
13. Barbui C, Dua T, van Ommeren M, et al. Challenges in developing evidence-based recommendations using the GRADE approach: the case of mental, neurological, and substance use disorders. PLoS Medicine. 2010;7(8).
14. Owens DK, Lohr KN, Atkins D, et al. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions: Agency for Healthcare Research and Quality and the Effective Health-Care Program. Journal of Clinical Epidemiology. 2010;63(5):513-523.
15. Kunz R, Djulbegovic B, Schunemann HJ, Stanulla M, Muti P, Guyatt G. Misconceptions, challenges, uncertainty, and progress in guideline recommendations. Seminars in Hematology. 2008;45(3):167-175.
16. Hitt J. Evidence-based medicine. The Year in Ideas: A to Z. 2001; http://www.nytimes.com/2001/12/09/magazine/the-year-in-ideas-a-to-z-evidence-based-medicine.html. Accessed Mar 29, 2012.
17. Godlee F. Milestones on the long road to knowledge. BMJ. 2007;334(Suppl 1):s2-3.
18. Sehon SR, Stanley DE. A philosophical analysis of the evidence-based medicine debate. BMC Health Services Research. 2003;3(1):14.
19. Silva SA, Charon R, Wyer PC. The marriage of evidence and narrative: scientific nurturance within clinical practice. Journal of Evaluation in Clinical Practice. 2011;17(4):585-593.
20. Silva SA, Wyer P. The Roadmap: a blueprint for evidence literacy within a Scientifically Informed Medical Practice and Learning model. Journal of Person Centred Medicine. 2012: in press.
21. Fryback DG, Thornbury JR. The efficacy of diagnostic imaging. Medical Decision Making. 1991;11(2):88-94.
22. Silva SA, Wyer PC. The Roadmap: a blueprint for evidence literacy within a Scientifically Informed Medical Practice and Learning Model. European Journal of Person Centered Healthcare. 2013;3(1):53-68.
23. Hlatky MA, Greenland P, Arnett DK, et al. Criteria for evaluation of novel markers of cardiovascular risk: a scientific statement from the American Heart Association. Circulation. 2009;119(17):2408-2416.
24. Methods Guide for Medical Test Reviews. 2010; http://www.effectivehealthcare.ahrq.gov/tasks/sites/ehc/assets/File/methods_guide_for_medical_tests.pdf. Accessed 2014.
25. Guidance by type. NICE Guidance. 2014; http://www.nice.org.uk/guidance/index.jsp?action=byType. Accessed 2014.
26. Schwartz MD, Valdimarsdottir HB, DeMarco TA, et al. Randomized trial of a decision aid for BRCA1/BRCA2 mutation carriers: impact on measures of decision making and satisfaction. Health Psychology. 2009;28(1):11-19.
27. Saadatmand S, Rutgers EJ, Tollenaar RA, et al. Breast density as indicator for the use of mammography or MRI to screen women with familial risk for breast cancer (FaMRIsc): a multicentre randomized controlled trial. BMC Cancer. 2012;12:440.
28. Anderson DR, Kahn SR, Rodger MA, et al. Computed tomographic pulmonary angiography vs ventilation-perfusion lung scanning in patients with suspected pulmonary embolism: a randomized controlled trial. JAMA. 2007;298(23):2743-2753.
29. Hachamovitch R, Hayes SW, Friedman JD, Cohen I, Berman DS. Comparison of the short-term survival benefit associated with revascularization compared with medical therapy in patients with no prior coronary artery disease undergoing stress myocardial perfusion single photon emission computed tomography. Circulation. 2003;107(23):2900-2907.
30. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924-926.
31. Brozek JL, Akl EA, Compalati E, et al. Grading quality of evidence and strength of recommendations in clinical practice guidelines: part 3 of 3. The GRADE approach to developing recommendations. Allergy. 2011;66(5):588-595.
32. Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schunemann HJ. What is "quality of evidence" and why is it important to clinicians? BMJ. 2008;336(7651):995-998.
33. Conti R, Veenstra DL, Armstrong K, Lesko LJ, Grosse SD. Personalized medicine and genomics: challenges and opportunities in assessing effectiveness, cost-effectiveness, and future research priorities. Medical Decision Making. 2010;30(3):328-340.
34. Green RC, Roberts JS, Cupples LA, et al. Disclosure of APOE genotype for risk of Alzheimer's disease. The New England Journal of Medicine. 2009;361(3):245-254.
35. Briel M, Ferreira-Gonzalez I, You JJ, et al. Association between change in high density lipoprotein cholesterol and cardiovascular disease morbidity and mortality: systematic review and meta-regression analysis. BMJ. 2009;338:b92.
36. Charon R. Narrative medicine: attention, representation, affiliation. Narrative. 2005;13:261-269.
37. Charon R, Wyer P. Narrative evidence based medicine. Lancet. 2008;371(9609):296-297.
38. Epstein RM, Peters E. Beyond information: exploring patients' preferences. JAMA. 2009;302:195-197.
39. Freire P. Education for Critical Consciousness. New York: Continuum; 1974.
40. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP Journal Club. 1995;123(3):A12-13.
41. Pfisterer M, Buser P, Rickli H, et al. BNP-guided vs symptom-guided heart failure therapy: the Trial of Intensified vs Standard Medical Therapy in Elderly Patients With Congestive Heart Failure (TIME-CHF) randomized trial. JAMA. 2009;301(4):383-392.
42. Ellis P. Research on the Comparative Effectiveness of Medical Treatments: Issues and Options for an Expanded Federal Role. 2007; http://www.cbo.gov/publication/41655. Accessed April 9, 2012.
43. Haynes B. Clinical Study Categories. 2011; http://www.ncbi.nlm.nih.gov/pubmed/clinical. Accessed April 9, 2012.
44. Pasternak T, Movshon JA, Merigan WH. Creation of direction selectivity in adult strobe-reared cats. Nature. 1981;292(5826):834-836.
45. Pasternak T, Merigan WH, Movshon JA. Motion mechanisms in strobe-reared cats: psychophysical and electrophysical measures. Acta Psychologica (Amsterdam). 1981;48(1-3):321-332.
46. Melvill Jones G, Mandl G, Cynader M, Outerbridge JS. Eye oscillations in strobe reared cats. Brain Research. 1981;209(1):47-60.
47. Mandl G, Melvill Jones G, Cynader M. Adaptability of the vestibulo-ocular reflex to vision reversal in strobe reared cats. Brain Research. 1981;209(1):35-45.
48. Kopans DB. A strobe-sequenced device to facilitate the three-dimensional viewing of cross-sectional images. Radiology. 1980;135(3):780-781.
49. Skinner JS, Smeeth L, Kendall JM, Adams PC, Timmis A. NICE guidance. Chest pain of recent onset: assessment and diagnosis of recent onset chest pain or discomfort of suspected cardiac origin. Heart. 2010;96(12):974-978.