Karl Claxton and Tony Ades

  • Some methodological issues in value of information analysis:

    an application of partial EVPI and EVSI to an economic model of Zanamivir

    Karl Claxton and Tony Ades

  • Partial EVPIs

    Light at the end of the tunnel… maybe it's a train

  • A simple model of Zanamivir

  • Distribution of inb

    [Figure: distribution of incremental net benefit (inb), approximately Normal with mean = (0.51) and std dev = 12.52]

  • EVPI for the decision

    EVPI = EV(perfect information) - EV(current information)
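
    A minimal Monte Carlo sketch of this difference, treating the Normal summary of inb from the figure above (mean read as -0.51, std dev 12.52) as if it were the simulated incremental net benefit of Zanamivir against a usual-care baseline; the two-strategy set-up and the sample size are assumptions for illustration:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical PSA output: incremental net benefit of Zanamivir vs usual care.
      inb = rng.normal(-0.51, 12.52, 100_000)
      nb = np.stack([np.zeros_like(inb), inb], axis=-1)   # nb[k, d]: net benefit of decision d on draw k

      ev_current = nb.mean(axis=0).max()   # best decision on expected net benefit
      ev_perfect = nb.max(axis=1).mean()   # best decision for each resolution of uncertainty

      print(f"EVPI per patient = {ev_perfect - ev_current:.3f}")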

  • Partial EVPI

    EVPIpip = EV(perfect information about pip) - EV(current information)

    = expectation, over all resolutions of pip, of [EV(optimal decision for a particular resolution of pip) - EV(prior decision for the same resolution of pip)]
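
    A sketch of that expectation as a nested (two-level) Monte Carlo calculation. The net-benefit function, the Beta(2, 8) and Normal(1.0, 0.1) priors and the names pip and other are hypothetical stand-ins for the Zanamivir model inputs, not the authors' model:

      import numpy as np

      rng = np.random.default_rng(2)

      def nb(pip, other):
          # Toy stand-in for the model (coefficients made up): column 0 = usual care,
          # column 1 = a Zanamivir-type strategy.
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      # EV(current information): the best decision on prior expected net benefit.
      ev_current = nb(rng.beta(2, 8, 200_000), rng.normal(1.0, 0.1, 200_000)).mean(axis=0).max()

      # Outer loop over resolutions of pip; inner loop over the remaining inputs.
      n_outer, n_inner = 1_000, 2_000
      ev_perfect_pip = np.mean([
          nb(pip, rng.normal(1.0, 0.1, n_inner)).mean(axis=0).max()   # optimal decision for this pip
          for pip in rng.beta(2, 8, n_outer)
      ])

      print(f"partial EVPI for pip = {ev_perfect_pip - ev_current:.3f}")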

  • Partial EVPI

    Some implications:
    - information about an input is only valuable if it changes our decision
    - information is only valuable if pip does not resolve at its expected value

    General solution:
    - linear and non-linear models
    - inputs can be (spuriously) correlated

  • Felli and Hazen (98) short cut

    EVPIpip = EVPI when all other inputs are resolved at their expected values

    Appears counter-intuitive:
    - we resolve all other uncertainties, then ask what the value of pip is, i.e. a residual EVPIpip?

    But:
    - resolving at expected values does not give us any information

    Correct if:
    - the relationship between inputs and net benefit is linear
    - the inputs are not correlated
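
    A sketch of the short cut for the same toy model: the other input is held at its expected value, the payoffs become a function of pip alone, and the ordinary EVPI calculation is applied (model, priors and the value 1.0 are the same hypothetical assumptions as above):

      import numpy as np

      rng = np.random.default_rng(3)

      def nb(pip, other):
          # Toy stand-in for the model (coefficients made up).
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      pip = rng.beta(2, 8, 200_000)
      payoffs = nb(pip, np.full_like(pip, 1.0))   # all other inputs resolved at their expected value

      shortcut = payoffs.max(axis=1).mean() - payoffs.mean(axis=0).max()
      print(f"short-cut EVPI for pip = {shortcut:.3f}")

    With a linear model and independent inputs this should agree with the nested partial EVPI sketch above, which is exactly the condition the slide gives for the short cut to be correct.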

  • So why different values?

    - the model is linear
    - the inputs are independent?

  • Residual EVPI

    - the wrong current-information position for partial EVPI
    - what is the value of resolving pip when we already have perfect information about all other inputs?
    - expect residual EVPIpip < partial EVPIpip
    - EVPI when all other inputs are resolved at each realisation?
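
    For contrast, a sketch of this residual quantity for the same toy model: the outer loop treats the other input as already resolved at each of its realisations, and only then values resolving pip (names, priors and coefficients remain hypothetical):

      import numpy as np

      rng = np.random.default_rng(4)

      def nb(pip, other):
          # Toy stand-in for the model (coefficients made up).
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      n_outer, n_inner = 1_000, 2_000
      residual = 0.0
      for other in rng.normal(1.0, 0.1, n_outer):        # 'other' already resolved at this realisation
          payoffs = nb(rng.beta(2, 8, n_inner), other)   # only pip left uncertain
          residual += payoffs.max(axis=1).mean() - payoffs.mean(axis=0).max()

      print(f"residual EVPI for pip = {residual / n_outer:.3f}")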

  • Thompson and Evans (96) and Thompson and Graham (96)

    - Felli and Hazen (98) used a similar approach
    - Thompson and Evans (96) is a linear model
    - the emphasis is on EVPI when the other inputs are set to their joint expected value
    - requires payoffs as a function of the input of interest

  • Reduction in cost of uncertainty

    - intuitive appeal
    - consistent with conditional probabilistic analysis

    RCUE(pip) = EVPI - EVPI(pip resolved at its expected value)

    But:
    - pip may not resolve at E(pip), and the prior decision may change
    - this is the value of perfect information if we are forced to stick with the prior decision, i.e. the value of a reduction in variance
    - expect RCUE(pip) < partial EVPIpip
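
    A sketch of RCUE(pip) for the toy model, computed exactly as defined here: the overall EVPI minus the EVPI recomputed with pip held at its expected value (model, names and priors are the same hypothetical assumptions as in the earlier sketches):

      import numpy as np

      rng = np.random.default_rng(5)

      def nb(pip, other):
          # Toy stand-in for the model (coefficients made up).
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      def evpi(pip, other):
          payoffs = nb(pip, other)
          return payoffs.max(axis=1).mean() - payoffs.mean(axis=0).max()

      pip = rng.beta(2, 8, 200_000)
      other = rng.normal(1.0, 0.1, 200_000)

      rcue = evpi(pip, other) - evpi(pip.mean(), other)   # pip resolved at its expected value
      print(f"RCUE(pip) = {rcue:.3f}")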

  • Reduction in cost of uncertainty

    spurious correlation again?

    RCUpip = Epip[EVPI - EVPI(given realisation of pip)] = partial EVPIpip

    RCUpip = EVPI - Epip[EVPI(given realisation of pip)]
           = [EV(perfect information) - EV(current information)]
             - Epip[EV(perfect information, pip resolved) - EV(current information, pip resolved)]
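
    A sketch of the second formulation for the toy model: the overall EVPI minus the expected EVPI once pip is resolved at each realisation. With independent inputs this should line up with the partial EVPI from the nested sketch above (same hypothetical model and priors):

      import numpy as np

      rng = np.random.default_rng(6)

      def nb(pip, other):
          # Toy stand-in for the model (coefficients made up).
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      def evpi(pip, other):
          payoffs = nb(pip, other)
          return payoffs.max(axis=1).mean() - payoffs.mean(axis=0).max()

      overall = evpi(rng.beta(2, 8, 200_000), rng.normal(1.0, 0.1, 200_000))

      # Epip[EVPI(given realisation of pip)]: EVPI over the remaining inputs, averaged over pip.
      expected_conditional = np.mean([
          evpi(p, rng.normal(1.0, 0.1, 2_000)) for p in rng.beta(2, 8, 1_000)
      ])

      print(f"RCUpip = EVPI - Epip[EVPI given pip] = {overall - expected_conditional:.3f}")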

  • EVPI for strategies

    Value of including a strategy?

    - EVPI with and without the strategy included
    - demonstrates bias
    - difference = EVPI associated with the strategy?

    EV(perfect information, all included) - EV(perfect information, strategy excluded)
    = Eall inputs[Maxd(NBd | all inputs)] - Eall inputs[Maxd-1(NBd-1 | all inputs)]
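
    A sketch of the comparison for a hypothetical three-strategy PSA sample: EVPI is computed with all strategies included and again with one strategy excluded, alongside the difference in the perfect-information terms above. The usual-care baseline, the Zanamivir-like inb distribution and the third strategy are all illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical PSA output nb[k, d] for three strategies.
      nb = np.stack([
          np.zeros(100_000),                    # usual care baseline
          rng.normal(-0.51, 12.52, 100_000),    # Zanamivir-like incremental net benefit
          rng.normal(-1.0, 15.0, 100_000),      # a further hypothetical strategy
      ], axis=-1)

      def evpi(payoffs):
          return payoffs.max(axis=1).mean() - payoffs.mean(axis=0).max()

      evpi_all = evpi(nb)              # all d strategies included
      evpi_excl = evpi(nb[:, :2])      # third strategy excluded (the d-1 comparison)

      print(f"EVPI (all included) = {evpi_all:.3f}")
      print(f"EVPI (excluded)     = {evpi_excl:.3f}")
      print(f"difference          = {evpi_all - evpi_excl:.3f}")

      # The perfect-information terms on their own, as in the expression above.
      ev_pi_diff = nb.max(axis=1).mean() - nb[:, :2].max(axis=1).mean()
      print(f"EV(PI, all) - EV(PI, excluded) = {ev_pi_diff:.3f}")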

  • Conclusions on partials

    - Life is beautiful
    - Hegel was right: progress is a dialectic
    - Maths don't lie, but brute-force empiricism can mislead

  • EVSI: it may well be a train

    Hegel's right again: contradiction follows synthesis

  • EVSI for model inputs

    - generate a predictive distribution for a sample of n
    - sample from the predictive and prior distributions to form a preposterior
    - propagate the preposterior through the model
    - value of information for a sample of n
    - find the n* that maximises EVSI - cost of sampling
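
    A sketch of these steps for a single Beta-Binomial input. The Beta(2, 8) prior, the toy net-benefit function and the input names are assumptions standing in for the Zanamivir model:

      import numpy as np

      rng = np.random.default_rng(8)
      a, b = 2.0, 8.0                      # hypothetical Beta prior for pip

      def nb(pip, other):
          # Toy stand-in for the model (coefficients made up).
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      ev_current = nb(rng.beta(a, b, 200_000), rng.normal(1.0, 0.1, 200_000)).mean(axis=0).max()

      def evsi(n, n_outer=1_000, n_inner=2_000):
          ev_sample = 0.0
          for _ in range(n_outer):
              r = rng.binomial(n, rng.beta(a, b))              # predictive result for a sample of n
              pip_post = rng.beta(a + r, b + n - r, n_inner)   # preposterior for pip
              other = rng.normal(1.0, 0.1, n_inner)
              ev_sample += nb(pip_post, other).mean(axis=0).max()   # propagate through the model
          return ev_sample / n_outer - ev_current

      print(f"EVSI for n = 50: {evsi(50):.3f}")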

  • EVSI for pip

    Epidemiological study of size n:

    prior:         pip ~ Beta(α, β)
    predictive:    rip ~ Bin(pip, n)
    preposterior:  pip'' = (pip(α + β) + rip) / (α + β + n), i.e. (α + rip) / (α + β + n)

    - as n increases, var(rip/n) falls towards var(pip)
    - var(pip'') < var(pip), and the remaining posterior variance of pip falls with n
    - the pip'' are the possible posterior means
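
    A sketch of how this preposterior behaves: for several study sizes it draws the predictive result rip and forms the possible posterior means pip'', so that var(rip/n) and var(pip'') can be compared with var(pip). The Beta(2, 8) prior is an assumption:

      import numpy as np

      rng = np.random.default_rng(9)
      a, b = 2.0, 8.0                               # hypothetical Beta(alpha, beta) prior
      pip = rng.beta(a, b, 500_000)

      print(f"var(pip) = {pip.var():.5f}")
      for n in (10, 50, 500):
          r = rng.binomial(n, pip)                  # predictive: rip | pip ~ Bin(pip, n)
          pip_post = (a + r) / (a + b + n)          # the possible posterior means pip''
          print(f"n = {n:3d}   var(rip/n) = {(r / n).var():.5f}   var(pip'') = {pip_post.var():.5f}")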

  • EVSIpip

    = reduction in the cost of uncertainty due to n observations on pip
    = difference in partials (EVPIpip - EVPIpip'')

    = Epip[Eother[Maxd(NBd | other, pip)] - Maxd Eother(NBd | other, pip)]
      - Epip''[Eother[Maxd(NBd | other, pip'')] - Maxd Eother(NBd | other, pip'')]

    - pip'' has a smaller variance, so any realisation is less likely to change the decision:
      Epip[Eother[Maxd(NBd | other, pip)]] > Epip''[Eother[Maxd(NBd | other, pip'')]]
    - E(pip'') = E(pip), so:
      Epip[Maxd Eother(NBd | other, pip)] = Epip''[Maxd Eother(NBd | other, pip'')]
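
    A numerical sketch of this argument for the toy model: the same "value of resolving" calculation is run with draws of pip from the prior (giving the partial EVPI) and with draws of the preposterior means pip''. Because the toy model is linear in pip, propagating pip'' directly is a reasonable shortcut here; the pip'' result is smaller and grows with n (priors, names and coefficients are all hypothetical):

      import numpy as np

      rng = np.random.default_rng(10)
      a, b = 2.0, 8.0

      def nb(pip, other):
          # Toy stand-in for the model, linear in pip (coefficients made up).
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      ev_current = nb(rng.beta(a, b, 200_000), rng.normal(1.0, 0.1, 200_000)).mean(axis=0).max()

      def value_of_resolving(pip_draws, n_inner=2_000):
          # E over the supplied pip draws of Maxd Eother(NBd | pip), minus EV(current information).
          ev = np.mean([nb(p, rng.normal(1.0, 0.1, n_inner)).mean(axis=0).max() for p in pip_draws])
          return ev - ev_current

      print(f"partial EVPI (prior pip draws) = {value_of_resolving(rng.beta(a, b, 1_000)):.3f}")
      for n in (10, 100, 1_000):
          r = rng.binomial(n, rng.beta(a, b, 1_000))
          pip_post = (a + r) / (a + b + n)          # preposterior means pip''
          print(f"EVSI-style value with pip'' draws, n = {n}: {value_of_resolving(pip_post):.3f}")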

  • EVSIpip

    Why not the difference in prior and preposterior EVPI?
    - the effect of pip would enter only through var(NB)
    - we can change the decision for the realisation of pip once the study is completed
    - so the difference in prior and preposterior EVPI will underestimate EVSIpip

  • Implications

    - EVSI for any input that is conjugate
    - generate preposteriors for the log odds ratios for complication, hospitalisation, etc.
    - trial design for an individual endpoint (rsd)
    - trial designs with a number of endpoints (pcz, phz, upd, rsd)
    - n for an endpoint will be uncertain (n_pcz = n*pip, etc.)
    - consider optimal n and allocation (search for n*)
    - combine different designs, e.g. an observational study (pip) and a trial (upd, rsd), or an observational study (pip, upd) and a trial (rsd), etc.
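
    A sketch of the last two points, searching a small grid of study sizes for the n* that maximises population EVSI minus the cost of sampling. The population size, cost per observation, prior and net-benefit model are all made up for illustration:

      import numpy as np

      rng = np.random.default_rng(11)
      a, b = 2.0, 8.0
      population, cost_per_obs = 10_000, 100.0      # hypothetical scaling and sampling cost

      def nb(pip, other):
          # Toy stand-in for the model (coefficients made up).
          pip, other = np.broadcast_arrays(pip, other)
          return np.stack([20 * other - 20, 100 * pip - 40 * other + 20], axis=-1)

      ev_current = nb(rng.beta(a, b, 200_000), rng.normal(1.0, 0.1, 200_000)).mean(axis=0).max()

      def evsi(n, n_outer=500, n_inner=1_000):
          ev = 0.0
          for _ in range(n_outer):
              r = rng.binomial(n, rng.beta(a, b))
              pip_post = rng.beta(a + r, b + n - r, n_inner)
              ev += nb(pip_post, rng.normal(1.0, 0.1, n_inner)).mean(axis=0).max()
          return ev / n_outer - ev_current

      # Crude search for n*: expected net benefit of sampling = population EVSI - cost of sampling.
      enbs = {n: population * evsi(n) - cost_per_obs * n for n in (25, 50, 100, 200, 400)}
      n_star = max(enbs, key=enbs.get)
      print({n: round(v, 1) for n, v in enbs.items()}, "-> n* =", n_star)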