
F47. Conditionality in structural inference




Statistical Discussion Forum / Journal of Statistical Planning and Inference 55 (1996) 255-264

F47. Conditionality in structural inference

In the referenced paper, I have pointed out a minor fallacy in Fraser's exposition, namely that the conditional distribution is claimed to be conditional upon the actual orbit, i.e. on a null event. This fallacy would be cured by accepting that the conditional distribution is relative to the sub-sigma-field generated by the orbits and by applying the conditionality principle. But this leads to another difficulty.

To simplify the argument, consider the case in which X has a density f(x - θ), where θ is a scalar, and n independent observations are taken. Let d_i = x_{i+1} - x_i, i = 1, 2, ..., n - 1. The conditional density for given d_i is computed as usual and yields a confidence interval for θ of any given confidence level, conditional on the observed d_i. In the theory of structural inference, this conditional confidence interval is treated as the end product. But the observed values of d_i are accidental: according to the continuous model, the probability of their repetition in another experiment is zero. It is therefore reasonable to derive the unconditional confidence interval by integrating out the d_i. But the unconditional confidence interval may be inadmissible; there may exist another with the same confidence level but of uniformly shorter length.
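A small numerical sketch (not in the original; the Cauchy density and the data are illustrative choices) may clarify how the conditional interval of structural inference is obtained. For a location model f(x - θ), the structural distribution of θ given the configuration d_i reduces to the normalized likelihood, so the conditional confidence interval can be read off a grid:

```python
import numpy as np

# Illustrative (hypothetical) data: n = 5 observations from a location
# model with standard Cauchy error density f.
x = np.array([-0.8, 0.1, 0.4, 1.3, 9.2])

# For a location model, the structural (conditional) distribution of
# theta given d_i = x_{i+1} - x_i is proportional to the likelihood
# prod_i f(x_i - theta); evaluate it on a fine grid of theta values.
theta = np.linspace(-20.0, 30.0, 100001)
log_lik = -np.sum(np.log(np.pi * (1.0 + (x[:, None] - theta) ** 2)), axis=0)

# Normalize to a density in theta (subtract the max for stability).
dens = np.exp(log_lik - log_lik.max())
dtheta = theta[1] - theta[0]
dens /= dens.sum() * dtheta

# Equal-tailed 95% conditional confidence interval from the CDF.
cdf = np.cumsum(dens) * dtheta
lo = theta[np.searchsorted(cdf, 0.025)]
hi = theta[np.searchsorted(cdf, 0.975)]
print(f"95% conditional interval for theta: ({lo:.3f}, {hi:.3f})")
```

The unconditional interval criticized above would instead be obtained by averaging such conditional coverage over the distribution of the d_i, which is where the admissibility question arises.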

Reference

Joshi, V.M. (1972). A Fallacy of Fraser. Submitted to the Statistical Discussion Forum in 1992 (unpublished).

V.M. Joshi
Department of Statistics and Actuarial Sciences
University of Western Ontario
London, Ontario, Canada N6A 5B9

F48. A lacuna in the definition of Pitman-closerness

Let T1, T2 be estimators of a parametric function τ(θ). T1 is defined to be Pitman-closer to τ(θ) than T2 if

P{ |T1 - τ(θ)| < |T2 - τ(θ)| | θ } ≥ 1/2 for all θ.  (1)

(Definition 5, p. 290 in the referenced textbook.)

But there is a lacuna in this definition. This is brought out by Problem 9, p. 363 in the referenced book. The problem is to show that, x1, x2 being i.i.d. observations from a Cauchy distribution, (x1 + x2)/2 is Pitman-closer to θ than x1. This statement is
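The probability in criterion (1) for the Cauchy problem can be checked by a quick Monte Carlo sketch (not in the original; sample size and seed are arbitrary choices): estimate P{ |(x1 + x2)/2 - θ| < |x1 - θ| } for standard Cauchy observations and verify it exceeds 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                       # true location parameter
n = 200_000
x1 = theta + rng.standard_cauchy(n)
x2 = theta + rng.standard_cauchy(n)

mean2 = (x1 + x2) / 2.0
# Estimate P{ |(x1 + x2)/2 - theta| < |x1 - theta| }; Pitman-closeness
# of the mean to theta relative to x1, per (1), requires this >= 1/2.
p = np.mean(np.abs(mean2 - theta) < np.abs(x1 - theta))
print(f"estimated probability: {p:.4f}")
```

Note that |mean2 - θ| and |x1 - θ| have the same marginal distribution (the mean of two i.i.d. standard Cauchy variables is again standard Cauchy), so the inequality in (1) turns entirely on their joint dependence.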