
Chapter 3--Section 3.9_6377_8557_20141030235103



  • Chapter 3: Random Variables and Distributions

    3.9 Functions of Two or More Random Variables

    Random Variables with a Discrete Joint Distribution

    Theorem: Binomial and Bernoulli Distributions. As-

    sume that X1, . . . , Xn are i.i.d. random variables

    having the Bernoulli distribution with parameter p. Let

    Y = X1 + . . . + Xn. Then Y has the binomial distri-

    bution with parameters n and p.
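
    As a numerical illustration (not part of the slides), the theorem can be checked by convolving n copies of the Bernoulli p.f. and comparing the result with the binomial formula C(n, y) p^y (1 − p)^{n−y}:

```python
from math import comb

def bernoulli_pf(p):
    """P.f. of a single Bernoulli(p) variable as a dict {value: probability}."""
    return {0: 1 - p, 1: p}

def convolve_pf(f, g):
    """P.f. of the sum of two independent discrete variables with p.f.s f and g."""
    h = {}
    for x, px in f.items():
        for y, py in g.items():
            h[x + y] = h.get(x + y, 0.0) + px * py
    return h

n, p = 5, 0.3
pf = bernoulli_pf(p)
total = {0: 1.0}                     # p.f. of the empty sum
for _ in range(n):
    total = convolve_pf(total, pf)   # add one more Bernoulli term

# Compare with the binomial formula C(n, y) p^y (1-p)^(n-y)
for y in range(n + 1):
    assert abs(total[y] - comb(n, y) * p**y * (1 - p)**(n - y)) < 1e-12
```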

    Example

    The joint p.f. f (x, y) of X and Y is as specified in the

    following table:

            y = 1   y = 2   y = 3   y = 4

    x = 1   0.1     0       0.1     0

    x = 2   0.3     0       0.1     0.2

    x = 3   0       0.2     0       0

    Table 1: Joint p.f.

    Find the p.f. of Z = X + Y .


  • Solution: Z can take the values 2, 3, . . . , 7.

    P (Z = 2) = P (X + Y = 2) = P (X = 1, Y = 1) = 0.1

    P (Z = 3) = P (X + Y = 3)

    = P [X = 1, Y = 2 or X = 2, Y = 1]

    = P (X = 1, Y = 2) + P (X = 2, Y = 1)

    = 0 + 0.3 = 0.3

    ... ...

    z      2     3     4     5     6     7

    f(z)   0.1   0.3   0.1   0.3   0.2   0
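
    The full tabulation can be reproduced with a short script (an illustrative sketch, not from the slides); cells with probability 0 are omitted, so f(7) = 0 simply never appears:

```python
# Joint p.f. from Table 1, keyed by (x, y); zero cells are omitted
joint = {
    (1, 1): 0.1, (1, 3): 0.1,
    (2, 1): 0.3, (2, 3): 0.1, (2, 4): 0.2,
    (3, 2): 0.2,
}

# P.f. of Z = X + Y: sum the joint probabilities over each diagonal x + y = z
pf_z = {}
for (x, y), prob in joint.items():
    pf_z[x + y] = pf_z.get(x + y, 0.0) + prob
```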

    Random Variables with a Continuous Joint Distribution

    (i) Y = X1 + X2

    Theorem: Let X1 and X2 be independent contin-

    uous random variables and let Y = X1 + X2. The

    distribution of Y is called the convolution of the

    distributions of X1 and X2. The p.d.f. of Y is

    g(y) = ∫_{−∞}^{∞} f1(y − z) f2(z) dz


  • Proof. The c.d.f. F (y) of Y is

    F(y) = P(Y ≤ y) = P(X1 + X2 ≤ y)

    = ∫∫_{x1+x2≤y} f(x1, x2) dx1 dx2

    = ∫∫_{x1+x2≤y} f1(x1) f2(x2) dx1 dx2

    = ∫_{−∞}^{∞} [ ∫_{−∞}^{y−x2} f1(x1) f2(x2) dx1 ] dx2

    [Figure: the region of integration x1 + x2 ≤ y in the (x1, x2)-plane, bounded by the line x1 + x2 = y.]

    Taking the derivative with respect to y on both sides,

    g(y) = F′(y)

    = ∫_{−∞}^{∞} d/dy [ ∫_{−∞}^{y−x2} f1(x1) f2(x2) dx1 ] dx2

    = ∫_{−∞}^{∞} f1(y − x2) f2(x2) dx2

    = ∫_{−∞}^{∞} f1(y − z) f2(z) dz


  • Example: Suppose that X1 and X2 are independent

    random variables with common distribution having p.d.f.

    f(x) = { 2e^{−2x}   x > 0
           { 0          otherwise

    Find the p.d.f. of Y = X1 + X2.

    Consider the support: {(z, y) : y − z > 0 and z > 0}.

    [Figure: the support region 0 < z < y in the (z, y)-plane, above the line y = z.]

    If y ≤ 0, then g(y) = 0. If y > 0,

    g(y) = ∫_{−∞}^{∞} f1(y − z) f2(z) dz

    = ∫_0^y 2e^{−2(y−z)} · 2e^{−2z} dz

    = 4y e^{−2y}

    g(y) = { 4y e^{−2y}   y > 0
           { 0            otherwise
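
    The result can be checked by simulation (an illustrative sketch, not from the slides): integrating g(y) = 4y e^{−2y} gives the c.d.f. G(y) = 1 − e^{−2y}(1 + 2y), which should match the empirical c.d.f. of simulated sums of two Exponential(2) variables.

```python
import random
from math import exp

random.seed(0)
N = 200_000
rate = 2.0

# Y = X1 + X2 with X1, X2 i.i.d. Exponential(rate = 2)
ys = [random.expovariate(rate) + random.expovariate(rate) for _ in range(N)]

# C.d.f. implied by g(y) = 4y e^{-2y}:  G(y) = 1 - e^{-2y}(1 + 2y)
empirical = sum(y <= 1.0 for y in ys) / N
theoretical = 1 - exp(-2.0) * (1 + 2.0)
assert abs(empirical - theoretical) < 0.01
```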


  • (ii) Maximum and Minimum of a Random Sample.

    Suppose that X1, ..., Xn form a random sample of size

    n from a distribution for which the p.d.f. is f and the

    c.d.f. is F. The largest value Yn and the smallest value Y1

    in the random sample are defined as follows:

    Yn = max{X1, ..., Xn}, Y1 = min{X1, ..., Xn}

    Consider Yn first. Let Gn stand for its c.d.f., and let gn

    be its p.d.f.

    Gn(y) = P(Yn ≤ y) = P(X1 ≤ y, ..., Xn ≤ y)
          = P(X1 ≤ y) · · · P(Xn ≤ y)
          = [F(y)]^n

    gn(y) = dGn(y)/dy = n [F(y)]^{n−1} f(y),   y ∈ R

    Next, consider Y1, with c.d.f. G1 and p.d.f. g1.

    G1(y) = P(Y1 ≤ y) = 1 − P(Y1 > y)
          = 1 − P(X1 > y, ..., Xn > y)
          = 1 − P(X1 > y) · · · P(Xn > y)
          = 1 − [1 − F(y)]^n

    g1(y) = dG1(y)/dy = n [1 − F(y)]^{n−1} f(y),   y ∈ R
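
    These two c.d.f. formulas can be verified by simulation (an illustrative sketch, not from the slides) for a Uniform(0, 1) sample, where F(y) = y on (0, 1):

```python
import random

random.seed(1)
n, N, y = 5, 100_000, 0.5

count_max = 0
count_min = 0
for _ in range(N):
    xs = [random.random() for _ in range(n)]   # Uniform(0,1): F(y) = y
    count_max += max(xs) <= y
    count_min += min(xs) <= y

# G_n(y) = [F(y)]^n = y^n;  G_1(y) = 1 - [1 - F(y)]^n = 1 - (1 - y)^n
assert abs(count_max / N - y**n) < 0.01
assert abs(count_min / N - (1 - (1 - y)**n)) < 0.01
```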


  • (iii) Direct Transformation of a Multivariate p.d.f.

    Theorem: Let X1, X2 have a continuous joint distribution

    for which the joint p.d.f. is f(x1, x2). Assume that there

    is a subset S of R^2 such that P[(X1, X2) ∈ S] = 1. Define

    two new random variables Y1, Y2 as follows:

    Y1 = r1(X1, X2)

    Y2 = r2(X1, X2)

    where we assume that the functions r1, r2 define a one-

    to-one differentiable transformation of S onto a subset T

    of R2. Let the inverse of this transformation be given as

    follows:

    x1 = s1(y1, y2)

    x2 = s2(y1, y2)

    Then the joint p.d.f. g(y1, y2) of Y1, Y2 is

    g(y1, y2) = { f(s1(y1, y2), s2(y1, y2)) |J|   (y1, y2) ∈ T
                { 0                               otherwise

    where J is the determinant

    J = det [ ∂s1/∂y1   ∂s1/∂y2 ]
            [ ∂s2/∂y1   ∂s2/∂y2 ]
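
    The determinant of partial derivatives can be checked numerically. The following sketch (illustrative, not from the slides) uses the linear transformation Y1 = X1 + X2, Y2 = X1 − X2, whose inverse is s1 = (y1 + y2)/2, s2 = (y1 − y2)/2, so J = −1/2 everywhere:

```python
def s1(y1, y2):
    return (y1 + y2) / 2          # x1 in terms of (y1, y2)

def s2(y1, y2):
    return (y1 - y2) / 2          # x2 in terms of (y1, y2)

def jacobian_det(s1, s2, y1, y2, h=1e-6):
    """Central-difference approximation of det of the matrix of partials."""
    a = (s1(y1 + h, y2) - s1(y1 - h, y2)) / (2 * h)   # ds1/dy1
    b = (s1(y1, y2 + h) - s1(y1, y2 - h)) / (2 * h)   # ds1/dy2
    c = (s2(y1 + h, y2) - s2(y1 - h, y2)) / (2 * h)   # ds2/dy1
    d = (s2(y1, y2 + h) - s2(y1, y2 - h)) / (2 * h)   # ds2/dy2
    return a * d - b * c

# For this linear map J = -1/2, so the density formula uses |J| = 1/2.
assert abs(jacobian_det(s1, s2, 0.7, 0.3) + 0.5) < 1e-5
```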

  • Example: Suppose that two random variables X1 and

    X2 have a continuous joint distribution for which the joint

    p.d.f. is as follows:

    f(x1, x2) = { 4x1x2   0 < x1 < 1, 0 < x2 < 1
                { 0       otherwise

    Determine the joint p.d.f. of the two new random variables

    Y1 = X1/X2 and Y2 = X1X2.

    Y1 = r1(X1, X2) = X1/X2

    Y2 = r2(X1, X2) = X1X2

    S = {(x1, x2) : 0 < x1 < 1, 0 < x2 < 1}

    r1, r2 define a one-to-one differentiable transformation of

    S onto a subset T of R^2. The inverse functions can be

    found as

    X1 = s1(Y1, Y2) = (Y1Y2)^{1/2}

    X2 = s2(Y1, Y2) = (Y2/Y1)^{1/2}

    These are defined on T = {(y1, y2) : y1 > 0, y2 > 0, 0 < y1y2 < 1, 0 < y2/y1 < 1}.
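
    The inverse functions can be sanity-checked numerically (an illustrative sketch, not from the slides): applying the forward map and then the inverses should return the original point of S.

```python
import random
from math import sqrt, isclose

random.seed(2)

for _ in range(1000):
    x1, x2 = random.random(), random.random()      # a point of S = (0,1)^2
    y1, y2 = x1 / x2, x1 * x2                      # Y1 = X1/X2, Y2 = X1*X2
    # Inverse transformation from the slide: round trip recovers (x1, x2)
    assert isclose(sqrt(y1 * y2), x1, rel_tol=1e-9)
    assert isclose(sqrt(y2 / y1), x2, rel_tol=1e-9)
```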