
2. Random Variables

Dave Goldsman

Georgia Institute of Technology, Atlanta, GA, USA

5/21/14


Outline

1 Intro / Definitions
2 Discrete Random Variables
3 Continuous Random Variables
4 Cumulative Distribution Functions
5 Great Expectations
6 Functions of a Random Variable
7 Bivariate Random Variables
8 Conditional Distributions
9 Independent Random Variables
10 Conditional Expectation
11 Covariance and Correlation
12 Moment Generating Functions


Intro / Definitions

Definition: A random variable (RV) is a function from the sample space to the real line: X : S → R.

Example: Flip 2 coins. S = {HH, HT, TH, TT}.

Suppose X is the RV corresponding to the # of H's.

X(TT) = 0, X(HT) = X(TH) = 1, X(HH) = 2.

P(X = 0) = 1/4, P(X = 1) = 1/2, P(X = 2) = 1/4.

Notation: Capital letters like X,Y, Z, U, V,W usually represent RV’s.

Small letters like x, y, z, u, v, w usually represent particular values of the RV's.

Thus, you can speak of P (X = x).


Example: Let X be the sum of two dice rolls. Then X((4, 6)) = 10. In addition,

P(X = x) =
    1/36 if x = 2
    2/36 if x = 3
    ...
    6/36 if x = 7
    ...
    1/36 if x = 12
    0 otherwise


Example: Flip a coin.

X ≡
    0 if T
    1 if H

Example: Roll a die.

Y ≡
    0 if the roll is in {1, 2, 3}
    1 if the roll is in {4, 5, 6}

For our purposes, X and Y are the same, since P(X = 0) = P(Y = 0) = 1/2 and P(X = 1) = P(Y = 1) = 1/2.


Example: Select a real # at random between 0 and 1.

Infinite number of “equally likely” outcomes.

Conclusion: P (each individual point) = P (X = x) = 0, believe it or not!

But P (X ≤ 0.5) = 0.5 and P (X ∈ [0.3, 0.7]) = 0.4.

If A is any interval in [0,1], then P (A) is the length of A.


Definition: If the number of possible values of a RV X is finite or countably infinite, then X is a discrete RV. Otherwise. . .

A continuous RV is one with prob 0 at every point.

Example: Flip a coin — get H or T . Discrete.

Example: Pick a point at random in [0, 1]. Continuous.


Discrete Random Variables


Definition: If X is a discrete RV, its probability mass function (pmf) is f(x) ≡ P(X = x).

Note that 0 ≤ f(x) ≤ 1 and ∑_x f(x) = 1.

Example: Flip 2 coins. Let X be the number of heads.

f(x) =
    1/4 if x = 0 or 2
    1/2 if x = 1
    0 otherwise

Example: Uniform Distribution on integers 1, 2, . . . , n. X can equal 1, 2, . . . , n, each with prob 1/n. f(i) = 1/n, i = 1, 2, . . . , n.


Example/Definition: Let X denote the number of “successes” from n independent trials such that the P(success) at each trial is p (0 ≤ p ≤ 1). Then X has the Binomial Distribution with parameters n and p.

The trials are referred to as Bernoulli trials.

Notation: X ∼ Bin(n, p). “X has the Bin distribution”

Example: Roll a die 3 indep times. Find P(Get exactly two 6's).

“success” (6) and “failure” (1,2,3,4,5)

All 3 trials are indep, and P(success) = 1/6 doesn't change from trial to trial.

Let X = # of 6's. Then X ∼ Bin(3, 1/6).


Theorem: If X ∼ Bin(n, p), then the prob of k successes in n trials is

P(X = k) = (n choose k) p^k q^(n−k), k = 0, 1, . . . , n,

where q = 1 − p.

Proof: Particular sequence of successes and failures:

SS···S (k successes) followed by FF···F (n − k failures)   (prob = p^k q^(n−k))

The number of ways to arrange such a sequence is (n choose k). Done.


Back to the dice example, where X ∼ Bin(3, 1/6), and we want P(Get exactly two 6's).

n = 3, k = 2, p = 1/6, q = 5/6.

P(X = 2) = (3 choose 2) (1/6)^2 (5/6)^1 = 15/216

k           0        1       2       3
P(X = k)  125/216  75/216  15/216  1/216
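For instance, a minimal Python sketch (standard library only; the helper name is mine) that reproduces this table from the binomial pmf:

```python
from math import comb

def binomial_pmf(k, n, p):
    # P(X = k) = (n choose k) * p^k * q^(n-k), with q = 1 - p
    return comb(n, k) * p**k * (1 - p)**(n - k)

# X ~ Bin(3, 1/6): expect 125/216, 75/216, 15/216, 1/216
for k in range(4):
    print(k, round(216 * binomial_pmf(k, 3, 1/6)), "/ 216")
```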


Example: Roll 2 dice 12 times.

Find P(Result will be 7 or 11 exactly 3 times).

Let X = the number of times we get 7 or 11.

P(7 or 11) = P(7) + P(11) = 6/36 + 2/36 = 2/9.

So X ∼ Bin(12, 2/9).

P(X = 3) = (12 choose 3) (2/9)^3 (7/9)^9.


Definition: If P(X = k) = e^(−λ) λ^k / k!, k = 0, 1, 2, . . ., λ > 0, we say that X has the Poisson distribution with parameter λ.

Notation: X ∼ Pois(λ).

Example: Suppose the number of raisins in a cup of cookie dough is Pois(10). Find the prob that a cup of dough has at least 4 raisins.

P(X ≥ 4) = 1 − P(X = 0, 1, 2, or 3)
         = 1 − e^(−10) (10^0/0! + 10^1/1! + 10^2/2! + 10^3/3!)
         = 0.9897.
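The same computation as a quick Python sketch (standard library only):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) = e^(-lam) * lam^k / k!
    return exp(-lam) * lam**k / factorial(k)

# P(X >= 4) = 1 - P(X <= 3) with lam = 10
print(1 - sum(poisson_pmf(k, 10) for k in range(4)))  # ~0.9897
```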


Continuous Random Variables


Example: Pick a point X randomly between 0 and 1.

f(x) =
    1 if 0 ≤ x ≤ 1
    0 otherwise

P(a < X < b) = area under f(x) from a to b = b − a.


Definition: Suppose X is a continuous RV. f(x) is the probability density function (pdf) if
    ∫_R f(x) dx = 1 (area under f(x) is 1),
    f(x) ≥ 0, ∀x (always non-negative), and
    for A ⊆ R, P(X ∈ A) = ∫_A f(x) dx (probability that X is in a certain region A).

Remarks: If X is a continuous RV, then

P(a < X < b) = ∫_a^b f(x) dx.

An individual point has prob 0, i.e., P (X = x) = 0.

Think of f(x) dx ≈ P (x < X < x+ dx).


Note that f(x) denotes both the pmf (discrete case) and the pdf (continuous case) — but they are different:

f(x) = P(X = x) if X is discrete. Must have 0 ≤ f(x) ≤ 1.

f(x) dx ≈ P(x < X < x + dx) if X is continuous. Must have f(x) ≥ 0 (and f(x) can possibly be > 1).

If X is cts, we calculate the prob of an event by integrating: P(X ∈ A) = ∫_A f(x) dx.


Example: If X is “equally likely” to be anywhere between a and b, then X has the uniform distribution on (a, b).

f(x) =
    1/(b − a) if a < x < b
    0 otherwise

Notation: X ∼ U(a, b)

Note: ∫_R f(x) dx = ∫_a^b 1/(b − a) dx = 1 (as desired).


Example: X has the exponential distribution with parameter λ > 0 if it has pdf

f(x) =
    λ e^(−λx) if x ≥ 0
    0 otherwise

Notation: X ∼ Exp(λ)

Note: ∫_R f(x) dx = ∫_0^∞ λ e^(−λx) dx = 1 (as desired).


Example: Suppose X ∼ Exp(1). Then

P(X ≤ 3) = ∫_0^3 e^(−x) dx = 1 − e^(−3).

P(X ≥ 5) = ∫_5^∞ e^(−x) dx = e^(−5).

P(2 ≤ X < 4) = P(2 ≤ X ≤ 4) = ∫_2^4 e^(−x) dx = e^(−2) − e^(−4).

P(X = 3) = ∫_3^3 e^(−x) dx = 0.
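These are easy to sanity-check numerically; here is a sketch assuming SciPy is available (quad is its general-purpose numerical integrator):

```python
from math import exp, inf
from scipy.integrate import quad  # assumes SciPy is installed

f = lambda x: exp(-x)  # Exp(1) pdf for x >= 0

print(quad(f, 0, 3)[0], 1 - exp(-3))        # P(X <= 3)
print(quad(f, 5, inf)[0], exp(-5))          # P(X >= 5)
print(quad(f, 2, 4)[0], exp(-2) - exp(-4))  # P(2 <= X <= 4)
print(quad(f, 3, 3)[0])                     # P(X = 3) = 0
```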


Example: Suppose X is a cts RV with pdf

f(x) =
    c x^2 if 0 < x < 2
    0 otherwise

Find c.

Answer: 1 = ∫_R f(x) dx = ∫_0^2 c x^2 dx = (8/3)c, so c = 3/8.

Find P (0 < X < 1).

Answer: P(0 < X < 1) = ∫_0^1 (3/8) x^2 dx = 1/8.


Find P(0 < X < 1 | 1/2 < X < 3/2).

Answer:

P(0 < X < 1 | 1/2 < X < 3/2)
    = P(0 < X < 1 and 1/2 < X < 3/2) / P(1/2 < X < 3/2)
    = P(1/2 < X < 1) / P(1/2 < X < 3/2)
    = [∫_{1/2}^1 (3/8) x^2 dx] / [∫_{1/2}^{3/2} (3/8) x^2 dx]
    = 7/26.
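And a numerical cross-check of that conditional probability (a sketch, again assuming SciPy):

```python
from scipy.integrate import quad  # assumes SciPy is installed

f = lambda x: (3/8) * x**2  # pdf on 0 < x < 2

num = quad(f, 0.5, 1.0)[0]  # P(1/2 < X < 1)
den = quad(f, 0.5, 1.5)[0]  # P(1/2 < X < 3/2)
print(num / den, 7/26)      # both ~0.2692
```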


Cumulative Distribution Functions


Definition: For any RV X, the cumulative distribution function (cdf) is defined for all x by F(x) ≡ P(X ≤ x).

X discrete implies

F(x) = ∑_{y ≤ x} f(y) = ∑_{y ≤ x} P(X = y).

X continuous implies

F(x) = ∫_{−∞}^x f(t) dt.


Discrete cdf’s

Example: Flip a coin twice. X = number of H's.

X =
    0 or 2 w.p. 1/4
    1 w.p. 1/2

F(x) = P(X ≤ x) =
    0 if x < 0
    1/4 if 0 ≤ x < 1
    3/4 if 1 ≤ x < 2
    1 if x ≥ 2

(You have to be careful about “≤” vs. “<”.)


Continuous cdf’s

Theorem: If X is a continuous RV, then f(x) = F ′(x).

Proof: F′(x) = (d/dx) ∫_{−∞}^x f(t) dt = f(x), by the fundamental theorem of calculus.

Remark: If X is continuous, you can get from the pdf f(x) to the cdf F(x) by integrating.


Example: X ∼ U(0, 1).

f(x) =
    1 if 0 < x < 1
    0 otherwise

F(x) =
    0 if x ≤ 0
    x if 0 < x < 1
    1 if x ≥ 1


Example: X ∼ Exp(λ).

f(x) =
    λ e^(−λx) if x > 0
    0 otherwise

F(x) = ∫_{−∞}^x f(t) dt =
    0 if x ≤ 0
    1 − e^(−λx) if x > 0


Properties of all cdf’s

F(x) is non-decreasing in x, i.e., a < b implies that F(a) ≤ F(b).

lim_{x→∞} F(x) = 1 and lim_{x→−∞} F(x) = 0.

F(x) is right-cts at every point x.

Theorem: P (X > x) = 1− F (x).

Proof:

1 = P (X ≤ x) + P (X > x) = F (x) + P (X > x).


Theorem: a < b ⇒ P(a < X ≤ b) = F(b) − F(a).

Proof:

P(a < X ≤ b) = P({X > a} ∩ {X ≤ b})
             = P(X > a) + P(X ≤ b) − P({X > a} ∪ {X ≤ b})
             = 1 − F(a) + F(b) − 1   (the union is the whole sample space since a < b)
             = F(b) − F(a).


Great Expectations


Mean (Expected Value)

Law of the Unconscious Statistician

Variance

Chebychev’s Inequality


Definition: The mean or expected value or average of a RV X is

µ ≡ E[X] ≡
    ∑_x x f(x) if X is discrete
    ∫_R x f(x) dx if X is cts

The mean gives an indication of a RV's central tendency. It can be thought of as a weighted average of the possible x's, where the weights are given by f(x).


Example: Suppose X has the Bernoulli distribution with parameter p, i.e., P(X = 1) = p, P(X = 0) = q = 1 − p. Then

E[X] = ∑_x x P(X = x) = 1 · p + 0 · q = p.

Example: Die. X = 1, 2, . . . , 6, each w.p. 1/6. Then

E[X] = ∑_x x f(x) = 1 · (1/6) + · · · + 6 · (1/6) = 3.5.

Example: Suppose X has the Geometric distribution with parameter p, i.e., X is the number of Bern(p) trials until you obtain your first success (e.g., FFFFS corresponds to X = 5). Then f(x) = (1 − p)^(x−1) p, x = 1, 2, . . ., and after a lot of algebra,

E[X] = ∑_x x P(X = x) = ∑_{x=1}^∞ x (1 − p)^(x−1) p = 1/p.


Example: X ∼ Exp(λ). f(x) = λ e^(−λx), x ≥ 0. Then

E[X] = ∫_R x f(x) dx
     = ∫_0^∞ x λ e^(−λx) dx
     = −x e^(−λx) |_0^∞ − ∫_0^∞ (−e^(−λx)) dx   (by parts)
     = ∫_0^∞ e^(−λx) dx   (L'Hôpital's rule)
     = 1/λ.


Law of the Unconscious Statistician (LOTUS)

Definition/Theorem: The expected value of a function of X, say h(X), is

E[h(X)] ≡
    ∑_x h(x) f(x) if X is discrete
    ∫_R h(x) f(x) dx if X is cts

E[h(X)] is simply a weighted average of the h(x) values, where the weights are the f(x) values.

Examples: E[X^2] = ∫_R x^2 f(x) dx

E[sin X] = ∫_R (sin x) f(x) dx


Just a moment please. . .

Definition: The kth moment of X is

E[X^k] =
    ∑_x x^k f(x) if X is discrete
    ∫_R x^k f(x) dx if X is cts

Definition: The kth central moment of X is

E[(X − µ)^k] =
    ∑_x (x − µ)^k f(x) if X is discrete
    ∫_R (x − µ)^k f(x) dx if X is cts


Definition: The variance of X is the second central moment, i.e., Var(X) ≡ E[(X − µ)^2]. It's a measure of spread or dispersion.

Notation: σ^2 ≡ Var(X).

Definition: The standard deviation of X is σ ≡ +√Var(X).


Theorem: For any h(X) and constants a and b, we have E[ah(X) + b] = aE[h(X)] + b.

Proof (just do cts case):

E[ah(X) + b] = ∫_R (a h(x) + b) f(x) dx
             = a ∫_R h(x) f(x) dx + b ∫_R f(x) dx
             = a E[h(X)] + b.

Remark: In particular, E[aX + b] = aE[X] + b.

Similarly, E[g(X) + h(X)] = E[g(X)] + E[h(X)].


Theorem (easier way to calculate variance):

Var(X) = E[X^2] − (E[X])^2

Proof:

Var(X) = E[(X − µ)^2]
       = E[X^2 − 2µX + µ^2]
       = E[X^2] − 2µ E[X] + µ^2   (by above remarks)
       = E[X^2] − µ^2.


Example: X ∼ Bern(p).

X =
    1 w.p. p
    0 w.p. q

Recall E[X] = p. In fact, for any k,

E[X^k] = 0^k · q + 1^k · p = p.

So Var(X) = E[X^2] − (E[X])^2 = p − p^2 = pq.


Example: X ∼ U(a, b). f(x) = 1/(b − a), a < x < b.

E[X] = ∫_a^b x · (1/(b − a)) dx = (a + b)/2

E[X^2] = ∫_a^b x^2 · (1/(b − a)) dx = (a^2 + ab + b^2)/3

Var(X) = E[X^2] − (E[X])^2 = (a − b)^2/12   (algebra).


Theorem: Var(aX + b) = a^2 Var(X). A “shift” doesn't matter!

Proof:

Var(aX + b) = E[(aX + b)^2] − (E[aX + b])^2
            = E[a^2 X^2 + 2abX + b^2] − (a E[X] + b)^2
            = a^2 E[X^2] + 2ab E[X] + b^2 − (a^2 (E[X])^2 + 2ab E[X] + b^2)
            = a^2 (E[X^2] − (E[X])^2)
            = a^2 Var(X)


Example: X ∼ Bern(0.3)

Recall that E[X] = p = 0.3 and Var(X) = pq = (0.3)(0.7) = 0.21.

Let Y = h(X) = 4X + 5. Then

E[Y ] = E[4X + 5] = 4E[X] + 5 = 6.2

Var(Y ) = Var(4X + 5) = 16Var(X) = 3.36.


Approximations to E[h(X)] and Var(h(X))

Sometimes Y = h(X) is messy, and we may have to approximate E[h(X)] and Var(h(X)). Use a Taylor series approach. To do so, let µ = E[X] and σ^2 = Var(X) and note that

Y = h(µ) + (X − µ) h′(µ) + ((X − µ)^2/2) h′′(µ) + R,

where R is a remainder term which we shall now ignore.

Then

E[Y] ≈ h(µ) + E[X − µ] h′(µ) + (E[(X − µ)^2]/2) h′′(µ) = h(µ) + h′′(µ) σ^2/2

and (now we’ll use an even-cruder approximation)

Var(Y) ≈ Var(h(µ) + (X − µ) h′(µ)) = [h′(µ)]^2 σ^2.


Example: Suppose that X has pdf f(x) = 3x^2, 0 ≤ x ≤ 1, and that we want to test out our approximations on the “complicated” random variable Y = h(X) = X^(3/4). Well, it's not really that complicated, since we can calculate the exact moments:

E[Y] = ∫_0^1 x^(3/4) f(x) dx = ∫_0^1 3x^(11/4) dx = 4/5

E[Y^2] = ∫_0^1 x^(6/4) f(x) dx = ∫_0^1 3x^(7/2) dx = 2/3

Var(Y) = E[Y^2] − (E[Y])^2 = 2/75 = 0.0267.

Before we can do the approximation, note that

µ = E[X] = ∫_0^1 x f(x) dx = ∫_0^1 3x^3 dx = 3/4

σ^2 = Var(X) = E[X^2] − (E[X])^2 = 3/80 = 0.0375.


Further,

h(µ) = µ^(3/4) = (3/4)^(3/4) = 0.8059

h′(µ) = (3/4) µ^(−1/4) = (3/4)(3/4)^(−1/4) = 0.8059

h′′(µ) = −(3/16) µ^(−5/4) = −0.2686.

Thus,

E[Y] ≈ h(µ) + h′′(µ) σ^2/2 = 0.8059 − (0.2686)(0.0375)/2 = 0.8009

and Var(Y) ≈ [h′(µ)]^2 σ^2 = (0.8059)^2 (0.0375) = 0.0243,

both of which are reasonably close to their true values.
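A Monte Carlo cross-check of these numbers (my sketch, standard library only): since F(x) = x^3 here, the inverse transform trick coming up later in this module gives X = U^(1/3), so Y = X^(3/4) = U^(1/4).

```python
import random

random.seed(1)
n = 10**6
# F(x) = x^3 on [0,1], so X = U^(1/3), and Y = X^(3/4) = U^(1/4)
ys = [random.random() ** 0.25 for _ in range(n)]
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n
print(mean, var)  # ~0.800 and ~0.0267, vs. the Taylor values 0.8009 and 0.0243
```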


Chebychev’s Inequality

Theorem: Suppose that E[X] = µ and Var(X) = σ^2. Then for any ε > 0,

P(|X − µ| ≥ ε) ≤ σ^2/ε^2.

Proof: See text.

Remarks: If ε = kσ, then P(|X − µ| ≥ kσ) ≤ 1/k^2.

P(|X − µ| < ε) ≥ 1 − σ^2/ε^2.

Chebychev gives a crude bound on the prob that X deviates from the mean by more than a constant, in terms of the constant and the variance. You can always use Chebychev, but it's crude.


Example: Suppose X ∼ U(0, 1). f(x) = 1, 0 < x < 1.

E[X] = (a + b)/2 = 1/2, Var(X) = (a − b)^2/12 = 1/12.

Then Chebychev implies

P(|X − 1/2| ≥ ε) ≤ 1/(12 ε^2).

In particular, for ε = 1/3,

P(|X − 1/2| ≥ 1/3) ≤ 3/4   (upper bound).


Example (cont’d): Let’s compare the above bound to the exact answer.

P(|X − 1/2| ≥ 1/3) = 1 − P(|X − 1/2| < 1/3)
                   = 1 − P(−1/3 < X − 1/2 < 1/3)
                   = 1 − P(1/6 < X < 5/6)
                   = 1 − ∫_{1/6}^{5/6} f(x) dx
                   = 1 − 2/3 = 1/3.

(So the Chebychev bound was pretty high in comparison.)
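A simulation sketch (standard library only) makes the gap between the bound and the truth visible:

```python
import random

random.seed(2)
n = 10**6
hits = sum(abs(random.random() - 0.5) >= 1/3 for _ in range(n))
print(hits / n)  # ~0.333 (exact answer 1/3), vs. the Chebychev bound 0.75
```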


Functions of a Random Variable


Problem Statement

Discrete Case

Continuous Case

Inverse Transform Theorem

Problem: You have a RV X and you know its pmf/pdf f(x).

Define Y ≡ h(X) (some fn of X).

Find g(y), the pmf/pdf of Y .


Discrete Case: X discrete implies Y discrete implies

g(y) = P(Y = y) = P(h(X) = y) = P({x | h(x) = y}) = ∑_{x: h(x)=y} f(x).

Example: X is the # of H's in 2 coin tosses. Want the pmf for Y = h(X) = X^3 − X.

x            0    1    2
P(X = x)    1/4  1/2  1/4
y = x^3 − x  0    0    6

g(0) = P(Y = 0) = P(X = 0 or 1) = 3/4 and g(6) = P(Y = 6) = P(X = 2) = 1/4.

g(y) =
    3/4 if y = 0
    1/4 if y = 6


Example: X is discrete with

f(x) =
    1/8 if x = −1
    3/8 if x = 0
    1/3 if x = 1
    1/6 if x = 2
    0 otherwise

Let Y = X^2 (Y can only equal 0, 1, 4).

g(y) =
    P(Y = 0) = f(0) = 3/8
    P(Y = 1) = f(−1) + f(1) = 11/24
    P(Y = 4) = f(2) = 1/6
    0 otherwise


Continuous Case: X continuous implies Y can be continuous or discrete.

Example: Y = X^2 (clearly cts)

Example: Y =
    0 if X < 0
    1 if X ≥ 0
is not continuous.

Method: Compute G(y), the cdf of Y .

G(y) = P(Y ≤ y) = P(h(X) ≤ y) = ∫_{{x | h(x) ≤ y}} f(x) dx.

If G(y) is cts, construct the pdf g(y) by differentiating.


Example: f(x) = |x|, −1 ≤ x ≤ 1.

Find the pdf of the RV Y = X^2.

G(y) = P(Y ≤ y) = P(X^2 ≤ y) =
    0 if y ≤ 0
    1 if y ≥ 1
    (?) if 0 < y < 1,

where

(?) = P(−√y ≤ X ≤ √y) = ∫_{−√y}^{√y} |x| dx = y.


Thus,

G(y) =
    0 if y ≤ 0
    1 if y ≥ 1
    y if 0 < y < 1

This implies

g(y) = G′(y) =
    0 if y ≤ 0 or y ≥ 1
    1 if 0 < y < 1

This is the U(0,1) distribution!


Example: Suppose U ∼ U(0, 1). Find the pdf of Y = −ln(1 − U).

G(y) = P(Y ≤ y)
     = P(−ln(1 − U) ≤ y)
     = P(1 − U ≥ e^(−y))
     = P(U ≤ 1 − e^(−y))
     = ∫_0^{1−e^(−y)} f(u) du
     = 1 − e^(−y)   (since f(u) = 1)

Taking the derivative, we have g(y) = e^(−y), y > 0.

Wow! This implies Y ∼ Exp(λ = 1).

We can generalize this result. . .


Inverse Transform Theorem: Suppose X is a cts RV having cdf F(x). Then the random variable F(X) ∼ U(0, 1).

Proof: Let Y = F (X). Then the cdf of Y is

G(y) = P(Y ≤ y)
     = P(F(X) ≤ y)
     = P(X ≤ F^(−1)(y))   (the cdf is mono. increasing)
     = F(F^(−1)(y))   (F(x) is the cdf of X)
     = y. Uniform!


Remark: This is a great theorem, since it applies to all continuous RV's X.

Corollary: X = F^(−1)(U), so you can plug a U(0,1) RV into the inverse cdf to generate a realization of a RV having X's distribution.

Remark: This is what we did in the example on the previous page. This trick has tremendous applications in simulation.
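For instance, a minimal sketch (standard library only) that generates Exp(λ) realizations by plugging U(0,1)'s into the inverse cdf F^(−1)(u) = −ln(1 − u)/λ:

```python
import math
import random

def exp_variate(lam):
    # X = F^(-1)(U) = -ln(1 - U)/lam ~ Exp(lam), by the Inverse Transform Theorem
    return -math.log(1 - random.random()) / lam

random.seed(3)
xs = [exp_variate(1.0) for _ in range(10**5)]
print(sum(xs) / len(xs))  # sample mean ~ 1/lam = 1
```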


Bivariate Random Variables


Discrete Case

Continuous Case

Bivariate cdf’s

Marginal Distributions

Now let’s look at what happens when you consider 2 RV’ssimultaneously.

Example: Choose a person at random. Look at height and weight(X,Y ). Obviously, X and Y will be related somehow.


Discrete Case

Definition: If X and Y are discrete RV's, then (X, Y) is called a jointly discrete bivariate RV.

The joint (or bivariate) pmf is

f(x, y) = P (X = x, Y = y).

Properties:

(1) 0 ≤ f(x, y) ≤ 1.
(2) ∑_x ∑_y f(x, y) = 1.
(3) A ⊆ R^2 ⇒ P((X, Y) ∈ A) = ∑∑_{(x,y)∈A} f(x, y).


Example: 3 sox in a box (numbered 1, 2, 3). Draw 2 sox at random w/o replacement. X = # of first sock, Y = # of second sock. The joint pmf f(x, y) is

            X = 1   X = 2   X = 3   P(Y = y)
Y = 1        0      1/6     1/6     1/3
Y = 2       1/6      0      1/6     1/3
Y = 3       1/6     1/6      0      1/3
P(X = x)    1/3     1/3     1/3      1

fX(x) ≡ P(X = x) is the “marginal” distribution of X. fY(y) ≡ P(Y = y) is the “marginal” distribution of Y.


By the law of total probability,

P(X = 1) = ∑_{y=1}^3 P(X = 1, Y = y) = 1/3.

In addition,

P(X ≥ 2, Y ≥ 2) = ∑_{x≥2} ∑_{y≥2} f(x, y)
                = f(2, 2) + f(2, 3) + f(3, 2) + f(3, 3)
                = 0 + 1/6 + 1/6 + 0 = 1/3.


Continuous Case

Definition: If X and Y are cts RV’s, then (X,Y ) is a jointly cts RV ifthere exists a function f(x, y) such that

(1) f(x, y) ≥ 0, ∀x, y.
(2) ∫∫_{R^2} f(x, y) dx dy = 1.
(3) P(A) = P((X, Y) ∈ A) = ∫∫_A f(x, y) dx dy.

In this case, f(x, y) is called the joint pdf.

If A ⊆ R^2, then P(A) is the volume between f(x, y) and A.

Think of

f(x, y) dx dy ≈ P (x < X < x+ dx, y < Y < y + dy).

Easy to see how all of this generalizes the 1-dimensional pdf, f(x).


Easy Example: Choose a point at random in the interior of the circle inscribed in the unit square, i.e., C ≡ {(x − 1/2)^2 + (y − 1/2)^2 ≤ 1/4}. Find the pdf of (X, Y).

Since the area of the circle is π/4,

f(x, y) =
    4/π if (x, y) ∈ C
    0 otherwise

Application: Toss n darts randomly into the unit square. The probability that any individual dart will land in the circle is π/4. It stands to reason that the proportion of darts, p̂n, that land in the circle will be approximately π/4. So you can use 4p̂n to estimate π!
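A minimal sketch of that dart-throwing estimator (standard library only):

```python
import random

random.seed(4)
n = 10**6
# count uniform points falling inside the inscribed circle of radius 1/2
hits = sum((random.random() - 0.5)**2 + (random.random() - 0.5)**2 <= 0.25
           for _ in range(n))
print(4 * hits / n)  # ~3.14
```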


Example: Suppose that

f(x, y) =
    4xy if 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0 otherwise

Find the prob (volume) of the region 0 ≤ y ≤ 1 − x^2.

V = ∫_0^1 ∫_0^{1−x^2} 4xy dy dx = ∫_0^1 ∫_0^{√(1−y)} 4xy dx dy = 1/3.

Moral: Be careful with limits!


Bivariate cdf’s

Definition: The joint (bivariate) cdf of X and Y is F(x, y) ≡ P(X ≤ x, Y ≤ y), for all x, y.

F(x, y) =
    ∑∑_{s≤x, t≤y} f(s, t)   (discrete)
    ∫_{−∞}^y ∫_{−∞}^x f(s, t) ds dt   (continuous)


Properties:

F(x, y) is non-decreasing in both x and y.

lim_{x→−∞} F(x, y) = lim_{y→−∞} F(x, y) = 0.

lim_{x→∞} F(x, y) = FY(y) = P(Y ≤ y) and lim_{y→∞} F(x, y) = FX(x) = P(X ≤ x).

lim_{x→∞} lim_{y→∞} F(x, y) = 1.

F(x, y) is cts from the right in both x and y.

Going from cdf’s to pdf’s (continuous case).

1-dimension: f(x) = F′(x) = (d/dx) ∫_{−∞}^x f(t) dt.

2-D: f(x, y) = ∂^2/(∂x ∂y) F(x, y) = ∂^2/(∂x ∂y) ∫_{−∞}^x ∫_{−∞}^y f(s, t) dt ds.


Example: Suppose

F(x, y) =
    1 − e^(−x) − e^(−y) + e^(−(x+y)) if x ≥ 0, y ≥ 0
    0 if x < 0 or y < 0

The “marginal” cdf of X is

FX(x) = lim_{y→∞} F(x, y) =
    1 − e^(−x) if x ≥ 0
    0 if x < 0

The joint pdf is

f(x, y) = ∂^2/(∂x ∂y) F(x, y) = (∂/∂y)(e^(−x) − e^(−y) e^(−x)) =
    e^(−(x+y)) if x ≥ 0, y ≥ 0
    0 if x < 0 or y < 0


Marginal Distributions — Distrns of X and Y .

Example (discrete case): f(x, y) = P (X = x, Y = y).

            X = 1   X = 2   X = 3   P(Y = y)
Y = 4       0.01    0.07    0.12    0.2
Y = 6       0.29    0.03    0.48    0.8
P(X = x)    0.3     0.1     0.6      1

By total probability,

P (X = 1) = P (X = 1, Y = any #) = 0.3.


Definition: If X and Y are jointly discrete, then the marginal pmf's of X and Y are, respectively,

fX(x) = P(X = x) = ∑_y f(x, y)

and

fY(y) = P(Y = y) = ∑_x f(x, y).


Example (discrete case): f(x, y) = P (X = x, Y = y).

            X = 1   X = 2   X = 3   P(Y = y)
Y = 4       0.06    0.02    0.12    0.2
Y = 6       0.24    0.08    0.48    0.8
P(X = x)    0.3     0.1     0.6      1

Hmmm. . . . Compared to the last example, this has the same marginals but a different joint distribution!


Definition: If X and Y are jointly cts, then the marginal pdf's of X and Y are, respectively,

fX(x) = ∫_R f(x, y) dy and fY(y) = ∫_R f(x, y) dx.

Example:

f(x, y) =
    e^(−(x+y)) if x ≥ 0, y ≥ 0
    0 otherwise

Then the marginal pdf of X is

fX(x) = ∫_R f(x, y) dy = ∫_0^∞ e^(−(x+y)) dy =
    e^(−x) if x ≥ 0
    0 otherwise


Example:

f(x, y) =
    (21/4) x^2 y if x^2 ≤ y ≤ 1
    0 otherwise

Note the funny limits where the pdf is positive, i.e., x^2 ≤ y ≤ 1.

fX(x) = ∫_R f(x, y) dy = ∫_{x^2}^1 (21/4) x^2 y dy = (21/8) x^2 (1 − x^4), if −1 ≤ x ≤ 1

fY(y) = ∫_R f(x, y) dx = ∫_{−√y}^{√y} (21/4) x^2 y dx = (7/2) y^(5/2), if 0 ≤ y ≤ 1


Conditional Distributions


Recall conditional probability: P (A|B) = P (A ∩B)/P (B) if P (B) > 0.

Suppose that X and Y are jointly discrete RV’s. Then if P (Y = y) > 0,

P(X = x | Y = y) = P(X = x ∩ Y = y)/P(Y = y) = f(x, y)/fY(y).

P (X = x|Y = 2) defines the probabilities on X given that Y = 2.

Definition: If fY(y) > 0, then fX|Y(x|y) ≡ f(x, y)/fY(y) is the conditional pmf/pdf of X given Y = y.

Remark: Usually just write f(x|y) instead of fX|Y (x|y).

Remark: Of course, fY|X(y|x) = f(y|x) = f(x, y)/fX(x).


Discrete Example: f(x, y) = P (X = x, Y = y).

          X = 1   X = 2   X = 3   fY(y)
Y = 4     0.01    0.07    0.12    0.2
Y = 6     0.29    0.03    0.48    0.8
fX(x)     0.3     0.1     0.6      1

Then, for example,

f(x | y = 6) = f(x, 6)/fY(6) = f(x, 6)/0.8 =
    29/80 if x = 1
    3/80 if x = 2
    48/80 if x = 3
    0 otherwise


Old Cts Example:

f(x, y) = (21/4) x^2 y, if x^2 ≤ y ≤ 1

fX(x) = (21/8) x^2 (1 − x^4), if −1 ≤ x ≤ 1

fY(y) = (7/2) y^(5/2), if 0 ≤ y ≤ 1

Then, for example,

f(y | 1/2) = f(1/2, y)/fX(1/2)
           = [(21/4) · (1/4) y] / [(21/8) · (1/4) · (1 − 1/16)]
           = (32/15) y, if 1/4 ≤ y ≤ 1.


More generally,

f(y|x) = f(x, y)/fX(x)
       = [(21/4) x^2 y] / [(21/8) x^2 (1 − x^4)], if x^2 ≤ y ≤ 1
       = 2y/(1 − x^4), if x^2 ≤ y ≤ 1.

Note: 2/(1 − x^4) is a constant with respect to y, and we can check to see that f(y|x) is a legit condl pdf: ∫_{x^2}^1 2y/(1 − x^4) dy = 1.


Typical Problem: Given fX(x) and f(y|x), find fY (y).

Steps: (1) f(x, y) = fX(x) f(y|x).

(2) fY(y) = ∫_R f(x, y) dx.

Example: fX(x) = 2x, 0 < x < 1.

Given X = x, suppose that Y |x ∼ U(0, x). Now find fY (y).


Solution: Y |x ∼ U(0, x) ⇒ f(y|x) = 1/x, 0 < y < x. So

f(x, y) = fX(x) f(y|x)
        = 2x · (1/x), if 0 < x < 1 and 0 < y < x
        = 2, if 0 < y < x < 1.

Thus,

fY(y) = ∫_R f(x, y) dx = ∫_y^1 2 dx = 2(1 − y), 0 < y < 1.


Independent Random Variables


Independence

Recall that two events are independent if P (A ∩B) = P (A)P (B).

Then P(A|B) = P(A ∩ B)/P(B) = P(A)P(B)/P(B) = P(A).

And similarly, P (B|A) = P (B).

Now we want to define independence for RV's, i.e., the outcome of X doesn't influence the outcome of Y.

Definition: X and Y are independent RV’s if, for all x and y,

f(x, y) = fX(x)fY (y).


Equivalent definitions:

F(x, y) = FX(x) FY(y), ∀x, y

or

P(X ≤ x, Y ≤ y) = P(X ≤ x) P(Y ≤ y), ∀x, y

If X and Y aren’t indep, then they’re dependent.

Theorem: X and Y are indep if and only if f(y|x) = fY (y) ∀x, y.

Proof: f(y|x) = f(x, y)/fX(x) = fX(x) fY(y)/fX(x) = fY(y).

Similarly, X and Y indep implies f(x|y) = fX(x).


Example (discrete): f(x, y) = P (X = x, Y = y).

          X = 1   X = 2   fY(y)
Y = 2     0.12    0.28    0.4
Y = 3     0.18    0.42    0.6
fX(x)     0.3     0.7      1

X and Y are indep since f(x, y) = fX(x)fY (y), ∀x, y.


Example (cts): Suppose f(x, y) = 6xy^2, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. After some work (which can be avoided by the next theorem), we can derive

fX(x) = 2x, if 0 ≤ x ≤ 1, and

fY(y) = 3y^2, if 0 ≤ y ≤ 1.

X and Y are indep since f(x, y) = fX(x)fY (y), ∀x, y.


Easy way to tell if X and Y are indep. . .

Theorem: X and Y are indep iff f(x, y) = a(x)b(y), ∀x, y, for some functions a(x) and b(y) (not necessarily pdf's).

So if f(x, y) factors into separate functions of x and y, then X and Y are indep.

Example: f(x, y) = 6xy^2, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Take

a(x) = 6x, 0 ≤ x ≤ 1, and b(y) = y^2, 0 ≤ y ≤ 1.

Thus, X and Y are indep (as above).


Example: f(x, y) = (21/4) x^2 y, x^2 ≤ y ≤ 1.

“Funny” (non-rectangular) limits make factoring into marginals impossible. Thus, X and Y are not indep.

Example: f(x, y) = c/(x + y), 1 ≤ x ≤ 2, 1 ≤ y ≤ 3.

Can't factor f(x, y) into fn's of x and y separately. Thus, X and Y are not indep.

Now that we can figure out if X and Y are indep, what can we do with that knowledge?


Consequences of Independence

Definition/Theorem (another Unconscious Statistician): Let h(X, Y) be a fn of the RV's X and Y. Then

E[h(X, Y)] =
    ∑_x ∑_y h(x, y) f(x, y)   (discrete)
    ∫_R ∫_R h(x, y) f(x, y) dx dy   (continuous)

Theorem: Whether or not X and Y are indep,

E[X + Y ] = E[X] + E[Y ].


Proof (cts case):

E[X + Y] = ∫_R ∫_R (x + y) f(x, y) dx dy
         = ∫_R ∫_R x f(x, y) dx dy + ∫_R ∫_R y f(x, y) dx dy
         = ∫_R x ∫_R f(x, y) dy dx + ∫_R y ∫_R f(x, y) dx dy
         = ∫_R x fX(x) dx + ∫_R y fY(y) dy
         = E[X] + E[Y].


Can generalize this result to more than two RV’s.

Corollary: If X1, X2, . . . , Xn are RV’s, then

E[∑_{i=1}^n Xi] = ∑_{i=1}^n E[Xi].

Proof: Induction.


Theorem: If X and Y are indep, then E[XY ] = E[X]E[Y ].

Proof (cts case):

E[XY] = ∫_R ∫_R x y f(x, y) dx dy
      = ∫_R ∫_R x y fX(x) fY(y) dx dy   (X and Y are indep)
      = (∫_R x fX(x) dx)(∫_R y fY(y) dy)
      = E[X] E[Y].


Remark: The above theorem is not necessarily true if X and Y are dependent. See the upcoming discussion on covariance.

Theorem: If X and Y are indep, then

Var(X + Y ) = Var(X) + Var(Y ).

Remark: The assumption of independence really is important here. If X and Y aren't independent, then the result might not hold.

Corollary: If X1, X2, . . . , Xn are indep RV’s, then

Var(∑_{i=1}^n Xi) = ∑_{i=1}^n Var(Xi).

Proof: Induction.


Proof:

Var(X + Y) = E[(X + Y)^2] − (E[X + Y])^2
           = E[X^2 + 2XY + Y^2] − (E[X] + E[Y])^2
           = E[X^2] + 2E[XY] + E[Y^2] − (E[X])^2 − 2E[X]E[Y] − (E[Y])^2
           = E[X^2] + 2E[X]E[Y] + E[Y^2] − (E[X])^2 − 2E[X]E[Y] − (E[Y])^2   (E[XY] = E[X]E[Y] by indep)
           = E[X^2] − (E[X])^2 + E[Y^2] − (E[Y])^2
           = Var(X) + Var(Y).


Random Samples

Definition: X1, X2, . . . , Xn form a random sample if
    the Xi's are all independent, and
    each Xi has the same pmf/pdf f(x).

Notation: X1, . . . , Xn iid∼ f(x) (“indep and identically distributed”)


Example/Theorem: Suppose X1, . . . , Xn iid∼ f(x) with E[Xi] = µ and Var(Xi) = σ^2. Define the sample mean as

X̄ ≡ (1/n) ∑_{i=1}^n Xi.

Then

E[X̄] = E[(1/n) ∑_{i=1}^n Xi] = (1/n) ∑_{i=1}^n E[Xi] = (1/n) ∑_{i=1}^n µ = µ.

So the mean of X̄ is the same as the mean of Xi.


Meanwhile,. . .

Var(X̄) = Var((1/n) ∑_{i=1}^n Xi)
        = (1/n^2) Var(∑_{i=1}^n Xi)
        = (1/n^2) ∑_{i=1}^n Var(Xi)   (Xi's indep)
        = (1/n^2) ∑_{i=1}^n σ^2 = σ^2/n.

So the mean of X̄ is the same as the mean of Xi, but the variance decreases! This makes X̄ a great estimator for µ (which is usually unknown in practice), and the result is referred to as the Law of Large Numbers. Stay tuned.
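A small simulation sketch of this effect (standard library only), using U(0,1) observations, so that µ = 1/2 and σ^2 = 1/12:

```python
import random
import statistics

random.seed(5)
n, reps = 25, 20000
xbars = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]
print(statistics.fmean(xbars))                  # ~0.5 = mu
print(statistics.pvariance(xbars), (1/12) / n)  # both ~0.00333 = sigma^2/n
```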


Conditional Expectation


Usual definition of expectation. E.g., what’s the avg weight of a male?

E[Y] =
    ∑_y y f(y)   (discrete)
    ∫_R y f(y) dy   (continuous)

Now suppose we’re interested in the avg weight of a 6’6" tall male.

f(y|x) is the conditional pdf/pmf of Y given X = x.

Definition: The conditional expectation of Y given X = x is

E[Y | X = x] ≡
    ∑_y y f(y|x)   (discrete)
    ∫_R y f(y|x) dy   (continuous)

Note that E[Y | X = x] is a function of x.


Discrete Example:

f(x, y)    X = 0   X = 3   fY(y)
Y = 2      0.11    0.34    0.45
Y = 5      0.00    0.05    0.05
Y = 10     0.29    0.21    0.50
fX(x)      0.40    0.60     1

The unconditional expectation is E[Y] = ∑_y y fY(y) = 6.15. But conditional on X = 3, we have

f(y | x = 3) = f(3, y)/fX(3) = f(3, y)/0.60 =
    34/60 if y = 2
    5/60 if y = 5
    21/60 if y = 10

So the expectation conditional on X = 3 is

E[Y | X = 3] = ∑_y y f(y|3) = 2(34/60) + 5(5/60) + 10(21/60) = 5.05.


Old Cts Example:

f(x, y) = (21/4) x^2 y, if x^2 ≤ y ≤ 1.

Recall that f(y|x) = 2y/(1 − x^4) if x^2 ≤ y ≤ 1.

Thus,

E[Y | x] = ∫_R y f(y|x) dy = (2/(1 − x^4)) ∫_{x^2}^1 y^2 dy = (2/3) · (1 − x^6)/(1 − x^4).

Thus, e.g., E[Y | X = 0.5] = (2/3) · (63/64)/(15/16) = 0.70.


Theorem (double expectations): E[E(Y |X)] = E[Y ].

Remarks: Yikes, what the heck is this!?

The expected value (averaged over all X's) of the conditional expected value (of Y | X) is the plain old expected value (of Y).

Think of the outside exp value as the exp value of h(X) = E(Y|X). Then the Law of the Unconscious Statistician miraculously gives us E[Y].

Believe it or not, sometimes it's easier to calculate E[Y] indirectly by using our double expectation trick.


Proof (cts case): By the Unconscious Statistician,

E[E(Y|X)] = ∫_ℝ E(Y|x) f_X(x) dx
          = ∫_ℝ (∫_ℝ y f(y|x) dy) f_X(x) dx
          = ∫_ℝ ∫_ℝ y f(y|x) f_X(x) dx dy
          = ∫_ℝ y ∫_ℝ f(x, y) dx dy
          = ∫_ℝ y f_Y(y) dy = E[Y].


Old Example: Suppose f(x, y) = (21/4) x²y, if x² ≤ y ≤ 1. Find E[Y] two ways.

By previous examples, we know that

f_X(x) = (21/8) x²(1 − x⁴), if −1 ≤ x ≤ 1,

f_Y(y) = (7/2) y^{5/2}, if 0 ≤ y ≤ 1, and

E[Y|x] = (2/3) · (1 − x⁶)/(1 − x⁴).


Solution #1 (old, boring way):

E[Y] = ∫_ℝ y f_Y(y) dy = ∫_0^1 (7/2) y^{7/2} dy = 7/9.

Solution #2 (new, exciting way):

E[Y] = E[E(Y|X)] = ∫_ℝ E(Y|x) f_X(x) dx
     = ∫_{−1}^{1} [(2/3) · (1 − x⁶)/(1 − x⁴)] · [(21/8) x²(1 − x⁴)] dx = 7/9.

Notice that both answers are the same (good)!
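Both routes are easy to verify symbolically; here's a small sympy sketch (assuming sympy is available, and using the densities quoted above):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Way 1: directly against the marginal f_Y(y) = (7/2) y^(5/2) on [0, 1].
direct = sp.integrate(y * sp.Rational(7, 2) * y**sp.Rational(5, 2), (y, 0, 1))

# Way 2: double expectation, integrating E[Y|x] against f_X(x) on [-1, 1].
EYx = sp.Rational(2, 3) * (1 - x**6) / (1 - x**4)
f_X = sp.Rational(21, 8) * x**2 * (1 - x**4)
double = sp.integrate(EYx * f_X, (x, -1, 1))

print(direct, double)   # 7/9 7/9
```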


Example: An alternative way to calculate the mean of the Geom(p).

Let Y ∼ Geom(p); e.g., Y could be the number of coin flips until H appears, where P(H) = p. Further, let

X = 1 if the first flip is H, and X = 0 otherwise.

Based on the result X of the first flip, we have

E[Y] = E[E(Y|X)] = ∑_x E(Y|x) f_X(x)
     = E(Y|X = 0) P(X = 0) + E(Y|X = 1) P(X = 1)
     = (1 + E[Y])(1 − p) + 1(p).

(Why? If the first flip is T, one flip has been used up and the process starts over from scratch, so E(Y|X = 0) = 1 + E[Y]; if it's H, then Y = 1.)

Solving, we get E[Y] = 1/p (which is indeed the correct answer!)
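And a quick Monte Carlo sanity check of E[Y] = 1/p (a sketch using only the Python standard library; p = 0.3 is an arbitrary choice):

```python
import random

def flips_until_heads(p, rng):
    """Flip until the first H (inclusive) and return the number of flips."""
    n = 1
    while rng.random() >= p:   # tails, so flip again
        n += 1
    return n

rng = random.Random(42)
p, reps = 0.3, 100_000
avg = sum(flips_until_heads(p, rng) for _ in range(reps)) / reps
print(avg, 1 / p)              # sample mean ≈ 1/p ≈ 3.33
```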


Theorem (expectation of the sum of a random number of RV's): Suppose that X_1, X_2, . . . are independent RV's, all with the same mean. Also suppose that N is a nonnegative, integer-valued RV that's independent of the X_i's. Then

E(∑_{i=1}^N X_i) = E[N] E[X_1].

Remark: You have to be very careful here. In particular, note that E(∑_{i=1}^N X_i) ≠ N E[X_1], since the LHS is a number and the RHS is random.


Proof: By double expectation and the fact that N is indep of the X_i's,

E(∑_{i=1}^N X_i) = E[E(∑_{i=1}^N X_i | N)]
  = ∑_{n=1}^∞ E(∑_{i=1}^N X_i | N = n) P(N = n)
  = ∑_{n=1}^∞ E(∑_{i=1}^n X_i | N = n) P(N = n)
  = ∑_{n=1}^∞ E(∑_{i=1}^n X_i) P(N = n)   (N indep of the X_i's)
  = ∑_{n=1}^∞ n E[X_1] P(N = n)
  = E[X_1] ∑_{n=1}^∞ n P(N = n) = E[N] E[X_1]. □


Example: Suppose the number of times we roll a die is N ∼ Pois(10). If X_i denotes the value of the ith toss, then the expected total of all of the rolls is

E(∑_{i=1}^N X_i) = E[N] E[X_1] = 10(3.5) = 35. □

Theorem: Under the same conditions as before,

Var(∑_{i=1}^N X_i) = E[N] Var(X_1) + (E[X_1])² Var(N).

Proof: See, for instance, Ross. □
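Both formulas are easy to check by simulation on the die-rolling example above; a numpy sketch (an illustration, assuming numpy is available):

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 200_000

N = rng.poisson(10, size=reps)                      # number of rolls each rep
totals = np.array([rng.integers(1, 7, size=n).sum() for n in N])

var_die = 35 / 12                                   # variance of a single roll
print(totals.mean(), 10 * 3.5)                      # ≈ 35
print(totals.var(), 10 * var_die + 3.5**2 * 10)     # ≈ 151.67
```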


Computing Probabilities by Conditioning

Let A be some event, and define the RV Y as: Y = 1 if A occurs, and Y = 0 otherwise. Then

E[Y] = ∑_y y f_Y(y) = P(Y = 1) = P(A).

Similarly, for any RV X, we have

E[Y|X = x] = ∑_y y f_Y(y|x) = P(Y = 1|X = x) = P(A|X = x).


Further, since E[Y] = E[E(Y|X)], we have

P(A) = E[Y] = E[E(Y|X)] = ∫_ℝ E[Y|x] dF_X(x) = ∫_ℝ P(A|X = x) dF_X(x).


Example/Theorem: If X and Y are independent continuous RV's, then

P(Y < X) = ∫_ℝ F_Y(x) f_X(x) dx,

where F_Y(·) is the c.d.f. of Y and f_X(·) is the p.d.f. of X.

Proof: (Actually, there are many proofs.) Let the event A = {Y < X}. Then

P(Y < X) = ∫_ℝ P(Y < X|X = x) f_X(x) dx
         = ∫_ℝ P(Y < x|X = x) f_X(x) dx
         = ∫_ℝ P(Y < x) f_X(x) dx   (since X, Y are indep)
         = ∫_ℝ F_Y(x) f_X(x) dx. □


Example: Suppose X ∼ Exp(µ) and Y ∼ Exp(λ) are independent RV's. Then

P(Y < X) = ∫_ℝ F_Y(x) f_X(x) dx = ∫_0^∞ (1 − e^{−λx}) µe^{−µx} dx = λ/(λ + µ). □
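A simulation check (numpy sketch; the rates µ = 2 and λ = 3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, lam, n = 2.0, 3.0, 1_000_000     # X ~ Exp(mu), Y ~ Exp(lam); rate parameters

X = rng.exponential(1 / mu, n)       # numpy's scale parameter is the mean, 1/rate
Y = rng.exponential(1 / lam, n)

print((Y < X).mean(), lam / (lam + mu))   # both ≈ 0.6
```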


Example/Theorem: If X and Y are independent continuous RV's, then

P(X + Y < a) = ∫_ℝ F_Y(a − x) f_X(x) dx,

where F_Y(·) is the c.d.f. of Y and f_X(·) is the p.d.f. of X. The distribution of X + Y is called a convolution.

Proof:

P(X + Y < a) = ∫_ℝ P(X + Y < a|X = x) f_X(x) dx
             = ∫_ℝ P(Y < a − x|X = x) f_X(x) dx
             = ∫_ℝ P(Y < a − x) f_X(x) dx   (since X, Y are indep)
             = ∫_ℝ F_Y(a − x) f_X(x) dx. □


Example: Suppose X, Y are i.i.d. Exp(λ). Note that

F_Y(a − x) = 1 − e^{−λ(a−x)} if a − x ≥ 0 and x ≥ 0 (i.e., 0 ≤ x ≤ a), and 0 otherwise.

Then

P(X + Y < a) = ∫_ℝ F_Y(a − x) f_X(x) dx
             = ∫_0^a (1 − e^{−λ(a−x)}) λe^{−λx} dx
             = 1 − e^{−λa} − λa e^{−λa}, if a ≥ 0.

Differentiating,

(d/da) P(X + Y < a) = λ²a e^{−λa}, a ≥ 0.

This implies that X + Y ∼ Gamma(2, λ). □
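Simulation agrees (a numpy sketch; λ = 1.5 and a = 2 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, a, n = 1.5, 2.0, 1_000_000

S = rng.exponential(1 / lam, n) + rng.exponential(1 / lam, n)   # X + Y

print((S < a).mean())                                       # empirical cdf at a
print(1 - np.exp(-lam * a) - lam * a * np.exp(-lam * a))    # closed form, ≈ 0.80
```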


And Now, A Word From Our Sponsor...

Congratulations! You are now done with the most difficult module of the course!

Things will get easier from here (I hope)!

Covariance and Correlation

These are measures used to define the degree of association between X and Y if they don't happen to be indep.

Definition: The covariance between X and Y is

Cov(X, Y) ≡ σ_XY ≡ E[(X − E[X])(Y − E[Y])].

Remark: Cov(X, X) = E[(X − E[X])²] = Var(X).

If X and Y have positive covariance, then X and Y move "in the same direction." Think height and weight.

If X and Y have negative covariance, then X and Y move "in opposite directions." Think snowfall and temperature.


Theorem (easier way to calculate Cov): Cov(X, Y) = E[XY] − E[X]E[Y].

Proof:

Cov(X, Y) = E[(X − E[X])(Y − E[Y])]
          = E[XY − X E[Y] − Y E[X] + E[X]E[Y]]
          = E[XY] − E[X]E[Y] − E[Y]E[X] + E[X]E[Y]
          = E[XY] − E[X]E[Y].


Theorem: X and Y indep implies Cov(X, Y) = 0.

Proof:

Cov(X, Y) = E[XY] − E[X]E[Y] = E[X]E[Y] − E[X]E[Y] = 0.   (X, Y indep)


Danger Will Robinson: Cov(X, Y) = 0 does not imply X and Y are indep!!

Example: Suppose X ∼ U(−1, 1) and Y = X² (so X and Y are clearly dependent). But

E[X] = ∫_{−1}^{1} x · (1/2) dx = 0 and

E[XY] = E[X³] = ∫_{−1}^{1} x³ · (1/2) dx = 0,

so Cov(X, Y) = E[XY] − E[X]E[Y] = 0.
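A numpy sketch makes the point vividly: the sample covariance is ≈ 0, yet knowing X obviously pins down Y (the conditional probabilities below are one way to expose the dependence):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 1_000_000)
Y = X**2                                    # a deterministic function of X

print(np.cov(X, Y)[0, 1])                   # ≈ 0: covariance misses the dependence
print((Y < 0.25).mean())                    # ≈ 0.5 unconditionally...
print((Y[np.abs(X) < 0.5] < 0.25).mean())   # ...but = 1.0 given |X| < 0.5
```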


Definition: The correlation between X and Y is

ρ = Corr(X, Y) ≡ Cov(X, Y)/√(Var(X)Var(Y)) = σ_XY/(σ_X σ_Y).

Remark: Cov has "square" units; corr is unitless.

Corollary: X, Y indep implies ρ = 0.

Theorem: It can be shown that −1 ≤ ρ ≤ 1.

ρ ≈ 1 is "high" corr, ρ ≈ 0 is "low" corr, and ρ ≈ −1 is "high" negative corr.

Example: Height is highly correlated with weight. Temperature on Mars has low corr with IBM stock price.


Anti-UGA Example: Suppose X is the avg GPA of a UGA student, and Y is his IQ. Here's the joint pmf.

f(x, y)    X = 2    X = 3    X = 4    f_Y(y)
Y = 40     0.00     0.20     0.10     0.3
Y = 50     0.15     0.10     0.05     0.3
Y = 60     0.30     0.00     0.10     0.4
f_X(x)     0.45     0.30     0.25     1


E[X] = ∑_x x f_X(x) = 2.8

E[X²] = ∑_x x² f_X(x) = 8.5

Var(X) = E[X²] − (E[X])² = 0.66

Similarly, E[Y] = 51, E[Y²] = 2670, and Var(Y) = 69.

E[XY] = ∑_x ∑_y x y f(x, y) = 2(40)(.0) + · · · + 4(60)(.1) = 140

Cov(X, Y) = E[XY] − E[X]E[Y] = −2.8

ρ = Cov(X, Y)/√(Var(X)Var(Y)) = −0.415.
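The arithmetic is easy to automate; a numpy sketch working straight from the table above:

```python
import numpy as np

xs = np.array([2, 3, 4])
ys = np.array([40, 50, 60])
f = np.array([[0.00, 0.20, 0.10],      # rows: y = 40, 50, 60; cols: x = 2, 3, 4
              [0.15, 0.10, 0.05],
              [0.30, 0.00, 0.10]])

f_X, f_Y = f.sum(axis=0), f.sum(axis=1)
EX, EY = xs @ f_X, ys @ f_Y
VarX = xs**2 @ f_X - EX**2
VarY = ys**2 @ f_Y - EY**2
EXY = ys @ f @ xs                      # sum over x and y of x*y*f(x, y)
rho = (EXY - EX * EY) / np.sqrt(VarX * VarY)
print(EX, VarX, EY, VarY, EXY, rho)    # 2.8 0.66 51 69 140 -0.415
```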


Cts Example: Suppose f(x, y) = 10x²y, 0 ≤ y ≤ x ≤ 1. Then

f_X(x) = ∫_0^x 10x²y dy = 5x⁴, 0 ≤ x ≤ 1

E[X] = ∫_0^1 5x⁵ dx = 5/6

E[X²] = ∫_0^1 5x⁶ dx = 5/7

Var(X) = E[X²] − (E[X])² = 0.01984


Similarly,

f_Y(y) = ∫_y^1 10x²y dx = (10/3) y(1 − y³), 0 ≤ y ≤ 1

E[Y] = 5/9, Var(Y) = 0.04850

E[XY] = ∫_0^1 ∫_0^x 10x³y² dy dx = 10/21

Cov(X, Y) = E[XY] − E[X]E[Y] = 0.01323

ρ = Cov(X, Y)/√(Var(X)Var(Y)) = 0.4265
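All of these integrals can be checked symbolically; a sympy sketch (an illustration, assuming sympy is available):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = 10 * x**2 * y                       # joint pdf on 0 <= y <= x <= 1

def E(g):                               # E[g(X, Y)] over the triangular support
    return sp.integrate(g * f, (y, 0, x), (x, 0, 1))

EX, EY, EXY = E(x), E(y), E(x * y)
VarX, VarY = E(x**2) - EX**2, E(y**2) - EY**2
rho = (EXY - EX * EY) / sp.sqrt(VarX * VarY)
print(EX, EY, EXY, float(rho))          # 5/6 5/9 10/21 0.4265...
```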


Theorems Involving Covariance

Theorem: Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y), whether or not X and Y are indep.

Remark: If X, Y are indep, the Cov term goes away.

Proof: By the work we did on a previous proof,

Var(X + Y) = E[X²] − (E[X])² + E[Y²] − (E[Y])² + 2(E[XY] − E[X]E[Y])
           = Var(X) + Var(Y) + 2Cov(X, Y).


Theorem:

Var(∑_{i=1}^n X_i) = ∑_{i=1}^n Var(X_i) + 2 ∑∑_{i<j} Cov(X_i, X_j).

Proof: Induction.

Remark: If all X_i's are indep, then

Var(∑_{i=1}^n X_i) = ∑_{i=1}^n Var(X_i).


Theorem: Cov(aX, bY) = ab Cov(X, Y).

Proof:

Cov(aX, bY) = E[aX · bY] − E[aX]E[bY] = ab E[XY] − ab E[X]E[Y] = ab Cov(X, Y).

Theorem:

Var(∑_{i=1}^n a_i X_i) = ∑_{i=1}^n a_i² Var(X_i) + 2 ∑∑_{i<j} a_i a_j Cov(X_i, X_j).

Proof: Put the above two results together.


Example: Var(X − Y) = Var(X) + Var(Y) − 2Cov(X, Y).

Example:

Var(X − 2Y + 3Z) = Var(X) + 4Var(Y) + 9Var(Z) − 4Cov(X, Y) + 6Cov(X, Z) − 12Cov(Y, Z).
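A numpy sketch checking the X − 2Y + 3Z expansion on simulated data (the covariance matrix below is an arbitrary positive-definite choice, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
Sigma = np.array([[2.0, 0.5, 0.3],      # arbitrary covariances for (X, Y, Z)
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
X, Y, Z = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000).T

lhs = np.var(X - 2*Y + 3*Z)
rhs = (np.var(X) + 4*np.var(Y) + 9*np.var(Z)
       - 4*np.cov(X, Y)[0, 1] + 6*np.cov(X, Z)[0, 1] - 12*np.cov(Y, Z)[0, 1])
print(lhs, rhs)                          # both ≈ 16.9
```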

Moment Generating Functions

Introduction

Recall that E[X^k] is the kth moment of X.

Definition: M_X(t) ≡ E[e^{tX}] is the moment generating function (mgf) of the RV X.

Remark: M_X(t) is a function of t, not of X!

Example: X ∼ Bern(p), i.e., X = 1 w.p. p and X = 0 w.p. q. Then

M_X(t) = E[e^{tX}] = ∑_x e^{tx} f(x) = e^{t·1} p + e^{t·0} q = pe^t + q.


Example: X ∼ Exp(λ), i.e.,

f(x) = λe^{−λx} if x > 0, and 0 otherwise.

Then

M_X(t) = E[e^{tX}] = ∫_ℝ e^{tx} f(x) dx = λ ∫_0^∞ e^{(t−λ)x} dx = λ/(λ − t), if λ > t.


Big Theorem: Under certain technical conditions,

E[X^k] = (d^k/dt^k) M_X(t) |_{t=0}, k = 1, 2, . . . .

Thus, you can generate the moments of X from the mgf. (Sometimes, it's easier to get moments this way than directly.)


"Proof:"

M_X(t) = E[e^{tX}] = E[∑_{k=0}^∞ (tX)^k / k!] = ∑_{k=0}^∞ E[(tX)^k / k!]
       = 1 + t E[X] + t² E[X²]/2 + · · ·

This implies

(d/dt) M_X(t) = E[X] + t E[X²] + · · ·,

and so

(d/dt) M_X(t) |_{t=0} = E[X].

Same deal for higher-order moments.


Example: X ∼ Exp(λ). Then M_X(t) = λ/(λ − t) for λ > t. So

E[X] = (d/dt) M_X(t) |_{t=0} = λ/(λ − t)² |_{t=0} = 1/λ.

Further,

E[X²] = (d²/dt²) M_X(t) |_{t=0} = 2λ/(λ − t)³ |_{t=0} = 2/λ².

Thus,

Var(X) = E[X²] − (E[X])² = 2/λ² − 1/λ² = 1/λ².
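The differentiation is mechanical, so it's a natural job for sympy (a sketch, assuming sympy is available):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
M = lam / (lam - t)                     # mgf of Exp(lambda), valid for t < lambda

EX = sp.diff(M, t).subs(t, 0)           # first moment: 1/lambda
EX2 = sp.diff(M, t, 2).subs(t, 0)       # second moment: 2/lambda**2
print(EX, EX2, sp.simplify(EX2 - EX**2))   # Var(X) = lambda**(-2)
```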


Other Applications

You can do lots of nice things with mgf's...

Find the mgf of a linear function of X.

Find the mgf of the sum of indep RV’s.

Identify distributions.

Derive cool properties of distributions.


Theorem: Suppose X has mgf M_X(t) and let Y = aX + b. Then M_Y(t) = e^{tb} M_X(at).

Proof:

M_Y(t) = E[e^{tY}] = E[e^{t(aX+b)}] = e^{tb} E[e^{(at)X}] = e^{tb} M_X(at).

Example: Let X ∼ Exp(λ) and Y = 3X + 2. Then

M_Y(t) = e^{2t} M_X(3t) = e^{2t} · λ/(λ − 3t), if λ > 3t.


Theorem: Suppose X_1, . . . , X_n are indep. Let Y = ∑_{i=1}^n X_i. Then

M_Y(t) = ∏_{i=1}^n M_{X_i}(t).

Proof:

M_Y(t) = E[e^{tY}] = E[e^{t∑X_i}] = E[∏_{i=1}^n e^{tX_i}]
       = ∏_{i=1}^n E[e^{tX_i}]   (X_i's indep)
       = ∏_{i=1}^n M_{X_i}(t).


Corollary: If X_1, . . . , X_n are i.i.d. and Y = ∑_{i=1}^n X_i, then

M_Y(t) = [M_{X_1}(t)]^n.

Example: Suppose X_1, . . . , X_n are i.i.d. Bern(p). Then by a previous example,

M_Y(t) = [M_{X_1}(t)]^n = (pe^t + q)^n.

So what use is a result like this?


Theorem: If X and Y have the same mgf, then they have the same distribution (at least in this course)!

Proof: Too hard for this course.

Example: The sum Y of n i.i.d. Bern(p) RV's has the same distribution as a Binomial(n, p) RV.

By the previous example, M_Y(t) = (pe^t + q)^n. So all we need to show is that the mgf of the Bin(n, p) matches this.


Meanwhile, let Z ∼ Bin(n, p).

M_Z(t) = E[e^{tZ}] = ∑_z e^{tz} P(Z = z)
       = ∑_{z=0}^n e^{tz} (n choose z) p^z q^{n−z}
       = ∑_{z=0}^n (n choose z) (pe^t)^z q^{n−z}
       = (pe^t + q)^n   (by the binomial theorem),

and this matches the mgf of Y from the last pg.
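A quick empirical check of the match (numpy sketch; n = 15, p = 0.75, and t = 0.3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(11)
n, p, t = 15, 0.75, 0.3
q = 1 - p

Y = rng.binomial(1, p, size=(1_000_000, n)).sum(axis=1)   # sum of n Bern(p)
print(np.exp(t * Y).mean())           # empirical E[e^{tY}]
print((p * np.exp(t) + q)**n)         # the Bin(n, p) mgf at t; both ≈ 33
```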


Example: You can identify a distribution by its mgf.

M_X(t) = ((3/4)e^t + 1/4)^{15}

implies that X ∼ Bin(15, 0.75).

Example:

M_Y(t) = e^{−2t} ((3/4)e^{3t} + 1/4)^{15}

implies that Y has the same distribution as 3X − 2, where X ∼ Bin(15, 0.75).


Theorem (Additive property of Binomials): If X_1, . . . , X_k are indep, with X_i ∼ Bin(n_i, p) (where p is the same for all X_i's), then

Y ≡ ∑_{i=1}^k X_i ∼ Bin(∑_{i=1}^k n_i, p).

Proof:

M_Y(t) = ∏_{i=1}^k M_{X_i}(t)          (mgf of indep sum)
       = ∏_{i=1}^k (pe^t + q)^{n_i}     (Bin(n_i, p) mgf)
       = (pe^t + q)^{∑_{i=1}^k n_i}.

This is the mgf of the Bin(∑_{i=1}^k n_i, p), so we're done. □
