8/9/2019 Problems in Communication Theory With Solutions
www.MathGeek.com
Communication Theory: Problems and Solutions
Eric B. Hall
Contents

1 Detection Problems
  1.1 Bayesian and Minimax Tests
  1.2 Neyman-Pearson Tests
  1.3 Locally Optimal Tests
2 Estimation Problems
  2.1 General Problems
  2.2 Unbiased Estimators
  2.3 Maximum Likelihood Estimation
  2.4 Minimum Mean Square Estimation
  2.5 Hilbert Spaces
3 Detection Solutions
  3.1 Bayesian and Minimax Tests
  3.2 Neyman-Pearson Tests
  3.3 Locally Optimal Tests
4 Estimation Solutions
  4.1 General Problems
  4.2 Unbiased Estimators
  4.3 Maximum Likelihood Estimation
  4.4 Minimum Mean Square Estimation
  4.5 Hilbert Spaces
1. Detection Problems

1.1. Bayesian and Minimax Tests
Problem 1.1. As an example in which randomization reduces the maximum risk, suppose that a coin is known to be either standard (HT) or to have heads on both sides (HH). The nature of the coin is to be decided on the basis of a single toss, the loss being one for an incorrect decision and zero for a correct decision. Let the decision be HT when T is observed, and let the decision be made at random if H is observed, with probability p for HT and 1 - p for HH. Show that the maximum risk is minimized when p = 1/3.

Problem 1.2. Consider a zero mean Gaussian random variable X with positive variance σ². This random variable is passed through one of two equally likely nonlinearities to obtain an output Y. The null hypothesis H0 is that Y = X² and the alternative hypothesis H1 is that Y = exp(X). Design a test that on the basis of one observation will choose one of the above hypotheses while minimizing the probability of error.

Problem 1.3. Consider two independent, zero mean Gaussian random variables X1 and X2 that under hypothesis H0 each have unit variance and under H1 each have a variance equal to 2. Assign a unit cost to an incorrect decision and zero cost to a correct decision. Find a minimax test for this problem based upon the two observations X1 and X2.

Problem 1.4. Let N be a random variable with a density function given by
f(x) = (1/2) exp(-|x|)

for all x ∈ R, and consider a hypothesis testing problem in which H0 states that Y = N and H1 states that Y = N + s. Let s = 2, and assume that the prior probabilities for H0 and H1 are given by π0 = 0.1 and π1 = 0.9, respectively. Design a hypothesis test that minimizes the probability of error. For what values of the observation Y does your test announce that the signal s is present?
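A quick numerical sketch (mine, not part of the original text) of the minimum-probability-of-error rule in Problem 1.4: decide H1 when π1 f(y - 2) ≥ π0 f(y). Since the log-likelihood ratio |y| - |y - 2| is bounded below by -2 while log(π0/π1) = log(1/9) ≈ -2.197, the test announces the signal for every observation, which the scan below confirms.

```python
import math

def f(x):
    """Laplace density f(x) = (1/2) exp(-|x|)."""
    return 0.5 * math.exp(-abs(x))

def bayes_decision(y, s=2.0, pi0=0.1, pi1=0.9):
    """Minimum-probability-of-error rule: pick the larger posterior weight."""
    return "H1" if pi1 * f(y - s) >= pi0 * f(y) else "H0"

# Scan a wide range of observations; the likelihood ratio exp(|y| - |y-2|)
# never drops below exp(-2) > 1/9, so H1 is always chosen.
decisions = {bayes_decision(y / 10.0) for y in range(-100, 101)}
print(decisions)  # {'H1'}
```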
Problem 1.5. (A) Let Θ be a random variable with probability density function f(θ), and assume that a probability density function for our observation X is given by p_θ(x) when Θ = θ. Show that δ is a Bayesian decision rule if for each x the decision δ(x) is chosen to minimize ∫_R L(θ, δ(x)) g(θ|x) dθ, where

g(θ|x) = f(θ) p_θ(x) / ∫ f(φ) p_φ(x) dφ

is a conditional probability density function of Θ given X = x.

(B) Consider a two-decision problem in which ω0 and ω1 are, respectively, the sets of θ-values for which d0 and d1 are the correct decisions. Assume that the loss is 0 when the correct decision is made, and otherwise is given by L(θ, d0) = α if θ ∈ ω1 and L(θ, d1) = β if θ ∈ ω0. Show that a Bayes solution consists in choosing decision d0 if

α P(θ ∈ ω1 | X = x) < β P(θ ∈ ω0 | X = x)

and decision d1 if the reverse inequality holds, where the choice of decision is immaterial in case of equality.
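As an illustration of the rule in Problem 1.5(B) (my own toy example, not from the book), take Θ ∈ {-1, +1} with equal prior mass, X ~ N(Θ, 1), ω0 = {-1}, ω1 = {+1}, and α = β = 1. The Bayes rule then reduces to picking the decision with the larger posterior weight.

```python
import math

def normal_pdf(x, mean, var=1.0):
    """Density of N(mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_two_decision(x, alpha=1.0, beta=1.0, prior0=0.5, prior1=0.5):
    """Choose d0 when alpha * P(theta in w1 | x) < beta * P(theta in w0 | x)."""
    # Posterior weights for theta = -1 (i.e. w0) and theta = +1 (i.e. w1).
    w0 = prior0 * normal_pdf(x, -1.0)
    w1 = prior1 * normal_pdf(x, +1.0)
    total = w0 + w1
    p0, p1 = w0 / total, w1 / total
    return "d0" if alpha * p1 < beta * p0 else "d1"

print(bayes_two_decision(-0.5))  # d0: observation favors theta = -1
print(bayes_two_decision(2.0))   # d1: observation favors theta = +1
```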
1.2. Neyman-Pearson Tests
Problem 1.6. Consider a Gaussian random variable X with mean θ and unit variance. Let α0 = 0.05 and design a level-α0 Neyman-Pearson test for testing H0: θ = 0 against H1: θ = 1000. If we observe x = 1.8, then will we accept or reject the hypothesis H0? Does this make sense for an "optimal" test?

Problem 1.7. Consider a random variable Y and two probability density functions f and g. Let H0 denote the hypothesis that Y is distributed according to the density function f and let H1 denote the hypothesis that Y is distributed according to the density function g. A hypothesis test is designed for this situation such that the false alarm probability α of the test is equal to 0.01 and the power of the test β is equal to 0.0099. Is it possible that this test is a Neyman-Pearson test for the level α0 = 0.01? Why or why not?

Problem 1.8. Consider a detection problem in which we desire to test for the presence of a decaying exponential signal in zero mean, mutually independent Gaussian noise. We will base our decision on k mutually independent samples of the received signal. Hence, we model the situation as follows, where the t_i's denote our sampling times:

H0: X_i = N_i for i = 1, ..., k
H1: X_i = exp(-t_i) + N_i for i = 1, ..., k.
Assume that the N_i's are identically distributed and mutually independent, with N_1 having a zero mean Gaussian distribution with a positive variance denoted by σ². Unfortunately, in taking our samples we encounter a problem in synchronizing the clock at the receiver. In particular, assume that the sampling times t_1, ..., t_k are modeled as t_i = iΔ - e, where Δ > 0 and e is a fixed yet unknown parameter that lies in the interval [-Δ/10, Δ/10]. Design and describe a level-α0 Neyman-Pearson detector for this situation. What is the threshold in terms of α0, Δ, σ, k, and e? Does the threshold depend on e? Find an expression for β in terms of α0, Δ, σ, k, and e.

Problem 1.9. Consider the following decision problem: H0: Y_i = N_i for i = 1, 2, ..., k versus H1: Y_i = s + N_i for i = 1, 2, ..., k, where s is a fixed positive constant and where the N_i's are mutually independent zero mean Gaussian random variables each with variance 2.
(A) Let k = 9. What is the smallest positive signal s for which a Neyman-Pearson test with false alarm probability 0.01 has a detection probability β ≥ 0.97?
(B) Let s = 1. What is the smallest number of observations k for which a Neyman-Pearson test with false alarm probability 0.05 has a detection probability β > 0.99?

Problem 1.10. Consider a collection {X_1, ..., X_n} of mutually independent, identically distributed random variables each with a N(θ, 1) distribution for some real number θ. Let H0 denote the hypothesis that θ = 0 and let H1 denote the hypothesis that θ = 1/2. Let α0 = 0.005.
(A) Design a Neyman-Pearson test for this problem as a function of n. What is the power of the test if n = 9?
(B) Now, consider a different test. Flip a fair coin. If the outcome is heads then perform the test in part (A) with n = 2. If, on the other hand, the outcome is tails then perform the test in part (A) with n = 16. What is the power of this test? What can you conclude about tests that allow random sample sizes?

Problem 1.11. Consider a coin that, when flipped, comes up heads with probability p and tails with probability 1 - p. We flip the coin twice and then must decide whether p = (which we will call hypothesis H0) or p = (which we will call hypothesis H1). Determine a procedure that is most powerful for testing H0 against H1 subject to the constraint that α not exceed . What is the power of your test?

Problem 1.12. Consider the problem of detecting a continuous signal s(t) = exp(-t) for t ≥ 0 on the basis of two samples. Assume that the observations are either of the form
Y_i = N_i (which we will call hypothesis H0) or Y_i = s(h·i) + N_i (which we will call hypothesis H1), where h is a positive constant and where i = 1, 2. Assume that N_1 and N_2 are zero mean, jointly Gaussian random variables such that E[N_1²] = E[N_2²] = 1 and E[N_1 N_2] = . Let α0 = 0.1. Design a Neyman-Pearson test for this situation. What is the power of the test if h = 0.1?

Problem 1.13. Consider a problem in which we wish to test for the presence of a known positive signal s(t) = exp(-t) defined for t ≥ 0 on the basis of three samples corrupted by additive Gaussian noise. In particular, let s_k = s((k - 1)h) for k = 1, 2, 3 and h > 0, let Z_1, Z_2, Z_3 be mutually independent, identically distributed Gaussian random variables each with mean zero and variance 1/10, and let X_1, X_2, X_3 denote our observations. Further, for k = 1, 2, 3, let H0 denote the hypothesis that X_k is equal to Z_k and let H1 denote the hypothesis that X_k is equal to Z_k + s_k. Design a Neyman-Pearson test with a level of significance given by α0 = 0.05. What is the maximum sampling interval h such that the power of the test is not less than 0.99?

Problem 1.14. Consider a binary channel such that if we transmit a 'zero' we receive a 'zero' at the other end with probability 1 - λ0 and we receive a 'one' with probability λ0. Further, if we transmit a 'one' we receive a 'one' with probability 1 - λ1 and we receive a 'zero' with probability λ1. (Assume that 0 < λ0 < 1, 0 < λ1 < 1, and λ0 + λ1 < 1.) Consider the transmission and subsequent receipt of a single binary digit. Let H0 denote the hypothesis that a 'zero' was transmitted and let H1 denote the hypothesis that a 'one' was transmitted. Design a Neyman-Pearson test for this situation and find the power as a function of the size α0 for 0 ≤ α0 ≤ 1. Sketch an ROC (Receiver Operating Characteristic) curve for the special case when λ0 = λ1 = and for the special case when λ0 = λ1 = 1/8. (An ROC curve is a plot of the power of a test as a function of the size of the test.)

Problem 1.15. A test to determine whether a communications satellite is in working order might be run as follows: A very strong signal is sent from Earth. The satellite responds by sending a signal v > 0 for n seconds if it is working, or does not respond if it is not working. After an appropriate delay, we take n samples (one each second) X_1, ..., X_n, and we assume that our observations are random samples from a Gaussian distribution with positive known variance σ² and mean μ. Hypothesis H0 is that μ = 0 and hypothesis H1 is that μ = v. Two systems are available for testing the satellite. System A has a signal to noise ratio of v/σ = 2 and System B is such that v/σ = 1. System A costs $1,000,000 and System B costs $250,000. One second of transmission with either system costs $1000. You need to test your satellite 100 times each year and decide to do so each time with a Neyman-Pearson test designed for a level of significance of 0.05. Assume that each time you test the satellite, you want the number of seconds of response to be sufficient to ensure that the power of your test exceeds 0.95. What is the cost to purchase and operate each system for one year?
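A numeric sketch of Problem 1.15 (my own, using the standard Gaussian-mean NP test; not the book's worked solution): the sum of the samples is Gaussian, the size-0.05 threshold involves z = Φ⁻¹(0.95), and power above 0.95 forces √n · (v/σ) > 2z, so the smallest adequate n, and hence the yearly cost, can be computed for each system.

```python
import math
from statistics import NormalDist

def min_seconds(snr, alpha=0.05, power=0.95):
    """Smallest n giving NP power above `power` at size `alpha` when the SNR is v/sigma."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # from the size constraint
    z_b = NormalDist().inv_cdf(power)      # from the power constraint
    # Power condition: sqrt(n)*snr - z_a > z_b, i.e. n > ((z_a + z_b)/snr)**2.
    return math.ceil(((z_a + z_b) / snr) ** 2)

def yearly_cost(purchase, snr, tests_per_year=100, cost_per_second=1000):
    """Purchase price plus transmission cost for a year of testing."""
    return purchase + tests_per_year * min_seconds(snr) * cost_per_second

n_a, n_b = min_seconds(2.0), min_seconds(1.0)
print(n_a, n_b)                    # 3 11
print(yearly_cost(1_000_000, 2.0))  # 1300000
print(yearly_cost(250_000, 1.0))    # 1350000
```

Under these assumptions the cheaper System B ends up slightly more expensive to operate over the year.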
Problem 1.16. Consider a random variable X that under hypothesis H0 has density

p0(x) = (2/3)(x + 1) if 0 < x < 1, and 0 otherwise,

and under hypothesis H1 has density

p1(x) = 1 if 0 < x < 1, and 0 otherwise.

(A) Find a minimax test of H0 versus H1 based upon the single observation X.
(B) Find a Neyman-Pearson test of H0 versus H1 with size α0 based upon the single observation X.
Problem 1.17. Consider a random variable X that under hypothesis H0 has a uniform distribution on (0,1) and that under hypothesis H1 has a uniform distribution on ( , ). Of the following three tests, which one is or which ones are Neyman-Pearson tests of H0 versus H1 based upon the single observation X and with a level of significance equal to

φ1(x) = 1 if x > 7/8, and 0 otherwise
1 if- 11 . 1- if-
H({1}) = 1 - H({2}) = 1/3
Consider the problem of testing H0: {P0} against H1: {P1, P2} on the basis of one sample.
(A) Does there exist a UMP test in this case that has a level of significance equal to 1/6? If yes, then find the test. If no, then explain why not.
(B) Does there exist a UMP test in this case that has a level of significance equal to 1/2? If yes, then find the test. If no, then explain why not.

Problem 1.19. Consider a collection {Y_1, Y_2, ..., Y_k} of random variables such that Y_i = f(t_i) + N_i for each i, where f: R → R, where

and where the N_i's are mutually independent, zero mean Gaussian random variables each with positive variance σ². Hypothesis H0 states that f(t) = cos(t) and hypothesis H1 states that f(t) = sin(t). Find a Neyman-Pearson test of H0 against H1 at size α0 based on {Y_1, Y_2, ..., Y_k}. What is the power of your test? How should one choose the sampling times in order to maximize the power?

Problem 1.20. Consider a random variable X with a probability density function given by
f_{a,b}(x) = (1/b) exp(-(x - a)/b) if x ≥ a, and 0 if x < a,

where a ∈ R and b > 0. (Assume that b is known.) Design a Neyman-Pearson test of size α0 for testing H0: a = a0 against H1: a = a1 based upon X, where a1 < a0. What is the power of your test?
Problem 1.21. Consider the detection problem

H0: X_i = N_i for i = 1, ..., 100
H1: X_i = 2 + N_i for i = 1, ..., 100
where N_1, ..., N_100 are mutually independent, Gaussian random variables each with mean zero and variance 9. Find a value of α0 so that a Neyman-Pearson test of H0 versus H1 with
size α0 is also a minimax test of H0 versus H1 with respect to a loss function that assigns a unit loss to an error and a zero loss otherwise.

Problem 1.22. Consider the following probability density functions:
p0(x) = 1 if 0 < x < 1, and 0 otherwise

p1(x) = 4x if 0 < x < 1/2, 4 - 4x if 1/2 ≤ x < 1, and 0 otherwise

0 if x ≤ 0

f1(x) = exp(-(x - 1)) if x > 1, and 0 if x ≤ 1
Problem 1.25. Consider a random variable X that has a Poisson distribution with parameter λ. That is, assume that

P(X = k) = (λ^k / k!) exp(-λ)

for k = 0, 1, 2, .... Assume that λ = λ0 under H0 and that λ = λ1 under H1, where λ1 > λ0 > 0. For what values of α0 will there exist a nonrandomized Neyman-Pearson test of H0 versus H1? Find a Neyman-Pearson test of H0 versus H1 if λ0 = 1 and α0 = 0.02.
1.3. Locally Optimal Tests
Problem 1.26. Consider the detection problem

H0: Y_i = N_i for i = 1, ..., k
H1: Y_i = s + N_i for i = 1, ..., k

where k is a positive integer, where s is a positive constant signal, and where N_1, ..., N_k are mutually independent random variables each with probability density function

(A) Find the form of a locally optimal test of H0 versus H1. (You do not need to find the threshold.)
(B) Use the result of Part (A) to find the form of a Neyman-Pearson test of H0 versus H1. (Again, you do not need to find the threshold.)

Problem 1.27. Consider the following decision problem: H0: X_i = N_i for i = 1, 2, 3 versus H1: X_i = s + N_i for i = 1, 2, 3, where s is a fixed (unknown) positive constant and where N_1, N_2, and N_3 are mutually independent random variables each with a probability density function given by

f(x) = (1/2) exp(-|x|)
for x ∈ R.
(A) Design a locally optimal test for this problem subject to the constraint that the false alarm probability not exceed 1/16.
(B) What is the smallest value of s for which the detection probability β is not less than 0.49?
(C) Under what circumstances will β exceed ?

Problem 1.28. Consider a random variable N that possesses a probability density function of the form

f_N(x) = 1 / (π(1 + x²)).

Let X be a random variable that under H0 is equal to N and under H1 is equal to N + s, where s is positive. We desire to determine the true hypothesis, H0 or H1, based on the single observation X.
(A) Find the form of a Neyman-Pearson test for this problem. (You do not need to find the threshold.) Is it uniformly most powerful over all positive signals s? Will it always announce that the signal is present whenever the observation is much larger than s?
(B) Find the form of a locally optimal test for this problem. (You do not need to find the threshold.) Will it always announce that the signal is present whenever the observation is much larger than s?
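A quick numerical look (mine, not the book's) at Problem 1.28(A): for Cauchy noise the likelihood ratio f_N(x - s)/f_N(x) = (1 + x²)/(1 + (x - s)²) tends to 1 as x → ∞, so a threshold above 1 eventually rejects the signal for very large observations, unlike the Gaussian case.

```python
def likelihood_ratio(x, s=1.0):
    """Cauchy LR: f_N(x - s) / f_N(x) = (1 + x^2) / (1 + (x - s)^2)."""
    return (1.0 + x * x) / (1.0 + (x - s) ** 2)

# The LR peaks for moderate x and decays back toward 1 for large x.
for x in (1.0, 10.0, 1000.0):
    print(x, likelihood_ratio(x))
```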
2. Estimation Problems
2.1. General Problems
Problem 2.1. Consider a marathon with N participants. Further, assume that these N runners are each wearing an identifying tag displaying a different number between 1 and N. As you drive by you see a runner wearing the number 87. How might you use that information to estimate N?
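One standard answer (a sketch of mine, not the book's solution): with a single tag X drawn uniformly from {1, ..., N}, the maximum likelihood estimate is N̂ = X = 87, while 2X - 1 is unbiased, since E[2X - 1] = 2(N + 1)/2 - 1 = N. The check below verifies the unbiasedness identity exactly by averaging over all equally likely tags.

```python
from fractions import Fraction

def mean_of_estimator(n):
    """E[2X - 1] when X is uniform on {1, ..., n}, in exact arithmetic."""
    return sum(Fraction(2 * x - 1, n) for x in range(1, n + 1))

print(mean_of_estimator(500))  # 500: the estimator 2X - 1 is unbiased for N
print(2 * 87 - 1)              # 173: estimate after observing tag 87
```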
Problem 2.2. Let X be a Gaussian random variable with mean θ and positive variance σ². Find the Fisher information I(θ) that X contains about the parameter θ.

Problem 2.3. Let X be a Gaussian random variable with mean θ and positive variance σ². Find the Fisher information I(σ²) that X contains about the parameter σ².
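A numerical sanity check (mine) for Problem 2.2: the score for the mean is ∂/∂θ log f = (x - θ)/σ², and I(θ) = E[score²] should come out to 1/σ². The quadrature below confirms this for a hypothetical σ² = 4.

```python
import math

def fisher_info_mean(theta=0.0, var=4.0, half_width=60.0, steps=400001):
    """Approximate E[((X - theta)/var)^2] under N(theta, var) by a Riemann sum."""
    sigma = math.sqrt(var)
    dx = 2 * half_width / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = theta - half_width + i * dx
        pdf = math.exp(-(x - theta) ** 2 / (2 * var)) / (sigma * math.sqrt(2 * math.pi))
        score = (x - theta) / var  # derivative of the log-density in theta
        total += score ** 2 * pdf * dx
    return total

print(fisher_info_mean())  # ≈ 0.25 = 1/var
```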
Problem 2.4. Let X_1, X_2, and X_3 be mutually independent random variables with X_1 and X_2 identically distributed. Further, let X_1 take on the values 0 and 2 each with probability 1/2, and let X_3 take on the values 1 and each with probability 1/2. Now, consider the problem of estimating which of these three distributions has the largest mean. A natural method of proceeding is to take a random sample from each distribution and then to select the distribution that produces the largest sample mean. Answer the following questions under the assumption that such a procedure is used.
(A) What is the probability of correctly determining the distribution with the largest mean if we take one sample from each distribution?
(B) What is the probability of correctly determining the distribution with the largest mean if we take one sample from the distributions of X_1 and X_2 and two samples from the distribution governing X_3?
(C) Will an estimate based upon n samples always be at least as good as an estimate based on m samples if n > m?

Problem 2.5. Let X and Y be random variables with probability density functions f_1(x - θ) and f_2(x - θ), respectively, where θ is some fixed yet unknown real number. Assume that f_1 and f_2 are continuous and even.
(A) If f_1(0) > f_2(0), then show that P(|X - θ| ≤ ε) > P(|Y - θ| ≤ ε) for some positive value of ε.
(B) Now, let k > 3 be a fixed integer, let θ be a fixed yet unknown real number, and let X_1 and X_2 be independent random variables each with a density given by f(x - θ), where

f(x) = (k - 1) / (2(1 + |x|)^k).

Also, consider a cost function C_ε that assigns a cost of 1 to errors that are larger in magnitude than some positive constant ε and that assigns a zero cost to smaller errors. (In particular, an estimator θ̂ of θ is "good" if P(|θ̂ - θ| ≤ ε) is large.) Let Y denote the sample mean of X_1 and X_2. Show that there exists some positive ε for which P(|X_1 - θ| ≤ ε) > P(|Y - θ| ≤ ε). That is, show that a single observation may yield a better estimate of the mean than will a sample mean of two observations.
2.2. Unbiased Estimators
Problem 2.6. Consider a random variable X that has a Poisson distribution with parameter λ > 0; that is, assume that

P(X = x) = (λ^x / x!) exp(-λ)
for x = 0, 1, 2, .... Consider the problem of estimating the parameter exp(-3λ) based upon one sample from the distribution for X. Show that T(X) = (-2)^X is an unbiased estimator for exp(-3λ). Is T(X) a reasonable estimator for exp(-3λ)?

Problem 2.7. Consider an unbiased estimator T of a parameter θ. Find a condition that is both necessary and sufficient for T² to be an unbiased estimator of θ².

Problem 2.8. Let N be a fixed positive integer. Toss a coin N times and, for 1 ≤ i ≤ N, let X_i be 1 or 0 according to whether the ith toss is a head or a tail. Let the probability of tossing a head be given by some fixed yet unknown value θ from the interval [0,1]. For what functions g: [0,1] → R do there exist unbiased estimators of g(θ)?

Problem 2.9. Consider a random variable X with a Poisson distribution given by

P(X = k) = (λ^k / k!) exp(-λ)

for each nonnegative integer k and for some fixed positive value of λ. Let T(x) be any nonconstant function of x such that T(X) provides an estimate of 1/λ. Show that T(X) cannot be an unbiased estimator of 1/λ.

Problem 2.10. Consider a parameterized family {f_θ : θ ∈ R} of probability density functions in which f_θ is a Gaussian density function with mean θ and unit variance. Let X_1, ..., X_n denote a collection of identically distributed, mutually independent random variables each having a density function given by f_θ for some fixed yet unknown value of θ. Let T(X_1, ..., X_n) be an unbiased estimator for θ. Find a positive lower bound for VAR_θ(T(X_1, ..., X_n)).

Problem 2.11. Consider a random variable X such that

P(X = k) = (λ^k / k!) exp(-λ)

for k = 0, 1, 2, ... and for some fixed, but unknown, positive constant λ. (That is, let X have a Poisson distribution with parameter λ. Note that E[X] = VAR[X] = λ.) Find the Fisher information that X contains about the parameter θ = exp(-λ). Let

T(x) = 1 if x = 0, and 0 if x ≠ 0.

Is T(X) an unbiased estimator of θ? Is T(X) an efficient estimator of θ? Hint: Note that exp(x) > x + 1 for all nonzero x.
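The estimator in Problem 2.6 can be checked numerically (my sketch): E[(-2)^X] = e^{-λ} Σ_k (-2λ)^k / k! = e^{-λ} e^{-2λ} = e^{-3λ}, even though (-2)^X itself oscillates wildly and is a poor estimate in any practical sense.

```python
import math

def expected_value(lam, kmax=80):
    """E[(-2)^X] for X ~ Poisson(lam), truncating the rapidly converging series."""
    return sum((-2.0) ** k * lam ** k * math.exp(-lam) / math.factorial(k)
               for k in range(kmax))

lam = 1.0
print(expected_value(lam))   # ≈ exp(-3) ≈ 0.0498
print(math.exp(-3 * lam))
```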
2.3. Maximum Likelihood Estimation
Problem 2.12. Consider a random sample X_1, ..., X_n from a distribution with a fixed yet unknown finite mean θ ∈ R. If a maximum likelihood estimate for θ exists, will it always be given by the sample mean (1/n)(X_1 + ... + X_n)? Why or why not?

Problem 2.13. Consider a random variable X for which P(X = 1) = p and P(X = 0) = 1 - p, where p is a fixed yet unknown element from [ , ]. Find a maximum likelihood estimate for p. What is the mean square error for this estimate? Find a constant estimator for p which always has a smaller mean square error than the maximum likelihood estimator that you found.

Problem 2.14. Consider a coin with a probability of heads given by some fixed yet unknown value of θ from the interval (0,1). Does there exist a maximum likelihood estimator of θ based upon a single flip of the coin? If not, why not, and if so, then find one. If you found that a maximum likelihood estimator does not exist, then can you think of a simple remedy? Does your remedy provide a reasonable estimator?

Problem 2.15. Consider a detection problem in which H0 states that X_i = N_i for i = 1, ..., k and H1 states that X_i = θ + N_i for i = 1, ..., k and |θ| > 0, where the N_i's are identically distributed and mutually independent, each with a zero mean Gaussian distribution with a positive variance σ². When (as in this case) a UMP test does not exist, we sometimes use a generalized likelihood ratio test. A generalized likelihood ratio test is simply a threshold test in which the "processor" is given by

Λ_g(x_1, ..., x_k) = f_1(θ̂(x_1, ..., x_k), x_1, ..., x_k) / f_0(x_1, ..., x_k),

where f_0(x_1, ..., x_k) is a joint probability density for the X_i's under hypothesis H0, where f_1(θ, x_1, ..., x_k) is a joint probability density for the X_i's under hypothesis H1 assuming that θ is the true parameter, and where θ̂(x_1, ..., x_k) is a maximum likelihood estimator of θ based upon the X_i's under hypothesis H1.
(A) Find θ̂. Is it unbiased? Is it efficient?
(B) Find a generalized likelihood ratio test for this example. What is the power of your test?
(C) Assume that θ is known to be positive, and find a UMP test. What is the power of this test? How does this power compare to your result in (B)?

Problem 2.16. Consider a family of probability density functions {f_θ : θ ∈ R} where
(Recall that I_A(x) equals 1 if x ∈ A and equals 0 otherwise.) For a fixed positive integer n, let X_1, ..., X_n be a collection of mutually independent, identically distributed random variables each with a probability density function given by f_θ for some fixed yet unknown value of θ. Find a maximum likelihood estimator for θ as a function of X_1, ..., X_n. Is this maximum likelihood estimator unique?

Problem 2.17. Consider a family of probability density functions {f_θ : θ ∈ (0, ∞)} where

For a fixed positive integer n, let X_1, ..., X_n be a collection of mutually independent, identically distributed random variables each with a probability density function given by f_θ for some fixed yet unknown value of θ. Find a maximum likelihood estimator for θ as a function of X_1, ..., X_n. Is this maximum likelihood estimator unbiased?

Problem 2.18. An estimator θ̂ is said to be admissible with respect to the squared error cost function if there exists no estimator θ̃ such that E_θ[(θ̃ - θ)²] ≤ E_θ[(θ̂ - θ)²] for all allowable values of θ, with the inequality being strict for some value of θ. Consider a random variable X with a distribution given by P(X = 1) = θ and P(X = 0) = 1 - θ, where ≤ θ ≤ .
(A) Find a maximum likelihood estimator of θ based on one sample from the distribution of X.
(B) Consider the collection of all estimators θ̂_α that are of the form

θ̂_α(x) = α if x = 0, and 1 - α if x = 1,

where ≤ α ≤ . (For what value of α is θ̂_α(X) equal to the estimator that you found in part (A)?) Show that the maximum likelihood estimator you found in part (A) has a larger mean square error than θ̂_α for any α such that < α ≤ .
(C) Is the maximum likelihood estimator you found in part (A) an admissible estimator?

Problem 2.19. Let f_θ be a density function of a triangular distribution on a fixed interval [0, A], where A > 0 and where the peak of the triangle is at x = θ, where θ is some fixed yet unknown element from (0, A). That is,
f_θ(x) = 2x / (Aθ) if 0 ≤ x ≤ θ
f_θ(x) = 2(A - x) / (A(A - θ)) if θ < x ≤ A
Consider a maximum likelihood estimator θ̂ of θ based upon n samples X_1, ..., X_n from the distribution f_θ, where we assume without loss of generality that the n observations have been arranged so that 0 ≤ x_1 ≤ x_2 ≤ ... ≤ x_n ≤ A.
(A) Show that a maximum of the likelihood function is attainable only when θ̂ is equal to one of the n observations.
(B) Prove or Disprove: For the jth sample to be a possible maximum likelihood estimator for θ it must be true that ((j - 1)/n) A < x_j
where θ_1 and θ_2 are positive constants. Let X_1, ..., X_n be a collection of mutually independent, identically distributed random variables each with probability density function f_{θ_1, θ_2}.
(A) Assume that θ_1 is a known positive constant. Find a maximum likelihood estimate for θ_2 as a function of X_1, ..., X_n. Be sure to show that your solution corresponds to a maximum of the likelihood function.
(B) Assume that θ_2 is a known positive constant. Find a maximum likelihood estimate for θ_1 as a function of X_1, ..., X_n. Again, be sure to show that your solution corresponds to a maximum of the likelihood function.

Problem 2.23. Consider a random variable X that has a probability density function of the form

f_θ(x) = (1 + θx) / 2,

where -1 < x < 1 and -1 ≤ θ ≤ 1 and where, as usual, we assume that θ is fixed but unknown. Find a maximum likelihood estimate of θ as a function of X. Is your estimate unbiased? What is the mean square error of your estimate? Is your estimate admissible?

Problem 2.24. Consider a collection X_1, ..., X_n of mutually independent, Gaussian random variables each with mean θ and variance θ², where θ is some fixed, but unknown, positive real number. Find a candidate for a maximum likelihood estimate of θ as a function of X_1, ..., X_n. (You do not need to prove that your candidate corresponds to a maximum.)

Problem 2.25. Consider mutually independent, identically distributed random variables X_1, ..., X_n such that each has a probability density function of the form

f_θ(x) = exp(-(x - θ)) if x ≥ θ, and 0 otherwise,

where θ is some fixed, but unknown, real number. Find a maximum likelihood estimate of θ as a function of X_1, ..., X_n. Is your estimate unbiased?
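For Problem 2.25 the log-likelihood is Σ log f_θ(x_i) = nθ - Σ x_i whenever θ ≤ min_i x_i (and -∞ otherwise), so it is increasing in θ on the feasible set and the MLE is θ̂ = min_i X_i. A small grid check (my sketch, with a made-up sample) illustrates this:

```python
import math

def log_likelihood(theta, xs):
    """Shifted-exponential log-likelihood; -inf when any x_i < theta."""
    if theta > min(xs):
        return -math.inf
    return sum(-(x - theta) for x in xs)

xs = [1.3, 0.9, 2.1]  # hypothetical observations
grid = [i / 1000.0 for i in range(0, 3001)]
best = max(grid, key=lambda t: log_likelihood(t, xs))
print(best, min(xs))  # the maximizer coincides with the sample minimum, 0.9
```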
2.4. Minimum Mean Square Estimation
Problem 2.26. Consider a zero mean, wide sense stationary random process {X(t) : t ∈ R} with autocorrelation function R(τ) = E[X(t)X(t + τ)]. Find a minimum mean square linear estimate of X(t + k) as a function of X(t), where k is any fixed positive real number.

Problem 2.27. Consider a zero mean, wide sense stationary random process {X(t) : t ∈ R} with autocorrelation function R(τ) = E[X(t)X(t + τ)]. Find a minimum mean square linear
estimate of X(t) in terms of X(0) and X(T), where T is any fixed positive real number. What is your estimate when t = T/2?

Problem 2.28. Consider a zero mean, wide sense stationary random process {X(t) : t ∈ R} with autocorrelation function R(τ) = E[X(t)X(t + τ)]. Fix a positive real number T and assume that R(t) is integrable over [0, T]. Assume that the integral ∫_0^T X(t) dt exists for each sample path of X(t). Find a minimum mean square linear estimate of ∫_0^T X(t) dt in terms of X(0) and X(T).

Problem 2.29. Consider a zero mean, wide sense stationary random process {X(t) : t ∈ R} with an autocorrelation function given by R(τ) = E[X(t + τ)X(t)] = exp(-|τ|). We desire to estimate X(t) via a linear combination of X(t - 1) and X(t - 2) so as to minimize the mean square error. That is, we wish to estimate X(t) via f(X(t - 1), X(t - 2)) = a_1 X(t - 1) + a_2 X(t - 2) so that E[(X(t) - f(X(t - 1), X(t - 2)))²] is minimized. Find the constants a_1 and a_2.

Problem 2.30. Suppose that we want to build a filter that is modeled by convolution of an input x(t) with h(t), where h(t) = 1 if 0 ≤ t < 1 and h(t) = 0 elsewhere. Since we cannot build such a filter, we decide to construct an approximation to it using ĥ in place of h, where ĥ(t) = a_1 exp(-t) + a_2 t exp(-t) for t ≥ 0 and ĥ(t) = 0 for t < 0. Our goal will be to minimize ∫_0^∞ |h(t) - ĥ(t)|² dt. Find a_1 and a_2.

Problem 2.31. Consider a random variable X that has a uniform distribution on the interval [0,1]. Find a minimum mean square affine estimate of X³ in terms of X² and X. What is the mean square error of your estimate?
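Problem 2.29 reduces to the normal (orthogonality) equations: with R(τ) = exp(-|τ|), require E[(X(t) - a_1 X(t-1) - a_2 X(t-2)) X(t-j)] = 0 for j = 1, 2, i.e. a_1 R(0) + a_2 R(1) = R(1) and a_1 R(1) + a_2 R(0) = R(2). Solving this 2-by-2 system (a sketch of mine, not the book's worked solution) gives a_1 = exp(-1) and a_2 = 0, as one expects for an exponentially correlated process.

```python
import math

def R(tau):
    """Autocorrelation R(tau) = exp(-|tau|)."""
    return math.exp(-abs(tau))

# Normal equations:  [R(0) R(1); R(1) R(0)] [a1; a2] = [R(1); R(2)]
# solved by Cramer's rule.
det = R(0) ** 2 - R(1) ** 2
a1 = (R(1) * R(0) - R(2) * R(1)) / det
a2 = (R(2) * R(0) - R(1) * R(1)) / det
print(a1, a2)  # a1 ≈ exp(-1) ≈ 0.3679, a2 ≈ 0.0
```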
2.5. Hilbert Spaces
Problem 2.32. Let X denote the set of all continuous functions f: [0,1] → R such that f(0) = 0. Let ||f|| denote the supremum (least upper bound) of the set {|f(t)| : t ∈ [0,1]}. Let K denote the collection of all functions g in X such that ∫_0^1 g(t) dt = 1.
(A) Show that || · || is a norm on X.
(B) Show that K is convex.
(C) Show that there is no point in K with minimum norm.

Problem 2.33. Consider a Hilbert space H and a closed proper subspace M of H. Consider the function P_M(h) that maps a point h in H to the point in M that is nearest to h. That is, P_M is the orthogonal projection of H onto M.
(B) Let N be another closed proper subspace of H. Show that the subspace M is orthogonal to the subspace N if and only if P_M ∘ P_N = P_N ∘ P_M = 0.
(C) Show that P_M ∘ P_N = 0 if and only if P_N ∘ P_M = 0.

Problem 2.34. For this problem, consider the Hilbert space H of all second order random variables defined on some probability space, where ⟨S, T⟩ = E[ST]. Let X be a zero mean, unit variance Gaussian random variable. Let
1a2. l + X
for real numbers a_{i,j}. For what choice of the a_{i,j}'s are the Y_i's orthogonal? Find the minimum mean-square linear estimate of Y_4 based upon Y_1, Y_2, and Y_3. Find the minimum mean-square nonlinear estimate of Y_4 based upon Y_1, Y_2, and Y_3. (Hint: E[X⁴] = 3.)

Problem 2.35. (A) Let M be a closed subspace of a Hilbert space H and, for x ∈ H, let x = P_M(x) + Q_M(x), where P_M(x) ∈ M and Q_M(x) ∈ M⊥. Prove that
max{ |⟨x, y⟩| : y ∈ M⊥, ||y|| = 1 } = ||Q_M(x)||.

(B) Use the result from Part (A) to find the maximum value of
subject to the following constraints on g:
$$\int_{-1}^{1} g(x)\,dx = 0, \qquad \int_{-1}^{1} x\,g(x)\,dx = 0, \qquad \int_{-1}^{1} x^2 g(x)\,dx = 0, \qquad \int_{-1}^{1} g^2(x)\,dx = 1.$$
3. Detection Solutions
3.1. Bayesian and Minimax Tests
Solution 1.1. Let the parameter set be given by Θ = {HH, HT}, where HH denotes the coin with two heads and HT denotes the standard coin. The sample space for a single flip of the coin is given by {H, T}, where H means that we observed a Head and T means that we observed a Tail. Our decision set D is the same as Θ. Let X denote the observation from our single coin flip. Our decision rule δ has the following distribution:
$$P(\delta(X) = \mathrm{HT} \mid X = \mathrm{T}) = 1, \quad P(\delta(X) = \mathrm{HT} \mid X = \mathrm{H}) = p, \quad P(\delta(X) = \mathrm{HH} \mid X = \mathrm{T}) = 0, \quad P(\delta(X) = \mathrm{HH} \mid X = \mathrm{H}) = 1 - p.$$
Thus, the risk function R is given by
$$R(\mathrm{HH}, \delta) = E_{\mathrm{HH}}[L(\mathrm{HH}, \delta(X))] = P_{\mathrm{HH}}(\delta(X) = \mathrm{HT})\,L(\mathrm{HH}, \mathrm{HT}) + P_{\mathrm{HH}}(\delta(X) = \mathrm{HH})\,L(\mathrm{HH}, \mathrm{HH}) = p \times 1 + (1-p) \times 0 = p,$$
and
$$R(\mathrm{HT}, \delta) = E_{\mathrm{HT}}[L(\mathrm{HT}, \delta(X))] = P_{\mathrm{HT}}(\delta(X) = \mathrm{HT})\,L(\mathrm{HT}, \mathrm{HT}) + P_{\mathrm{HT}}(\delta(X) = \mathrm{HH})\,L(\mathrm{HT}, \mathrm{HH}) = \left(\tfrac12 + \tfrac{p}{2}\right) \times 0 + \tfrac12(1-p) \times 1 = \tfrac12(1-p).$$
The maximum risk is then given by max(p, ½(1 − p)), which is minimized when p = 1/3.

Solution 1.2. Since X is Gaussian with mean zero and variance σ², it follows that X/σ has a standard Gaussian distribution. Thus, X²/σ² has a chi-square distribution with one degree of freedom. That is, X²/σ² has a density function given by
$$f(x) = \frac{1}{\sqrt{2\pi x}}\exp\left(-\frac{x}{2}\right)$$
for x > 0. Thus, X² has a density function given by
$$p_0(x) = \frac{1}{\sigma^2}\,f\!\left(\frac{x}{\sigma^2}\right) = \frac{1}{\sigma\sqrt{2\pi x}}\exp\left(-\frac{x}{2\sigma^2}\right).$$
which after reducing yields a test of the form described below. Let S = X₁² + X₂², and note that S is our test statistic. Under H₀, S has a chi-square distribution with 2 degrees of freedom. Thus, under H₀, S has density
$$p_0(x) = \frac{1}{2}\exp\left(-\frac{x}{2}\right)$$
for x ≥ 0. Under H₁, S/2 is chi-square with 2 degrees of freedom. Thus, under H₁, S has density
$$p_1(x) = \frac{1}{2}\,p_0\!\left(\frac{x}{2}\right).$$
For a minimax test, we choose τ so that the probability of error of the first kind (Q₀) is equal to the probability of error of the second kind (Q₁). Note that
$$Q_0 = P_0(S > \tau) = \exp\left(-\frac{\tau}{2}\right)$$
and that
$$Q_1 = P_1(S \le \tau) = 1 - \exp\left(-\frac{\tau}{4}\right).$$
Thus, we seek τ such that
$$\exp\left(-\frac{\tau}{2}\right) = 1 - \exp\left(-\frac{\tau}{4}\right),$$
or that y² + y − 1 = 0, where y = exp(−τ/4). Solving yields
$$y = \frac{-1 + \sqrt{5}}{2},$$
which implies (since y > 0 is required) that
$$\tau = -4\ln\left(\frac{-1+\sqrt{5}}{2}\right) \approx 1.925.$$

Solution 1.4. From class notes we know that the processor in this case is given by
$$g(y) = \begin{cases} -2 & \text{if } y \le 0,\\ 2(y-1) & \text{if } 0 \le y \le 2,\\ 2 & \text{if } y \ge 2, \end{cases}$$
where the threshold is given by
$$\tau = \ln\left(\frac{1}{9}\right) \approx -2.197.$$
Since g(y) > τ for all possible values of y, it follows that our test will always announce that the signal is present regardless of our observation.

Solution 1.5. (A) Assume that there exists a decision rule δ that minimizes
$$\int_{-\infty}^{\infty} L(\theta, \delta(x))\,g(\theta \mid x)\,d\theta$$
for each x. Let δ̃ be another rule, and note that
$$r(f, \tilde\delta) = \int_{-\infty}^{\infty} R(\theta, \tilde\delta)\,f(\theta)\,d\theta = \int_{-\infty}^{\infty} E_\theta[L(\theta, \tilde\delta(X))]\,f(\theta)\,d\theta = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} L(\theta, \tilde\delta(x))\,p_\theta(x)\,dx\,f(\theta)\,d\theta$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} L(\theta, \tilde\delta(x))\,\frac{p_\theta(x)f(\theta)}{h(x)}\,d\theta\,h(x)\,dx, \quad \text{where } h(x) = \int_{-\infty}^{\infty} p_\theta(x)f(\theta)\,d\theta,$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} L(\theta, \tilde\delta(x))\,g(\theta \mid x)\,d\theta\,h(x)\,dx \ \ge\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} L(\theta, \delta(x))\,g(\theta \mid x)\,d\theta\,h(x)\,dx \quad \text{(by our assumption)}$$
$$= \int_{-\infty}^{\infty} E_\theta[L(\theta, \delta(X))]\,f(\theta)\,d\theta = r(f, \delta).$$
Thus, it follows that δ is a Bayes rule since its average risk is not greater than the average risk of any other test.
(B) Note that
$$\int_{-\infty}^{\infty} L(\theta, \delta(x))\,g(\theta \mid x)\,d\theta = \begin{cases} \alpha \int_{\omega_1} g(\theta \mid x)\,d\theta & \text{if } \delta(x) = d_0\\ \beta \int_{\omega_0} g(\theta \mid x)\,d\theta & \text{if } \delta(x) = d_1 \end{cases} = \begin{cases} \alpha P(\Theta \in \omega_1 \mid X = x) & \text{if } \delta(x) = d_0\\ \beta P(\Theta \in \omega_0 \mid X = x) & \text{if } \delta(x) = d_1. \end{cases}$$
From Part (A), we know that a rule that minimizes the previous expression is a Bayes rule. Thus, a Bayes rule δ₀ is given by
$$\delta_0(x) = \begin{cases} d_0 & \text{if } \alpha P(\Theta \in \omega_1 \mid X = x) < \beta P(\Theta \in \omega_0 \mid X = x)\\ d_1 & \text{if } \alpha P(\Theta \in \omega_1 \mid X = x) \ge \beta P(\Theta \in \omega_0 \mid X = x). \end{cases}$$
Since the decision rule given by
$$\delta_0'(x) = \begin{cases} d_0 & \text{if } \alpha P(\Theta \in \omega_1 \mid X = x) \le \beta P(\Theta \in \omega_0 \mid X = x)\\ d_1 & \text{if } \alpha P(\Theta \in \omega_1 \mid X = x) > \beta P(\Theta \in \omega_0 \mid X = x) \end{cases}$$
is also a Bayes rule, it follows that the decision taken when
$$\alpha P(\Theta \in \omega_1 \mid X = x) = \beta P(\Theta \in \omega_0 \mid X = x)$$
is immaterial.
3.2. Neyman-Pearson Tests
Solution 1.6. Finding the ratio of the densities and making the standard reductions yields a test that consists of comparing the observation X to a threshold τ. Since X has a standard Gaussian distribution under H₀, it follows that τ must be such that 1 − Φ(τ) = α₀, where α₀ is the level of significance. For α₀ = 0.05, it follows that τ ≈ 1.65. Thus, if we observe 1.8 then our optimal test will reject H₀ in favor of H₁ even though H₀ is virtually certain to be the correct hypothesis in light of such an observation! Although seemingly surprising, the trouble is due to the large value of α₀ that we chose. We are in effect forcing our test to be wrong 5% of the time. For this example, which is virtually singular, we could allow α₀ to be much smaller without significantly lowering the power of the test.

Solution 1.7. It is not possible for this test to be a Neyman-Pearson test. To demonstrate this, all we need to do is find another test with a larger power and the same size. (If the test were a Neyman-Pearson test then there would exist no other test with a larger power and the same size.) Consider the test given by φ(x) = α₀. That is, consider a completely randomized
test that announces H₁ with probability α₀ no matter what the observation x is. The size of this test is given by
$$\alpha = E_0[\phi(X)] = \alpha_0.$$
Solution 1.9. From class notes, we know that the test in this situation consists of comparing the sum of the observations to a threshold given by
$$\tau = \sqrt{k}\,\sigma\,\Phi^{-1}(1 - \alpha_0).$$
Further, the power of the test is given by
$$\beta = 1 - \Phi\left(\Phi^{-1}(1-\alpha_0) - \frac{\sqrt{k}\,s}{\sigma}\right),$$
where in this case σ = √2.
(A) Let k = 9 and α₀ = 0.01. We seek a value of s such that
$$\beta = 1 - \Phi\left(\Phi^{-1}(0.99) - \frac{3s}{\sqrt{2}}\right) \ge 0.97.$$
Thus, we require that s ≥ (√2/3)(Φ⁻¹(0.99) + Φ⁻¹(0.97)) ≈ 1.98.
(B) Let s = 1 and α₀ = 0.05. We seek a value of k such that
$$1 - \Phi\left(\Phi^{-1}(0.95) - \frac{\sqrt{k}}{\sqrt{2}}\right) \ge 0.99.$$
Thus, we require that
$$k \ge 2\left(\Phi^{-1}(0.95) - \Phi^{-1}(0.01)\right)^2 \approx 31.6,$$
which implies that we should choose k = 32.

Solution 1.10. (A) From class notes, we know that the Neyman-Pearson test for this case compares the statistic Zₙ to a threshold, where the threshold is given by
$$\tau = \sqrt{n}\,\Phi^{-1}(0.995) = 2.576\sqrt{n}.$$
Further, the test statistic Zₙ has a Gaussian distribution with mean 0 and variance n under H₀ and a Gaussian distribution with mean n/2 and variance n under H₁. Finally, recall from the class notes that the power of the test (written as a function of n) is given by
$$\beta(n) = 1 - \Phi\left(\Phi^{-1}(0.995) - \frac{\sqrt{n}}{2}\right) = 1 - \Phi\left(2.576 - \frac{\sqrt{n}}{2}\right).$$
Note that β(9) = 1 − Φ(1.076) ≈ 0.141.
(B) Note that the expected number of observations in this case is ½(2) + ½(16) = 9, which is the same number of observations as were used in part (A). The power in this case, however, is given by
$$\frac12\beta(2) + \frac12\beta(16) \approx \frac12(0.031) + \frac12(0.282) \approx 0.157,$$
which is larger than the power of the Neyman-Pearson test considered in part (A). Thus, a Neyman-Pearson test with a fixed number of observations can have a smaller power than a test with a random number of observations even when the expected number of observations for the second test is the same as the fixed number of observations used in the first test. This does not violate the Neyman-Pearson lemma since that lemma was based on fixed distributions, and the distribution is not fixed when the number of observations is allowed to vary.

Solution 1.11. Let Y denote the number of Heads that we observe after two flips of the coin. Note that under H₀
$$Y \sim \begin{cases} 0 & \text{wp } 1/4 \equiv p_0\\ 1 & \text{wp } 1/2 \equiv p_1\\ 2 & \text{wp } 1/4 \equiv p_2 \end{cases}$$
and that under H₁
$$Y \sim \begin{cases} 0 & \text{wp } 4/9 \equiv q_0\\ 1 & \text{wp } 4/9 \equiv q_1\\ 2 & \text{wp } 1/9 \equiv q_2. \end{cases}$$
Thus, if Λₙ = qₙ/pₙ, then Λ₀ = 16/9, Λ₁ = 8/9, and Λ₂ = 4/9. Using our analogy of a buyer with a limited budget, the most "valuable" point is y = 0 since it has the largest ratio of value (qₙ) to price (pₙ). Unfortunately, the price of item 0 (p₀) is 1/4, which exceeds our budget (α₀) of 1/8. Thus, we can only purchase a piece of item 0. That is, we must use a randomized test. Our test then will be to announce H₁ with probability p if Y = 0, and to announce H₀ otherwise. The size of our test is given by
$$\alpha = P_0(\text{Announce } H_1) = \frac14 p.$$
Since the Neyman-Pearson test requires that α = α₀, it follows that p = 1/2. The power of the test is then β = P₁(Y = 0)·p = (4/9)(1/2) = 2/9.
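The size and power of this randomized test can be tallied exactly with rational arithmetic (a minimal sketch; the distributions are the ones listed above):

```python
from fractions import Fraction as F

# Distributions of Y (number of Heads in two flips) under each hypothesis.
p = {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)}   # H0: fair coin
q = {0: F(4, 9), 1: F(4, 9), 2: F(1, 9)}   # H1: biased coin

# Randomized Neyman-Pearson test: announce H1 wp 1/2 only when Y = 0.
announce = {0: F(1, 2), 1: F(0), 2: F(0)}

size  = sum(p[y] * announce[y] for y in p)   # should equal alpha0 = 1/8
power = sum(q[y] * announce[y] for y in q)   # should equal 2/9
```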
Solution 1.12. Consider first the general situation in which we have the two hypotheses
H₀: Xⱼ = Nⱼ
H₁: Xⱼ = sⱼ + Nⱼ
for j = 1, …, n, where the Nⱼ's have a zero mean multivariate Gaussian distribution with covariance matrix Σ. Under H₀, the Xⱼ's have joint density
$$p_0(x) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac12 x^T\Sigma^{-1}x\right),$$
and under H₁, the Xⱼ's have joint density
$$p_1(x) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac12 (x-s)^T\Sigma^{-1}(x-s)\right).$$
Thus,
$$\Lambda(x) = \frac{p_1(x)}{p_0(x)} = \exp\left(-\frac12(x-s)^T\Sigma^{-1}(x-s) + \frac12 x^T\Sigma^{-1}x\right) = \exp\left(-\frac12\left(-x^T\Sigma^{-1}s - s^T\Sigma^{-1}x + s^T\Sigma^{-1}s\right)\right).$$
Note that
$$s^T\Sigma^{-1}x = (s^T\Sigma^{-1}x)^T = x^T(\Sigma^{-1})^T s = x^T\Sigma^{-1}s,$$
since Σ⁻¹ is symmetric and sᵀΣ⁻¹x is a real scalar. Thus, after taking the natural log of Λ(x) and cancelling constants, we obtain a test of the form
$$x^T\Sigma^{-1}s \mathop{\gtrless}_{H_0}^{H_1} \tau.$$
Under H₀, the test statistic XᵀΣ⁻¹s is Gaussian with mean zero and variance
$$s^T\Sigma^{-1}E[XX^T]\Sigma^{-1}s = s^T\Sigma^{-1}\Sigma\,\Sigma^{-1}s = s^T\Sigma^{-1}s.$$
Under H₁, the test statistic XᵀΣ⁻¹s is Gaussian with mean sᵀΣ⁻¹s and variance sᵀΣ⁻¹s. Thus,
$$Q_0 = P_0\!\left(X^T\Sigma^{-1}s > \tau\right) = 1 - \Phi\!\left(\frac{\tau}{\sqrt{s^T\Sigma^{-1}s}}\right)$$
and
$$Q_1 = P_1\!\left(X^T\Sigma^{-1}s \le \tau\right) = \Phi\!\left(\frac{\tau - s^T\Sigma^{-1}s}{\sqrt{s^T\Sigma^{-1}s}}\right).$$
Now, for the particular problem under consideration, we have n = 2, s₁ = exp(−h), and s₂ = exp(−2h). Further,
$$\Sigma = \begin{bmatrix} 1 & 1/2\\ 1/2 & 1 \end{bmatrix}, \qquad \Sigma^{-1} = \begin{bmatrix} 4/3 & -2/3\\ -2/3 & 4/3 \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{12}\\ \sigma_{21} & \sigma_{22} \end{bmatrix}.$$
The processor is given by xᵀΣ⁻¹s. Further,
$$s^T\Sigma^{-1}s = \begin{bmatrix} e^{-h} & e^{-2h} \end{bmatrix}\begin{bmatrix} 4/3 & -2/3\\ -2/3 & 4/3 \end{bmatrix}\begin{bmatrix} e^{-h}\\ e^{-2h} \end{bmatrix} = \frac43\left(e^{-2h} - e^{-3h} + e^{-4h}\right).$$
Thus, our test is given by
$$\left(\frac43 e^{-h} - \frac23 e^{-2h}\right)x_1 + \left(\frac43 e^{-2h} - \frac23 e^{-h}\right)x_2 \mathop{\gtrless}_{H_0}^{H_1} 1.28\sqrt{\frac43\left(e^{-2h} - e^{-3h} + e^{-4h}\right)}.$$
If h = 0.1, then sᵀΣ⁻¹s = 0.9976 and β = 1 − Φ(1.28 − √(sᵀΣ⁻¹s)).
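A quick numerical check of the h = 0.1 figures (a sketch, assuming numpy; Σ is the covariance matrix whose inverse appears above):

```python
import numpy as np

h = 0.1
s = np.array([np.exp(-h), np.exp(-2 * h)])     # signal vector (e^-h, e^-2h)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])     # noise covariance
Sigma_inv = np.linalg.inv(Sigma)               # should be [[4/3,-2/3],[-2/3,4/3]]

quad = float(s @ Sigma_inv @ s)                # s^T Sigma^{-1} s
closed_form = (4.0 / 3.0) * (np.exp(-2*h) - np.exp(-3*h) + np.exp(-4*h))
```

Both `quad` and `closed_form` come out to about 0.9976, matching the value quoted in the text.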
where sⱼ = exp(−(j−1)h). Further, the power of the test is given by
$$\beta = 1 - \Phi\left(\Phi^{-1}(0.95) - \sigma_G\right),$$
where
$$\sigma_G = \sqrt{s^T\Sigma^{-1}s}.$$
Thus, we seek the largest value of h for which
$$\beta = 1 - \Phi\left(1.65 - \sqrt{10}\sqrt{1 + \exp(-2h) + \exp(-4h)}\right) \ge 0.99.$$
Setting y = exp(−2h), we are then seeking the largest value of h for which
$$1.65 - \sqrt{10}\sqrt{y^2 + y + 1} \le \Phi^{-1}(0.01) = -2.33,$$
and hence such that y² + y − 0.584 ≥ 0. The roots of the corresponding quadratic equation are y₁ = 0.413 and y₂ = −1.413. Since y must be positive, we conclude that y must be greater than 0.413. If y > 0.413, then h < −½ ln(0.413) ≈ 0.442.

Solution 1.14. To begin, note that
$$p_0(y) = \begin{cases} 1 - \lambda_0 & \text{if } y = 0\\ \lambda_0 & \text{if } y = 1 \end{cases}$$
and
$$p_1(y) = \begin{cases} \lambda_1 & \text{if } y = 0\\ 1 - \lambda_1 & \text{if } y = 1. \end{cases}$$
Thus,
$$\Lambda(y) = \frac{p_1(y)}{p_0(y)} = \begin{cases} \dfrac{\lambda_1}{1-\lambda_0} & \text{if } y = 0\\[2mm] \dfrac{1-\lambda_1}{\lambda_0} & \text{if } y = 1. \end{cases}$$
Note that
$$P_0(\Lambda(Y) > \tau) = \begin{cases} 1 & \text{if } \tau < \dfrac{\lambda_1}{1-\lambda_0}\\[2mm] \lambda_0 & \text{if } \dfrac{\lambda_1}{1-\lambda_0} \le \tau < \dfrac{1-\lambda_1}{\lambda_0}\\[2mm] 0 & \text{if } \tau \ge \dfrac{1-\lambda_1}{\lambda_0}, \end{cases}$$
where we have noticed that since λ₀ + λ₁ < 1, it follows that λ₁ < 1 − λ₀ and 1 − λ₁ > λ₀, and hence that
$$\frac{\lambda_1}{1-\lambda_0} < 1 < \frac{1-\lambda_1}{\lambda_0}.$$
Thus, we see that
$$\beta(\alpha_0) = \begin{cases} \dfrac{\alpha_0(1-\lambda_1)}{\lambda_0} & \text{if } 0 \le \alpha_0 < \lambda_0\\[2mm] (1-\lambda_1) + \dfrac{\lambda_1(\alpha_0-\lambda_0)}{1-\lambda_0} & \text{if } \lambda_0 \le \alpha_0 < 1\\[2mm] 1 & \text{if } \alpha_0 = 1. \end{cases}$$
An ROC plot is given in Figure 3.1, showing the curves for λ₀ = λ₁ = 1/8 and λ₀ = λ₁ = 3/8.
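The piecewise power function is easy to evaluate directly; the sketch below (plain Python, with λ₀ = λ₁ = 1/8 as in one of the plotted curves) checks that the curve is continuous at α₀ = λ₀ and ends at (1, 1):

```python
def roc(alpha0, lam0, lam1):
    """Power of the Neyman-Pearson test for the binary channel of Problem 1.14."""
    if alpha0 < lam0:
        return alpha0 * (1.0 - lam1) / lam0
    if alpha0 < 1.0:
        return (1.0 - lam1) + lam1 * (alpha0 - lam0) / (1.0 - lam0)
    return 1.0

lam = 1.0 / 8.0
left  = roc(lam - 1e-12, lam, lam)   # just below the breakpoint alpha0 = lam0
right = roc(lam, lam, lam)           # at the breakpoint; should be 1 - lam1 = 7/8
```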
Figure 3.1: Receiver Operating Characteristic Curve for Problem 1.14.

Solution 1.15. From class notes, we know that the Neyman-Pearson test consists of comparing the sum of the observations to the threshold τ = √n σ Φ⁻¹(1 − α₀). Further, the power of the test is given by
$$\beta = 1 - \Phi\left(\Phi^{-1}(1-\alpha_0) - \frac{\sqrt{n}\,s}{\sigma}\right).$$
Note that Φ⁻¹(1 − α₀) = Φ⁻¹(0.95) = 1.65. For System A, we seek the smallest value of n for which 1 − Φ(1.65 − 2√n) > 0.95. Solving for n implies that we must have n > 2.7. Thus, for System A, we choose n = 3. For System B, we seek the smallest value of n for which 1 − Φ(1.65 − √n) > 0.95. Solving for n implies that we must have n > 10.9. Thus, for System B, we choose n = 11. The cost of System A is $1,000,000 + (100 × 3 × $1000) = $1,300,000 and the cost of System B is $250,000 + (100 × 11 × $1000) = $1,350,000. Thus, we should choose System A.

Solution 1.16. (A) To begin, note that
$$\Lambda(x) = \frac{p_1(x)}{p_0(x)} = \frac32 \cdot \frac{1}{x+1}$$
for 0 < x < 1. Reducing the test (Λ is decreasing in x), we obtain a test of the form
$$x \mathop{\gtrless}_{H_1}^{H_0} \tau.$$
For a minimax test, we choose τ so that Q₀ = Q₁, where Q₀ = P₀(X < τ) and Q₁ = P₁(X > τ). Thus,
$$Q_0 = Q_1 \iff \int_0^\tau \frac23(x+1)\,dx = 1 - \tau \iff \frac{\tau^2}{3} + \frac{2\tau}{3} = 1 - \tau \iff \frac13\tau^2 + \frac53\tau - 1 = 0.$$
Thus, since 0 < τ < 1, we conclude that
$$\tau = -\frac52 + \frac{\sqrt{37}}{2} \approx 0.54138.$$
(B) For a Neyman-Pearson test, we will choose τ so that Q₀ = α₀ where α₀ = 1/10. That is,
$$\frac{1}{10} = \int_0^\tau \frac23(x+1)\,dx = \frac13\tau^2 + \frac23\tau.$$
Again, since 0 < τ < 1, we conclude that
$$\tau = \frac{\sqrt{130}}{10} - 1 \approx 0.140175.$$
Note that the power of the test is given by
$$\beta = P_1(X < \tau) = \int_0^{0.140175} dx = 0.140175.$$

Solution 1.17. To begin, note that
$$\Lambda(x) = \frac{p_1(x)}{p_0(x)} = \begin{cases} 0 & \text{if } 0 < x < \tfrac12\\ 1 & \text{if } \tfrac12 < x < 1\\ +\infty & \text{if } 1 < x. \end{cases}$$
Further, under H₀,
$$\Lambda(X) \sim \begin{cases} 0 & \text{wp } 1/2\\ 1 & \text{wp } 1/2\\ +\infty & \text{wp } 0, \end{cases}$$
and under H₁,
$$\Lambda(X) \sim \begin{cases} 0 & \text{wp } 0\\ 1 & \text{wp } 1/2\\ +\infty & \text{wp } 1/2. \end{cases}$$
The Neyman-Pearson lemma states that the threshold τ and the randomization constant p must be chosen so that
$$\frac18 = P_0(\Lambda(X) > \tau) + p\,P_0(\Lambda(X) = \tau). \tag{3.1}$$
Note that P₀(Λ(X) > τ) takes values only in {0, 1/2, 1}. Thus, we must choose a value of τ such that P₀(Λ(X) > τ) = 0 and such that P₀(Λ(X) = τ) ≠ 0. The only such value of τ is 1. With τ = 1, equation (3.1) implies that
$$\frac18 = \frac12 p,$$
which thus implies that p = 1/4. Thus, our test has the form:
Announce H₁ if Λ(x) > 1; Announce H₁ wp 1/4 if Λ(x) = 1; Announce H₀ wp 3/4 if Λ(x) = 1; Announce H₀ if Λ(x) < 1.
In terms of x, the test has the form:
Announce H₁ if x > 1; Announce H₁ wp 1/4 if 1/2 < x < 1; Announce H₀ wp 3/4 if 1/2 < x < 1; Announce H₀ if x < 1/2.

Solution 1.22. (A) Note that Q₀ = P₀(S(X) > T) and Q₁ = P₁(S(X) < T). Note that, for 0 < T < 1/2,
$$Q_0 = P_0\!\left(X > T,\ 0 < X < \tfrac12\right) + P_0\!\left(1 - X > T,\ \tfrac12 < X < 1\right) = P_0\!\left(T < X < \tfrac12\right) + P_0\!\left(\tfrac12 < X < 1 - T\right) = \left(\tfrac12 - T\right) + \left(1 - T - \tfrac12\right) = 1 - 2T,$$
and
$$Q_1 = P_1\!\left(X < T,\ 0 < X < \tfrac12\right) + P_1\!\left(1 - X < T,\ \tfrac12 < X < 1\right) = P_1(0 < X < T) + P_1(1 - T < X < 1)$$
$$= \int_0^T 4x\,dx + \int_{1-T}^1 4(1-x)\,dx = 2T^2 + 4\left(1 - (1-T)\right) - 2\left(1 - (1-T)^2\right) = 4T^2.$$
Thus, Q₀ = Q₁ if 1 − 2T = 4T², which holds if T ≈ 0.309, where we have discarded the negative root.
(C) For a Neyman-Pearson test at level 1/100, we choose T so that Q₀ = 1/100. That is, we choose T so that 1 − 2T = 1/100. Solving implies that T = 0.495. The power of the test is given by 1 − Q₁, which in this case is 1 − 4T² = 0.0199.

Solution 1.23. (A) To begin, note that
$$\Lambda(x) = \begin{cases} \exp(-((x-1) - x)) = e & \text{if } x > 1\\ 0 & \text{if } 0 \le x \le 1. \end{cases}$$
Under H₀,
$$\Lambda(X) \sim \begin{cases} 0 & \text{wp } 1 - e^{-1}\\ e & \text{wp } e^{-1}, \end{cases}$$
and under H₁, Λ(X) = e with probability one. We choose τ and p so that
$$P_0(\Lambda(X) > \tau) + p\,P_0(\Lambda(X) = \tau) = \frac{1}{10}.$$
Thus, it follows that τ = e and p = e/10. The resulting test announces H₁ with probability e/10 if x ≥ 1 and announces H₀ otherwise. The power of the test is given by
$$\beta = 0 + \frac{e}{10} \times 1 = \frac{e}{10}.$$
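A Monte Carlo sketch of this randomized test (assuming numpy; the seed and sample size are arbitrary) confirms that its size is near 1/10 and its power near e/10:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
p_rand = math.e / 10.0                  # randomization constant from the text

# Under H0, X ~ Exp(1) on (0, inf); announce H1 wp e/10 whenever x >= 1.
x0 = rng.exponential(1.0, n)
size = float(np.mean((x0 >= 1.0) & (rng.random(n) < p_rand)))

# Under H1, X - 1 ~ Exp(1), so x >= 1 holds with probability one.
x1 = 1.0 + rng.exponential(1.0, n)
power = float(np.mean((x1 >= 1.0) & (rng.random(n) < p_rand)))
```

The size estimate lands near (e/10)·e⁻¹ = 0.1 and the power estimate near e/10 ≈ 0.272.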
(B) The critical function of any Neyman-Pearson test in this situation must have the form
$$\phi(x) = \begin{cases} 1 & \text{if } \Lambda(x) > e\\ 0 & \text{if } \Lambda(x) < e. \end{cases}$$
Note that Λ(x) is never greater than e. Thus, the only requirement on φ is that it be zero when Λ(x) < e; that is, when 0 ≤ x ≤ 1. If we require in addition that our test be nonrandom, then we must require that φ take on only the values 0 or 1. A test that satisfies both conditions is given by
$$\phi_0(x) = \begin{cases} 1 & \text{if } x > A\\ 0 & \text{if } x \le A, \end{cases}$$
where A > 1. The value of A is determined by the requirement that the size of our test be 1/10. That is,
$$\frac{1}{10} = P_0(\text{Announce } H_1) = P_0(X > A) = \int_A^\infty e^{-x}\,dx = \exp(-A).$$
Thus, A = ln(10), and our nonrandomized test announces H₁ when x > ln(10) and announces H₀ otherwise. Note that both the randomized test in (A) and the nonrandomized test in (B) are Neyman-Pearson tests. However, the test in (A) is constant on the threshold, whereas the test in (B) is not.

Solution 1.24. Let f(x) denote a Gaussian density function with mean 0 and variance σ². Then, under H₀, it follows that Y has density f(y), and, under H₁, it follows that Y has density
"21(x - 8) + "21(x + 8).That is, under HI, Y's distribution is an even mixture of two Gaussian distributions-onecentered at 8 and one centered at -8 . Thus,
A(y) = PI(y)Po(Y)1 1 ((y - 8)2) 1 1 ((y + 8)2)- exp - + - exp - """'"""------:=-'-2(J2 2(J2
1 (y2 )xp --V27T1y2 2(T2
( 2) ( ( ) ( ) )-8 y8 y8-exp - - exp +exp2 2(T2 (T2(_8 2 ) (Y8)exp 2(T2 cosh (T2 '41
and our reduced test has the form
$$\cosh\left(\frac{y\theta}{\sigma^2}\right) \mathop{\gtrless}_{H_0}^{H_1} \tau.$$
Note that this test is not UMP since it depends upon the parameter θ.

Solution 1.25. Note that
$$\Lambda(k) = \frac{p_1(k)}{p_0(k)} = \frac{e^{-\lambda_1}\lambda_1^k/k!}{e^{-\lambda_0}\lambda_0^k/k!} = e^{\lambda_0-\lambda_1}\left(\frac{\lambda_1}{\lambda_0}\right)^k.$$
Taking the log of the test and noting that ln(λ₁/λ₀) is positive, we obtain a test of the form
$$k \mathop{\gtrless}_{H_0}^{H_1} \tau,$$
with size α₀ = P₀(X > τ) + p P₀(X = τ).
The test will be nonrandom if α₀ = P₀(X > T) for some choice of T. Let ⌈T⌉ denote the smallest integer not less than T. Then, for noninteger T,
$$P_0(X > T) = \sum_{k=\lceil T\rceil}^{\infty}\frac{e^{-\lambda_0}\lambda_0^k}{k!}.$$
Thus, the test is nonrandom if α₀ may be expressed in the form
$$\alpha_0 = \sum_{k=m}^{\infty}\frac{e^{-\lambda_0}\lambda_0^k}{k!}$$
for some integer m.
Let λ₀ = 1 and α₀ = 0.02. By trial and error we first seek the largest value of T for which P₀(X > T) is not greater than 0.02. Note that
$$P_0(X > 3) = 1 - \sum_{k=0}^{3}\frac{1}{k!}e^{-1} = 1 - \frac{1}{e}\left(1 + 1 + \frac12 + \frac16\right) = 1 - \frac{8}{3e} \approx 0.019.$$
Further,
$$P_0(X = 3) = \frac{1}{6e} \approx 0.061.$$
Thus, we seek p so that
$$0.02 = P_0(X > 3) + p\,P_0(X = 3) \approx 0.019 + 0.061p,$$
which implies that p ≈ 0.016. Thus, our test announces H₁ if X > 3, announces H₁ with probability 0.016 if X = 3, and announces H₀ otherwise.
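These Poisson tail numbers, and the resulting randomization constant, can be reproduced exactly (plain Python sketch):

```python
import math

lam0 = 1.0
alpha0 = 0.02

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

tail = 1.0 - sum(poisson_pmf(k, lam0) for k in range(4))   # P0(X > 3)
atom = poisson_pmf(3, lam0)                                # P0(X = 3)
p_rand = (alpha0 - tail) / atom                            # randomization constant
```

This gives tail ≈ 0.019, atom ≈ 0.061, and p ≈ 0.0165, matching the values above.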
3.3. Locally Optimal Tests
Solution 1.26. (A) The density of the noise is given by
$$f_N(x) = \frac34(1 - x^2)$$
for |x| < 1. Thus,
$$-\frac{d}{dx}\ln f_N(x) = \frac{2x}{1 - x^2}$$
for |x| < 1. Thus, the locally optimal processor is given by
$$g_{lo}(x) = \frac{2x}{1 - x^2}.$$
(B) From class notes, we know that
$$g_{NP}(x) = \int_{x-s}^{x} g_{lo}(u)\,du = \Big[-\ln(1-u^2)\Big]_{u=x-s}^{u=x} = -\ln(1-x^2) + \ln(1-(x-s)^2) = \ln\left(\frac{1-(x-s)^2}{1-x^2}\right).$$
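A small numerical check of the two processors above (a sketch; it takes the noise density to be f_N(x) = (3/4)(1 − x²) on (−1, 1), the normalization consistent with the derivative shown):

```python
import math

def f_N(x):
    return 0.75 * (1.0 - x * x)         # assumed noise density on (-1, 1)

def g_lo(x):
    return 2.0 * x / (1.0 - x * x)      # locally optimal processor

# Check g_lo = -d/dx ln f_N with a centered finite difference.
x, eps = 0.3, 1e-6
num = -(math.log(f_N(x + eps)) - math.log(f_N(x - eps))) / (2.0 * eps)

# Check the closed form of g_NP against midpoint integration of g_lo.
s = 0.2
gnp_formula = math.log((1.0 - (x - s) ** 2) / (1.0 - x * x))
m = 100_000
du = s / m
gnp_numeric = sum(g_lo(x - s + (i + 0.5) * du) for i in range(m)) * du
```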
Solution 1.27. (A) From class notes, we know that the test statistic in this case is given by
$$Z = \sum_{j=1}^{3}\operatorname{sgn}(X_j).$$
Under H₀,
$$Z \sim \begin{cases} 3 & \text{wp } 1/8\\ 1 & \text{wp } 3/8\\ -1 & \text{wp } 3/8\\ -3 & \text{wp } 1/8. \end{cases}$$
We choose τ and p so that
$$\frac{1}{16} = P_0(Z > \tau) + p\,P_0(Z = \tau).$$
Solving as usual we see that τ = 3 and p = 1/2. Thus, our test announces H₁ with probability
1/2 if all of the observations are positive and announces H₀ otherwise.
(B) Note that under H₁,
$$\operatorname{sgn}(X_1) = \begin{cases} 1 & \text{if } N_1 > -s\\ -1 & \text{if } N_1 < -s. \end{cases}$$
Thus,
$$P_1(Z = 3) = P(N_1 > -s)P(N_2 > -s)P(N_3 > -s) = \left(\int_{-s}^{\infty}\frac12\exp(-|x|)\,dx\right)^3 = \left(1 - \frac12\exp(-s)\right)^3,$$
and hence
$$\beta = P_1(Z > 3) + \frac12 P_1(Z = 3) = 0 + \frac12\left(1 - \frac12\exp(-s)\right)^3.$$
We seek the smallest value of s for which β ≥ 0.49. Solving, we find that if s ≥ 4.3 then β ≥ 0.49.
(C) The power β is never greater than 1/2.

Solution 1.28. (A) To begin, note that
$$\Lambda(x) = \frac{p_1(x)}{p_0(x)} = \frac{1 + x^2}{1 + (x-s)^2}.$$
Also, note that Λ(x) → 1 as x → ±∞ and that Λ(s/2) = 1. Further, Λ(x) > 1 when x > s/2 and Λ(x) < 1 when x < s/2. Consider Figure 3.2. For the threshold τ shown in the figure, it is clear that our test will announce that no signal is present when the observation is sufficiently large! This phenomenon is due to the very heavy tails of the Cauchy noise density. Finally, note that the test is not UMP since it depends upon the signal s.
(B) Let
$$p_1(x, s) = \frac{1}{\pi(1 + (x-s)^2)}$$
and note that
$$\frac{\partial}{\partial s}p_1(x, s) = \frac{2(x-s)}{\pi(1 + (x-s)^2)^2}.$$
Thus, the locally optimal processor is given by
$$g_{lo}(x) = \lim_{s \downarrow 0}\frac{\frac{\partial}{\partial s}p_1(x, s)}{p_0(x)} = \lim_{s \downarrow 0}\frac{2(x-s)}{\pi(1 + (x-s)^2)^2}\cdot\pi(1 + x^2)$$
Figure 3.2: Neyman-Pearson processor for Problem 1.28 (the test announces H₁ only on a bounded region and announces H₀ elsewhere).

$$= \frac{2x(1 + x^2)}{(1 + x^2)^2} = \frac{2x}{1 + x^2}.$$
Note that this processor exhibits the same unusual behavior that we observed in part (A).
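The processor 2x/(1 + x²) is redescending: it peaks at x = 1 and decays toward zero for large observations, which is precisely the unusual behavior noted above. A quick numeric sketch (plain Python):

```python
def g_lo(x):
    """Locally optimal processor for Cauchy noise."""
    return 2.0 * x / (1.0 + x * x)

# The processor grows up to x = 1, then falls back toward zero.
values = [g_lo(x) for x in (0.5, 1.0, 2.0, 10.0, 100.0)]

# Coarse scan over (0, 10) to locate the peak.
peak = max(g_lo(i / 100.0) for i in range(1, 1000))
```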
4. Estimation Solutions
4.1. General Problems
Solution 2.1. If X denotes the number that we observe, then it would not seem unreasonable to suppose that X = i with probability 1/N for 1 ≤ i ≤ N. Note that
$$E[X] = \sum_{i=1}^{N} i\,\frac{1}{N} = \frac{1}{N}\cdot\frac{N(N+1)}{2} = \frac{N+1}{2}.$$
Thus, E[2X − 1] = N, which means that 2X − 1 is an unbiased estimate of N. In general, if we observe k numbers X₁, …, X_k, then 2X̄ − 1 is an unbiased estimate of N, where
$$\bar X = \frac{X_1 + \cdots + X_k}{k}.$$
On the other hand, if we observe X₁, …, X_k, then our goal might be to find a value N̂ for N so that
$$\prod_{i=1}^{k} P(X_i = x_i)$$
is maximized. If N < xᵢ for some value of i then P(Xᵢ = xᵢ) = 0 and the entire product is zero. If N ≥ max₁≤i≤k xᵢ,
an d
Thus,
Solution 2.4. (A) Note first that E[X₁] = E[X₂] = 1 and E[X₃] = 7/4. Hence, our estimate will be correct when it states that X₃'s distribution has the largest mean. Thus, taking one sample from each distribution, the probability that our estimator will correctly determine the distribution with the largest mean is simply the probability that X₁ ≤ X₃ and X₂ ≤ X₃. That is, the probability our estimate is correct is given by
(B) Let X̄ denote the sample mean of the two samples from the third distribution, and note that
$$\bar X = \begin{cases} 1 & \text{wp } 1/4\\ 5/2 & \text{wp } 1/4\\ 7/4 & \text{wp } 1/2. \end{cases}$$
Thus, the probability our estimator is correct in this case is given by
$$P(X_1 \le \bar X,\ X_2 \le \bar X) = P\!\left(\bar X = \tfrac52\right) + P(\bar X = 1)P(X_1 = 0)P(X_2 = 0) + P\!\left(\bar X = \tfrac74\right)P(X_1 = 0)P(X_2 = 0) = \frac14 + \frac14\cdot\frac12\cdot\frac12 + \frac12\cdot\frac12\cdot\frac12 = \frac{7}{16}.$$
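The 7/16 can be confirmed by brute-force enumeration (a sketch; it assumes, consistent with the arithmetic above, that X₁ and X₂ each take the values 0 and 2 with probability 1/2, and that each of the two samples from the third distribution takes the values 1 and 5/2 with probability 1/2):

```python
from fractions import Fraction as F
from itertools import product

half = F(1, 2)
vals_12 = [F(0), F(2)]        # assumed support of X1 and X2
vals_3  = [F(1), F(5, 2)]     # assumed support of each sample from the third law

prob_correct = F(0)
for x1, x2, a, b in product(vals_12, vals_12, vals_3, vals_3):
    xbar = (a + b) / 2        # sample mean of the two third-distribution draws
    if x1 <= xbar and x2 <= xbar:
        prob_correct += half ** 4
```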
(C) No. The probability of being correct decreased in (B) even though the number of observations increased.

Solution 2.5. (A) Since f₁(0) > f₂(0), it follows that f₁(0) − f₂(0) > 0. Thus, since f₁ and f₂ are continuous, it follows that f₁(x) − f₂(x) > 0 for all x ∈ [0, ε] for some ε > 0. Further, since f₁ and f₂ are even, it follows that f₁(x) − f₂(x) > 0 for all x ∈ [−ε, ε]. Thus, since
$$\int_{-\varepsilon}^{\varepsilon} f_1(x) - f_2(x)\,dx > 0,$$
it follows that
$$\underbrace{\int_{\theta-\varepsilon}^{\theta+\varepsilon} f_1(x-\theta)\,dx}_{P(|X-\theta|\le\varepsilon)} > \underbrace{\int_{\theta-\varepsilon}^{\theta+\varepsilon} f_2(x-\theta)\,dx}_{P(|Y-\theta|\le\varepsilon)}.$$
(B) Note that ½(X₁ + X₂) has a density function given by
$$g(x-\theta) = 2\int_{-\infty}^{\infty} f(2x - t - \theta)\,f(t-\theta)\,dt.$$
Using the result from (A), the desired result here will follow if we can show that f(0) > g(0). Note that
$$f(0) = \frac{k-1}{2}$$
and
$$g(0) = 2\int_{-\infty}^{\infty} f^2(s)\,ds \ (\text{since } f \text{ is even}) = 2\int_{-\infty}^{\infty}\frac{(k-1)^2}{4(1+|s|)^{2k}}\,ds = \frac{(k-1)^2}{2k-1}.$$
Thus, f(0) > g(0) since we have assumed that k > 3.¹

¹See Table of Integrals, Series, and Products, Corrected and Enlarged Edition (Academic Press: Orlando, 1980) by I. S. Gradshteyn and I. M. Ryzhik, page 285, formula 3.194(3).
4.2. Unbiased Estimators
Solution 2.6. Note that T is unbiased since
$$E[T(X)] = \sum_{k=0}^{\infty}(-2)^k\frac{\lambda^k}{k!}e^{-\lambda} = e^{-\lambda}\sum_{k=0}^{\infty}\frac{(-2\lambda)^k}{k!} = e^{-\lambda}e^{-2\lambda} = e^{-3\lambda}.$$
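That E[(−2)^X] = e^{−3λ} despite the wild sign oscillation can be checked by summing the series directly (plain Python sketch; the truncation at 100 terms is more than enough for a small λ):

```python
import math

def expected_T(lam, terms=100):
    """Truncated sum of E[(-2)^X] for X ~ Poisson(lam)."""
    return sum((-2.0) ** k * math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(terms))

lam = 1.3
series = expected_T(lam)
target = math.exp(-3.0 * lam)   # the claimed value e^{-3 lambda}
```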
However, T is not a reasonable estimator since it is negative when X is an odd integer yet e^{−3λ} is positive. Indeed, the estimate T oscillates wildly between positive and negative values.

Solution 2.7. Note that
$$\mathrm{VAR}[T] = E[T^2] - E^2[T] = E[T^2] - \theta^2.$$
Thus, E[T²] = VAR[T] + θ², which implies that E[T²] = θ² if and only if VAR[T] = 0; that is, if and only if T is constant with probability one.

Solution 2.8. If T(X₁, …, X_N) is an unbiased estimator of g(θ) then
$$g(\theta) = E_\theta[T(X_1,\ldots,X_N)] = \sum_{x_1=0}^{1}\sum_{x_2=0}^{1}\cdots\sum_{x_N=0}^{1} T(x_1,\ldots,x_N)\,P(X_1 = x_1,\ldots,X_N = x_N) = \sum_{x_1=0}^{1}\cdots\sum_{x_N=0}^{1}\underbrace{T(x_1,\ldots,x_N)\,\theta^{\sum_{i=1}^{N}x_i}(1-\theta)^{N - \sum_{i=1}^{N}x_i}}_{\text{polynomial in }\theta\text{ of degree} \le N\text{ for each fixed }x_1,\ldots,x_N}.$$
Since a sum of such polynomials is also a polynomial in θ of degree ≤ N, g(θ) must be a polynomial of degree not greater than N in order for T to possibly be an unbiased estimator. As an application of this result, note that there exists no unbiased estimate of the odds θ/(1−θ) of 'heads' versus 'tails' for a coin that comes up 'heads' with probability θ.
Solution 2.9. Assume that T(X) is an unbiased estimator of 1/λ. Then it follows that
$$E[T(X)] = \sum_{k=0}^{\infty} T(k)\frac{e^{-\lambda}\lambda^k}{k!} = \frac{1}{\lambda}.$$
Rearranging terms, we see that
$$\sum_{k=0}^{\infty}\frac{T(k)}{k!}\lambda^k = \frac{e^{\lambda}}{\lambda}.$$
Thus, if such an estimator T exists then there must exist a Taylor series expansion for e^λ/λ about the origin. However, no such Taylor series expansion exists since e^λ/λ is not defined at the origin. Thus, we conclude that no such estimator T exists.

Solution 2.10. Since T is unbiased, it follows that
$$\mathrm{VAR}_\theta[T] \ge \frac{1}{I(\theta)},$$
where
$$I(\theta) = E_\theta\left[-\frac{\partial^2}{\partial\theta^2}\ln\left(\left(\frac{1}{\sqrt{2\pi}}\right)^n\exp\left(-\frac12\left((X_1-\theta)^2 + \cdots + (X_n-\theta)^2\right)\right)\right)\right] = E_\theta\left[-\frac{\partial}{\partial\theta}\left((X_1-\theta) + \cdots + (X_n-\theta)\right)\right] = E_\theta\left[-\frac{\partial}{\partial\theta}\left(X_1 + \cdots + X_n - n\theta\right)\right] = E_\theta[n] = n.$$
Thus, the variance of T is lower bounded by 1/n.

Solution 2.11. In terms of λ, X has a discrete probability density function given by
$$P(X = x) = e^{-\lambda}\frac{\lambda^x}{x!} \equiv p_\lambda(x)$$
for x = 0, 1, 2, …. In terms of θ = exp(−λ), X has density
$$P(X = x) = \frac{\theta(-\ln\theta)^x}{x!}$$
for x = 0, 1, 2, …. Note that
$$\ln p_\theta(x) = \ln\theta + x\ln(-\ln\theta) - \ln x!$$
and thus that
$$\frac{\partial}{\partial\theta}\ln p_\theta(x) = \frac{1}{\theta} + \frac{x}{\theta\ln\theta}.$$
Further,
$$I(\theta) = -\frac{1}{\theta^2\ln\theta} = \frac{\exp(2\lambda)}{\lambda}.$$
In addition, note that
$$E_\theta[T(X)] = \sum_{k=0}^{\infty} T(k)P(X = k) = T(0)P(X = 0) = 1 \times \frac{\theta(-\ln\theta)^0}{0!} = \theta,$$
which implies that T is unbiased. Also,
$$E_\theta[T^2(X)] = \sum_{k=0}^{\infty} T^2(k)P(X = k) = T^2(0)P(X = 0) = \theta.$$
Thus, VAR_θ[T(X)] = θ − θ² = exp(−λ) − exp(−2λ). Since exp(λ) > λ + 1 when λ > 0, it follows that
$$e^{-\lambda} - e^{-2\lambda} > \lambda e^{-2\lambda} = \frac{1}{I(\theta)},$$
which implies that T is not efficient.
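The gap between VAR[T] = e^{−λ} − e^{−2λ} and the Cramér-Rao bound λe^{−2λ} can be tabulated for a few values of λ (plain Python sketch):

```python
import math

def var_T(lam):
    """Actual variance of the estimator T."""
    return math.exp(-lam) - math.exp(-2.0 * lam)

def crb(lam):
    """Cramér-Rao lower bound 1/I(theta) in terms of lambda."""
    return lam * math.exp(-2.0 * lam)

# The gap is strictly positive for every lambda > 0, so T is not efficient.
gaps = {lam: var_T(lam) - crb(lam) for lam in (0.1, 0.5, 1.0, 3.0, 10.0)}
```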
4.3. Maximum Likelihood Estimation
Solution 2.12. No. A maximum likelihood estimate of a mean need not be the sample mean. For example, if X₁, …, Xₙ are mutually independent and uniform on (θ − ½, θ + ½), then a maximum likelihood estimate of the mean θ is given by (min₁≤i≤n Xᵢ + max₁≤i≤n Xᵢ)/2,
where θ ∈ Θ = (0,1). No maximum likelihood estimate of θ exists since the likelihood possesses no maximum over Θ. A remedy to the problem is to let Θ = [0,1]. In this case, a maximum likelihood estimate of θ is given by
$$\hat\theta(x) = \begin{cases} 1 & \text{if } x = 1\\ 0 & \text{if } x = 0. \end{cases}$$
That is, if we include physically impossible values in Θ, then a maximum likelihood estimator exists and it returns as its estimate the physically impossible values that we included!

Solution 2.15. (A) From class notes, we know that a maximum likelihood estimate in this case is given by the sample mean,
$$\hat\theta(X_1,\ldots,X_k) = \frac{1}{k}\sum_{j=1}^{k} X_j.$$
Since
$$E[\hat\theta(X_1,\ldots,X_k)] = \frac{1}{k}\sum_{j=1}^{k}E[X_j] = \theta,$$
it follows that θ̂ is unbiased. Note that if
$$p_\theta(x_1,\ldots,x_k) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^k\exp\left(-\frac{1}{2\sigma^2}\sum_{j=1}^{k}(x_j-\theta)^2\right),$$
then
$$I(\theta) = E_\theta\left[-\frac{\partial^2}{\partial\theta^2}\ln p_\theta(X_1,\ldots,X_k)\right] = \frac{k}{\sigma^2}.$$
Further,
$$\mathrm{VAR}[\hat\theta(X_1,\ldots,X_k)] = \frac{1}{k^2}\sum_{j=1}^{k}\mathrm{VAR}[X_j] = \frac{\sigma^2}{k}.$$
Since VAR[θ̂(X₁,…,X_k)] = 1/I(θ), we see that θ̂ is efficient.
(B) Note that here the sign of θ is not fixed, so the test statistic is the square of the sum of the observations. Thus, our test has the form
$$\left(\sum_{j=1}^{k} X_j\right)^2 \mathop{\gtrless}_{H_0}^{H_1} \tau^2,$$
where we choose τ ≥ 0 so that
$$\alpha_0 = P_0\left(\left(\sum_{j=1}^{k} X_j\right)^2 > \tau^2\right) = 1 - P_0\left(-\tau \le \sum_{j=1}^{k} X_j \le \tau\right),$$
since
$$\sum_{j=1}^{k} X_j \sim N(0, k\sigma^2)$$
under H₀. Thus,
$$\tau = -\sqrt{k}\,\sigma\,\Phi^{-1}\!\left(\frac{\alpha_0}{2}\right).$$
Note that
$$\sum_{j=1}^{k} X_j \sim N(k\theta, k\sigma^2)$$
under H₁. Thus, the power β_G of the test is given by
$$\beta_G = 1 - \int_{(-\tau-k\theta)/(\sigma\sqrt{k})}^{(\tau-k\theta)/(\sigma\sqrt{k})}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right)du = 1 + \Phi\!\left(\Phi^{-1}\!\left(\frac{\alpha_0}{2}\right) - \frac{\theta\sqrt{k}}{\sigma}\right) - \Phi\!\left(-\Phi^{-1}\!\left(\frac{\alpha_0}{2}\right) - \frac{\theta\sqrt{k}}{\sigma}\right).$$
(C) From class notes we know that a Neyman-Pearson test for this situation has the form
$$\sum_{j=1}^{k} X_j \mathop{\gtrless}_{H_0}^{H_1} \tau,$$
where
$$\tau = \sqrt{k}\,\sigma\,\Phi^{-1}(1-\alpha_0)$$
and
$$\beta_{NP} = 1 - \Phi\!\left(\Phi^{-1}(1-\alpha_0) - \frac{\theta\sqrt{k}}{\sigma}\right).$$
If α₀ = 10⁻² and √kθ/σ = 5, then β_G and β_NP both exceed 0.99.
Solution 2.16. Let
$$L(\theta, x_1,\ldots,x_n) = \prod_{i=1}^{n} f_\theta(x_i)$$
and note that L(θ, x₁, …, xₙ) = 1 if and only if xᵢ ∈ [θ − ½, θ + ½] for i = 1, …, n, and equals zero otherwise. That is, L(θ, x₁, …, xₙ) = 1 if and only if
$$\max_{1\le i\le n}\{x_i\} - \frac12 \le \theta \le \min_{1\le i\le n}\{x_i\} + \frac12,$$
and equals zero otherwise. Thus, any estimator T : Rⁿ → R such that
$$\max_{1\le i\le n}\{x_i\} - \frac12 \le T(x_1,\ldots,x_n) \le \min_{1\le i\le n}\{x_i\} + \frac12$$
is a maximum likelihood estimate of θ. In particular,
$$T_\lambda(x_1,\ldots,x_n) = \lambda\left(\max_{1\le i\le n}\{x_i\} - \frac12\right) + (1-\lambda)\left(\min_{1\le i\le n}\{x_i\} + \frac12\right)$$
is a maximum likelihood estimate of θ for any value of λ in [0,1]. Note that in this case, not only is a maximum likelihood estimate not unique, but there exist uncountably many distinct maximum likelihood estimates of θ!

Solution 2.17. Note that
$$L(\theta, x_1,\ldots,x_n) = \prod_{i=1}^{n} f_\theta(x_i) = \frac{1}{\theta^n}$$
when θ ≥ max₁≤i≤n xᵢ
for 0 < z ≤ θ. Thus, Z possesses a probability density function given by
$$f_Z(z) = \frac{d}{dz}P(Z \le z) = \begin{cases} \dfrac{n z^{n-1}}{\theta^n} & \text{if } 0 < z \le \theta\\ 0 & \text{otherwise.} \end{cases}$$
In addition,
for all values of α and a under consideration.
(C) No, since the estimator from (B) has a smaller mean square error than the maximum likelihood estimator from (A) for all possible values of the parameter. Thus, we conclude that in general a maximum likelihood estimator need not be admissible.

Solution 2.19. (A) Note that the likelihood L(θ, x₁, …, xₙ) is continuous in θ and differentiable in θ between the xᵢ's. If xⱼ < θ < xⱼ₊₁, then
$$\frac{\partial}{\partial\theta}\ln L(\theta, x_1,\ldots,x_n) = -\frac{j}{\theta} + \frac{n-j}{A-\theta},$$
and
$$\frac{\partial^2}{\partial\theta^2}\ln L(\theta, x_1,\ldots,x_n) = \frac{j}{\theta^2} + \frac{n-j}{(A-\theta)^2} > 0.$$
Since the second derivative is positive, any critical point between the xᵢ's must correspond to a minimum (not a maximum) of L. (That is, any critical point θ such that xⱼ < θ < xⱼ₊₁ must be a local minimum of L.) If 0 ≤ θ < x₁, then j = 0 and
$$\frac{\partial}{\partial\theta}\ln L(\theta, x_1,\ldots,x_n) = \frac{n}{A-\theta} > 0,$$
so L is strictly increasing in θ. Thus, no value of θ in [0, x₁) can correspond to a maximum of L. Similarly, no value of θ in (xₙ, A] can correspond to a maximum of L. Thus, we conclude that a maximum of L is attainable only when θ is equal to one of the observations, since no other value of θ could possibly correspond to a maximum of L.
(B) The strict positivity of the second derivative within the intervals between the observations implies that any local maximum of L that exists at xⱼ must correspond to a cusp of L; that
is, ∂/∂θ ln L > 0 to the left of xⱼ and ∂/∂θ ln L < 0 to the right of xⱼ. Thus, for xⱼ to correspond to a local maximum it must be true that
$$\lim_{\theta\downarrow x_j}\left(-\frac{j}{\theta} + \frac{n-j}{A-\theta}\right) < 0 < \lim_{\theta\uparrow x_j}\left(-\frac{j-1}{\theta} + \frac{n-(j-1)}{A-\theta}\right),$$
which implies that
$$-\frac{j}{x_j} + \frac{n-j}{A-x_j} < 0 < -\frac{j-1}{x_j} + \frac{n-j+1}{A-x_j},$$
or that
$$\frac{(j-1)A}{n} < x_j < \frac{jA}{n}.$$
Thus, θ̂ must correspond to a global maximum, and hence is a maximum likelihood estimate for θ.
Recall that
$$E[\hat\theta(X_1)] = E[2X_1] = 2E[X_1] = 2\theta.$$
Since 2θ ≠ θ for θ > 0, it follows that θ̂ is not unbiased. Further, note that
$$E[(\hat\theta(X_1) - \theta)^2] = E[4X_1^2] - 4\theta E[X_1] + \theta^2 = 4E[X_1^2] - 3\theta^2 = \infty,$$
since E[X₁²] = ∞. However, if θ̂(x) = A for any real number A, then E[(θ̂(X) − θ)²] is finite. Thus, we conclude that θ̂ is not admissible.
(B) Note that
$$L(\theta, x_1, x_2) = \frac{4\theta^4}{(x_1+\theta)^3(x_2+\theta)^3},$$
and hence that
$$\ln L(\theta, x_1, x_2) = 2\ln 2 + 4\ln\theta - 3\ln(x_1+\theta) - 3\ln(x_2+\theta).$$
Thus,
$$\frac{\partial}{\partial\theta}\ln L(\theta, x_1, x_2) = \frac{4}{\theta} - \frac{3}{x_1+\theta} - \frac{3}{x_2+\theta},$$
which is equal to zero when θ is the positive root of the resulting quadratic; that root is a candidate for a maximum likelihood estimate in this case.
Solution 2.21. Note that Y = 1 if and only if X₂ = 1 and X₃ = 1. Thus, P(Y = 1) = θ². Thus,
$$L(\theta, x, y) = P_\theta(X_1 = x)P_\theta(Y = y) = \begin{cases} (1-\theta^2)(1-\theta) & \text{if } x = 0 \text{ and } y = 0\\ \theta(1-\theta^2) & \text{if } x = 1 \text{ and } y = 0\\ \theta^2(1-\theta) & \text{if } x = 0 \text{ and } y = 1\\ \theta^3 & \text{if } x = 1 \text{ and } y = 1 \end{cases} = \begin{cases} 1 - \theta - \theta^2 + \theta^3 & \text{if } x = 0 \text{ and } y = 0\\ \theta - \theta^3 & \text{if } x = 1 \text{ and } y = 0\\ \theta^2 - \theta^3 & \text{if } x = 0 \text{ and } y = 1\\ \theta^3 & \text{if } x = 1 \text{ and } y = 1. \end{cases}$$
Note that 1 − θ − θ² + θ³ is maximized when θ = 0 and that θ³ is maximized when θ = 1. Further, note that θ − θ³ is maximized when θ = 1/√3 ≈ 0.57735 and that θ² − θ³ is maximized when θ = 2/3. (See Figure 4.1.) Thus,
A 1/V3{
08(x,y)= ~ / 3
0.8
06
0.4
0.2
00 02
if x = 0 and y = 0if x = 1 and y = 0if x = 0 and y = 1if x = 1 and y = 1.
06 08
Figure 4.1: L(8,:c,y) as a function of 8 for x = 0,1 and y = 0,1.
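The four maximizers can be confirmed by a direct grid search over θ in [0, 1]; a sketch:

```python
# Per-observation likelihoods from the case analysis above.
cases = {
    (0, 0): lambda t: 1 - t - t**2 + t**3,
    (1, 0): lambda t: t - t**3,
    (0, 1): lambda t: t**2 - t**3,
    (1, 1): lambda t: t**3,
}
expected = {(0, 0): 0.0, (1, 0): 3 ** -0.5, (0, 1): 2 / 3, (1, 1): 1.0}

grid = [k / 100000 for k in range(100001)]
mle = {xy: max(grid, key=f) for xy, f in cases.items()}
for xy in cases:
    assert abs(mle[xy] - expected[xy]) < 1e-4
```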
Solution 2.22. (A) Note that
ln f_{θ1,θ2}(x1, ..., xn) = Σ_{i=1}^{n} ln( θ1 x_i^{θ1−1} / θ2^{θ1} )
                          = −nθ1 ln θ2 + n ln θ1 + (θ1 − 1) Σ_{i=1}^{n} ln x_i.

Thus, to maximize ln f_{θ1,θ2}(x1, ..., xn) we want to make θ2 as small as possible. However, f_{θ1,θ2}(x1, ..., xn) = 0 if x_i > θ2 for any i. That is, θ2 cannot be less than any of the observations. Thus, a maximum likelihood estimate for θ2 is given by

    θ̂2 = max_{1≤i≤n} x_i.
(B) Note that

    (∂/∂θ1) ln f_{θ1,θ2}(x1, ..., xn) = n/θ1 − n ln θ2 + Σ_{i=1}^{n} ln x_i,

which is equal to zero when

    θ1 = n / ( n ln θ2 − Σ_{i=1}^{n} ln x_i ).

Since

    (∂²/∂θ1²) ln f_{θ1,θ2}(x1, ..., xn) = −n/θ1² < 0,

we conclude that this critical point corresponds to a maximum point. Thus, a maximum likelihood estimate for θ1 is given by

    θ̂1 = n / Σ_{i=1}^{n} ln( θ̂2 / x_i ).
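As a sketch, both estimates can be checked by simulation, assuming the density f(x; θ1, θ2) = θ1 x^{θ1−1}/θ2^{θ1} on (0, θ2] that is consistent with the log-likelihood above (the problem statement itself is not shown in this excerpt); inverse-CDF sampling uses x = θ2 u^{1/θ1}:

```python
import math
import random

# Hypothetical model consistent with the log-likelihood above:
#   f(x; th1, th2) = th1 * x**(th1 - 1) / th2**th1   on (0, th2].
# Inverse CDF: F(x) = (x/th2)**th1, so x = th2 * u**(1/th1).
random.seed(0)
th1_true, th2_true, n = 3.0, 2.0, 500
xs = [th2_true * random.random() ** (1 / th1_true) for _ in range(n)]

th2_hat = max(xs)                               # MLE for theta_2
sumlog = sum(math.log(x) for x in xs)
th1_hat = n / (n * math.log(th2_hat) - sumlog)  # MLE for theta_1

# th1_hat maximizes the profile log-likelihood in theta_1.
def log_lik(t1):
    return n * math.log(t1) - n * t1 * math.log(th2_hat) + (t1 - 1) * sumlog

grid = [0.01 + 0.001 * k for k in range(10000)]
best = max(grid, key=log_lik)
assert abs(best - th1_hat) < 5e-3
assert abs(th2_hat - th2_true) < 0.1
```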
Solution 2.23. If x > 0 then fθ(x) is a strictly increasing function of θ and is thus maximized when θ = 1. If x < 0 then fθ(x) is a strictly decreasing function of θ and is thus maximized when θ = −1. If x = 0 then fθ(x) is a constant function of θ and is thus maximized for any choice of θ. Thus a maximum likelihood estimate of θ is given by

    T(x) = 1 if x ≥ 0,   −1 if x < 0.
Note that

    Eθ[T(X)] = ∫_{−1}^{1} T(x) fθ(x) dx
             = ∫_{−1}^{0} (−1) (1 + θx)/2 dx + ∫_{0}^{1} (1 + θx)/2 dx
             = −[ x/2 + θx²/4 ]_{−1}^{0} + [ x/2 + θx²/4 ]_{0}^{1}
             = (−1/2 + θ/4) + (1/2 + θ/4)
             = θ/2.

Thus, we see that T is not unbiased. The mean square error of T is given by

    Eθ[(T(X) − θ)²] = Eθ[T²(X)] − 2θ Eθ[T(X)] + θ²
                    = 1 − 2θ(θ/2) + θ²
                    = 1.

However, the trivial estimator S(X) = 0 has a mean square error given by

    Eθ[(S(X) − θ)²] = θ²,

which is less than or equal to 1 for all θ! Thus, we conclude that T is not admissible.
Solution 2.24. Note that

    ln L(θ, x1, ..., xn) = −n ln √(2π) − n ln θ − (1/(2θ²)) Σ_{i=1}^{n} (x_i − θ)²,
and hence that

    (∂/∂θ) ln L = (∂/∂θ) ( −n ln √(2π) − n ln θ − (1/(2θ²)) Σ_{i=1}^{n} (x_i − θ)² )
                = −n/θ + (1/θ²) Σ_{i=1}^{n} (x_i − θ) + (1/θ³) Σ_{i=1}^{n} (x_i − θ)²
                = −n/θ + (1/θ²) ( (Σ x_i) − nθ ) + (1/θ³) ( (Σ x_i²) − 2θ (Σ x_i) + nθ² )
                = −n/θ − (1/θ²) Σ x_i + (1/θ³) Σ x_i²,

which is equal to zero when

    nθ² + θ (Σ x_i) − (Σ x_i²) = 0.

Solving for θ and selecting the positive root yields a maximum likelihood candidate given by

    θ̂ = ( −(Σ x_i) + √( (Σ x_i)² + 4n (Σ x_i²) ) ) / (2n).
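Assuming the model implied by the log-likelihood above (X_i normal with mean θ and variance θ², θ > 0), the positive root can be checked against a grid search; a sketch:

```python
import math

xs = [0.8, 1.3, 2.1, 0.5, 1.7]  # illustrative (hypothetical) data
n = len(xs)
s1 = sum(xs)
s2 = sum(x * x for x in xs)

# Positive root of n*theta**2 + theta*s1 - s2 = 0.
theta_hat = (-s1 + math.sqrt(s1 * s1 + 4 * n * s2)) / (2 * n)

def log_lik(t):  # log-likelihood up to additive constants
    return -n * math.log(t) - sum((x - t) ** 2 for x in xs) / (2 * t * t)

grid = [0.01 + 0.0005 * k for k in range(20000)]
best = max(grid, key=log_lik)
assert abs(best - theta_hat) < 1e-2
```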
Solution 2.25. Note that

    L(θ, x1, ..., xn) = ∏_{i=1}^{n} exp(−(x_i − θ)) I_{[θ,∞)}(x_i),

which equals zero if x_i < θ for any value of i. If x_i ≥ θ for all i then

    L(θ, x1, ..., xn) = e^{nθ} ∏_{i=1}^{n} e^{−x_i},

which is maximized when θ is as large as possible. Thus, a maximum likelihood estimator for θ is given by θ̂(x1, ..., xn) = min_{1≤i≤n} x_i. Further, for x > θ,

    P(θ̂ ≤ x) = 1 − P(X1 > x, ..., Xn > x)
              = 1 − P(X1 > x) ⋯ P(Xn > x)
              = 1 − ( ∫_x^∞ e^{−(t−θ)} dt )^n
              = 1 − e^{n(θ−x)},
which implies that θ̂ possesses a probability density function given by

    f_θ̂(x) = (d/dx) ( 1 − e^{n(θ−x)} ) = n e^{n(θ−x)}

for x > θ. Thus,

    Eθ[θ̂] = ∫_θ^∞ x f_θ̂(x) dx = ∫_θ^∞ x n e^{n(θ−x)} dx = θ + 1/n,

from which it is again clear that θ̂ is not unbiased.
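The bias Eθ[θ̂] = θ + 1/n is easy to check by Monte Carlo; a sketch:

```python
import random

random.seed(1)
theta, n, trials = 2.0, 5, 200000
total = 0.0
for _ in range(trials):
    # X_i = theta + Exp(1) has density exp(-(x - theta)) on [theta, inf).
    total += min(theta + random.expovariate(1.0) for _ in range(n))
mean_min = total / trials

# E[min X_i] = theta + 1/n.
assert abs(mean_min - (theta + 1 / n)) < 0.01
```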
4.4. Minimum Mean Square Estimation
Solution 2.26. We choose a so that E[(X(t + k) − aX(t))X(t)] = 0, which implies that E[X(t + k)X(t)] = aE[X²(t)], or that R(k) = aR(0). Solving, we see that a = R(k)/R(0). Thus, we estimate X(t + k) via R(k)X(t)/R(0).
Solution 2.27. We choose α and β so that they satisfy
    E[(X(t) − αX(0) − βX(T))X(0)] = 0
    E[(X(t) − αX(0) − βX(T))X(T)] = 0,

which become

    E[X(t)X(0)] = αE[X²(0)] + βE[X(T)X(0)]
    E[X(t)X(T)] = αE[X(0)X(T)] + βE[X²(T)],

or

    R(t) = αR(0) + βR(T)
    R(T − t) = αR(T) + βR(0).

Solving, we obtain

    α = ( R(t)R(0) − R(T)R(T − t) ) / ( R²(0) − R²(T) )
    β = ( R(T − t)R(0) − R(t)R(T) ) / ( R²(0) − R²(T) ),
and our estimate of X(t) is αX(0) + βX(T). If t = T/2 then

    α = β = R(T/2) / ( R(0) + R(T) ).

Solution 2.28. We choose α and β so that they satisfy
    E[ ( ∫_0^T X(t) dt − αX(0) − βX(T) ) X(0) ] = 0
    E[ ( ∫_0^T X(t) dt − αX(0) − βX(T) ) X(T) ] = 0,

which become

    E[ ∫_0^T X(t)X(0) dt ] = αE[X²(0)] + βE[X(T)X(0)]
    E[ ∫_0^T X(t)X(T) dt ] = αE[X(0)X(T)] + βE[X²(T)],

or

    ∫_0^T R(t) dt = αR(0) + βR(T)
    ∫_0^T R(T − t) dt = αR(T) + βR(0).

Solving, and noticing (via the change of variable s = T − t) that

    ∫_0^T R(T − t) dt = ∫_T^0 R(s)(−ds) = ∫_0^T R(t) dt,

we find that

    α = β = ( ∫_0^T R(t) dt ) / ( R(0) + R(T) ),
and our estimate is αX(0) + βX(T).
Solution 2.29. We choose α1 and α2 so that they satisfy

    E[(X(t) − α1X(t − 1) − α2X(t − 2))X(t − 1)] = 0
    E[(X(t) − α1X(t − 1) − α2X(t − 2))X(t − 2)] = 0,

which become

    E[X(t)X(t − 1)] = α1E[X²(t − 1)] + α2E[X(t − 1)X(t − 2)]
    E[X(t)X(t − 2)] = α1E[X(t − 1)X(t − 2)] + α2E[X²(t − 2)],

or

    R(1) = α1R(0) + α2R(1)
    R(2) = α1R(1) + α2R(0).

Solving, we see that α1 = e^{−1} and α2 = 0, and we estimate X(t) via e^{−1}X(t − 1). (Note that X(t) is a Markov process and hence, as expected, our estimate depends only on the most recent observation.)
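Assuming the covariance R(k) = e^{−|k|} suggested by the Markov remark (an assumption; the problem statement is not shown here), solving the two normal equations numerically recovers α1 = e^{−1} and α2 = 0:

```python
import math

R = lambda k: math.exp(-abs(k))  # assumed covariance function

# Normal equations:
#   R(1) = a1*R(0) + a2*R(1)
#   R(2) = a1*R(1) + a2*R(0)
a, b, c, d = R(0), R(1), R(1), R(0)
r1, r2 = R(1), R(2)
det = a * d - b * c
alpha1 = (r1 * d - b * r2) / det
alpha2 = (a * r2 - c * r1) / det

assert abs(alpha1 - math.exp(-1)) < 1e-12
assert abs(alpha2) < 1e-12
```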
Solution 2.30. Consider the subspace M = span(e^{−t}, te^{−t}) of L2[0, ∞). We seek the point ĥ in M that is closest to h, i.e. such that h − ĥ ⊥ M. Thus, we must have

    ∫_0^∞ (h(t) − ĥ(t)) e^{−t} dt = 0
    ∫_0^∞ (h(t) − ĥ(t)) t e^{−t} dt = 0,

where

    ĥ(t) = α1 e^{−t} + α2 t e^{−t}.

Substituting for ĥ we obtain

    ∫_0^∞ h(t) e^{−t} dt = α1 ∫_0^∞ e^{−2t} dt + α2 ∫_0^∞ t e^{−2t} dt
    ∫_0^∞ h(t) t e^{−t} dt = α1 ∫_0^∞ t e^{−2t} dt + α2 ∫_0^∞ t² e^{−2t} dt,

where we note that

    ∫_0^∞ e^{−2t} dt = 1/2,   ∫_0^∞ t e^{−2t} dt = 1/4,   ∫_0^∞ t² e^{−2t} dt = 1/4,

and, for the h of this problem,

    ∫_0^∞ h(t) e^{−t} dt = ∫_0^1 e^{−t} dt = 1 − 1/e,   ∫_0^∞ h(t) t e^{−t} dt = ∫_0^1 t e^{−t} dt = 1 − 2/e.

Thus, we obtain the system of equations given by

    α1/2 + α2/4 = 1 − 1/e
    α1/4 + α2/4 = 1 − 2/e,

and solving we see that

    α1 = 4/e   and   α2 = 4 − 12/e.

Solution 2.31. We estimate X³ via aX² + bX + c. We choose a, b, and c so that
    E[(X³ − (aX² + bX + c))X²] = 0
    E[(X³ − (aX² + bX + c))X] = 0
    E[X³ − (aX² + bX + c)] = 0.

Note that E[X^k] = 1/(k + 1) for each k ≥ 0.
Thus our system of equations simplifies to

    (1/5)a + (1/4)b + (1/3)c = 1/6
    (1/4)a + (1/3)b + (1/2)c = 1/5
    (1/3)a + (1/2)b + c = 1/4.

Solving we find that a = 3/2, b = −3/5, and c = 1/20. Thus, we estimate X³ via

    (3/2)X² − (3/5)X + 1/20.

The mean square error of our estimate is given by

    E[ ( X³ − ( (3/2)X² − (3/5)X + 1/20 ) )² ] = 1/2800 ≈ 3.5714 × 10⁻⁴.
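The system and the mean square error can be verified exactly with rational arithmetic, using E[X^k] = 1/(k + 1); a sketch:

```python
from fractions import Fraction as F

m = lambda k: F(1, k + 1)  # E[X**k] for X uniform on [0, 1]

# Normal equations E[(X^3 - aX^2 - bX - c) X^j] = 0 for j = 2, 1, 0,
# stored as augmented rows [coef_a, coef_b, coef_c, rhs].
M = [[m(j + 2), m(j + 1), m(j), m(j + 3)] for j in (2, 1, 0)]

# Gauss-Jordan elimination with exact rationals.
for i in range(3):
    piv = M[i][i]
    M[i] = [v / piv for v in M[i]]
    for k in range(3):
        if k != i:
            M[k] = [vk - M[k][i] * vi for vk, vi in zip(M[k], M[i])]
a, b, c = M[0][3], M[1][3], M[2][3]
assert (a, b, c) == (F(3, 2), F(-3, 5), F(1, 20))

# MSE = E[(X^3 - aX^2 - bX - c)^2], expanded in the moments of X.
coeffs = [-c, -b, -a, F(1)]  # c0 + c1*x + c2*x^2 + c3*x^3
mse = sum(ci * cj * m(i + j)
          for i, ci in enumerate(coeffs)
          for j, cj in enumerate(coeffs))
assert mse == F(1, 2800)
```

The residual here is exactly the monic shifted Legendre polynomial of degree 3 on [0, 1], which is why the error is so small.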
4.5. Hilbert Spaces
Solution 2.32. (A) Clearly ‖g‖ ≥ 0 for all g. Further, if g = 0 then ‖g‖ = 0, and if ‖g‖ = 0 then |g(t)| = 0 for all t ∈ [0, 1], i.e. g = 0. In addition,

    ‖ag‖ = sup{ |ag(t)| : t ∈ [0, 1] } = |a| sup{ |g(t)| : t ∈ [0, 1] } = |a| ‖g‖.

Finally, ‖g + h‖ ≤ ‖g‖ + ‖h‖ since |g(t) + h(t)| ≤ |g(t)| + |h(t)| for all t ∈ [0, 1].
(B) Let f and g be points in K and let 0 < λ < 1. Note that

    ∫_0^1 ( (1 − λ)f(t) + λg(t) ) dt = (1 − λ) ∫_0^1 f(t) dt + λ ∫_0^1 g(t) dt = (1 − λ) + λ = 1.

Thus, since (1 − λ)f + λg ∈ K, we see that K is convex.
(C) Let g ∈ K. Since g(0) = 0 and g is continuous, it follows that ‖g‖ > 1 since

    ∫_0^1 g(t) dt = 1.

Clearly, however, if ε > 0 then there exists some f ∈ K such that ‖f‖ < 1 + ε.
Solution 2.33. (A) PM(h) is the point in M that is nearest to h with respect to the norm induced by the inner product on H. If x ∈ M then clearly PM(x) = x. Since PM(h) ∈ M for any h ∈ H, it follows that PM(PM(h)) = PM(h) for all h ∈ H.
(B) Let x and y be points in H. The Hilbert Space Projection Theorem implies that y = PM(y) + QM(y) where PM(y) ∈ M and QM(y) ∈ M⊥. Note that
    (PM(x), y) = (PM(x), PM(y) + QM(y))
               = (PM(x), PM(y)) + (PM(x), QM(y))
               = (PM(x), PM(y))   since PM(x) ∈ M and QM(y) ∈ M⊥
               = (PM(x), PM(y)) + (QM(x), PM(y))   since PM(y) ∈ M and QM(x) ∈ M⊥
               = (PM(x) + QM(x), PM(y))
               = (x, PM(y)).

If M ⊥ N then

    (PM(x), PN(y)) = 0   since PN(y) ∈ N and PM(x) ∈ M,

while

    (PM(x), PN(y)) = (y, PN(PM(x)))   via the previous result.

That is,

    (y, PN(PM(x))) = 0

for all x and y in H, which implies that PN(PM(x)) = 0 for all x ∈ H. Similarly, (x, PM(PN(y))) = 0 for all x and y in H, which implies that PM(PN(y)) = 0 for all y in H.
Now assume that PM(PN(x)) = PN(PM(x)) = 0 for all x in H. Let m ∈ M and n ∈ N. Then

    (m, n) = (PM(m) + QM(m), PN(n) + QN(n))
           = (PM(m), PN(n))   since m ∈ M and n ∈ N
           = (m, PM(PN(n)))   using the previous result
           = (m, 0)
           = 0.
Thus, M ⊥ N.
(C) Note that

    PM(PN(x)) = 0 for all x ∈ H ⇒ (y, PM(PN(x))) = 0 for all x, y ∈ H
                                ⇒ (PM(y), PN(x)) = 0 for all x, y ∈ H
                                ⇒ (PN(PM(y)), x) = 0 for all x, y ∈ H
                                ⇒ PN(PM(y)) = 0 for all y ∈ H.
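A finite-dimensional sketch of the equivalence in parts (B) and (C): for one-dimensional subspaces of R³, the composed projections vanish exactly when the spanning vectors are orthogonal (the vectors below are illustrative):

```python
def proj(u, x):
    # Orthogonal projection of x onto span{u} in R^3.
    s = sum(ui * xi for ui, xi in zip(u, x)) / sum(ui * ui for ui in u)
    return [s * ui for ui in u]

e1 = [1.0, 0.0, 0.0]       # M = span{e1}
e2 = [0.0, 1.0, 0.0]       # N = span{e2}, orthogonal to M
tilted = [1.0, 1.0, 0.0]   # span{tilted} is not orthogonal to M

x = [3.0, -2.0, 5.0]
# Orthogonal subspaces: the composed projection is zero.
assert all(abs(v) < 1e-12 for v in proj(e2, proj(e1, x)))
# Non-orthogonal subspaces: it need not be.
assert any(abs(v) > 1e-12 for v in proj(tilted, proj(e1, x)))
```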
Solution 2.34. To begin, note that

    E[Y1Y2] = E[α2,1 + X] = α2,1 + E[X] = α2,1   since E[X] = 0,

which is equal to zero if α2,1 = 0. Further,

    E[Y1Y3] = E[α3,1 + α3,2X + X²] = α3,1 + α3,2E[X] + E[X²] = α3,1 + 1   since E[X²] = 1,

which is equal to zero if α3,1 = −1. In addition,

    E[Y2Y3] = E[X(−1 + α3,2X + X²)] = E[X³] − E[X] + α3,2E[X²] = α3,2   since E[X³] = 0,

which is equal to zero if α3,2 = 0. Further,

    E[XY4] = α4,1E[X] + α4,2E[X²] + α4,3E[X³] + E[X⁴] = α4,2 + 3   since E[X⁴] = 3,

which is equal to zero if α4,2 = −3. Finally,

    E[Y1Y4] = E[α4,1 − 3X + α4,3X² + X³] = α4,1 + α4,3

and

    E[Y3Y4] = E[(−1 + X²)(α4,1 − 3X + α4,3X² + X³)]
            = E[α4,1X² − 3X³ + α4,3X⁴ + X⁵ − α4,1 + 3X − α4,3X² − X³]
            = α4,1 + 3α4,3 − α4,1 − α4,3
            = 2α4,3,
which are both equal to zero if α4,3 = α4,1 = 0. Combining these results, we obtain

    Y1 = 1
    Y2 = X
    Y3 = X² − 1
    Y4 = X³ − 3X.
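The orthogonality of Y1, ..., Y4 can be verified exactly from the moments of a standard Gaussian (E[X^n] = (n − 1)!! for even n and 0 for odd n); a sketch:

```python
def moment(n):
    # E[X**n] for X standard Gaussian: 0 for odd n, (n-1)!! for even n.
    if n % 2 == 1:
        return 0
    m = 1
    for k in range(n - 1, 0, -2):
        m *= k
    return m

# Y1, Y2, Y3, Y4 as coefficient lists, lowest degree first.
Y = [[1], [0, 1], [-1, 0, 1], [0, -3, 0, 1]]

def expect_product(p, q):
    # E[p(X) q(X)] expanded term by term in the Gaussian moments.
    return sum(pi * qj * moment(i + j)
               for i, pi in enumerate(p)
               for j, qj in enumerate(q))

for i in range(4):
    for j in range(4):
        if i != j:
            assert expect_product(Y[i], Y[j]) == 0
```

These are the (probabilists') Hermite polynomials, which are orthogonal under the standard Gaussian weight.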
To obtain the best estimate of Y4 that is of the form α1Y1 + α2Y2 + α3Y3, we seek values of α1, α2, and α3 such that

    E[(Y4 − (α1Y1 + α2Y2 + α3Y3))Y_i] = 0

for i = 1, 2, 3. Since E[Y_iY_j] = 0 when i ≠ j, it follows that α1 = α2 = α3 = 0. That is, the best linear estimate of Y4 in terms of Y1, Y2, and Y3 is zero! The best nonlinear estimate of Y4 in terms of Y1, Y2, and Y3, however, is Y2³ − 3Y2, which has a zero mean square error.
Solution 2.35. (A) To begin, note that if x ∈ H and y ∈ M⊥ with ‖y‖ = 1, then
    |(x, y)| = |(PM(x) + QM(x), y)|
             = |(PM(x), y) + (QM(x), y)|
             = |(QM(x), y)|   since y ∈ M⊥
             ≤ ‖QM(x)‖ ‖y‖
             = ‖QM(x)‖   since ‖y‖ = 1.

Further, note that equality occurs if y = QM(x)/‖QM(x)‖. Thus, since ‖QM(x)‖ is an attainable upper bound, we conclude that

    max_{y ∈ M⊥, ‖y‖ = 1} |(x, y)| = ‖QM(x)‖.

(B) Let H denote the Hilbert space consisting of all square integrable functions on [−1, 1] with

    (f, g) = ∫_{−1}^{1} f(x) g(x) dx.

Let M denote the subspace consisting of all polynomials on [−1, 1] of degree not greater than 2. Since M is finite dimensional, it follows that M is closed. Note that if h ∈ M⊥, then (1, h) = (x, h) = (x², h) = 0. Our goal is to find

    max_{g ∈ M⊥, ‖g‖ = 1} (x³, g).
The Hilbert Space Projection Theorem lets us express x³ as PM(x³) + QM(x³) where PM(x³) ∈ M and QM(x³) ∈ M⊥. Note that since PM(x³) ∈ M, it follows that PM(x³) = a + bx + cx², where a, b, and c must satisfy

    ∫_{−1}^{1} (x³ − a − bx − cx²) dx = 0
    ∫_{−1}^{1} (x³ − a − bx − cx²) x dx = 0
    ∫_{−1}^{1} (x³ − a − bx − cx²) x² dx = 0.

Simplifying we obtain

    ∫_{−1}^{1} x³ dx − a ∫_{−1}^{1} dx − b ∫_{−1}^{1} x dx − c ∫_{−1}^{1} x² dx = 0
    ∫_{−1}^{1} x⁴ dx − a ∫_{−1}^{1} x dx − b ∫_{−1}^{1} x² dx − c ∫_{−1}^{1} x³ dx = 0
    ∫_{−1}^{1} x⁵ dx − a ∫_{−1}^{1} x² dx − b ∫_{−1}^{1} x³ dx − c ∫_{−1}^{1} x⁴ dx = 0,

which becomes

    −2a − (2/3)c = 0
    2/5 − (2/3)b = 0
    −(2/3)a − (2/5)c = 0.

Solving we find that a = 0, b = 3/5, and c = 0. Thus,

    PM(x³) = (3/5)x

and

    QM(x³) = x³ − PM(x³) = x³ − (3/5)x.
From the result in Part (A), it follows that

    max_{g ∈ M⊥, ‖g‖ = 1} (x³, g) = ‖ x³ − (3/5)x ‖ = √(8/175) = 2√14/35 ≈ 0.21381.
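This value can be confirmed numerically: the residual x³ − (3/5)x should be orthogonal to 1, x, and x², and its norm should equal 2√14/35; a sketch using a simple midpoint rule:

```python
import math

def integrate(f, a=-1.0, b=1.0, n=20000):
    # Midpoint rule, accurate enough for low-degree polynomials.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

q = lambda x: x**3 - 0.6 * x  # x^3 - P_M(x^3)

# The residual is orthogonal to 1, x, and x^2 ...
for p in (lambda x: 1.0, lambda x: x, lambda x: x * x):
    assert abs(integrate(lambda x, p=p: q(x) * p(x))) < 1e-6
# ... and its norm matches the closed form sqrt(8/175) = 2*sqrt(14)/35.
norm = math.sqrt(integrate(lambda x: q(x) ** 2))
assert abs(norm - 2 * math.sqrt(14) / 35) < 1e-6
```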