Steepest Descent Adaptive Filter


THE STEEPEST DESCENT ADAPTIVE FILTER

In designing an FIR adaptive filter, the goal is to find the vector $\mathbf{w}_n$ at time $n$ that minimizes the quadratic function

$$\xi(n) = E\{|e(n)|^2\}$$

Although the vector that minimizes $\xi(n)$ may be found by setting the derivatives of $\xi(n)$ with respect to $w^*(k)$ equal to zero, another approach is to search for the solution using the method of steepest descent.

The method of steepest descent is an iterative procedure that has been used to find extrema of nonlinear functions.

BASIC IDEA

Let $\mathbf{w}_n$ be an estimate of the vector that minimizes the mean-square error $\xi(n)$ at time $n$. At time $n+1$ a new estimate is formed by adding a correction to $\mathbf{w}_n$ that is designed to bring $\mathbf{w}_n$ closer to the desired solution. The correction involves taking a step of size $\mu$ in the direction of maximum descent down the quadratic error surface.

For example, shown in Fig. is a three-dimensional plot of a quadratic function of two real-valued coefficients, $w(0)$ and $w(1)$, of the form

$$\xi = a_0 + a_1\, w(0) + a_2\, w(1) + a_3\left[w^2(0) + w^2(1)\right] + a_4\, w(0)\, w(1)$$

Note that the contours of constant error, when projected onto the $w(0)$–$w(1)$ plane, form a set of concentric ellipses. The direction of steepest descent at any point in the plane is the direction that a marble would take if it were placed on the inside of this quadratic bowl.

Mathematically, this direction is given by the gradient, which is the vector of partial derivatives of $\xi$ with respect to the coefficients $w(k)$. For the function in the above equation, the gradient vector is


$$\nabla \xi = \begin{bmatrix} \dfrac{\partial \xi}{\partial w(0)} \\[6pt] \dfrac{\partial \xi}{\partial w(1)} \end{bmatrix} = \begin{bmatrix} a_1 + 2a_3\, w(0) + a_4\, w(1) \\ a_2 + 2a_3\, w(1) + a_4\, w(0) \end{bmatrix}$$

As shown in Fig., for any vector $\mathbf{w}$, the gradient is orthogonal to the line that is tangent to the contour of constant error at $\mathbf{w}$. Thus, the update equation for $\mathbf{w}_n$ is

$$\mathbf{w}_{n+1} = \mathbf{w}_n - \mu\, \nabla \xi(n)$$

The step size $\mu$ affects the rate at which the weight vector moves down the quadratic surface and must be a positive number.
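To make the iteration concrete, here is a minimal sketch of steepest descent on a two-coefficient quadratic bowl. The matrix $A$, vector $b$, and step size are illustrative values chosen here (not taken from the text's figure); they define a surface $\xi(\mathbf{w}) = \mathbf{w}^T A \mathbf{w} - 2\mathbf{b}^T \mathbf{w} + c$ with elliptical contours of constant error.

```python
import numpy as np

# Steepest descent on an illustrative quadratic bowl
#   xi(w) = w^T A w - 2 b^T w + c,  with gradient  2 A w - 2 b.
# A and b are assumed example values (A must be positive definite).
A = np.array([[6.0, 3.0],
              [3.0, 6.0]])
b = np.array([3.0, 2.0])
mu = 0.05                  # positive step size, small enough for stability

w = np.zeros(2)            # initial estimate w_0
for _ in range(200):
    grad = 2.0 * (A @ w) - 2.0 * b   # gradient at the current estimate
    w = w - mu * grad                # step of size mu in the negative gradient direction

print(w)                       # approaches the minimizer of the bowl
print(np.linalg.solve(A, b))   # closed-form minimizer A^{-1} b, for comparison
```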

THE STEEPEST DESCENT ALGORITHM

1. Initialize the steepest descent algorithm with an initial estimate, $\mathbf{w}_0$, of the optimum weight vector $\mathbf{w}$.

2. Evaluate the gradient of $\xi(n)$ at the current estimate, $\mathbf{w}_n$, of the optimum weight vector.

3. Update the estimate at time $n$ by adding a correction that is formed by taking a step of size $\mu$ in the negative gradient direction:

$$\mathbf{w}_{n+1} = \mathbf{w}_n - \mu\, \nabla \xi(n)$$

4. Go back to (2) and repeat the process.

Let us now evaluate the gradient vector $\nabla \xi(n)$. Assuming that $\mathbf{w}$ is complex, the gradient is the derivative of $E\{|e(n)|^2\}$ with respect to $\mathbf{w}^*$. With

$$\xi(n) = E\{|e(n)|^2\} = E\{e(n)\, e^*(n)\}$$

and

$$e(n) = d(n) - \mathbf{w}^T \mathbf{x}(n)$$

it follows that

$$\nabla \xi(n) = E\{e(n)\, \nabla e^*(n)\} = -E\{e(n)\, \mathbf{x}^*(n)\}$$

Thus, with a step size of $\mu$, the steepest descent algorithm becomes

$$\mathbf{w}_{n+1} = \mathbf{w}_n + \mu\, E\{e(n)\, \mathbf{x}^*(n)\}$$

To see how this steepest descent update equation for $\mathbf{w}_n$ performs, let us consider what happens in the case of a stationary process. If $x(n)$ and $d(n)$ are jointly WSS, then

$$E\{e(n)\, \mathbf{x}^*(n)\} = E\{d(n)\, \mathbf{x}^*(n)\} - E\{\mathbf{x}^*(n)\, \mathbf{x}^T(n)\}\, \mathbf{w}_n = \mathbf{r}_{dx} - \mathbf{R}_x\, \mathbf{w}_n$$

and the steepest descent algorithm becomes


$$\mathbf{w}_{n+1} = \mathbf{w}_n + \mu\, (\mathbf{r}_{dx} - \mathbf{R}_x\, \mathbf{w}_n)$$

Note that if $\mathbf{w}_n$ is the solution to the Wiener-Hopf equations, $\mathbf{R}_x\, \mathbf{w}_n = \mathbf{r}_{dx}$, then the correction term is zero and $\mathbf{w}_{n+1} = \mathbf{w}_n$ for all $n$.
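As a sketch of this update in the jointly WSS case, the loop below iterates $\mathbf{w}_{n+1} = \mathbf{w}_n + \mu(\mathbf{r}_{dx} - \mathbf{R}_x \mathbf{w}_n)$ directly on the ensemble statistics. The AR(1) autocorrelation model and the target weight vector are synthetic assumptions made here for illustration, not values from the text.

```python
import numpy as np

# Steepest descent on known ensemble statistics:
#   w_{n+1} = w_n + mu * (r_dx - R_x w_n)
# The correction vanishes exactly when R_x w = r_dx (the Wiener-Hopf solution).

p = 4                                    # filter order (assumed)
alpha = 0.8                              # assumed AR(1) correlation parameter
# Autocorrelation matrix r_x(k) = alpha**|k| of an assumed AR(1) input process
R = np.array([[alpha ** abs(i - j) for j in range(p)] for i in range(p)])
w_opt = np.array([1.0, -0.5, 0.25, -0.125])   # assumed Wiener solution
r_dx = R @ w_opt                         # cross-correlation consistent with w_opt

lam_max = np.linalg.eigvalsh(R).max()
mu = 1.0 / lam_max                       # safely inside 0 < mu < 2 / lam_max

w = np.zeros(p)                          # initial estimate w_0
for _ in range(500):
    w = w + mu * (r_dx - R @ w)          # steepest descent correction

print(np.max(np.abs(w - w_opt)))         # essentially zero: converged to w_opt
```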

PROPERTY 1

For jointly WSS processes $d(n)$ and $x(n)$, the steepest descent adaptive filter converges to the solution of the Wiener-Hopf equations, $\mathbf{R}_x\, \mathbf{w} = \mathbf{r}_{dx}$, if the step size satisfies the condition

$$0 < \mu < \frac{2}{\lambda_{\max}}$$

where $\lambda_{\max}$ is the maximum eigenvalue of the autocorrelation matrix $\mathbf{R}_x$.
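A quick numerical check of this bound, using a small assumed autocorrelation matrix: a step size just inside $2/\lambda_{\max}$ drives the correction term to zero, while one just outside makes the iteration diverge.

```python
import numpy as np

# Verify the stability condition 0 < mu < 2 / lambda_max on assumed statistics.
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # assumed autocorrelation matrix R_x
r_dx = np.array([1.0, 0.3])             # assumed cross-correlation vector r_dx
lam_max = np.linalg.eigvalsh(R).max()
mu_bound = 2.0 / lam_max

for mu in (0.9 * mu_bound, 1.1 * mu_bound):   # just inside / just outside the bound
    w = np.zeros(2)
    for _ in range(300):
        w = w + mu * (r_dx - R @ w)
    # Residual of the Wiener-Hopf equations: tiny inside the bound, huge outside.
    print(f"mu = {mu:.3f}: residual = {np.linalg.norm(r_dx - R @ w):.3e}")
```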