
  • University of Tehran, Faculty of Engineering, School of Mechanical Engineering

    Advanced Numerical Methods

    Gradient Methods In Optimization

    Prof. M. Raisee Dehkordi

    By Meysam Rezaei Barmi

    Fall 2005

  • Gradient Methods In Optimization

    - Steepest Descent Method

    - Conjugate Gradient Method

    - Generalized Reduced Gradient Method

  • Steepest Descent Method

    If we move along the gradient direction from any point in n-dimensional space, the function value increases at the fastest rate.

    Gradient of the function: \( \nabla f = \left( \partial f / \partial x_1, \ldots, \partial f / \partial x_n \right)^T \). The gradient vector represents the direction of steepest ascent, so the negative of the gradient vector denotes the direction of steepest descent.

  • Steepest Descent Algorithm

    1. Start with an initial trial point \( X_1 \); set \( i = 1 \).
    2. Find the search direction \( S_i = -\nabla f(X_i) \).
    3. Find \( \lambda_i^* \) to minimize \( f(X_i + \lambda S_i) \) and set \( X_{i+1} = X_i + \lambda_i^* S_i \).
    4. Is \( X_{i+1} \) optimum (\( \nabla f \approx 0 \))? Yes: stop. No: set \( i = i + 1 \) and return to step 2.
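    As a concrete illustration of the loop above, here is a minimal Python sketch. The quadratic test function, the backtracking line search (standing in for the exact one-dimensional minimization), and all names are illustrative assumptions; the slides give only the flowchart.

        import numpy as np

        def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
            """Minimize f from x0 by repeatedly stepping along -grad(f)."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                s = -grad(x)                 # search direction: negative gradient
                if np.linalg.norm(s) < tol:  # optimality check: gradient ~ 0
                    break
                # 1-D minimization of f(x + lam*s), here by simple backtracking
                lam = 1.0
                while f(x + lam * s) > f(x) - 1e-4 * lam * np.dot(s, s):
                    lam *= 0.5
                x = x + lam * s
            return x

        # Usage: minimize f(x1, x2) = x1^2 + 4*x2^2 (minimum at the origin)
        f = lambda x: x[0]**2 + 4.0 * x[1]**2
        grad = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
        print(steepest_descent(f, grad, [1.0, 1.0]))  # ~ [0, 0]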

  • Example

  • Conjugate Gradient Method

    The convergence characteristics of the Steepest Descent method can be greatly improved by modifying it into a Conjugate Gradient method. Any minimization method that makes use of conjugate directions is quadratically convergent, and such a method minimizes a quadratic function in n steps or fewer. Since every function is well approximated by a quadratic near its optimum point (via a Taylor series), the optimum point is found in a finite number of iterations when a quadratically convergent method is used.

  • Conjugate Gradient Algorithm

    1. Start with an initial trial point \( X_1 \); set \( i = 1 \).
    2. Find the first search direction \( S_1 = -\nabla f(X_1) \).
    3. Find \( \lambda_i^* \) to minimize \( f(X_i + \lambda S_i) \) and set \( X_{i+1} = X_i + \lambda_i^* S_i \).
    4. Is \( X_{i+1} \) optimum? Yes: stop. No: find the next search direction \( S_{i+1} = -\nabla f(X_{i+1}) + \beta_i S_i \), set \( i = i + 1 \), and return to step 3.
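    A minimal Python sketch of this loop follows. The slides do not name the direction-update formula, so the Fletcher-Reeves choice \( \beta_i = \|\nabla f(X_{i+1})\|^2 / \|\nabla f(X_i)\|^2 \) is assumed here, and the exact line search is again replaced by backtracking.

        import numpy as np

        def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=200):
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            s = -g                                  # start along steepest descent
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:         # optimality: gradient ~ 0
                    break
                lam = 1.0                           # backtracking line search
                while f(x + lam * s) > f(x) + 1e-4 * lam * np.dot(g, s):
                    lam *= 0.5
                x = x + lam * s
                g_new = grad(x)
                beta = np.dot(g_new, g_new) / np.dot(g, g)  # Fletcher-Reeves factor
                s = -g_new + beta * s               # next conjugate direction
                if np.dot(g_new, s) >= 0:           # restart if not a descent direction
                    s = -g_new
                g = g_new
            return x

        # Usage: the quadratic f = x1^2 + 4*x2^2 is minimized in about 2 iterations
        f = lambda x: x[0]**2 + 4.0 * x[1]**2
        grad = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
        print(fletcher_reeves(f, grad, [1.0, 1.0]))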

  • Example

  • We can use the Conjugate Gradient method to solve the linear equation system \( A X = b \) (with \( A \) symmetric positive definite) by minimizing \( f(X) = \tfrac{1}{2} X^T A X - b^T X \), because at the optimized point we have \( \nabla f(X) = A X - b = 0 \).
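    In the linear-system case the step length and the conjugation factor have closed forms, so no line search is needed. The following standard CG sketch (variable names illustrative) solves \( A X = b \) this way:

        import numpy as np

        def cg_solve(A, b, tol=1e-10):
            x = np.zeros_like(b, dtype=float)
            r = b - A @ x                # residual r = b - A x = -grad f(x)
            if np.linalg.norm(r) < tol:
                return x
            s = r.copy()                 # first search direction
            for _ in range(len(b)):      # at most n steps in exact arithmetic
                As = A @ s
                lam = (r @ r) / (s @ As)        # exact step length for a quadratic
                x = x + lam * s
                r_new = r - lam * As
                if np.linalg.norm(r_new) < tol:
                    break
                beta = (r_new @ r_new) / (r @ r)
                s = r_new + beta * s            # next A-conjugate direction
                r = r_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(cg_solve(A, b))            # matches np.linalg.solve(A, b)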

  • Generalized Reduced Gradient Method

    One of the popular search methods for optimizing constrained functions is the Generalized Reduced Gradient (GRG) method.

    n : number of variables
    m : number of constraints
    Cost function : \( f(x_1, \ldots, x_n) \)
    Constraints : \( g_j(x_1, \ldots, x_n) = 0, \; j = 1, \ldots, m \)
    n - m : number of decision variables

  • Assume n = 4 and m = 2. We want to optimize the cost function \( f(x_1, x_2, x_3, x_4) \) subject to \( g_1(x_1, x_2, x_3, x_4) = 0 \) and \( g_2(x_1, x_2, x_3, x_4) = 0 \). Choose \( x_3 \) and \( x_4 \) as the two decision variables; the constraints then determine the state variables \( x_1 \) and \( x_2 \).

  • Since the constraints must remain satisfied, their total differentials vanish:

    \( dg_1 = \frac{\partial g_1}{\partial x_1} dx_1 + \frac{\partial g_1}{\partial x_2} dx_2 + \frac{\partial g_1}{\partial x_3} dx_3 + \frac{\partial g_1}{\partial x_4} dx_4 = 0 \)

    \( dg_2 = \frac{\partial g_2}{\partial x_1} dx_1 + \frac{\partial g_2}{\partial x_2} dx_2 + \frac{\partial g_2}{\partial x_3} dx_3 + \frac{\partial g_2}{\partial x_4} dx_4 = 0 \)

    Therefore these two equations can be solved for \( dx_1 \) and \( dx_2 \) in terms of the decision-variable differentials \( dx_3 \) and \( dx_4 \).

  • Substituting for \( dx_1 \) and \( dx_2 \) from the previous part into the total differential \( df \), we have \( df \) expressed in terms of \( dx_3 \) and \( dx_4 \) alone.

  • Therefore we determine the Generalized Reduced Gradient \( G_R \), with one component \( G_{R,i} = df/dx_i \) for each decision variable \( x_i \). If \( G_{R,i} > 0 \), for minimization choose \( \Delta x_i < 0 \); if \( G_{R,i} < 0 \), for minimization choose \( \Delta x_i > 0 \).

  • Generalized Reduced Gradient Algorithm

    1. Choose the n - m decision variables and their step sizes \( \Delta x_i \).
    2. Initialize the decision variables.
    3. Move towards the constraints (adjust the state variables so the constraints are satisfied).
    4. Calculate \( G_R \) and step each decision variable against it.
    5. Move towards the constraints again.
    6. If \( f \) decreased, repeat from step 4; otherwise reduce the step sizes.
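    The following heavily simplified Python sketch runs this loop on a toy problem with n = 2, m = 1: one decision variable \( x_2 \), with the state variable \( x_1 \) recovered exactly from a linear constraint (standing in for the "move towards constraints" steps). The problem, the sign-based step rule, and the step-size halving are illustrative assumptions, not taken from the slides.

        import numpy as np

        # Toy problem: minimize f(x1, x2) = (x1 - 2)^2 + (x2 - 1)^2
        #              subject to g(x1, x2) = x1 + x2 - 1 = 0
        def grg_toy(x2=2.0, step=0.5, tol=1e-8):
            f = lambda x1, x2: (x1 - 2.0)**2 + (x2 - 1.0)**2
            while step > tol:
                x1 = 1.0 - x2                     # restore feasibility exactly
                # Reduced gradient df/dx2, using dx1 = -dx2 forced by the constraint
                g_r = 2.0 * (x2 - 1.0) - 2.0 * (x1 - 2.0)
                x2_new = x2 - step * np.sign(g_r)  # step against the reduced gradient
                x1_new = 1.0 - x2_new
                if f(x1_new, x2_new) < f(x1, x2):  # accept the move if f decreased...
                    x2 = x2_new
                else:
                    step *= 0.5                    # ...otherwise shrink the step size
            return 1.0 - x2, x2

        print(grg_toy())  # -> (1.0, 0.0), the minimum of f on the line x1 + x2 = 1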

  • Example

    Initial step size:
    Last step size: