Modern iterative methods

  • Modern iterative methods
    Basic iterative methods converge linearly; modern iterative methods converge faster.
    Krylov subspace methods:
    - Steepest descent method
    - Conjugate gradient (CG) method --- the most popular
    - Preconditioned CG (PCG) method
    - GMRES for nonsymmetric matrices
    Other methods (read yourself):
    - Chebyshev iterative method
    - Lanczos methods
    - Conjugate gradient normal residual (CGNR) method

  • Modern iterative methods
    Ideas: minimize the residual; project onto a Krylov subspace.
    Thm: If A is an n-by-n real symmetric positive definite matrix, then the linear system $Ax = b$ and the minimization problem
    $$\min_{x \in \mathbb{R}^n} \varphi(x), \qquad \varphi(x) := \tfrac{1}{2}\, x^T A x - b^T x,$$
    have the same solution. (Indeed $\nabla \varphi(x) = Ax - b$, so the unique stationary point of the strictly convex $\varphi$ is exactly the solution of $Ax = b$.)
    Proof: see details in class.
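
A quick numerical sanity check of this equivalence (not from the slides; the random SPD matrix and the helper name `phi` are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)        # symmetric positive definite by construction
b = rng.standard_normal(4)

phi = lambda x: 0.5 * x @ A @ x - b @ x   # the quadratic functional

x_star = np.linalg.solve(A, b)            # solution of the linear system
# phi at the solver's answer is never beaten by nearby perturbations
perturbed = min(phi(x_star + 0.1 * rng.standard_normal(4)) for _ in range(1000))
print(phi(x_star) <= perturbed)           # True
```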

  • Steepest descent method
    Suppose we have an approximation $x_m$. Choose the direction as the negative gradient of $\varphi$:
    $$d_m = -\nabla \varphi(x_m) = b - A x_m = r_m \quad \text{(the residual)}.$$
    If $r_m = 0$, then $x_m$ is the exact solution and we stop.
    Else, choose the step size $\alpha_m$ to minimize $\varphi(x_m + \alpha\, r_m)$ over $\alpha$.

  • Steepest descent method
    Computation: since $\varphi(x_m + \alpha\, r_m)$ is a quadratic in $\alpha$, setting its derivative to zero gives
    $$\alpha_m = \frac{r_m^T r_m}{r_m^T A\, r_m}.$$
    Choose the new approximation as $x_{m+1} = x_m + \alpha_m r_m$.

  • Algorithm: Steepest descent method; see the sketch below.
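
A minimal Python sketch of the algorithm, following the update $x_{m+1} = x_m + \alpha_m r_m$ derived above; the function name and the stopping parameters `tol`, `max_iter` are our own choices:

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                      # residual = negative gradient of phi
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:   # r = 0 means Ax = b
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)     # exact line minimization along r
        x += alpha * r
        r -= alpha * Ar                # cheap residual update, one matvec per step
    return x
```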

  • Theory
    Suppose A is symmetric positive definite.
    Define the A-inner product: $(x, y)_A := x^T A\, y$.
    Define the A-norm: $\|x\|_A := \sqrt{x^T A\, x}$.
    The steepest descent method minimizes $\varphi$ exactly along each residual direction; since $\varphi(x) - \varphi(x^*) = \tfrac{1}{2}\|x - x^*\|_A^2$, each step decreases the A-norm of the error.

  • Theory
    Thm: For the steepest descent method, we have the one-step error bound
    $$\|x_{m+1} - x^*\|_A \le \frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}\, \|x_m - x^*\|_A,$$
    where $\lambda_{\max}, \lambda_{\min}$ are the extreme eigenvalues of A.
    Proof: Exercise.

  • Theory
    Rewrite the steepest descent method as $x_{m+1} = x_m + \alpha_m r_m$ with $r_m = b - A x_m$.
    Let the errors be $e_m := x^* - x_m$, so that $r_m = A e_m$.
    Lemma: For the method, we have
    $$e_{m+1} = (I - \alpha_m A)\, e_m .$$
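
The computation behind the lemma is one line, using $r_m = A e_m$:

```latex
e_{m+1} = x^* - x_{m+1} = x^* - (x_m + \alpha_m r_m)
        = e_m - \alpha_m A e_m = (I - \alpha_m A)\, e_m .
```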

  • Theory
    Thm: For the steepest descent method, we have the convergence estimate
    $$\|x_m - x^*\|_A \le \left(\frac{\kappa_2(A) - 1}{\kappa_2(A) + 1}\right)^m \|x_0 - x^*\|_A, \qquad \kappa_2(A) = \frac{\lambda_{\max}}{\lambda_{\min}}.$$
    Proof: See details in class (or as an exercise).

  • Steepest descent method
    Performance:
    - Converges globally, for any initial data.
    - If $\kappa_2(A) = O(1)$, it converges very fast.
    - If $\kappa_2(A) \gg 1$, it converges very slowly!
    Geometric interpretation:
    - The contour plots of $\varphi$ are long, flat ellipses!
    - The local best direction (the steepest direction) is not necessarily a good global direction.
    Computational experience shows that the method suffers a decreasing convergence rate after a few iteration steps, because the search directions become nearly linearly dependent!

  • Conjugate gradient (CG) method
    Since A is symmetric positive definite, the A-norm $\|x\|_A = \sqrt{x^T A\, x}$ is a genuine norm.
    In the CG method, the direction vectors are chosen to be A-orthogonal (and are called conjugate vectors), i.e.
    $$d_i^T A\, d_j = 0 \quad \text{for } i \ne j.$$

  • CG method
    In addition, we take the new direction vector as a linear combination of the old direction vector and the descent direction (the residual):
    $$d_{m+1} = r_{m+1} + \beta_m d_m.$$
    By the A-orthogonality assumption $d_{m+1}^T A\, d_m = 0$, we get
    $$\beta_m = -\frac{r_{m+1}^T A\, d_m}{d_m^T A\, d_m}.$$

  • Algorithm: CG method; see the sketch below.
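
A minimal CG sketch with one matrix-vector product per iteration; it uses the standard equivalent form $\beta_m = (r_{m+1}^T r_{m+1})/(r_m^T r_m)$ rather than the ratio derived above (the two coincide in exact arithmetic). Names are our own:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    d = r.copy()                          # first direction: steepest descent direction
    rs = r @ r
    for _ in range(max_iter or len(b)):   # exact in at most n steps, in theory
        if np.sqrt(rs) <= tol:
            break
        Ad = A @ d
        alpha = rs / (d @ Ad)             # line minimization along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        d = r + (rs_new / rs) * d         # new direction, A-orthogonal to the old one
        rs = rs_new
    return x
```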

  • An example
    A small symmetric positive definite system solved with CG: starting from an initial guess $x_0$, the approximate solutions $x_1, x_2, \ldots$ reach the exact solution after finitely many steps.

  • CG method
    In the CG method, the direction vectors $d_0, d_1, \ldots$ are A-orthogonal!
    Define the linear space (the Krylov subspace) as
    $$K_m(A, r_0) := \mathrm{span}\{r_0, A r_0, \ldots, A^{m-1} r_0\}.$$
    Lemma: In the CG method, for m = 0, 1, ..., we have
    $$\mathrm{span}\{d_0, \ldots, d_m\} = \mathrm{span}\{r_0, \ldots, r_m\} = K_{m+1}(A, r_0).$$
    Proof: See details in class or as an exercise.

  • CG method
    In the CG method, the new residual $r_{m+1}$ is orthogonal to $K_{m+1}(A, r_0)$, or equivalently to all previous residuals and directions.
    Lemma: In the CG method, we have
    $$r_{m+1}^T r_j = 0 \quad \text{and} \quad r_{m+1}^T d_j = 0, \qquad j = 0, \ldots, m.$$
    Proof: See details in class or as an exercise.
    Thm: Error estimate for the CG method:
    $$\|x_m - x^*\|_A \le 2 \left(\frac{\sqrt{\kappa_2(A)} - 1}{\sqrt{\kappa_2(A)} + 1}\right)^m \|x_0 - x^*\|_A.$$

  • CG method
    Computational cost:
    - At each iteration, 2 matrix-vector multiplications; this can be further reduced to 1 matrix-vector multiplication.
    - In at most n steps, we get the exact solution (in exact arithmetic)!
    Convergence rate depends on the condition number:
    - $\kappa_2(A) = O(1)$: converges very fast!
    - $\kappa_2(A) \gg 1$: converges slowly, but can be accelerated by preconditioning!

  • Preconditioning
    Idea: replace the system $Ax = b$ by the preconditioned system $C^{-1}A\, x = C^{-1} b$, where
    - C is symmetric positive definite;
    - $C^{-1}A$ is well-conditioned, i.e. $\kappa_2(C^{-1}A)$ is small;
    - systems of the form $Cz = r$ can be easily solved.
    Conditions for choosing the preconditioning matrix: make $\kappa_2(C^{-1}A)$ as small as possible while keeping $Cz = r$ easy to compute. There is a trade-off between these two goals.

  • Algorithm: PCG method; see the sketch below.
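
A minimal PCG sketch. The preconditioner enters only through a solve routine $z = C^{-1} r$; here `M_solve` defaults to the diagonal (Jacobi) choice listed on the next slide. All names are our own:

```python
import numpy as np

def pcg(A, b, M_solve=None, x0=None, tol=1e-10, max_iter=None):
    if M_solve is None:
        d_inv = 1.0 / np.diag(A)            # Jacobi preconditioner: C = diag(A)
        M_solve = lambda r: d_inv * r
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    z = M_solve(r)                          # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter or len(b)):
        if np.linalg.norm(r) <= tol:
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p           # conjugate directions for the preconditioned system
        rz = rz_new
    return x
```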

  • Preconditioning
    Ways to choose the matrix C (read yourself):
    - Diagonal part of A
    - Tridiagonal part of A
    - m-step Jacobi preconditioner
    - Symmetric Gauss-Seidel preconditioner
    - SSOR preconditioner
    - Incomplete Cholesky decomposition
    - Incomplete block preconditioning
    - Preconditioning based on domain decomposition

  • Extension of the CG method to nonsymmetric matrices
    Biconjugate gradient (BiCG) method: solve $Ax = b$ and a dual system $A^T \tilde{x} = \tilde{b}$ simultaneously.
    - Works well when A is positive definite but not symmetric.
    - If A is symmetric, BiCG reduces to CG.
    Conjugate gradient squared (CGS) method: useful when A has a special formula for computing $Ax$ but its transpose has not, i.e. multiplication by A is efficient but multiplication by $A^T$ is not.

  • Krylov subspace methods
    Problem I. Linear system: find $x \in \mathbb{R}^n$ such that $Ax = b$.
    Problem II. Variational formulation: find $x \in \mathbb{R}^n$ such that $(Ax, y) = (b, y)$ for all $y \in \mathbb{R}^n$.
    Problem III. Minimization problem: find $x \in \mathbb{R}^n$ minimizing $\varphi(x) = \tfrac{1}{2}\, x^T A x - b^T x$.
    Thm 1: Problem I is equivalent to Problem II.
    Thm 2: If A is symmetric positive definite, all three problems are equivalent.

  • Krylov subspace methods
    To reduce the problem size, we replace $\mathbb{R}^n$ by a subspace $V_m = \mathrm{span}\{v_1, \ldots, v_m\} \subset \mathbb{R}^n$.
    Subspace minimization: find $x \in x_0 + V_m$ such that $\varphi(x) = \min_{\tilde{x} \in x_0 + V_m} \varphi(\tilde{x})$.
    Subspace projection: find $x \in x_0 + V_m$ such that $(Ax - b, y) = 0$ for all $y \in V_m$.

  • Krylov subspace methods
    To determine the coefficients in $x = x_0 + \sum_{j=1}^m \gamma_j v_j$, we have the normal equations
    $$\sum_{j=1}^m (A v_j, v_i)\, \gamma_j = (r_0, v_i), \qquad i = 1, \ldots, m.$$
    It is a linear system of size m!
    m = 1: line minimization, or line search, or 1D projection.
    By converting this formula into an iteration, we reduce the original problem to a sequence of line minimizations (successive line minimization).

  • For symmetric matrices
    Positive definite:
    - Steepest descent method
    - CG method
    - Preconditioned CG (PCG) method
    Non-positive definite:
    - MINRES (minimum residual method)

  • For nonsymmetric matrices
    Normal equations method (or CGNR method): apply CG to $A^T A\, x = A^T b$.
    GMRES (generalized minimum residual method), Saad & Schultz, 1986.
    - Idea: in the m-th step, minimize the residual $\|b - Ax\|_2$ over the set $x_0 + K_m(A, r_0)$.
    - Use Arnoldi (full orthogonalization) vectors instead of Lanczos vectors.
    - If A is symmetric, it reduces to the conjugate residual method.

  • Algorithm: GMRES; see the sketch below.
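
A compact GMRES sketch without restarts: build an Arnoldi basis of $K_m(A, r_0)$, then minimize the residual by a small least-squares solve. A production code would update the QR factorization of H with Givens rotations instead of calling `lstsq`; parameter names are our own:

```python
import numpy as np

def gmres(A, b, x0=None, m=50, tol=1e-10):
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0.astype(float)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta <= tol:
        return x0
    Q = np.zeros((n, m + 1))               # Arnoldi basis vectors
    H = np.zeros((m + 1, m))               # upper Hessenberg matrix
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:            # "happy breakdown": solution is exact
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    # y minimizes || beta*e1 - H y ||_2, i.e. the residual over x0 + K_m
    y, *_ = np.linalg.lstsq(H[: m + 1, :m], e1, rcond=None)
    return x0 + Q[:, :m] @ y
```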

  • More topics on matrix computations
    Eigenvalue and eigenvector computations:
    - If A is symmetric: power method (see the sketch after this list).
    - If A is a general matrix: Householder matrix (transform), QR method.
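
A minimal power method sketch for the dominant eigenpair, with a Rayleigh-quotient eigenvalue estimate; names and tolerances are our own:

```python
import numpy as np

def power_method(A, num_iter=1000, tol=1e-12, seed=0):
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = v @ A @ v                        # Rayleigh quotient estimate
    for _ in range(num_iter):
        w = A @ v
        v = w / np.linalg.norm(w)          # normalize to avoid overflow
        lam_new = v @ A @ v
        done = abs(lam_new - lam) <= tol * max(1.0, abs(lam_new))
        lam = lam_new
        if done:
            break
    return lam, v
```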

  • More topics on matrix computations
    Singular value decomposition (SVD).
    Thm: Let A be an m-by-n real matrix. There exist orthogonal matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ such that
    $$U^T A V = \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_p) \in \mathbb{R}^{m \times n}, \qquad p = \min(m, n),$$
    with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$.
    Proof: Exercise.
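
The theorem in code: `np.linalg.svd` returns U, the singular values, and $V^T$, and the factors reconstruct A (a 3-by-2 example; the numbers are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # m = 3, n = 2
U, s, Vt = np.linalg.svd(A, full_matrices=True)      # U: 3x3 orthogonal, Vt: 2x2
Sigma = np.zeros_like(A)                             # m-by-n "diagonal" matrix
Sigma[: len(s), : len(s)] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))                # True
print(s)                                             # singular values, descending
```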