Deflation Techniques for Computational Electromagnetism, Part I: Theoretical Considerations

Hajime Igarashi, Member, IEEE, Kota Watanabe, Member, IEEE
Graduate School of Information Science and Technology, Hokkaido University,
Kita 14, Nishi 9, Kita-ku, 060-0814 Sapporo, [email protected]
Abstract—The deflation technique replaces small eigenvalues of a matrix with zeros to accelerate the convergence of iterative linear solvers. In this paper, it is clarified, from the viewpoint of matrix deflation, why recently proposed computational frameworks, such as the explicit and implicit error corrections and the singularity decomposition, as well as the conventional AV method, improve the convergence of linear solvers.
I. INTRODUCTION

In the finite element (FE) analysis of electromagnetic fields, the computational time for the solution of linear equations, which dominates the other process times, strongly needs to be reduced. The deflation techniques, whose origins can be found in the literature of the 1980s, are expected to be suitable for this purpose; they replace small eigenvalues of a matrix with zeros to improve the matrix condition and lighten the computational burden on linear solvers. Recently, the deflated conjugate gradient (CG) method has been applied to diffusion problems for layered media [1] and to two-dimensional magnetostatic problems [2]. Moreover, the mathematical properties of the deflated CG method have been discussed for symmetric positive definite matrices [3].

As different approaches to the acceleration of linear solvers, the explicit and implicit error corrections (EEC, IEC) have been introduced by analogy with the multigrid method [4]. The singularity decomposition (SD) method [5], which is based on the concept of the IEC, has been shown to significantly improve the convergence of linear solvers for problems including flat FEs. However, the reason why these techniques work well has not been fully clarified. In this paper, the validity of these methods is explained from the viewpoint of matrix deflation. Moreover, the AV method, whose computational times are usually shorter than those of the A method, is discussed on the basis of matrix deflation.
II. DEFLATION AND EEC, IEC, SD

Let us consider an FE equation $Ax = b$, where $A$ is a symmetric positive semi-definite matrix whose eigenvalues are $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$; $b$ is assumed to be in the range of $A$. The deflation method carries out the decomposition $x = Px + Qx$, where $P$ is a projector which satisfies $P^2 = P$ and $Q = I - P$. It is readily shown that $Q$ is also a projector and $PQ = 0$. Let us introduce the matrices $W = [v_1, \ldots, v_k]$ and $\bar{W} = [v_{k+1}, \ldots, v_n]$, which are composed of the orthogonal eigenvectors of $A$. Then $x$ is decomposed into the slowly and fast convergent components $Wy$ and $x - Wy$, where the former is the projection of $x$ onto the space spanned by the $v_i$, $i \le k$. The vector $y$ can be determined from the orthogonality condition $(x - Wy, v_i)_A = 0$, that is,

  $W^{T}AWy = W^{T}Ax = W^{T}b$.  (1)

The above decomposition is expressed as $x = Qx + Wy$. Thus, assuming $W^{T}AW$ is regular, we obtain $Q = I - W(W^{T}AW)^{-1}W^{T}A$. The equation for $Qx$ can be obtained from the commutability $AQ = Q^{T}A$, that is,

  $Q^{T}A(Qx) = Q^{T}b$.  (2)

It follows from $Q^{T}AW = 0$ and $Q^{T}A\bar{W} = A\bar{W}$ that $Q^{T}A$ now has an effective condition number $\kappa_{\mathrm{eff}}(Q^{T}A) = \lambda_n/\lambda_{k+1}$, which is smaller than $\kappa(A)$.
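As a concrete illustration (not part of the original paper), the sketch below builds the projector $Q = I - W(W^{T}AW)^{-1}W^{T}A$ for an assumed small random symmetric positive definite test matrix and checks that the nonzero spectrum of $Q^{T}A$ is $\lambda_{k+1}, \ldots, \lambda_n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)           # symmetric positive definite test matrix

lam, E = np.linalg.eigh(A)        # eigenvalues in ascending order
W = E[:, :k]                      # k slowest (orthonormal) eigenvectors

# Deflation projector Q = I - W (W^T A W)^{-1} W^T A.
Q = np.eye(n) - W @ np.linalg.solve(W.T @ A @ W, W.T @ A)

assert np.allclose(Q @ Q, Q)          # Q is a projector
assert np.allclose(Q.T @ A @ W, 0)    # Q^T A annihilates span(W)

# Spectrum of Q^T A: k zeros plus lambda_{k+1}, ..., lambda_n, so the
# effective condition number drops from lam[-1]/lam[0] to lam[-1]/lam[k].
mu = np.linalg.eigvalsh(Q.T @ A)
assert np.allclose(mu[:k], 0, atol=1e-8)
assert np.allclose(mu[k:], lam[k:])
```

A deflated iterative solver then works with this better-conditioned operator in place of $A$.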
In the EEC [4], the solution vector is decomposed as $x = x' + Wy$, which is inserted into $Ax = b$ to obtain the equation for $x'$, $Ax' = b - AWy$. Moreover, to determine $y$, the Petrov-Galerkin method is applied to this equation to obtain the equation for $y$, $W^{T}AWy = W^{T}(b - Ax')$.
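This pair of equations can be sketched numerically as follows; the code is an illustrative toy (not from the original paper), assuming a small random SPD matrix, exact eigenvectors for the coarse space, and a simple Richardson relaxation standing in for the fine-level iterative solver:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 2
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)       # well-conditioned SPD test matrix
b = rng.standard_normal(n)

lam, E = np.linalg.eigh(A)        # eigenvalues in ascending order
W = E[:, :k]                      # slow eigenvectors as coarse space

x1 = np.zeros(n)                  # fine-level unknown x'
y = np.zeros(k)                   # coarse correction y
omega = 1.0 / lam[-1]             # safe Richardson step size

for _ in range(150):
    # One relaxation sweep on  A x' = b - A W y.
    x1 = x1 + omega * (b - A @ (W @ y) - A @ x1)
    # Petrov-Galerkin coarse equation:  W^T A W y = W^T (b - A x').
    y = np.linalg.solve(W.T @ A @ W, W.T @ (b - A @ x1))

x = x1 + W @ y                    # recombined solution of A x = b
assert np.allclose(A @ x, b, atol=1e-8)
```

After each coarse solve the error is $A$-orthogonal to span$(W)$, so the relaxation only has to damp the fast eigencomponents, which is the condition improvement the deflation argument predicts.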
These equations are alternately solved. Now it can be shown that elimination of $y$ from these equations results in (2). Hence the EEC improves the matrix condition in the same way as the deflation method mentioned above. In the IEC and SD methods, the equations in the EEC are
coupled in the form

  $\begin{bmatrix} A & AW \\ W^{T}A & W^{T}AW \end{bmatrix} \begin{bmatrix} x' \\ y \end{bmatrix} = \begin{bmatrix} b \\ W^{T}b \end{bmatrix}$,  (3)

which is solved. The matrix in (3) is denoted by $V$. It has been numerically shown that preconditioning applied to (3) leads to rapid convergence of iterative linear solvers [5], which can be explained by the following theorem:

Theorem 1: The preconditioned matrix $\tilde{V}$ resulting from diagonal scaling has the eigenvalues $k$ zeros, $1+\lambda_1$, $1+\lambda_2$, ..., $1+\lambda_k$, $\lambda_{k+1}$, ..., $\lambda_n$.

Hence it is concluded that $\kappa(\tilde{V})$ is smaller than $\kappa(A)$. We can have a similar discussion of the AV method, in which the discrete gradient matrix $G$ is substituted for $W$ in (3).
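A minimal numerical sketch of the coupled system (3), again with an assumed small random SPD test matrix and exact eigenvectors (not from the original paper), shows that the coupled matrix $V$ is singular yet the system remains consistent, and that any solution reconstructs $x = x' + Wy$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 2
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)           # SPD test matrix
b = rng.standard_normal(n)

lam, E = np.linalg.eigh(A)
W = E[:, :k]                      # coarse space of slow eigenvectors

# Coupled IEC/SD matrix of (3), called V in the text.
V = np.block([[A, A @ W], [W.T @ A, W.T @ A @ W]])
rhs = np.concatenate([b, W.T @ b])

# V is singular: (-W z, z) is a null vector for any z.
z = rng.standard_normal(k)
assert np.allclose(V @ np.concatenate([-W @ z, z]), 0)

# The system is nonetheless consistent; a minimum-norm solution
# of (3) still reconstructs the solution of A x = b.
sol, *_ = np.linalg.lstsq(V, rhs, rcond=None)
x = sol[:n] + W @ sol[n:]
assert np.allclose(A @ x, b)
```

The null vectors $(-Wz, z)$ reflect the redundancy between $x'$ and $Wy$; the first block row of (3) nevertheless forces $A(x' + Wy) = b$ for every solution.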
III. CONCLUSIONS

In this paper, the basic properties of the EEC, IEC, and SD methods are discussed on the basis of matrix deflation. In the long version, the effect of approximate eigenvectors for $W$ is also theoretically discussed.
REFERENCES
[1] C. Vuik, A. Segal, and J. A. Meijerink, J. Comp. Phys., vol. 152, pp. 385-403, 1999.
[2] H. De Gersem and K. Hameyer, Eur. Phys. J. AP, vol. 13, no. 1, pp. 45-49, 2001.
[3] Y. Saad, M. Yeung, J. Erhel, and F. Guyomarch, SIAM J. Sci. Comput., vol. 21, no. 5, pp. 1909-1926, 2000.
[4] T. Iwashita, T. Mifune, and M. Shimasaki, IEEE Trans. Magn., vol. 44, no. 6, pp. 946-949, 2008.
[5] A. Kameari, IEEE Trans. Magn., vol. 44, no. 6, pp. 1178-1181, 2008.
978-1-4244-7062-4/10/$26.00 ©2010 IEEE