From: lilia
Date: Sat, 11 Oct 2014 20:57:25 +0000 (+0200)
Subject: 11-10-2014 01
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/commitdiff_plain/ea5e4dbcd3255d09e763fb4d496a01ff90cde5f9

11-10-2014 01
---

diff --git a/paper.tex b/paper.tex
index 9035059..f2114e5 100644
--- a/paper.tex
+++ b/paper.tex
@@ -601,7 +601,21 @@ is summarized while intended perspectives are provided.
 %%%*********************************************************
 \section{Related works}
 \label{sec:02}
-%Wherever Times is specified, Times Roman or Times New Roman may be used. If neither is available on your system, please use the font closest in appearance to Times. Avoid using bit-mapped fonts if possible. True-Type 1 or Open Type fonts are preferred. Please embed symbol fonts, as well, for math, etc.
+The GMRES method is one of the most widely used iterative solvers for large
+and sparse linear systems. It was initially developed by Saad and
+Schultz~\cite{Saad86} to handle non-symmetric and non-Hermitian problems, as
+well as symmetric indefinite ones. With preconditioning, the restarted GMRES
+method typically converges faster and more stably than many other iterative
+solvers.
+
+GMRES belongs to the Krylov subspace methods, which are currently among the
+most important iterative techniques available for solving large linear
+systems. These techniques are based on projection processes, both orthogonal
+and oblique, onto Krylov subspaces, \emph{i.e.}, subspaces spanned by vectors
+of the form $p(A)v$ where $p$ is a polynomial. In short, these techniques
+approximate $A^{-1}b$ by $p(A)b$, where $p$ is a ``good'' polynomial. Some of
+these methods, GMRES among them, derive from the Arnoldi orthogonalization,
+while others are based on the Lanczos biorthogonalization.
+
+Krylov subspace techniques have increasingly been viewed as general-purpose
+iterative methods, especially since the popularization of preconditioning
+techniques.
+
+Preconditioned Krylov subspace iterations are a key ingredient in many modern
+linear solvers, including solvers that employ support preconditioners.
 %%%*********************************************************
 %%%*********************************************************
@@ -654,10 +668,10 @@ appropriate than a single direct method in a parallel context.
 \Input $A$ (sparse matrix), $b$ (right-hand side)
 \Output $x$ (solution vector)\vspace{0.2cm}
 \State Set the initial guess $x_0$
-\For {$k=1,2,3,\ldots$ until convergence (error$<\epsilon_{tsirm}$)} \label{algo:conv}
+\For {$k=1,2,3,\ldots$ until convergence ($error<\epsilon_{tsirm}$)} \label{algo:conv}
 \State $[x_k,error]=Solve(A,b,x_{k-1},max\_iter_{kryl})$ \label{algo:solve}
-\State $S_{k \mod s}=x_k$ \label{algo:store} \Comment{update column (k mod s) of S}
-\If {$k \mod s=0$ {\bf and} error$>\epsilon_{kryl}$}
+\State $S_{k \mod s}=x_k$ \label{algo:store} \Comment{update column ($k \mod s$) of $S$}
+\If {$k \mod s=0$ {\bf and} $error>\epsilon_{kryl}$}
 \State $R=AS$ \Comment{compute dense matrix} \label{algo:matrix_mul}
 \State $\alpha=Least\_Squares(R,b,max\_iter_{ls})$ \label{algo:}
 \State $x_k=S\alpha$ \Comment{compute new solution}
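To make the pseudocode in the hunk above concrete, here is a minimal Python sketch of the TSIRM loop. It is an illustration under stated assumptions, not the authors' implementation: SciPy's gmres stands in for the inner solver Solve, SciPy's lsqr stands in for Least_Squares, and the default values of s, the iteration caps, and the thresholds are placeholders.

    import numpy as np
    from scipy.sparse.linalg import gmres, lsqr

    def tsirm(A, b, s=10, max_iter_kryl=30, max_iter_ls=20,
              eps_tsirm=1e-10, eps_kryl=1e-10, max_outer=10000):
        n = b.shape[0]
        x = np.zeros(n)            # initial guess x_0 = 0
        S = np.zeros((n, s))       # columns hold recent iterates
        error = np.linalg.norm(A @ x - b)
        for k in range(1, max_outer + 1):
            # Inner stage: one restart cycle of GMRES(max_iter_kryl),
            # playing the role of Solve(A, b, x_{k-1}, max_iter_kryl).
            x, _ = gmres(A, b, x0=x, restart=max_iter_kryl, maxiter=1)
            error = np.linalg.norm(A @ x - b)   # ||A x_k - b||_2
            if error < eps_tsirm:               # outer convergence test
                break
            S[:, k % s] = x                     # update column (k mod s) of S
            if k % s == 0 and error > eps_kryl:
                R = A @ S                       # dense n-by-s matrix
                # Minimization stage: alpha = argmin ||R alpha - b||_2,
                # here delegated to SciPy's LSQR with an iteration cap.
                alpha = lsqr(R, b, iter_lim=max_iter_ls)[0]
                x = S @ alpha                   # new outer iterate x_k = S alpha
        return x, error

Note the two distinct thresholds in the algorithm: eps_tsirm stops the outer loop, while eps_kryl decides whether a minimization step is still worthwhile.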
@@ -675,10 +689,10 @@ method. Moreover, a tolerance threshold must be specified for the solver. In
 practice, this threshold must be much smaller than the convergence
 threshold of the TSIRM algorithm (\emph{i.e.}, $\epsilon_{tsirm}$).
 We also consider that after the call of the $Solve$ function, we obtain the
 vector $x_k$ and the error
-which is defined by $||Ax^k-b||_2$.
+which is defined by $||Ax_k-b||_2$.
 Line~\ref{algo:store},
-$S_{k \mod s}=x^k$ consists in copying the solution $x_k$ into the column $k
+$S_{k \mod s}=x_k$ consists of copying the solution $x_k$ into column $k
 \mod s$ of $S$. After the minimization, the matrix $S$ is reused with the new
 values of the residuals. To solve the minimization problem, an iterative
 method is used. Two parameters are required: the maximum number of iterations
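The minimization problem $\min_\alpha ||R\alpha - b||_2$ can be solved, for instance, with CGLS, the conjugate gradient method applied to the normal equations $R^T R \alpha = R^T b$. The text does not name the iterative method, so CGLS here is an assumption, as is the tolerance name eps_ls; the sketch simply exposes the two parameters the paragraph above refers to, a maximum number of iterations and a tolerance threshold.

    import numpy as np

    def cgls(R, b, max_iter_ls=20, eps_ls=1e-12):
        # Solve min ||R alpha - b||_2 by CG on the normal equations.
        alpha = np.zeros(R.shape[1])
        r = b - R @ alpha          # residual of the overdetermined system
        g = R.T @ r                # R^T r, the steepest-descent direction
        p = g.copy()               # first search direction
        gamma = g @ g
        for _ in range(max_iter_ls):
            q = R @ p
            step = gamma / (q @ q)           # exact line search
            alpha = alpha + step * p
            r = r - step * q
            g = R.T @ r
            gamma_new = g @ g
            if np.sqrt(gamma_new) < eps_ls:  # ||R^T r|| small enough: stop
                break
            p = g + (gamma_new / gamma) * p  # conjugate direction update
            gamma = gamma_new
        return alpha

In the TSIRM setting, $R = AS$ is a tall and thin dense matrix ($n \times s$ with small $s$), so each CGLS iteration costs only two dense matrix-vector products.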