%%%*********************************************************
\section{Related works}
\label{sec:02}
-%Wherever Times is specified, Times Roman or Times New Roman may be used. If neither is available on your system, please use the font closest in appearance to Times. Avoid using bit-mapped fonts if possible. True-Type 1 or Open Type fonts are preferred. Please embed symbol fonts, as well, for math, etc.
+The GMRES method is one of the most widely used iterative solvers for sparse linear systems of large size. It was initially developed by Saad and Schultz~\cite{Saad86} to handle nonsymmetric and non-Hermitian problems, as well as symmetric indefinite ones. With preconditioning, restarted GMRES usually converges faster and more robustly than several other iterative solvers.
+
+Krylov subspace techniques are currently considered to be among the most important iterative
+methods available for solving large linear systems. These techniques are based on projection
+processes, both orthogonal and oblique, onto Krylov subspaces, which are subspaces spanned by
+vectors of the form $p(A)v$, where $p$ is a polynomial. In short, these techniques approximate
+$A^{-1}b$ by $p(A)b$, where $p$ is a ``good'' polynomial. Some of them, such as GMRES, are
+derived from, or related to, the Arnoldi orthogonalization, while others are based on the
+Lanczos biorthogonalization.
+
+Krylov subspace techniques have increasingly been viewed as general-purpose iterative methods, especially since the popularization of preconditioning techniques.
+
+Preconditioned Krylov subspace iterations are a key ingredient in
+many modern linear solvers, including solvers that employ support
+preconditioners.
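+
+As a concrete illustration (not the solver configuration used later in this paper), the sketch
+below applies a restarted GMRES preconditioned by an incomplete LU factorization to a small
+sparse system with SciPy; the test matrix, the restart length, and the choice of preconditioner
+are arbitrary assumptions made only for the example.
+\begin{verbatim}
+# Illustrative sketch: restarted GMRES with an ILU preconditioner (SciPy).
+# The test matrix and all parameter values are arbitrary choices.
+import numpy as np
+import scipy.sparse as sp
+import scipy.sparse.linalg as spla
+
+n = 1000
+A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
+b = np.ones(n)
+
+ilu = spla.spilu(A)                            # incomplete LU factors of A
+M = spla.LinearOperator((n, n), ilu.solve)     # application of M^{-1}
+
+x, info = spla.gmres(A, b, M=M, restart=30, maxiter=300)
+print(info, np.linalg.norm(b - A @ x))         # info == 0 means convergence
+\end{verbatim}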
%%%*********************************************************
%%%*********************************************************
\Input $A$ (sparse matrix), $b$ (right-hand side)
\Output $x$ (solution vector)\vspace{0.2cm}
\State Set the initial guess $x_0$
- \For {$k=1,2,3,\ldots$ until convergence (error$<\epsilon_{tsirm}$)} \label{algo:conv}
+ \For {$k=1,2,3,\ldots$ until convergence ($error<\epsilon_{tsirm}$)} \label{algo:conv}
\State $[x_k,error]=Solve(A,b,x_{k-1},max\_iter_{kryl})$ \label{algo:solve}
- \State $S_{k \mod s}=x_k$ \label{algo:store} \Comment{update column (k mod s) of S}
- \If {$k \mod s=0$ {\bf and} error$>\epsilon_{kryl}$}
+ \State $S_{k \mod s}=x_k$ \label{algo:store} \Comment{update column ($k \mod s$) of $S$}
+ \If {$k \mod s=0$ {\bf and} $error>\epsilon_{kryl}$}
\State $R=AS$ \Comment{compute dense matrix} \label{algo:matrix_mul}
\State $\alpha=Least\_Squares(R,b,max\_iter_{ls})$ \label{algo:}
\State $x_k=S\alpha$ \Comment{compute new solution}
practice, this threshold must be much smaller than the convergence threshold of
the TSIRM algorithm (\emph{i.e.}, $\epsilon_{tsirm}$). We also consider that
after the call of the $Solve$ function, we obtain the vector $x_k$ and the error
-which is defined by $||Ax^k-b||_2$.
+which is defined by $||Ax_k-b||_2$.
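+
+For illustration purposes only, a minimal sketch of this two-stage structure is given below:
+SciPy's restarted GMRES plays the role of the generic $Solve$ routine and a dense least-squares
+solve replaces CGLS; the function name \texttt{tsirm} and all default values are our own
+assumptions, not part of the implementation evaluated later.
+\begin{verbatim}
+# Minimal sketch of the TSIRM structure (illustration only).  GMRES stands
+# in for the Solve routine, a dense least-squares solve stands in for CGLS,
+# and the parameter defaults are arbitrary.
+import numpy as np
+import scipy.sparse.linalg as spla
+
+def tsirm(A, b, s=8, max_iter_kryl=30, eps_tsirm=1e-10, eps_kryl=1e-12,
+          max_outer=100):
+    x = np.zeros(b.size)
+    S = np.zeros((b.size, s))              # last s iterates, one per column
+    for k in range(1, max_outer + 1):
+        # Inner stage: one restart cycle, i.e. max_iter_kryl GMRES iterations.
+        x, _ = spla.gmres(A, b, x0=x, restart=max_iter_kryl, maxiter=1)
+        error = np.linalg.norm(A @ x - b)  # ||A x_k - b||_2
+        S[:, k % s] = x                    # update column (k mod s) of S
+        if error < eps_tsirm:              # overall convergence reached
+            break
+        if k % s == 0 and error > eps_kryl:
+            # Outer stage: minimize ||b - A S alpha||_2 over alpha.
+            R = np.asarray(A @ S)          # dense n-by-s matrix
+            alpha = np.linalg.lstsq(R, b, rcond=None)[0]
+            x = S @ alpha                  # new iterate taken in span(S)
+    return x, error
+\end{verbatim}
+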
Line~\ref{algo:store},
-$S_{k \mod s}=x^k$ consists in copying the solution $x_k$ into the column $k
+$S_{k \mod s}=x_k$ consists of copying the solution $x_k$ into the column $k
\mod s$ of $S$. After the minimization, the matrix $S$ is reused with the new
values of the residuals. To solve the minimization problem, an iterative method
is used. Two parameters are required for that: the maximum number of iterations
We will show that the statement also holds for $r_k$. Two situations can occur:
\begin{itemize}
\item If $k \not\equiv 0 ~(\textrm{mod}\ m)$, then the TSIRM algorithm consists in executing GMRES once. In that case and by using the inductive hypothesis, we obtain either $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ if $A$ is positive, or $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\|$ $\leqslant$ $\left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite case.
-\item Else, the TSIRM algorithm consists in two stages: a first GMRES($m$) execution leads to a temporary $x_k$ whose residue satisfies $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$, and a least squares resolution.
+\item Otherwise, the TSIRM algorithm consists of two stages: a first GMRES($m$) execution leads to a temporary $x_k$ whose residual satisfies:
+\begin{itemize}
+\item $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ in the positive case,
+\item $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\|$ $\leqslant$ $\left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite one,
+\end{itemize}
+followed by a least-squares resolution.
Let $\operatorname{span}(S) = \left \{ {\sum_{i=1}^k \lambda_i v_i \Big| k \in \mathbb{N}, v_i \in S, \lambda _i \in \mathbb{R}} \right \}$ be the linear span of a set of real vectors $S$. So,\\
$\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2 = \min_{\alpha \in \mathbb{R}^s} ||b-AS\alpha ||_2$
& \leqslant \min_{\lambda \in \mathbb{R}} ||b-\lambda Ax_{k} ||_2\\
& \leqslant ||b-Ax_{k}||_2\\
& = ||r_k||_2\\
-& \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||,
+& \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||, \textrm{ if $A$ is positive,}\\
+& \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|, \textrm{ if $A$ is}\\
+& \textrm{positive definite,}
\end{array}$
\end{itemize}
which concludes the induction and the proof.
\end{proof}
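+
+As a quick numerical sanity check of the central inequality used above (an illustration under
+arbitrary assumptions, not part of the proof), one can verify that the least-squares stage never
+increases the residual, since $x_k$ itself belongs to $\operatorname{span}(S)$:
+\begin{verbatim}
+# Check ||b - A S alpha||_2 <= ||b - A x_k||_2 when x_k is a column of S.
+# Random data, illustration only.
+import numpy as np
+rng = np.random.default_rng(0)
+n, s = 200, 8
+A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
+b = rng.standard_normal(n)
+S = rng.standard_normal((n, s))
+x_k = S[:, 3]                              # x_k stored in some column of S
+alpha = np.linalg.lstsq(A @ S, b, rcond=None)[0]
+assert (np.linalg.norm(b - A @ (S @ alpha))
+        <= np.linalg.norm(b - A @ x_k) + 1e-12)
+\end{verbatim}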
-We can remark that, at each iterate, the residue of the TSIRM algorithm is lower
-than the one of the GMRES method.
+%We can remark that, at each iterate, the residue of the TSIRM algorithm is lower
+%than the one of the GMRES method.
%%%*********************************************************
%%%*********************************************************
\label{sec:05}
-In order to see the influence of our algorithm with only one processor, we first
-show a comparison with GMRES or FGMRES and our algorithm. In Table~\ref{tab:01},
-we show the matrices we have used and some of them characteristics. Those
-matrices are chosen from the Davis collection of the University of
-Florida~\cite{Dav97}. They are matrices arising in real-world applications. For
-all the matrices, the name, the field, the number of rows and the number of
-nonzero elements are given.
+In order to see the behavior of the proposed algorithm with only one processor, a first
+comparison between GMRES or FGMRES and the new algorithm detailed previously has been
+carried out. The matrices used, together with their characteristics (name, field, number
+of rows, and number of nonzero coefficients), are listed in Table~\ref{tab:01}. These
+matrices, which arise in real-world applications, have been extracted from the Davis
+collection of the University of Florida~\cite{Dav97}.
\begin{table}[htbp]
\begin{center}
\label{tab:01}
\end{center}
\end{table}
-
-The following parameters have been chosen for our experiments. As by default
+The chosen parameters are detailed below.
+%The following parameters have been chosen for our experiments.
+Since, by default,
the restart of GMRES is performed every 30 iterations, we have chosen to stop
GMRES every 30 iterations (\emph{i.e.,} $max\_iter_{kryl}=30$). $s$ is set to 8. CGLS is
chosen to minimize the least-squares problem with the following parameters:
speed network.
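+
+With the hypothetical \texttt{tsirm} sketch given earlier, these choices would correspond,
+for illustration only, to a call such as:
+\begin{verbatim}
+x, error = tsirm(A, b, s=8, max_iter_kryl=30)
+\end{verbatim}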
+In many situations, using a preconditioner is essential in order to find the
+solution of a linear system. Many preconditioners are available in PETSc.
+However, for parallel applications, not all the preconditioners based on matrix
+factorization are available. In our experiments, we have tested different kinds of
+preconditioners; however, as this is not the subject of this paper, we do not
+present results for many of them. In practice, we have chosen to use a
+multigrid preconditioner (mg) and a successive over-relaxation one (sor). For more details
+on the preconditioners available in PETSc, please consult~\cite{petsc-web-page}.
+
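+As a rough illustration of how such a preconditioner is used (PETSc's sor and mg
+preconditioners are configured through its own options database; the SciPy sketch below only
+mimics the idea with a single forward SOR sweep, and the relaxation factor is an arbitrary
+choice):
+\begin{verbatim}
+# Illustration: one forward SOR sweep, M = D/omega + L with A = L + D + U,
+# applied as a preconditioner for restarted GMRES.  This is not the PETSc
+# sor/mg configuration used in the experiments; all values are arbitrary.
+import numpy as np
+import scipy.sparse as sp
+import scipy.sparse.linalg as spla
+
+n = 1000
+A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
+b = np.ones(n)
+
+omega = 1.5
+M_sor = (sp.diags(A.diagonal()) / omega + sp.tril(A, k=-1)).tocsr()
+M = spla.LinearOperator((n, n),
+                        lambda r: spla.spsolve_triangular(M_sor, r, lower=True))
+
+x, info = spla.gmres(A, b, M=M, restart=30, maxiter=300)
+print(info, np.linalg.norm(b - A @ x))
+\end{verbatim}
+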
-{\bf Description of preconditioners}\\
\begin{table*}[htbp]
\begin{center}