We will show that the statement also holds for $r_k$. Two situations can occur:
\begin{itemize}
\item If $k \not\equiv 0 ~(\textrm{mod}\ m)$, then the TSIRM algorithm consists of a single GMRES execution. In that case, using the induction hypothesis, we obtain either $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ if $A$ is positive, or $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ if $A$ is positive definite.
\item Else, the TSIRM algorithm consists of two stages: a first GMRES($m$) execution leads to a temporary $x_k$ whose residual satisfies:
\begin{itemize}
\item $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ if $A$ is positive,
\item $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ if $A$ is positive definite,
\end{itemize}
and a least-squares resolution.
Let $\operatorname{span}(S) = \left \{ {\sum_{i=1}^k \lambda_i v_i \Big| k \in \mathbb{N}, v_i \in S, \lambda _i \in \mathbb{R}} \right \}$ be the linear span of a set of real vectors $S$. Since $x_k$ is one of the columns of $S$, every multiple $\lambda x_k$ belongs to $\operatorname{span}(S)$, and so\\
$\begin{array}{rl}
\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2 & = \min_{\alpha \in \mathbb{R}^s} ||b-AS\alpha ||_2\\
& \leqslant \min_{\lambda \in \mathbb{R}} ||b-\lambda Ax_{k} ||_2\\
& \leqslant ||b-Ax_{k}||_2\\
& = ||r_k||_2\\
& \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||, \textrm{ if $A$ is positive,}\\
& \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|, \textrm{ if $A$ is positive definite,}
\end{array}$
\end{itemize}
which concludes the induction and the proof.
\end{proof}
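To give a concrete sense of the bound (with illustrative, not measured, values): if $A$ is positive with $1-\frac{\alpha}{\beta}=0.99$ and $m=30$, each GMRES($m$) stage multiplies the residual norm by at most $(0.99)^{15} \approx 0.86$, and the least-squares stage can only decrease it further.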
%We can remark that, at each iterate, the residue of the TSIRM algorithm is lower
%than the one of the GMRES method.
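To make the two-stage iteration analyzed above concrete, here is a minimal sketch in Python; it is a sketch under stated assumptions, with SciPy's restarted GMRES standing in for GMRES($m$) and a dense least-squares solve standing in for CGLS:
\begin{verbatim}
# Minimal TSIRM sketch: inner GMRES(m) cycles, and every s outer
# iterations a least-squares minimization over the last s iterates.
# SciPy's gmres and numpy's lstsq are stand-ins for PETSc and CGLS.
import numpy as np
from scipy.sparse.linalg import gmres

def tsirm(A, b, m=30, s=8, eps=1e-8, max_outer=1000):
    x = np.zeros(b.size)
    S = np.zeros((b.size, s))        # last s iterates, stored columnwise
    for k in range(1, max_outer + 1):
        # First stage: one GMRES(m) cycle started from the current x.
        x, _ = gmres(A, b, x0=x, restart=m, maxiter=1)
        S[:, (k - 1) % s] = x
        if k % s == 0:
            # Second stage: minimize ||b - A S alpha||_2, then combine.
            R = A @ S
            alpha = np.linalg.lstsq(R, b, rcond=None)[0]
            x = S @ alpha
        if np.linalg.norm(b - A @ x) <= eps * np.linalg.norm(b):
            break
    return x
\end{verbatim}
Storing the iterates columnwise in \texttt{S} mirrors the roles of $S$ and $R=AS$ in the proof above; a call such as \texttt{x = tsirm(A, b)} applies the scheme to a (sparse or dense) matrix \texttt{A} and right-hand side \texttt{b}.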
%%%*********************************************************
%%%*********************************************************
\section{Experiments}
\label{sec:05}
In order to assess the behavior of the proposed algorithm when only one
processor is considered, a first comparison between GMRES, FGMRES, and the new
algorithm detailed previously has been carried out. The matrices used in these
experiments, together with their main characteristics (name, field, number of
rows, and number of nonzero coefficients), are listed in Table~\ref{tab:01}.
These matrices, which arise in real-world applications, have been extracted
from the Davis collection of the University of Florida~\cite{Dav97}.
\begin{table}[htbp]
\begin{center}
\label{tab:01}
\end{center}
\end{table}
Chosen parameters are detailed below.
%The following parameters have been chosen for our experiments.
As the restart of GMRES is by default performed every 30 iterations, we have
chosen to stop GMRES every 30 iterations (\emph{i.e.}, $max\_iter_{kryl}=30$).
$s$ is set to 8. CGLS is chosen to minimize the least-squares problem with the
following parameters: a threshold $\epsilon_{ls}$ and a maximum number of
iterations $max\_iter_{ls}$.
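As the behavior of this inner solver matters for the second stage, a minimal CGLS sketch (conjugate gradient applied to the normal equations $R^T R \alpha = R^T b$) is given below; the default values of \texttt{eps\_ls} and \texttt{max\_iter\_ls} are illustrative placeholders, not necessarily the experimental settings.
\begin{verbatim}
# Minimal CGLS sketch: conjugate gradient on the normal equations.
# eps_ls and max_iter_ls defaults are illustrative placeholders.
import numpy as np

def cgls(R, b, eps_ls=1e-40, max_iter_ls=20):
    alpha = np.zeros(R.shape[1])
    r = b - R @ alpha              # residual of the least-squares system
    s = R.T @ r                    # residual of the normal equations
    p = s.copy()                   # search direction
    gamma = s @ s
    for _ in range(max_iter_ls):
        q = R @ p
        step = gamma / (q @ q)
        alpha += step * p
        r -= step * q
        s = R.T @ r
        gamma_new = s @ s
        if gamma_new < eps_ls:     # stop once normal-eq. residual is tiny
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return alpha
\end{verbatim}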
In many situations, using preconditioners is essential to find the
solution of a linear system. There are many preconditioners available in PETSc;
however, the preconditioners based on matrix factorization are not available
for parallel applications. In our experiments, we have tested different kinds
of preconditioners, but as this is not the subject of this paper, we will not
present results for many of them. In practice, we have chosen to use a
multigrid ({\it mg}) and a successive over-relaxation ({\it sor})
preconditioner. For more details on the preconditioners available in PETSc,
please consult~\cite{petsc-web-page}.
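As an illustration, the sketch below (assuming the petsc4py Python bindings are installed; the 1D Laplacian matrix is a toy stand-in for the test matrices) shows how such a preconditioner and the GMRES restart can be selected:
\begin{verbatim}
# Illustrative petsc4py sketch: choosing the preconditioner and
# the GMRES restart. The 1D Laplacian is a toy example only.
from petsc4py import PETSc

n = 1000
A = PETSc.Mat().createAIJ([n, n], nnz=3)
for i in range(n):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

b = A.createVecLeft(); b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType('fgmres')        # FGMRES, as in the experiments
ksp.getPC().setType('sor')   # 'mg' would need a grid hierarchy set up
ksp.setGMRESRestart(30)      # restart every 30 iterations
ksp.solve(b, x)
\end{verbatim}
Equivalently, these choices can be made at run time through the PETSc options \texttt{-ksp\_type fgmres}, \texttt{-pc\_type sor} (or \texttt{mg}), and \texttt{-ksp\_gmres\_restart 30}.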
\begin{table*}[htbp]
\begin{center}
\label{tab:03}
\end{center}
\end{table*}
Table~\ref{tab:03} shows the execution times and the numbers of iterations of
example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
are studied, ranging from 2,048 up to 16,383, with the two preconditioners
{\it mg} and {\it sor}. For those experiments, the number of components (or
unknowns of the problem) per core is fixed to 25,000; this corresponds to weak
scaling. This number can seem relatively small. In fact, for some applications
that need a lot of memory, the number of components per processor sometimes
needs to be