% affiliations
\author{\IEEEauthorblockN{Rapha\"el Couturier\IEEEauthorrefmark{1}, Lilia Ziane Khodja\IEEEauthorrefmark{2}, and Christophe Guyeux\IEEEauthorrefmark{1}}
\IEEEauthorblockA{\IEEEauthorrefmark{1} Femto-ST Institute, University of Franche-Comt\'e, France\\
Email: \{raphael.couturier,christophe.guyeux\}@univ-fcomte.fr}
\IEEEauthorblockA{\IEEEauthorrefmark{2} INRIA Bordeaux Sud-Ouest, France\\
Email: lilia.ziane@inria.fr}
However, iterative methods suffer from scalability problems on parallel
computing platforms with many processors, due to their need for reduction
operations and for collective communications to achieve matrix-vector
multiplications. On large clusters with thousands of cores, communications
involving large messages can significantly degrade the performance of these
iterative methods. As a consequence, Krylov subspace iteration methods are often used
with preconditioners in practice.
At each outer iteration, the sparse linear system $Ax=b$ is partially
solved using only $m$
iterations of an iterative method, the latter being initialized with the
last obtained approximation.
The GMRES method~\cite{Saad86}, or any of its variants, can be used as the
inner solver. The current approximation of the Krylov method is then stored in
an $n \times s$ matrix $S$, whose columns are the last $s$ solutions computed
during the inner iteration phase.
In the remainder, the $i$-th column vector of $S$ is denoted by $S_i$.

Every $s$ iterations, another kind of minimization step is applied in order to
compute a new solution $x$. For that purpose, the residuals of $Ax=b$ associated
with the inner solutions are computed as $b-AS$. The minimization of these
residuals is obtained by
\begin{equation}
\underset{\alpha\in\mathbb{R}^{s}}{\min}\|b-R\alpha\|_2
\label{eq:01}
\end{equation}
with $R=AS$. The new solution $x$ is then computed with $x=S\alpha$.
In practice, $R$ is a dense rectangular matrix in $\mathbb{R}^{n\times s}$,
with $s\ll n$.
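For illustration purposes only, this minimization step can be sketched with a
dense least-squares routine. In the following Python fragment (a minimal sketch
based on NumPy, not the implementation evaluated in this article), the names
\texttt{A}, \texttt{b}, and \texttt{S} are assumed to be already defined:
\begin{verbatim}
import numpy as np

R = A @ S   # R = AS: dense rectangular n x s matrix (s << n)
# Solve min over alpha of ||b - R alpha||_2
alpha = np.linalg.lstsq(R, b, rcond=None)[0]
x = S @ alpha   # new solution combining the s stored inner solutions
\end{verbatim}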
\label{algo:01}
\end{algorithm}
Algorithm~\ref{algo:01} summarizes the principle of the proposed method. The outer
iteration is inside the \emph{for} loop. At Line~\ref{algo:solve}, the Krylov method is
called for a maximum of $max\_iter_{kryl}$ iterations. In practice, we suggest setting this parameter
equal to the restart number of the GMRES-like method. Moreover, a tolerance
threshold must be specified for the solver. In practice, this threshold must be
much smaller than the convergence threshold of the TSIRM algorithm (\emph{i.e.},
$\epsilon_{tsirm}$). At Line~\ref{algo:store}, $S_{k \mod s}=x^k$ consists of copying the
solution $x^k$ into column $k \mod s$ of $S$.
After the
minimization, the matrix $S$ is reused with the new values of the residuals. To
solve the minimization problem, an iterative method is used. Two parameters are
required for that: the maximum number of iterations and the threshold to stop
the method.
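The principle of Algorithm~\ref{algo:01} can be sketched as follows in Python,
using GMRES from SciPy as inner solver and a dense least-squares routine for
the minimization. This is a minimal illustrative sketch under those
assumptions, not the implementation evaluated in this article; the function
name \texttt{tsirm} and the bound \texttt{max\_outer} are introduced here only
for the sketch:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import gmres

def tsirm(A, b, s=10, max_iter_kryl=30,
          epsilon_tsirm=1e-10, max_outer=1000):
    n = b.shape[0]
    S = np.zeros((n, s))
    x = np.zeros(n)
    for k in range(max_outer):
        # Inner solver: one GMRES cycle of max_iter_kryl
        # iterations, warm-started with the current x
        # (an inner tolerance well below epsilon_tsirm
        # should also be set in a real implementation)
        x, _ = gmres(A, b, x0=x, restart=max_iter_kryl,
                     maxiter=1)
        S[:, k % s] = x          # S_{k mod s} = x^k
        if (k + 1) % s == 0:     # minimization every s iterations
            R = A @ S            # R = AS
            alpha = np.linalg.lstsq(R, b, rcond=None)[0]
            x = S @ alpha        # x = S alpha
        if np.linalg.norm(b - A @ x) < epsilon_tsirm:
            break                # convergence of TSIRM
    return x
\end{verbatim}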
\section{Convergence results}
\label{sec:04}
Let us recall the following result, see~\cite{Saad86} for further details.
\begin{proposition}
\label{prop:saad}
Suppose that $A$ is a positive real matrix with symmetric part $M$. Then the residual norm provided at the $m$-th step of GMRES satisfies:
\begin{equation*}
\|r_m\| \leq \left( 1-\frac{\mu^2}{\sigma^2} \right)^{m/2} \|r_0\| ,
\end{equation*}
where $\mu$ is the smallest eigenvalue of $M$, and $\sigma$ is the largest singular value of $A$.
%%%*********************************************************
%%%*********************************************************

A novel two-stage iterative algorithm has been proposed in this article,
in order to accelerate the convergence of Krylov iterative methods.
Our TSIRM proposal acts as a merger between Krylov-based solvers and
a least-squares minimization step.
The convergence of the method has been proven in some situations, while
experiments up to 16,394 cores have been carried out to verify that TSIRM runs
5 or 7 times faster than GMRES.


For future work, the authors' intention is to investigate
other kinds of matrices, problems, and inner solvers. The
influence of all parameters must be tested too, and
other methods to minimize the residuals should be considered.
The number of outer iterations used for the minimization should become
adaptive, in order to improve the overall performance of the proposal.
Finally, this solver will be implemented inside PETSc.
% conference papers do not normally have an appendix