inner solver is a Krylov-based one. In order to accelerate its convergence, the
outer solver periodically applies a least-squares minimization on the residuals
computed by the inner one.
At each outer iteration, the sparse linear system $Ax=b$ is partially solved
using only $m$ iterations of an iterative method, the latter being initialized
with the last approximation obtained. The GMRES method~\cite{Saad86}, or any of
its variants, can be used as the inner solver. The current approximation of the
Krylov method is then stored in an $n \times s$ matrix $S$, which is composed
of the $s$ last solutions computed during the inner iteration phase. In the
remainder, the $i$-th column vector of $S$ is denoted by $S_i$.
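For instance, with $s=4$, after the eighth inner solve the matrix $S$ contains
the four most recent approximations $x_5$, $x_6$, $x_7$, and $x_8$, older
columns being overwritten cyclically as new approximations are produced.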

Every $s$ iterations, another kind of minimization step is applied in order to
compute a new solution $x$. For that, the residuals of $Ax=b$ associated with
the approximations stored in $S$ are gathered in $(b-AS)$. The minimization of
the residuals is obtained by
\begin{equation}
\underset{\alpha\in\mathbb{R}^{s}}{\min}\|b-R\alpha\|_2
\label{eq:01}
\end{equation}
with $R=AS$. The new solution $x$ is then computed with $x=S\alpha$.
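When $R$ has full column rank, the minimizer of~(\ref{eq:01}) is characterized
by the normal equations
\[
R^{T}R\,\alpha = R^{T}b,
\]
a small $s\times s$ system since $s\ll n$; iterative least-squares solvers
typically address this system only implicitly.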
In practice, $R$ is a dense rectangular matrix belonging to $\mathbb{R}^{n\times s}$,
\Output $x$ (solution vector)\vspace{0.2cm}
\State Set the initial guess $x_0$
\For {$k=1,2,3,\ldots$ until convergence (error$<\epsilon_{tsirm}$)} \label{algo:conv}
  \State $[x_k,error]=Solve(A,b,x_{k-1},max\_iter_{kryl})$ \label{algo:solve}
  \State $S_{k \mod s}=x_k$ \label{algo:store} \Comment{update column $k \mod s$ of $S$}
\If {$k \mod s=0$ {\bf and} error$>\epsilon_{kryl}$}
\State $R=AS$ \Comment{compute dense matrix} \label{algo:matrix_mul}
\State $\alpha=Least\_Squares(R,b,max\_iter_{ls})$ \label{algo:}
\label{algo:01}
\end{algorithm}
Algorithm~\ref{algo:01} summarizes the principle of the proposed method. The
outer iteration is inside the \emph{for} loop. At line~\ref{algo:solve}, the
Krylov method is called for a maximum of $max\_iter_{kryl}$ iterations. In
practice, we suggest setting this parameter equal to the restart number of the
GMRES-like method. Moreover, a tolerance threshold must be specified for the
solver. In practice, this threshold must be much smaller than the convergence
threshold of the TSIRM algorithm (\emph{i.e.}, $\epsilon_{tsirm}$). After the
call to the $Solve$ function, we obtain the vector $x_k$ together with the
error, defined by $\|Ax_k-b\|_2$.

At line~\ref{algo:store}, $S_{k \mod s}=x_k$ consists of copying the solution
$x_k$ into column $k \mod s$ of $S$. After the minimization, the matrix $S$ is
reused with the new values of the residuals. To solve the minimization problem,
an iterative method is used; it requires two parameters: the maximum number of
iterations and the threshold used to stop the method.
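To make the overall structure concrete, the following is a minimal NumPy/SciPy
sketch of the two-stage iteration, not the actual parallel PETSc
implementation: it assumes dense vectors, uses SciPy's restarted GMRES as the
inner solver (one restart cycle of at most $m$ iterations per call, with
SciPy's default inner tolerance), and replaces the iterative least-squares
solver by a direct dense one. Function and parameter names such as
\texttt{tsirm} and \texttt{max\_outer} are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import gmres

def tsirm(A, b, m=30, s=12, eps_tsirm=1e-10, eps_kryl=1e-12, max_outer=10000):
    """Two-stage iteration: restarted GMRES plus periodic minimization."""
    n = b.shape[0]
    x = np.zeros(n)                          # initial guess x_0
    S = np.zeros((n, s))                     # keeps the s last inner solutions
    error, k = np.inf, 0
    while error > eps_tsirm and k < max_outer:
        k += 1
        # Inner stage: one GMRES cycle of at most m iterations,
        # warm-started with the current approximation.
        x, _ = gmres(A, b, x0=x, restart=m, maxiter=1)
        error = np.linalg.norm(b - A @ x)    # error = ||A x_k - b||_2
        S[:, k % s] = x                      # copy x_k into column (k mod s)
        if k % s == 0 and error > eps_kryl:
            R = A @ S                                     # dense n x s matrix
            alpha = np.linalg.lstsq(R, b, rcond=None)[0]  # min ||b - R alpha||_2
            x = S @ alpha                                 # new iterate x = S alpha
    return x
\end{verbatim}
In the method itself, the least-squares problem~(\ref{eq:01}) is solved
iteratively with its own maximum number of iterations and stopping threshold;
the dense \texttt{lstsq} call above merely stands in for that step, and the
inner tolerance should in practice be set well below $\epsilon_{tsirm}$.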
Let us summarize the most important parameters of TSIRM:
\begin{itemize}
\section{Convergence results}
\label{sec:04}
Let us recall the following result; see~\cite{Saad86} for further details.
\begin{proposition}
\label{prop:saad}
Suppose that $A$ is a positive real matrix with symmetric part $M$. Then the residual norm provided at the $m$-th step of GMRES satisfies:
In order to assess the behavior of our algorithm on a single processor, we first
present a comparison between GMRES or FGMRES and our algorithm. In
Table~\ref{tab:01}, we list the matrices used together with some of their
characteristics. These matrices, which arise in real-world applications, are
taken from the Davis collection of the University of Florida~\cite{Dav97}. For
each matrix, the name, the field, the number of rows, and the number of nonzero
elements are given.
\begin{table}[htbp]
\begin{center}
In order to perform larger experiments, we have tested some example applications
of PETSc. These applications are available in the KSP part, which is devoted to
scalable linear equation solvers:
\begin{itemize}
finite elements. For this example, the user can define the scaling of the
material coefficient in an embedded circle, called $\alpha$.
\end{itemize}
For more technical details on these applications, interested readers are invited
to consult the code available in the PETSc sources. These problems have been
chosen because they scale well on many cores, which is not the case for the
other problems that we have tested.

In the following, larger experiments are described on two large-scale
architectures: Curie and Juqueen. Both are supercomputers, with 80,640 cores
for Curie and 458,752 cores for Juqueen, hosted respectively by GENCI in France
and by the Jülich Supercomputing Centre in Germany. Together with other
similar machines, they belong to the PRACE initiative (Partnership for Advanced
Computing in Europe), which aims at providing high-performance computing
resources to enhance research in Europe. The Curie architecture is composed of
Intel E5-2680 processors at 2.7~GHz with 2~GB of memory per core. The Juqueen
architecture is composed of IBM PowerPC A2 processors at 1.6~GHz with 1~GB of
memory per core. Both architectures are equipped with a dedicated high-speed
network.
{\bf Description of preconditioners}\\