%%%%%%%%%%%%%%%%%%%%%%
\documentclass{doublecol-new}
%%%%%%%%%%%%%%%%%%%%%%

\usepackage{natbib,stfloats}
\usepackage{mathrsfs}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{multirow}
\usepackage{graphicx}
\def\newblock{\hskip .11em plus .33em minus .07em}

\newtheorem{lemma}{Lemma}
\newtheorem{theorem}[lemma]{Theorem}
\newtheorem{corrolary}[lemma]{Corollary}
\newtheorem{conjecture}[lemma]{Conjecture}
\newtheorem{proposition}[lemma]{Proposition}
\newtheorem{claim}[lemma]{Claim}
\newtheorem{stheorem}[lemma]{Wrong Theorem}
%\newtheorem{algorithm}{Algorithm}
\theoremstyle{THrm}{
\newtheorem{definition}{Definition}[section]
\newtheorem{question}{Question}[section]
\newtheorem{remark}{Remark}
\newtheorem{scheme}{Scheme}
}

\theoremstyle{THhit}{
\newtheorem{case}{Case}[section]
}
\algnewcommand\algorithmicinput{\textbf{Input:}}
\algnewcommand\Input{\item[\algorithmicinput]}
\algnewcommand\algorithmicoutput{\textbf{Output:}}
\algnewcommand\Output{\item[\algorithmicoutput]}

\def\theequation{\arabic{equation}}

\JOURNALNAME{\TEN{\it International Journal of High Performance Computing and Networking}}
\begin{document}

\setcounter{page}{1}

\LRH{R. Couturier, L. Ziane Khodja and C. Guyeux}

\RRH{TSIRM: A Two-Stage Iteration with least-squares Residual Minimization algorithm}

\title{TSIRM: A Two-Stage Iteration with least-squares Residual Minimization algorithm to solve large sparse linear and nonlinear systems}

\authorA{Rapha\"el Couturier}
\affA{Femto-ST Institute, University of Bourgogne Franche-Comte, France\\
E-mail: raphael.couturier@univ-fcomte.fr}

\authorB{Lilia Ziane Khodja}
\affB{LTAS-Mécanique numérique non linéaire, University of Liege, Belgium\\
E-mail: l.zianekhodja@ulg.ac.be}

\authorC{Christophe Guyeux}
\affC{Femto-ST Institute, University of Bourgogne Franche-Comte, France\\
E-mail: christophe.guyeux@univ-fcomte.fr}
\begin{abstract}
In this paper, a two-stage iterative algorithm is proposed to improve the
convergence of Krylov-based iterative methods, typically those of GMRES
variants. The principle of the proposed approach is to build an external
iteration over the Krylov method, and to frequently store its current residual
(at each GMRES restart for instance). After a given number of outer iterations,
a least-squares minimization step is applied to the matrix composed of the saved
residuals, in order to compute a better solution and to perform new iterations if
required. It is proven that the proposal has the same convergence properties
as the inner embedded method itself.

Several experiments have been performed
with the PETSc solver on linear and nonlinear problems. They show good
speedups compared to GMRES with up to 16,384 cores and with different
architectures.
\end{abstract}

\KEYWORD{Iterative Krylov methods; sparse linear and nonlinear systems; two-stage iteration; least-squares residual minimization; PETSc.}
Raphaël Couturier ....

\noindent Lilia Ziane Khodja ...

\noindent Christophe Guyeux ...
\section{Introduction}

Iterative methods have recently become more attractive than direct ones for solving
very large sparse linear systems~\cite{Saad2003}. They are more efficient in a
parallel context, supporting thousands of cores, and they require less memory
and fewer arithmetic operations than direct methods~\cite{bahicontascoutu}. This is
why new iterative methods are frequently proposed or adapted by researchers, and
the increasing need to solve very large sparse linear systems has triggered the
development of efficient iterative techniques suitable for parallel
computing.
Most of the successful iterative methods currently available are based on
so-called ``Krylov subspaces''. They consist in forming a basis of successive
matrix powers applied to an initial vector, which can be for instance the
residual. These methods exploit the orthogonality of the Krylov subspace basis
vectors in order to solve linear systems. The best known iterative Krylov subspace
methods are the conjugate gradient and GMRES (Generalized Minimal RESidual) methods.
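
For the sake of completeness, recall that the Krylov subspace of order $m$ generated by a matrix $A$ and an initial residual $r_0$ is
\begin{equation*}
\mathcal{K}_m(A,r_0) = \operatorname{span}\left\{r_0, Ar_0, A^2r_0, \ldots, A^{m-1}r_0\right\}.
\end{equation*}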

However, iterative methods suffer from scalability problems on parallel
computing platforms with many processors, due to their need for reduction
operations and for collective communications to perform matrix-vector
multiplications. On large clusters with thousands of cores, communications
involving large messages can significantly degrade the performance of these
iterative methods. As a consequence, Krylov subspace iteration methods are often
used with preconditioners in practice, to speed up their convergence and
improve their performance. However, most of the good preconditioners are
not scalable on large clusters.

In this research work, a two-stage algorithm based on two nested iterations
called inner-outer iterations is proposed. This algorithm consists in solving
the sparse linear system iteratively with a small number of inner iterations,
and restarting the outer step with a new solution that minimizes an error
function over the previous residuals. For further information on two-stage
iteration methods, interested readers are invited to
consult~\cite{Nichols:1973:CTS}. Two-stage algorithms are easy to parallelize on
large clusters. Furthermore, the least-squares minimization technique improves
their convergence and performance.
The present article is organized as follows. Related works are presented in
Section~\ref{sec:02}. Section~\ref{sec:03} details the two-stage algorithm using
a least-squares residual minimization, while Section~\ref{sec:04} provides
convergence results regarding this method. Section~\ref{sec:05} shows some
experimental results obtained on large clusters using routines of the PETSc
toolkit. This research work ends with a conclusion section, in which the proposal
is summarized and intended perspectives are provided.

%%%*********************************************************
%%%*********************************************************

%%%*********************************************************
%%%*********************************************************
\section{Related works}
\label{sec:02}

Krylov subspace iteration methods have increasingly become key
techniques for solving linear and nonlinear systems, or eigenvalue problems,
especially since the increasing development of
preconditioners~\cite{Saad2003,Meijerink77}. One reason for the popularity of
these methods is their generality, simplicity, and efficiency when solving systems of
equations arising from very large and complex problems.
GMRES is one of the most widely used Krylov iterative methods for solving sparse
and large linear systems. It has been developed by Saad \emph{et
al.}~\cite{Saad86} as a generalized method to deal with unsymmetric and
non-Hermitian problems, as well as indefinite symmetric problems. In its original
version, called full GMRES, this algorithm minimizes the residual over the
current Krylov subspace until convergence in at most $n$ iterations, where $n$
is the size of the sparse matrix. Full GMRES is however too expensive in the
case of large matrices, since the cost of the orthogonalization process per
iteration grows quadratically with the number of iterations. For that reason,
GMRES is restarted in practice every $m\ll n$ iterations, to avoid the
storage of a large orthonormal basis. However, the convergence behavior of the
restarted GMRES, called GMRES($m$), in many cases depends quite critically on
the value of $m$~\cite{Huang89}. Therefore in most cases, a preconditioning
technique is applied to the restarted GMRES method in order to improve its
convergence.
To enhance the robustness of Krylov iterative solvers, some techniques have been
proposed allowing the use of different preconditioners, if necessary, within the
iteration itself instead of restarting. Those techniques may lead to
considerable savings in CPU time and memory requirements. Van der Vorst
in~\cite{Vorst94} has for instance proposed variants of the GMRES algorithm in
which a different preconditioner is applied in each iteration, leading to the
so-called GMRESR family of nested methods. In fact, the GMRES method is
effectively preconditioned with other iterative schemes (or GMRES itself), where
the iterations of the GMRES method are called outer iterations while the
iterations of the preconditioning process are referred to as inner iterations.
Saad in~\cite{Saad:1993} has proposed Flexible GMRES (FGMRES), which is another
variant of the GMRES algorithm using a variable preconditioner. In FGMRES the
search directions are preconditioned whereas in GMRESR the residuals are
preconditioned. However, in practice, good preconditioners are those based on
direct methods, such as ILU preconditioners, which are not easy to parallelize and
suffer from scalability problems on large clusters of thousands of cores.
Recently, communication-avoiding methods have been developed to reduce the
communication overheads in Krylov subspace iterative solvers. On modern computer
architectures, communications between processors are much slower than
floating-point arithmetic operations on a given
processor. Communication-avoiding techniques reduce either communications
between processors or data movements between levels of the memory hierarchy, by
reformulating the communication-bound kernels (most frequently SpMV kernels) and
the orthogonalization operations within the Krylov iterative solver. Different
works have studied communication-avoiding techniques for the GMRES method,
the so-called CA-GMRES, on multicore processors and multi-GPU
machines~\cite{Mohiyuddin2009,Hoemmen2010,Yamazaki2014}.
Compared to all these works and to all the other works on Krylov iterative
methods, the originality of our work is to build a second iteration over a
Krylov iterative method and to minimize the residuals with a least-squares
method after a given number of outer iterations.

%%%*********************************************************
%%%*********************************************************

%%%*********************************************************
%%%*********************************************************
\section{TSIRM: Two-stage iteration with least-squares residuals minimization algorithm}
\label{sec:03}

A two-stage algorithm is proposed to solve large sparse linear systems of the
form $Ax=b$, where $A\in\mathbb{R}^{n\times n}$ is a sparse and square
nonsingular matrix, $x\in\mathbb{R}^n$ is the solution vector, and
$b\in\mathbb{R}^n$ is the right-hand side. As explained previously, the
algorithm is implemented as an inner-outer iteration solver based on iterative
Krylov methods. The main key-points of the proposed solver are given in
Algorithm~\ref{algo:01}. It can be summarized as follows: the inner solver is a
Krylov-based one, and in order to accelerate its convergence, the outer solver
periodically applies a least-squares minimization on the residuals computed by
the inner one.
At each outer iteration, the sparse linear system $Ax=b$ is partially solved
using only $m$ iterations of an iterative method, this latter being initialized
with the last obtained approximation. The GMRES method~\cite{Saad86}, or any of
its variants, can potentially be used as the inner solver. The current approximation
of the Krylov method is then stored inside an $n \times s$ matrix $S$, which is
composed of the $s$ last solutions that have been computed during the inner
iterations phase. In the remainder, the $i$-th column vector of $S$ will be
denoted by $S_i$.
Every $s$ iterations, another kind of minimization step is applied in order to
compute a new solution $x$. For that, the previous residuals of $Ax=b$ are
computed by the inner iterations with $(b-AS)$. The minimization of the
residuals is obtained by
\begin{equation}
\underset{\alpha\in\mathbb{R}^{s}}{\min}\|b-R\alpha\|_2
\label{eq:01}
\end{equation}
with $R=AS$. The new solution $x$ is then computed with $x=S\alpha$.
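
Recall that any minimizer of~\eqref{eq:01} satisfies the associated normal equations, and is unique as soon as $R$ has full column rank:
\begin{equation*}
R^T R \alpha = R^T b, \qquad \text{\emph{i.e.},} \qquad (AS)^T AS\, \alpha = (AS)^T b .
\end{equation*}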

In practice, $R$ is a dense rectangular matrix belonging to $\mathbb{R}^{n\times
s}$, with $s\ll n$. In order to minimize~\eqref{eq:01}, a least-squares
method such as CGLS~\cite{Hestenes52} or LSQR~\cite{Paige82} is used. Remark
that these methods are more appropriate than a single direct method in a
parallel context. CGLS has recently been used to improve the performance of multisplitting algorithms~\cite{cz15:ij}.
\begin{algorithm}[t]
\caption{TSIRM}
\label{algo:01}
\begin{algorithmic}[1]
\Input $A$ (sparse matrix), $b$ (right-hand side)
\Output $x$ (solution vector)\vspace{0.2cm}
\State Set the initial guess $x_0$
\For {$k=1,2,3,\ldots$ until convergence ($error<\epsilon_{tsirm}$)} \label{algo:conv}
\State $[x_k,error]=Solve(A,b,x_{k-1},max\_iter_{kryl})$ \label{algo:solve}
\State $S_{k \mod s}=x_k$ \label{algo:store} \Comment{update column ($k \mod s$) of $S$}
\If {$k \mod s=0$ {\bf and} $error>\epsilon_{kryl}$}
\State $R=AS$ \Comment{compute dense matrix} \label{algo:matrix_mul}
\State $\alpha=Least\_Squares(R,b,max\_iter_{ls})$ \label{algo:}
\State $x_k=S\alpha$ \Comment{compute new solution}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{algo:01} summarizes the principle of the proposed method. The
outer iteration is inside the \emph{for} loop. At line~\ref{algo:solve}, the Krylov
method is called for a maximum of $max\_iter_{kryl}$ iterations. In practice,
we suggest to set this parameter equal to the restart number of the GMRES-like
method. Moreover, a tolerance threshold must be specified for the solver. In
practice, this threshold must be much smaller than the convergence threshold of
the TSIRM algorithm (\emph{i.e.}, $\epsilon_{tsirm}$). We also consider that
after the call of the $Solve$ function, we obtain the vector $x_k$ and the
$error$, which is defined by $||Ax_k-b||_2$.
At line~\ref{algo:store}, $S_{k \mod s}=x_k$ consists in copying the solution
$x_k$ into the column $k \mod s$ of $S$. After the minimization, the matrix
$S$ is reused with the new values of the residuals. To solve the minimization
problem, an iterative method is used. Two parameters are required for that:
the maximum number of iterations ($max\_iter_{ls}$) and the threshold to stop
the method ($\epsilon_{ls}$).
Let us summarize the most important parameters of TSIRM:
\begin{itemize}
\item $\epsilon_{tsirm}$: the threshold that stops the TSIRM method;
\item $max\_iter_{kryl}$: the maximum number of iterations for the Krylov method;
\item $s$: the number of outer iterations before applying the minimization step;
\item $max\_iter_{ls}$: the maximum number of iterations for the iterative least-squares method;
\item $\epsilon_{ls}$: the threshold used to stop the least-squares method.
\end{itemize}

The parallelization of TSIRM relies on the parallelization of all its
parts. More precisely, except for the least-squares step, all the other parts are
straightforward to carry out in parallel. In order to develop a parallel version of
our code, we have chosen to use PETSc~\cite{petsc-web-page}. At
line~\ref{algo:matrix_mul}, the matrix-matrix multiplication is implemented and
efficient since the matrix $A$ is sparse and the matrix $S$ contains few columns
in practice. As explained previously, at least two methods seem to be
interesting for solving the least-squares minimization: the CGLS and the LSQR
methods.
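
To fix ideas, here is a minimal single-process Python sketch of
Algorithm~\ref{algo:01}. It is an illustration only, not the PETSc code used in
our experiments: the function name \texttt{tsirm}, the use of \texttt{scipy}'s
GMRES as inner solver, and the use of LSQR instead of CGLS for the minimization
step are all choices made for the example.
\begin{verbatim}
# Minimal single-process sketch of TSIRM (illustration only).
import numpy as np
from scipy.sparse.linalg import gmres, lsqr

def tsirm(A, b, m=30, s=12, eps_tsirm=1e-10,
          max_iter_ls=15, max_outer=10000):
    n = b.shape[0]
    x = np.zeros(n)
    S = np.zeros((n, s))
    for k in range(1, max_outer + 1):
        # Inner solver: one restart cycle (m iterations) of
        # GMRES, initialized with the current approximation.
        x, _ = gmres(A, b, x0=x, restart=m, maxiter=1)
        error = np.linalg.norm(b - A @ x)
        if error < eps_tsirm:
            return x
        S[:, k % s] = x          # update column (k mod s) of S
        if k % s == 0:
            R = A @ S            # dense n x s matrix
            # minimize ||b - R alpha||_2 with an iterative
            # least-squares method (here LSQR)
            alpha = lsqr(R, b, iter_lim=max_iter_ls)[0]
            x = S @ alpha        # compute new solution
    return x
\end{verbatim}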

In Algorithm~\ref{algo:02} we recall the CGLS algorithm. The LSQR method follows
more or less the same principle but it is longer, so we briefly explain
the parallelization of CGLS, which is similar to that of LSQR.
\begin{algorithm}[t]
\caption{CGLS}
\label{algo:02}
\begin{algorithmic}[1]
\Input $A$ (matrix), $b$ (right-hand side)
\Output $x$ (solution vector)\vspace{0.2cm}
\State Let $x_0$ be an initial approximation
\State $r_0=b-Ax_0$
\State $p_1=A^Tr_0$
\State $s_0=p_1$
\State $\gamma=||s_0||^2_2$
\For {$k=1,2,3,\ldots$ until convergence ($\gamma<\epsilon_{ls}$)} \label{algo2:conv}
\State $q_k=Ap_k$
\State $\alpha_k=\gamma/||q_k||^2_2$
\State $x_k=x_{k-1}+\alpha_kp_k$
\State $r_k=r_{k-1}-\alpha_kq_k$
\State $s_k=A^Tr_k$
\State $\gamma_{old}=\gamma$
\State $\gamma=||s_k||^2_2$
\State $\beta_k=\gamma/\gamma_{old}$
\State $p_{k+1}=s_k+\beta_kp_k$
\EndFor
\end{algorithmic}
\end{algorithm}
In each iteration of CGLS, there are two matrix-vector multiplications and some
classical operations: dot products, norms, multiplications, and additions on
vectors. All these operations are easy to implement in PETSc or similar
environments. It should be noticed that LSQR follows the same principle: it is a
little bit longer but it performs more or less the same operations.
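
As a complement to Algorithm~\ref{algo:02}, a direct Python transcription is
given below, again as an illustration only and under the same assumptions as
the previous sketch.
\begin{verbatim}
# CGLS: iterative least-squares solver for min ||b - A x||_2
# (direct transcription of Algorithm 2, illustration only).
import numpy as np

def cgls(A, b, x0, eps_ls=1e-40, max_iter_ls=15):
    x = x0.copy()
    r = b - A @ x            # initial residual
    p = A.T @ r              # first search direction
    s = p.copy()
    gamma = np.dot(s, s)     # gamma = ||s||_2^2
    for _ in range(max_iter_ls):
        if gamma < eps_ls:
            break
        q = A @ p                     # the two matrix-vector
        alpha = gamma / np.dot(q, q)  # products are A p, A^T r
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_old, gamma = gamma, np.dot(s, s)
        p = s + (gamma / gamma_old) * p
    return x
\end{verbatim}
In the TSIRM context, \texttt{cgls} would be called with the dense $n \times s$
matrix $R$ in place of \texttt{A}.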

%%%*********************************************************
%%%*********************************************************

\section{Convergence results}
\label{sec:04}
We can now claim that:
\begin{proposition}
\label{prop:saad}
If $A$ is either a positive definite or a positive matrix and GMRES($m$) is used as the inner solver, then the TSIRM algorithm is convergent.

Furthermore, let $r_k$ be the
$k$-th residual of TSIRM; then
we have the following bounds:
\begin{itemize}
\item when $A$ is positive:
\begin{equation*}
||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0|| ,
\end{equation*}
where $M$ is the symmetric part of $A$, $\alpha = \lambda_{min}(M)^2$ and $\beta = \lambda_{max}(A^T A)$;
\item when $A$ is positive definite:
\begin{equation*}
\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_0\|.
\end{equation*}
\end{itemize}
\end{proposition}
%In the general case, where A is not positive definite, we have
%$\|r_n\| \le \inf_{p \in P_n} \|p(A)\| \le \kappa_2(V) \inf_{p \in P_n} \max_{\lambda \in \sigma(A)} |p(\lambda)| \|r_0\|, .$

\begin{proof}
Let us first recall that the residual is under control when considering the GMRES algorithm on a positive definite matrix, and it is bounded as follows:
\begin{equation*}
\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{k/2} \|r_0\| .
\end{equation*}
Additionally, when $A$ is a positive real matrix with symmetric part $M$, the residual norm provided at the $m$-th step of GMRES satisfies:
\begin{equation*}
||r_m|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_0|| ,
\end{equation*}
where $\alpha$ and $\beta$ are defined as in Proposition~\ref{prop:saad}, which proves
the convergence of GMRES($m$) for all $m$ under such assumptions regarding $A$.
These well-known results can be found, \emph{e.g.}, in~\cite{Saad86}.
We will now prove by mathematical induction that, for each $k \in \mathbb{N}^\ast$,
$||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{mk}{2}} ||r_0||$ when $A$ is positive, and $\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_0\|$ when $A$ is positive definite.

The base case is obvious, as for $k=1$, the TSIRM algorithm simply consists in applying GMRES($m$) once, leading to a new residual $r_1$ that satisfies the claimed bound due to the results recalled above.

Suppose now that the claim holds for all $j=1, 2, \hdots, k-1$, that is, $\forall j \in \{1,2,\hdots, k-1\}$, $||r_j|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{jm}{2}} ||r_0||$ in the positive case, and $\|r_j\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{jm/2} \|r_0\|$ in the positive definite one.
We will show that the statement holds for $r_k$ too. Two situations can occur:
\begin{itemize}
\item If $k \not\equiv 0 ~(\textrm{mod}\ s)$, then the TSIRM algorithm simply consists in executing GMRES($m$) once. In that case and by using the inductive hypothesis, we obtain either $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ if $A$ is positive, or $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\|$ $\leqslant$ $\left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite case.
\item Else, the TSIRM algorithm consists in two stages: a first GMRES($m$) execution leads to a temporary $x_k$ whose residual satisfies:
\begin{itemize}
\item $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ in the positive case,
\item $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\|$ $\leqslant$ $\left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite one,
\end{itemize}
and a least-squares resolution.
Let $\operatorname{span}(S) = \left \{ {\sum_{i=1}^k \lambda_i v_i \Big| k \in \mathbb{N}, v_i \in S, \lambda _i \in \mathbb{R}} \right \}$ be the linear span of a set of real vectors $S$. So,
\begin{equation*}
\begin{aligned}
\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2 & = \min_{\alpha \in \mathbb{R}^s} ||b-AS\alpha ||_2\\
& = \min_{x \in \operatorname{span}\left(S_{k-s+1}, S_{k-s+2}, \hdots, S_{k} \right)} ||b-Ax ||_2\\
& = \min_{x \in \operatorname{span}\left(x_{k-s+1}, x_{k-s+2}, \hdots, x_{k} \right)} ||b-Ax ||_2\\
& \leqslant \min_{x \in \operatorname{span}\left( x_{k} \right)} ||b-Ax ||_2\\
& \leqslant \min_{\lambda \in \mathbb{R}} ||b-\lambda Ax_{k} ||_2\\
& \leqslant ||b-Ax_{k}||_2\\
& \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||, \textrm{ if $A$ is positive,}\\
& \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|, \textrm{ if $A$ is positive definite,}
\end{aligned}
\end{equation*}
\end{itemize}
which concludes the induction and the proof.
\end{proof}

Remark that a similar proposition can be formulated each time
the given solver satisfies an inequality of the form $||r_n|| \leqslant \mu^n ||r_0||$,
with $|\mu|<1$. Furthermore, it is \emph{a priori} possible in some particular cases
that the proposed TSIRM converges while GMRES($m$) does not.
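
To give a rough order of magnitude of the first bound, assume for instance (purely hypothetical values) that $\alpha/\beta = 0.01$ and $m=30$: each outer iteration then reduces the residual bound by a factor $\left(1-0.01\right)^{15} \approx 0.86$, so that $k$ outer iterations guarantee $||r_k|| \leqslant 0.86^k\, ||r_0||$, independently of the additional gain brought by the least-squares step.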

%%%*********************************************************
%%%*********************************************************

\section{Experiments using PETSc}
\label{sec:05}

In this section, four kinds of experiments have been performed. First, some experiments on real matrices issued from the University of Florida sparse matrix collection have been carried out. Second, some experiments in parallel with linear problems are reported and analyzed. Third, some experiments in parallel with nonlinear problems are illustrated. Finally, some parameters of TSIRM are studied in order to understand their influence.
\subsection{Real matrices}

In order to examine the behavior of our approach when considering only one processor,
a first comparison between GMRES or FGMRES and the new algorithm detailed
previously has been experimented. The matrices that have been used, with their
characteristics (names, fields, numbers of rows, and numbers of nonzero
coefficients), are detailed in
Table~\ref{tab:01}. These matrices, which come from real-world applications,
have been extracted from the Davis collection, University of
Florida~\cite{Dav97}.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|c|c|r|r|}
\hline
Matrix name & Field & \# Rows & \# Nonzeros \\\hline \hline
crashbasis & Optimization & 160,000 & 1,750,416 \\
parabolic\_fem & Comput. fluid dynamics & 525,825 & 2,100,225 \\
epb3 & Thermal problem & 84,617 & 463,625 \\
atmosmodj & Comput. fluid dynamics & 1,270,432 & 8,814,880 \\
bfwa398 & Electromagnetics pb & 398 & 3,678 \\
torso3 & 2D/3D problem & 259,156 & 4,429,042 \\
\hline
\end{tabular}
\end{center}
\caption{Main characteristics of the sparse matrices chosen from the Davis collection}
\label{tab:01}
\end{table*}
The chosen parameters are detailed below.
We have stopped GMRES every 30
iterations (\emph{i.e.}, $max\_iter_{kryl}=30$), which is the default
setting of the GMRES restart parameter. The parameter $s$ has been set to 8. CGLS
minimizes the least-squares problem with parameters
$\epsilon_{ls}=1e-40$ and $max\_iter_{ls}=20$. The external precision is set to
$\epsilon_{tsirm}=1e-10$. These experiments have been performed on an Intel(R)
Core(TM) i7-3630QM CPU @ 2.40GHz with the 3.5.1 version of PETSc.
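
For instance, with the hypothetical \texttt{tsirm} sketch of
Section~\ref{sec:03}, this setting would correspond to the following call (up
to the minimization method, which is LSQR in the sketch instead of CGLS, and
with $\epsilon_{ls}$ left to the solver's default):
\begin{verbatim}
# hypothetical call matching the parameters above
x = tsirm(A, b, m=30, s=8,
          eps_tsirm=1e-10, max_iter_ls=20)
\end{verbatim}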

The experiments comparing
a GMRES variant with TSIRM for the resolution of linear systems are given in Table~\ref{tab:02}.
The second column describes whether GMRES or FGMRES has been used for solving the linear systems.
Different preconditioners have been used according to the matrices. With TSIRM, the same
solver and the same preconditioner are used. This table shows that TSIRM can
drastically reduce the number of iterations needed to reach convergence, when the
number of iterations for the normal GMRES is more or less greater than 500. In
fact this also depends on two parameters: the number of iterations before stopping GMRES
and the number of iterations used to perform the minimization.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|c|c|r|r|r|r|}
\hline
\multirow{2}{*}{Matrix name} & Solver / & \multicolumn{2}{c|}{GMRES} & \multicolumn{2}{c|}{TSIRM CGLS} \\
\cline{3-6}
 & precond & Time & \# Iter. & Time & \# Iter. \\\hline \hline
crashbasis & gmres / none & 15.65 & 518 & 14.12 & 450 \\
parabolic\_fem & gmres / ilu & 1009.94 & 7573 & 401.52 & 2970 \\
epb3 & fgmres / sor & 8.67 & 600 & 8.21 & 540 \\
atmosmodj & fgmres / sor & 104.23 & 451 & 88.97 & 366 \\
bfwa398 & gmres / none & 1.42 & 9612 & 0.28 & 1650 \\
torso3 & fgmres / sor & 37.70 & 565 & 34.97 & 510 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison between sequential standalone (F)GMRES and TSIRM with (F)GMRES (time in seconds).}
\label{tab:02}
\end{table*}
\subsection{Parallel linear problems}

In order to perform larger experiments, we have tested some example applications
of PETSc. These applications are available in the \emph{ksp} part, which is
suited for scalable linear equation solvers:
\begin{itemize}
\item ex15 is an example that solves in parallel an operator using a finite
difference scheme. The diagonal is equal to 4 and the 4 extra-diagonals
representing the neighbors in each direction are equal to -1. This operator is
used in many physical phenomena, for example heat and fluid flow, and wave
propagation; a small-scale sketch of this operator is given after this list.
\item ex54 is another example based on a 2D problem discretized with quadrilateral
finite elements. In this example, the user can define the scaling of the material
coefficient in an embedded circle, called $\alpha$.
\end{itemize}
For more technical details on these applications, interested readers are invited
to read the codes available in the PETSc sources. These problems have been
chosen because they are scalable with many cores.
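
As an illustration of the ex15 operator, the following Python fragment builds
such a sparse matrix for a small 2D grid. It is a sketch only: the grid size is
arbitrary and the boundary handling is simplified compared to the actual PETSc
example.
\begin{verbatim}
# 2D 5-point stencil: 4 on the diagonal, -1 on the four
# neighbor diagonals (sketch of the ex15 operator).
import scipy.sparse as sp

def stencil_matrix(nx, ny):
    n = nx * ny
    main = 4.0 * sp.eye(n)
    ew = sp.eye(n, k=1) + sp.eye(n, k=-1)    # east/west
    ns = sp.eye(n, k=nx) + sp.eye(n, k=-nx)  # north/south
    return (main - ew - ns).tocsr()

A = stencil_matrix(100, 100)   # 10,000 unknowns
\end{verbatim}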

In the following, larger experiments are described on two large-scale
architectures: Curie and Juqueen. These architectures are supercomputers
composed of 80,640 cores for Curie and 458,752 cores for
Juqueen. Those machines are respectively hosted by GENCI in France and by the Jülich
Supercomputing Centre in Germany. They belong, with other similar architectures,
to the PRACE initiative (Partnership for Advanced Computing in Europe), which
aims at proposing high performance supercomputing architectures to enhance
research in Europe. The Curie architecture is composed of Intel E5-2680
processors at 2.7 GHz with 2GB of memory per core. The Juqueen architecture is
composed of IBM PowerPC A2 processors at 1.6 GHz with 1GB of memory per core. Both
architectures are equipped with a dedicated high-speed network.
In many situations, using preconditioners is essential in order to find the
solution of a linear system. There are many preconditioners available in PETSc.
However, for parallel applications, all the preconditioners based on matrix factorization
are not available. In our experiments, we have tested different kinds of
preconditioners, but as this is not the subject of this paper, we will not
present results with many preconditioners. In practice, we have chosen to use
multigrid (MG) and successive over-relaxation (SOR). For further details on the
preconditioners in PETSc, readers are referred to~\cite{petsc-web-page}.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
\hline
nb. cores & precond & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain \\
\cline{3-8}
 & & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
2,048 & MG & 403.49 & 18,210 & 73.89 & 3,060 & 77.84 & 3,270 & 5.46 \\
2,048 & SOR & 745.37 & 57,060 & 87.31 & 6,150 & 104.21 & 7,230 & 8.53 \\
4,096 & MG & 562.25 & 25,170 & 97.23 & 3,990 & 89.71 & 3,630 & 6.27 \\
4,096 & SOR & 912.12 & 70,194 & 145.57 & 9,750 & 168.97 & 10,980 & 6.26 \\
8,192 & MG & 917.02 & 40,290 & 148.81 & 5,730 & 143.03 & 5,280 & 6.41 \\
8,192 & SOR & 1,404.53 & 106,530 & 212.55 & 12,990 & 180.97 & 10,470 & 7.76 \\
16,384 & MG & 1,430.56 & 63,930 & 237.17 & 8,310 & 244.26 & 7,950 & 6.03 \\
16,384 & SOR & 2,852.14 & 216,240 & 418.46 & 21,690 & 505.26 & 23,970 & 6.82 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of FGMRES and TSIRM with FGMRES for example ex15 of PETSc/KSP with two preconditioners (MG and SOR) having 25,000 components per core on Juqueen ($\epsilon_{tsirm}=1e-3$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:03}
\end{table*}
Table~\ref{tab:03} shows the execution times and the numbers of iterations of
example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
are studied, ranging from 2,048 up to 16,384, with the two preconditioners {\it
MG} and {\it SOR}. For those experiments, the number of components (or
unknowns of the problem) per core is fixed at 25,000; this is also called weak
scaling. This number can seem relatively small. In fact, for some applications
that need a lot of memory, the number of components per processor sometimes
needs to be small. Other parameters for this application are described in
the legend of this table.

In Table~\ref{tab:03}, we can notice that TSIRM is always faster than
FGMRES. The last column shows the ratio between FGMRES and the best version of
TSIRM according to the minimization procedure: CGLS or LSQR. Even when taking
the worst case between CGLS and LSQR, TSIRM is always
faster than FGMRES. For this example, the multigrid preconditioner is faster
than SOR. The gain between TSIRM and FGMRES is more or less similar for the two
preconditioners. Looking at the number of iterations needed to reach convergence,
it is obvious that TSIRM allows the reduction of the number of iterations. It
should be noticed that for TSIRM, in those experiments, only the iterations of
the Krylov solver are taken into account. Iterations of CGLS or LSQR were not
recorded but they are time-consuming. In general, every $max\_iter_{kryl}\times s$
iterations (here $30\times 12$), there are $max\_iter_{ls}$ iterations of
the least-squares method (here 15).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{nb_iter_sec_ex15_juqueen}
\caption{Number of iterations per second with ex15 and the same parameters as in Table~\ref{tab:03} (weak scaling)}
\label{fig:01}
\end{figure}
In Figure~\ref{fig:01}, the number of iterations per second corresponding to
Table~\ref{tab:03} is displayed. It can be noticed that the number of
iterations per second of FGMRES is constant whereas it decreases with TSIRM with
both preconditioners. This can be explained by the fact that when the number of
cores increases, the time for the least-squares minimization step also increases,
but, generally, when the number of cores increases, the number of iterations needed
to reach the threshold also increases, and, in that case, TSIRM is more efficient
in reducing the number of iterations. So, the overall benefit of using TSIRM
remains clear.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
\hline
nb. cores & $\epsilon_{tsirm}$ & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain \\
\cline{3-8}
 & & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
2,048 & 8e-5 & 108.88 & 16,560 & 23.06 & 3,630 & 22.79 & 3,630 & 4.77 \\
2,048 & 6e-5 & 194.01 & 30,270 & 35.50 & 5,430 & 27.74 & 4,350 & 6.99 \\
4,096 & 7e-5 & 160.59 & 22,530 & 35.15 & 5,130 & 29.21 & 4,350 & 5.49 \\
4,096 & 6e-5 & 249.27 & 35,520 & 52.13 & 7,950 & 39.24 & 5,790 & 6.35 \\
8,192 & 6e-5 & 149.54 & 17,280 & 28.68 & 3,810 & 29.05 & 3,990 & 5.21 \\
8,192 & 5e-5 & 785.04 & 109,590 & 76.07 & 10,470 & 69.42 & 9,030 & 11.30 \\
16,384 & 4e-5 & 718.61 & 86,400 & 98.98 & 10,830 & 131.86 & 14,790 & 7.26 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of FGMRES and TSIRM with FGMRES algorithms for ex54 of PETSc/KSP (both with the MG preconditioner) with 25,000 components per core on Curie ($max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:04}
\end{table*}
In Table~\ref{tab:04}, some experiments with example ex54 on the Curie
architecture are reported. For this application, we fixed $\alpha=0.6$. As can
be seen in that table, the size of the problem has a strong influence on the
number of iterations needed to reach convergence. That is why we have preferred to
change the threshold. If we set it to $1e-3$, as with the previous application,
only one iteration is necessary to reach convergence. So Table~\ref{tab:04}
shows the results of different executions with different numbers of cores and
different thresholds. As with the previous example, we can observe that TSIRM is
faster than FGMRES. The ratio greatly depends on the number of iterations needed by
FGMRES to reach the threshold. The greater the number of iterations needed to reach
convergence is, the better the ratio between our algorithm and FGMRES is. This
experiment is also a weak scaling one, with approximately $25,000$ components per
core. It can also be observed that the difference between CGLS and LSQR is not
significant. Both can be good, but it does not seem possible to know in advance which
one will be the best.
Table~\ref{tab:05} shows a strong scaling experiment with example ex54 on the
Curie architecture. In this case, the number of unknowns is fixed at
$204,919,225$ and the number of cores ranges from $512$ to $8,192$ by powers
of two. The threshold is fixed at $5e-5$ and only the $mg$ preconditioner has
been tested. Here again we can see that TSIRM is faster than FGMRES. The
efficiency of each algorithm is reported. It can be noticed that the efficiency
of FGMRES is better than the TSIRM one except with $8,192$ cores, and that its
efficiency is greater than one whereas the efficiency of TSIRM is lower than
one. Nevertheless, TSIRM with either version of the least-squares
method is always faster. With $8,192$ cores, where the number of iterations is
far more important for FGMRES, we can see that it is only slightly more
important for TSIRM.
In Figure~\ref{fig:02} we report the number of iterations per second for the
experiments reported in Table~\ref{tab:05}. This figure highlights that the
number of iterations per second is more or less the same for FGMRES and TSIRM,
with a little advantage for FGMRES. This can be explained by the fact that, as we
have previously explained, the iterations of the least-squares steps are not
taken into account for TSIRM.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain & \multicolumn{3}{c|}{efficiency} \\
\cline{2-7} \cline{9-11}
 & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & & FGMRES & TS CGLS & TS LSQR\\\hline \hline
512 & 3,969.69 & 33,120 & 709.57 & 5,790 & 622.76 & 5,070 & 6.37 & 1 & 1 & 1 \\
1024 & 1,530.06 & 25,860 & 290.95 & 4,830 & 307.71 & 5,070 & 5.25 & 1.30 & 1.21 & 1.01 \\
2048 & 919.62 & 31,470 & 237.52 & 8,040 & 194.22 & 6,510 & 4.73 & 1.08 & .75 & .80\\
4096 & 405.60 & 28,380 & 111.67 & 7,590 & 91.72 & 6,510 & 4.42 & 1.22 & .79 & .84 \\
8192 & 785.04 & 109,590 & 76.07 & 10,470 & 69.42 & 9,030 & 11.30 & .32 & .58 & .56 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of FGMRES and TSIRM for ex54 of PETSc/KSP (both with the MG preconditioner) with 204,919,225 components on Curie with different numbers of cores ($\epsilon_{tsirm}=5e-5$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:05}
\end{table*}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{nb_iter_sec_ex54_curie}
\caption{Number of iterations per second with ex54 and the same parameters as in Table~\ref{tab:05} (strong scaling)}
\label{fig:02}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{nb_iter_sec_ex45_curie}
\caption{Number of iterations per second with ex45 and the same parameters as in Table~\ref{tab:06} (weak scaling)}
\label{fig:03}
\end{figure}

Example ex45 of PETSc/KSP has also been tested; Table~\ref{tab:06} reports the
results with 5,000 components per core on Curie. Both FGMRES and TSIRM use the
ASM preconditioner; as a reference, FGMRES with the HYPRE preconditioner is
also reported. On this example HYPRE behaves very badly: it converges in very
few iterations, but each iteration is so time-consuming that the execution
times exceed 1,000 seconds beyond 2,048 cores. On the contrary, TSIRM with ASM
is always the fastest, and its gain over FGMRES/ASM increases with the number
of cores (from 2.21 with 512 cores to 5.84 with 8,192 cores).
Figure~\ref{fig:03} displays the corresponding numbers of iterations per
second.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES/ASM} & \multicolumn{2}{c|}{TSIRM CGLS/ASM} & gain & \multicolumn{2}{c|}{FGMRES/HYPRE} \\
\cline{2-5} \cline{7-8}
 & Time & \# Iter. & Time & \# Iter. & & Time & \# Iter. \\\hline \hline
512 & 5.54 & 685 & 2.5 & 570 & 2.21 & 128.9 & 9 \\
2048 & 14.95 & 1,560 & 4.32 & 746 & 3.48 & 335.7 & 9 \\
4096 & 25.13 & 2,369 & 5.61 & 859 & 4.48 & >1000 & -- \\
8192 & 44.35 & 3,197 & 7.6 & 1083 & 5.84 & >1000 & -- \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of FGMRES and TSIRM for ex45 of PETSc/KSP with two preconditioners (ASM and HYPRE) having 5,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:06}
\end{table*}
\subsection{Parallel nonlinear problems}

With PETSc, linear solvers are used inside nonlinear solvers. The SNES
(Scalable Nonlinear Equations Solvers) module of PETSc implements easy-to-use
methods, like Newton-type, quasi-Newton or full approximation scheme (FAS)
multigrid, to solve systems of nonlinear equations. As SNES is based on the
Krylov methods of PETSc, it is interesting to investigate whether the TSIRM method is
also efficient and scalable with nonlinear problems. Some examples
are provided with PETSc. An important criterion is the scalability of the initial code with
classical solvers. Consequently, we have chosen two of these examples: ex14 and
ex20. In ex14, the code solves the Bratu (SFI, solid fuel ignition) nonlinear
partial differential equations in 3 dimensions. In ex20, the code solves a
3-dimensional radiative transport test problem. For more details on these examples,
interested readers are invited to see the code in the PETSc examples. For both
of these examples, a weak scaling case is chosen, where each processor has
approximately 100,000 components.
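
To make the nesting explicit, the following minimal Python sketch of a
Newton-type iteration shows where the linear solver, and thus TSIRM, intervenes.
It assumes user-provided callables for the nonlinear residual $F(x)$ and the
Jacobian $J(x)$, and reuses the hypothetical \texttt{tsirm} function sketched
earlier; it is not the SNES implementation itself.
\begin{verbatim}
# Newton's method: each nonlinear step solves a linear
# system with the Jacobian; TSIRM acts at this inner level.
import numpy as np

def newton(F, J, x0, tol=1e-8, max_newton=50):
    x = x0.copy()
    for _ in range(max_newton):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        # inner linear solve J(x) d = -F(x), performed with
        # the tsirm() sketch instead of a plain Krylov solver
        d = tsirm(J(x), -f)
        x += d
    return x
\end{verbatim}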

In Table~\ref{tab:07} we report the results of our experiments for the example
ex14 with the block Jacobi preconditioner. For TSIRM, the CGLS algorithm is used
to solve the minimization step. In this table, we can see that the number of
iterations used by the linear solver is smaller with TSIRM compared with FGMRES.
Consequently the execution times are smaller with TSIRM. The gain between TSIRM
and FGMRES is around 6 and 7. The parameters of TSIRM are given in the
caption of the table.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
\cline{2-5}
 & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
1,024 & 159.52 & 11,584 & 26.34 & 1,563 & 6.06 \\
2,048 & 226.24 & 16,459 & 37.23 & 2,248 & 6.08\\
4,096 & 391.21 & 27,794 & 50.93 & 2,911 & 7.69\\
8,192 & 543.23 & 37,770 & 79.21 & 4,324 & 6.86 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of FGMRES and TSIRM for ex14 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:07}
\end{table*}
In Table~\ref{tab:08}, the results of the experiments with the example ex20 are
reported. The block Jacobi preconditioner has also been used, and CGLS is used to solve
the minimization step of TSIRM. For this example, we can observe that the number
of iterations for FGMRES increases drastically when the number of cores
increases. With TSIRM, we can see that the number of iterations is initially
very small compared to the FGMRES one, and when the number of cores increases,
the number of iterations increases more slowly with TSIRM than with FGMRES. For
this example, the gain between TSIRM and FGMRES ranges from 8 with 1,024
cores to more than 16 with 8,192 cores.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
\cline{2-5}
 & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
1,024 & 667.92 & 48,732 & 81.65 & 5,087 & 8.18 \\
2,048 & 966.87 & 77,177 & 90.34 & 5,716 & 10.70\\
4,096 & 1,742.31 & 124,411 & 119.21 & 6,905 & 14.61\\
8,192 & 2,739.21 & 187,626 & 168.9 & 9,000 & 16.22\\
\hline
\end{tabular}
\end{center}
\caption{Comparison of FGMRES and TSIRM for ex20 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:08}
\end{table*}
\subsection{Influence of parameters for TSIRM}

Figures~\ref{fig:cgls-iter} and~\ref{fig:cgls-time} compare the two methods
used for the minimization step, LSQR and CGLS, in terms of total number of
iterations and of execution time.

\begin{figure}[htbp]
\centering
\includegraphics[angle=-90,width=0.5\textwidth]{ksp_tsirm_cgls_iter_total}
\caption{Number of total iterations using two different methods for the minimization: LSQR and CGLS.}
\label{fig:cgls-iter}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[angle=-90,width=0.5\textwidth]{ksp_tsirm_cgls_time}
\caption{Execution time in seconds using two different methods for the minimization: LSQR and CGLS.}
\label{fig:cgls-time}
\end{figure}
\subsection{Experiments conclusions}

Concerning the experiments, some other remarks are interesting.
\begin{itemize}
\item We have tested other examples of PETSc/KSP (ex29, ex45, ex49). For all these
examples, we have also obtained similar gains between GMRES and TSIRM, but
those examples are not scalable with many cores. In general, we had some
problems with more than $4,096$ cores.
\item We have tested many iterative solvers available in PETSc. In fact, it is
possible to use most of them with TSIRM. From our point of view, the condition
for using a solver inside TSIRM is that the solver must have a restart
feature. More precisely, the solver must support being stopped and restarted
without degrading its convergence. That is why with GMRES we stop it when it
is naturally restarted (\emph{i.e.}, with $m$ the restart parameter). The
Conjugate Gradient (CG) method and all its variants do not have restarted versions
in PETSc, so they are not efficient. They will converge with TSIRM but not
quickly, because if we compare a normal CG with a CG which is stopped and
restarted every 16 iterations (for example), the normal CG will be far more
efficient. Some restarted CG or CG variant versions exist and may be
interesting to study in future works.
\end{itemize}
%%%*********************************************************
%%%*********************************************************
\section{Conclusion}
%The conclusion goes here. this is more of the conclusion
%%%*********************************************************
%%%*********************************************************

In this paper a new two-stage algorithm, TSIRM, has been described. This method allows us to improve the convergence of Krylov iterative methods. It is based
on a least-squares minimization step which uses the Krylov residuals.

We have implemented our code in PETSc in order to show that it is efficient and scalable. Some experiments with classical examples of PETSc for linear and nonlinear problems have been performed. We observed that TSIRM outperforms GMRES variants when the number of iterations is large. TSIRM is also scalable, since we have run experiments with up to 16,384 cores.

We also observed that TSIRM is efficient with different preconditioners. The hypre preconditioner, which is globally very efficient for many problems, is also very time-consuming. Consequently, sometimes using a less powerful but cheaper preconditioner may be a better solution. In that case, TSIRM is also more efficient than traditional Krylov methods.

The influence of some important parameters of TSIRM has been studied. It can be noticed that they have a strong influence on the convergence speed.

In future works, we plan to study other problems coming from different research areas. Other efficient Krylov optimization methods, such as communication-avoiding techniques, may also be interesting to investigate.
% use section* for acknowledgement
%%%*********************************************************
%%%*********************************************************
\section*{Acknowledgment}
This paper is partially funded by the Labex ACTION program (contract
ANR-11-LABX-01-01). We acknowledge PRACE for awarding us access to the resources
Curie and Juqueen, respectively based in France and Germany.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\bibliographystyle{unsrt}
\bibliography{biblio}

\end{document}