%%%%%%%%%%%%%%%%%%%%%%
\documentclass{doublecol-new}
%%%%%%%%%%%%%%%%%%%%%%
\usepackage{natbib,stfloats}
\usepackage{mathrsfs}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{multirow}
\usepackage{graphicx}

\def\newblock{\hskip .11em plus .33em minus .07em}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}[lemma]{Theorem}
\newtheorem{corrolary}[lemma]{Corollary}
\newtheorem{conjecture}[lemma]{Conjecture}
\newtheorem{proposition}[lemma]{Proposition}
\newtheorem{claim}[lemma]{Claim}
\newtheorem{stheorem}[lemma]{Wrong Theorem}
%\newtheorem{algorithm}{Algorithm}

\theoremstyle{THrm}{
\newtheorem{definition}{Definition}[section]
\newtheorem{question}{Question}[section]
\newtheorem{remark}{Remark}
\newtheorem{scheme}{Scheme}
}

\theoremstyle{THhit}{
\newtheorem{case}{Case}[section]
}
\algnewcommand\algorithmicinput{\textbf{Input:}}
\algnewcommand\Input{\item[\algorithmicinput]}
\algnewcommand\algorithmicoutput{\textbf{Output:}}
\algnewcommand\Output{\item[\algorithmicoutput]}

\def\theequation{\arabic{equation}}
\JOURNALNAME{\TEN{\it International Journal of High Performance Computing and Networking}}

\setcounter{page}{1}
\LRH{R. Couturier et~al.}

\RRH{TSIRM: a two-stage iteration with least-squares residual minimization}
\title{TSIRM: A Two-Stage Iteration with least-squares Residual Minimization algorithm to solve large sparse linear and nonlinear systems}

\authorA{Rapha\"el Couturier}

\affA{Femto-ST Institute, University of Bourgogne Franche-Comte, France\\
E-mail: raphael.couturier@univ-fcomte.fr}

\authorB{Lilia Ziane Khodja}

\affB{LTAS-Mécanique numérique non linéaire, University of Liege, Belgium\\
E-mail: l.zianekhodja@ulg.ac.be}

\authorC{Christophe Guyeux}

\affC{Femto-ST Institute, University of Bourgogne Franche-Comte, France\\
E-mail: christophe.guyeux@univ-fcomte.fr}
In this paper, a two-stage iterative algorithm is proposed to improve the
convergence of Krylov based iterative methods, typically those of GMRES
variants. The principle of the proposed approach is to build an external
iteration over the Krylov method, and to frequently store its current residual
(at each GMRES restart for instance). After a given number of outer iterations,
a least-squares minimization step is applied on the matrix composed of the saved
residuals, in order to compute a better solution and to start new iterations if
required. It is proven that the proposal has the same convergence properties
as the inner embedded method itself.

Several experiments have been performed
with the PETSc solver with linear and nonlinear problems. They show good
speedups compared to GMRES with up to 16,384 cores with different
architectures.
\KEYWORD{Iterative Krylov methods; sparse linear and nonlinear systems; two-stage iteration; least-squares residual minimization; PETSc.}
Raphaël Couturier ....

\noindent Lilia Ziane Khodja ...

\noindent Christophe Guyeux ...
\section{Introduction}

Iterative methods have recently become more attractive than direct ones to solve
very large sparse linear systems~\cite{Saad2003}. They are more efficient in a
parallel context, supporting thousands of cores, and they require less memory
and arithmetic operations than direct methods~\cite{bahicontascoutu}. This is
why new iterative methods are frequently proposed or adapted by researchers, and
the increasing need to solve very large sparse linear systems has triggered the
development of efficient iterative techniques suitable for parallel processing.

Most of the successful iterative methods currently available are based on
so-called ``Krylov subspaces''. They consist in forming a basis of successive
matrix powers multiplied by an initial vector, which can be for instance the
residual. These methods exploit the orthogonality of the Krylov subspace basis
vectors in order to solve linear systems. The best known iterative Krylov
subspace methods are the Conjugate Gradient and GMRES (Generalized Minimal
RESidual) methods.

However, iterative methods suffer from scalability problems on parallel
computing platforms with many processors, due to their need for reduction
operations and for collective communications to perform matrix-vector
multiplications. On large clusters with thousands of cores, communications
involving large messages can significantly degrade the performance of these
iterative methods. As a consequence, Krylov subspace iteration methods are often
used with preconditioners in practice, to improve their convergence and
accelerate their performance. However, most of the good preconditioners are
not scalable on large clusters.

In this research work, a two-stage algorithm based on two nested iterations,
called inner and outer iterations, is proposed. This algorithm consists in
solving the sparse linear system iteratively with a small number of inner
iterations, and restarting the outer step with a new solution minimizing some
error function over some previous residuals. For further information on
two-stage iteration methods, interested readers are invited to
consult~\cite{Nichols:1973:CTS}. Two-stage algorithms are easy to parallelize on
large clusters. Furthermore, the least-squares minimization technique improves
the convergence and performance of the overall method.

The present article is organized as follows. Related works are presented in
Section~\ref{sec:02}. Section~\ref{sec:03} details the two-stage algorithm using
a least-squares residual minimization, while Section~\ref{sec:04} provides
convergence results regarding this method. Section~\ref{sec:05} shows some
experimental results obtained on large clusters using routines of the PETSc
toolkit. This article ends with a conclusion section, in which the proposal
is summarized and intended perspectives are provided.
%%%*********************************************************
%%%*********************************************************

%%%*********************************************************
%%%*********************************************************
\section{Related works}
\label{sec:02}

Krylov subspace iteration methods have increasingly become key
techniques for solving linear and nonlinear systems, or eigenvalue problems,
especially since the increasing development of
preconditioners~\cite{Saad2003,Meijerink77}. One reason for the popularity of
these methods is their generality, simplicity, and efficiency to solve systems of
equations arising from very large and complex problems.

GMRES is one of the most widely used Krylov iterative methods for solving sparse
and large linear systems. It has been developed by Saad \emph{et
al.}~\cite{Saad86} as a generalized method to deal with unsymmetric and
non-Hermitian problems, as well as indefinite symmetric problems. In its original
version, called full GMRES, this algorithm minimizes the residual over the
current Krylov subspace until convergence in at most $n$ iterations, where $n$
is the size of the sparse matrix. Full GMRES is however too expensive in the
case of large matrices, since the cost of the orthogonalization process
grows quadratically with the number of iterations. For that reason,
GMRES is restarted in practice every $m\ll n$ iterations, to avoid the
storage of a large orthonormal basis. However, the convergence behavior of the
restarted GMRES, called GMRES($m$), in many cases depends quite critically on
the value of $m$~\cite{Huang89}. Therefore in most cases, a preconditioning
technique is applied to the restarted GMRES method in order to improve its
convergence.

To enhance the robustness of Krylov iterative solvers, some techniques have been
proposed allowing the use of different preconditioners, if necessary, within the
iteration itself instead of restarting. Those techniques may lead to
considerable savings in CPU time and memory requirements. Van der Vorst
in~\cite{Vorst94} has for instance proposed variants of the GMRES algorithm in
which a different preconditioner is applied in each iteration, leading to the
so-called GMRESR family of nested methods. In fact, the GMRES method is
effectively preconditioned with other iterative schemes (or GMRES itself), where
the iterations of the GMRES method are called outer iterations while the
iterations of the preconditioning process are referred to as inner iterations.
Saad in~\cite{Saad:1993} has proposed Flexible GMRES (FGMRES), which is another
variant of the GMRES algorithm using a variable preconditioner. In FGMRES the
search directions are preconditioned, whereas in GMRESR the residuals are
preconditioned. However, in practice, good preconditioners are those based on
direct methods, such as ILU preconditioners, which are not easy to parallelize
and suffer from scalability problems on large clusters of thousands of cores.

Recently, communication-avoiding methods have been developed to reduce the
communication overheads in Krylov subspace iterative solvers. On modern computer
architectures, communications between processors are much slower than
floating-point arithmetic operations on a given
processor. Communication-avoiding techniques reduce either communications
between processors or data movements between levels of the memory hierarchy, by
reformulating the communication-bound kernels (most frequently SpMV kernels) and
the orthogonalization operations within the Krylov iterative solver. Different
works have studied communication-avoiding techniques for the GMRES method,
the so-called CA-GMRES, on multicore processors and multi-GPU
machines~\cite{Mohiyuddin2009,Hoemmen2010,Yamazaki2014}.

Compared to all these works and to all the other works on Krylov iterative
methods, the originality of our work is to build a second iteration over a
Krylov iterative method and to minimize the residuals with a least-squares
method after a given number of outer iterations.
%%%*********************************************************
%%%*********************************************************

%%%*********************************************************
%%%*********************************************************
\section{TSIRM: Two-stage iteration with least-squares residuals minimization algorithm}
\label{sec:03}

A two-stage algorithm is proposed to solve large sparse linear systems of the
form $Ax=b$, where $A\in\mathbb{R}^{n\times n}$ is a sparse and square
nonsingular matrix, $x\in\mathbb{R}^n$ is the solution vector, and
$b\in\mathbb{R}^n$ is the right-hand side. As explained previously, the
algorithm is implemented as an inner-outer iteration solver based on iterative
Krylov methods. The main key points of the proposed solver are given in
Algorithm~\ref{algo:01}. It can be summarized as follows: the inner solver is a
Krylov based one, and in order to accelerate its convergence, the outer solver
periodically applies a least-squares minimization on the residuals computed by
the inner solver.

At each outer iteration, the sparse linear system $Ax=b$ is partially solved
using only $m$ iterations of an iterative method, this latter being initialized
with the last obtained approximation. The GMRES method~\cite{Saad86}, or any of
its variants, can potentially be used as inner solver. The current approximation
of the Krylov method is then stored inside an $n \times s$ matrix $S$, which is
composed of the $s$ last solutions that have been computed during the inner
iterations phase. In the remainder, the $i$-th column vector of $S$ will be
denoted by $S_i$.

Every $s$ iterations, another kind of minimization step is applied in order to
compute a new solution $x$. For that, the previous residuals of $Ax=b$ are
computed by the inner iterations with $(b-AS)$. The minimization of the
residuals is obtained by
\begin{equation}
\underset{\alpha\in\mathbb{R}^{s}}{\min}\|b-R\alpha\|_2
\label{eq:01}
\end{equation}
with $R=AS$. The new solution $x$ is then computed with $x=S\alpha$.

In practice, $R$ is a dense rectangular matrix belonging to $\mathbb{R}^{n\times
s}$, with $s\ll n$. In order to minimize~\eqref{eq:01}, a least-squares
method such as CGLS~\cite{Hestenes52} or LSQR~\cite{Paige82} is used. Note
that these methods are more appropriate than a direct method in a
parallel context. CGLS has recently been used to improve the performance of multisplitting algorithms~\cite{cz15:ij}.

\begin{algorithm}[t]
\caption{TSIRM}
\label{algo:01}
\begin{algorithmic}[1]
\Input $A$ (sparse matrix), $b$ (right-hand side)
\Output $x$ (solution vector)\vspace{0.2cm}
\State Set the initial guess $x_0$
\For {$k=1,2,3,\ldots$ until convergence ($error<\epsilon_{tsirm}$)} \label{algo:conv}
\State $[x_k,error]=Solve(A,b,x_{k-1},max\_iter_{kryl})$ \label{algo:solve}
\State $S_{k \mod s}=x_k$ \label{algo:store} \Comment{update column ($k \mod s$) of $S$}
\If {$k \mod s=0$ {\bf and} $error>\epsilon_{kryl}$}
\State $R=AS$ \Comment{compute dense matrix} \label{algo:matrix_mul}
\State $\alpha=Least\_Squares(R,b,max\_iter_{ls})$ \label{algo:ls}
\State $x_k=S\alpha$ \Comment{compute new solution}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{algo:01} summarizes the principle of the proposed method. The
outer iteration is inside the \emph{for} loop. At line~\ref{algo:solve}, the
Krylov method is called for a maximum of $max\_iter_{kryl}$ iterations. In
practice, we suggest setting this parameter equal to the restart number of the
GMRES-like method. Moreover, a tolerance threshold must be specified for the
solver. In practice, this threshold must be much smaller than the convergence
threshold of the TSIRM algorithm (\emph{i.e.}, $\epsilon_{tsirm}$). We also
consider that after the call of the $Solve$ function, we obtain the vector
$x_k$ and the $error$, which is defined by $||Ax_k-b||_2$.

At line~\ref{algo:store}, $S_{k \mod s}=x_k$ consists in copying the solution
$x_k$ into the column $k \mod s$ of $S$. After the minimization, the matrix
$S$ is reused with the new values of the residuals. To solve the minimization
problem, an iterative method is used. Two parameters are required for that:
the maximum number of iterations ($max\_iter_{ls}$) and the threshold used to
stop the method ($\epsilon_{ls}$).
Let us summarize the most important parameters of TSIRM:
\begin{itemize}
\item $\epsilon_{tsirm}$: the threshold that stops the TSIRM method;
\item $max\_iter_{kryl}$: the maximum number of iterations for the Krylov method;
\item $s$: the number of outer iterations before applying the minimization step;
\item $max\_iter_{ls}$: the maximum number of iterations for the iterative least-squares method;
\item $\epsilon_{ls}$: the threshold used to stop the least-squares method.
\end{itemize}
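
Putting these pieces together, a minimal sketch of the whole TSIRM loop is
given below, again in Python/SciPy and for illustration only: SciPy's restarted
GMRES plays the role of the inner $Solve$ function, and the tolerance handling
of a real implementation is simplified.

\begin{verbatim}
# Minimal sketch of Algorithm 1 (TSIRM), assuming NumPy/SciPy;
# the paper's actual implementation relies on PETSc.
import numpy as np
from scipy.sparse.linalg import gmres, lsqr

def tsirm(A, b, s=8, max_iter_kryl=30,
          eps_tsirm=1e-10, max_outer=10000):
    n = b.shape[0]
    x = np.zeros(n)                  # initial guess x_0
    S = np.zeros((n, s))
    for k in range(1, max_outer + 1):
        # inner stage: one GMRES cycle of at most
        # max_iter_kryl iterations, warm-started from x
        x, _ = gmres(A, b, x0=x,
                     restart=max_iter_kryl, maxiter=1)
        error = np.linalg.norm(b - A @ x)
        S[:, k % s] = x              # store current iterate
        if error < eps_tsirm:        # TSIRM convergence
            break
        if k % s == 0:               # minimization stage
            R = A @ S
            alpha = lsqr(R, b)[0]
            x = S @ alpha
    return x
\end{verbatim}

Note that, as in Algorithm~\ref{algo:01}, the least-squares stage is triggered
every $s$ outer iterations only.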

The parallelization of TSIRM relies on the parallelization of all its
parts. More precisely, except for the least-squares step, all the other parts
are straightforward to carry out in parallel. In order to develop a parallel
version of our code, we have chosen to use PETSc~\cite{petsc-web-page}. At
line~\ref{algo:matrix_mul}, the matrix-matrix multiplication is implemented and
efficient since the matrix $A$ is sparse and the matrix $S$ contains few columns
in practice. As explained previously, at least two methods seem appropriate
to solve the least-squares minimization: CGLS and LSQR.

In Algorithm~\ref{algo:02} we recall the CGLS algorithm. The LSQR method follows
more or less the same principle, but it is longer, so we only explain
the parallelization of CGLS, which is similar to that of LSQR.
\begin{algorithm}[t]
\caption{CGLS}
\label{algo:02}
\begin{algorithmic}[1]
\Input $A$ (matrix), $b$ (right-hand side)
\Output $x$ (solution vector)\vspace{0.2cm}
\State Let $x_0$ be an initial approximation
\State $r_0=b-Ax_0$
\State $p_1=A^Tr_0$
\State $s_0=p_1$
\State $\gamma=||s_0||^2_2$
\For {$k=1,2,3,\ldots$ until convergence ($\gamma<\epsilon_{ls}$)} \label{algo2:conv}
\State $q_k=Ap_k$
\State $\alpha_k=\gamma/||q_k||^2_2$
\State $x_k=x_{k-1}+\alpha_kp_k$
\State $r_k=r_{k-1}-\alpha_kq_k$
\State $s_k=A^Tr_k$
\State $\gamma_{old}=\gamma$
\State $\gamma=||s_k||^2_2$
\State $\beta_k=\gamma/\gamma_{old}$
\State $p_{k+1}=s_k+\beta_kp_k$
\EndFor
\end{algorithmic}
\end{algorithm}
At each iteration of CGLS, there are two matrix-vector multiplications and some
classical operations: dot products, norms, multiplications, and additions on
vectors. All these operations are easy to implement in PETSc or similar
environments. It should be noticed that LSQR follows the same principle: it is a
little bit longer, but it performs more or less the same operations.
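
For completeness, a direct transcription of Algorithm~\ref{algo:02} in
Python/NumPy is sketched below; it is an illustration of ours, whereas the
experiments use the parallel PETSc implementation of these operations:

\begin{verbatim}
# Minimal sketch of Algorithm 2 (CGLS), assuming NumPy.
import numpy as np

def cgls(A, b, x0, eps_ls=1e-40, max_iter_ls=20):
    x = x0.copy()
    r = b - A @ x              # r_0 = b - A x_0
    s = A.T @ r                # s_0 = A^T r_0
    p = s.copy()               # p_1 = s_0
    gamma = s @ s              # gamma = ||s_0||_2^2
    for _ in range(max_iter_ls):
        if gamma < eps_ls:     # convergence test
            break
        q = A @ p              # q_k = A p_k
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_old = gamma
        gamma = s @ s
        beta = gamma / gamma_old
        p = s + beta * p
    return x
\end{verbatim}

The two matrix-vector products ($Ap_k$ and $A^Tr_k$) are the only operations
involving $A$, which is what makes the method easy to parallelize.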

%%%*********************************************************
%%%*********************************************************
\section{Convergence results}
\label{sec:04}

We can now claim that:
\begin{proposition}
\label{prop:saad}
If $A$ is either a positive or a positive definite matrix and GMRES($m$) is used as the inner solver, then the TSIRM algorithm is convergent.

Furthermore, let $r_k$ be the $k$-th residual of TSIRM; then the following bounds hold:
\begin{itemize}
\item when $A$ is positive:
\begin{equation*}
||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0|| ,
\end{equation*}
where $M$ is the symmetric part of $A$, $\alpha = \lambda_{min}(M)^2$, and $\beta = \lambda_{max}(A^T A)$;
\item when $A$ is positive definite:
\begin{equation*}
\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_0\|.
\end{equation*}
\end{itemize}
\end{proposition}
%In the general case, where A is not positive definite, we have
%$\|r_n\| \le \inf_{p \in P_n} \|p(A)\| \le \kappa_2(V) \inf_{p \in P_n} \max_{\lambda \in \sigma(A)} |p(\lambda)| \|r_0\|, .$
\begin{proof}
Let us first recall that the residual is under control when considering the GMRES algorithm on a positive definite matrix, and it is bounded as follows:
\begin{equation*}
\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{k/2} \|r_0\| .
\end{equation*}
Additionally, when $A$ is a positive real matrix with symmetric part $M$, the residual norm provided at the $m$-th step of GMRES satisfies
\begin{equation*}
||r_m|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_0|| ,
\end{equation*}
where $\alpha$ and $\beta$ are defined as in Proposition~\ref{prop:saad}, which proves
the convergence of GMRES($m$) for all $m$ under such assumptions regarding $A$.
These well-known results can be found, \emph{e.g.}, in~\cite{Saad86}.
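
As a toy illustration of this bound (an example of ours, not taken from the
experiments), consider $A=\operatorname{diag}(1,2)$ and $m=30$: then $M=A$,
$\alpha=\lambda_{min}(M)^2=1$ and $\beta=\lambda_{max}(A^TA)=4$, so the bound
guarantees
\begin{equation*}
\|r_{30}\| \leqslant \left(1-\frac{1}{4}\right)^{\frac{30}{2}} \|r_0\| = 0.75^{15}\,\|r_0\| \approx 1.3\times 10^{-2}\,\|r_0\|
\end{equation*}
after a single GMRES($30$) cycle, a rate that TSIRM inherits across its outer
iterations.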

We will now prove by mathematical induction that, for each $k \in \mathbb{N}^\ast$,
$||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ when $A$ is positive, and $\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_0\|$ when $A$ is positive definite.

The base case is obvious: for $k=1$, the TSIRM algorithm simply consists in applying GMRES($m$) once, leading to a new residual $r_1$ that satisfies the claimed bound due to the results recalled above.

Suppose now that the claim holds for all $j=1, 2, \hdots, k-1$, that is, $\forall j \in \{1,2,\hdots, k-1\}$, $||r_j|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{jm}{2}} ||r_0||$ in the positive case, and $\|r_j\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{jm/2} \|r_0\|$ in the positive definite one.
We will show that the statement holds for $r_k$ too. Two situations can occur:
\begin{itemize}
\item If $k \not\equiv 0 ~(\textrm{mod}\ s)$, then the TSIRM algorithm consists in executing GMRES once. In that case and by using the inductive hypothesis, we obtain either $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ if $A$ is positive, or $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite case.
\item Else, the TSIRM algorithm consists in two stages: a first GMRES($m$) execution leads to a temporary $x_k$ whose residual satisfies:
\begin{itemize}
\item $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ in the positive case,
\item $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite one,
\end{itemize}
and a least-squares resolution.
Let $\operatorname{span}(S) = \left \{ {\sum_{i=1}^k \lambda_i v_i \Big| k \in \mathbb{N}, v_i \in S, \lambda _i \in \mathbb{R}} \right \}$ be the linear span of a set of real vectors $S$. Then,
\begin{equation*}
\begin{array}{ll}
\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2 & = \min_{\alpha \in \mathbb{R}^s} ||b-AS\alpha ||_2\\
& = \min_{x \in \operatorname{span}\left(S_{k-s+1}, S_{k-s+2}, \hdots, S_{k} \right)} ||b-Ax ||_2\\
& = \min_{x \in \operatorname{span}\left(x_{k-s+1}, x_{k-s+2}, \hdots, x_{k} \right)} ||b-Ax ||_2\\
& \leqslant \min_{x \in \operatorname{span}\left( x_{k} \right)} ||b-Ax ||_2\\
& \leqslant \min_{\lambda \in \mathbb{R}} ||b-\lambda Ax_{k} ||_2\\
& \leqslant ||b-Ax_{k}||_2 = ||r_k||_2\\
& \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||, \textrm{ if $A$ is positive,}\\
& \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|, \textrm{ if $A$ is positive definite,}
\end{array}
\end{equation*}
\end{itemize}
which concludes the induction and the proof.
\end{proof}

Remark that a similar proposition can be formulated each time
the given inner solver satisfies an inequality of the form $||r_n|| \leqslant \mu^n ||r_0||$
with $|\mu|<1$: indeed, the least-squares step can only decrease the residual
norm, since the previous iterate belongs to the minimization search space.
Furthermore, it is \emph{a priori} possible in some particular cases
that the proposed TSIRM converges while GMRES($m$) does not.
%%%*********************************************************
%%%*********************************************************
\section{Experiments using PETSc}
\label{sec:05}
In order to see the behavior of our approach when considering only one
processor, a first set of experiments comparing GMRES or FGMRES with the new
algorithm detailed previously has been performed. The matrices used, together
with their main characteristics (name, field, number of rows, and number of
nonzero coefficients), are detailed in Table~\ref{tab:01}. These real-world
application matrices have been extracted from the Davis collection of the
University of Florida~\cite{Dav97}.
\begin{table*}[htbp]
\centering
\begin{tabular}{|c|c|r|r|}
\hline
Matrix name & Field &\# Rows & \# Nonzeros \\\hline \hline
crashbasis & Optimization & 160,000 & 1,750,416 \\
parabolic\_fem & Comput. fluid dynamics & 525,825 & 2,100,225 \\
epb3 & Thermal problem & 84,617 & 463,625 \\
atmosmodj & Comput. fluid dynamics & 1,270,432 & 8,814,880 \\
bfwa398 & Electromagnetics pb & 398 & 3,678 \\
torso3 & 2D/3D problem & 259,156 & 4,429,042 \\
\hline
\end{tabular}
\caption{Main characteristics of the sparse matrices chosen from the Davis collection}
\label{tab:01}
\end{table*}
The chosen parameters are detailed below. GMRES is restarted every 30
iterations (\emph{i.e.}, $max\_iter_{kryl}=30$), which is the default
setting of the GMRES restart parameter. The parameter $s$ has been set to 8. CGLS
minimizes the least-squares problem with the parameters
$\epsilon_{ls}=1e-40$ and $max\_iter_{ls}=20$. The external precision is set to
$\epsilon_{tsirm}=1e-10$. These experiments have been performed on an Intel(R)
Core(TM) i7-3630QM CPU @ 2.40GHz with version 3.5.1 of PETSc.
Experiments comparing a GMRES variant with TSIRM for the resolution of linear
systems are given in Table~\ref{tab:02}. The second column describes whether
GMRES or FGMRES has been used for solving the linear systems. Different
preconditioners have been used according to the matrices. With TSIRM, the same
solver and the same preconditioner are used. This table shows that TSIRM can
drastically reduce the number of iterations needed to reach convergence when
the number of iterations of standard GMRES is greater than roughly 500. In fact
this also depends on two parameters: the number of iterations before stopping
GMRES and the number of iterations used to perform the minimization.
\begin{table*}[htbp]
\centering
\begin{tabular}{|c|c|r|r|r|r|}
\hline
\multirow{2}{*}{Matrix name} & Solver / & \multicolumn{2}{c|}{GMRES} & \multicolumn{2}{c|}{TSIRM CGLS} \\
\cline{2-6}
& precond & Time & \# Iter. & Time & \# Iter. \\\hline \hline
crashbasis & gmres / none & 15.65 & 518 & 14.12 & 450 \\
parabolic\_fem & gmres / ilu & 1009.94 & 7573 & 401.52 & 2970 \\
epb3 & fgmres / sor & 8.67 & 600 & 8.21 & 540 \\
atmosmodj & fgmres / sor & 104.23 & 451 & 88.97 & 366 \\
bfwa398 & gmres / none & 1.42 & 9612 & 0.28 & 1650 \\
torso3 & fgmres / sor & 37.70 & 565 & 34.97 & 510 \\
\hline
\end{tabular}
\caption{Comparison between sequential standalone (F)GMRES and TSIRM with (F)GMRES (time in seconds).}
\label{tab:02}
\end{table*}
In order to perform larger experiments, we have tested some example applications
of PETSc. These applications are available in the \emph{ksp} part, which is
suited for scalable linear equation solvers:
\begin{itemize}
\item ex15 is an example that solves an operator in parallel using a finite
difference scheme. The diagonal is equal to 4, and the 4 extra-diagonals
representing the neighbors in each direction are equal to -1. This operator is
used in many physical phenomena, for example heat and fluid flow, or wave
propagation.
\item ex54 is another example based on a 2D problem discretized with quadrilateral
finite elements. In this example, the user can define the scaling of the material
coefficient in an embedded circle, called $\alpha$.
\end{itemize}
For more technical details on these applications, interested readers are invited
to read the codes available in the PETSc sources. These problems have been
chosen because they are scalable with many cores.
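
As an illustration, a typical invocation of ex15 could look as follows (a
hypothetical command of ours, using standard PETSc runtime options; the exact
flags and problem-size options may differ between PETSc versions):

\begin{verbatim}
mpirun -np 2048 ./ex15 -ksp_type fgmres \
       -pc_type mg -ksp_gmres_restart 30 \
       -ksp_rtol 1e-3
\end{verbatim}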

In the following, larger experiments are described on two large scale
architectures: Curie and Juqueen. These two supercomputers are composed of
80,640 cores for Curie and 458,752 cores for Juqueen. They are respectively
hosted by GENCI in France and by the Jülich Supercomputing Center in Germany.
Together with other similar architectures, they belong to the PRACE initiative
(Partnership for Advanced Computing in Europe), which aims at providing
high-performance supercomputing architectures to enhance research in Europe.
The Curie architecture is composed of Intel E5-2680 processors at 2.7 GHz with
2GB of memory per core. The Juqueen architecture is composed of IBM PowerPC A2
processors at 1.6 GHz with 1GB of memory per core. Both architectures are
equipped with a dedicated high-speed network.
In many situations, using preconditioners is essential in order to find the
solution of a linear system. There are many preconditioners available in PETSc.
However, for parallel applications, not all the preconditioners based on matrix
factorization are available. In our experiments, we have tested different kinds
of preconditioners, but as this is not the subject of this paper, we will not
present results for many of them. In practice, we have chosen to use a
multigrid (mg) and a successive over-relaxation (sor) preconditioner. For
further details on the preconditioners in PETSc, readers are referred
to~\cite{petsc-web-page}.
\begin{table*}[htbp]
\centering
\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
\hline
nb. cores & precond & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain \\
\cline{3-8}
& & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
2,048 & mg & 403.49 & 18,210 & 73.89 & 3,060 & 77.84 & 3,270 & 5.46 \\
2,048 & sor & 745.37 & 57,060 & 87.31 & 6,150 & 104.21 & 7,230 & 8.53 \\
4,096 & mg & 562.25 & 25,170 & 97.23 & 3,990 & 89.71 & 3,630 & 6.27 \\
4,096 & sor & 912.12 & 70,194 & 145.57 & 9,750 & 168.97 & 10,980 & 6.26 \\
8,192 & mg & 917.02 & 40,290 & 148.81 & 5,730 & 143.03 & 5,280 & 6.41 \\
8,192 & sor & 1,404.53 & 106,530 & 212.55 & 12,990 & 180.97 & 10,470 & 7.76 \\
16,384 & mg & 1,430.56 & 63,930 & 237.17 & 8,310 & 244.26 & 7,950 & 6.03 \\
16,384 & sor & 2,852.14 & 216,240 & 418.46 & 21,690 & 505.26 & 23,970 & 6.82 \\
\hline
\end{tabular}
\caption{Comparison of FGMRES and TSIRM with FGMRES for example ex15 of PETSc/KSP with two preconditioners (mg and sor) having 25,000 components per core on Juqueen ($\epsilon_{tsirm}=1e-3$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:03}
\end{table*}
Table~\ref{tab:03} shows the execution times and the numbers of iterations of
example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
are studied, ranging from 2,048 up to 16,384, with the two preconditioners {\it
mg} and {\it sor}. For those experiments, the number of components (or
unknowns of the problem) per core is fixed at 25,000 (weak scaling). This
number can seem relatively small. In fact, for some applications that need a
lot of memory, the number of components per processor sometimes needs to be
small. The other parameters for this application are described in the legend of
this table.
In Table~\ref{tab:03}, we can notice that TSIRM is always faster than
FGMRES. The last column shows the ratio between FGMRES and the best version of
TSIRM according to the minimization procedure: CGLS or LSQR. Even when
considering the worst case between CGLS and LSQR, it is clear that TSIRM is
always faster than FGMRES. For this example, the multigrid preconditioner is
faster than SOR. The gain between TSIRM and FGMRES is more or less similar for
the two preconditioners. Looking at the number of iterations needed to reach
convergence, it is obvious that TSIRM reduces the number of iterations. It
should be noticed that for TSIRM, in those experiments, only the iterations of
the Krylov solver are taken into account. Iterations of CGLS or LSQR were not
recorded, but they are time-consuming. In general, every $max\_iter_{kryl}
\times s$ Krylov iterations (\emph{i.e.}, every $30 \times 12 = 360$
iterations), $max\_iter_{ls}=15$ iterations of the least-squares method are
performed.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{nb_iter_sec_ex15_juqueen}
\caption{Number of iterations per second with ex15 and the same parameters as in Table~\ref{tab:03} (weak scaling)}
\label{fig:01}
\end{figure}
In Figure~\ref{fig:01}, the number of iterations per second corresponding to
Table~\ref{tab:03} is displayed. It can be noticed that the number of
iterations per second of FGMRES is constant, whereas it decreases with TSIRM for
both preconditioners. This can be explained by the fact that when the number of
cores increases, the time for the least-squares minimization step also
increases, but, generally, when the number of cores increases, the number of
iterations needed to reach the threshold also increases, and, in that case,
TSIRM is more efficient in reducing the number of iterations. So the overall
benefit of using TSIRM is clear.
\begin{table*}[htbp]
\centering
\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
\hline
nb. cores & $\epsilon_{tsirm}$ & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain \\
\cline{3-8}
& & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
2,048 & 8e-5 & 108.88 & 16,560 & 23.06 & 3,630 & 22.79 & 3,630 & 4.77 \\
2,048 & 6e-5 & 194.01 & 30,270 & 35.50 & 5,430 & 27.74 & 4,350 & 6.99 \\
4,096 & 7e-5 & 160.59 & 22,530 & 35.15 & 5,130 & 29.21 & 4,350 & 5.49 \\
4,096 & 6e-5 & 249.27 & 35,520 & 52.13 & 7,950 & 39.24 & 5,790 & 6.35 \\
8,192 & 6e-5 & 149.54 & 17,280 & 28.68 & 3,810 & 29.05 & 3,990 & 5.21 \\
8,192 & 5e-5 & 785.04 & 109,590 & 76.07 & 10,470 & 69.42 & 9,030 & 11.30 \\
16,384 & 4e-5 & 718.61 & 86,400 & 98.98 & 10,830 & 131.86 & 14,790 & 7.26 \\
\hline
\end{tabular}
\caption{Comparison of FGMRES and TSIRM with FGMRES algorithms for ex54 of PETSc/KSP (both with the MG preconditioner) with 25,000 components per core on Curie ($max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:04}
\end{table*}
In Table~\ref{tab:04}, some experiments with example ex54 on the Curie
architecture are reported. For this application, we fixed $\alpha=0.6$. As can
be seen in that table, the size of the problem has a strong influence on the
number of iterations needed to reach convergence. That is why we have preferred
to change the threshold: if we set it to $1e-3$, as with the previous
application, only one iteration is necessary to reach convergence. So
Table~\ref{tab:04} shows the results of different executions with different
numbers of cores and different thresholds. As with the previous example, we can
observe that TSIRM is faster than FGMRES. The ratio greatly depends on the
number of iterations needed by FGMRES to reach the threshold: the greater the
number of iterations needed to reach convergence, the better the ratio between
our algorithm and FGMRES. This experiment is also a weak scaling one, with
approximately $25,000$ components per core. It can also be observed that the
difference between CGLS and LSQR is not significant. Both can be good, but it
seems not possible to know in advance which one will be the best.
Table~\ref{tab:05} shows a strong scaling experiment with example ex54 on the
Curie architecture. In this case, the number of unknowns is fixed at
$204,919,225$ and the number of cores ranges from $512$ to $8,192$ in powers of
two. The threshold is fixed at $5e-5$ and only the $mg$ preconditioner has
been tested. Here again we can see that TSIRM is faster than FGMRES. The
efficiency of each algorithm is reported. It can be noticed that the efficiency
of FGMRES is better than that of TSIRM, except with $8,192$ cores, and that its
efficiency is greater than one whereas the efficiency of TSIRM is lower than
one. Nevertheless, TSIRM with either version of the least-squares method is
always faster. With $8,192$ cores, where the number of iterations increases
sharply for FGMRES, it increases only slightly for TSIRM.
In Figure~\ref{fig:02} we report the number of iterations per second for the
experiments reported in Table~\ref{tab:05}. This figure highlights that the
number of iterations per second is more or less the same for FGMRES and TSIRM,
with a slight advantage for FGMRES. It can be explained by the fact that, as we
have previously explained, the iterations of the least-squares steps are not
taken into account for TSIRM.
\begin{table*}[htbp]
\centering
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain & \multicolumn{3}{c|}{efficiency} \\
\cline{2-7} \cline{9-11}
& Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & & FGMRES & TS CGLS & TS LSQR\\\hline \hline
512 & 3,969.69 & 33,120 & 709.57 & 5,790 & 622.76 & 5,070 & 6.37 & 1 & 1 & 1 \\
1024 & 1,530.06 & 25,860 & 290.95 & 4,830 & 307.71 & 5,070 & 5.25 & 1.30 & 1.21 & 1.01 \\
2048 & 919.62 & 31,470 & 237.52 & 8,040 & 194.22 & 6,510 & 4.73 & 1.08 & .75 & .80\\
4096 & 405.60 & 28,380 & 111.67 & 7,590 & 91.72 & 6,510 & 4.42 & 1.22 & .79 & .84 \\
8192 & 785.04 & 109,590 & 76.07 & 10,470 & 69.42 & 9,030 & 11.30 & .32 & .58 & .56 \\
\hline
\end{tabular}
\caption{Comparison of FGMRES and TSIRM for ex54 of PETSc/KSP (both with the MG preconditioner) with 204,919,225 components on Curie with different numbers of cores ($\epsilon_{tsirm}=5e-5$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:05}
\end{table*}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{nb_iter_sec_ex54_curie}
\caption{Number of iterations per second with ex54 and the same parameters as in Table~\ref{tab:05} (strong scaling)}
\label{fig:02}
\end{figure}
Concerning the experiments, some other remarks are worth mentioning.
\begin{itemize}
\item We have tested other examples of PETSc/KSP (ex29, ex45, ex49). For all
these examples, we have also obtained similar gains between GMRES and TSIRM,
but those examples are not scalable with many cores. In general, we had some
problems with more than $4,096$ cores.
\item We have tested many iterative solvers available in PETSc. In fact, it is
possible to use most of them with TSIRM. From our point of view, the condition
for using a solver inside TSIRM is that the solver must have a restart
feature. More precisely, the solver must support being stopped and restarted
without degrading its convergence. That is why with GMRES we stop it when it
is naturally restarted (\emph{i.e.}, with $m$ the restart parameter). The
Conjugate Gradient (CG) method and its variants do not have a restarted version
in PETSc, so they are not efficient in this context. They will converge with
TSIRM, but not quickly, because if we compare a normal CG with a CG which is
stopped and restarted every 16 iterations (for example), the normal CG will be
far more efficient. Some restarted CG or CG variant versions exist and may be
interesting to study in future works.
\end{itemize}
%%%*********************************************************
%%%*********************************************************
\begin{table*}[htbp]
\centering
\begin{tabular}{|r|r|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES/ASM} & \multicolumn{2}{c|}{TSIRM CGLS/ASM} & gain & \multicolumn{2}{c|}{FGMRES/HYPRE} \\
\cline{2-5} \cline{7-8}
& Time & \# Iter. & Time & \# Iter. & & Time & \# Iter. \\\hline \hline
512 & 5.54 & 685 & 2.5 & 570 & 2.21 & 128.9 & 9 \\
2048 & 14.95 & 1,560 & 4.32 & 746 & 3.48 & 335.7 & 9 \\
4096 & 25.13 & 2,369 & 5.61 & 859 & 4.48 & >1000 & -- \\
8192 & 44.35 & 3,197 & 7.6 & 1083 & 5.84 & >1000 & -- \\
\hline
\end{tabular}
\caption{Comparison of FGMRES and TSIRM for ex45 of PETSc/KSP with two preconditioners (ASM and HYPRE) having 25,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\label{tab:06}
\end{table*}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{nb_iter_sec_ex45_curie}
\caption{Number of iterations per second with ex45 and the same parameters as in Table~\ref{tab:06} (weak scaling)}
\end{figure}
\begin{table*}[htbp]
\centering
\begin{tabular}{|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
\cline{2-5}
& Time & \# Iter. & Time & \# Iter. & \\\hline \hline
1024 & 667.92 & 48,732 & 81.65 & 5,087 & 8.18 \\
2048 & 966.87 & 77,177 & 90.34 & 5,716 & 10.70\\
4096 & 1,742.31 & 124,411 & 119.21 & 6,905 & 14.61\\
8192 & 2,739.21 & 187,626 & 168.9 & 9,000 & 16.22\\
\hline
\end{tabular}
\caption{Comparison of FGMRES and TSIRM for ex20 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\end{table*}
\begin{table*}[htbp]
\centering
\begin{tabular}{|r|r|r|r|r|r|}
\hline
nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
\cline{2-5}
& Time & \# Iter. & Time & \# Iter. & \\\hline \hline
1024 & 159.52 & 11,584 & 26.34 & 1,563 & 6.06 \\
2048 & 226.24 & 16,459 & 37.23 & 2,248 & 6.08\\
4096 & 391.21 & 27,794 & 50.93 & 2,911 & 7.69\\
8192 & 543.23 & 37,770 & 79.21 & 4,324 & 6.86 \\
\hline
\end{tabular}
\caption{Comparison of FGMRES and TSIRM for ex14 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
\end{table*}
%%%*********************************************************
%%%*********************************************************
\section{Conclusion}
%%%*********************************************************
%%%*********************************************************
A new two-stage iterative algorithm, TSIRM, has been proposed in this article,
in order to accelerate the convergence of Krylov iterative methods.
Our TSIRM proposal acts as a merger between Krylov based solvers and
a least-squares minimization step.
The convergence of the method has been proven in some situations, while
experiments with up to 16,384 cores have been carried out to verify that TSIRM
runs 5 to 7 times faster than GMRES.
For future work, the authors' intention is to investigate other kinds of
matrices, problems, and inner solvers. In particular, the possibility
to obtain convergence of TSIRM in situations where GMRES is divergent will be
investigated. The influence of all the parameters must be
tested too, while other methods to minimize the residuals must be considered. The
number of outer iterations before minimizing should become adaptive to improve the
overall performance of the proposal. Finally, this solver will be implemented
inside PETSc, which would be of interest as it would allow us to test
all the nonlinear examples and compare our algorithm with the other algorithms
implemented in PETSc.
%%%*********************************************************
%%%*********************************************************
\section*{Acknowledgment}
This paper is partially funded by the Labex ACTION program (contract
ANR-11-LABX-01-01). We acknowledge PRACE for awarding us access to the
Curie and Juqueen resources, respectively based in France and Germany.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\bibliographystyle{unsrt}
\bibliography{biblio}

\end{document}