1 \documentclass{article}
2 \usepackage[utf8]{inputenc}
3 \usepackage{amsfonts,amssymb}
7 \usepackage{algpseudocode}
11 \algnewcommand\algorithmicinput{\textbf{Input:}}
12 \algnewcommand\Input{\item[\algorithmicinput]}
14 \algnewcommand\algorithmicoutput{\textbf{Output:}}
15 \algnewcommand\Output{\item[\algorithmicoutput]}
17 \newcommand{\Time}[1]{\mathit{Time}_\mathit{#1}}
18 \newcommand{\Prec}{\mathit{prec}}
19 \newcommand{\Ratio}{\mathit{Ratio}}
21 \def\changemargin#1#2{\list{}{\rightmargin#2\leftmargin#1}\item[]}
22 \let\endchangemargin=\endlist
24 \title{A scalable multisplitting algorithm to solve large sparse linear systems}
27 \author[1]{Raphaël Couturier}
28 \author[2]{ Lilia Ziane Khodja}
29 \affil[1]{ Femto-ST Institute\\
30 University of Franche Comte\\
32 email: raphael.couturier@univ-fcomte.fr}
33 \affil[2]{Inria Bordeaux Sud-Ouest\\
35 email: lilia.ziane@inria.fr}
42 %%%%%%%%%%%%%%%%%%%%%%%%
43 %%%%%%%%%%%%%%%%%%%%%%%%
In this paper we revisit the Krylov multisplitting algorithm presented in
\cite{huang1993krylov}, which uses a sequential method to minimize over the
Krylov iterations computed by a multisplitting algorithm. Our new algorithm is
based on a parallel multisplitting algorithm with few blocks of large size,
using a parallel GMRES method inside each block, and on a parallel Krylov
minimization to improve the convergence. Large scale experiments on a 3D
Poisson problem, with up to 8,192 cores, show the improvements obtained
compared to a classical GMRES, both in terms of number of iterations and of
execution time.
57 %%%%%%%%%%%%%%%%%%%%%%%%
58 %%%%%%%%%%%%%%%%%%%%%%%%
60 \section{Introduction}
61 Iterative methods are used to solve large sparse linear systems of equations of
62 the form $Ax=b$ because they are easier to parallelize than direct ones. Many
63 iterative methods have been proposed and adapted by different researchers. For
64 example, the GMRES method and the Conjugate Gradient method are very well known
65 and used~\cite{S96}. Both methods are based on the
Krylov subspace, whose basis is formed from the sequence of successive matrix
powers applied to the initial residual.
69 When solving large linear systems with many cores, iterative methods often
70 suffer from scalability problems. This is due to their need for collective
71 communications to perform matrix-vector products and reduction operations.
Preconditioners can be used to accelerate the convergence of iterative
solvers. However, most good preconditioners do not scale well when thousands
of cores are used.
Traditional parallel iterative solvers are based on fine-grain computations that
frequently require data exchanges between computing nodes and have global
synchronizations that penalize the scalability. They are particularly penalized
on large scale architectures or on distributed platforms composed of distant
clusters interconnected by a high-latency network. It is therefore imperative
to develop coarse-grain algorithms that reduce the communications in parallel
iterative solvers. Two possible solutions consist either in using asynchronous
iterative methods~\cite{ref18} or in using multisplitting algorithms. In this
paper, we reconsider the use of a multisplitting method. In contrast to
traditional multisplitting methods, which suffer from slow convergence, the use
of a minimization process, as proposed in~\cite{huang1993krylov}, can
drastically improve the convergence.
The present paper is organized as follows. First, Section~\ref{sec:02} presents
some related works and the principle of multisplitting methods. Then, in
Section~\ref{sec:03}, the algorithm of our Krylov multisplitting method, based
on inner-outer iterations, is presented. Finally, in Section~\ref{sec:04}, the
parallel experiments on the HECToR platform show the performance of the Krylov
multisplitting algorithm compared to the classical GMRES algorithm for solving
a 3D Poisson problem.
106 %%%%%%%%%%%%%%%%%%%%%%%%
107 %%%%%%%%%%%%%%%%%%%%%%%%
109 \section{Related works and presentation of the multisplitting method}
111 A general framework to study parallel multisplitting methods has been presented in~\cite{o1985multi}
112 by O'Leary and White. Convergence conditions are given for the
most general cases. Many authors have improved multisplitting algorithms by
proposing, for example, an asynchronous version~\cite{bru1995parallel},
convergence conditions~\cite{bai1999block,bahi2000asynchronous}, or other
two-stage algorithms~\cite{frommer1992h,bru1995parallel}.
In~\cite{huang1993krylov}, the authors have proposed a parallel multisplitting
algorithm in which all the tasks except one are devoted to solving a sub-block
of the splitting and to sending their local solutions to the first task, which
is in charge of combining the vectors at each iteration. These vectors form a
Krylov basis over which the first task minimizes the error function to
accelerate the convergence; the other tasks then receive the updated solution,
and the process repeats until the convergence of the global system.
In~\cite{couturier2008gremlins}, the authors have developed practical
implementations of multisplitting algorithms to solve large scale linear
systems. Inner solvers could be based on a sequential direct method, such as
LU, or on a sequential iterative one, such as GMRES.
131 In~\cite{prace-multi}, the authors have designed a parallel multisplitting
132 algorithm in which large blocks are solved using a GMRES solver. The authors have
performed large scale experiments on up to 32,768 cores and concluded that
134 an asynchronous multisplitting algorithm could be more efficient than traditional
135 solvers on an exascale architecture with hundreds of thousands of cores.
Compared to these works, we propose in this paper a practical multisplitting method, based on parallel iterative blocks, which gives better results than the classical GMRES method for the 3D Poisson problem we consider.
The key idea of a multisplitting method to solve a large system of linear equations $Ax=b$ is defined as follows. The first step consists in partitioning the matrix $A$ in $L$ different ways
A=M_\ell-N_\ell,~\ell\in\{1,\ldots,L\},
where for all $\ell\in\{1,\ldots,L\}$ the $M_\ell$ are non-singular matrices. Then the linear system is solved by an iteration based on the obtained splittings as follows
147 x^{k+1}=\displaystyle\sum^L_{\ell=1} E_\ell M^{-1}_\ell (N_\ell x^k + b),~k=1,2,3,\ldots
where the $E_\ell$ are non-negative diagonal weighting matrices whose sum is the identity matrix $I$. The convergence of such a method depends on the condition
\rho(\displaystyle\sum^L_{\ell=1}E_\ell M^{-1}_\ell N_\ell)<1,
where $\rho(\cdot)$ denotes the spectral radius of a square matrix.
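As a small worked illustration of this condition, the following Python snippet (a hypothetical check, not part of our solver) builds the block Jacobi splittings of a strictly diagonally dominant matrix and verifies numerically that the spectral radius of $\sum_\ell E_\ell M_\ell^{-1}N_\ell$ is below 1.
\begin{verbatim}
# Numerical check of the convergence condition on a small example:
# build the block Jacobi splittings of a strictly diagonally dominant
# matrix A and verify that rho(sum_l E_l M_l^{-1} N_l) < 1.
import numpy as np

n, L = 12, 3
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
A = B + np.diag(np.abs(B).sum(axis=1) + 1.0)  # strictly diag. dominant
cut = np.linspace(0, n, L + 1, dtype=int)     # row-block boundaries
T = np.zeros((n, n))
for l in range(L):
    s, e = cut[l], cut[l + 1]
    M = np.eye(n)
    M[s:e, s:e] = A[s:e, s:e]   # M_l: block l of A, identity elsewhere
    N = M - A                   # so that A = M_l - N_l
    E = np.zeros((n, n))
    E[s:e, s:e] = np.eye(e - s) # E_l selects block l, sum_l E_l = I
    T += E @ np.linalg.solve(M, N)
print(max(abs(np.linalg.eigvals(T))))         # spectral radius < 1
\end{verbatim}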
157 The advantage of the multisplitting method is that at each iteration $k$ there are $L$ different linear sub-systems
v_\ell^k=M^{-1}_\ell N_\ell x^{k-1} + M^{-1}_\ell b,~\ell\in\{1,\ldots,L\},
to be solved independently by a direct or an iterative method, where $v_\ell^k$ is the solution of the local sub-system. Thus the computations of $\{v_\ell\}_{1\leq \ell\leq L}$ may be performed in parallel by a set of processors. A multisplitting method using an iterative method as an inner solver is called an inner-outer iterative method or a two-stage method. The results $v_\ell$ obtained from the different splittings~(\ref{eq04}) are combined to compute the solution $x$ of the linear system by using the diagonal weighting matrices
164 x^k = \displaystyle\sum^L_{\ell=1} E_\ell v_\ell^k,
In the case where the diagonal weighting matrices $E_\ell$ contain only zero and one entries (i.e. the $v_\ell$ are disjoint vectors), the multisplitting method is non-overlapping and corresponds to the block Jacobi method.
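To make this concrete, the following sequential Python sketch implements the non-overlapping (block Jacobi) case described above. It is only an illustration under simplifying assumptions (dense matrices, a uniform block partition, and the hypothetical function name \texttt{multisplitting}), not the parallel implementation evaluated in this paper.
\begin{verbatim}
# Sequential sketch of the non-overlapping multisplitting (block
# Jacobi) iteration. M_l agrees with A on the l-th diagonal block and
# with the identity elsewhere; E_l selects the l-th row block.
import numpy as np

def multisplitting(A, b, L, iters=200):
    n = len(b)
    cut = np.linspace(0, n, L + 1, dtype=int)  # row-block boundaries
    x = np.zeros(n)
    for _ in range(iters):
        x_new = np.empty(n)
        for l in range(L):                     # independent sub-systems
            s, e = cut[l], cut[l + 1]
            # E_l M_l^{-1}(N_l x + b) reduces, on block l, to solving
            # A_ll X_l = B_l - sum_{m != l} A_lm X_m
            rhs = b[s:e] - A[s:e, :] @ x + A[s:e, s:e] @ x[s:e]
            x_new[s:e] = np.linalg.solve(A[s:e, s:e], rhs)
        x = x_new
    return x
\end{verbatim}
With a strictly diagonally dominant matrix $A$, this iteration converges, in agreement with the spectral radius condition given above.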
169 %%%%%%%%%%%%%%%%%%%%%%%%
170 %%%%%%%%%%%%%%%%%%%%%%%%
172 \section{A two-stage method with a minimization}
174 Let $Ax=b$ be a given large and sparse linear system of $n$ equations to solve in parallel on $L$ clusters of processors, physically adjacent or geographically distant, where $A\in\mathbb{R}^{n\times n}$ is a square and non-singular matrix, $x\in\mathbb{R}^{n}$ is the solution vector and $b\in\mathbb{R}^{n}$ is the right-hand side vector. The multisplitting of this linear system is defined as follows
178 A & = & [A_{1}, \ldots, A_{L}]\\
179 x & = & [X_{1}, \ldots, X_{L}]\\
180 b & = & [B_{1}, \ldots, B_{L}]
where for $\ell\in\{1,\ldots,L\}$, $A_\ell$ is a rectangular block of size $n_\ell\times n$ and $X_\ell$ and $B_\ell$ are sub-vectors of size $n_\ell$ each, such that $\sum_\ell n_\ell=n$. In this work, we use a row-by-row splitting without overlapping, in such a way that successive rows of the sparse matrix $A$, and the corresponding components of the vectors $x$ and $b$, are assigned to one cluster. So the multisplitting format of the linear system is defined as follows
187 \forall \ell\in\{1,\ldots,L\} \mbox{,~} A_{\ell \ell}X_\ell + \displaystyle\sum_{\substack{m=1\\m\neq\ell}}^L A_{\ell m}X_m = B_\ell,
where $A_{\ell m}$ is a sub-block of size $n_\ell\times n_m$ of the rectangular matrix $A_\ell$, $X_m$ ($m\neq\ell$) is a sub-vector of size $n_m$ of the solution vector $x$, and $\sum_{m\neq \ell}n_m+n_\ell=n$, for all $m\in\{1,\ldots,L\}$.
192 Our multisplitting method proceeds by iteration to solve the linear system in such a way that each sub-system
196 A_{\ell \ell}X_\ell = Y_\ell \mbox{,~such that}\\
197 Y_\ell = B_\ell - \displaystyle\sum_{\substack{m=1\\m\neq \ell}}^{L}A_{\ell m}X_m,
is solved independently by a {\it cluster of processors} and communications are required to update the right-hand side vectors $Y_\ell$, such that the vectors $X_m$ represent the data dependencies between the clusters. In this work, we use the parallel restarted GMRES method~\cite{ref34} as an inner iteration method to solve sub-systems~(\ref{sec03:eq03}). GMRES is one of the most widely used Krylov iterative methods to solve sparse linear systems.
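As an illustration, one inner iteration on cluster $\ell$ can be sketched as follows in Python, with SciPy's restarted GMRES standing in for the parallel GMRES solver. The function name, the dense or CSR representation of the row block $A_\ell$, and the \texttt{rtol} keyword (recent SciPy versions) are assumptions of this sketch.
\begin{verbatim}
# One inner iteration on cluster l: update the local right-hand side
# Y_l from the dependencies X_m, then solve the local splitting
# A_ll X_l = Y_l with restarted GMRES. A_l is the rectangular row
# block owned by cluster l; (s, e) delimit its diagonal block.
import numpy as np
from scipy.sparse.linalg import gmres

def inner_solve(A_l, b_l, x, s, e, restart=16, tol=1e-10):
    # Y_l = B_l - sum_{m != l} A_lm X_m, using the full row block
    Y_l = b_l - A_l @ x + A_l[:, s:e] @ x[s:e]
    X_l, _ = gmres(A_l[:, s:e], Y_l, x0=x[s:e],
                   restart=restart, rtol=tol)
    return X_l
\end{verbatim}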
It should be noted that the convergence of the inner iterative solver for the
different sub-systems~(\ref{sec03:eq03}) does not necessarily imply the
convergence of the multisplitting method. It strongly depends on the properties
of the global sparse linear system to be
solved~\cite{o1985multi,ref18}. Furthermore, the splitting of the linear system
among several clusters of processors increases the spectral radius of the
iteration matrix, thereby slowing the convergence. In fact, the larger the
number of splittings, the larger the spectral radius. In this paper, we build
on the work presented in~\cite{huang1993krylov} to accelerate the convergence
and improve the scalability of multisplitting methods.
In order to accelerate the convergence, we implemented the outer iteration of the multisplitting solver as a Krylov iterative method which minimizes some error function over a Krylov subspace~\cite{S96}. The Krylov subspace that we use is spanned by a basis composed of the successive solutions obtained by solving the $L$ splittings~(\ref{sec03:eq03})
217 S=\{x^1,x^2,\ldots,x^s\},~s\leq n,
where for $j\in\{1,\ldots,s\}$, $x^j=[X_1^j,\ldots,X_L^j]$ is a solution of the global linear system. The advantage of such a Krylov subspace is that we need neither an orthogonal basis nor any synchronization between the clusters to generate it.
The multisplitting method is periodically restarted every $s$ iterations with a new initial guess $\tilde{x}=S\alpha$ which minimizes the error function $\|b-Ax\|_2$ over the Krylov subspace spanned by the vectors of $S$. So $\alpha$ is defined as the solution of the large overdetermined linear system
R\alpha=b,
where $R=AS$ is a dense rectangular matrix of size $n\times s$ and $s\ll n$. This leads us to solve the system of normal equations
R^TR\alpha=R^Tb,
232 which is associated with the least squares problem
234 \text{minimize}~\|b-R\alpha\|_2,
where $R^T$ denotes the transpose of matrix $R$. Since $R$ (i.e. $AS$) and $b$ are split among the $L$ clusters, the symmetric positive definite system~(\ref{sec03:eq06}) is solved in parallel. Thus an iterative method is more appropriate than a direct one to solve this system. We use CGNR~\cite{S96,refCGNR}, the parallel Conjugate Gradient method applied to the normal equations.
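A minimal sequential sketch of this minimization step is given below: it applies SciPy's conjugate gradient to the normal operator $R^TR$, which is precisely CGNR. The function name is illustrative, and the parallel distribution of $R$ and $b$ among the clusters is omitted.
\begin{verbatim}
# Minimization step: given the basis S = [x^1, ..., x^s] and R = A S,
# find alpha minimizing ||b - R alpha||_2 by running CG on the normal
# equations R^T R alpha = R^T b (CGNR). The reduced problem has size
# s only, with s << n.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def update_minimizer(A, S, b, maxiter=20):
    R = A @ S                                   # n x s dense matrix
    s = S.shape[1]
    normal = LinearOperator((s, s), matvec=lambda a: R.T @ (R @ a))
    alpha, _ = cg(normal, R.T @ b, maxiter=maxiter)
    return S @ alpha                            # minimizer x~ = S alpha
\end{verbatim}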
239 \begin{algorithm}[!t]
\caption{A two-stage linear solver with the GMRES method as inner iteration}
241 \begin{algorithmic}[1]
242 \Input $A_\ell$ (sparse sub-matrix), $B_\ell$ (right-hand side sub-vector)
243 \Output $X_\ell$ (solution sub-vector)\vspace{0.2cm}
244 \State Load $A_\ell$, $B_\ell$
245 \State Set the initial guess $x^0$
246 \State Set the minimizer $\tilde{x}^0=x^0$
247 \For {$k=1,2,3,\ldots$ until the global convergence}
248 \State Restart with $x^0=\tilde{x}^{k-1}$:
249 \For {$j=1,2,\ldots,s$}
250 \State \label{line7}Inner iteration solver: \Call{InnerSolver}{$x^0$, $j$}
251 \State Construct basis $S$: add column vector $X_\ell^j$ to the matrix $S_\ell^k$
252 \State Exchange local values of $X_\ell^j$ with the neighboring clusters
253 \State Compute dense matrix $R$: $R_\ell^{k,j}=\sum^L_{i=1}A_{\ell i}X_i^j$
255 \State \label{line12}Minimization $\|b-R\alpha\|_2$: \Call{UpdateMinimizer}{$S_\ell$, $R$, $b$, $k$}
256 \State Local solution of linear system $Ax=b$: $X_\ell^k=\tilde{X}_\ell^k$
257 \State Exchange the local minimizer $\tilde{X}_\ell^k$ with the neighboring clusters
262 \Function {InnerSolver}{$x^0$, $j$}
263 \State Compute local right-hand side $Y_\ell = B_\ell - \sum^L_{\substack{m=1\\m\neq \ell}}A_{\ell m}X_m^0$
\State Solve the local splitting $A_{\ell \ell}X_\ell^j=Y_\ell$ using the parallel GMRES method, with $X_\ell^0$ as the initial guess
265 \State \Return $X_\ell^j$
270 \Function {UpdateMinimizer}{$S_\ell$, $R$, $b$, $k$}
\State Solve the normal equations $(R^k)^TR^k\alpha^k=(R^k)^Tb$ in parallel over the $L$ clusters using the parallel CGNR method
272 \State Compute local minimizer $\tilde{X}_\ell^k=S_\ell^k\alpha^k$
273 \State \Return $\tilde{X}_\ell^k$
The key points of our Krylov multisplitting method to solve a large sparse linear system are given in Algorithm~\ref{algo:01}. This algorithm is based on a two-stage method with a minimization, using the restarted GMRES method as an inner solver. It is executed in parallel by each cluster of processors. Matrices and vectors with the subscript $\ell$ represent the local data for cluster $\ell$, where $\ell\in\{1,\ldots,L\}$. The two-stage solver uses two different parallel iterative algorithms: the GMRES method to solve each splitting~(\ref{sec03:eq03}) on a cluster of processors, and the CGNR method, executed in parallel by all clusters, to minimize the error function~(\ref{sec03:eq07}) over the Krylov subspace spanned by $S$. The algorithm requires two global synchronizations between the $L$ clusters. The first one is performed at line~\ref{line12} in Algorithm~\ref{algo:01} to exchange the local values of the solution vector $x$ (i.e. the minimizer $\tilde{x}$) required to restart the multisplitting solver. The second one is needed to construct the matrix $R$. We chose to perform this latter synchronization $s$ times in every outer iteration $k$ (line~\ref{line7} in Algorithm~\ref{algo:01}). This is a straightforward way to compute the sparse matrix-dense matrix multiplication $R=AS$. We implemented all synchronizations with the collective communication routines of the MPI library.
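For reference, a sequential Python driver mirroring the structure of Algorithm~\ref{algo:01} is sketched below. It reuses the \texttt{inner\_solve} and \texttt{update\_minimizer} sketches given earlier; the convergence test on the true residual and the per-cluster loop (which the real code executes in parallel with MPI) are simplifying assumptions.
\begin{verbatim}
# Sequential driver mirroring Algorithm 1: s inner solves build the
# basis S, then the minimizer x~ = S alpha restarts the outer loop.
# Reuses inner_solve and update_minimizer from the sketches above.
import numpy as np

def krylov_multisplitting(A, b, L, s=10, outer_max=50, tol=1e-6):
    n = len(b)
    cut = np.linspace(0, n, L + 1, dtype=int)
    x = np.zeros(n)
    for _ in range(outer_max):
        S = np.zeros((n, s))
        for j in range(s):
            x_new = np.empty(n)
            for l in range(L):                 # one cluster per block
                lo, hi = cut[l], cut[l + 1]
                x_new[lo:hi] = inner_solve(A[lo:hi, :], b[lo:hi],
                                           x, lo, hi)
            x = x_new
            S[:, j] = x                        # add column x^j to S
        x = update_minimizer(A, S, b)          # restart with x~
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
\end{verbatim}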
281 %%%%%%%%%%%%%%%%%%%%%%%%
282 %%%%%%%%%%%%%%%%%%%%%%%%
284 \section{Experiments}
In order to illustrate the interest of our algorithm, we have compared it with
the GMRES method, which is commonly used in many situations. We have chosen to
focus on only one problem which is very simple to implement: a 3D Poisson
problem.
\nabla^2 u&=f \mbox{~in~} \omega\\
295 u &=0 \mbox{~on~} \Gamma=\partial \omega
After discretization with a finite difference scheme, a seven-point stencil is
used. It is well-known that the spectral radius of the matrices representing
such problems is very close to 1. Moreover, the larger the number of
discretization points, the closer to 1 the spectral radius. Hence, solving a
linear system resulting from a 3D Poisson problem requires a high number of
iterations. Using a preconditioner it is possible to reduce the number of
iterations, but preconditioners are not scalable when using many cores.
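For reference, such a matrix can be assembled compactly with Kronecker products of the 1D finite difference Laplacian, as in the following Python sketch (a stand-in for the generator actually used in our experiments).
\begin{verbatim}
# Assemble the 7-point finite difference matrix of the 3D Poisson
# problem with homogeneous Dirichlet boundary conditions on an
# m x m x m interior grid (Kronecker product construction).
import scipy.sparse as sp

def poisson3d(m):
    I = sp.identity(m)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))  # 1D Laplacian
    return (sp.kron(sp.kron(T, I), I)
            + sp.kron(sp.kron(I, T), I)
            + sp.kron(sp.kron(I, I), T)).tocsr()
\end{verbatim}
The spectral radius of the associated Jacobi iteration matrix is $\cos(\pi/(m+1))$, which tends to 1 as $m$ grows, consistent with the high iteration counts reported below.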
In the following we present some experiments that we carried out on the HECToR
platform, the UK's high-end computing resource, funded by the UK Research
Councils~\cite{hector}. This is a Cray XE6 supercomputer whose nodes are each
equipped with two 16-core AMD Opteron processors running at 2.3 GHz and 32 GB
of memory. Nodes are interconnected by a Cray Gemini network.
Table~\ref{tab1} shows the results of the experiments. The first column shows
the size of the 3D Poisson problem. The size is chosen in order to have
approximately 50,000 components per core. The second column represents the
number of cores used. In brackets, one can find the decomposition used for the
Krylov multisplitting. The third and the sixth columns respectively show the
execution times for the GMRES and the Krylov multisplitting codes. The fourth
and the seventh columns give the numbers of iterations. For the multisplitting
code, the total number of inner iterations is given in brackets. For the GMRES
code (alone and in the multisplitting version) the restart parameter is fixed
to 16. The precision of the GMRES version is fixed to 1e-6. For the
multisplitting there are two precisions: one for the external solver, fixed to
1e-6, and another one for the inner solver (GMRES), fixed to 1e-10. It should
be noted that a high precision is used, but we also fixed a maximum number of
iterations for each internal step. In practice, we limit the number of
iterations of the internal step to 10. So an internal step is finished either
when the precision is reached or when the maximum number of internal iterations
is reached. The precision and the maximum number of iterations of the CGNR
method are fixed to 1e-25 and 20 respectively. The size of the Krylov subspace
basis $S$ is fixed to 10 vectors.
335 \begin{changemargin}{-1.8cm}{0cm}
337 \begin{tabular}{|c|c||c|c|c||c|c|c||c|}
339 \multirow{2}{*}{Pb size}&\multirow{2}{*}{Nb. cores} & \multicolumn{3}{c||}{GMRES} & \multicolumn{3}{c||}{Krylov Multisplitting} & \multirow{2}{*}{Ratio}\\
& & Time (s) & Nb. iter. & $\Delta$ & Time (s) & Nb. iter. & $\Delta$ & \\
343 $468^3$ & 2,048 (2x1,024) & 299.7 & 41,028 & 5.02e-8 & 48.4 & 691(6,146) & 8.24e-08 & 6.19 \\
345 $590^3$ & 4,096 (2x2,048) & 433.1 & 55,494 & 4.92e-7 & 74.1 & 1,101(8,211) & 6.62e-08 & 5.84 \\
347 $743^3$ & 8,192 (2x4,096) & 704.4 & 87,822 & 4.80e-07 & 151.2 & 3,061(14,914) & 5.87e-08 & 4.65 \\
349 $743^3$ & 8,192 (4x2,048) & 704.4 & 87,822 & 4.80e-07 & 110.3 & 1,531(12,721) & 1.47e-07& 6.39 \\
From these experiments, it can be observed that the multisplitting version is
always faster than the GMRES version. The acceleration gain of the
multisplitting version ranges between 4.65 and 6.39. It can also be noticed
that the number of iterations is drastically reduced with the multisplitting
version, even if it remains non-negligible. Moreover, with 8,192 cores, we can
see that using 4 clusters gives better performance than simply using 2
clusters. In fact, the precision with 2 clusters is slightly better, but in
both cases the precision is under the specified threshold.
370 \section{Conclusion and perspectives}
We have implemented a Krylov multisplitting method to solve sparse linear
systems on large-scale computing platforms. We have developed a synchronous
two-stage method based on the block Jacobi multisplitting, which uses the GMRES
method as an inner iteration. Our contribution in this paper is twofold. First,
we provide a multi-cluster decomposition that allows us to choose the
appropriate size of the clusters according to the architecture of the
supercomputer. Second, we have implemented the outer iteration of the
multisplitting method as a Krylov subspace method which minimizes some error
function. This accelerates the convergence and improves the scalability of the
multisplitting method.
We have tested our multisplitting method on the sparse linear system resulting
from the discretization of a 3D Poisson problem. We have compared its
performance to that of the classical GMRES method on a supercomputer using from
2,048 to 8,192 cores. The experimental results showed that the multisplitting
method is about 4 to 6 times faster than the GMRES method for different sizes
of the problem split into 2 or 4 blocks. Indeed, the GMRES method has
difficulties scaling with many cores, while the Krylov multisplitting method
allows us to hide latency and reduce inter-cluster communications.
In future works, we plan to conduct experiments on larger numbers of cores and
to test the scalability of our Krylov multisplitting method. It would be
interesting to validate its performance on other linear/nonlinear and
symmetric/nonsymmetric problems. Moreover, we intend to develop multisplitting
methods based on asynchronous iterations, in which communications are
overlapped with computations. These methods would be interesting for platforms
composed of distant clusters interconnected by a high-latency network. In
addition, we intend to investigate the convergence improvements of our method
by using preconditioning techniques for Krylov iterative methods and
multisplitting methods with overlapping blocks.
403 \section{Acknowledgement}
The authors would like to thank Mark Bull of EPCC for his fruitful remarks, and to acknowledge the use of the HECToR facilities.
413 %%%%%%%%%%%%%%%%%%%%%%%%
414 %%%%%%%%%%%%%%%%%%%%%%%%
416 \bibliographystyle{plain}
417 \bibliography{biblio}