1 \documentclass{article}
2 \usepackage[utf8]{inputenc}
3 \usepackage{amsfonts,amssymb}
7 \usepackage{algpseudocode}
10 \algnewcommand\algorithmicinput{\textbf{Input:}}
11 \algnewcommand\Input{\item[\algorithmicinput]}
13 \algnewcommand\algorithmicoutput{\textbf{Output:}}
14 \algnewcommand\Output{\item[\algorithmicoutput]}
16 \newcommand{\Time}[1]{\mathit{Time}_\mathit{#1}}
17 \newcommand{\Prec}{\mathit{prec}}
18 \newcommand{\Ratio}{\mathit{Ratio}}
25 \title{A scalable multisplitting algorithm for solving large sparse linear systems}
31 \author{Raphaël Couturier \and Lilia Ziane Khodja}
36 %%%%%%%%%%%%%%%%%%%%%%%%
37 %%%%%%%%%%%%%%%%%%%%%%%%
In this paper we revisit the Krylov multisplitting algorithm presented in
\cite{huang1993krylov}, which uses a sequential method to minimize the Krylov
iterations computed by a multisplitting algorithm. Our new algorithm is based on
a parallel multisplitting algorithm with a few blocks of large size, using a
parallel GMRES method inside each block, and on a parallel Krylov minimization
in order to improve the convergence. Some large scale experiments with a 3D Poisson
problem are presented with up to 8,192 cores. They show the improvements
obtained compared to a classical GMRES, both in terms of number of iterations
and execution time.
51 %%%%%%%%%%%%%%%%%%%%%%%%
52 %%%%%%%%%%%%%%%%%%%%%%%%
54 \section{Introduction}
Iterative methods are used to solve large sparse linear systems of equations of
the form $Ax=b$ because they are easier to parallelize than direct ones. Many
iterative methods have been proposed and adapted to different contexts. For
example, the GMRES method and the Conjugate Gradient method are very well known
and widely used~\cite{S96}. Both methods are based on a
Krylov subspace, which consists in forming a basis from the sequence of
successive matrix powers applied to the initial residual.
When solving large linear systems with many cores, iterative methods often
suffer from scalability problems. This is due to their need for collective
communications to perform matrix-vector products and reduction operations.
Preconditioners can be used to speed up the convergence of iterative
solvers. However, most good preconditioners are not scalable when
thousands of cores are used.
Traditional parallel iterative solvers are based on fine-grain computations that
frequently require data exchanges between computing nodes and global
synchronizations that penalize the scalability. They are particularly
penalized on large scale architectures or on distributed platforms composed of
distant clusters interconnected by a high-latency network. It is therefore
imperative to develop coarse-grain based algorithms in order to reduce the
communications in parallel iterative solvers. Two possible solutions consist in
using either asynchronous iterative methods~\cite{ref18} or multisplitting
algorithms. In this paper, we reconsider the use of a multisplitting
method. In contrast to traditional multisplitting methods, which suffer from slow
convergence, the use of a minimization process, as proposed
in~\cite{huang1993krylov}, can drastically improve the convergence.
The present paper is organized as follows. First, Section~\ref{sec:02} presents
some related works and the principle of multisplitting methods. Then,
Section~\ref{sec:03} presents the algorithm of our Krylov multisplitting
method, based on inner-outer iterations. Finally, in Section~\ref{sec:04},
parallel experiments on the HECToR architecture show the performance of the Krylov
multisplitting algorithm compared to the classical GMRES algorithm for solving a 3D
Poisson problem.
100 %%%%%%%%%%%%%%%%%%%%%%%%
101 %%%%%%%%%%%%%%%%%%%%%%%%
103 \section{Related works and presentation of the multisplitting method}
A general framework for studying parallel multisplitting methods was presented in~\cite{o1985multi}
by O'Leary and White. Convergence conditions are given for the
most general case. Many authors have improved multisplitting algorithms, proposing
for example an asynchronous version~\cite{bru1995parallel} together with convergence
conditions for this case~\cite{bai1999block,bahi2000asynchronous}, or other
two-stage algorithms~\cite{frommer1992h,bru1995parallel}.
In~\cite{huang1993krylov}, the authors proposed a parallel multisplitting
algorithm in which all the tasks except one are devoted to solving a sub-block of
the splitting and to sending their local solutions to the first task, which is in
charge of combining the vectors at each iteration. These vectors form a Krylov
basis, over which the first task minimizes the error function to
accelerate the convergence; the other tasks then receive the updated solution until
convergence of the global system.
In~\cite{couturier2008gremlins}, the authors proposed practical implementations
of multisplitting algorithms to solve large scale linear systems. Inner solvers
could be based on a sequential direct method, such as LU, or on a sequential
iterative one.
In~\cite{prace-multi}, the authors proposed a parallel multisplitting
algorithm in which large blocks are solved using a GMRES solver. The authors
performed large scale experiments with up to 32,768 cores and concluded that an
asynchronous multisplitting algorithm could be more efficient than traditional
solvers on an exascale architecture with hundreds of thousands of cores.
Compared to these works, we propose in this paper a practical multisplitting method based on parallel iterative blocks which gives better results than the classical GMRES method for the 3D Poisson problem we considered.
The key idea of a multisplitting method to solve a large system of linear equations $Ax=b$ is defined as follows. The first step consists in partitioning the matrix $A$ in $L$ different ways
where for all $\ell\in\{1,\ldots,L\}$, $M_\ell$ are non-singular matrices. Then the linear system is solved by an iteration based on the obtained splittings, as follows
141 x^{k+1}=\displaystyle\sum^L_{\ell=1} E_\ell M^{-1}_\ell (N_\ell x^k + b),~k=1,2,3,\ldots
where $E_\ell$ are non-negative diagonal weighting matrices whose sum is the identity matrix $I$. The convergence of such a method depends on the condition
146 \rho(\displaystyle\sum^L_{\ell=1}E_\ell M^{-1}_\ell N_\ell)<1.
where $\rho$ denotes the spectral radius of a square matrix.
151 The advantage of the multisplitting method is that at each iteration $k$ there are $L$ different linear sub-systems
153 v_\ell^k=M^{-1}_\ell N_\ell x_\ell^{k-1} + M^{-1}_\ell b,~\ell\in\{1,\ldots,L\},
to be solved independently by a direct or an iterative method, where $v_\ell^k$ is the solution of the local sub-system. Thus the computations of $\{v_\ell^k\}_{1\leq \ell\leq L}$ may be performed in parallel by a set of processors. A multisplitting method using an iterative method as an inner solver is called an inner-outer iterative method or a two-stage method. The solutions $v_\ell^k$ obtained from the different splittings~(\ref{eq04}) are combined to compute the solution $x$ of the linear system by using the diagonal weighting matrices
158 x^k = \displaystyle\sum^L_{\ell=1} E_\ell v_\ell^k,
In the case where the diagonal weighting matrices $E_\ell$ contain only zero and one entries (i.e. the $v_\ell^k$ are disjoint vectors), the multisplitting method is non-overlapping and corresponds to the block Jacobi method.
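To fix ideas, the following NumPy sketch (ours, not taken from any implementation related to this paper; the test matrix, the number of blocks and the iteration count are illustrative assumptions) builds a non-overlapping block Jacobi multisplitting of a small diagonally dominant matrix, checks the spectral radius condition above and runs the multisplitting iteration.

\begin{verbatim}
# Minimal sketch of a non-overlapping (block Jacobi) multisplitting,
# x^{k+1} = sum_l E_l M_l^{-1} (N_l x^k + b), with L = 2 blocks.
import numpy as np

n, L = 6, 2
rng = np.random.default_rng(0)
A = np.eye(n) * 4.0 + rng.uniform(-1.0, 1.0, (n, n)) * 0.3   # diag. dominant
b = rng.uniform(-1.0, 1.0, n)
blocks = np.array_split(np.arange(n), L)       # rows owned by each splitting

# Splitting l: M_l keeps the diagonal block A_{ll} (identity elsewhere),
# N_l = M_l - A, and E_l is the 0/1 diagonal matrix selecting block l,
# so that sum_l E_l = I.
M, N, E = [], [], []
for idx in blocks:
    M_l = np.eye(n)
    M_l[np.ix_(idx, idx)] = A[np.ix_(idx, idx)]
    E_l = np.zeros((n, n))
    E_l[idx, idx] = 1.0
    M.append(M_l); N.append(M_l - A); E.append(E_l)

# Convergence condition: rho( sum_l E_l M_l^{-1} N_l ) < 1
T = sum(E[l] @ np.linalg.solve(M[l], N[l]) for l in range(L))
print("spectral radius:", max(abs(np.linalg.eigvals(T))))

x = np.zeros(n)
for _ in range(100):        # x^{k+1} = sum_l E_l M_l^{-1} (N_l x^k + b)
    x = sum(E[l] @ np.linalg.solve(M[l], N[l] @ x + b) for l in range(L))
print("final residual norm:", np.linalg.norm(b - A @ x))
\end{verbatim}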
163 %%%%%%%%%%%%%%%%%%%%%%%%
164 %%%%%%%%%%%%%%%%%%%%%%%%
166 \section{A two-stage method with a minimization}
168 Let $Ax=b$ be a given large and sparse linear system of $n$ equations to solve in parallel on $L$ clusters of processors, physically adjacent or geographically distant, where $A\in\mathbb{R}^{n\times n}$ is a square and non-singular matrix, $x\in\mathbb{R}^{n}$ is the solution vector and $b\in\mathbb{R}^{n}$ is the right-hand side vector. The multisplitting of this linear system is defined as follows
172 A & = & [A_{1}, \ldots, A_{L}]\\
173 x & = & [X_{1}, \ldots, X_{L}]\\
174 b & = & [B_{1}, \ldots, B_{L}]
where, for $\ell\in\{1,\ldots,L\}$, $A_\ell$ is a rectangular block of size $n_\ell\times n$, and $X_\ell$ and $B_\ell$ are sub-vectors of size $n_\ell$ each, such that $\sum_\ell n_\ell=n$. In this work, we use a row-by-row splitting without overlapping, in such a way that successive rows of the sparse matrix $A$ and both vectors $x$ and $b$ are assigned to one cluster. So, the multisplitting format of the linear system is defined as follows
181 \forall \ell\in\{1,\ldots,L\} \mbox{,~} A_{\ell \ell}X_\ell + \displaystyle\sum_{\substack{m=1\\m\neq\ell}}^L A_{\ell m}X_m = B_\ell,
184 where $A_{\ell m}$ is a sub-block of size $n_\ell\times n_m$ of the rectangular matrix $A_\ell$, $X_m\neq X_\ell$ is a sub-vector of size $n_m$ of the solution vector $x$ and $\sum_{m\neq \ell}n_m+n_\ell=n$, for all $m\in\{1,\ldots,L\}$.
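For illustration, the short NumPy sketch below (our own, not the paper's code; the matrix and block sizes are arbitrary) partitions a small dense matrix into $L$ row bands and verifies that each band satisfies the relation above, i.e. $A_{\ell \ell}X_\ell + \sum_{m\neq\ell} A_{\ell m}X_m = B_\ell$.

\begin{verbatim}
# Row-by-row splitting without overlapping: cluster l owns a band of n_l
# consecutive rows of A and the matching sub-vectors X_l and B_l.
import numpy as np

n, L = 8, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
b = A @ x                                    # so that Ax = b holds exactly
sizes = [len(r) for r in np.array_split(np.arange(n), L)]
offsets = np.concatenate(([0], np.cumsum(sizes)))

for l in range(L):
    rows = slice(offsets[l], offsets[l + 1])      # band owned by cluster l
    lhs = np.zeros(sizes[l])
    for m in range(L):
        cols = slice(offsets[m], offsets[m + 1])
        lhs += A[rows, cols] @ x[cols]            # A_{lm} X_m (m = l included)
    assert np.allclose(lhs, b[rows])              # equals B_l
print("row-block identities verified for L =", L)
\end{verbatim}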
186 Our multisplitting method proceeds by iteration to solve the linear system in such a way that each sub-system
190 A_{\ell \ell}X_\ell = Y_\ell \mbox{,~such that}\\
191 Y_\ell = B_\ell - \displaystyle\sum_{\substack{m=1\\m\neq \ell}}^{L}A_{\ell m}X_m,
is solved independently by a {\it cluster of processors} and communications are required to update the right-hand side vectors $Y_\ell$, such that the vectors $X_m$ represent the data dependencies between the clusters. In this work, we use the parallel restarted GMRES method~\cite{ref34} as an inner iteration method to solve the sub-systems~(\ref{sec03:eq03}). GMRES is one of the most widely used Krylov iterative methods for solving sparse linear systems. %In practice, GMRES is used with a preconditioner to improve its convergence. In this work, we used a preconditioning matrix equivalent to the main diagonal of sparse sub-matrix $A_{ll}$. This preconditioner is straightforward to implement in parallel and gives good performances in many situations.
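As an illustration of one inner iteration, the following sketch uses SciPy's sequential restarted GMRES as a stand-in for the parallel GMRES of the paper (the sub-matrix, block sizes and tolerances are illustrative assumptions): cluster $\ell$ updates its right-hand side $Y_\ell$ from the sub-vectors $X_m$ received from the other clusters and then solves $A_{\ell\ell}X_\ell=Y_\ell$.

\begin{verbatim}
# One inner iteration for cluster l: Y_l = B_l - sum_{m != l} A_{lm} X_m,
# then solve A_{ll} X_l = Y_l with restarted GMRES.
import numpy as np
from scipy.sparse import random as sprandom, identity
from scipy.sparse.linalg import gmres

n, L, l = 90, 3, 1                                  # illustrative sizes
nl = n // L
A = (sprandom(n, n, density=0.05, random_state=0) + 4 * identity(n)).tocsr()
b = np.ones(n)
X = [np.zeros(nl) for _ in range(L)]                # current sub-vectors X_m

rows = slice(l * nl, (l + 1) * nl)
Y_l = b[rows].copy()
for m in range(L):
    if m != l:
        cols = slice(m * nl, (m + 1) * nl)
        Y_l -= A[rows, cols] @ X[m]                 # subtract A_{lm} X_m

A_ll = A[rows, rows]
X_l, info = gmres(A_ll, Y_l, x0=X[l], restart=16, atol=1e-10)
print("GMRES flag:", info, " residual:", np.linalg.norm(Y_l - A_ll @ X_l))
\end{verbatim}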
It should be noted that the convergence of the inner iterative solver for the
different sub-systems~(\ref{sec03:eq03}) does not necessarily imply the
convergence of the multisplitting method. It strongly depends on the properties
of the global sparse linear system to be
solved~\cite{o1985multi,ref18}. Furthermore, the splitting of the linear system
among several clusters of processors increases the spectral radius of the
iteration matrix, thereby slowing the convergence. In fact, the larger the
number of splittings is, the larger the spectral radius is. In this paper, we
build on the work presented in~\cite{huang1993krylov} to accelerate the
convergence and improve the scalability of multisplitting methods.
In order to accelerate the convergence, we implemented the outer iteration of the multisplitting solver as a Krylov iterative method which minimizes some error function over a Krylov subspace~\cite{S96}. The Krylov subspace that we use is spanned by a basis composed of the successive solutions obtained by solving the $L$ splittings~(\ref{sec03:eq03})
211 S=\{x^1,x^2,\ldots,x^s\},~s\leq n,
214 where for $j\in\{1,\ldots,s\}$, $x^j=[X_1^j,\ldots,X_L^j]$ is a solution of the global linear system. The advantage of such a Krylov subspace is that we need neither an orthogonal basis nor synchronizations between clusters to generate this basis.
216 The multisplitting method is periodically restarted every $s$ iterations with a new initial guess $\tilde{x}=S\alpha$ which minimizes the error function $\|b-Ax\|_2$ over the Krylov subspace spanned by vectors of $S$. So $\alpha$ is defined as the solution of the large overdetermined linear system
221 where $R=AS$ is a dense rectangular matrix of size $n\times s$ and $s\ll n$. This leads us to solve a system of normal equations
226 which is associated with the least squares problem
228 \text{minimize}~\|b-R\alpha\|_2,
where $R^T$ denotes the transpose of the matrix $R$. Since $R$ (i.e. $AS$) and $b$ are split among the $L$ clusters, the symmetric positive definite system~(\ref{sec03:eq06}) is solved in parallel. Thus, an iterative method is more appropriate than a direct one to solve this system. We use CGNR~\cite{S96,refCGNR}, the parallel Conjugate Gradient method applied to the normal equations.
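The following NumPy sketch (ours; small dense data stand in for the distributed sparse blocks, and a direct solve of the $s\times s$ normal equations stands in for the parallel CGNR) illustrates the minimization step: the $s$ successive iterates form $S$, the matrix $R=AS$ is computed, $\alpha$ solves $R^TR\alpha=R^Tb$, and the restart vector is $\tilde{x}=S\alpha$.

\begin{verbatim}
# Outer minimization over the basis S of successive iterates.
import numpy as np

rng = np.random.default_rng(2)
n, s = 50, 5
A = np.eye(n) * 4.0 + rng.uniform(-1.0, 1.0, (n, n)) * 0.5
b = rng.uniform(-1.0, 1.0, n)

# Stand-in for the s successive multisplitting iterates x^1, ..., x^s:
# here, plain Jacobi iterates started from zero.
D = np.diag(np.diag(A))
x_k = np.zeros(n)
cols = []
for _ in range(s):
    x_k = np.linalg.solve(D, (D - A) @ x_k + b)
    cols.append(x_k.copy())
S = np.column_stack(cols)                  # n x s basis, no orthogonalization

R = A @ S                                  # dense n x s matrix, s << n
alpha = np.linalg.solve(R.T @ R, R.T @ b)  # normal equations (CGNR in the paper)
x_tilde = S @ alpha                        # new initial guess for the restart

print("residual of last iterate   :", np.linalg.norm(b - A @ cols[-1]))
print("residual after minimization:", np.linalg.norm(b - A @ x_tilde))
\end{verbatim}

Since the last iterate $x^s$ itself belongs to the span of $S$, the minimized residual is never larger than the residual of $x^s$.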
233 \begin{algorithm}[!t]
\caption{A two-stage linear solver with GMRES as the inner iteration method}
235 \begin{algorithmic}[1]
236 \Input $A_\ell$ (sparse sub-matrix), $B_\ell$ (right-hand side sub-vector)
237 \Output $X_\ell$ (solution sub-vector)\vspace{0.2cm}
238 \State Load $A_\ell$, $B_\ell$
239 \State Set the initial guess $x^0$
240 \State Set the minimizer $\tilde{x}^0=x^0$
241 \For {$k=1,2,3,\ldots$ until the global convergence}
242 \State Restart with $x^0=\tilde{x}^{k-1}$:
243 \For {$j=1,2,\ldots,s$}
244 \State \label{line7}Inner iteration solver: \Call{InnerSolver}{$x^0$, $j$}
245 \State Construct basis $S$: add column vector $X_\ell^j$ to the matrix $S_\ell^k$
246 \State Exchange local values of $X_\ell^j$ with the neighboring clusters
247 \State Compute dense matrix $R$: $R_\ell^{k,j}=\sum^L_{i=1}A_{\ell i}X_i^j$
249 \State \label{line12}Minimization $\|b-R\alpha\|_2$: \Call{UpdateMinimizer}{$S_\ell$, $R$, $b$, $k$}
250 \State Local solution of linear system $Ax=b$: $X_\ell^k=\tilde{X}_\ell^k$
251 \State Exchange the local minimizer $\tilde{X}_\ell^k$ with the neighboring clusters
256 \Function {InnerSolver}{$x^0$, $j$}
257 \State Compute local right-hand side $Y_\ell = B_\ell - \sum^L_{\substack{m=1\\m\neq \ell}}A_{\ell m}X_m^0$
\State Solve the local splitting $A_{\ell \ell}X_\ell^j=Y_\ell$ using the parallel GMRES method, with $X_\ell^0$ as the initial guess
259 \State \Return $X_\ell^j$
264 \Function {UpdateMinimizer}{$S_\ell$, $R$, $b$, $k$}
\State Solve the normal equations $(R^k)^TR^k\alpha^k=(R^k)^Tb$ in parallel by the $L$ clusters using the parallel CGNR method
266 \State Compute local minimizer $\tilde{X}_\ell^k=S_\ell^k\alpha^k$
267 \State \Return $\tilde{X}_\ell^k$
The key points of our Krylov multisplitting method to solve a large sparse linear system are given in Algorithm~\ref{algo:01}. This algorithm is based on a two-stage method with a minimization, using the restarted GMRES iterative method as an inner solver. It is executed in parallel by each cluster of processors. Matrices and vectors with the subscript $\ell$ represent the local data for cluster $\ell$, where $\ell\in\{1,\ldots,L\}$. The two-stage solver uses two different parallel iterative algorithms: the GMRES method to solve each splitting~(\ref{sec03:eq03}) on a cluster of processors, and the CGNR method, executed in parallel by all clusters, to minimize the error function~(\ref{sec03:eq07}) over the Krylov subspace spanned by $S$. The algorithm requires two global synchronizations between the $L$ clusters. The first one is performed at line~\ref{line12} in Algorithm~\ref{algo:01} to exchange the local values of the solution vector $x$ (i.e. the minimizer $\tilde{x}$) required to restart the multisplitting solver. The second one is needed to construct the matrix $R$. We chose to perform this latter synchronization $s$ times in every outer iteration $k$ (line~\ref{line7} in Algorithm~\ref{algo:01}). This is a straightforward way to compute the sparse matrix-dense matrix multiplication $R=AS$. We implemented all synchronizations by using the message passing collective communications of the MPI library.
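As a purely illustrative sketch of these synchronizations (it is not the authors' implementation; it assumes one MPI process per cluster, equal block sizes and the mpi4py bindings), the following fragment gathers the sub-vectors $X_\ell^j$ needed by every cluster to compute its rows of $R$, and then the local minimizers $\tilde{X}_\ell$ needed to restart the solver. It would be launched with, e.g., \texttt{mpirun -n 4 python sketch.py}.

\begin{verbatim}
# Hypothetical sketch of the two inter-cluster synchronizations of
# Algorithm 1, with one MPI process standing in for one cluster.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
L, ell = comm.Get_size(), comm.Get_rank()
n_l = 4                                        # illustrative local size
n = L * n_l

# (1) After each inner solve, exchange X_l^j so that every cluster can
#     compute its rows of R:  R_l = sum_i A_{l i} X_i^j.
X_lj = np.full(n_l, float(ell))                # stands for the local solution
x_j = np.empty(n, dtype='d')
comm.Allgather([X_lj, MPI.DOUBLE], [x_j, MPI.DOUBLE])

# (2) After the minimization, exchange the local minimizer so that the
#     solver can restart from x_tilde = S alpha on every cluster.
Xt_l = np.full(n_l, float(ell) + 0.5)
x_tilde = np.empty(n, dtype='d')
comm.Allgather([Xt_l, MPI.DOUBLE], [x_tilde, MPI.DOUBLE])

if ell == 0:
    print("gathered x^j   :", x_j)
    print("restart vector :", x_tilde)
\end{verbatim}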
275 %%%%%%%%%%%%%%%%%%%%%%%%
276 %%%%%%%%%%%%%%%%%%%%%%%%
278 \section{Experiments}
In order to illustrate the interest of our algorithm, we have compared it with
the GMRES method, which is a well-known method used in many
situations. We have chosen to focus on only one problem which is very simple to
implement: a 3-dimensional Poisson problem.
\nabla^2 u&=f \mbox{~in~} \omega\\
289 u &=0 \mbox{~on~} \Gamma=\partial \omega
After discretization with a finite difference scheme, a seven-point stencil is
used. It is well-known that the spectral radius of the matrices representing such
problems is very close to 1. Moreover, the larger the number of discretization
points is, the closer to 1 the spectral radius is. Hence, to solve a matrix
obtained for a 3D Poisson problem, the number of iterations is high. Using a
preconditioner, it is possible to reduce the number of iterations, but
preconditioners are not scalable when using many cores.
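For reference, a standard way to assemble the sparse matrix of this seven-point discretization on an $N\times N\times N$ grid (our sketch, using SciPy; the grid size is an illustrative assumption) is by Kronecker products of the one-dimensional Laplacian:

\begin{verbatim}
# 3D Poisson matrix (7-point stencil) via Kronecker products.
import numpy as np
import scipy.sparse as sp

N = 20                                            # grid points per dimension
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N))   # 1D Laplacian
I = sp.identity(N)
A = (sp.kron(sp.kron(T, I), I)
     + sp.kron(sp.kron(I, T), I)
     + sp.kron(sp.kron(I, I), T)).tocsr()         # size N^3 x N^3
print(A.shape, "average nonzeros per row:", A.nnz / A.shape[0])
\end{verbatim}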
In the following, we present some experiments we carried out on the HECToR
architecture, the UK's high-end computing resource, funded by the UK Research
Councils~\cite{hector}. This is a Cray XE6 supercomputer; each node is equipped with two
16-core AMD Opteron processors running at 2.3 GHz and 32 GB of memory. Nodes are
interconnected by the Cray Gemini network in a 3D torus topology.
Table~\ref{tab1} shows the results of the experiments. The first column shows
the size of the 3D Poisson problem. The size is chosen in order to have
approximately 50,000 components per core. The second column represents the
number of cores used; in parentheses, the decomposition used for the
Krylov multisplitting is given. The third and the sixth columns respectively show
the execution times for the GMRES and the Krylov multisplitting codes. The fourth
and the seventh columns describe the numbers of iterations; for the
multisplitting code, the total number of inner iterations is given in
parentheses. The last column gives the ratio of the GMRES execution time to the
Krylov multisplitting execution time. For the GMRES code (alone and in the
multisplitting version), the restart parameter is fixed to 16. The precision of
the GMRES version is fixed to 1e-6. For the multisplitting, there are two
precisions: one for the external solver, which is fixed to 1e-6, and another one
for the inner solver (GMRES), which is fixed to 1e-10. It should be noted that a
high precision is used, but we also fixed a maximum number of iterations for
each internal step. In practice, we limit the number of iterations in the
internal step to 10. So an internal iteration is finished
when the precision is reached or when the maximum internal number of iterations
is reached. The precision and the maximum number of iterations of the CGNR method are fixed to 1e-25 and 20 respectively. The size of the Krylov subspace basis $S$ is fixed to 10 vectors.
329 \begin{tabular}{|c|c||c|c|c||c|c|c||c|}
331 \multirow{2}{*}{Pb size}&\multirow{2}{*}{Nb. cores} & \multicolumn{3}{c||}{GMRES} & \multicolumn{3}{c||}{Krylov Multisplitting} & \multirow{2}{*}{Ratio}\\
333 & & Time (s) & nb Iter. & $\Delta$ & Time (s)& nb Iter. & $\Delta$ & \\
335 $468^3$ & 2,048 (2x1,024) & 299.7 & 41,028 & 5.02e-8 & 48.4 & 691(6,146) & 8.24e-08 & 6.19 \\
337 $590^3$ & 4,096 (2x2,048) & 433.1 & 55,494 & 4.92e-7 & 74.1 & 1,101(8,211) & 6.62e-08 & 5.84 \\
339 $743^3$ & 8,192 (2x4,096) & 704.4 & 87,822 & 4.80e-07 & 151.2 & 3,061(14,914) & 5.87e-08 & 4.65 \\
341 $743^3$ & 8,192 (4x2,048) & 704.4 & 87,822 & 4.80e-07 & 110.3 & 1,531(12,721) & 1.47e-07& 6.39 \\
From these experiments, it can be observed that the multisplitting version is
always faster than the GMRES version. The acceleration gain of the
multisplitting version is between 4 and 6. It can also be noticed that the number of
iterations is drastically reduced with the multisplitting version, even if it
remains non-negligible. Moreover, with 8,192 cores, we can see that using 4 clusters gives
better performance than simply using 2 clusters. In fact, we can remark that the
precision with 2 clusters is slightly better, but in both cases the precision is
below the specified threshold.
360 \section{Conclusion and perspectives}
We have implemented a Krylov multisplitting method to solve sparse linear
systems on large-scale computing platforms. We have developed a synchronous
two-stage method based on the block Jacobi multisplitting, which uses the GMRES
iterative method as an inner iteration. Our contribution in this paper is
twofold. First, we provide a multi-cluster decomposition that allows us to choose
the appropriate size of the clusters according to the architectures of the
supercomputer. Second, we have implemented the outer iteration of the
multisplitting method as a Krylov subspace method which minimizes some error
function. This accelerates the convergence and improves the scalability of the
multisplitting method.
We have tested our multisplitting method by solving the sparse linear system
issued from the discretization of a 3D Poisson problem. We have compared its
performance to that of the classical GMRES method on a supercomputer composed of 2,048
to 8,192 cores. The experimental results showed that the multisplitting method is
about 4 to 6 times faster than the GMRES method for different sizes of the
problem, split into 2 or 4 blocks. Indeed, the
GMRES method has difficulties scaling to many cores, while the Krylov
multisplitting method allows us to hide latency and to reduce the inter-cluster
communications.

In future works, we plan to conduct experiments on a larger number of cores and
to test the scalability of our Krylov multisplitting method. It would be
interesting to validate its performance for solving other linear/nonlinear and
symmetric/nonsymmetric problems. Moreover, we intend to develop multisplitting
methods based on asynchronous iterations, in which communications are overlapped
with computations. These methods would be interesting for platforms composed of
distant clusters interconnected by a high-latency network. In addition, we
intend to investigate the convergence improvements of our method by using
preconditioning techniques for Krylov iterative methods and multisplitting
methods with overlapping blocks.
393 \section{Acknowledgement}
The authors would like to thank Mark Bull of EPCC for his fruitful remarks and to acknowledge the use of the HECToR facilities.
403 %%%%%%%%%%%%%%%%%%%%%%%%
404 %%%%%%%%%%%%%%%%%%%%%%%%
406 \bibliographystyle{plain}
407 \bibliography{biblio}