1 \documentclass{article}
2 \usepackage[utf8]{inputenc}
3 \usepackage{amsfonts,amssymb}
7 \usepackage{algpseudocode}
10 \algnewcommand\algorithmicinput{\textbf{Input:}}
11 \algnewcommand\Input{\item[\algorithmicinput]}
13 \algnewcommand\algorithmicoutput{\textbf{Output:}}
14 \algnewcommand\Output{\item[\algorithmicoutput]}
16 \newcommand{\Time}[1]{\mathit{Time}_\mathit{#1}}
17 \newcommand{\Prec}{\mathit{prec}}
18 \newcommand{\Ratio}{\mathit{Ratio}}
21 %\usepackage[textsize=footnotesize]{todonotes}
22 %\newcommand{\LZK}[2][inline]{%
23 %\todo[color=green!40,#1]{\sffamily\textbf{LZK:} #2}\xspace}
25 \title{A scalable multisplitting algorithm for solving large sparse linear systems}
31 \author{Raphaël Couturier \and Lilia Ziane Khodja}
36 %%%%%%%%%%%%%%%%%%%%%%%%
37 %%%%%%%%%%%%%%%%%%%%%%%%
40 In this paper we revisit the Krylov multisplitting algorithm presented in
41 \cite{huang1993krylov} which uses a sequential method to minimize the Krylov
42 iterations computed by a multisplitting algorithm. Our new algorithm is based on
43 a parallel multisplitting algorithm with few blocks of large size using a
44 parallel GMRES method inside each block and on a parallel Krylov minimization in
order to improve the convergence. Some large scale experiments with a 3D Poisson
problem are presented with up to 8,192 cores. They show the improvements obtained
compared to a classical GMRES, both in terms of number of iterations and
execution time.
51 %%%%%%%%%%%%%%%%%%%%%%%%
52 %%%%%%%%%%%%%%%%%%%%%%%%
54 \section{Introduction}
Iterative methods are used to solve large sparse linear systems of equations of
the form $Ax=b$ because they are easier to parallelize than direct ones. Many
iterative methods have been proposed and adapted to various problems. For
example, the GMRES method and the Conjugate Gradient method are well known and
widely used~\cite{S96}. Both methods are based on a Krylov subspace, which
consists in forming a basis from the sequence of successive matrix powers
applied to the initial residual.
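As a small illustration of this construction (ours, not taken from the cited works), the following Python sketch builds the first $m$ Krylov vectors $\{r_0, Ar_0, \ldots, A^{m-1}r_0\}$ for a given matrix and initial guess:
\begin{verbatim}
import numpy as np

def krylov_vectors(A, b, x0, m):
    """Return the matrix [r0, A r0, ..., A^{m-1} r0] column by column."""
    r0 = b - A @ x0          # initial residual
    V = [r0]
    for _ in range(m - 1):
        V.append(A @ V[-1])  # next matrix power applied to r0
    return np.column_stack(V)

# tiny usage example with a 2 x 2 system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
K = krylov_vectors(A, b, np.zeros(2), m=3)
\end{verbatim}
In practice, GMRES and CG work with an orthonormalized version of this basis (e.g. built by the Arnoldi or Lanczos process) for numerical stability.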
63 When solving large linear systems with many cores, iterative methods often
64 suffer from scalability problems. This is due to their need for collective
65 communications to perform matrix-vector products and reduction operations.
Preconditioners can be used to accelerate the convergence of iterative
solvers. However, most of the good preconditioners are not scalable when
thousands of cores are used.
Traditional parallel iterative solvers are based on fine-grain computations that frequently require data exchanges between computing nodes and involve global synchronizations that penalize the scalability. They are particularly penalized on large scale architectures or on distributed platforms composed of distant clusters interconnected by a high-latency network. It is therefore imperative to develop coarse-grain algorithms that reduce the communications in parallel iterative solvers. Two possible solutions consist in using either asynchronous iterative methods~\cite{ref18} or multisplitting algorithms. In this paper, we reconsider the use of a multisplitting method. Unlike traditional multisplitting methods, which suffer from slow convergence, the use of a minimization process, as proposed in~\cite{huang1993krylov}, can drastically improve the convergence.
The present paper is organized as follows. Section~\ref{sec:02} presents related works and the main principle of multisplitting methods. Section~\ref{sec:03} then presents the algorithm of our Krylov multisplitting method, based on inner-outer iterations. Finally, Section~\ref{sec:04} reports parallel experiments on the HECToR architecture showing the performance of the Krylov multisplitting algorithm compared to the classical GMRES algorithm for solving a 3D Poisson problem.
83 %%%%%%%%%%%%%%%%%%%%%%%%
84 %%%%%%%%%%%%%%%%%%%%%%%%
86 \section{Related works and presentation of the multisplitting method}
88 A general framework for studying parallel multisplitting has been presented in~\cite{o1985multi}
89 by O'Leary and White. Convergence conditions are given for the
most general case. Many authors have improved multisplitting algorithms, for
example by proposing an asynchronous version~\cite{bru1995parallel} and the
corresponding convergence conditions~\cite{bai1999block,bahi2000asynchronous},
or two-stage algorithms~\cite{frommer1992h,bru1995parallel}.
In~\cite{huang1993krylov}, the authors proposed a parallel multisplitting
algorithm in which all the tasks except one are devoted to solving a sub-block of
the splitting and to sending their local solutions to the first task, which is in
charge of combining the vectors at each iteration. These vectors form a Krylov
basis, over which the first task minimizes the error function to accelerate the
convergence; the other tasks then receive the updated solution until
convergence of the global system.
In~\cite{couturier2008gremlins}, the authors proposed practical implementations
of multisplitting algorithms to solve large scale linear systems. Inner solvers
could be based on a sequential direct method, such as LU, or on a sequential
iterative method.
108 In~\cite{prace-multi}, the authors have proposed a parallel multisplitting
109 algorithm in which large blocks are solved using a GMRES solver. The authors have
performed large scale experiments up to 32,768 cores and concluded that an
asynchronous multisplitting algorithm could be more efficient than traditional
solvers on an exascale architecture with hundreds of thousands of cores.
Compared to these works, we propose in this paper a practical multisplitting method based on parallel iterative blocks which gives better results than the classical GMRES method for the 3D Poisson problem we considered.
The key idea of a multisplitting method to solve a large system of linear equations $Ax=b$ is the following. The first step consists in partitioning the matrix $A$ in $L$ different ways
where, for all $\ell\in\{1,\ldots,L\}$, the $M_\ell$ are non-singular matrices. Then the linear system is solved by an iteration based on the obtained splittings, as follows
124 x^{k+1}=\displaystyle\sum^L_{\ell=1} E_\ell M^{-1}_\ell (N_\ell x^k + b),~k=1,2,3,\ldots
where the $E_\ell$ are non-negative diagonal weighting matrices whose sum is the identity matrix $I$. The convergence of such a method depends on the condition
\rho(\displaystyle\sum^L_{\ell=1}E_\ell M^{-1}_\ell N_\ell)<1,
where $\rho(\cdot)$ denotes the spectral radius of a square matrix.
134 The advantage of the multisplitting method is that at each iteration $k$ there are $L$ different linear sub-systems
136 v_\ell^k=M^{-1}_\ell N_\ell x_\ell^{k-1} + M^{-1}_\ell b,~\ell\in\{1,\ldots,L\},
to be solved independently by a direct or an iterative method, where $v_\ell$ is the solution of the local sub-system. Thus the computations of $\{v_\ell\}_{1\leq \ell\leq L}$ may be performed in parallel by a set of processors. A multisplitting method using an iterative method as an inner solver is called an inner-outer iterative method or a two-stage method. The results $v_\ell$ obtained from the different splittings~(\ref{eq04}) are combined to compute the solution $x$ of the linear system by using the diagonal weighting matrices
141 x^k = \displaystyle\sum^L_{\ell=1} E_\ell v_\ell^k,
In the case where the diagonal weighting matrices $E_\ell$ have only zero and one entries (i.e. the $v_\ell$ are disjoint vectors), the multisplitting method is non-overlapping and corresponds to the block Jacobi method.
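To make this iteration concrete, here is a minimal sequential Python sketch (ours, for illustration only) of the non-overlapping case, where each $M_\ell$ is a diagonal block of $A$ and each $E_\ell$ selects the corresponding rows:
\begin{verbatim}
import numpy as np

def multisplitting_iteration(A, b, blocks, iters=50):
    """Non-overlapping multisplitting (block Jacobi): for each index set I,
    M_l is the diagonal block A[I, I], N_l = M_l - A, and E_l selects the
    rows I, so that the E_l sum to the identity."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x_new = np.zeros_like(x)
        for I in blocks:
            A_II = A[np.ix_(I, I)]
            rhs = b[I] - A[I, :] @ x + A_II @ x[I]   # (N_l x + b) on rows I
            x_new[I] = np.linalg.solve(A_II, rhs)    # E_l M_l^{-1} (N_l x + b)
        x = x_new
    return x

# usage on a small diagonally dominant system split into two blocks
A = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 4.]])
b = np.ones(4)
x = multisplitting_iteration(A, b, [np.arange(0, 2), np.arange(2, 4)])
\end{verbatim}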
146 %%%%%%%%%%%%%%%%%%%%%%%%
147 %%%%%%%%%%%%%%%%%%%%%%%%
149 \section{A two-stage method with a minimization}
151 Let $Ax=b$ be a given large and sparse linear system of $n$ equations to solve in parallel on $L$ clusters of processors, physically adjacent or geographically distant, where $A\in\mathbb{R}^{n\times n}$ is a square and non-singular matrix, $x\in\mathbb{R}^{n}$ is the solution vector and $b\in\mathbb{R}^{n}$ is the right-hand side vector. The multisplitting of this linear system is defined as follows
155 A & = & [A_{1}, \ldots, A_{L}]\\
156 x & = & [X_{1}, \ldots, X_{L}]\\
157 b & = & [B_{1}, \ldots, B_{L}]
where, for $\ell\in\{1,\ldots,L\}$, $A_\ell$ is a rectangular block of size $n_\ell\times n$ and $X_\ell$ and $B_\ell$ are sub-vectors of size $n_\ell$ each, such that $\sum_\ell n_\ell=n$. In this work, we use a row-by-row splitting without overlapping, in such a way that successive rows of the sparse matrix $A$ and the corresponding entries of the vectors $x$ and $b$ are assigned to one cluster. So, the multisplitting format of the linear system is defined as follows
164 \forall \ell\in\{1,\ldots,L\} \mbox{,~} A_{\ell \ell}X_\ell + \displaystyle\sum_{\substack{m=1\\m\neq\ell}}^L A_{\ell m}X_m = B_\ell,
where $A_{\ell m}$ is a sub-block of size $n_\ell\times n_m$ of the rectangular matrix $A_\ell$, $X_m$ ($m\neq\ell$) is a sub-vector of size $n_m$ of the solution vector $x$, and $\sum_{m\neq \ell}n_m+n_\ell=n$, for all $m\in\{1,\ldots,L\}$.
169 Our multisplitting method proceeds by iteration to solve the linear system in such a way that each sub-system
173 A_{\ell \ell}X_\ell = Y_\ell \mbox{,~such that}\\
174 Y_\ell = B_\ell - \displaystyle\sum_{\substack{m=1\\m\neq \ell}}^{L}A_{\ell m}X_m,
is solved independently by a {\it cluster of processors}, and communications are required to update the right-hand side vectors $Y_\ell$, since the vectors $X_m$ represent the data dependencies between the clusters. In this work, we use the parallel restarted GMRES method~\cite{ref34} as the inner iteration method to solve the sub-systems~(\ref{sec03:eq03}). GMRES is one of the most widely used Krylov iterative methods to solve sparse linear systems. %In practice, GMRES is used with a preconditioner to improve its convergence. In this work, we used a preconditioning matrix equivalent to the main diagonal of sparse sub-matrix $A_{ll}$. This preconditioner is straightforward to implement in parallel and gives good performances in many situations.
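The following Python fragment (a sequential sketch under our own naming conventions, with SciPy's GMRES standing in for the parallel restarted GMRES of~\cite{ref34}) shows how one cluster would assemble $Y_\ell$ from the current sub-vectors $X_m$ and solve its sub-system~(\ref{sec03:eq03}):
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import gmres

def inner_solve(A_blocks, B, X, ell):
    """Assemble Y_l = B_l - sum_{m != l} A_lm X_m, then solve A_ll X_l = Y_l
    with restarted GMRES (restart value and iteration cap are illustrative)."""
    Y = B[ell].copy()
    for m in range(len(X)):
        if m != ell:
            Y -= A_blocks[ell][m] @ X[m]   # data dependencies on other clusters
    X_new, info = gmres(A_blocks[ell][ell], Y, x0=X[ell],
                        restart=16, maxiter=10)
    return X_new
\end{verbatim}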
It should be noted that the convergence of the inner iterative solver for the
different sub-systems~(\ref{sec03:eq03}) does not necessarily imply the
convergence of the multisplitting method. It strongly depends on the properties
of the global sparse linear system to be
solved~\cite{o1985multi,ref18}. Furthermore, the splitting of the linear system
among several clusters of processors increases the spectral radius of the
iteration matrix, thereby slowing the convergence. In fact, the larger the
number of splittings is, the larger the spectral radius is. In this paper, we
build on the work presented in~\cite{huang1993krylov} to accelerate the
convergence and improve the scalability of multisplitting methods.
In order to accelerate the convergence, we implemented the outer iteration of the multisplitting solver as a Krylov iterative method which minimizes some error function over a Krylov subspace~\cite{S96}. The Krylov subspace that we use is spanned by a basis composed of successive solutions obtained by solving the $L$ splittings~(\ref{sec03:eq03})
194 S=\{x^1,x^2,\ldots,x^s\},~s\leq n,
197 where for $j\in\{1,\ldots,s\}$, $x^j=[X_1^j,\ldots,X_L^j]$ is a solution of the global linear system. The advantage of such a Krylov subspace is that we need neither an orthogonal basis nor synchronizations between clusters to generate this basis.
199 The multisplitting method is periodically restarted every $s$ iterations with a new initial guess $\tilde{x}=S\alpha$ which minimizes the error function $\|b-Ax\|_2$ over the Krylov subspace spanned by vectors of $S$. So $\alpha$ is defined as the solution of the large overdetermined linear system
204 where $R=AS$ is a dense rectangular matrix of size $n\times s$ and $s\ll n$. This leads us to solve a system of normal equations
209 which is associated with the least squares problem
211 \text{minimize}~\|b-R\alpha\|_2,
where $R^T$ denotes the transpose of the matrix $R$. Since $R$ (i.e. $AS$) and $b$ are split among the $L$ clusters, the symmetric positive definite system~(\ref{sec03:eq06}) is solved in parallel. Thus an iterative method is more appropriate than a direct one to solve this system. We use CGNR~\cite{S96,refCGNR}, the parallel Conjugate Gradient method applied to the normal equations.
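As a sequential sketch of this minimization step (illustrative only; in the paper the normal equations are solved in parallel with CGNR), one can form $R=AS$ from the $s$ stored solutions and compute $\tilde{x}=S\alpha$ as follows:
\begin{verbatim}
import numpy as np

def update_minimizer(A, S, b):
    """Minimize ||b - A S alpha||_2 through the s x s normal equations
    (R^T R) alpha = R^T b with R = A S, then return x_tilde = S alpha."""
    R = A @ S                                   # dense n x s matrix, s << n
    alpha = np.linalg.solve(R.T @ R, R.T @ b)   # stand-in for parallel CGNR
    return S @ alpha
\end{verbatim}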
216 \begin{algorithm}[!t]
\caption{A two-stage linear solver with GMRES as the inner iteration method}
218 \begin{algorithmic}[1]
219 \Input $A_\ell$ (sparse sub-matrix), $B_\ell$ (right-hand side sub-vector)
220 \Output $X_\ell$ (solution sub-vector)\vspace{0.2cm}
221 \State Load $A_\ell$, $B_\ell$
222 \State Set the initial guess $x^0$
223 \State Set the minimizer $\tilde{x}^0=x^0$
224 \For {$k=1,2,3,\ldots$ until the global convergence}
225 \State Restart with $x^0=\tilde{x}^{k-1}$:
226 \For {$j=1,2,\ldots,s$}
227 \State \label{line7}Inner iteration solver: \Call{InnerSolver}{$x^0$, $j$}
228 \State Construct basis $S$: add column vector $X_\ell^j$ to the matrix $S_\ell^k$
229 \State Exchange local values of $X_\ell^j$ with the neighboring clusters
230 \State Compute dense matrix $R$: $R_\ell^{k,j}=\sum^L_{i=1}A_{\ell i}X_i^j$
232 \State \label{line12}Minimization $\|b-R\alpha\|_2$: \Call{UpdateMinimizer}{$S_\ell$, $R$, $b$, $k$}
233 \State Local solution of linear system $Ax=b$: $X_\ell^k=\tilde{X}_\ell^k$
234 \State Exchange the local minimizer $\tilde{X}_\ell^k$ with the neighboring clusters
239 \Function {InnerSolver}{$x^0$, $j$}
240 \State Compute local right-hand side $Y_\ell = B_\ell - \sum^L_{\substack{m=1\\m\neq \ell}}A_{\ell m}X_m^0$
\State Solve the local splitting $A_{\ell \ell}X_\ell^j=Y_\ell$ using the parallel GMRES method, with $X_\ell^0$ as the initial guess
242 \State \Return $X_\ell^j$
247 \Function {UpdateMinimizer}{$S_\ell$, $R$, $b$, $k$}
\State Solve the normal equations $(R^k)^TR^k\alpha^k=(R^k)^Tb$ in parallel by the $L$ clusters using the parallel CGNR method
249 \State Compute local minimizer $\tilde{X}_\ell^k=S_\ell^k\alpha^k$
250 \State \Return $\tilde{X}_\ell^k$
The key points of our Krylov multisplitting method to solve a large sparse linear system are given in Algorithm~\ref{algo:01}. This algorithm is based on a two-stage method with a minimization, using the restarted GMRES iterative method as an inner solver. It is executed in parallel by each cluster of processors. Matrices and vectors with the subscript $\ell$ represent the local data for cluster $\ell$, where $\ell\in\{1,\ldots,L\}$. The two-stage solver uses two different parallel iterative algorithms: the GMRES method to solve each splitting~(\ref{sec03:eq03}) on a cluster of processors, and the CGNR method executed in parallel by all clusters to minimize the error function~(\ref{sec03:eq07}) over the Krylov subspace spanned by $S$. The algorithm requires two global synchronizations between the $L$ clusters. The first one is performed at line~\ref{line12} in Algorithm~\ref{algo:01} to exchange local values of the solution vector $x$ (i.e. the minimizer $\tilde{x}$) required to restart the multisplitting solver. The second one is needed to construct the matrix $R$. We chose to perform this latter synchronization $s$ times in every outer iteration $k$ (line~\ref{line7} in Algorithm~\ref{algo:01}). This is a straightforward way to compute the sparse-matrix-by-dense-matrix product $R=AS$. We implemented all synchronizations using the collective communication routines of the MPI library.
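For illustration, the sketch below (ours; it assumes one MPI process per cluster, whereas the actual implementation runs each cluster as a group of processes) shows how such a global exchange could be written with mpi4py: the local pieces of $x$ are gathered with an all-gather, after which each process computes its local rows of the new column of $R$:
\begin{verbatim}
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD   # one process per cluster in this simplified sketch

def exchange_and_local_column(A_local, x_local, counts):
    """Gather the distributed pieces of x on every process, then compute
    the local rows of the new column of R, i.e. A_local @ x_global."""
    x_global = np.empty(sum(counts))
    comm.Allgatherv(x_local, (x_global, counts))
    return A_local @ x_global
\end{verbatim}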
258 %%%%%%%%%%%%%%%%%%%%%%%%
259 %%%%%%%%%%%%%%%%%%%%%%%%
261 \section{Experiments}
In order to illustrate the interest of our algorithm, we have compared it with
the GMRES method, which is widely used in many situations. We have chosen to
focus on only one problem which is very simple to implement: a 3-dimensional
Poisson problem.
\nabla^2 u&=f \mbox{~in~} \omega\\
272 u &=0 \mbox{~on~} \Gamma=\partial \omega
After discretization with a finite difference scheme, a seven-point stencil is
used. It is well known that the spectral radius of the iteration matrices
arising from such problems is very close to 1. Moreover, the larger the number
of discretization points is, the closer to 1 the spectral radius is. Hence, to
solve a matrix obtained for a 3D Poisson problem, the number of iterations is
high. Using a preconditioner it is possible to reduce the number of iterations,
but preconditioners are not scalable when using many cores.
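For reference, the following standard SciPy construction (not the code used for the experiments) assembles the sparse matrix of the 3D Poisson problem discretized with the seven-point stencil and homogeneous Dirichlet boundary conditions:
\begin{verbatim}
import scipy.sparse as sp

def poisson3d(m):
    """Seven-point finite-difference matrix of the 3D Laplacian
    on an m x m x m grid with Dirichlet boundary conditions."""
    I = sp.identity(m, format='csr')
    T = sp.diags([-1., 2., -1.], [-1, 0, 1], shape=(m, m), format='csr')
    return (sp.kron(sp.kron(T, I), I)
            + sp.kron(sp.kron(I, T), I)
            + sp.kron(sp.kron(I, I), T)).tocsr()

A = poisson3d(10)   # 1,000 unknowns; the experiments below go up to 743^3
\end{verbatim}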
285 %Doing many experiments with many cores is not easy and requires to access to a supercomputer with several hours for developing a code and then improving it.
In the following we present some experiments we carried out on the HECToR
architecture, the UK's high-end computing resource, funded by the UK Research
Councils~\cite{hector}. This is a Cray XE6 supercomputer, whose nodes are
equipped with two 16-core AMD Opteron processors running at 2.3 GHz and 32 GB
of memory. Machines are interconnected
Table~\ref{tab1} shows the results of the experiments. The first column shows
the size of the 3D Poisson problem. The size is chosen in order to have
approximately 50,000 components per core. The second column represents the
number of cores used; in parentheses, the decomposition used for the
Krylov multisplitting is given. The third and sixth columns respectively show
the execution times for the GMRES and the Krylov multisplitting codes. The fourth
and seventh columns give the numbers of iterations; for the
multisplitting code, the total number of inner iterations is given in
parentheses. For the GMRES code (alone and in the multisplitting version) the
restart parameter is fixed to 16. The precision of the GMRES version is fixed to
$10^{-6}$. For the multisplitting, there are two precisions, one for the external
solver, which is fixed to $10^{-6}$, and another one for the inner solver (GMRES), which
is fixed to $10^{-10}$. It should be noted that a high precision is used, but we also
fixed a maximum number of iterations for each internal step. In practice, we
limit the number of iterations in the internal step to 10; an internal step is finished
when the precision is reached or when the maximum number of internal iterations
is reached. The precision and the maximum number of iterations of the CGNR method are fixed to $10^{-25}$ and 20 respectively. The size of the Krylov subspace basis $S$ is fixed to 10 vectors.
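For readability, the solver settings listed above can be summarized as follows (the values come from the text; the layout and names are ours):
\begin{verbatim}
settings = {
    "gmres_restart":        16,     # GMRES alone and inner GMRES
    "gmres_tolerance":      1e-6,   # precision of the GMRES version
    "outer_tolerance":      1e-6,   # external solver of the multisplitting
    "inner_tolerance":      1e-10,  # inner GMRES of the multisplitting
    "inner_max_iterations": 10,     # cap on each internal step
    "cgnr_tolerance":       1e-25,
    "cgnr_max_iterations":  20,
    "krylov_basis_size":    10,     # size s of the basis S
}
\end{verbatim}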
312 \begin{tabular}{|c|c||c|c|c||c|c|c||c|}
314 \multirow{2}{*}{Pb size}&\multirow{2}{*}{Nb. cores} & \multicolumn{3}{c||}{GMRES} & \multicolumn{3}{c||}{Krylov Multisplitting} & \multirow{2}{*}{Ratio}\\
316 & & Time (s) & nb Iter. & $\Delta$ & Time (s)& nb Iter. & $\Delta$ & \\
$468^3$ & 2,048 (2$\times$1,024) & 299.7 & 41,028 & 5.02e-08 & 48.4 & 691 (6,146) & 8.24e-08 & 6.19 \\
$590^3$ & 4,096 (2$\times$2,048) & 433.1 & 55,494 & 4.92e-07 & 74.1 & 1,101 (8,211) & 6.62e-08 & 5.84 \\
$743^3$ & 8,192 (2$\times$4,096) & 704.4 & 87,822 & 4.80e-07 & 151.2 & 3,061 (14,914) & 5.87e-08 & 4.65 \\
$743^3$ & 8,192 (4$\times$2,048) & 704.4 & 87,822 & 4.80e-07 & 110.3 & 1,531 (12,721) & 1.47e-07 & 6.39 \\
From these experiments, it can be observed that the multisplitting version is
always faster than the GMRES version. The acceleration gain of the
multisplitting version is between 4 and 6. It can also be noticed that the number
of iterations is drastically reduced with the multisplitting version, even if it
remains non-negligible. Moreover, with 8,192 cores, we can see that using 4
clusters gives better performance than simply using 2 clusters. In fact, the
precision with 2 clusters is slightly better, but in both cases the precision is
under the specified threshold.
343 \section{Conclusion and perspectives}
We have implemented a Krylov multisplitting method to solve sparse linear
systems on large-scale computing platforms. We have developed a synchronous
two-stage method based on the block Jacobi multisplitting, which uses the GMRES
iterative method as an inner iteration. Our contribution in this paper is
twofold. First, we provide a multi-cluster decomposition that allows us to choose
the appropriate size of the clusters according to the architecture of the
supercomputer. Second, we have implemented the outer iteration of the
multisplitting method as a Krylov subspace method which minimizes some error
function. This accelerates the convergence and improves the scalability of the
multisplitting method.
We have tested our multisplitting method to solve the sparse linear system
arising from the discretization of a 3D Poisson problem. We have compared its
performance to that of the classical GMRES method on a supercomputer using 2,048
to 8,192 cores. The experimental results showed that the multisplitting method is
about 4 to 6 times faster than the GMRES method for the different sizes of the
problem, when the problem is split into 2 or 4 blocks. Indeed, the
GMRES method has difficulties scaling to many cores, while the Krylov
multisplitting method makes it possible to hide latency and to reduce the
inter-cluster communications.
In future work, we plan to conduct experiments on a larger number of cores and
to test the scalability of our Krylov multisplitting method. It would be
interesting to validate its performance on other linear/nonlinear and
symmetric/nonsymmetric problems. Moreover, we intend to develop multisplitting
methods based on asynchronous iterations, in which communications are overlapped
by computations. These methods would be interesting for platforms composed of
distant clusters interconnected by a high-latency network. In addition, we
intend to investigate the convergence improvements of our method by using
preconditioning techniques for Krylov iterative methods and multisplitting
methods with overlapping blocks.
376 \section{Acknowledgement}
The authors would like to thank Mark Bull of EPCC for his fruitful remarks, and acknowledge the use of the HECToR facilities.
386 %%%%%%%%%%%%%%%%%%%%%%%%
387 %%%%%%%%%%%%%%%%%%%%%%%%
389 \bibliographystyle{plain}
390 \bibliography{biblio}