\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsfonts,amssymb}

\title{A scalable multisplitting algorithm for solving large sparse linear systems}

\author{Raphaël Couturier \and Lilia Ziane Khodja}

%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\maketitle

\begin{abstract}
In this paper we revisit the Krylov multisplitting algorithm presented in
\cite{huang1993krylov}, which uses a scalar method to minimize the Krylov
iterations computed by a multisplitting algorithm. Our new algorithm is based on
a parallel multisplitting algorithm with few blocks of large size, using a
parallel GMRES method inside each block, and on a parallel Krylov minimization
to improve the convergence. Some large scale experiments with a 3D Poisson
problem are presented. They show the improvements obtained compared to a
classical GMRES, both in terms of number of iterations and execution times.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
Iterative methods are used to solve large sparse linear systems of equations of
the form $Ax=b$ because they are easier to parallelize than direct ones. Many
iterative methods have been proposed and adapted by many researchers. For
example, the GMRES method and the Conjugate Gradient method are very well known
and widely used~\cite{S96}. Both methods are based on the
Krylov subspace, which consists in forming a basis of the sequence of successive
matrix powers times the initial residual.
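
To make this concrete, the following sketch (ours, not part of the paper; the matrix and vectors are arbitrary toy data) builds such a basis $\{r_0, Ar_0, \ldots, A^{m-1}r_0\}$ from the initial residual:

\begin{verbatim}
import numpy as np

def krylov_basis(A, r0, m):
    # Columns are r0, A r0, ..., A^{m-1} r0 (unorthogonalized).
    V = np.zeros((r0.size, m))
    V[:, 0] = r0
    for j in range(1, m):
        V[:, j] = A @ V[:, j - 1]
    return V

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
r0 = b - A @ np.zeros(2)   # initial residual with x0 = 0
V = krylov_basis(A, r0, 2)
\end{verbatim}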
When solving large linear systems with many cores, iterative methods often
suffer from scalability problems. This is due to their need for collective
communications to perform matrix-vector products and reduction operations.
Preconditioners can be used to speed up the convergence of iterative
solvers. However, most good preconditioners are not scalable when
thousands of cores are used.
%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%
The key idea of the multisplitting method for solving a large system
of linear equations $Ax=b$ consists in partitioning the matrix $A$ in
\begin{equation}
A = M_l - N_l,~l\in\{1,\ldots,L\},
\end{equation}
where $M_l$ are nonsingular matrices. Then the linear system is solved
by iteration based on the multisplittings as follows
\begin{equation}
x^{k+1}=\displaystyle\sum^L_{l=1} E_l M^{-1}_l (N_l x^k + b),~k=1,2,3,\ldots
\end{equation}
where $E_l$ are non-negative and diagonal weighting matrices such that
$\sum^L_{l=1}E_l=I$ ($I$ is the identity matrix). Thus the convergence
of such a method depends on the condition
\begin{equation}
\rho(\displaystyle\sum^L_{l=1}E_l M^{-1}_l N_l)<1,
\end{equation}
where $\rho$ denotes the spectral radius.
The advantage of the multisplitting method is that at each iteration
$k$ there are $L$ different linear sub-systems
\begin{equation}
v_l^k=M^{-1}_l N_l x_l^{k-1} + M^{-1}_l b,~l\in\{1,\ldots,L\},
\label{eq04}
\end{equation}
to be solved independently by a direct or an iterative method, where
$v_l^k$ is the solution of the local sub-system. Thus, the
calculations of $v_l^k$ may be performed in parallel by a set of
processors. A multisplitting method using an iterative method for
solving the $L$ linear sub-systems is called an inner-outer iterative
method or a two-stage method. The results $v_l^k$ obtained from the
different splittings~(\ref{eq04}) are combined to compute the solution
$x^k$ of the linear system by using the diagonal weighting matrices
\begin{equation}
x^k = \displaystyle\sum^L_{l=1} E_l v_l^k.
\end{equation}
In the case where the diagonal weighting matrices $E_l$ have only zero
and one factors (i.e. $v_l^k$ are disjoint vectors), the
multisplitting method is non-overlapping and corresponds to the block
Jacobi method.
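
As an illustration only (a minimal sequential sketch of our own, not the paper's implementation), the iteration above and the convergence condition can be written as follows, here with $L=2$ toy splittings and equal weights:

\begin{verbatim}
import numpy as np

def multisplitting(splittings, E, b, x0, iters=100):
    # x^{k+1} = sum_l E_l M_l^{-1} (N_l x^k + b)
    x = x0.copy()
    for _ in range(iters):
        x = sum(El @ np.linalg.solve(Ml, Nl @ x + b)
                for (Ml, Nl), El in zip(splittings, E))
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M1 = np.diag(np.diag(A)); N1 = M1 - A   # Jacobi-type splitting
M2 = np.tril(A);          N2 = M2 - A   # Gauss-Seidel-type splitting
S = [(M1, N1), (M2, N2)]
E = [0.5 * np.eye(3), 0.5 * np.eye(3)]  # E_1 + E_2 = I
# Convergence condition: rho(sum_l E_l M_l^{-1} N_l) < 1
T = sum(El @ np.linalg.solve(Ml, Nl) for (Ml, Nl), El in zip(S, E))
assert np.max(np.abs(np.linalg.eigvals(T))) < 1
x = multisplitting(S, E, b, np.zeros(3))
\end{verbatim}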
%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%
\section{Related works}
A general framework for studying parallel multisplitting methods has been presented
by O'Leary and White in~\cite{o1985multi}. Convergence conditions are given for the
most general case. Many authors have improved multisplitting algorithms by proposing,
for example, an asynchronous version~\cite{bru1995parallel} and convergence
conditions for this case~\cite{bai1999block,bahi2000asynchronous}, or other
two-stage algorithms~\cite{frommer1992h,bru1995parallel}.
In \cite{huang1993krylov}, the authors proposed a parallel multisplitting
algorithm in which all the tasks except one are devoted to solving a sub-block of
the splitting and to sending their local solution to the first task, which is in
charge of combining the vectors at each iteration. These vectors form a Krylov
basis, over which the first task minimizes the error function to
accelerate the convergence; the other tasks then receive the updated solution until
convergence of the global system.
In \cite{couturier2008gremlins}, the authors proposed practical implementations
of multisplitting algorithms to solve large scale linear systems. Inner solvers
could be based on a scalar direct method with LU or on a scalar iterative one
with GMRES.
In~\cite{prace-multi}, the authors have proposed a parallel multisplitting
algorithm in which large blocks are solved using a GMRES solver. The authors have
performed large scale experiments up to 32,768 cores and they conclude that
an asynchronous multisplitting algorithm could be more efficient than traditional
solvers on exascale architectures with hundreds of thousands of cores.
%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%
\section{A two-stage method with a minimization}
Let $Ax=b$ be a given sparse and large linear system of $n$ equations
to solve in parallel on $L$ clusters, physically adjacent or geographically
distant, where $A\in\mathbb{R}^{n\times n}$ is a square and nonsingular
matrix, $x\in\mathbb{R}^{n}$ is the solution vector and $b\in\mathbb{R}^{n}$
is the right-hand side vector. The multisplitting of this linear system
is defined as follows:
\begin{equation}
\left\{
\begin{array}{lll}
A & = & [A_{1}, \ldots, A_{L}]\\
x & = & [X_{1}, \ldots, X_{L}]\\
b & = & [B_{1}, \ldots, B_{L}]
\end{array}
\right.
\end{equation}
where for $l\in\{1,\ldots,L\}$, $A_l$ is a rectangular block of size $n_l\times n$
and $X_l$ and $B_l$ are sub-vectors of size $n_l$, such that $\sum_ln_l=n$. In this
case, we use a row-by-row splitting without overlapping, in such a way that successive
rows of the sparse matrix $A$ and both vectors $x$ and $b$ are assigned to one cluster.
So, the multisplitting format of the linear system is defined as follows:
\begin{equation}
\forall l\in\{1,\ldots,L\} \mbox{,~} \displaystyle\sum_{i=1}^{l-1}A_{li}X_i + A_{ll}X_l + \displaystyle\sum_{i=l+1}^{L}A_{li}X_i = B_l,
\end{equation}
where $A_{li}$ is a block of size $n_l\times n_i$ of the rectangular matrix $A_l$, $X_i\neq X_l$
is a sub-vector of size $n_i$ of the solution vector $x$ and $\sum_{i<l}n_i+\sum_{i>l}n_i+n_l=n$,
for all $i\in\{1,\ldots,l-1,l+1,\ldots,L\}$.
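
For illustration, a minimal sketch (our own naming, not the authors' code) of this row-by-row splitting of a sparse matrix into the rectangular blocks $A_l$ and sub-blocks $A_{li}$ using SciPy:

\begin{verbatim}
import numpy as np
from scipy import sparse

def partition(A, b, sizes):
    # Row-by-row splitting without overlapping: returns the
    # rectangular blocks A_l (n_l x n), the sub-vectors B_l and
    # the sub-blocks A_{li} (n_l x n_i).
    assert sum(sizes) == A.shape[0]
    offs = np.cumsum([0] + list(sizes))
    A = sparse.csr_matrix(A)
    A_rows = [A[offs[l]:offs[l+1], :] for l in range(len(sizes))]
    B = [b[offs[l]:offs[l+1]] for l in range(len(sizes))]
    A_li = [[Al[:, offs[i]:offs[i+1]] for i in range(len(sizes))]
            for Al in A_rows]
    return A_rows, B, A_li

# Toy 6x6 system split among L = 2 "clusters" of 3 rows each.
A = sparse.diags([-1, 4, -1], [-1, 0, 1], shape=(6, 6), format="csr")
b = np.ones(6)
A_rows, B, A_li = partition(A, b, [3, 3])
\end{verbatim}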
The multisplitting method proceeds by iteration for solving the linear system in such a
way that each sub-system
\begin{equation}
\left\{
\begin{array}{l}
A_{ll}X_l = Y_l \mbox{,~such that}\\
Y_l = B_l - \displaystyle\sum_{i=1,i\neq l}^{L}A_{li}X_i,
\end{array}
\right.
\label{sec03:eq03}
\end{equation}
is solved independently by a cluster of processors, and communications are required to
update the right-hand side vectors $Y_l$, such that the vectors $X_i$ represent the data
dependencies between the clusters. In this work, we use the GMRES method as an inner
iteration method for solving the sub-systems~(\ref{sec03:eq03}). It is a well-known
iterative method which gives good performance for solving sparse linear systems in
parallel on a cluster of processors.
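
A rough sequential sketch of this two-stage scheme follows (in the actual method each sub-system is handled by its own cluster and the $X_i$ are exchanged by communications; all names below are ours), using SciPy's GMRES as the inner solver:

\begin{verbatim}
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import gmres

sizes = [3, 3]                         # n_l for each of the L clusters
offs = np.cumsum([0] + sizes)
L = len(sizes)
A = sparse.diags([-1, 4, -1], [-1, 0, 1], shape=(6, 6), format="csr")
b = np.ones(6)
A_li = [[A[offs[l]:offs[l+1], offs[i]:offs[i+1]] for i in range(L)]
        for l in range(L)]
B = [b[offs[l]:offs[l+1]] for l in range(L)]

X = [np.zeros(nl) for nl in sizes]
for _ in range(30):                    # outer multisplitting iterations
    # Update the right-hand sides Y_l (the communication step).
    Y = [B[l] - sum(A_li[l][i] @ X[i] for i in range(L) if i != l)
         for l in range(L)]
    # Inner iterations: solve A_ll X_l = Y_l with GMRES.
    X = [gmres(A_li[l][l], Y[l], x0=X[l])[0] for l in range(L)]
x = np.concatenate(X)
\end{verbatim}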
It should be noted that the convergence of the inner iterative solver for the different
linear sub-systems~(\ref{sec03:eq03}) does not necessarily imply the convergence of the
multisplitting method. It strongly depends on the properties of the sparse linear system
to be solved and on the computing environment~\cite{o1985multi,ref18}. Furthermore, the multisplitting
of the linear system among several clusters of processors increases the spectral radius
of the iteration matrix, thereby slowing the convergence. In this paper, we build on the
work presented in~\cite{huang1993krylov} to accelerate the convergence and improve the
scalability of the multisplitting methods.
In order to accelerate the convergence, we implement the outer iteration of the multisplitting
solver as a Krylov subspace method which minimizes some error function over a Krylov subspace~\cite{S96}.
The Krylov subspace used by our method is spanned by a basis composed of the solutions obtained by
solving the $L$ splittings~(\ref{sec03:eq03})
\begin{equation}
\{x^1,x^2,\ldots,x^s\},~s\ll n,
\end{equation}
where for $k\in\{1,\ldots,s\}$, $x^k=[X_1^k,\ldots,X_L^k]$ is a solution of the global linear
system.
The advantage of such a method is that the Krylov subspace neither needs to be spanned by an
orthogonal basis nor requires synchronizations between the different clusters to generate this basis.
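
As a minimal sketch of this minimization step (our own formulation as a small least-squares problem; the data below are random stand-ins for the multisplitting solutions), we find the combination $x=S\alpha$ of the $s$ stored solutions that minimizes $\|b-Ax\|_2$:

\begin{verbatim}
import numpy as np

def minimize_over_basis(A, b, S):
    # Columns of S (n x s, s << n) are the solutions x^1, ..., x^s.
    # Minimize ||b - A S alpha||_2 over alpha: a small least-squares
    # problem; no orthogonalization of the basis is needed.
    AS = A @ S
    alpha, *_ = np.linalg.lstsq(AS, b, rcond=None)
    return S @ alpha

rng = np.random.default_rng(0)
n, s = 50, 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
S = rng.standard_normal((n, s))   # stand-ins for x^1, ..., x^s
x = minimize_over_basis(A, b, S)
\end{verbatim}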
%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{plain}
\bibliography{biblio}

\end{document}