\title{A scalable multisplitting algorithm for solving large sparse linear systems}
\date{}
\begin{abstract}
In this paper we revisit the Krylov multisplitting algorithm presented in
\cite{huang1993krylov} which uses a scalar method to minimize the Krylov
iterations computed by a multisplitting algorithm. Our new algorithm is
based on a parallel multisplitting algorithm with few blocks of large size,
using a parallel GMRES method as the inner solver, and it remains efficient
when thousands of cores are used.
Traditional iterative solvers involve global synchronizations that
penalize their scalability. Two possible solutions consist either in using
asynchronous iterative methods~\cite{ref18} or in using multisplitting
algorithms. In this paper, we reconsider the use of a multisplitting
method. In contrast to traditional multisplitting methods, which suffer
from slow convergence, the use of a minimization process, as proposed
in~\cite{huang1993krylov}, can drastically improve the convergence.
are required to update the right-hand side vectors $Y_l$, such that
the vectors $X_i$ represent the data dependencies between the
clusters. In this work, we use the parallel GMRES method~\cite{ref34}
as an inner iteration method to solve the
sub-systems~(\ref{sec03:eq03}). It is a well-known iterative method
which offers good performance for solving sparse linear systems in
parallel on a cluster of processors.
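For illustration, the following sequential sketch (in Python with SciPy, an
assumption made purely for exposition; the block decomposition
\texttt{A\_blocks} and the helper name are hypothetical) shows the role of
this inner stage: cluster $l$ updates its right-hand side $Y_l$ with the
data dependencies received from the other clusters and then solves its
local sub-system with GMRES.
\begin{verbatim}
import scipy.sparse.linalg as spla

def inner_iteration(A_blocks, b_l, X, l):
    """Inner stage for cluster l (sequential stand-in): form
    Y_l = b_l - sum_{m != l} A_{lm} X_m, i.e. inject the data
    dependencies between clusters, then solve the local
    sub-system A_{ll} X_l = Y_l with GMRES."""
    Y_l = b_l.copy()
    for m, X_m in enumerate(X):
        if m != l:
            Y_l = Y_l - A_blocks[l][m] @ X_m
    X_l, info = spla.gmres(A_blocks[l][l], Y_l)
    return X_l
\end{verbatim}
In the parallel solver, every cluster executes this step simultaneously on
its own processors.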
It should be noted that the convergence of the inner iterative solver
influences the convergence of the whole multisplitting method. The
minimization step operates on the Krylov subspace spanned by the $s$
successive approximate solutions gathered in
\begin{equation}
S=[x^1,x^2,\ldots,x^s],
\label{sec03:eq04}
\end{equation}
where for $j\in\{1,\ldots,s\}$, $x^j=[X_1^j,\ldots,X_L^j]$ is the
approximate solution of the global linear system obtained at the $j$-th
outer iteration. The advantage of such a Krylov subspace is that we need
neither an orthogonal basis nor synchronizations between the different
clusters to generate this basis.
The multisplitting method is periodically restarted every $s$
iterations with a new initial guess $\tilde{x}=S\alpha$, where $\alpha$
solves the minimization problem
\begin{equation}
\text{minimize}~\|b-R\alpha\|_2,
\label{sec03:eq07}
\end{equation}
where $R=AS$. The minimizer is characterized by the normal equations,
i.e. the symmetric positive definite system~(\ref{sec03:eq06}), in
which $R^T$ denotes the transpose of the matrix $R$. Since $R$ and $b$
are split among $L$ clusters, this system is solved in parallel. Thus,
an iterative method would be more appropriate than a direct one to
solve this system. We use the parallel conjugate gradient method for
the normal equations, CGNR~\cite{S96,refCGNR}.
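Concretely, CGNR applies the conjugate gradient method to the normal
equations $R^TR\alpha=R^Tb$ without ever forming $R^TR$. A minimal
sequential sketch follows (Python/NumPy, again an assumption for
exposition only; in our solver this step is executed in parallel by the
$L$ clusters).
\begin{verbatim}
import numpy as np

def cgnr(R, b, tol=1e-10, max_iter=1000):
    """Solve min ||b - R @ alpha||_2 by applying the conjugate
    gradient method to the normal equations R^T R alpha = R^T b,
    without forming R^T R explicitly."""
    alpha = np.zeros(R.shape[1])
    r = b - R @ alpha        # residual of the least-squares system
    z = R.T @ r              # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(max_iter):
        w = R @ p
        a = zz / (w @ w)     # step length
        alpha += a * p
        r -= a * w
        z = R.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return alpha
\end{verbatim}
The restart vector of the multisplitting method is then obtained as
$\tilde{x}=S\alpha$ with $\alpha=\mathtt{cgnr}(R,b)$ and $R=AS$.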
\begin{algorithm}[!t]
\caption{A two-stage linear solver with GMRES as the inner iteration method}
\label{algo:01}
\end{algorithm}
The main key points of the multisplitting method to solve a large
sparse linear system are given in Algorithm~\ref{algo:01}. This
algorithm is based on a two-stage method with a minimization, using
the GMRES iterative method as an inner solver. It is executed in
parallel by each cluster of processors. The matrices and vectors with
the subscript $l$ represent the local data for cluster $l$, where
$l\in\{1,\ldots,L\}$. The two-stage solver uses two different parallel
iterative algorithms: the GMRES method to solve each splitting on a
cluster of processors, and the CGNR method, executed in parallel by
all clusters, to minimize the error function over the Krylov subspace
spanned by $S$. The algorithm requires two global synchronizations
between the $L$ clusters. The first one is performed at line~$12$ in
Algorithm~\ref{algo:01} to exchange the local values of the solution
vector. Both synchronizations are implemented with global
communication subroutines.
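To make the structure of Algorithm~\ref{algo:01} concrete, the sketch
below outlines the two-stage loop in sequential Python (an assumption for
exposition: the helpers \texttt{inner\_iteration} and \texttt{cgnr} are
the ones sketched above, the loop over clusters runs concurrently under
MPI in the real solver, and the convergence test is simplified).
\begin{verbatim}
import numpy as np

def two_stage_solver(A, b, A_blocks, b_blocks,
                     s=5, outer_max=100, tol=1e-8):
    """Sketch of the two-stage solver: GMRES on each block as
    inner solver, and every s outer iterations a CGNR
    minimization over the subspace spanned by the last s
    iterates, followed by a restart."""
    L = len(b_blocks)
    X = [np.zeros_like(b_l) for b_l in b_blocks]  # local parts X_l
    S_cols = []                                   # columns x^1..x^s
    for k in range(outer_max):
        # Inner stage: each cluster solves its local sub-system
        # (done in parallel in the real solver).
        X = [inner_iteration(A_blocks, b_blocks[l], X, l)
             for l in range(L)]
        S_cols.append(np.concatenate(X))          # x^k = [X_1..X_L]
        if len(S_cols) == s:
            # Minimization stage: alpha = argmin ||b - (AS)alpha||_2
            # solved with CGNR, then restart from x~ = S alpha.
            S = np.column_stack(S_cols)
            alpha = cgnr(A @ S, b)
            X = np.split(S @ alpha, L)   # assumes equal block sizes
            S_cols = []
        if np.linalg.norm(b - A @ np.concatenate(X)) < tol:
            break
    return np.concatenate(X)
\end{verbatim}
The two global synchronizations discussed above correspond, in this
sketch, to the gathering of the local parts into $x^k$ and to the
parallel CGNR minimization.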
\section{Experiments}

In order to illustrate the interest of our algorithm, we have compared it
with the GMRES method, which is a widely used method in many situations.
We have chosen to focus on a single problem that is very simple to
implement: a 3-dimensional Poisson problem
\begin{equation}
\left\{
  \begin{array}{ll}
    \nabla^2 u&=f \mbox{~in~} \Omega\\
    u &=0 \mbox{~on~} \Gamma=\partial \Omega
  \end{array}
\right.
\end{equation}

After discretization with a finite difference scheme, a seven-point
stencil is used. It is well known that the spectral radius of the
iteration matrices associated with such problems is very close to 1.
Moreover, the larger the number of discretization points, the closer
to 1 the spectral radius is. Hence, solving a linear system arising
from a 3D Poisson problem requires a high number of iterations. Using
a preconditioner, it is possible to reduce the number of iterations,
but preconditioners are not scalable when many cores are used.

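This behaviour can be verified numerically. The sketch below (Python with
SciPy, an assumption for illustration, consistent with the previous
sketches) assembles the seven-point stencil matrix by Kronecker products
and estimates the spectral radius of the corresponding Jacobi iteration
matrix, which approaches 1 as the grid is refined.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson3d(n):
    """Seven-point stencil matrix of the 3D Poisson problem on an
    n x n x n grid with homogeneous Dirichlet conditions."""
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    return (sp.kron(sp.kron(T, I), I)
            + sp.kron(sp.kron(I, T), I)
            + sp.kron(sp.kron(I, I), T)).tocsr()

for n in (5, 10, 20):
    A = poisson3d(n)
    # Jacobi iteration matrix M = I - D^{-1} A; its spectral
    # radius (analytically cos(pi/(n+1)) here) governs the
    # convergence speed of classical stationary iterations.
    D_inv = sp.diags(1.0 / A.diagonal())
    M = sp.identity(A.shape[0]) - D_inv @ A
    vals = spla.eigs(M, k=2, which='LM',
                     return_eigenvectors=False)
    rho = np.abs(vals).max()
    print(f"n={n:3d} unknowns={A.shape[0]:6d} rho={rho:.4f}")
\end{verbatim}
For $n=20$ per dimension the spectral radius already exceeds $0.98$,
which explains the high iteration counts observed on large problems.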
\section{Conclusion and perspectives}

In future work, we plan to consider other applications, and thus other
classes of matrices, and to carry out larger-scale experiments. We also
intend to study the use of asynchronous iterations and of overlapping
between the blocks of the multisplitting method.
%%%%%%%%%%%%%%%%%%%%%%%%