From: raphael couturier
Date: Tue, 29 Apr 2014 12:34:39 +0000 (+0200)
Subject: modif
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/Krylov_multi.git/commitdiff_plain/c83126c7666992d323080ca200c00cd3564e68ed?ds=inline;hp=fbbdb9946ee8448344a4b02a1bf225857c01df99

modif
---

diff --git a/krylov_multi.tex b/krylov_multi.tex
index b5dee17..9757efc 100644
--- a/krylov_multi.tex
+++ b/krylov_multi.tex
@@ -43,8 +43,9 @@ iterations computed by a multisplitting algorithm. Our new algorithm is based
 on a parallel multisplitting algorithm with few blocks of large size using a
 parallel GMRES method inside each block and on a parallel Krylov minimization in
 order to improve the convergence. Some large scale experiments with a 3D Poisson
-problem are presented. They show the obtained improvements compared to a
-classical GMRES both in terms of number of iterations and execution times.
+problem are presented with up to 8,192 cores. They show the improvements obtained
+compared to a classical GMRES, both in terms of the number of iterations
+and the execution times.
 \end{abstract}
 
 %%%%%%%%%%%%%%%%%%%%%%%%
@@ -78,7 +79,7 @@ drastically improve the convergence.
 
 %%%%%%%%%%%%%%%%%%%%%%%%
 %%%%%%%%%%%%%%%%%%%%%%%%
-\section{Related works}
+\section{Related works and presentation of the multisplitting method}
 A general framework for studying parallel multisplitting has been presented
 in~\cite{o1985multi} by O'Leary and White. Convergence conditions are given for
 the most general case. Many authors improved multisplitting algorithms by proposing
@@ -95,9 +96,9 @@ increase the convergence, then the other tasks receive the updated solution unti
 convergence of the global system.
 
 In~\cite{couturier2008gremlins}, the authors proposed practical implementations
-of multisplitting algorithms that take benefit from multisplitting algorithms {\bf ???} to
-solve large scale linear systems. Inner solvers could be based on scalar direct
-method with the LU method or scalar iterative one with GMRES.
+of multisplitting algorithms to solve large scale linear systems. Inner solvers
+could be based on a scalar direct method such as LU or on a scalar iterative
+method such as GMRES.
 
 In~\cite{prace-multi}, the authors have proposed a parallel multisplitting
 algorithm in which large blocks are solved using a GMRES solver. The authors have
@@ -105,6 +106,12 @@ performed large scale experiments up-to 32,768 cores and they conclude that
 asynchronous multisplitting algorithm could be more efficient than traditional
 solvers on an exascale architecture with hundreds of thousands of cores.
+
+Compared to these works, we propose in this paper a practical multisplitting
+method which is based on parallel iterative blocks and which gives better results
+than GMRES for the 3D Poisson problem we considered.
+\\
+
 The key idea of a multisplitting method to solve a large system of linear equations $Ax=b$ is defined as follows. The first step consists in partitioning the matrix $A$ in $L$ several ways
 \begin{equation}
 A = M_\ell - N_\ell,
 \end{equation}
@@ -168,7 +175,7 @@ Y_\ell = B_\ell - \displaystyle\sum_{\substack{m=1\\m\neq \ell}}^{L}A_{\ell m}X_m
 \end{equation}
 is solved independently by a {\it cluster of processors} and communications are required to update the right-hand side vectors $Y_\ell$, such that the vectors $X_m$ represent the data dependencies between the clusters. In this work, we use the parallel restarted GMRES method~\cite{ref34} as an inner iteration method to solve sub-systems~(\ref{sec03:eq03}).
 GMRES is one of the most used Krylov iterative methods to solve sparse linear systems.
 %In practice, GMRES is used with a preconditioner to improve its convergence. In this work, we used a preconditioning matrix equivalent to the main diagonal of sparse sub-matrix $A_{ll}$. This preconditioner is straightforward to implement in parallel and gives good performances in many situations.
-It should be noted that the convergence of the inner iterative solver for the different sub-systems~(\ref{sec03:eq03}) does not necessarily involve the convergence of the multisplitting method. It strongly depends on the properties of the global sparse linear system to be solved and the computing environment~\cite{o1985multi,ref18}. Furthermore, the multisplitting of the linear system among several clusters of processors increases the spectral radius of the iteration matrix, thereby slowing the convergence. In this paper, we based on the work presented in~\cite{huang1993krylov} to increase the convergence and improve the scalability of the multisplitting methods.
+It should be noted that the convergence of the inner iterative solver for the different sub-systems~(\ref{sec03:eq03}) does not necessarily imply the convergence of the multisplitting method. It strongly depends on the properties of the global sparse linear system to be solved~\cite{o1985multi,ref18}. Furthermore, the multisplitting of the linear system among several clusters of processors increases the spectral radius of the iteration matrix, thereby slowing the convergence. In this paper, we base our approach on the work presented in~\cite{huang1993krylov} to increase the convergence and improve the scalability of multisplitting methods.
 In order to accelerate the convergence, we implemented the outer iteration of the multisplitting solver as a Krylov iterative method which minimizes some error function over a Krylov subspace~\cite{S96}. The Krylov subspace that we used is spanned by a basis composed of successive solutions issued from solving the $L$ splittings~(\ref{sec03:eq03})
 \begin{equation}
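
As a companion to the hunks above, here is a minimal sequential sketch of the Krylov-accelerated multisplitting idea the patch describes: each sub-system A_{\ell\ell} X_\ell = Y_\ell, with Y_\ell = B_\ell - \sum_{m \neq \ell} A_{\ell m} X_m, is solved by an inner restarted GMRES, and the outer step minimizes the residual over the span of the successive solutions. This is not the paper's parallel MPI implementation; the function name multisplitting_krylov, the two-block splitting, the window size s, the dense least-squares minimization, the 1D Poisson test matrix and the use of scipy.sparse.linalg.gmres as inner solver are all illustrative assumptions.

    # Minimal sketch of a Krylov-accelerated multisplitting iteration.
    # Illustrative only: sequential, two blocks, dense least-squares
    # minimization; the paper's solver is parallel (one cluster per block).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def multisplitting_krylov(A, b, n_blocks=2, outer_iters=20, s=5):
        """Block multisplitting with inner GMRES and an outer residual
        minimization over the last `s` iterates (toy illustration)."""
        n = A.shape[0]
        cuts = np.linspace(0, n, n_blocks + 1, dtype=int)
        x = np.zeros(n)
        basis = []              # successive solutions spanning the subspace
        for _ in range(outer_iters):
            x_new = x.copy()
            for l in range(n_blocks):
                lo, hi = cuts[l], cuts[l + 1]
                A_ll = A[lo:hi, lo:hi]
                # Y_l = B_l - sum_{m != l} A_{lm} X_m  (data dependencies)
                y_l = b[lo:hi] - (A[lo:hi, :] @ x - A_ll @ x[lo:hi])
                # Inner solver: restarted GMRES on the diagonal block
                x_new[lo:hi], _ = spla.gmres(A_ll, y_l, x0=x[lo:hi], restart=30)
            basis = (basis + [x_new])[-s:]
            # Outer step: alpha = argmin ||b - A S alpha||_2, then x = S alpha
            S = np.column_stack(basis)
            AS = np.asarray(A @ S)
            alpha, *_ = np.linalg.lstsq(AS, b, rcond=None)
            x = S @ alpha
            if np.linalg.norm(b - A @ x) <= 1e-8 * np.linalg.norm(b):
                break
        return x

    # Small 1D Poisson system as a stand-in for the paper's 3D Poisson problem
    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = multisplitting_krylov(A, b)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))

In the setting of the paper, each diagonal block is handled by a cluster of processors running a parallel restarted GMRES and only the dependency vectors X_m are exchanged between clusters; the sketch keeps everything sequential and in memory simply to expose the structure of the outer minimization step.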