X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/Krylov_multi.git/blobdiff_plain/fedf59562159b08c86b61f27de746deb1fd42f98..70c44690f5342df3f1f809bf2bf5565a1e50f6ff:/krylov_multi.tex?ds=inline

diff --git a/krylov_multi.tex b/krylov_multi.tex
index 5cf4056..0fd3b79 100644
--- a/krylov_multi.tex
+++ b/krylov_multi.tex
@@ -15,18 +15,58 @@
 \begin{abstract}
-In this paper we revist the krylov multisplitting algorithm presented in [ref]
-which uses a scalar method to minimize the krylov iterations computed by a
-multisplitting algorithm. Our new algorithm is simply a parallel multisplitting
-algorithm with few blocks of large size and a parallel krylov minimization is
-used to improve the convergence. Some large scale experiments with a 3D Poisson
-problem are presented. They show the obtained improvements compared to a
+In this paper we revisit the Krylov multisplitting algorithm presented in
+\cite{huang1993krylov}, which uses a scalar method to minimize the Krylov
+iterations computed by a multisplitting algorithm. Our new algorithm is based
+on a parallel multisplitting algorithm with few blocks of large size, using a
+parallel GMRES method inside each block, and on a parallel Krylov minimization
+to improve the convergence. Some large scale experiments with a 3D Poisson
+problem are presented. They show the improvements obtained compared to a
 classical GMRES both in terms of number of iterations and execution times.
 \end{abstract}
 
 \section{Introduction}
 
-Iterative methods used to solve large sparse linear systems of the form $Ax=b$
-because they are easier to parallelize than direct ones.
+Iterative methods are used to solve large sparse linear systems of equations
+of the form $Ax=b$ because they are easier to parallelize than direct ones.
+Many iterative methods have been proposed and adapted by different authors.
+When solving large linear systems with many cores, iterative methods often
+suffer from scalability problems, because of the collective communications
+they need to perform matrix-vector products and reduction operations.
+Preconditioners can be used to speed up the convergence of iterative solvers.
+However, most good preconditioners are not scalable when thousands of cores
+are used.
+
+
+To be completed...
+We cannot cover everything here...
+
+\section{Related works}
+
+
+A general framework for studying parallel multisplitting methods was presented
+by O'Leary and White in \cite{o1985multi}, where convergence conditions are
+given for the most general case. Many authors have improved multisplitting
+algorithms, for example by proposing an asynchronous version
+\cite{bru1995parallel} together with convergence conditions for that case
+\cite{bai1999block,bahi2000asynchronous}, or other two-stage
+algorithms~\cite{frommer1992h,bru1995parallel}. The generic multisplitting
+iteration is recalled at the end of this section.
+
+In \cite{huang1993krylov}, the authors proposed a parallel multisplitting
+algorithm in which all the tasks except one are devoted to solving a sub-block
+of the splitting and to sending their local solutions to the first task, which
+is in charge of combining the vectors at each iteration. These vectors form a
+Krylov basis, over which the first task minimizes the error function to speed
+up the convergence; the other tasks then receive the updated solution until
+the global system converges. This minimization step is sketched below.
+
+
+
+In \cite{couturier2008gremlins}, the authors proposed practical
+implementations of multisplitting algorithms to solve large scale linear
+systems. Inner solvers could be based on a scalar direct method such as LU or
+on a scalar iterative one such as GMRES.
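+
+To fix notation, we briefly recall the generic multisplitting iteration of
+\cite{o1985multi} (written here in our own notation, which slightly simplifies
+that of the original paper). Given splittings $A=M_\ell-N_\ell$,
+$\ell=1,\dots,L$, with each $M_\ell$ nonsingular, and nonnegative diagonal
+weighting matrices $E_\ell$ such that $\sum_{\ell=1}^{L}E_\ell=I$, the
+iteration reads
+\[
+x^{k+1}=\sum_{\ell=1}^{L}E_\ell M_\ell^{-1}\left(N_\ell x^k+b\right).
+\]
+With non-overlapping blocks and each $E_\ell$ chosen as the indicator of its
+block, this reduces to the block Jacobi method.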
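+
+The minimization step of \cite{huang1993krylov} can be sketched as follows (a
+simplified formulation in our notation). If $x^1,\dots,x^s$ are the local
+solutions gathered by the first task at a given iteration and
+$S=[x^1,\dots,x^s]$, the combined iterate is $x=S\alpha$ with
+\[
+\alpha=\arg\min_\alpha\|b-AS\alpha\|_2,
+\]
+a least-squares problem of small size $s$, which is cheap compared to the
+sub-block solves.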
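+
+For illustration, a minimal sequential sketch of such a two-stage scheme with
+GMRES as the inner solver is given below. This is only a structural sketch in
+Python/SciPy with our own naming, not the implementation of
+\cite{couturier2008gremlins}: the blocks are non-overlapping and are solved
+one after the other instead of in parallel.
+\begin{verbatim}
+import numpy as np
+import scipy.sparse as sp
+from scipy.sparse.linalg import gmres
+
+def multisplitting(A, b, nblocks, outer_iters=100, tol=1e-8):
+    n = A.shape[0]
+    cut = np.linspace(0, n, nblocks + 1, dtype=int)  # block boundaries
+    x = np.zeros(n)
+    for _ in range(outer_iters):
+        x_new = x.copy()
+        for l in range(nblocks):
+            lo, hi = cut[l], cut[l + 1]
+            # local right-hand side: b_l minus the couplings with the
+            # other blocks, evaluated at the current global iterate
+            rhs = b[lo:hi] - A[lo:hi, :] @ x + A[lo:hi, lo:hi] @ x[lo:hi]
+            # inner solve on the diagonal block (LU would also do)
+            x_new[lo:hi], _ = gmres(A[lo:hi, lo:hi], rhs, x0=x[lo:hi])
+        x = x_new
+        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
+            break
+    return x
+
+# usage: 1D Poisson matrix split into two blocks
+n = 1000
+A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
+x = multisplitting(A, np.ones(n), nblocks=2)
+\end{verbatim}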
+
+\bibliographystyle{plain}
+\bibliography{biblio}
 \end{document}