X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/Krylov_multi.git/blobdiff_plain/fedf59562159b08c86b61f27de746deb1fd42f98..1a82aaffa07c2cd0cd044d1454d233171075e6f2:/krylov_multi.tex?ds=sidebyside

diff --git a/krylov_multi.tex b/krylov_multi.tex
index 5cf4056..3295c82 100644
--- a/krylov_multi.tex
+++ b/krylov_multi.tex
@@ -14,19 +14,166 @@
 \maketitle
 
+%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%
+
+
 \begin{abstract}
-In this paper we revist the krylov multisplitting algorithm presented in [ref]
-which uses a scalar method to minimize the krylov iterations computed by a
-multisplitting algorithm. Our new algorithm is simply a parallel multisplitting
-algorithm with few blocks of large size and a parallel krylov minimization is
-used to improve the convergence. Some large scale experiments with a 3D Poisson
-problem are presented. They show the obtained improvements compared to a
+In this paper we revisit the Krylov multisplitting algorithm presented in
+\cite{huang1993krylov}, which uses a scalar method to minimize the Krylov
+iterations computed by a multisplitting algorithm. Our new algorithm is based on
+a parallel multisplitting algorithm with few blocks of large size, using a
+parallel GMRES method inside each block, and on a parallel Krylov minimization
+to improve the convergence. Some large scale experiments with a 3D Poisson
+problem are presented. They show the obtained improvements compared to a
 classical GMRES both in terms of number of iterations and execution times.
 \end{abstract}
+
+%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%
+
+
 \section{Introduction}
-Iterative methods used to solve large sparse linear systems of the form $Ax=b$
-because they are easier to parallelize than direct ones.
+Iterative methods are used to solve large sparse linear systems of equations of
+the form $Ax=b$ because they are easier to parallelize than direct ones.
+Numerous iterative methods have been proposed and adapted by many researchers.
+When solving large linear systems with many cores, iterative methods often
+suffer from scalability problems. This is due to their need for collective
+communications to perform matrix-vector products and reduction operations.
+Preconditioners can be used to improve the convergence of iterative solvers.
+However, most of the good preconditioners are not scalable when thousands of
+cores are used.
+
+
+To be completed...
+We cannot cover everything here...\\
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%
+%% BEGIN
+%%%%%%%%%%%%%%%%%%%%%%%
+The key idea of the multisplitting method for solving a large system of linear
+equations $Ax=b$ consists in partitioning the matrix $A$ in $L$ different ways
+\begin{equation}
+A = M_l - N_l,~l\in\{1,\ldots,L\},
+\label{eq01}
+\end{equation}
+where the $M_l$ are nonsingular matrices. Then the linear system is solved by an
+iteration based on the multisplittings as follows
+\begin{equation}
+x^{k+1}=\displaystyle\sum^L_{l=1} E_l M^{-1}_l (N_l x^k + b),~k=1,2,3,\ldots
+\label{eq02}
+\end{equation}
+where the $E_l$ are non-negative diagonal weighting matrices such that
+$\sum^L_{l=1}E_l=I$ ($I$ being the identity matrix). The convergence of such a
+method therefore depends on the condition
+\begin{equation}
+\rho(\displaystyle\sum^L_{l=1}E_l M^{-1}_l N_l)<1.
+\label{eq03}
+\end{equation}
+
+The advantage of the multisplitting method is that at each iteration $k$ there
+are $L$ different linear systems
+\begin{equation}
+y_l^k=M^{-1}_l N_l x_l^{k-1} + M^{-1}_l b,~l\in\{1,\ldots,L\},
+\label{eq04}
+\end{equation}
+to be solved independently by a direct or an iterative method, where $y_l^k$ is
+the solution of the local system. A multisplitting method using an iterative
+method to solve the $L$ linear systems is called an inner-outer iterative method
+or a two-stage method. The solution of the global linear system at iteration $k$
+is then computed as follows
+\begin{equation}
+x^k = \displaystyle\sum^L_{l=1} E_l y_l^k.
+\label{eq05}
+\end{equation}
+In the case where the diagonal weighting matrices $E_l$ have only zero and one
+entries (i.e. the $y_l^k$ are disjoint vectors), the multisplitting method is
+non-overlapping and corresponds to the block Jacobi method.
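+
+To make this concrete, the following minimal sketch (in Python with NumPy; the
+function name, the toy $1$D Poisson matrix and the two-block partition are
+purely illustrative and not part of our implementation) implements the
+non-overlapping two-stage iteration of
+equations~(\ref{eq02})--(\ref{eq05}), with a dense direct solve standing in
+for the parallel GMRES used inside each block:
+\begin{verbatim}
+import numpy as np
+
+def block_jacobi(A, b, blocks, tol=1e-8, max_it=200):
+    """Non-overlapping multisplitting: with 0/1 weights E_l the
+    weighted combination of the y_l reduces to concatenating the
+    disjoint sub-vectors, i.e. the block Jacobi method."""
+    x = np.zeros_like(b)
+    for k in range(max_it):
+        x_new = np.empty_like(x)
+        for rows in blocks:
+            others = np.setdiff1d(np.arange(len(b)), rows)
+            # local right-hand side: B_l minus the coupling terms
+            y_l = b[rows] - A[np.ix_(rows, others)] @ x[others]
+            # inner solve of the local system (direct solve here,
+            # standing in for the inner GMRES of each block)
+            x_new[rows] = np.linalg.solve(A[np.ix_(rows, rows)], y_l)
+        if np.linalg.norm(b - A @ x_new) <= tol * np.linalg.norm(b):
+            return x_new, k + 1
+        x = x_new
+    return x, max_it
+
+# toy 1D Poisson matrix split into L=2 row blocks
+n = 8
+A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
+b = np.ones(n)
+x, it = block_jacobi(A, b, [np.arange(0, n // 2), np.arange(n // 2, n)])
+print(it, np.linalg.norm(b - A @ x))
+\end{verbatim}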
+%%%%%%%%%%%%%%%%%%%%%%%
+%% END
+%%%%%%%%%%%%%%%%%%%%%%%
+
+\section{Related works}
+
+
+A general framework for studying parallel multisplitting methods has been
+presented by O'Leary and White in \cite{o1985multi}, where convergence
+conditions are given for the most general case. Many authors have improved
+multisplitting algorithms by proposing, for example, an asynchronous version
+\cite{bru1995parallel} together with convergence conditions
+\cite{bai1999block,bahi2000asynchronous} for this case, or other two-stage
+algorithms~\cite{frommer1992h,bru1995parallel}.
+
+In \cite{huang1993krylov}, the authors proposed a parallel multisplitting
+algorithm in which all the tasks except one are devoted to solving a sub-block
+of the splitting and to sending their local solutions to the first task, which
+is in charge of combining the vectors at each iteration. These vectors form a
+Krylov basis over which the first task minimizes the error function to
+accelerate the convergence; the other tasks then receive the updated solution
+until convergence of the global system.
+
+
+
+In \cite{couturier2008gremlins}, the authors proposed practical implementations
+of multisplitting algorithms to solve large scale linear systems. The inner
+solvers can be based on a scalar direct method such as LU or on a scalar
+iterative one such as GMRES.
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+\section{A two-stage method with a minimization}
+Let $Ax=b$ be a given large and sparse linear system of $n$ equations to solve
+in parallel on $L$ clusters, physically adjacent or geographically distant,
+where $A\in\mathbb{R}^{n\times n}$ is a square and nonsingular matrix,
+$x\in\mathbb{R}^{n}$ is the solution vector and $b\in\mathbb{R}^{n}$ is the
+right-hand side vector. The multisplitting of this linear system is defined as
+follows:
+\begin{equation}
+\left\{
+\begin{array}{lll}
+A & = & [A_{1}, \ldots, A_{L}]\\
+x & = & [X_{1}, \ldots, X_{L}]\\
+b & = & [B_{1}, \ldots, B_{L}]
+\end{array}
+\right.
+\label{sec03:eq01}
+\end{equation}
+where for all $l\in\{1,\ldots,L\}$, $A_l$ is a rectangular block of size
+$n_l\times n$ and $X_l$ and $B_l$ are sub-vectors of size $n_l$, such that
+$\sum_ln_l=n$. In this case, we use a row-by-row splitting without overlapping,
+in such a way that successive rows of the sparse matrix $A$ and both vectors
+$x$ and $b$ are assigned to one cluster. So, the multisplitting format of the
+linear system is defined as follows:
+\begin{equation}
+\forall l\in\{1,\ldots,L\} \mbox{,~} \displaystyle\sum_{i=1}^{l-1}A_{li}X_i + A_{ll}X_l + \displaystyle\sum_{i=l+1}^{L}A_{li}X_i = B_l,
+\label{sec03:eq02}
+\end{equation}
+where $A_{li}$ is a block of size $n_l\times n_i$ of the rectangular matrix
+$A_l$, $X_i$ ($i\neq l$) is a sub-vector of size $n_i$ of the solution vector
+$x$, and $\sum_{i<l}n_i+n_l+\sum_{i>l}n_i=n$. Therefore, each cluster $l$ is in
+charge of solving the following sparse sub-linear system:
+\begin{equation}
+\left\{
+\begin{array}{l}
+A_{ll}X_l = Y_l \mbox{,~such that}\\
+Y_l = B_l - \displaystyle\sum_{i=1,i\neq l}^{L}A_{li}X_i,
+\end{array}
+\right.
+\label{sec03:eq03}
+\end{equation}
+where the sub-vectors $X_i$ define the data dependencies between cluster $l$
+and the other clusters.
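+
+As a purely illustrative sketch of these data dependencies (in Python with
+SciPy; the function name and the toy tridiagonal matrix are ours and not part
+of our implementation), the following code determines, for each cluster $l$,
+which clusters own the sub-vectors $X_i$ appearing in its right-hand side
+$Y_l$:
+\begin{verbatim}
+import numpy as np
+import scipy.sparse as sp
+
+def dependencies(A, blocks, l):
+    """Clusters i != l whose sub-vector X_i is needed by cluster l,
+    i.e. those hit by a nonzero off-diagonal block A_li."""
+    cols = A[blocks[l], :].nonzero()[1]  # columns touched by band A_l
+    return [i for i, other in enumerate(blocks)
+            if i != l and np.intersect1d(cols, other).size > 0]
+
+# 3-point Laplacian on 12 unknowns, split among L=3 clusters
+n, L = 12, 3
+A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
+blocks = np.array_split(np.arange(n), L)
+for l in range(L):
+    print("cluster", l, "depends on", dependencies(A, blocks, l))
+# a tridiagonal matrix couples each cluster only with its neighbours:
+# cluster 0 -> [1], cluster 1 -> [0, 2], cluster 2 -> [1]
+\end{verbatim}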
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%
+
+\bibliographystyle{plain}
+\bibliography{biblio}
 \end{document}