From: lilia
Date: Tue, 7 Jan 2014 09:25:33 +0000 (+0100)
Subject: 08-01-2013
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/Krylov_multi.git/commitdiff_plain/1a82aaffa07c2cd0cd044d1454d233171075e6f2

08-01-2013
---

diff --git a/krylov_multi.tex b/krylov_multi.tex
index d81125b..3295c82 100644
--- a/krylov_multi.tex
+++ b/krylov_multi.tex
@@ -62,12 +62,13 @@ $Ax=b$ consists in partitioning the matrix $A$ in $L$ several ways
 A = M_l - N_l,~l\in\{1,\ldots,L\},
 \label{eq01}
 \end{equation}
-where $M_l$ is a nonsingular matrix, and then solving the linear system by the iterative method
+where $M_l$ are nonsingular matrices. Then the linear system is solved by iteration based
+on the multisplittings as follows
 \begin{equation}
 x^{k+1}=\displaystyle\sum^L_{l=1} E_l M^{-1}_l (N_l x^k + b),~k=1,2,3,\ldots
 \label{eq02}
 \end{equation}
-where $E_l$ is a non-negative and diagonal weighting matrix such that $\sum^L_{l=1}E_l=I$ ($I$ is the identity matrix).
+where $E_l$ are non-negative and diagonal weighting matrices such that $\sum^L_{l=1}E_l=I$ ($I$ is an identity matrix).
 Thus the convergence of such a method is dependent on the condition
 \begin{equation}
 \rho(\displaystyle\sum^L_{l=1}E_l M^{-1}_l N_l)<1.
@@ -77,18 +78,18 @@ Thus the convergence of such a method is dependent on the condition
 
 The advantage of the multisplitting method is that at each iteration $k$ there are $L$ different linear systems
 \begin{equation}
-y_l=M^{-1}_l N_l x_l^{k-1} + M^{-1}_l b,~l\in\{1,\ldots,L\},
+y_l^k=M^{-1}_l N_l x_l^{k-1} + M^{-1}_l b,~l\in\{1,\ldots,L\},
 \label{eq04}
 \end{equation}
-to be solved independently by a direct or an iterative method, where $y_l$ is the solution of the local system.
+to be solved independently by a direct or an iterative method, where $y_l^k$ is the solution of the local system.
 A multisplitting method using an iterative method for solving the $L$ linear systems is called an inner-outer
 iterative method or a two-stage method. The solution of the global linear system at the iteration $k$ is computed
 as follows
 \begin{equation}
-x^k = \displaystyle\sum^L_{l=1} E_l y_l,
+x^k = \displaystyle\sum^L_{l=1} E_l y_l^k,
 \label{eq05}
 \end{equation}
-In the case where the diagonal weighting matrices $E_l$ have only zero and one factors (i.e. $y_l$ are disjoint vectors),
+In the case where the diagonal weighting matrices $E_l$ have only zero and one factors (i.e. $y_l^k$ are disjoint vectors),
 the multisplitting method is non-overlapping and corresponds to the block Jacobi method.
 %%%%%%%%%%%%%%%%%%%%%%%
 %% END
@@ -146,7 +147,7 @@ b & = & [B_{1}, \ldots, B_{L}]
 where for all $l\in\{1,\ldots,L\}$ $A_l$ is a rectangular block of size $n_l\times n$ and $X_l$ and $B_l$ are sub-vectors of
 size $n_l$, such that $\sum_ln_l=n$. In this case, we use a row-by-row splitting without overlapping in such a way that successive
-rows of the sparse matrix $A$ and both vectors $x$ and $b$ are assigned to a cluster.
+rows of the sparse matrix $A$ and both vectors $x$ and $b$ are assigned to one cluster.
 So, the multisplitting format of the linear system is defined as follows:
 \begin{equation}
 \forall l\in\{1,\ldots,L\} \mbox{,~} \displaystyle\sum_{i=1}^{l-1}A_{li}X_i + A_{ll}X_l + \displaystyle\sum_{i=l+1}^{L}A_{li}X_i = B_l,
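
Note (illustration only, not part of the patch): the sketch below shows the non-overlapping multisplitting iteration described in the modified text, i.e. equations (2), (4) and (5) in the block Jacobi special case where the weighting matrices $E_l$ have only zero and one entries and the rows of $A$ form $L$ disjoint bands. The choice $M_l = A_{ll}$ (the diagonal block), the contiguous row partition, the dense NumPy solver and all names used here are assumptions of this example, not taken from krylov_multi.tex.

# Minimal sketch of the non-overlapping multisplitting (block Jacobi) iteration:
#   y_l^k = M_l^{-1}(N_l x^{k-1} + b),   x^k = sum_l E_l y_l^k   with 0/1 matrices E_l.
# Assumptions: M_l is the diagonal block A_ll, the rows are split into L contiguous
# bands, and each local system is solved directly with a dense solver.
import numpy as np

def block_jacobi_multisplitting(A, b, L, max_iter=200, tol=1e-8):
    n = A.shape[0]
    bounds = np.linspace(0, n, L + 1, dtype=int)           # L disjoint row bands
    blocks = [slice(bounds[l], bounds[l + 1]) for l in range(L)]
    x = np.zeros(n)
    for k in range(max_iter):
        x_new = np.empty_like(x)
        for blk in blocks:                                  # the L local solves are independent
            A_ll = A[blk, blk]                              # M_l: diagonal block of A
            rhs = b[blk] - A[blk, :] @ x + A_ll @ x[blk]    # B_l - sum_{i != l} A_li X_i
            x_new[blk] = np.linalg.solve(A_ll, rhs)         # y_l^k restricted to block l
        x = x_new                                           # x^k = sum_l E_l y_l^k (E_l are 0/1)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x
    return x

if __name__ == "__main__":
    # Small diagonally dominant example so that condition (3), rho(sum E_l M_l^{-1} N_l) < 1, holds.
    rng = np.random.default_rng(0)
    n = 40
    A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)
    b = rng.standard_normal(n)
    x = block_jacobi_multisplitting(A, b, L=4)
    print(np.linalg.norm(A @ x - b))

In a two-stage variant, the direct solve np.linalg.solve would be replaced by an inner iterative method (e.g. a few GMRES steps) on each block; the outer loop above is unchanged.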