From: lilia
Date: Wed, 20 Aug 2014 10:39:59 +0000 (+0200)
Subject: v0-20-08-2014
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/commitdiff_plain/4c9039f1d8f4a9b3099479e6f78f45f497dc0e59?ds=sidebyside

v0-20-08-2014
---

diff --git a/paper.tex b/paper.tex
index 89487b6..60c7878 100644
--- a/paper.tex
+++ b/paper.tex
@@ -538,6 +538,11 @@ Iterative Krylov methods; sparse linear systems; error minimization; PETSc; %à
 % no \IEEEPARstart
 % You must have at least 2 lines in the paragraph with the drop letter
 % (should never be an issue)
+Iterative methods have become more attractive than direct ones for solving large sparse linear systems. They are more effective in a parallel context and require less memory and fewer arithmetic operations than direct methods.
+
+%researchers have developed different methods, for example stationary and non-stationary (Krylov) iterative methods
+%convergence problems and difficulty in scaling up
+
 
 %%%*********************************************************
 %%%*********************************************************
@@ -557,7 +562,7 @@
 \section{A Krylov two-stage algorithm}
 We propose a two-stage algorithm to solve large sparse linear systems of the form $Ax=b$, where $A\in\mathbb{R}^{n\times n}$ is a sparse and square nonsingular matrix, $x\in\mathbb{R}^n$ is the solution vector and $b\in\mathbb{R}^n$ is the right-hand side. The algorithm is implemented as an inner-outer iteration solver based on iterative Krylov methods. The key points of our solver are given in Algorithm~\ref{algo:01}.
-The outer iteration is implemented as an iterative Krylov method which minimizes some error function over a Krylov sub-space~\cite{saad96}. At every iteration, the sparse linear system $Ax=b$ is solved iteratively with an iterative method as GMRES method~\cite{saad86} and the Krylov sub-space that we used is spanned by a basis $S$ composed of successive solutions issued from the inner iteration
+In order to accelerate the convergence, the outer iteration is implemented as an iterative Krylov method which minimizes some error function over a Krylov sub-space~\cite{saad96}. At every iteration, the sparse linear system $Ax=b$ is solved iteratively with an iterative method such as GMRES~\cite{saad86}, and the Krylov sub-space we use is spanned by a basis $S$ composed of the successive solutions produced by the inner iteration
 \begin{equation}
 S = \{x^1, x^2, \ldots, x^s\} \text{,~} s\leq n.
 \end{equation}
@@ -570,7 +575,7 @@ which is associated with the least-squares problem
 \begin{equation}
 \underset{\alpha\in\mathbb{R}^{s}}{min}\|b-R\alpha\|_2
 \label{eq:01}
 \end{equation}
-such that $R=AS$ is a dense rectangular matrix in $\mathbb{R}^{n\times s}$, $s\ll n$, and $R^T$ denotes the transpose of matrix $R$. We use an iterative method to solve the least-squares problem~(\ref{eq:01}) as CGLS~\cite{hestenes52} or LSQR~\cite{paige82} which is more appropriate that a direct method in the parallel context.
+such that $R=AS$ is a dense rectangular matrix in $\mathbb{R}^{n\times s}$, $s\ll n$, and $R^T$ denotes the transpose of matrix $R$. We use an iterative method such as CGLS~\cite{hestenes52} or LSQR~\cite{paige82} to solve the least-squares problem~(\ref{eq:01}), which is more appropriate than a direct method in a parallel context.
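The least-squares correction described above can be sketched in a few lines of NumPy/SciPy (a hypothetical illustration only, not the paper's PETSc implementation; the matrix, the right-hand side, and the basis below are made-up placeholders standing in for the inner solutions):

```python
# Hypothetical illustration of the least-squares step min ||b - R*alpha||_2
# with R = A*S; A, b, and S here are made-up placeholders.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, s = 30, 4                                  # problem size and basis size, s << n
A = sp.eye(n, format="csr") * 3 \
    + sp.random(n, n, density=0.1, random_state=1, format="csr")
b = np.ones(n)
# S stands in for the basis of successive inner solutions x^1, ..., x^s
S = np.random.default_rng(1).standard_normal((n, s))
R = A @ S                                     # dense rectangular matrix R = AS (n x s)
alpha = spla.lsqr(R, b)[0]                    # iterative solve of min ||b - R alpha||_2
x = S @ alpha                                 # minimizer over span(S)
```

LSQR only needs products with $R$ and $R^T$, which is what makes it preferable to a direct factorization when $R$ is distributed across processors.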
 \begin{algorithm}[t]
 \caption{A Krylov two-stage algorithm}
@@ -580,12 +585,12 @@
 \State Set the initial guess $x^0$
 \For {$k=1,2,3,\ldots$ until convergence}
 \State Solve iteratively $Ax^k=b$
- \State Add vector $x^k$ to Krylov basis $S$
+ \State Add vector $x^k$ to Krylov sub-space basis $S$
 \If {$k$ mod $s=0$ {\bf and} not convergence}
 \State Compute dense matrix $R=AS$
 \State Solve least-squares problem $\underset{\alpha\in\mathbb{R}^{s}}{min}\|b-R\alpha\|_2$
 \State Compute minimizer $x^k=S\alpha$
- \State Reinitialize Krylov basis $S$
+ \State Reinitialize Krylov sub-space basis $S$
 \EndIf
 \EndFor
 \end{algorithmic}
@@ -653,7 +658,7 @@
 \bibitem{saad86} Y.~Saad and M.~H.~Schultz, \emph{GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems}, SIAM Journal on Scientific and Statistical Computing, 7(3):856--869, 1986.
-\bibitem{saad96} Y.~Saad and M.~H.~Schultz, \emph{Iterative Methods for Sparse Linear Systems}, PWS Publishing, New York, 1996.
+\bibitem{saad96} Y.~Saad, \emph{Iterative Methods for Sparse Linear Systems}, PWS Publishing, New York, 1996.
 \bibitem{hestenes52} M.~R.~Hestenes and E.~Stiefel, \emph{Methods of conjugate gradients for solving linear systems}, Journal of Research of the National Bureau of Standards, B49:409--436, 1952.
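As a complement to the pseudocode in Algorithm 1, the inner-outer scheme can be sketched sequentially with SciPy's GMRES as the inner solver and LSQR as the least-squares solver. This is a minimal sketch under made-up parameter values; the function name and defaults are illustrative, and it does not capture the parallel PETSc execution described in the paper:

```python
# Minimal sketch of the two-stage scheme: inner GMRES iterations feed a
# basis S, and every s outer steps the residual is minimized over span(S)
# by an LSQR least-squares solve. SciPy stands in for the paper's PETSc
# implementation; all names and parameter values are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def krylov_two_stage(A, b, s=4, inner_iters=10, max_outer=20, tol=1e-8):
    x = np.zeros_like(b)          # initial guess x^0
    S = []                        # Krylov sub-space basis of successive inner solutions
    for k in range(1, max_outer + 1):
        # Inner stage: one restarted GMRES cycle on Ax = b from the current iterate
        x, _ = spla.gmres(A, b, x0=x, restart=inner_iters, maxiter=1)
        S.append(x.copy())        # add vector x^k to the basis S
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break                 # convergence reached
        if k % s == 0:
            Smat = np.column_stack(S)      # n x s
            R = A @ Smat                   # dense rectangular matrix R = AS
            alpha = spla.lsqr(R, b)[0]     # solve min ||b - R alpha||_2
            x = Smat @ alpha               # minimizer x^k = S alpha
            S = []                         # reinitialize the basis S
    return x
```

Because $x^k$ itself lies in $\mathrm{span}(S)$, the minimization step can only decrease the residual norm, which is what makes the periodic correction safe to interleave with the inner iterations.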