$b\in\mathbb{R}^n$ is the right-hand side. The algorithm is implemented as an
inner-outer iteration solver based on iterative Krylov methods. The main key
points of our solver are given in Algorithm~\ref{algo:01}.

In order to accelerate the convergence, the outer iteration periodically applies
a least-squares minimization on the residuals computed by the inner solver. The
inner solver is a Krylov-based solver which does not need to be modified.

At each outer iteration, the sparse linear system $Ax=b$ is solved for only $m$
iterations with an iterative method that restarts from the previous solution. For
example, the GMRES method~\cite{Saad86} or some of its variants can be used as an
inner solver. The current solution of the Krylov method is saved in a matrix $S$
whose columns are the successive solutions computed by the inner iteration.

Periodically, every $s$ iterations, a minimization step is applied in order to
compute a new solution $x$. For that, a new solution is sought as a linear
combination $S\alpha$ of the stored solutions, whose residual $b-AS\alpha$ has
minimal norm. This leads to the least-squares problem
\begin{equation}
\underset{\alpha\in\mathbb{R}^{s}}{min}\|b-R\alpha\|_2
\label{eq:01}
\end{equation}
with $R=AS$. Then the new solution $x$ is computed as $x=S\alpha$.

In practice, $R$ is a dense rectangular matrix in $\mathbb{R}^{n\times s}$,
$s\ll n$. In order to solve~(\ref{eq:01}), a least-squares method such as
CGLS~\cite{Hestenes52} or LSQR~\cite{Paige82} is used. These iterative methods
are more appropriate than a direct method in a parallel context.
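For completeness, we recall that the minimizer of~(\ref{eq:01}) is
characterized by the normal equations
\begin{equation}
R^T R\alpha = R^T b,
\end{equation}
where $R^T$ denotes the transpose of $R$. Iterative least-squares solvers such
as CGLS and LSQR do not form $R^T R$ explicitly and only require matrix-vector
products with $R$ and $R^T$, which is why they are well suited to the parallel
context.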
\begin{algorithm}[t]
\caption{TSARM}
\begin{algorithmic}[1]
  \Input $A$ (sparse matrix), $b$ (right-hand side)
  \Output $x$ (solution vector)\vspace{0.2cm}
  \State Set the initial guess $x^0$
  \For {$k=1,2,3,\ldots$ until convergence (error $<\epsilon$)} \label{algo:conv}
    \State $x^k=Solve(A,b,x^{k-1},m)$ \label{algo:solve}
    \State Retrieve the error computed by the inner solver
    \State $S_{k~mod~s}=x^k$ \label{algo:store}
    \If {$k~mod~s=0$ {\bf and} error $>\epsilon$}
      \State $R=AS$ \Comment{compute dense matrix}
      \State Solve the least-squares problem $\underset{\alpha\in\mathbb{R}^{s}}{min}\|b-R\alpha\|_2$
      \State $x^k=S\alpha$ \Comment{compute new solution}
    \EndIf
  \EndFor
\end{algorithmic}
\label{algo:01}
\end{algorithm}

Algorithm~\ref{algo:01} summarizes the principle of our method. The outer
iteration corresponds to the for loop. At line~\ref{algo:solve}, the Krylov
method is called for a maximum of $m$ iterations. In practice, we suggest
choosing $m$ equal to the restart parameter of the GMRES-like method. Moreover,
a tolerance threshold must be specified for the inner solver; this threshold
must be much smaller than the convergence threshold of the TSARM algorithm
(i.e. $\epsilon$). At line~\ref{algo:store}, $S_{k~mod~s}=x^k$ copies the
solution $x^k$ into column $k~mod~s$ of the matrix $S$. After the minimization,
the matrix $S$ is reused, its columns being overwritten by the subsequently
computed solutions.

To summarize, the important parameters of TSARM are:
\begin{itemize}
\item $\epsilon$, the threshold used to stop the method;
\item $m$, the maximum number of iterations of the inner Krylov method;
\item $s$, the number of outer iterations between two minimization steps.
\end{itemize}

%%%*********************************************************
%%%*********************************************************
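To make the workflow of Algorithm~\ref{algo:01} concrete, the following
listing gives a minimal sketch of the outer iteration in Python, assuming
SciPy's \texttt{gmres} as the inner solver and \texttt{lsqr} for the
least-squares step. The function \texttt{tsarm}, its default values and the
dense storage of $S$ are illustrative only and do not correspond to the actual
implementation evaluated in this paper.

\begin{verbatim}
import numpy as np
import scipy.sparse.linalg as spla

def tsarm(A, b, eps=1e-8, m=30, s=10, max_outer=1000):
    # A: sparse matrix, b: right-hand side, eps: outer threshold,
    # m: inner (GMRES) iterations per outer step,
    # s: outer iterations between two minimization steps.
    n = b.shape[0]
    x = np.zeros(n)                 # initial guess x^0
    S = np.zeros((n, s))            # columns: successive inner solutions
    for k in range(1, max_outer + 1):
        # Inner stage: one GMRES cycle of at most m iterations, restarted
        # from the previous solution (its tolerance should be much
        # smaller than eps).
        x, _ = spla.gmres(A, b, x0=x, restart=m, maxiter=1)
        error = np.linalg.norm(b - A @ x)
        S[:, k % s] = x             # S_{k mod s} = x^k
        if error < eps:             # convergence of the outer iteration
            break
        if k % s == 0:
            # Minimization stage: R = AS, then min_alpha ||b - R alpha||_2
            R = A @ S
            alpha = spla.lsqr(R, b)[0]
            x = S @ alpha           # new solution x^k = S alpha
    return x
\end{verbatim}

Note that, as in Algorithm~\ref{algo:01}, the matrix $S$ is simply overwritten
column by column, so no orthogonalization of this basis is required.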