From: raphael couturier
Date: Mon, 13 Oct 2014 09:01:44 +0000 (+0200)
Subject: new
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/commitdiff_plain/721c9ed5bc3d45a7fbb91e920b8de4f11e0ffb1f?ds=inline

new
---

diff --git a/paper.tex b/paper.tex
index e4d146a..bfb7aa2 100644
--- a/paper.tex
+++ b/paper.tex
@@ -703,16 +703,16 @@ method is called for a maximum of $max\_iter_{kryl}$ iterations. In practice,
 we suggest setting this parameter equal to the restart number of the GMRES-like
 method. Moreover, a tolerance threshold must be specified for the solver. In
 practice, this threshold must be much smaller than the convergence threshold of
-the TSIRM algorithm (\emph{i.e.}, $\epsilon_{tsirm}$). We also consider that
-after the call of the $Solve$ function, we obtain the vector $x_k$ and the error
-which is defined by $||Ax_k-b||_2$.
+the TSIRM algorithm (\emph{i.e.}, $\epsilon_{tsirm}$). We also consider that
+after the call to the $Solve$ function, we obtain the vector $x_k$ and the
+$error$, which is defined by $||Ax_k-b||_2$.
- Line~\ref{algo:store},
-$S_{k \mod s}=x_k$ consists in copying the solution $x_k$ into the column $k
-\mod s$ of $S$. After the minimization, the matrix $S$ is reused with the new
-values of the residuals. To solve the minimization problem, an iterative method
-is used. Two parameters are required for that: the maximum number of iterations
-and the threshold to stop the method.
+ At line~\ref{algo:store}, $S_{k \mod s}=x_k$ consists of copying the solution
+ $x_k$ into the column $k \mod s$ of $S$. After the minimization, the matrix
+ $S$ is reused with the new values of the residuals. To solve the minimization
+ problem, an iterative method is used, which requires two parameters:
+ the maximum number of iterations ($max\_iter_{ls}$) and the threshold used to
+ stop the method ($\epsilon_{ls}$).

 Let us summarize the most important parameters of TSIRM:
 \begin{itemize}
@@ -733,8 +733,9 @@ efficient since the matrix $A$ is sparse and since the matrix $S$ contains few
 columns in practice. As explained previously, at least two methods seem to be
 suitable for solving the least-squares minimization: CGLS and LSQR.

-In the following we remind the CGLS algorithm. The LSQR method follows more or
-less the same principle but it takes more place, so we briefly explain the parallelization of CGLS which is similar to LSQR.
+In Algorithm~\ref{algo:02} we recall the CGLS algorithm. The LSQR method follows
+more or less the same principle, but it requires more space to present, so we
+briefly explain the parallelization of CGLS, which is similar to that of LSQR.

 \begin{algorithm}[t]
 \caption{CGLS}
@@ -763,9 +764,10 @@ less the same principle but it takes more place, so we briefly explain the paral

 In each iteration of CGLS, there are two matrix-vector multiplications and some
-classical operations: dot product, norm, multiplication and addition on vectors. All
-these operations are easy to implement in PETSc or similar environment.
-
+classical operations: dot products, norms, multiplications and additions on
+vectors. All these operations are easy to implement in PETSc or a similar
+environment. It should be noted that LSQR follows the same principle; it is a
+little longer, but it performs more or less the same operations.
 %%%*********************************************************
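
As a complement to the description above, the outer TSIRM loop can be sketched
in a few lines: an inner Krylov solve limited to $max\_iter_{kryl}$ iterations,
the copy of each iterate into column $k \mod s$ of $S$, and a periodic
least-squares minimization over the columns of $S$. The NumPy sketch below is
only illustrative: the names inner_solve and lstsq_solve are hypothetical
stand-ins (the paper uses GMRES and CGLS/LSQR through PETSc), and the exact
condition triggering the minimization step is an assumption.

import numpy as np

def tsirm(A, b, x0, s=8, max_iter_kryl=30, eps_tsirm=1e-10,
          max_iter_ls=20, eps_ls=1e-12, inner_solve=None, lstsq_solve=None):
    # Illustrative sketch of the TSIRM two-stage loop, not the paper's code.
    # inner_solve(A, b, x, max_iter_kryl): stand-in for the Solve step
    #   (a GMRES-like method); returns an improved iterate x_k.
    # lstsq_solve(AS, b, max_iter_ls, eps_ls): iterative least-squares solver
    #   (e.g., CGLS or LSQR) for min_alpha ||b - (A S) alpha||_2.
    n = b.shape[0]
    S = np.zeros((n, s))
    x = x0.copy()
    k = 0
    error = np.linalg.norm(A @ x - b)            # error = ||A x_k - b||_2
    while error > eps_tsirm:
        x = inner_solve(A, b, x, max_iter_kryl)  # inner Krylov solve
        S[:, k % s] = x                          # S_{k mod s} = x_k
        if (k + 1) % s == 0:                     # assumed trigger: every s iterations
            alpha = lstsq_solve(A @ S, b, max_iter_ls, eps_ls)
            x = S @ alpha                        # recombine the stored iterates
            S[:, k % s] = x                      # S is then reused with new residuals
        error = np.linalg.norm(A @ x - b)
        k += 1
    return x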
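Similarly, a minimal NumPy sketch of the standard CGLS recurrence makes the
cost per iteration explicit: two matrix-vector products (one with $A$, one
with $A^T$) plus dot products, norms, and vector updates, each of which maps
directly onto a PETSc primitive such as MatMult, MatMultTranspose, VecDot,
VecNorm, or VecAXPY. This is an illustration of the algorithm, not the paper's
PETSc code.

import numpy as np

def cgls(A, b, x0, max_iter_ls=20, eps_ls=1e-12):
    # CGLS: conjugate gradient applied to the normal equations
    # A^T A x = A^T b, i.e., it solves min_x ||b - A x||_2.
    x = x0.copy()
    r = b - A @ x                 # residual of the original system
    p = A.T @ r                   # initial search direction
    s = p.copy()                  # normal-equations residual A^T r
    gamma = s @ s                 # ||A^T r||_2^2
    for _ in range(max_iter_ls):
        q = A @ p                           # matrix-vector product with A
        alpha = gamma / (q @ q)             # step length (dot products)
        x = x + alpha * p                   # AXPY-like vector updates
        r = r - alpha * q
        s = A.T @ r                         # matrix-vector product with A^T
        gamma_new = s @ s
        if np.sqrt(gamma_new) < eps_ls:     # stopping threshold epsilon_ls
            break
        p = s + (gamma_new / gamma) * p     # new conjugate direction
        gamma = gamma_new
    return x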