From: lilia
Date: Fri, 10 Oct 2014 12:45:41 +0000 (+0200)
Subject: Merge branch 'master' of ssh://bilbo.iut-bm.univ-fcomte.fr/GMRES2stage
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/commitdiff_plain/7de6cc5b2d794c02ef46116780ac061eff79ab7a?hp=2013b77d04aa67e242c885349c663c432142782a

Merge branch 'master' of ssh://bilbo.iut-bm.univ-fcomte.fr/GMRES2stage
---

diff --git a/paper.tex b/paper.tex
index c0e8b16..063cbf9 100644
--- a/paper.tex
+++ b/paper.tex
@@ -669,8 +669,8 @@ called for a maximum of $max\_iter_{kryl}$ iterations. In practice, we sugges
 equals to the restart number of the GMRES-like method. Moreover, a tolerance
 threshold must be specified for the solver. In practice, this threshold must be
 much smaller than the convergence threshold of the TSIRM algorithm (\emph{i.e.}
-$\epsilon_{tsirm}$). Line~\ref{algo:store}, $S_{k~ mod~ s}=x^k$ consists in copying the
-solution $x_k$ into the column $k~ mod~ s$ of the matrix $S$. After the
+$\epsilon_{tsirm}$). Line~\ref{algo:store}, $S_{k \mod s}=x^k$ consists in copying the
+solution $x_k$ into the column $k \mod s$ of the matrix $S$, where $S$ is a matrix of size $n\times s$ whose column vector $i$ is denoted by $S_i$. After the
 minimization, the matrix $S$ is reused with the new values of the residuals. To
 solve the minimization problem, an iterative method is used. Two parameters are
 required for that: the maximum number of iterations and the threshold to stop the
@@ -686,13 +686,13 @@ Let us summarize the most important parameters of TSIRM:
 \end{itemize}


-The parallelisation of TSIRM relies on the parallelization of all its
+The parallelization of TSIRM relies on the parallelization of all its
 parts. More precisely, except the least-squares step, all the other parts are
 obvious to achieve out in parallel. In order to develop a parallel version of
 our code, we have chosen to use PETSc~\cite{petsc-web-page}. For
 line~\ref{algo:matrix_mul} the matrix-matrix multiplication is implemented and
 efficient since the matrix $A$ is sparse and since the matrix $S$ contains few
-colums in practice. As explained previously, at least two methods seem to be
+columns in practice. As explained previously, at least two methods seem to be
 interesting to solve the least-squares minimization, CGLS and LSQR.

 In the following we remind the CGLS algorithm. The LSQR method follows more or
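To make the storage and minimization steps described in the first hunk concrete, a minimal NumPy sketch of that two-stage structure follows. It assumes the minimization is min_alpha ||b - A S alpha||_2 with the new iterate x = S alpha, uses one restart cycle of SciPy's GMRES as a stand-in for the inner Krylov solver, and replaces the iterative least-squares solver with a direct lstsq call for brevity. The function and parameter names (tsirm_sketch, m, max_outer, eps_tsirm) and the synthetic test matrix are hypothetical; this is not the repository's PETSc implementation.

import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import gmres

def tsirm_sketch(A, b, s=4, m=30, max_outer=40, eps_tsirm=1e-10):
    # S is an n x s matrix whose columns store the successive iterates.
    n = A.shape[0]
    S = np.zeros((n, s))
    x = np.zeros(n)
    for k in range(1, max_outer + 1):
        # Inner stage: one restart cycle (m Krylov iterations) of GMRES.
        x, _ = gmres(A, b, x0=x, restart=m, maxiter=1)
        # Storage step "S_{k mod s} = x^k": copy the iterate into a column of S.
        S[:, k % s] = x
        if k % s == 0:
            # Minimization stage: lstsq stands in here for an iterative
            # least-squares solver such as CGLS or LSQR.
            alpha = np.linalg.lstsq(A @ S, b, rcond=None)[0]
            x = S @ alpha
        if np.linalg.norm(b - A @ x) <= eps_tsirm * np.linalg.norm(b):
            break
    return x

# Small synthetic test problem (hypothetical data, only to make the sketch runnable).
n = 200
A = 4.0 * identity(n, format="csr") + sparse_random(n, n, density=0.01, format="csr", random_state=0)
b = np.ones(n)
x = tsirm_sketch(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))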
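The second hunk names CGLS and LSQR as the two candidates for the least-squares minimization and states that CGLS is recalled next. For reference, a generic textbook rendering of CGLS in NumPy is sketched below; it is not taken from the repository, and the variable names are chosen only for this sketch.

import numpy as np

def cgls(A, b, x0=None, max_iter=100, tol=1e-12):
    # Solves the least-squares problem min_x ||b - A x||_2 by applying the
    # conjugate gradient method to the normal equations A^T A x = A^T b,
    # without ever forming A^T A explicitly.
    x = np.zeros(A.shape[1]) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x          # residual of the overdetermined system
    s = A.T @ r            # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) <= tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

In TSIRM's minimization stage, the matrix handed to such a solver is the tall, thin product of A and S, which is why, as the second hunk notes, the cost stays low: A is sparse and S contains few columns.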