From a12cd1ab6b9ca67bc2943937cc73e59782d724e3 Mon Sep 17 00:00:00 2001
From: Christophe Guyeux
Date: Mon, 13 Oct 2014 15:14:19 +0200
Subject: [PATCH 1/1] idskfjkl

---
 paper.tex | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/paper.tex b/paper.tex
index 693edae..f60ef9a 100644
--- a/paper.tex
+++ b/paper.tex
@@ -716,7 +716,7 @@ $error$, which is defined by $||Ax_k-b||_2$.
 Let us summarize the most important parameters of TSIRM:
 \begin{itemize}
-\item $\epsilon_{tsirm}$: the threshold to stop the TSIRM method;
+\item $\epsilon_{tsirm}$: the threshold that stops the TSIRM method;
 \item $max\_iter_{kryl}$: the maximum number of iterations for the Krylov method;
 \item $s$: the number of outer iterations before applying the minimization step;
 \item $max\_iter_{ls}$: the maximum number of iterations for the iterative least-squares method;
@@ -727,9 +727,9 @@ Let us summarize the most important parameters of TSIRM:
 The parallelization of TSIRM relies on the parallelization of all its
 parts. More precisely, except for the least-squares step, all the other parts are
 straightforward to carry out in parallel. In order to develop a parallel version of
-our code, we have chosen to use PETSc~\cite{petsc-web-page}. For
-line~\ref{algo:matrix_mul} the matrix-matrix multiplication is implemented and
-efficient since the matrix $A$ is sparse and since the matrix $S$ contains few
+our code, we have chosen to use PETSc~\cite{petsc-web-page}. In
+line~\ref{algo:matrix_mul}, the matrix-matrix multiplication is implemented and
+efficient since the matrix $A$ is sparse and the matrix $S$ contains few
 columns in practice. As explained previously, at least two methods seem to be
 interesting to solve the least-squares minimization, CGLS and LSQR.
@@ -764,7 +764,7 @@ the parallelization of CGLS which is similar to LSQR.
 In each iteration of CGLS, there are two matrix-vector multiplications and some
-classical operations: dot product, norm, multiplication and addition on
+classical operations: dot product, norm, multiplication, and addition on
 vectors. All these operations are easy to implement in PETSc or a similar
 environment. It should be noticed that LSQR follows the same principle: it is a
 little longer, but it performs more or less the same operations.
@@ -846,8 +846,11 @@ $\begin{array}{ll}
 which concludes the induction and the proof.
 \end{proof}
-%We can remark that, at each iterate, the residue of the TSIRM algorithm is lower
-%than the one of the GMRES method.
+Remark that a similar proposition can be formulated whenever the given solver
+satisfies an inequality of the form $||r_n|| \leqslant \mu^n ||r_0||$,
+with $|\mu|<1$. Furthermore, it is \emph{a priori} possible, for some
+particular matrices $A$,
+that the proposed TSIRM converges while GMRES($m$) does not.
 %%%*********************************************************
 %%%*********************************************************
-- 
2.39.5
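The CGLS iteration described in the hunks above (two matrix-vector products per iteration, one with $A$ and one with $A^T$, plus dot products, norms, and AXPY-style vector updates) can be sketched as follows. This is a minimal NumPy sketch for illustration only, not the PETSc implementation the paper refers to; the function name, defaults, and tolerance are illustrative:

```python
import numpy as np

def cgls(A, b, max_iter_ls=100, eps=1e-12):
    """Sketch of CGLS for min ||Ax - b||_2, starting from x = 0.

    Each iteration performs exactly two matrix-vector products
    (one with A, one with A^T) plus dot products, a norm test and
    AXPY-style updates -- the operations listed in the text, all of
    which map directly onto parallel primitives in PETSc or a
    similar environment.
    """
    x = np.zeros(A.shape[1])
    r = b - A @ x                  # residual of the system
    s = A.T @ r                    # residual of the normal equations
    p = s.copy()                   # search direction
    gamma = s @ s
    for _ in range(max_iter_ls):
        q = A @ p                  # first matrix-vector product
        alpha = gamma / (q @ q)
        x += alpha * p             # AXPY update of the iterate
        r -= alpha * q             # AXPY update of the residual
        s = A.T @ r                # second matrix-vector product
        gamma_new = s @ s
        if np.sqrt(gamma_new) < eps:   # norm-based stopping test
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

In a PETSc version, the two products become `MatMult` and `MatMultTranspose`, and the dot products, norm, and vector updates become `VecDot`, `VecNorm`, and `VecAXPY`, which is why the method parallelizes with no extra effort beyond what the matrix distribution already provides.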