equals the restart number of the GMRES-like method. Moreover, a tolerance
threshold must be specified for the solver. In practice, this threshold must be
much smaller than the convergence threshold of the TSIRM algorithm (\emph{i.e.}
$\epsilon_{tsirm}$). Line~\ref{algo:store}, $S_{k \mod s}=x^k$, copies the
solution $x^k$ into the column $k \mod s$ of the matrix $S$, where $S$ is a matrix of size $n\times s$ whose column vector $i$ is denoted by $S_i$. After the
minimization, the matrix $S$ is reused with the new values of the residuals. To
solve the minimization problem, an iterative method is used. Two parameters are
required for that: the maximum number of iterations and the threshold to stop the
method (a PETSc sketch illustrating this step and the storage of line~\ref{algo:store} is given just after this list).
\end{itemize}
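
To make these steps more concrete, the following sketch (in C, and assuming an implementation on top of PETSc, as discussed below) shows one possible way to copy the iterate $x^k$ into column $k \mod s$ of the dense matrix $S$ and to configure an inner least-squares solver with its two parameters, the stopping threshold and the maximum number of iterations. The function names, the choice of LSQR, and the parameter handling are purely illustrative and do not describe the actual implementation.
\begin{verbatim}
#include <petscksp.h>

/* Illustrative sketch: store x^k in column (k mod s) of the
   dense matrix S. */
PetscErrorCode StoreIterate(Mat S, Vec x, PetscInt k, PetscInt s)
{
  PetscErrorCode    ierr;
  PetscInt          col = k % s, rstart, rend, i;
  const PetscScalar *xa;

  ierr = VecGetOwnershipRange(x, &rstart, &rend); CHKERRQ(ierr);
  ierr = VecGetArrayRead(x, &xa); CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(S, i, col, xa[i - rstart],
                       INSERT_VALUES); CHKERRQ(ierr);
  }
  ierr = VecRestoreArrayRead(x, &xa); CHKERRQ(ierr);
  ierr = MatAssemblyBegin(S, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(S, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  return 0;
}

/* Illustrative sketch: inner least-squares solver (here LSQR) acting
   on R = A*S, configured with a stopping threshold rtol (chosen much
   smaller than epsilon_tsirm) and a maximum number of iterations. */
PetscErrorCode SetupInnerSolver(Mat R, PetscReal rtol,
                                PetscInt maxits, KSP *inner)
{
  PetscErrorCode ierr;
  PC             pc;

  ierr = KSPCreate(PETSC_COMM_WORLD, inner); CHKERRQ(ierr);
  ierr = KSPSetOperators(*inner, R, R); CHKERRQ(ierr);
  ierr = KSPSetType(*inner, KSPLSQR); CHKERRQ(ierr);
  ierr = KSPGetPC(*inner, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCNONE); CHKERRQ(ierr); /* R is rectangular */
  ierr = KSPSetTolerances(*inner, rtol, PETSC_DEFAULT,
                          PETSC_DEFAULT, maxits); CHKERRQ(ierr);
  return 0;
}
\end{verbatim}
Here \texttt{R} stands for the matrix $R=AS$ used in the minimization; a possible way to build it is sketched after the next paragraph.
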
The parallelization of TSIRM relies on the parallelization of all its
parts. More precisely, except for the least-squares step, all the other parts are
straightforward to carry out in parallel. In order to develop a parallel version of
our code, we have chosen to use PETSc~\cite{petsc-web-page}. For
line~\ref{algo:matrix_mul}, the matrix-matrix multiplication is implemented and is
efficient since the matrix $A$ is sparse and the matrix $S$ contains few
columns in practice. As explained previously, at least two methods seem
well suited to solve the least-squares minimization: CGLS and LSQR.
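
As an illustration of the matrix-matrix multiplication of line~\ref{algo:matrix_mul}, one possible PETSc realization of the product $R=AS$ is sketched below; the function name is ours and only illustrative, $A$ being assumed to be stored in a sparse (AIJ) format and $S$ in a dense format with few columns.
\begin{verbatim}
#include <petscmat.h>

/* Illustrative sketch: compute R = A*S in parallel with PETSc.
   A is sparse and S is dense with only a few columns, so the
   product remains cheap even for a large matrix A. */
PetscErrorCode ComputeR(Mat A, Mat S, Mat *R)
{
  PetscErrorCode ierr;
  /* The first call allocates R; subsequent iterations could pass
     MAT_REUSE_MATRIX instead, since the shape of S never changes. */
  ierr = MatMatMult(A, S, MAT_INITIAL_MATRIX, PETSC_DEFAULT, R);
  CHKERRQ(ierr);
  return 0;
}
\end{verbatim}
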
In the following we recall the CGLS algorithm; the LSQR method follows more or
less the same principles.
the convergence of GMRES($m$) for all $m$ under that assumption regarding $A$.
\end{proposition}
We can now claim that,
\begin{proposition}
If $A$ is a positive real matrix and GMRES($m$) is used as solver, then the TSIRM algorithm is convergent.
\end{proposition}

\begin{proof}
Let $r_k$ denote the residual $b-Ax_k$ associated with the
$k$-th iterate of TSIRM.
We will prove that $r_k \rightarrow 0$ when $k \rightarrow +\infty$.
Each step of the TSIRM algorithm yields
$$\begin{array}{ll}
\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2 & = \min_{\alpha \in \mathbb{R}^s} ||b-AS\alpha ||_2\\
& = \min_{x \in Vect\left(x_0, x_1, \hdots, x_{k-1} \right)} ||b-Ax ||_2\\
& \leqslant \min_{x \in Vect\left( S_{k-1} \right)} ||b-Ax ||_2\\
& \leqslant ||b-Ax_{k-1}||_2 = ||r_{k-1}||_2.
\end{array}$$
Hence the least-squares minimization never increases the residual norm; combined with the convergence of the inner GMRES($m$) iterations for a positive real matrix $A$ (previous proposition), this yields $r_k \rightarrow 0$.
\end{proof}
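
To make the conclusion of the proof explicit, the following is a worked sketch of the underlying bound. It assumes, as in the previous proposition, that each inner GMRES($m$) cycle reduces the residual norm of a positive real system by a factor $c<1$ independent of the starting vector; $\tilde{x}_k$ denotes the iterate produced by the $k$-th inner cycle (a notation introduced only for this sketch), which is stored in $S$ before the minimization, so that the first inequality follows as in the proof:
$$||r_k||_2 \leqslant ||b-A\tilde{x}_k||_2 \leqslant c\,||r_{k-1}||_2 \leqslant \hdots \leqslant c^{k}\,||r_0||_2 ,$$
which tends to $0$ when $k \rightarrow +\infty$.
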
%%%*********************************************************
%%%*********************************************************
% that's all folks
\end{document}