Algorithm~\ref{algo:01} summarizes the principle of the proposed method. The outer
iteration is inside the \emph{for} loop. At line~\ref{algo:solve}, the Krylov method is
called for a maximum of $max\_iter_{kryl}$ iterations. In practice, we suggest
setting this parameter equal to the restart number of the GMRES-like method.
Moreover, a tolerance threshold must be specified for the solver; in practice,
this threshold must be much smaller than the convergence threshold of the TSIRM
algorithm (\emph{i.e.}, $\epsilon_{tsirm}$). At line~\ref{algo:store},
$S_{k \bmod s}=x^k$ consists of copying the solution $x^k$ into column
$k \bmod s$ of the matrix $S$, where $S$ is a matrix of size $n\times s$ whose
column vector $i$ is denoted by $S_i$. After the minimization, the matrix $S$
is reused with the new values of the residuals. To solve the minimization
problem, an iterative method is used; it requires two parameters: the maximum
number of iterations and the threshold used to stop it. A sequential sketch of
the overall loop is given after the parameter list below.

Let us summarize the most important parameters of TSIRM:
\begin{itemize}
\item $\epsilon_{tsirm}$: the threshold to stop the TSIRM method;
\item $max\_iter_{kryl}$: the maximum number of iterations for the Krylov method;
\item $s$: the number of outer iterations before applying the minimization step;
\item $max\_iter_{ls}$: the maximum number of iterations for the iterative least-squares method;
\item $\epsilon_{ls}$: the threshold used to stop the least-squares method.
\end{itemize}
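
To fix ideas, here is a minimal sequential sketch of this loop in Python with
SciPy; it is only an illustration under simplifying assumptions, not our
parallel implementation. SciPy's \texttt{gmres} stands in for the inner Krylov
solver, \texttt{lstsq} for the least-squares method, and the parameter names
mirror the list above ($max\_outer$, a safety cap on outer iterations, is ours).

\begin{verbatim}
import numpy as np
import scipy.sparse.linalg as spla

def tsirm(A, b, s=5, max_iter_kryl=30, eps_tsirm=1e-10, max_outer=1000):
    n = b.shape[0]
    x = np.zeros(n)
    S = np.zeros((n, s))              # n x s matrix of stored iterates
    for k in range(1, max_outer + 1):
        # inner solve: one restart cycle of max_iter_kryl GMRES iterations
        # (the tolerance keyword is named tol in older SciPy versions)
        x, _ = spla.gmres(A, b, x0=x, rtol=1e-14,
                          restart=max_iter_kryl, maxiter=1)
        S[:, k % s] = x               # S_{k mod s} = x^k
        if k % s == 0:                # minimization step every s iterations
            R = A @ S                 # sparse A times the few columns of S
            alpha = np.linalg.lstsq(R, b, rcond=None)[0]
            x = S @ alpha             # x = S alpha minimizes ||b - A S alpha||
        if np.linalg.norm(b - A @ x) < eps_tsirm:
            break
    return x
\end{verbatim}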

The parallelization of TSIRM relies on the parallelization of all its
parts. More precisely, every part except the least-squares step is
straightforward to carry out in parallel. In order to develop a parallel
version of our code, we have chosen to use PETSc~\cite{petsc-web-page}. At
line~\ref{algo:matrix_mul}, the matrix-matrix multiplication is easy to
implement and efficient, since the matrix $A$ is sparse and the matrix $S$
contains few columns in practice. As explained previously, at least two
methods seem interesting for solving the least-squares minimization: CGLS
and LSQR.
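
As an illustration, the minimization step alone could be written as follows
with SciPy's LSQR playing the role of the iterative least-squares solver;
again, this is a hedged sequential sketch, not the PETSc code.

\begin{verbatim}
import scipy.sparse.linalg as spla

def minimization_step(A, S, b, eps_ls=1e-12, max_iter_ls=200):
    R = A @ S          # cheap: A is sparse and S has few columns
    # solve min ||b - R alpha||_2 iteratively
    alpha = spla.lsqr(R, b, atol=eps_ls, btol=eps_ls,
                      iter_lim=max_iter_ls)[0]
    return S @ alpha   # new iterate x = S alpha
\end{verbatim}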
In the following, we recall the CGLS algorithm (Algorithm~\ref{algo:02}). The
LSQR method follows more or less the same principle, but it is longer to
present, so we only detail the parallelization of CGLS; that of LSQR is similar.

\begin{algorithm}[t]
\caption{CGLS}
\begin{algorithmic}[1]
  \Input $A$ (matrix), $b$ (right-hand side)
  \Output $x$ (solution vector)\vspace{0.2cm}
  \State Let $x_0$ be an initial approximation
  \State $r_0=b-Ax_0$
  \State $p_1=A^Tr_0$
  \State $s_0=p_1$
  \State $\gamma=||s_0||^2_2$
  \For {$k=1,2,3,\ldots$ until convergence ($\gamma<\epsilon_{ls}$)} \label{algo2:conv}
  \State $q_k=Ap_k$
  \State $\alpha_k=\gamma/||q_k||^2_2$
  \State $x_k=x_{k-1}+\alpha_kp_k$
  \State $r_k=r_{k-1}-\alpha_kq_k$
  \State $s_k=A^Tr_k$
  \State $\gamma_{old}=\gamma$
  \State $\gamma=||s_k||^2_2$
  \State $\beta_k=\gamma/\gamma_{old}$
  \State $p_{k+1}=s_k+\beta_kp_k$
  \EndFor
\end{algorithmic}
\label{algo:02}
\end{algorithm}
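
For reference, Algorithm~\ref{algo:02} transcribes almost line by line into
the following sequential Python sketch (a dense or \texttt{scipy.sparse}
matrix $A$ works equally well; the parameter defaults are ours):

\begin{verbatim}
import numpy as np

def cgls(A, b, eps_ls=1e-12, max_iter_ls=200):
    x = np.zeros(A.shape[1])       # initial approximation x_0
    r = b - A @ x                  # r_0 = b - A x_0
    p = A.T @ r                    # p_1 = A^T r_0
    s = p.copy()                   # s_0 = p_1
    gamma = s @ s                  # gamma = ||s_0||_2^2
    for _ in range(max_iter_ls):
        if gamma < eps_ls:         # convergence test of line 6
            break
        q = A @ p                  # q_k = A p_k
        alpha = gamma / (q @ q)    # alpha_k = gamma / ||q_k||_2^2
        x += alpha * p             # x_k = x_{k-1} + alpha_k p_k
        r -= alpha * q             # r_k = r_{k-1} - alpha_k q_k
        s = A.T @ r                # s_k = A^T r_k
        gamma_old, gamma = gamma, s @ s
        beta = gamma / gamma_old   # beta_k = gamma / gamma_old
        p = s + beta * p           # p_{k+1} = s_k + beta_k p_k
    return x
\end{verbatim}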

In each iteration of CGLS, there are two matrix-vector multiplications (one
with $A$ and one with $A^T$) and some classical operations: dot products,
norms, and vector updates (scalings and additions). All these operations are
easy to implement in PETSc or similar environments.
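
For instance, with the \texttt{petsc4py} bindings (whose methods mirror the C
routines \texttt{MatMult}, \texttt{MatMultTranspose}, \texttt{VecDot},
\texttt{VecAXPY} and \texttt{VecAYPX}), the body of one CGLS iteration could
look as follows; this is a sketch assuming \texttt{A}, \texttt{p}, \texttt{q},
\texttt{r}, \texttt{s}, \texttt{x} and \texttt{gamma} are already-created,
distributed PETSc objects and scalars.

\begin{verbatim}
A.mult(p, q)                  # q = A p            (MatMult)
alpha = gamma / q.dot(q)      # gamma / ||q||_2^2  (VecDot)
x.axpy(alpha, p)              # x += alpha p       (VecAXPY)
r.axpy(-alpha, q)             # r -= alpha q       (VecAXPY)
A.multTranspose(r, s)         # s = A^T r          (MatMultTranspose)
gamma_old, gamma = gamma, s.dot(s)
p.aypx(gamma / gamma_old, s)  # p = beta p + s     (VecAYPX)
\end{verbatim}

Since every one of these calls is collective over the distributed matrix and
vectors, the loop runs in parallel without further changes.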