iterations phase. In the remainder, the $i$-th column vector of $S$ will be
denoted by $S_i$.
Every $s$ iterations, another kind of minimization step is applied in order to
compute a new solution $x$. For that, the residuals of $Ax=b$ corresponding to the solutions
produced by the inner iterations are computed with $(b-AS)$. The minimization of these residuals is
obtained by a least-squares procedure, as sketched below.
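As an illustration only, the following sketch (in Python, with illustrative names) shows one possible way to realize this outer step; it assumes that the columns of $S$ store the iterates produced by the inner phase and that the least-squares problem $\min_{\alpha}\|b-AS\alpha\|_2$ is solved directly, which may differ from the actual implementation.
\begin{verbatim}
import numpy as np

def outer_minimization(A, b, S):
    # Illustrative sketch, not the authors' code: S stores the iterates
    # produced by the inner phase, one per column.
    AS = A @ S                      # image of every stored iterate
    # alpha minimizes || b - AS @ alpha ||_2 (residual minimization in the
    # least-squares sense); lstsq is a placeholder for whatever
    # least-squares solver is actually used.
    alpha, *_ = np.linalg.lstsq(AS, b, rcond=None)
    return S @ alpha                # new outer solution x = S alpha
\end{verbatim}
In this sketch the matrix $AS$ is formed explicitly, which is only reasonable when the number $s$ of stored iterates is small.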
\section{Convergence results}
\label{sec:04}
We can claim that,
\begin{proposition}
If $A$ is either a positive or a positive definite matrix and GMRES($m$) is used as the inner solver, then the TSIRM algorithm is convergent. Furthermore,
let $r_k$ be the $k$-th residual of TSIRM; then we have the following bounds:
\begin{itemize}
\item when $A$ is positive:
\begin{equation}
||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0|| ,
\end{equation}
where $M$ is the symmetric part of $A$, $\alpha = \lambda_{min}(M)^2$ and $\beta = \lambda_{max}(A^T A)$;
\item when $A$ is positive definite:
\begin{equation}
||r_k|| \leqslant \left(1-\dfrac{\lambda_{min}\left(\frac{1}{2}(A^T + A)\right)^2}{\lambda_{max}(A^T A)}\right)^{\frac{km}{2}} ||r_0|| .
\end{equation}
\end{itemize}
\end{proposition}
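Before turning to the proof, a quick numerical illustration may be helpful: the following Python snippet, with an arbitrarily chosen small matrix, evaluates the convergence factor $1-\frac{\alpha}{\beta}$ appearing in the first bound and checks that it lies in $(0,1)$.
\begin{verbatim}
import numpy as np

# Arbitrary small positive real (non-symmetric) matrix, for illustration.
A = np.array([[4.0, 1.0],
              [0.0, 3.0]])

M = 0.5 * (A + A.T)                      # symmetric part of A
alpha = np.linalg.eigvalsh(M)[0] ** 2    # lambda_min(M)^2
beta = np.linalg.eigvalsh(A.T @ A)[-1]   # lambda_max(A^T A)

factor = 1.0 - alpha / beta              # convergence factor of the bound
assert 0.0 < factor < 1.0                # the bound decreases geometrically

k, m = 5, 10                             # outer iterations, GMRES restart
print(factor ** (k * m / 2.0))           # bound on ||r_k|| / ||r_0||
\end{verbatim}
For this particular matrix the factor is roughly $0.57$, so the bound decreases quickly as $km$ grows.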
\begin{proof}
We will prove by mathematical induction that, for each $k \in \mathbb{N}^\ast$,
$||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||.$
Let us first recall that, when $A$ is a positive real matrix with symmetric part $M$, the residual norm provided at the $m$-th step of GMRES satisfies (see~\cite{Saad86} for further details):
\begin{equation}
||r_m|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_0|| ,
\end{equation}
where $\alpha = \lambda_{min}(M)^2$ and $\beta = \lambda_{max}(A^T A)$, which proves
the convergence of GMRES($m$) for all $m$ under this assumption on $A$.

The base case is obvious: for $k=1$, the TSIRM algorithm simply consists in applying GMRES($m$) once, leading to a residual $r_1$ which satisfies the claimed bound thanks to the GMRES($m$) convergence result recalled above.
Suppose now that the claim holds up to rank $k-1$, that is, $\forall j \in \{1,2,\hdots, k-1\}$, $||r_j|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{jm}{2}} ||r_0||$.