From: couturie
Date: Mon, 21 Sep 2015 14:08:17 +0000 (+0200)
Subject: new proof
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/commitdiff_plain/8a3bd904941367c04cdeba8a6b18cc411f0c1003?hp=e8b2791c920612ee2d944379890d2a6b76e0acea

new proof
---

diff --git a/IJHPCN/paper.tex b/IJHPCN/paper.tex
index f67fa56..02ab9cf 100644
--- a/IJHPCN/paper.tex
+++ b/IJHPCN/paper.tex
@@ -13,6 +13,8 @@
 \usepackage{multirow}
 \usepackage{graphicx}
 \usepackage{url}
+\usepackage{dsfont}
+
 \def\newblock{\hskip .11em plus .33em minus .07em}
@@ -407,57 +409,74 @@ little bit longer but it performs more or less the same operations.
 \section{Convergence results}
 \label{sec:04}
+%%NEW
+
+
+We suppose in this section that GMRES($m$) is used as the inner solver of the TSIRM algorithm, applied to a complex matrix $A$.
+Let us denote by $A^\ast$ the conjugate transpose of $A$, and let $\mathfrak{R}(A)=\dfrac{1}{2} \left( A + A^\ast\right)$, $\mathfrak{I}(A)=\dfrac{1}{2i} \left( A - A^\ast\right)$.
+
+\subsection{$\mathfrak{R}(A)$ is positive}
+
+\begin{proposition}
+\label{positiveConvergent}
+If $\mathfrak{R}(A)$ is positive, then the TSIRM algorithm is convergent.
+\end{proposition}
+
+
+\begin{proof}
+If $\mathfrak{R}(A)$ is positive, then the GMRES algorithm is convergent even when $A$ is complex,
+see, \emph{e.g.},~\cite{Huang89}. In particular, its residual norm
+decreases to zero.
+
+At each iteration of the TSIRM algorithm, either a GMRES iteration is performed, or a least squares
+resolution (to find the minimum of $||b-Ax||_2$) is achieved on the linear span of the iterated approximation vectors
+$span\left(x_{k-s+1}, x_{k-s+2}, \hdots, x_{k} \right)$
+of the last GMRES stage,
+where
+$\operatorname{span}(S) = \left \{ {\sum_{i=1}^k \lambda_i v_i \Big| k \in \mathbb{N}, v_i \in S, \lambda_i \in \mathbb{R}} \right \}$.
+
+Obviously, the minimum of $||b-Ax||_2$ on the set $span\left(x_{k-s+1}, x_{k-s+2}, \hdots, x_{k} \right)$
+is lower than or equal to $||b-Ax_k||_2$, which is the last GMRES residual norm obtained. So we can
+conclude that the intermediate least squares stage inserted between the GMRES iterations
+does not prevent the GMRES residual norm from decreasing to zero.
+
+In other words, the TSIRM algorithm is convergent.
+\end{proof}
+
 
-We can now claim that,
+Regarding the convergence speed, we can claim that,
 
 \begin{proposition}
 \label{prop:saad}
-If $A$ is either a definite positive or a positive matrix and GMRES($m$) is used as a solver, then the TSIRM algorithm is convergent.
+If $A$ is a positive matrix, then the convergence of the
+TSIRM algorithm is at least linear.
 
-Furthermore, let $r_k$ be the
-$k$-th residue of TSIRM, then
+Furthermore, let $r_k$ be the $k$-th residue of TSIRM, then
 we have the following boundaries:
-\begin{itemize}
-\item when $A$ is positive:
 \begin{equation}
 ||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0|| ,
 \end{equation}
-where $M$ is the symmetric part of $A$, $\alpha = \lambda_{min}(M)^2$ and $\beta = \lambda_{max}(A^T A)$;
-\item when $A$ is positive definite:
-\begin{equation}
-\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_0\|.
-\end{equation}
-\end{itemize}
-%In the general case, where A is not positive definite, we have
-%$\|r_n\| \le \inf_{p \in P_n} \|p(A)\| \le \kappa_2(V) \inf_{p \in P_n} \max_{\lambda \in \sigma(A)} |p(\lambda)| \|r_0\|, .$
+where $M$ is the symmetric part of $A$, $\alpha = \lambda_{min}(M)^2$ and $\beta = \lambda_{max}(A^T A)$.
 \end{proposition}
 
 \begin{proof}
-Let us first recall that the residue is under control when considering the GMRES algorithm on a positive definite matrix, and it is bounded as follows:
-\begin{equation*}
-\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{k/2} \|r_0\| .
-\end{equation*}
-Additionally, when $A$ is a positive real matrix with symmetric part $M$, then the residual norm provided at the $m$-th step of GMRES satisfies:
+Let us first recall that, when $A$ is a positive real matrix with symmetric part $M$, the residual norm provided at the $m$-th step of GMRES satisfies:
 \begin{equation*}
 ||r_m|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_0|| ,
 \end{equation*}
-where $\alpha$ and $\beta$ are defined as in Proposition~\ref{prop:saad}, which proves
-the convergence of GMRES($m$) for all $m$ under such assumptions regarding $A$.
+where $\alpha$ and $\beta$ are defined as in Proposition~\ref{prop:saad}.
 These well-known results can be found, \emph{e.g.}, in~\cite{Saad86}.
 
 We will now prove by a mathematical induction that, for each $k \in \mathbb{N}^\ast$,
-$||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{mk}{2}} ||r_0||$ when $A$ is positive, and $\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_0\|$ when $A$ is positive definite.
+$||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{mk}{2}} ||r_0||$ when $A$ is positive.
 
 The base case is obvious, as for $k=1$, the TSIRM algorithm simply consists in applying GMRES($m$) once, leading to a new residual $r_1$ that follows the inductive hypothesis due to the results recalled above.
 
-Suppose now that the claim holds for all $m=1, 2, \hdots, k-1$, that is, $\forall m \in \{1,2,\hdots, k-1\}$, $||r_m|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ in the positive case, and $\|r_k\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_0\|$ in the definite positive one.
+Suppose now that the claim holds for all $u=1, 2, \hdots, k-1$, that is, $\forall u \in \{1,2,\hdots, k-1\}$, $||r_u|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{mu}{2}} ||r_0||$.
 We will show that the statement holds too for $r_k$. Two situations can occur:
 \begin{itemize}
-\item If $k \not\equiv 0 ~(\textrm{mod}\ m)$, then the TSIRM algorithm consists in executing GMRES once. In that case and by using the inductive hypothesis, we obtain either $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ if $A$ is positive, or $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\|$ $\leqslant$ $\left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite case.
+\item If $k \not\equiv 0 ~(\textrm{mod}\ m)$, then the TSIRM algorithm consists in executing GMRES once.
+In that case and by using the inductive hypothesis, we obtain $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$.
 
 \item Else, the TSIRM algorithm consists in two stages: a first GMRES($m$) execution leads to a temporary $x_k$ whose residue satisfies:
-\begin{itemize}
-\item $||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$ in the positive case,
-\item $\|r_k\| \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{m/2} \|r_{k-1}\|$ $\leqslant$ $\left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|$ in the positive definite one,
-\end{itemize}
+$$||r_k|| \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} ||r_{k-1}||\leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||$$
 and a least squares resolution.
 Let $\operatorname{span}(S) = \left \{ {\sum_{i=1}^k \lambda_i v_i \Big| k \in \mathbb{N}, v_i \in S, \lambda_i \in \mathbb{R}} \right \}$ be the linear span of a set of real vectors $S$. So,\\
 $\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2 = \min_{\alpha \in \mathbb{R}^s} ||b-AS\alpha ||_2$
@@ -469,20 +488,120 @@ $\begin{array}{ll}
 & \leqslant \min_{\lambda \in \mathbb{R}} ||b-\lambda Ax_{k} ||_2\\
 & \leqslant ||b-Ax_{k}||_2\\
 & = ||r_k||_2\\
-& \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||, \textrm{ if $A$ is positive,}\\
-& \leqslant \left( 1-\frac{\lambda_{\mathrm{min}}^2(1/2(A^T + A))}{ \lambda_{\mathrm{max}}(A^T A)} \right)^{km/2} \|r_{0}\|, \textrm{ if $A$ is}\\
-& \textrm{positive definite,}
+& \leqslant \left(1-\dfrac{\alpha}{\beta}\right)^{\frac{km}{2}} ||r_0||, \\
 \end{array}$
 \end{itemize}
 which concludes the induction and the proof.
 \end{proof}
+
+
+\subsection{$\mathfrak{R}(A)$ is positive definite}
+
+\begin{proposition}
+\label{prop2}
+Convergence of the TSIRM algorithm is at least linear when $\mathfrak{R}(A)$ is
+positive definite. Furthermore, the rate of convergence is lower
+than $$\min\left( \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{m}{2}};
+\left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{m}{2}}\right) ,$$
+where ${\lambda_{min}^{X}}$ (resp. ${\lambda_{max}^{X}}$) is the smallest (resp. largest) eigenvalue of matrix $X$.
+\end{proposition}
+
+
+\begin{proof}
+If $\mathfrak{R}(A)$ is positive definite, then it is positive, and so the TSIRM algorithm
+is convergent due to Proposition~\ref{positiveConvergent}.
+
+Furthermore, as stated in the proof of Proposition~\ref{positiveConvergent}, the GMRES residue is under control
+when $\mathfrak{R}(A)$ is positive.
+More precisely, it has been proven in the literature that the residual norm
+provided at the $m$-th step of GMRES satisfies:
+\begin{enumerate}
+\item $||r_m|| \leqslant \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{m}{2}} ||r_0||$, see, \emph{e.g.},~\cite{citeulike:2951999},
+\item $||r_m|| \leqslant \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{m}{2}} ||r_0||$, see~\cite{ANU:137201},
+\end{enumerate}
+which proves the convergence of GMRES($m$) for all $m$ under such assumptions regarding $A$.
+
+We will now prove by a mathematical induction, following the same outline as in the previous proofs, that for each $k \in \mathbb{N}^\ast$ the TSIRM-residual norm satisfies
+\begin{equation}
+\label{induc}
+\begin{array}{ll}
+||r_k|| \leqslant & \min\left( \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{mk}{2}}; \right. \\
+& \left. \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{mk}{2}}\right) ||r_0||
+\end{array}
+\end{equation}
+when $\mathfrak{R}(A)$ is positive definite.
+
+
+The base case is obvious, as for $k=1$, the TSIRM algorithm simply consists in applying GMRES($m$) once, leading to a new residual $r_1$ that follows the inductive hypothesis due to the results recalled in the items listed above.
+
+Suppose now that the claim holds for all $u=1, 2, \hdots, k-1$, that is, $\forall u \in \{1,2,\hdots, k-1\}$, the bound of Equation~\eqref{induc} holds with $k$ replaced by $u$.
+We will show that the statement holds too for $r_k$. Two situations can occur:
+\begin{itemize}
+\item If $k \not\equiv 0 ~(\textrm{mod}\ m)$, then the TSIRM algorithm consists in executing GMRES once. In that case and by using the inductive hypothesis, we obtain
+$||r_k|| \leqslant \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{m}{2}} ||r_{k-1}|| \leqslant \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{mk}{2}} ||r_0||$, due to~\cite{citeulike:2951999} and the inductive hypothesis. Furthermore, we also have $||r_k|| \leqslant \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{m}{2}} ||r_{k-1}|| \leqslant \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{mk}{2}} ||r_0||$, as proven in~\cite{ANU:137201} and by using the inductive hypothesis. So we can conclude that
+$$\begin{array}{ll}||r_k|| \leqslant & \min\left( \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{mk}{2}}; \right. \\
+& \left. \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{mk}{2}}\right) \times ||r_0||
+\end{array}.$$
+
+\item Else, the TSIRM algorithm consists in two stages: a first GMRES($m$) execution leads to a temporary $x_k$ whose residue satisfies, following the previous item:
+$$\begin{array}{ll}
+||r_k|| & \leqslant \min\left( \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{m}{2}}; \right. \\
+& \left. \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{m}{2}}\right) \times ||r_{k-1}||\\
+ & \leqslant \min\left( \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{mk}{2}}; \right. \\
+& \left. \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{mk}{2}}\right) \times ||r_0||
+\end{array}$$
+and the least squares resolution of $\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2$.
+
+Let $\operatorname{span}(S) = \left \{ {\sum_{i=1}^k \lambda_i v_i \Big| k \in \mathbb{N}, v_i \in S, \lambda_i \in \mathbb{R}} \right \}$ be the linear span of a set of real vectors $S$, as defined previously. So,\\
+$\min_{\alpha \in \mathbb{R}^s} ||b-R\alpha ||_2 = \min_{\alpha \in \mathbb{R}^s} ||b-AS\alpha ||_2$
+
+$\begin{array}{ll}
+& = \min_{x \in span\left(S_{k-s+1}, S_{k-s+2}, \hdots, S_{k} \right)} ||b-Ax ||_2\\
+& = \min_{x \in span\left(x_{k-s+1}, x_{k-s+2}, \hdots, x_{k} \right)} ||b-Ax ||_2\\
+& \leqslant \min_{x \in span\left( x_{k} \right)} ||b-Ax ||_2\\
+& \leqslant \min_{\lambda \in \mathbb{R}} ||b-\lambda Ax_{k} ||_2\\
+& \leqslant ||b-Ax_{k}||_2\\
+& = ||r_k||_2\\
+& \leqslant \min\left( \left(1- \dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{ \lambda_{min}^{\mathfrak{R}(A)} \lambda_{max}^{\mathfrak{R}(A)} + {\lambda_{max}^{\mathfrak{I}(A)}}^2}\right)^{\frac{mk}{2}}; \right. \\
+& \left. \left(1-\dfrac{{\lambda_{min}^{\mathfrak{R}(A)}}^2}{||A||^2}\right)^{\frac{mk}{2}}\right) \times ||r_0||
+\end{array} ,$
+due to the inductive hypothesis.
+\end{itemize}
+So the statement of Equation~\eqref{induc} holds too for the $k$-th iterate, which concludes the induction and the proof.
+\end{proof}
+
+\subsection{A last linear convergence result}
+
+
+\begin{proposition}
+Let us define the field of values of $A$ by
+$$\mathfrak{F}(A) = \left\{ \dfrac{x^\ast A x}{x^\ast x}, x \in \mathds{C}^n\setminus \{0\} \right\} .$$
+
+If $\mathfrak{F}(A)$ is included in a closed ball of radius $r$ and center $c$
+that does not contain the origin, then the convergence of the TSIRM algorithm is at least linear.
+
+More precisely, the rate of convergence is lower
+than $2 \dfrac{r}{|c|}$.
+\end{proposition}
+
+\begin{proof}
+This inequality comes from the fact that, under the conditions of the proposition, the GMRES residue
+satisfies the inequality $||r_k|| \leqslant 2 \left(\dfrac{r}{|c|}\right)^k ||r_0||$. An induction inspired by
+the proofs of Propositions~\ref{prop:saad} and~\ref{prop2} can transfer this inequality to the
+TSIRM residue.
+\end{proof}
+
+
+
 Remark that a similar proposition can be formulated each time the given solver satisfies an inequality of the form $||r_n|| \leqslant \mu^n ||r_0||$, with $|\mu|<1$.
 Furthermore, it is \emph{a priori} possible, in some particular cases regarding $A$, that the proposed TSIRM converges while the GMRES($m$) does not.
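As a purely numerical illustration of how the bound of Proposition~\ref{prop:saad} behaves, suppose $\alpha/\beta = 0.02$ and $m = 30$ (illustrative values only, not measured on the matrices used in the experiments). Each outer TSIRM iteration then contracts the bound by
$$\left(1-\dfrac{\alpha}{\beta}\right)^{\frac{m}{2}} = 0.98^{15} \approx 0.74, \qquad \textrm{ so that } \qquad ||r_{20}|| \leqslant 0.98^{\frac{20 \times 30}{2}}\, ||r_0|| \approx 2.3 \times 10^{-3}\, ||r_0|| .$$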
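To make the object of these propositions concrete, the two-stage iteration can be sketched in a few lines of NumPy/SciPy: an inner restarted GMRES($m$) cycle (emulated here with scipy.sparse.linalg.gmres using restart=m and maxiter=1), storage of the last $s$ iterates as columns of a matrix $S$, and a periodic least squares minimization of $||b - A S \alpha||_2$. This is only an illustrative prototype; the function name tsirm, the parameter defaults, the periodic trigger k % s == 0, and the small diagonally dominant test system are assumptions made for the sketch, not the PETSc implementation evaluated in the next section.

# Illustrative sketch of the two-stage TSIRM iteration analyzed above
# (not the PETSc implementation): GMRES(m) cycles interleaved with a
# least-squares minimization of ||b - A S alpha||_2 over the last s iterates.
import numpy as np
from scipy.sparse.linalg import gmres

def tsirm(A, b, m=30, s=10, max_outer=50, tol=1e-8):
    n = b.size
    x = np.zeros(n)
    S = np.zeros((n, s))                      # columns: last s outer iterates
    for k in range(1, max_outer + 1):
        # First stage: one restart cycle of GMRES(m), warm-started from x.
        x, _ = gmres(A, b, x0=x, restart=m, maxiter=1)
        S[:, (k - 1) % s] = x
        if k % s == 0:
            # Second stage: x = S alpha, with alpha minimizing ||b - (A S) alpha||_2.
            R = A @ S
            alpha = np.linalg.lstsq(R, b, rcond=None)[0]
            x = S @ alpha
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# Small test system whose symmetric part is positive definite, so it falls
# under the hypotheses of Proposition prop:saad.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) / n + 4.0 * np.eye(n)
b = rng.standard_normal(n)
x = tsirm(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))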
+%%ENDNEW
+
+
 %%%*********************************************************
 %%%*********************************************************
 \section{Experiments using PETSc}