From: raphael couturier
Date: Tue, 7 Oct 2014 11:22:53 +0000 (+0200)
Subject: new
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/commitdiff_plain/58544a2932f7124415b708bec79740f1269f8b52

new
---

diff --git a/paper.tex b/paper.tex
index acb46bb..f2e0621 100644
--- a/paper.tex
+++ b/paper.tex
@@ -583,10 +583,10 @@ performances.
 The present paper is organized as follows. In Section~\ref{sec:02} some
 related works are presented. Section~\ref{sec:03} presents our two-stage
 algorithm using a least-square residual minimization. Section~\ref{sec:04} describes some
-convergence results on this method. Section~\ref{sec:05} shows some
-experimental results obtained on large clusters of our algorithm using routines
-of PETSc toolkit. Finally Section~\ref{sec:06} concludes and gives some
-perspectives.
+convergence results on this method. In Section~\ref{sec:05}, parallelization
+details of TSARM are given. Section~\ref{sec:06} shows experimental results
+of our algorithm, obtained on large clusters, using routines of the PETSc
+toolkit. Finally, Section~\ref{sec:07} concludes and gives some perspectives.
 
 %%%*********************************************************
 %%%*********************************************************
@@ -604,7 +604,7 @@ perspectives.
 %%%*********************************************************
 %%%*********************************************************
-\section{A Krylov two-stage algorithm}
+\section{Two-stage algorithm with least-squares residual minimization}
 \label{sec:03}
 A two-stage algorithm is proposed to solve large sparse linear systems of the
 form $Ax=b$, where $A\in\mathbb{R}^{n\times n}$ is a sparse and square
@@ -644,11 +644,11 @@ appropriate than a direct method in a parallel context.
 \Input $A$ (sparse matrix), $b$ (right-hand side)
 \Output $x$ (solution vector)\vspace{0.2cm}
 \State Set the initial guess $x^0$
-\For {$k=1,2,3,\ldots$ until convergence (error$<\epsilon$)} \label{algo:conv}
-\State $x^k=Solve(A,b,x^{k-1},m)$ \label{algo:solve}
+\For {$k=1,2,3,\ldots$ until convergence (error$<\epsilon_{tsarm}$)} \label{algo:conv}
+\State $x^k=Solve(A,b,x^{k-1},max\_iter_{kryl})$ \label{algo:solve}
 \State retrieve error
 \State $S_{k~mod~s}=x^k$ \label{algo:store}
-\If {$k$ mod $s=0$ {\bf and} error$>\epsilon$}
+\If {$k$ mod $s=0$ {\bf and} error$>\epsilon_{tsarm}$}
 \State $R=AS$ \Comment{compute dense matrix}
 \State Solve least-squares problem $\underset{\alpha\in\mathbb{R}^{s}}{min}\|b-R\alpha\|_2$ \label{algo:}
 \State $x^k=S\alpha$ \Comment{compute new solution}
@@ -660,19 +660,24 @@ appropriate than a direct method in a parallel context.
 
 Algorithm~\ref{algo:01} summarizes the principle of our method. The outer
 iteration is inside the for loop. At line~\ref{algo:solve}, the Krylov method is
-called for a maximum of $m$ iterations. In practice, we suggest to choose $m$
+called for a maximum of $max\_iter_{kryl}$ iterations. In practice, we suggest setting this parameter
 equal to the restart number of the GMRES-like method. Moreover, a tolerance
-threshold must be specified for the solver. In practise, this threshold must be
-much smaller than the convergence threshold of the TSARM algorithm
-(i.e. $\epsilon$). Line~\ref{algo:store}, $S_{k~ mod~ s}=x^k$ consists in
-copying the solution $x_k$ into the column $k~ mod~ s$ of the matrix $S$. After
-the minimization, the matrix $S$ is reused with the new values of the residuals. % à continuer Line
+threshold must be specified for the solver. In practice, this threshold must be
+much smaller than the convergence threshold of the TSARM algorithm
+(i.e. $\epsilon_{tsarm}$). At line~\ref{algo:store}, $S_{k~ mod~ s}=x^k$ copies
+the solution $x^k$ into column $k~ mod~ s$ of the matrix $S$. After the
+minimization, the matrix $S$ is reused with the new values of the residuals.
+The minimization problem itself is solved with an iterative method, which
+requires two parameters: a maximum number of iterations and a threshold to
+stop the method.
 To summarize, the important parameters of TSARM are:
 \begin{itemize}
-\item $\epsilon$ the threshold to stop the method
-\item $m$ the number of iterations for the krylov method
+\item $\epsilon_{tsarm}$ the threshold to stop the TSARM method
+\item $\epsilon_{kryl}$ the threshold to stop the Krylov method
+\item $max\_iter_{kryl}$ the maximum number of iterations for the Krylov method
 \item $s$ the number of outer iterations before applying the minimization step
+\item $max\_iter_{ls}$ the maximum number of iterations for the iterative least-squares method
+\item $\epsilon_{ls}$ the threshold to stop the least-squares method
 \end{itemize}
 
 %%%*********************************************************
@@ -681,11 +686,18 @@ To summarize, the important parameters of TSARM are:
 \section{Convergence results}
 \label{sec:04}
+
+
 %%%*********************************************************
 %%%*********************************************************
-\section{Experiments using petsc}
+\section{Parallelization}
 \label{sec:05}
+%%%*********************************************************
+%%%*********************************************************
+\section{Experiments using PETSc}
+\label{sec:06}
+
 In order to see the influence of our algorithm with only one processor, we
 first show a comparison between the standard version of GMRES and our algorithm. In
@@ -819,7 +831,7 @@ Larger experiments ....
 %%%*********************************************************
 %%%*********************************************************
 \section{Conclusion}
-\label{sec:06}
+\label{sec:07}
 %The conclusion goes here. this is more of the conclusion
 %%%*********************************************************
 %%%*********************************************************
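
To make the loop structure in the hunks above concrete, here is a minimal Python sketch of Algorithm~\ref{algo:01} (TSARM). It is an illustration under assumptions, not the authors' PETSc implementation: the function name tsarm and the max_outer safety cap are invented for the example, SciPy's restarted GMRES stands in for the GMRES-like inner solver, and a dense least-squares solve temporarily stands in for the iterative minimization method.

    # Sketch of Algorithm 1 (TSARM); illustrative only, not the paper's code.
    import numpy as np
    from scipy.sparse.linalg import gmres

    def tsarm(A, b, s=8, max_iter_kryl=30, eps_kryl=1e-12,
              eps_tsarm=1e-8, max_outer=10000):
        x = np.zeros_like(b)              # initial guess x^0
        S = np.zeros((b.shape[0], s))     # last s Krylov iterates, column-wise
        norm_b = np.linalg.norm(b)
        for k in range(1, max_outer + 1):
            # Inner stage: one restart cycle of GMRES starting from x^{k-1},
            # i.e. at most max_iter_kryl Krylov iterations.
            x, _ = gmres(A, b, x0=x, restart=max_iter_kryl, maxiter=1,
                         rtol=eps_kryl)   # kwarg is named `tol` in SciPy < 1.12
            error = np.linalg.norm(b - A @ x) / norm_b   # retrieve error
            S[:, k % s] = x               # S_{k mod s} = x^k
            if error < eps_tsarm:         # outer convergence test
                break
            if k % s == 0:                # minimization every s iterations
                R = A @ S                 # dense n x s matrix
                # min over alpha of ||b - R alpha||_2; a dense solve stands in
                # here for the iterative least-squares method of the paper.
                alpha = np.linalg.lstsq(R, b, rcond=None)[0]
                x = S @ alpha             # new solution x^k = S alpha
        return x

With $k$ starting at 1, the columns of $S$ are overwritten cyclically, so after each minimization the matrix is progressively refilled with the new iterates, matching the remark that $S$ is reused with the new values of the residuals.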
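
Since the new text specifies that the minimization problem is solved iteratively with its own parameters $max\_iter_{ls}$ and $\epsilon_{ls}$, the dense lstsq call above could be swapped for an iterative least-squares solver. SciPy's LSQR is one possible stand-in; the diff does not name the method actually used:

    from scipy.sparse.linalg import lsqr

    # Iterative solve of min ||b - R alpha||_2, using the two parameters
    # from the itemized list: a stopping threshold and an iteration cap.
    alpha = lsqr(R, b, atol=eps_ls, btol=eps_ls, iter_lim=max_iter_ls)[0]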