X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/blobdiff_plain/dfdee1f9086e9574b81e615edd998441ed3b82ae..fd78ced113c137f5f83f7aea69a10961467aea3e:/paper.tex

diff --git a/paper.tex b/paper.tex
index c8305c5..16b2de5 100644
--- a/paper.tex
+++ b/paper.tex
@@ -894,12 +894,12 @@ $\epsilon_{tsirm}=1e-10$.
 These experiments have been performed on an Intel(R
 Core(TM) i7-3630QM CPU @ 2.40GHz with the 3.5.1 version of PETSc.
 
-In Table~\ref{tab:02}, some experiments comparing the solving of the linear
-systems obtained with the previous matrices with a GMRES variant and with TSIRM
-are given. In the second column, it can be noticed that either GMRES or FGMRES
-(Flexible GMRES)~\cite{Saad:1993} is used to solve the linear system. According
-to the matrices, different preconditioners are used. With TSIRM, the same
-solver and the same preconditionner are used. This Table shows that TSIRM can
+Experiments comparing
+a GMRES variant with TSIRM for the solution of linear systems are given in Table~\ref{tab:02}.
+The second column indicates whether GMRES or FGMRES
+(Flexible GMRES~\cite{Saad:1993}) has been used to solve the linear systems.
+Different preconditioners have been used according to the matrices. With TSIRM, the same
+solver and the same preconditioner are used. This table shows that TSIRM can
 drastically reduce the number of iterations to reach the convergence when the
 number of iterations for the normal GMRES is more or less greater than 500. In
 fact this also depends on two parameters: the number of iterations to stop GMRES
@@ -924,7 +924,7 @@ torso3 & fgmres / sor & 37.70 & 565 & 34.97 & 510 \\
 \hline
 
 \end{tabular}
-\caption{Comparison of (F)GMRES and TSIRM with (F)GMRES in sequential with some matrices, time is expressed in seconds.}
+\caption{Comparison between sequential standalone (F)GMRES and TSIRM with (F)GMRES (time in seconds).}
 \label{tab:02}
 \end{center}
 \end{table}
@@ -934,10 +934,10 @@ torso3 & fgmres / sor & 37.70 & 565 & 34.97 & 510 \\
 
 
 In order to perform larger experiments, we have tested some example applications
-of PETSc. Those applications are available in the \emph{ksp} part which is
+of PETSc. Those applications are available in the \emph{ksp} part, which is
 suited for scalable linear equations solvers:
 \begin{itemize}
-\item ex15 is an example which solves in parallel an operator using a finite
+\item ex15 is an example that solves in parallel an operator using a finite
   difference scheme. The diagonal is equal to 4 and 4 extra-diagonals
   representing the neighbors in each directions are equal to -1. This example is
   used in many physical phenomena, for example, heat and fluid flow, wave
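
The ex15-style operator described in the last hunk (4 on the diagonal, -1 on each of the four neighbor couplings) and the (F)GMRES runs of Table~\ref{tab:02} rely on the PETSc KSP interface. The sketch below illustrates that setup under stated assumptions; it is not the code used for the paper's experiments. The grid size (100 x 100), the right-hand side, the 1e-10 relative tolerance (mirroring $\epsilon_{tsirm}$) and the iteration cap are placeholders, and the preconditioner is left to the command line (e.g. -pc_type sor).

/* Illustrative sketch (not the paper's code): assemble a 5-point
 * finite-difference operator (4 on the diagonal, -1 for each of the
 * four grid neighbors) and solve it with FGMRES via the KSP interface. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PetscInt m = 100, n = 100, row, Istart, Iend;  /* grid size: assumption */

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Matrix of size (m*n) x (m*n): one row per grid point.
   * Preallocation is omitted for brevity.                  */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m*n, m*n);
  MatSetFromOptions(A);
  MatSetUp(A);

  MatGetOwnershipRange(A, &Istart, &Iend);
  for (row = Istart; row < Iend; row++) {
    PetscInt    i = row / n, j = row % n, col;
    PetscScalar diag = 4.0, off = -1.0;
    MatSetValues(A, 1, &row, 1, &row, &diag, INSERT_VALUES);
    if (i > 0)     { col = row - n; MatSetValues(A, 1, &row, 1, &col, &off, INSERT_VALUES); }
    if (i < m - 1) { col = row + n; MatSetValues(A, 1, &row, 1, &col, &off, INSERT_VALUES); }
    if (j > 0)     { col = row - 1; MatSetValues(A, 1, &row, 1, &col, &off, INSERT_VALUES); }
    if (j < n - 1) { col = row + 1; MatSetValues(A, 1, &row, 1, &col, &off, INSERT_VALUES); }
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* Right-hand side set to 1 everywhere (assumption) */
  VecCreate(PETSC_COMM_WORLD, &b);
  VecSetSizes(b, PETSC_DECIDE, m*n);
  VecSetFromOptions(b);
  VecDuplicate(b, &x);
  VecSet(b, 1.0);

  /* FGMRES with a 1e-10 relative tolerance and an iteration cap (both
   * assumptions); the preconditioner comes from the options database. */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPFGMRES);
  KSPSetTolerances(ksp, 1e-10, PETSC_DEFAULT, PETSC_DEFAULT, 30000);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp);
  MatDestroy(&A);
  VecDestroy(&x);
  VecDestroy(&b);
  PetscFinalize();
  return 0;
}

A run with an fgmres/sor combination, comparable in spirit to the corresponding rows of Table~\ref{tab:02}, could then be launched as "mpirun -np 1 ./stencil_fgmres -pc_type sor" (the executable name is hypothetical); iteration counts and timings will naturally differ from the table, which was obtained with other matrices.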