From: raphael couturier
Date: Thu, 9 Oct 2014 19:22:14 +0000 (+0200)
Subject: new
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/commitdiff_plain/42bad55fb961978dd2a38a0236bc43cc8b5ab7cf?ds=sidebyside

new
---

diff --git a/paper.tex b/paper.tex
index 4f9f60e..15a45f0 100644
--- a/paper.tex
+++ b/paper.tex
@@ -877,9 +877,10 @@ Table~\ref{tab:03} shows the execution times and the number of iterations of
 example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
 are studied, ranging from 2,048 up to 16,383. Two preconditioners have been
 tested. For these experiments, the number of components (or unknowns of the
-problems) per processor is fixed to 25,000. This number can seem relatively
-small. In fact, for some applications that need a lot of memory, the number of
-components per processor requires sometimes to be small.
+problem) per processor is fixed to 25,000, which corresponds to a weak scaling
+experiment. This number can seem relatively small. In fact, for some applications
+that need a lot of memory, the number of components per processor sometimes has
+to be small.
 
 In this table, we can see that TSIRM is always faster than FGMRES. The last
 column shows the ratio between FGMRES and the best version of TSIRM according to
@@ -887,7 +888,16 @@ the minimization procedure: CGLS or LSQR. Even if we have computed the worst
 case between CGLS and LSQR, it is clear that TSIRM is always faster than
 FGMRES. For this example, the multigrid preconditioner is faster than SOR. The
 gain of TSIRM over FGMRES is roughly similar for the two
-preconditioners
+preconditioners.
+
+In Figure~\ref{fig:01}, the number of iterations per second corresponding to
+Table~\ref{tab:01} is displayed. Note that for TSIRM, only the iterations of the
+Krylov solver are taken into account. Iterations of CGLS or LSQR are not counted,
+although they are time-consuming. We can see that the number of iterations per
+second of FGMRES is constant, whereas it decreases with TSIRM for both
+preconditioners. This can be explained by the fact that when the number of cores
+increases, the time spent in the minimization step also increases, but this step
+also becomes more effective at reducing the number of iterations.
 
 \begin{figure}[htbp]
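
As a rough illustration of the weak-scaling setup described in the first hunk (a sketch that only reuses the figures quoted there, 25,000 unknowns per core and 2,048 to 16,383 cores; the symbols $p$ and $N_{\mathrm{global}}$ are introduced here purely for illustration), the global system size grows linearly with the number of cores:

% Weak scaling: the per-core problem size is fixed, so the global size
% is proportional to the number of cores p.
\[
  N_{\mathrm{global}} \;=\; p \times 25{,}000,
  \qquad
  2{,}048 \times 25{,}000 \approx 5.1 \times 10^{7},
  \qquad
  16{,}383 \times 25{,}000 \approx 4.1 \times 10^{8}.
\]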
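
In the same spirit, the quantity plotted in Figure~\ref{fig:01} is presumably the plain ratio below, under the counting convention stated in the second hunk (for TSIRM only the Krylov iterations are counted):

% Only the Krylov (FGMRES) iterations enter the numerator for TSIRM; the
% CGLS/LSQR iterations of the minimization step add to the execution time
% but not to the count, which is consistent with the TSIRM curves dropping
% as the number of cores grows.
\[
  \text{iterations per second} \;=\;
  \frac{\#\,\text{Krylov iterations}}{\text{execution time (s)}}.
\]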