X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/blobdiff_plain/df562f71fc6dcfdbb4e0b77f138977ca7219df6f..26d38e217c09735a23eb667846b3869559154681:/IJHPCN/paper.tex

diff --git a/IJHPCN/paper.tex b/IJHPCN/paper.tex
index 0c88f29..2e4cfb6 100644
--- a/IJHPCN/paper.tex
+++ b/IJHPCN/paper.tex
@@ -49,9 +49,7 @@
 \makeatletter
 \def\theequation{\arabic{equation}}
 
-%\JOURNALNAME{\TEN{\it Int. J. System Control and Information
-%Processing,
-%Vol. \theVOL, No. \theISSUE, \thePUBYEAR\hfill\thepage}}%
+\JOURNALNAME{\TEN{\it International Journal of High Performance Computing and Networking}}
 %
 %\def\BottomCatch{%
 %\vskip -10pt
@@ -73,10 +71,9 @@
 
 \setcounter{page}{1}
 
-\LRH{F. Wang et~al.}
+\LRH{R. Couturier, L. Ziane Khodja and C. Guyeux}
 
-\RRH{Metadata Based Management and Sharing of Distributed Biomedical
-Data}
+\RRH{TSIRM: A Two-Stage Iteration with least-squares Residual Minimization algorithm}
 
 \VOL{x}
 
@@ -86,7 +83,7 @@ Data}
 
 \BottomCatch
 
-\PUBYEAR{2012}
+\PUBYEAR{2015}
 
 \subtitle{}
 
@@ -109,19 +106,25 @@ Data}
 
 \begin{abstract}
-In this article, a two-stage iterative algorithm is proposed to improve the
+In this paper, a two-stage iterative algorithm is proposed to improve the
 convergence of Krylov based iterative methods, typically those of GMRES
-variants. The principle of the proposed approach is to build an external
-iteration over the Krylov method, and to frequently store its current residual
+variants. The principle of the proposed approach is to build an external
+iteration over the Krylov method, and to frequently store its current residual
 (at each GMRES restart for instance). After a given number of outer
 iterations, a least-squares minimization step is applied on the matrix composed by the saved
-residuals, in order to compute a better solution and to make new iterations if
-required. It is proven that the proposal has the same convergence properties
-than the inner embedded method itself.
Experiments using up to 16,394 cores
-also show that the proposed algorithm runs around 5 or 7 times faster than
-GMRES.
+residuals, in order to compute a better solution and to make new iterations if
+required. It is proven that the proposal has the same convergence properties
+as the inner embedded method itself.
+%%NEW
+Several experiments have been performed
+with the PETSc solver on linear and nonlinear problems. They show good
+speedups compared to GMRES, using up to 16,394 cores and different
+preconditioners.
+%%ENDNEW
 \end{abstract}
+
+
 \KEYWORD{Iterative Krylov methods; sparse linear and non linear systems; two stage iteration; least-squares residual minimization; PETSc.}
 
 %\REF{to this paper should be made as follows: Rodr\'{\i}guez
@@ -131,28 +134,11 @@ GMRES.
 %Semantics and Ontologies}, Vol. x, No. x, pp.xxx\textendash xxx.}
 
 \begin{bio}
-Manuel Pedro Rodr\'iguez Bol\'ivar received his PhD in Accounting at
-the University of Granada. He is a Lecturer at the Department of
-Accounting and Finance, University of Granada. His research
-interests include issues related to conceptual frameworks of
-accounting, diffusion of financial information on Internet, Balanced
-Scorecard applications and environmental accounting. He is author of
-a great deal of research studies published at national and
-international journals, conference proceedings as well as book
-chapters, one of which has been edited by Kluwer Academic
-Publishers.\vs{9}
-
-\noindent Bel\'en Sen\'es Garc\'ia received her PhD in Accounting at
-the University of Granada. She is a Lecturer at the Department of
-Accounting and Finance, University of Granada. Her research
-interests are related to cultural, institutional and historic
-accounting and in environmental accounting.
She has published
-research papers at national and international journals, conference
-proceedings as well as chapters of books.\vs{8}
-
-\noindent Both authors have published a book about environmental
-accounting edited by the Institute of Accounting and Auditing,
-Ministry of Economic Affairs, in Spain in October 2003.
+Raphaël Couturier ....
+
+\noindent Lilia Ziane Khodja ...
+
+\noindent Christophe Guyeux ...
 \end{bio}
 
@@ -511,7 +497,7 @@ Table~\ref{tab:01}.
 These latter, which are real-world applications matrices,
 have been extracted from the Davis collection, University of
 Florida~\cite{Dav97}.
 
-\begin{table}[htbp]
+\begin{table*}[htbp]
 \begin{center}
 \begin{tabular}{|c|c|r|r|r|}
 \hline
@@ -528,7 +514,7 @@ torso3 & 2D/3D problem & 259,156 & 4,429,042 \\
 \caption{Main characteristics of the sparse matrices chosen from the Davis collection}
 \label{tab:01}
 \end{center}
-\end{table}
+\end{table*}
 
 Chosen parameters are detailed below.
 We have stopped the GMRES every 30
 iterations (\emph{i.e.}, $max\_iter_{kryl}=30$), which is the default
@@ -550,7 +536,7 @@ fact this also depends on two parameters: the number of iterations before stoppi
 and the number of iterations to perform the minimization.
 
 
-\begin{table}[htbp]
+\begin{table*}[htbp]
 \begin{center}
 \begin{tabular}{|c|c|r|r|r|r|}
 \hline
@@ -571,7 +557,7 @@ torso3 & fgmres / sor & 37.70 & 565 & 34.97 & 510 \\
 \caption{Comparison between sequential standalone (F)GMRES and TSIRM with (F)GMRES (time in seconds).}
 \label{tab:02}
 \end{center}
-\end{table}
+\end{table*}
 
 
 
@@ -638,7 +624,7 @@ preconditioners in PETSc, readers are referred to~\cite{petsc-web-page}.
\hline
 
 \end{tabular}
-\caption{Comparison of FGMRES and TSIRM with FGMRES for example ex15 of PETSc with two preconditioners (mg and sor) having 25,000 components per core on Juqueen ($\epsilon_{tsirm}=1e-3$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\caption{Comparison of FGMRES and TSIRM with FGMRES for example ex15 of PETSc/KSP with two preconditioners (mg and sor) having 25,000 components per core on Juqueen ($\epsilon_{tsirm}=1e-3$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
 \label{tab:03}
 \end{center}
 \end{table*}
@@ -710,7 +696,7 @@ interesting.
 \hline
 
 \end{tabular}
-\caption{Comparison of FGMRES and TSIRM with FGMRES algorithms for ex54 of Petsc (both with the MG preconditioner) with 25,000 components per core on Curie ($max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\caption{Comparison of FGMRES and TSIRM with FGMRES algorithms for ex54 of PETSc/KSP (both with the MG preconditioner) with 25,000 components per core on Curie ($max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
 \label{tab:04}
 \end{center}
 \end{table*}
@@ -769,7 +755,7 @@ taken into account with TSIRM.
\hline
 
 \end{tabular}
-\caption{Comparison of FGMRES and TSIRM for ex54 of PETSc (both with the MG preconditioner) with 204,919,225 components on Curie with different number of cores ($\epsilon_{tsirm}=5e-5$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\caption{Comparison of FGMRES and TSIRM for ex54 of PETSc/KSP (both with the MG preconditioner) with 204,919,225 components on Curie with different number of cores ($\epsilon_{tsirm}=5e-5$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
 \label{tab:05}
 \end{center}
 \end{table*}
@@ -784,7 +770,7 @@ taken into account with TSIRM.
 
 Concerning the experiments some other remarks are interesting.
 \begin{itemize}
-\item We have tested other examples of PETSc (ex29, ex45, ex49). For all these
+\item We have tested other examples of PETSc/KSP (ex29, ex45, ex49). For all these
 examples, we have also obtained similar gains between GMRES and TSIRM but those
 examples are not scalable with many cores. In general, we had some problems
 with more than $4,096$ cores.
@@ -805,6 +791,82 @@ Concerning the experiments some other remarks are interesting.
 %%%*********************************************************
 
 
+%%NEW
+\begin{table*}[htbp]
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|r|r|}
+\hline
+
+ nb. cores & \multicolumn{2}{c|}{FGMRES/ASM} & \multicolumn{2}{c|}{TSIRM CGLS/ASM} & gain & \multicolumn{2}{c|}{FGMRES/HYPRE} \\
+\cline{2-5} \cline{7-8}
+ & Time & \# Iter. & Time & \# Iter. & & Time & \# Iter.
\\\hline \hline
+ 512 & 5.54 & 685 & 2.5 & 570 & 2.21 & 128.9 & 9 \\
+ 2048 & 14.95 & 1,560 & 4.32 & 746 & 3.48 & 335.7 & 9 \\
+ 4096 & 25.13 & 2,369 & 5.61 & 859 & 4.48 & $>$1,000 & -- \\
+ 8192 & 44.35 & 3,197 & 7.6 & 1,083 & 5.84 & $>$1,000 & -- \\
+
+\hline
+
+\end{tabular}
+\caption{Comparison of FGMRES and TSIRM for ex45 of PETSc/KSP with two preconditioners (ASM and HYPRE) having 25,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\label{tab:06}
+\end{center}
+\end{table*}
+
+
+\begin{figure}[htbp]
+\centering
+ \includegraphics[width=0.5\textwidth]{nb_iter_sec_ex45_curie}
+\caption{Number of iterations per second with ex45 and the same parameters as in Table~\ref{tab:06} (weak scaling).}
+\label{fig:03}
+\end{figure}
+
+
+
+\begin{table*}[htbp]
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|}
+\hline
+
+ nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
+\cline{2-5}
+ & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
+ 1024 & 667.92 & 48,732 & 81.65 & 5,087 & 8.18 \\
+ 2048 & 966.87 & 77,177 & 90.34 & 5,716 & 10.70\\
+ 4096 & 1,742.31 & 124,411 & 119.21 & 6,905 & 14.61\\
+ 8192 & 2,739.21 & 187,626 & 168.9 & 9,000 & 16.22\\
+
+\hline
+
+\end{tabular}
+\caption{Comparison of FGMRES and TSIRM for ex20 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\label{tab:07}
+\end{center}
+\end{table*}
+
+\begin{table*}[htbp]
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|}
+\hline
+
+ nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
+\cline{2-5}
+ & Time & \# Iter. & Time & \# Iter.
& \\\hline \hline + 1024 & 159.52 & 11,584 & 26.34 & 1,563 & 6.06 \\ + 2048 & 226.24 & 16,459 & 37.23 & 2,248 & 6.08\\ + 4096 & 391.21 & 27,794 & 50.93 & 2,911 & 7.69\\ + 8192 & 543.23 & 37,770 & 79.21 & 4,324 & 6.86 \\ + +\hline + +\end{tabular} +\caption{Comparison of FGMRES and TSIRM for ex14 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.} +\label{tab:08} +\end{center} +\end{table*} + + +%%ENDNEW %%%********************************************************* %%%*********************************************************
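For readers of this patch, the two-stage scheme described in the revised abstract (an inner Krylov stage whose last $s$ iterates are stored, followed by a least-squares residual minimization over them) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the paper runs FGMRES inside PETSc and minimizes with CGLS or LSQR, whereas here a Jacobi sweep (valid only for diagonally dominant matrices) stands in for the inner Krylov method and a dense NumPy least-squares solve replaces CGLS. All function and parameter names below are illustrative.

```python
import numpy as np

def tsirm(A, b, s=12, max_iter_kryl=30, outer_max=120, eps_tsirm=1e-10):
    """Two-stage iteration sketch: the columns of S store the last s
    iterates produced by the inner stage; every s outer iterations a
    least-squares step min_a ||b - A S a|| recombines them."""
    n = b.shape[0]
    x = np.zeros(n)
    S = np.zeros((n, s))                 # last s iterates, one per column
    D = np.diag(A)                       # assumption: A diagonally dominant
    for k in range(outer_max):
        # Inner stage: stand-in for max_iter_kryl restarted-GMRES iterations.
        for _ in range(max_iter_kryl):
            x = x + (b - A @ x) / D      # Jacobi sweep
        S[:, k % s] = x
        # Outer stage: least-squares residual minimization over the basis S.
        if (k + 1) % s == 0:
            alpha, *_ = np.linalg.lstsq(A @ S, b, rcond=None)
            x = S @ alpha
        if np.linalg.norm(b - A @ x) <= eps_tsirm * np.linalg.norm(b):
            break
    return x
```

As in the experiments above, the inner stage is stopped after a fixed number of iterations ($max\_iter_{kryl}=30$) rather than at convergence; the outer minimization then builds a better combination of the stored iterates than any single one of them.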