X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/blobdiff_plain/1e098dfc32858d5c40fdc47bec94526503edf207..3f095ff7f3fff2553897be9e8dce25d2c3e9298f:/IJHPCN/paper.tex

diff --git a/IJHPCN/paper.tex b/IJHPCN/paper.tex
index 9c7ff0c..999ce37 100644
--- a/IJHPCN/paper.tex
+++ b/IJHPCN/paper.tex
@@ -608,7 +608,7 @@ However, for parallel applications, all the preconditioners based on matrix fac
 are not available. In our experiments, we have tested different kinds of
 preconditioners, but as it is not the subject of this paper, we will not
 present results with many preconditioners. In practice, we have chosen to use a
-multigrid (mg) and successive over-relaxation (sor). For further details on the
+multigrid (MG) preconditioner and successive over-relaxation (SOR). For further details on the
 preconditioners in PETSc, readers are referred to~\cite{petsc-web-page}.
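+
+As an illustration of how this choice is made, the sketch below selects FGMRES
+and the MG (or SOR) preconditioner through the standard PETSc KSP interface.
+It is only indicative: it is not the code used for the experiments, and the
+TSIRM-specific parameters ($s$, $max\_iter_{ls}$, $\epsilon_{ls}$) are handled
+in the TSIRM implementation itself, which is not reproduced here.
+\begin{verbatim}
+#include <petscksp.h>
+
+/* Indicative sketch only: solve Ax=b with FGMRES preconditioned by
+ * multigrid (or SOR); A and b are assumed to be already assembled. */
+PetscErrorCode solve_sketch(Mat A, Vec b, Vec x)
+{
+  KSP ksp;
+  PC  pc;
+  PetscErrorCode ierr;
+  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
+  ierr = KSPSetOperators(ksp, A, A); CHKERRQ(ierr);
+  ierr = KSPSetType(ksp, KSPFGMRES); CHKERRQ(ierr);
+  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
+  ierr = PCSetType(pc, PCMG); CHKERRQ(ierr);     /* PCSOR for the SOR runs */
+  /* stopping threshold and iteration limit, indicative values */
+  ierr = KSPSetTolerances(ksp, 1e-3, PETSC_DEFAULT, PETSC_DEFAULT, 30); CHKERRQ(ierr);
+  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);  /* allow runtime overrides */
+  ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);
+  ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
+  return 0;
+}
+\end{verbatim}
+The same choices can also be made at run time with the options
+\texttt{-ksp\_type fgmres} and \texttt{-pc\_type mg} (or \texttt{-pc\_type sor}).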
@@ -621,18 +621,18 @@ preconditioners in PETSc, readers are referred to~\cite{petsc-web-page}.
   nb. cores & precond & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain \\
 \cline{3-8}
        & & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
-  2,048  & mg  & 403.49   & 18,210  & 73.89  & 3,060  & 77.84  & 3,270  & 5.46 \\
-  2,048  & sor & 745.37   & 57,060  & 87.31  & 6,150  & 104.21 & 7,230  & 8.53 \\
-  4,096  & mg  & 562.25   & 25,170  & 97.23  & 3,990  & 89.71  & 3,630  & 6.27 \\
-  4,096  & sor & 912.12   & 70,194  & 145.57 & 9,750  & 168.97 & 10,980 & 6.26 \\
-  8,192  & mg  & 917.02   & 40,290  & 148.81 & 5,730  & 143.03 & 5,280  & 6.41 \\
-  8,192  & sor & 1,404.53 & 106,530 & 212.55 & 12,990 & 180.97 & 10,470 & 7.76 \\
-  16,384 & mg  & 1,430.56 & 63,930  & 237.17 & 8,310  & 244.26 & 7,950  & 6.03 \\
-  16,384 & sor & 2,852.14 & 216,240 & 418.46 & 21,690 & 505.26 & 23,970 & 6.82 \\
+  2,048  & MG  & 403.49   & 18,210  & 73.89  & 3,060  & 77.84  & 3,270  & 5.46 \\
+  2,048  & SOR & 745.37   & 57,060  & 87.31  & 6,150  & 104.21 & 7,230  & 8.53 \\
+  4,096  & MG  & 562.25   & 25,170  & 97.23  & 3,990  & 89.71  & 3,630  & 6.27 \\
+  4,096  & SOR & 912.12   & 70,194  & 145.57 & 9,750  & 168.97 & 10,980 & 6.26 \\
+  8,192  & MG  & 917.02   & 40,290  & 148.81 & 5,730  & 143.03 & 5,280  & 6.41 \\
+  8,192  & SOR & 1,404.53 & 106,530 & 212.55 & 12,990 & 180.97 & 10,470 & 7.76 \\
+  16,384 & MG  & 1,430.56 & 63,930  & 237.17 & 8,310  & 244.26 & 7,950  & 6.03 \\
+  16,384 & SOR & 2,852.14 & 216,240 & 418.46 & 21,690 & 505.26 & 23,970 & 6.82 \\
 \hline
 \end{tabular}
-\caption{Comparison of FGMRES and TSIRM with FGMRES for example ex15 of PETSc/KSP with two preconditioners (mg and sor) having 25,000 components per core on Juqueen ($\epsilon_{tsirm}=1e-3$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\caption{Comparison of FGMRES and TSIRM (with FGMRES as its inner solver) for example ex15 of PETSc/KSP with two preconditioners (MG and SOR) and 25,000 components per core on Juqueen ($\epsilon_{tsirm}=1e-3$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$); time is expressed in seconds.}
 \label{tab:03}
 \end{center}
 \end{table*}
@@ -640,7 +640,7 @@ preconditioners in PETSc, readers are referred to~\cite{petsc-web-page}.
 Table~\ref{tab:03} shows the execution times and the number of iterations of
 example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
 are studied ranging from 2,048 up to 16,384 with the two preconditioners {\it
-  mg} and {\it sor}. For those experiments, the number of components (or
+  MG} and {\it SOR}. For those experiments, the number of components (or
 unknowns of the problems) per core is fixed at 25,000, also called weak
 scaling. This number can seem relatively small. In fact, for some applications
 that need a lot of memory, the number of components per processor requires
@@ -791,56 +791,65 @@ taken into account with TSIRM.
 With PETSc, linear solvers are used inside nonlinear solvers. The SNES
 (Scalable Nonlinear Equations Solvers) module in PETSc implements easy-to-use
 methods, like Newton-type, quasi-Newton or full approximation scheme (FAS)
-multigrid to solve systems of nonlinears equations. As the SNES is based on the
+multigrid to solve systems of nonlinear equations. As SNES is based on the
 Krylov methods of PETSc, it is interesting to investigate if the TSIRM method is
-also efficient and scalable with non linear problems.
-
-
+also efficient and scalable with nonlinear problems. PETSc provides several
+example codes, and an important selection criterion for our study is the
+scalability of the original code with classical solvers. Consequently, we have
+chosen two of these examples: ex14 and ex20. In ex14, the code solves the Bratu
+(SFI, solid fuel ignition) nonlinear partial differential equation in three
+dimensions. In ex20, the code solves a three-dimensional radiative transport
+test problem. For more details on these examples, interested readers are
+referred to the PETSc source code. Table~\ref{tab:07} reports the results of
+our experiments for example ex14.
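+
+For reference, the sketch below shows how the linear solver nested inside a
+SNES can be set to FGMRES with a block Jacobi preconditioner, as in these
+runs. It is only indicative: it is not the code distributed with ex14 and
+ex20, and the TSIRM-specific part of our solver is not reproduced here.
+\begin{verbatim}
+#include <petscsnes.h>
+
+/* Indicative sketch only: configure the KSP used inside a SNES with
+ * FGMRES and block Jacobi; the nonlinear function and Jacobian
+ * callbacks of the example are assumed to be registered elsewhere. */
+PetscErrorCode configure_inner_solver(SNES snes)
+{
+  KSP ksp;
+  PC  pc;
+  PetscErrorCode ierr;
+  ierr = SNESGetKSP(snes, &ksp); CHKERRQ(ierr);
+  ierr = KSPSetType(ksp, KSPFGMRES); CHKERRQ(ierr);
+  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
+  ierr = PCSetType(pc, PCBJACOBI); CHKERRQ(ierr);
+  /* threshold and iteration limit comparable to the reported runs */
+  ierr = KSPSetTolerances(ksp, 1e-10, PETSC_DEFAULT, PETSC_DEFAULT, 30); CHKERRQ(ierr);
+  ierr = SNESSetFromOptions(snes); CHKERRQ(ierr); /* runtime overrides */
+  return 0;
+}
+\end{verbatim}
+The corresponding runtime options are \texttt{-ksp\_type fgmres} and
+\texttt{-pc\_type bjacobi}.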
 \begin{table*}[htbp]
 \begin{center}
 \begin{tabular}{|r|r|r|r|r|r|}
 \hline
-  nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC}  & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} &  gain \\
+  nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
 \cline{2-5}
-           & Time & \# Iter.  & Time & \# Iter. & \\\hline \hline
-  1024     & 667.92   & 48,732  & 81.65  & 5,087 & 8.18 \\
-  2048     & 966.87   & 77,177  & 90.34  & 5,716 & 10.70\\
-  4096     & 1,742.31 & 124,411 & 119.21 & 6,905 & 14.61\\
-  8192     & 2,739.21 & 187,626 & 168.9  & 9,000 & 16.22\\
+           & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
+  1024     & 159.52   & 11,584  & 26.34  & 1,563 & 6.06 \\
+  2048     & 226.24   & 16,459  & 37.23  & 2,248 & 6.08\\
+  4096     & 391.21   & 27,794  & 50.93  & 2,911 & 7.69\\
+  8192     & 543.23   & 37,770  & 79.21  & 4,324 & 6.86 \\
 \hline
 \end{tabular}
-\caption{Comparison of FGMRES and TSIRM for ex20 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\caption{Comparison of FGMRES and TSIRM for ex14 of PETSc/SNES with a Block Jacobi preconditioner and 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$); time is expressed in seconds.}
 \label{tab:07}
 \end{center}
 \end{table*}
+
 \begin{table*}[htbp]
 \begin{center}
 \begin{tabular}{|r|r|r|r|r|r|}
 \hline
-  nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC}  & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} &  gain \\
+  nb. cores & \multicolumn{2}{c|}{FGMRES/BJAC} & \multicolumn{2}{c|}{TSIRM CGLS/BJAC} & gain \\
 \cline{2-5}
-           & Time & \# Iter.  & Time & \# Iter. & \\\hline \hline
-  1024     & 159.52   & 11,584  & 26.34  & 1,563 & 6.06 \\
-  2048     & 226.24   & 16,459  & 37.23  & 2,248 & 6.08\\
-  4096     & 391.21   & 27,794  & 50.93  & 2,911 & 7.69\\
-  8192     & 543.23   & 37,770  & 79.21  & 4,324 & 6.86 \\
+           & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
+  1024     & 667.92   & 48,732  & 81.65  & 5,087 & 8.18 \\
+  2048     & 966.87   & 77,177  & 90.34  & 5,716 & 10.70\\
+  4096     & 1,742.31 & 124,411 & 119.21 & 6,905 & 14.61\\
+  8192     & 2,739.21 & 187,626 & 168.9  & 9,000 & 16.22\\
 \hline
 \end{tabular}
-\caption{Comparison of FGMRES and TSIRM for ex14 of PETSc/SNES with a Block Jacobi preconditioner having 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$), time is expressed in seconds.}
+\caption{Comparison of FGMRES and TSIRM for ex20 of PETSc/SNES with a Block Jacobi preconditioner and 100,000 components per core on Curie ($\epsilon_{tsirm}=1e-10$, $max\_iter_{kryl}=30$, $s=12$, $max\_iter_{ls}=15$, $\epsilon_{ls}=1e-40$); time is expressed in seconds.}
 \label{tab:08}
 \end{center}
 \end{table*}
+
+
 \subsection{Influence of parameters for TSIRM}