X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/blobdiff_plain/9c834284514e7988ab1e0780d1b58a9a9f317751..7041fe1a420f9c665431e315942cfa4dbfa2bed5:/paper.tex

diff --git a/paper.tex b/paper.tex
index 087ff6e..d8fc84b 100644
--- a/paper.tex
+++ b/paper.tex
@@ -353,6 +353,7 @@
 \usepackage{algpseudocode}
 \usepackage{amsmath}
 \usepackage{amssymb}
+\usepackage{multirow}
 
 \algnewcommand\algorithmicinput{\textbf{Input:}}
 \algnewcommand\Input{\item[\algorithmicinput]}
@@ -636,8 +637,8 @@ direct method in the parallel context.
   \Input $A$ (sparse matrix), $b$ (right-hand side)
   \Output $x$ (solution vector)\vspace{0.2cm}
   \State Set the initial guess $x^0$
-  \For {$k=1,2,3,\ldots$ until convergence}
-    \State Solve iteratively $Ax^k=b$
+  \For {$k=1,2,3,\ldots$ until convergence} \label{algo:conv}
+    \State Solve iteratively $Ax^k=b$ \label{algo:solve}
     \State $S_{k~mod~s}=x^k$
     \If {$k$ mod $s=0$ {\bf and} not convergence}
       \State Compute dense matrix $R=AS$
@@ -663,6 +664,109 @@ reused with the new values of the residuals.
 \section{Experiments using petsc}
 \label{sec:04}
+
+In order to assess the behavior of our algorithm on a single processor, we
+first compare it with the standard version of GMRES. Table~\ref{tab:01} lists
+the matrices used in these experiments together with some of their
+characteristics: for each matrix, the name, the application field, the number
+of rows and the number of nonzero elements are given.
+
+\begin{table}
+\begin{center}
+\begin{tabular}{|c|c|r|r|}
+\hline
+Matrix name & Field & \# Rows & \# Nonzeros \\\hline \hline
+crashbasis     & Optimization                 & 160,000   & 1,750,416 \\
+parabolic\_fem & Computational fluid dynamics & 525,825   & 2,100,225 \\
+epb3           & Thermal problem              & 84,617    & 463,625   \\
+atmosmodj      & Computational fluid dynamics & 1,270,432 & 8,814,880 \\
+bfwa398        & Electromagnetics problem     & 398       & 3,678     \\
+torso3         & 2D/3D problem                & 259,156   & 4,429,042 \\
+\hline
+\end{tabular}
+\caption{Main characteristics of the sparse matrices chosen from the Davis collection.}
+\label{tab:01}
+\end{center}
+\end{table}
+
+The following parameters have been chosen for these experiments. Since GMRES
+is restarted every 30 iterations by default, we have chosen to stop the inner
+GMRES solver every 30 iterations as well (line~\ref{algo:solve} in
+Algorithm~\ref{algo:01}), and $s$ is set to 8. CGLS is used to solve the
+least-squares minimization problem; it is stopped either when its residual
+norm is below $10^{-40}$ or when it has performed more than 20 iterations.
+The external stopping threshold is set to $10^{-10}$ (line~\ref{algo:conv} in
+Algorithm~\ref{algo:01}). These experiments have been performed on an Intel(R)
+Core(TM) i7-3630QM CPU @ 2.40GHz with version 3.5.1 of PETSc.
+
+Table~\ref{tab:02} compares the solution of the linear systems associated
+with the previous matrices using a GMRES variant and using our 2 stage
+algorithm. The second column indicates whether gmres or fgmres is used and
+which preconditioner is applied; the preconditioner depends on the matrix,
+and the 2 stage algorithm always uses the same solver and the same
+preconditioner as the corresponding GMRES variant. The table shows that the
+2 stage algorithm can drastically reduce the number of iterations needed to
+reach convergence when the plain GMRES variant requires roughly 500
+iterations or more. The gain also depends on two parameters: the number of
+iterations after which the inner GMRES solver is stopped and the maximum
+number of iterations allowed in the minimization step.
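+
+In compact form, this minimization step can be summarized as follows (a
+sketch in matrix notation, for the reader's convenience, in which the
+recombination coefficients are denoted by $\alpha$). Every $s$ inner
+iterations, the last $s$ iterates are gathered as the columns of $S$, the
+dense matrix $R=AS$ is formed, and CGLS (or LSQR) is used to solve the small
+least-squares problem
+\begin{equation*}
+\min_{\alpha \in \mathbb{R}^{s}} \left\| b - R\,\alpha \right\|_{2},
+\qquad x = S\,\alpha ,
+\end{equation*}
+after which the inner solver is restarted from the recombined iterate $x$.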
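+
+For readers less familiar with PETSc, the short C sketch below illustrates
+how the settings of the reference (F)GMRES runs (FGMRES restarted every 30
+iterations, SOR preconditioning, relative tolerance $10^{-10}$) are typically
+expressed with the KSP interface. It is only an illustration of the standard
+PETSc API, not an excerpt of our implementation; in particular the function
+name \texttt{setup\_outer\_solver} is a placeholder.
+
+\begin{verbatim}
+#include <petscksp.h>
+
+/* Illustrative configuration of a reference (F)GMRES solver with the
+   parameters reported in this section (placeholder code, not the
+   implementation used for the experiments). */
+PetscErrorCode setup_outer_solver(Mat A, KSP *ksp)
+{
+  PC pc;
+  PetscErrorCode ierr;
+
+  ierr = KSPCreate(PETSC_COMM_WORLD, ksp); CHKERRQ(ierr);
+  ierr = KSPSetOperators(*ksp, A, A); CHKERRQ(ierr);
+  ierr = KSPSetType(*ksp, KSPFGMRES); CHKERRQ(ierr);   /* or KSPGMRES */
+  ierr = KSPGMRESSetRestart(*ksp, 30); CHKERRQ(ierr);  /* restart = 30 */
+  ierr = KSPGetPC(*ksp, &pc); CHKERRQ(ierr);
+  ierr = PCSetType(pc, PCSOR); CHKERRQ(ierr);          /* or PCILU, PCNONE */
+  ierr = KSPSetTolerances(*ksp, 1.e-10, PETSC_DEFAULT,
+                          PETSC_DEFAULT, PETSC_DEFAULT); CHKERRQ(ierr);
+  ierr = KSPSetFromOptions(*ksp); CHKERRQ(ierr);
+  return 0;
+}
+\end{verbatim}
+
+The same configuration can equivalently be selected at run time with the
+options \texttt{-ksp\_type fgmres}, \texttt{-ksp\_gmres\_restart 30},
+\texttt{-pc\_type sor} and \texttt{-ksp\_rtol 1e-10}. For the inner solver of
+the 2 stage algorithm, the iteration limit (here 30) plays the role of the
+stopping criterion instead of the relative tolerance.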
+
+
+\begin{table}
+\begin{center}
+\begin{tabular}{|c|c|r|r|r|r|}
+\hline
+\multirow{2}{*}{Matrix name} & Solver / & \multicolumn{2}{c|}{gmres variant} & \multicolumn{2}{c|}{2 stage CGLS} \\
+\cline{3-6}
+ & precond & Time & \# Iter. & Time & \# Iter. \\\hline \hline
+crashbasis     & gmres / none & 15.65   & 518  & 14.12  & 450  \\
+parabolic\_fem & gmres / ilu  & 1009.94 & 7573 & 401.52 & 2970 \\
+epb3           & fgmres / sor & 8.67    & 600  & 8.21   & 540  \\
+atmosmodj      & fgmres / sor & 104.23  & 451  & 88.97  & 366  \\
+bfwa398        & gmres / none & 1.42    & 9612 & 0.28   & 1650 \\
+torso3         & fgmres / sor & 37.70   & 565  & 34.97  & 510  \\
+\hline
+\end{tabular}
+\caption{Comparison of (F)GMRES and the 2 stage (F)GMRES algorithm in sequential for the matrices of Table~\ref{tab:01}; times are expressed in seconds.}
+\label{tab:02}
+\end{center}
+\end{table}
+
+Larger experiments have been carried out in parallel on the Juqueen
+supercomputer with the example ex15 of PETSc, using 25,000 components per
+core, a convergence threshold of $10^{-3}$, a restart parameter of 30 and
+$s=12$. Two preconditioners, multigrid (mg) and SOR, are considered, and the
+least-squares problem of the 2 stage algorithm is solved either with CGLS or
+with LSQR. The results obtained with 4,096, 8,192 and 16,384 cores are
+reported in Table~\ref{tab:03}; the last column gives the best gain, that is,
+the ratio between the execution time of the FGMRES variant and the best
+execution time of the two 2 stage variants.
+
+\begin{table*}
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
+\hline
+\multirow{2}{*}{nb. cores} & \multirow{2}{*}{precond} & \multicolumn{2}{c|}{gmres variant} & \multicolumn{2}{c|}{2 stage CGLS} & \multicolumn{2}{c|}{2 stage LSQR} & \multirow{2}{*}{best gain} \\
+\cline{3-8}
+ & & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
+ 4,096 & mg  & 562.25   & 25,170  & 97.23  & 3,990  & 89.71  & 3,630  & 6.27 \\
+ 4,096 & sor & 912.12   & 70,194  & 145.57 & 9,750  & 168.97 & 10,980 & 6.26 \\
+ 8,192 & mg  & 917.02   & 40,290  & 148.81 & 5,730  & 143.03 & 5,280  & 6.41 \\
+ 8,192 & sor & 1,404.53 & 106,530 & 212.55 & 12,990 & 180.97 & 10,470 & 7.76 \\
+16,384 & mg  & 1,430.56 & 63,930  & 237.17 & 8,310  & 244.26 & 7,950  & 6.03 \\
+16,384 & sor & 2,852.14 & 216,240 & 418.46 & 21,690 & 505.26 & 23,970 & 6.82 \\
+\hline
+\end{tabular}
+\caption{Comparison of FGMRES and the 2 stage FGMRES algorithm for ex15 of PETSc with 25,000 components per core on Juqueen (threshold $10^{-3}$, restart=30, $s=12$); times are expressed in seconds.}
+\label{tab:03}
+\end{center}
+\end{table*}
+
+
 %%%*********************************************************
 %%%*********************************************************