+\label{sec:05}
+
+
+In order to assess the behavior of our algorithm with only one processor, we
+first compare the standard version of GMRES with our algorithm. In
+Table~\ref{tab:01}, we list the matrices we have used together with some of
+their characteristics. For each matrix, the name, the field, the number of
+rows and the number of nonzero elements are given.
+
+\begin{table*}
+\begin{center}
+\begin{tabular}{|c|c|r|r|}
+\hline
+Matrix name & Field & \# Rows & \# Nonzeros \\ \hline \hline
+crashbasis & Optimization & 160,000 & 1,750,416 \\
+parabolic\_fem & Computational fluid dynamics & 525,825 & 2,100,225 \\
+epb3 & Thermal problem & 84,617 & 463,625 \\
+atmosmodj & Computational fluid dynamics & 1,270,432 & 8,814,880 \\
+bfwa398 & Electromagnetics problem & 398 & 3,678 \\
+torso3 & 2D/3D problem & 259,156 & 4,429,042 \\
+\hline
+
+\end{tabular}
+\caption{Main characteristics of the sparse matrices chosen from the Davis collection}
+\label{tab:01}
+\end{center}
+\end{table*}
+
+The following parameters have been chosen for our experiments. Since by
+default GMRES restarts every 30 iterations, we have chosen to stop GMRES
+every 30 iterations (line \ref{algo:solve} in Algorithm~\ref{algo:01}). $s$
+is set to 8. CGLS is chosen to solve the least-squares minimization problem,
+with two stopping conditions: either the precision is below $10^{-40}$ or the
+number of iterations exceeds $20$. The external precision is set to
+$10^{-10}$ (line \ref{algo:conv} in Algorithm~\ref{algo:01}). These
+experiments have been performed on an Intel(R) Core(TM) i7-3630QM CPU @
+2.40GHz with version 3.5.1 of PETSc.
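+
+To make the interplay between the two stages concrete, the following is a
+minimal NumPy/SciPy sketch of the two-stage iteration with the parameters
+above. It is illustrative only, not the PETSc implementation used in the
+experiments: SciPy's LSQR stands in for CGLS, and the function name
+\texttt{tsarm} is ours.
+\begin{verbatim}
+import numpy as np
+from scipy.sparse.linalg import gmres, lsqr
+
+def tsarm(A, b, s=8, inner=30, tol=1e-10, max_cycles=1000):
+    n = A.shape[0]
+    x = np.zeros(n)
+    S = np.zeros((n, s))          # last s GMRES iterates, column-wise
+    norm_b = np.linalg.norm(b)
+    for k in range(max_cycles):
+        # Stage 1: one GMRES cycle of `inner` iterations
+        # (keyword names may vary slightly across SciPy versions)
+        x, _ = gmres(A, b, x0=x, restart=inner, maxiter=1)
+        S[:, k % s] = x
+        # Stage 2: every s cycles, minimize ||b - A S alpha||_2;
+        # LSQR capped at 20 iterations stands in for CGLS here
+        if (k + 1) % s == 0:
+            alpha = lsqr(A @ S, b, iter_lim=20)[0]
+            x = S @ alpha
+        if np.linalg.norm(b - A @ x) / norm_b < tol:  # external precision
+            break
+    return x
+\end{verbatim}
+Storing only the last $s$ iterates keeps the least-squares problem small
+($n \times s$ with $s=8$ here), which is why the minimization stage is cheap
+compared to the GMRES cycles.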
+
+
+In Table~\ref{tab:02}, we report experiments comparing the solution of the
+linear systems obtained with the previous matrices using a GMRES variant and
+using our two-stage algorithm. The second column indicates whether gmres or
+fgmres is used to solve the linear system. Depending on the matrix, a
+different preconditioner is used. The two-stage algorithm uses the same
+solver and the same preconditioner. This table shows that the two-stage
+algorithm can drastically reduce the number of iterations needed to reach
+convergence when the number of iterations of plain GMRES is roughly greater
+than 500. In fact, this also depends on two parameters: the number of
+iterations after which GMRES is stopped and the number of iterations used for
+the minimization.
+
+
+\begin{table}
+\begin{center}
+\begin{tabular}{|c|c|r|r|r|r|}
+\hline
+
+ \multirow{2}{*}{Matrix name} & Solver / & \multicolumn{2}{c|}{GMRES} & \multicolumn{2}{c|}{TSARM CGLS} \\
+\cline{3-6}
+ & precond & Time & \# Iter. & Time & \# Iter. \\\hline \hline
+
+crashbasis & gmres / none & 15.65 & 518 & 14.12 & 450 \\
+parabolic\_fem & gmres / ilu & 1009.94 & 7573 & 401.52 & 2970 \\
+epb3 & fgmres / sor & 8.67 & 600 & 8.21 & 540 \\
+atmosmodj & fgmres / sor & 104.23 & 451 & 88.97 & 366 \\
+bfwa398 & gmres / none & 1.42 & 9612 & 0.28 & 1650 \\
+torso3 & fgmres / sor & 37.70 & 565 & 34.97 & 510 \\
+\hline
+
+\end{tabular}
+\caption{Comparison of (F)GMRES and the two-stage (F)GMRES algorithm in sequential on the test matrices; times are in seconds.}
+\label{tab:02}
+\end{center}
+\end{table}
+
+
+
+
+
+In the following, we describe the PETSc applications we have experimented
+with. They are available in the KSP part of PETSc, which is devoted to
+scalable linear equation solvers:
+\begin{itemize}
+\item ex15 is an example which solves in parallel an operator discretized
+  with a finite difference scheme: the diagonal is equal to 4 and the 4
+  off-diagonals, representing the neighbors in each direction, are equal to
+  -1 (a sketch of this operator is given after this list). This operator
+  arises in many physical phenomena, for example heat and fluid flow or wave
+  propagation.
+\item ex54 is another example, based on a 2D problem discretized with
+  quadrilateral finite elements. For this example, the user can define the
+  scaling $\alpha$ of the material coefficient in an embedded circle.
+\end{itemize}
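+
+As an illustration of the operator solved by ex15, the following is a minimal
+SciPy sketch assembling such a five-point matrix on an $m \times m$ grid. It
+only illustrates the stencil described above and is not the actual ex15
+source; the function name is ours.
+\begin{verbatim}
+import scipy.sparse as sp
+
+def five_point_operator(m):
+    # 1D tridiagonal block: 4 on the diagonal, -1 for left/right neighbors
+    T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(m, m))
+    # couple the up/down neighbors across grid rows with -1
+    N = sp.diags([-1, -1], [-1, 1], shape=(m, m))
+    return sp.kron(sp.identity(m), T) + sp.kron(N, sp.identity(m))
+
+A = five_point_operator(400)   # 160,000 x 160,000 operator
+\end{verbatim}
+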
+For more technical details on these applications, interested readers are
+invited to consult the code available in the PETSc sources. These problems
+have been chosen because they scale well to many cores; the other problems we
+tested did not.
+
+
+
+
+\begin{table*}
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
+\hline
+
+ nb. cores & precond & \multicolumn{2}{c|}{GMRES} & \multicolumn{2}{c|}{TSARM CGLS} & \multicolumn{2}{c|}{TSARM LSQR} & best gain \\
+\cline{3-8}
+ & & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
+ 2,048 & mg & 403.49 & 18,210 & 73.89 & 3,060 & 77.84 & 3,270 & 5.46 \\
+ 2,048 & sor & 745.37 & 57,060 & 87.31 & 6,150 & 104.21 & 7,230 & 8.53 \\
+ 4,096 & mg & 562.25 & 25,170 & 97.23 & 3,990 & 89.71 & 3,630 & 6.27 \\
+ 4,096 & sor & 912.12 & 70,194 & 145.57 & 9,750 & 168.97 & 10,980 & 6.26 \\
+ 8,192 & mg & 917.02 & 40,290 & 148.81 & 5,730 & 143.03 & 5,280 & 6.41 \\
+ 8,192 & sor & 1,404.53 & 106,530 & 212.55 & 12,990 & 180.97 & 10,470 & 7.76 \\
+ 16,384 & mg & 1,430.56 & 63,930 & 237.17 & 8,310 & 244.26 & 7,950 & 6.03 \\
+ 16,384 & sor & 2,852.14 & 216,240 & 418.46 & 21,690 & 505.26 & 23,970 & 6.82 \\
+\hline
+
+\end{tabular}
+\caption{Comparison of FGMRES and the two-stage FGMRES algorithm for ex15 of PETSc with 25,000 components per core on Juqueen (threshold=1e-3, restart=30, s=12); times are in seconds.}
+\label{tab:03}
+\end{center}
+\end{table*}
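+
+In Table~\ref{tab:03} (and in the following tables), the ``best gain'' column
+reports the ratio between the GMRES time and the smaller of the two TSARM
+times. For instance, on 2,048 cores with the mg preconditioner, it equals
+$403.49/73.89 \approx 5.46$.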
+
+
+\begin{figure}
+\centering
+ \includegraphics[width=0.45\textwidth]{nb_iter_sec_ex15_juqueen}
+\caption{Number of iterations per second with ex15 and the same parameters as in Table~\ref{tab:03}.}
+\label{fig:01}
+\end{figure}
+
+
+
+
+
+\begin{table*}
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
+\hline
+
+ nb. cores & threshold & \multicolumn{2}{c|}{GMRES} & \multicolumn{2}{c|}{TSARM CGLS} & \multicolumn{2}{c|}{TSARM LSQR} & best gain \\
+\cline{3-8}
+ & & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
+ 2,048 & 8e-5 & 108.88 & 16,560 & 23.06 & 3,630 & 22.79 & 3,630 & 4.77 \\
+ 2,048 & 6e-5 & 194.01 & 30,270 & 35.50 & 5,430 & 27.74 & 4,350 & 6.99 \\
+ 4,096 & 7e-5 & 160.59 & 22,530 & 35.15 & 5,130 & 29.21 & 4,350 & 5.49 \\
+ 4,096 & 6e-5 & 249.27 & 35,520 & 52.13 & 7,950 & 39.24 & 5,790 & 6.35 \\
+ 8,192 & 6e-5 & 149.54 & 17,280 & 28.68 & 3,810 & 29.05 & 3,990 & 5.21 \\
+ 8,192 & 5e-5 & 785.04 & 109,590 & 76.07 & 10,470 & 69.42 & 9,030 & 11.30 \\
+ 16,384 & 4e-5 & 718.61 & 86,400 & 98.98 & 10,830 & 131.86 & 14,790 & 7.26 \\
+\hline
+
+\end{tabular}
+\caption{Comparison of FGMRES and the two-stage FGMRES algorithm for ex54 of PETSc (both with the MG preconditioner) with 25,000 components per core on Curie (restart=30, s=12); times are in seconds.}
+\label{tab:04}
+\end{center}
+\end{table*}
+
+
+
+
+
+\begin{table*}
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|}
+\hline
+
+ nb. cores & \multicolumn{2}{c|}{GMRES} & \multicolumn{2}{c|}{TSARM CGLS} & \multicolumn{2}{c|}{TSARM LSQR} & best gain & \multicolumn{3}{c|}{efficiency} \\
+\cline{2-7} \cline{9-11}
+ & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & & GMRES & TS CGLS & TS LSQR\\\hline \hline
+ 512 & 3,969.69 & 33,120 & 709.57 & 5,790 & 622.76 & 5,070 & 6.37 & 1 & 1 & 1 \\
+ 1024 & 1,530.06 & 25,860 & 290.95 & 4,830 & 307.71 & 5,070 & 5.25 & 1.30 & 1.21 & 1.01 \\
+ 2048 & 919.62 & 31,470 & 237.52 & 8,040 & 194.22 & 6,510 & 4.73 & 1.08 & .75 & .80\\
+ 4096 & 405.60 & 28,380 & 111.67 & 7,590 & 91.72 & 6,510 & 4.42 & 1.22 & .79 & .84 \\
+ 8192 & 785.04 & 109,590 & 76.07 & 10,470 & 69.42 & 9,030 & 11.30 & .32 & .58 & .56 \\
+
+\hline
+
+\end{tabular}
+\caption{Comparison of FGMRES and the two-stage FGMRES algorithm for ex54 of PETSc (both with the MG preconditioner) with 204,919,225 components on Curie with different numbers of cores (restart=30, s=12, threshold=5e-5); times are in seconds.}
+\label{tab:05}
+\end{center}
+\end{table*}
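+
+The efficiency columns of Table~\ref{tab:05} correspond to the classical
+relative parallel efficiency taken with the 512-core run as baseline, i.e.
+$E_p = (512 \times T_{512})/(p \times T_p)$ for each solver. For example, for
+GMRES on 1,024 cores, $E_{1024} = 3{,}969.69/(2 \times 1{,}530.06) \approx
+1.30$. Values above 1 stem from the reduction of the iteration count between
+512 and 1,024 cores (33,120 versus 25,860 iterations for GMRES).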