+The following parameters have been chosen for our experiments. As the restart
+of GMRES is performed every 30 iterations by default, we have chosen to stop
+GMRES every 30 iterations (\emph{i.e.} $max\_iter_{kryl}=30$). $s$ is set to 8. CGLS is
+chosen to minimize the least-squares problem with the following parameters:
+$\epsilon_{ls}=1e-40$ and $max\_iter_{ls}=20$. The external precision is set to
+$\epsilon_{tsirm}=1e-10$. These experiments have been performed on an Intel(R)
+Core(TM) i7-3630QM CPU @ 2.40GHz with PETSc version 3.5.1.
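+
+To illustrate how such parameters map onto the PETSc solver interface, the
+following minimal sketch configures an inner (F)GMRES solver with the above
+restart, iteration limit, and tolerance through the KSP API. It is only a
+sketch, not the actual TSIRM implementation: the outer two-stage loop, the
+storage of the $s=8$ Krylov solutions, and the CGLS minimization are omitted.
+\begin{verbatim}
+/* Sketch: configure the inner (F)GMRES solver used inside TSIRM
+   with the parameters of the experiments (restart = 30, at most
+   30 inner iterations, external precision 1e-10).               */
+#include <petscksp.h>
+
+int main(int argc, char **argv)
+{
+  KSP ksp;
+
+  PetscInitialize(&argc, &argv, NULL, NULL);
+  KSPCreate(PETSC_COMM_WORLD, &ksp);
+
+  KSPSetType(ksp, KSPFGMRES);   /* or KSPGMRES                 */
+  KSPGMRESSetRestart(ksp, 30);  /* restart every 30 iterations */
+
+  /* rtol = 1e-10 (external precision); at most 30 inner
+     iterations before control returns to the outer TSIRM loop. */
+  KSPSetTolerances(ksp, 1e-10, PETSC_DEFAULT, PETSC_DEFAULT, 30);
+  KSPSetFromOptions(ksp);
+
+  /* KSPSetOperators() and KSPSolve() would be called here,
+     inside the outer TSIRM iteration.                           */
+
+  KSPDestroy(&ksp);
+  PetscFinalize();
+  return 0;
+}
+\end{verbatim}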
+
+
+Table~\ref{tab:02} reports experiments comparing the solution of the linear
+systems obtained with the previously described matrices, using a GMRES variant
+and using our two-stage algorithm. As shown in the second column, either gmres
+or fgmres is used to solve the linear system, and depending on the matrix a
+different preconditioner is used. With TSIRM, the same solver and the same
+preconditioner are used. This table shows that TSIRM can drastically reduce the
+number of iterations needed to reach convergence when the number of iterations
+of the normal GMRES is roughly greater than 500. In fact this also depends on
+two parameters: the number of iterations after which GMRES is stopped and the
+number of iterations used to perform the minimization.
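+
+For instance, for the parabolic\_fem matrix the execution time decreases from
+1009.94 seconds with GMRES to 401.52 seconds with TSIRM, that is a speed-up of
+about $1009.94/401.52 \approx 2.5$, while for the bfwa398 matrix the speed-up
+is about $1.42/0.28 \approx 5$.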
+
+
+\begin{table}[htbp]
+\begin{center}
+\begin{tabular}{|c|c|r|r|r|r|}
+\hline
+
+ \multirow{2}{*}{Matrix name} & Solver / & \multicolumn{2}{c|}{GMRES} & \multicolumn{2}{c|}{TSIRM CGLS} \\
+\cline{3-6}
+ & precond & Time & \# Iter. & Time & \# Iter. \\\hline \hline
+
+crashbasis & gmres / none & 15.65 & 518 & 14.12 & 450 \\
+parabolic\_fem & gmres / ilu & 1009.94 & 7573 & 401.52 & 2970 \\
+epb3 & fgmres / sor & 8.67 & 600 & 8.21 & 540 \\
+atmosmodj & fgmres / sor & 104.23 & 451 & 88.97 & 366 \\
+bfwa398 & gmres / none & 1.42 & 9612 & 0.28 & 1650 \\
+torso3 & fgmres / sor & 37.70 & 565 & 34.97 & 510 \\
+\hline
+
+\end{tabular}
+\caption{Comparison of (F)GMRES and TSIRM with (F)GMRES, in sequential, on some matrices; times are expressed in seconds.}
+\label{tab:02}
+\end{center}
+\end{table}
+
+
+
+
+
+In order to perform larger experiments, we have tested some example
+applications of PETSc. Those applications are available in the KSP part of
+PETSc, which is dedicated to scalable linear equation solvers:
+\begin{itemize}
+\item ex15 is an example which solves an operator in parallel using a finite
+  difference scheme. The diagonal is equal to 4 and the 4 extra-diagonals,
+  representing the neighbors in each direction, are equal to -1 (the
+  corresponding five-point stencil is written out just after this list). Such
+  an operator arises in many physical phenomena, for example heat and fluid
+  flow, or wave propagation.
+\item ex54 is another example, based on a 2D problem discretized with
+  quadrilateral finite elements. For this example, the user can define the
+  scaling $\alpha$ of the material coefficient in an embedded circle.
+\end{itemize}
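+
+With the notations of ex15, the operator described in the first item
+corresponds, up to the treatment of boundary nodes, to the classical
+five-point finite difference stencil on a 2D grid:
+\[
+  4u_{i,j} - u_{i-1,j} - u_{i+1,j} - u_{i,j-1} - u_{i,j+1} = f_{i,j}.
+\]
+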
+For more technical details on these applications, interested readers are
+invited to read the codes available in the PETSc sources. Those problems have
+been chosen because they scale well up to a large number of cores, which is not
+the case of the other problems that we have tested.
+
+In the following, larger experiments are described on two large scale
+architectures: Curie and Juqueen. Both architectures are supercomputers,
+composed of 80,640 cores for Curie and of 458,752 cores for Juqueen. Those
+machines are respectively hosted by GENCI in France and by the Jülich
+Supercomputing Centre in Germany. They belong, with other similar
+architectures, to the PRACE initiative (Partnership for Advanced Computing in
+Europe), which aims at providing high performance supercomputing architectures
+to enhance research in Europe. The Curie architecture is composed of Intel
+E5-2680 processors at 2.7 GHz with 2 GB of memory per core. The Juqueen
+architecture is composed of IBM PowerPC A2 processors at 1.6 GHz with 1 GB of
+memory per core. Both architectures are equipped with a dedicated high speed
+network.
+
+
+
+{\bf Description of preconditioners.} In the following experiments, two PETSc
+preconditioners are used: {\it mg}, the multigrid preconditioner of PETSc, and
+{\it sor}, its successive over-relaxation preconditioner.
+
+\begin{table*}[htbp]
+\begin{center}
+\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
+\hline
+
+ nb. cores & precond & \multicolumn{2}{c|}{FGMRES} & \multicolumn{2}{c|}{TSIRM CGLS} & \multicolumn{2}{c|}{TSIRM LSQR} & best gain \\
+\cline{3-8}
+ & & Time & \# Iter. & Time & \# Iter. & Time & \# Iter. & \\\hline \hline
+ 2,048 & mg & 403.49 & 18,210 & 73.89 & 3,060 & 77.84 & 3,270 & 5.46 \\
+ 2,048 & sor & 745.37 & 57,060 & 87.31 & 6,150 & 104.21 & 7,230 & 8.53 \\
+ 4,096 & mg & 562.25 & 25,170 & 97.23 & 3,990 & 89.71 & 3,630 & 6.27 \\
+ 4,096 & sor & 912.12 & 70,194 & 145.57 & 9,750 & 168.97 & 10,980 & 6.26 \\
+ 8,192 & mg & 917.02 & 40,290 & 148.81 & 5,730 & 143.03 & 5,280 & 6.41 \\
+ 8,192 & sor & 1,404.53 & 106,530 & 212.55 & 12,990 & 180.97 & 10,470 & 7.76 \\
+ 16,384 & mg & 1,430.56 & 63,930 & 237.17 & 8,310 & 244.26 & 7,950 & 6.03 \\
+ 16,384 & sor & 2,852.14 & 216,240 & 418.46 & 21,690 & 505.26 & 23,970 & 6.82 \\
+\hline
+
+\end{tabular}
+\caption{Comparison of FGMRES and TSIRM with FGMRES for example ex15 of PETSc with two preconditioners (mg and sor), with 25,000 components per core on Juqueen (threshold 1e-3, restart=30, s=12); times are expressed in seconds.}
+\label{tab:03}
+\end{center}
+\end{table*}
+
+Table~\ref{tab:03} shows the execution times and the numbers of iterations of
+example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
+are studied, ranging from 2,048 up to 16,384. Two preconditioners have been
+tested: {\it mg} and {\it sor}. For those experiments, the number of components
+(or unknowns of the problem) per core is fixed at 25,000, which corresponds to
+a weak scaling setting. This number can seem relatively small. In fact, for
+some applications that need a lot of memory, the number of components per
+processor sometimes has to be small.
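+
+Concretely, with 25,000 unknowns per core the global problem size in
+Table~\ref{tab:03} ranges from
+\[
+  2{,}048 \times 25{,}000 = 51{,}200{,}000
+  \qquad \mbox{up to} \qquad
+  16{,}384 \times 25{,}000 = 409{,}600{,}000
+\]
+unknowns. The best gain reported in the last column of Table~\ref{tab:03} is
+the ratio between the FGMRES execution time and the smaller of the two TSIRM
+execution times; for instance, on 2,048 cores with the {\it mg} preconditioner
+it is equal to $403.49/73.89 \approx 5.46$.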
+
+