(Flexible GMRES~\cite{Saad:1993}) has been used to solve the linear systems.
Different preconditioners have been used according to the matrices. With TSIRM, the same
solver and the same preconditioner are used. This table shows that TSIRM can
drastically reduce the number of iterations needed to reach convergence when the
number of iterations required by the normal GMRES is greater than roughly 500. In
fact, this also depends on two parameters: the number of iterations before stopping
the inner GMRES and the number of iterations used to perform the minimization.
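
To make these two parameters concrete, the following minimal sketch shows where
they would enter a PETSc-based setting. It is not taken from the TSIRM sources:
the names \texttt{max\_inner} and \texttt{s} are purely illustrative, and only
standard PETSc routines (\texttt{KSPCreate}, \texttt{KSPSetType},
\texttt{KSPSetTolerances}) are used.
\begin{verbatim}
#include <petscksp.h>

int main(int argc, char **argv)
{
  KSP      inner;           /* inner Krylov solver, reused for every matrix */
  PetscInt max_inner = 30;  /* iterations before stopping the inner GMRES   */
  PetscInt s         = 10;  /* iterates kept for the minimization step      */

  PetscInitialize(&argc, &argv, NULL, NULL);

  KSPCreate(PETSC_COMM_WORLD, &inner);
  KSPSetType(inner, KSPFGMRES);   /* the same solver for every problem      */
  /* cap the inner iterations; the tolerances keep their default values     */
  KSPSetTolerances(inner, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT,
                   max_inner);

  /* ... in a TSIRM-like outer loop, the last s iterates would be gathered
     into a matrix and a small least-squares problem would be solved to
     build the next initial guess (illustrative comment, not the sources)   */
  (void)s;

  KSPDestroy(&inner);
  PetscFinalize();
  return 0;
}
\end{verbatim}
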
In order to perform larger experiments, we have tested some example applications
of PETSc. These applications are available in the \emph{ksp} part, which is
devoted to scalable linear equation solvers:
\begin{itemize}
\item ex15 is an example that solves an operator in parallel using a finite
difference scheme, in which the extra-diagonal entries representing the
neighbors in each direction are equal to $-1$. Such operators arise in many
physical phenomena, for example heat and fluid flow, wave propagation, etc.
A short sketch of this kind of stencil assembly is given after this list.
\item ex54 is another example, based on a 2D problem discretized with quadrilateral
 finite elements. In this example, the user can define the scaling of the material
 coefficient in an embedded circle, denoted $\alpha$.
\end{itemize}
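
To illustrate the kind of operator assembled by ex15, the sketch below fills one
row of a 2D five-point finite-difference stencil with PETSc. It is only a
sketch: the function name, the grid indexing, and the diagonal value are
illustrative rather than taken from the PETSc sources; the only genuine API
call is \texttt{MatSetValues}.
\begin{verbatim}
#include <petscmat.h>

/* Fill one row of a 2D five-point stencil on an m x n grid: each existing
   neighbor gets -1 and the diagonal gets 4 (illustrative value).          */
static PetscErrorCode fill_stencil_row(Mat A, PetscInt i, PetscInt j,
                                       PetscInt m, PetscInt n)
{
  PetscInt    row = i*n + j;
  PetscInt    cols[5];
  PetscScalar vals[5];
  PetscInt    k = 0;

  if (i > 0)     { cols[k] = row - n; vals[k++] = -1.0; }  /* north    */
  if (i < m - 1) { cols[k] = row + n; vals[k++] = -1.0; }  /* south    */
  if (j > 0)     { cols[k] = row - 1; vals[k++] = -1.0; }  /* west     */
  if (j < n - 1) { cols[k] = row + 1; vals[k++] = -1.0; }  /* east     */
  cols[k] = row; vals[k++] = 4.0;                          /* diagonal */

  return MatSetValues(A, 1, &row, k, cols, vals, INSERT_VALUES);
}
\end{verbatim}
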
For more technical details on these applications, interested readers are invited
to read the code available in the PETSc sources. These problems have been
chosen because they scale well on many cores.

In the following, larger experiments are described on two large-scale
architectures: Curie and Juqueen. These supercomputers are composed of
80,640 cores for Curie and 458,752 cores for Juqueen, and they are hosted by
GENCI in France and by the Jülich Supercomputing Centre in Germany,
respectively. Together with other similar architectures, they belong to the
PRACE initiative (Partnership for Advanced Computing in Europe), which
aims at providing high-performance supercomputing architectures to enhance
research in Europe. The Curie architecture is composed of Intel E5-2680
processors at 2.7~GHz with 2~GB of memory per core. The Juqueen architecture is
composed of IBM PowerPC A2 processors at 1.6~GHz with 1~GB of memory per
core. Both architectures are equipped with a dedicated high-speed network.