speed network.
In many situations, using a preconditioner is essential to find the solution
of a linear system. Many preconditioners are available in PETSc, but for
parallel applications those based on matrix factorization are not available.
In our experiments we have tested different kinds of preconditioners; however,
as this is not the subject of this paper, we do not report results for all of
them. In practice, we have chosen to use a multigrid ({\it mg}) and a
successive over-relaxation ({\it sor}) preconditioner. For more details on the
preconditioners available in PETSc, please consult~\cite{petsc-web-page}.
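
As an illustration of how such a choice is expressed in PETSc, the short
sketch below creates a Krylov solver and attaches either the {\it mg} or the
{\it sor} preconditioner. It is only a minimal, assumed example and not the
code used in our experiments: the helper name \texttt{solve\_with\_pc} is
hypothetical, the matrix and vectors are supposed to be already assembled, and
error checking is omitted.

\begin{verbatim}
/* Minimal sketch (not the experimental code): attaching the mg or
   sor preconditioner to a PETSc KSP solver.  A, b and x are assumed
   to be already assembled; error checking is omitted. */
#include <petscksp.h>

static PetscErrorCode solve_with_pc(Mat A, Vec b, Vec x, PCType pctype)
{
  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, pctype);      /* PCMG for mg, PCSOR for sor */
  KSPSetFromOptions(ksp);     /* run-time options may still override */
  KSPSolve(ksp, b, x);
  return KSPDestroy(&ksp);
}
\end{verbatim}

In practice, the same choice can also be made without modifying the source of
the example, by passing the run-time options \texttt{-pc\_type mg} or
\texttt{-pc\_type sor} to the PETSc options database.
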
\begin{table*}[htbp]
\begin{center}
Table~\ref{tab:03} shows the execution times and the number of iterations for
example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
are studied, ranging from 2,048 up to 16,383, with the two preconditioners
{\it mg} and {\it sor}. For these experiments, the number of components (or
unknowns of the problem) per core is fixed at 25,000, a setting known as weak
scaling. This number may seem relatively small. In fact, for some applications
that need a lot of memory, the number of components per processor sometimes
needs to be