invited to read~\cite{BT89,bahi07}.
Before using an asynchronous iterative method, its convergence must be
studied. Otherwise, there is no guarantee that the application will reach
convergence. An algorithm that supports both the synchronous and the
asynchronous iteration models requires very few modifications to be executed
in both variants. In practice, only the communications management and the
convergence detection differ.
The number of iterations required to reach convergence is generally greater
for the asynchronous scheme (this number depends on the delay of the
messages). Note that this is not the case in the synchronous mode, where the
number of iterations is the same as in the sequential mode. Thus, the
set of parameters of the platform (number of nodes, power of the nodes,
inter- and intra-cluster bandwidth and latency,~\ldots) and of the
application can drastically change the number of iterations required to reach
convergence.
these resources along the program execution. This model produces accurate
results while still running relatively
fast~\cite{bedaride+degomme+genaud+al.2013.toward,velho+schnorr+casanova+al.2013.validity}.
During the simulation, the computations are actually executed, but the communications
are intercepted and their execution time is evaluated according to the
parameters of the simulated platform. It is also possible for SimGrid/SMPI to
skip the execution of large computations and only account for their duration.
Moreover, when applicable, the
\label{sec:04}
\subsection{Synchronous and asynchronous two-stage methods for sparse linear systems}
\label{sec:04.01}
In this paper we focus on two-stage multisplitting methods in both their versions (synchronous and asynchronous)~\cite{Frommer92,Szyld92,Bru95}. These iterative methods are based on multisplitting methods~\cite{O'leary85,White86,Alefeld97} and use two nested iterations: the outer iteration and the inner iteration. Let us consider the following sparse linear system of $n$ equations in $\mathbb{R}$:
\begin{equation}
Ax=b,
\label{eq:01}
\end{equation}
where $x$ is the solution vector and $b$ is the right-hand side vector. The block Jacobi multisplitting of matrix $A$ into $L$ splittings leads to the outer iteration
\begin{equation}
x_\ell^{k+1} = A_{\ell\ell}^{-1}(b_\ell - \displaystyle\sum^{L}_{\substack{m=1\\m\neq\ell}}{A_{\ell m}x^k_m}),\mbox{~for~}\ell=1,\ldots,L\mbox{~and~}k=1,2,3,\ldots
\label{eq:02}
\end{equation}
where $x_\ell$ are sub-vectors of the solution $x$, $b_\ell$ are the sub-vectors of the right-hand side $b$, and $A_{\ell\ell}$ and $A_{\ell m}$ are diagonal and off-diagonal blocks of matrix $A$ respectively. The iterations of these methods can naturally be computed in parallel so that each processor or cluster of processors is responsible for solving one splitting as a linear sub-system:
\begin{equation}
A_{\ell\ell} x_\ell = c_\ell,\mbox{~for~}\ell=1,\ldots,L,
\label{eq:03}
\end{equation}
where the right-hand sides $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m$ are computed using the shared vectors $x_m$. In this paper, we use the well-known iterative method GMRES~\cite{saad86} as an inner iteration to approximate the solutions of the different splittings arising from the block Jacobi multisplitting of matrix $A$. The algorithm in Figure~\ref{alg:01} shows the main key points of our block Jacobi two-stage method executed by a cluster of processors. In line~\ref{solve}, the linear sub-system~(\ref{eq:03}) is solved in parallel using the GMRES method, where $\MIG$ and $\TOLG$ are the maximum number of inner iterations and the tolerance threshold for GMRES, respectively. The convergence of the two-stage multisplitting methods, based on synchronous or asynchronous iterations, has been studied by many authors, see for example~\cite{Bru95,bahi07}.
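To make the interplay between the outer and inner iterations concrete, the following sequential Python sketch runs a synchronous block Jacobi two-stage iteration on a small random test system. The matrix, the sizes, the tolerances and the use of SciPy's GMRES as inner solver are illustrative choices made only for this sketch; they do not describe the parallel implementation evaluated in this paper.
\begin{verbatim}
# Illustrative sequential sketch (ours), not the paper's parallel implementation.
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import gmres

np.random.seed(0)
n, L = 400, 4                  # unknowns and number of splittings (clusters)
blk = n // L                   # size of one block row

# Diagonally dominant sparse test matrix, so the block Jacobi splitting converges.
A = sprandom(n, n, density=0.01, format="csr") + n * identity(n, format="csr")
b = np.ones(n)
x = np.zeros(n)

for k in range(100):                          # outer (multisplitting) iteration
    x_new = x.copy()
    for ell in range(L):                      # one splitting per cluster
        rows = slice(ell * blk, (ell + 1) * blk)
        A_ll = A[rows, rows]                  # diagonal block A_{ll}
        # c_l = b_l - sum_{m != l} A_{lm} x_m, built from the shared vector x
        c_l = b[rows] - (A[rows, :] @ x - A_ll @ x[rows])
        # inner iteration: a few GMRES steps on the sub-system A_{ll} x_l = c_l
        x_new[rows], _ = gmres(A_ll, c_l, x0=x[rows], restart=30, maxiter=10)
    x = x_new
    if np.linalg.norm(b - A @ x) < 1e-8:      # global convergence test
        break
print("outer iterations:", k + 1)
\end{verbatim}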
\begin{figure}[htpb]
%\begin{algorithm}[t]
\begin{equation}
\min_{\alpha\in\mathbb{R}^s}{\|b-AS\alpha\|_2}.
\label{eq:06}
\end{equation}
The algorithm in Figure~\ref{alg:02} includes the procedure of the residual minimization, and the outer iteration is restarted with a new approximation $\tilde{x}$ every $s$ iterations. The least-squares problem~(\ref{eq:06}) is solved in parallel by all clusters using the CGLS method~\cite{Hestenes52}, where $\MIC$ is the maximum number of iterations and $\TOLC$ is the tolerance threshold for this method (line~\ref{cgls} in Figure~\ref{alg:02}).
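The Python sketch below illustrates one reading of the minimization step~(\ref{eq:06}): assuming that the columns of $S$ are the solutions kept from the last $s$ outer iterations, the restarted approximation is built from the minimizing $\alpha$. The CGLS solver used in the paper is replaced here by SciPy's \verb+lsqr+, a related conjugate-gradient-type least-squares solver, and all names and sizes are illustrative.
\begin{verbatim}
# Illustrative sketch (ours); lsqr stands in for the CGLS solver of the paper.
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import lsqr

def minimize_residual(A, b, S, max_iter=100, tol=1e-12):
    """Solve min_{alpha in R^s} ||b - A S alpha||_2 and return S @ alpha."""
    AS = A @ S                               # n x s matrix, s is small
    alpha = lsqr(AS, b, atol=tol, btol=tol, iter_lim=max_iter)[0]
    return S @ alpha                         # restarted approximation x_tilde

# Example use: S plays the role of the s solutions kept from the outer iterations.
n, s = 400, 5
A = sprandom(n, n, density=0.01, format="csr") + n * identity(n, format="csr")
b = np.ones(n)
S = np.random.rand(n, s)                     # stand-in for the stored iterates
x_tilde = minimize_residual(A, b, S)
print("residual:", np.linalg.norm(b - A @ x_tilde))
\end{verbatim}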
\begin{figure}[htbp]
%\begin{algorithm}[t]
One of our objectives when simulating the application in SimGrid is, as in real
life, to get accurate results (solutions of the problem) but also to ensure the
test reproducibility under similar conditions. According to our experience,
very few modifications are required to adapt an MPI program to the SimGrid
simulator using SMPI (Simulated MPI). The first modification is to include SMPI
libraries and related header files (\verb+smpi.h+). The second modification is to
effects on runtime between the threads running in the same process and generated by
SimGrid to simulate the grid environment.
\paragraph{Simulation parameters for SimGrid}
\ \\ \noindent Before running a SimGrid benchmark, many parameters for the
computation platform must be defined. For our experiments, we consider platforms
in which several clusters are geographically distant, so there are intra- and inter-cluster communications with different bandwidths and latencies.