\section{The asynchronous iteration model and the motivations of our work}
\label{sec:asynchro}
Asynchronous iterative methods have been studied for many years theoretically and
practically. Many methods have been considered and convergence results have been
proved. These methods can be used to solve, in parallel, fixed point problems
(i.e. problems for which the solution is $x^\star = f(x^\star)$). In practice,
asynchronous iteration methods can be used to solve, for example, linear and
non-linear systems of equations or optimization problems; interested readers are
invited to read~\cite{BT89,bahi07}.
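To fix ideas, and following the classical formalization of~\cite{BT89}, an
asynchronous iteration applied to a fixed point mapping $f=(f_1,\ldots,f_n)$ can
be sketched as follows (the notation below is ours and is only given as a
reminder, not as the exact model used later):
\[
x_i^{k+1}=\left\{
\begin{array}{ll}
f_i(x_1^{\rho_{i1}(k)},\ldots,x_n^{\rho_{in}(k)}) & \mbox{if~} i\in S(k),\\
x_i^{k} & \mbox{otherwise},
\end{array}
\right.
\]
where $S(k)$ denotes the set of components updated at iteration $k$ and the
delays satisfy $\rho_{ij}(k)\leq k$; the synchronous scheme corresponds to the
particular case $S(k)=\{1,\ldots,n\}$ and $\rho_{ij}(k)=k$.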
Moreover, a code implementing a parallel iterative method requires very few
modifications to be able to be executed in both the synchronous and the
asynchronous variants. In
practice, only the communications and convergence detection are different. In
the synchronous mode, iterations are synchronized whereas in the asynchronous
one, they are not. It should be noticed that non-blocking communications can be
used in both modes. Concerning the convergence detection, synchronous variants
can use a global convergence procedure which acts as a global synchronization
point. In the asynchronous model, the convergence detection is more tricky
since no global synchronization point can be used; interested readers can
consult~\cite{myBCCV05c,bahi07,ccl09:ij}.
The number of iterations required to reach convergence is generally greater
for the asynchronous scheme (this number depends on the delay of the
messages). Note that this is not the case in the synchronous mode, where the
number of iterations is the same as in the sequential mode. Thus, the
set of parameters of the platform (number of nodes, power of nodes,
inter- and intra-cluster bandwidth and latency,~\ldots) and of the
application can drastically change the number of iterations required to reach
convergence. It follows that asynchronous iterative algorithms are difficult to
optimize since the financial and deployment costs on large-scale multi-core
architectures are often very high. So, prior to deployment and tests, it
seems very promising to be able to simulate the behavior of asynchronous
iterative algorithms. The challenge is then to show that the results produced
by simulation are in accordance with reality, i.e. of the same order of
magnitude. To our knowledge, there is no study on this question.
\begin{equation}
A_{\ell\ell} x_\ell = c_\ell,\mbox{~for~}\ell=1,\ldots,L,
\label{eq:03}
\end{equation}
where the right-hand sides $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m$ are computed using the shared vectors $x_m$. In this paper, we use the well-known iterative method GMRES~\cite{saad86} as an inner iteration to approximate the solutions of the different splittings arising from the block Jacobi multisplitting of matrix $A$. The algorithm in Figure~\ref{alg:01} shows the key points of our block Jacobi two-stage method executed by a cluster of processors. In line~\ref{solve}, the linear sub-system~(\ref{eq:03}) is solved in parallel using the GMRES method, where $\MIG$ and $\TOLG$ are the maximum number of inner iterations and the tolerance threshold of GMRES, respectively. The convergence of two-stage multisplitting methods, based on synchronous or asynchronous iterations, has been studied by many authors, for example in~\cite{Bru95,bahi07}.
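For clarity, the outer iteration implemented by this algorithm can be
summarized as follows (this is only a compact restatement of~(\ref{eq:03});
the outer iteration index $k$ and the index functions $\delta_m$ are our
notation):
\[
x_\ell^{k+1}\approx A_{\ell\ell}^{-1}\Big(b_\ell-\sum_{m\neq\ell}A_{\ell m}\,x_m^{\delta_m(k)}\Big),
\mbox{~for~}\ell=1,\ldots,L,
\]
where each local solve is only approximated, with at most $\MIG$ GMRES
iterations and tolerance $\TOLG$, and where $\delta_m(k)=k$ in the synchronous
mode while $\delta_m(k)\leq k$ in the asynchronous mode, in which outdated
copies of the shared vectors $x_m$ may be used.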
\begin{figure}[t]
%\begin{algorithm}[t]
\subsection{Simulation of the two-stage methods using SimGrid toolkit}
\label{sec:04.02}
One of our objectives when simulating the application in SimGrid is, as in real
life, to get accurate results (solutions of the problem) but also to ensure the
reproducibility of the tests under the same conditions. According to our experience,
very few modifications are required to adapt an MPI program for the SimGrid
simulator using SMPI (Simulator MPI). The first modification is to include the SMPI
libraries and related header files (\verb+smpi.h+). The second modification is to
suppress all global variables by replacing them with local variables or using a
SimGrid selector called ``runtime automatic switching''
(\texttt{smpi/privatize\_global\_variables}). Indeed, global variables can
generate side effects at runtime between the threads running in the same
process, which are generated by SimGrid to simulate the grid environment.
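To make these two modifications concrete, the sketch below shows them on a
deliberately minimal MPI program (the variable names and the code itself are
purely illustrative assumptions, not an excerpt of our solver):
\begin{verbatim}
#include <mpi.h>
#include <smpi.h>   /* SMPI header file mentioned above */

/* A global buffer such as the following would be shared by all the MPI
   processes that SimGrid folds into threads of a single process, hence
   the side effects described in the text:
   double local_residual;                                             */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* The global variable is simply turned into a local one; the
       alternative is to keep it global and to activate the SimGrid
       selector smpi/privatize_global_variables at runtime.           */
    double local_residual = 0.0;

    /* ... unchanged iterative solver using local_residual ... */
    (void) local_residual;

    MPI_Finalize();
    return 0;
}
\end{verbatim}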
\paragraph{Parameters of the simulation in SimGrid}
\ \\ \noindent Before running a SimGrid benchmark, many parameters for the
computation platform must be defined. For our experiments, we consider platforms
in which several clusters are geographically distant, so there are both intra- and
inter-cluster communications. In the following, these parameters are described:
\begin{itemize}
\item hostfile: hosts description file.
\item platform: file describing the platform architecture: clusters (CPU power,
\dots{}), intra-cluster network description, inter-cluster network (bandwidth $bw$,
latency $lat$, \dots{}).
\item archi: grid computational description (number of clusters, number of
nodes/processors in each cluster).
\end{itemize}
\noindent
In addition, the following arguments are given to the programs at runtime:
\begin{itemize}
\item maximum number of inner iterations $\MIG$ and outer iterations $\MIM$,
\item inner precision $\TOLG$ and outer precision $\TOLM$,
 \item matrix sizes of the problem: N$_{x}$, N$_{y}$ and N$_{z}$ on axis $x$, $y$ and $z$ respectively (in our experiments, we solve a 3D problem, see Section~\ref{3dpoisson}),
 \item matrix diagonal value is fixed to $6.0$ for synchronous experiments and $6.2$ for asynchronous ones,
\item matrix off-diagonal value is fixed to $-1.0$,
\item number of vectors in matrix $S$ (i.e. value of $s$),
\item maximum number of iterations $\MIC$ and precision $\TOLC$ for CGLS method,
\item execution mode: synchronous or asynchronous.
\end{itemize}
It should also be noticed that both solvers have been executed with the SimGrid selector \texttt{-cfg=smpi/running\_power}, which determines the computational power (here 19 GFlops) of the simulator host machine.
%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%
In this section, experiments for both multisplitting algorithms are reported. First, the 3D Poisson problem used in our experiments is described.
\subsection{The 3D Poisson problem}
\label{3dpoisson}
We use our two-stage algorithms to solve the well-known Poisson problem $\nabla^2\phi=f$~\cite{Polyanin01}. In three-dimensional Cartesian coordinates in $\mathbb{R}^3$, the problem takes the following form: