paper, we show that it is interesting to use SimGrid to simulate the behaviour
of asynchronous iterative algorithms. For that, we compare the behaviour of a
synchronous GMRES algorithm with an asynchronous multisplitting one with
simulations which let us easily choose some parameters. Both codes are real MPI
codes, and the simulations allow us to see when the asynchronous multisplitting algorithm can be more
efficient than the GMRES one to solve a 3D Poisson problem.
all clusters are interconnected by a virtual unidirectional ring network (see
Figure~\ref{fig:4.1}). During the resolution, a Boolean token circulates around
the virtual ring from one master processor to another until the global convergence
is achieved. So starting from the cluster with rank 1, each master processor $\ell$
sets the token to \textit{True} if the local convergence is achieved or to
\textit{False} otherwise, and sends it to master processor $\ell+1$. Finally, the
global convergence is detected when the master of cluster 1 receives from the
master of cluster $L$ a token set to \textit{True}. In this case, the master of
cluster 1 broadcasts a stop message to masters of other clusters.
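As an illustration only (this is not the authors' implementation), one circulation of the token could be written with a few MPI calls as follows, assuming a hypothetical communicator \texttt{master\_comm} gathering the $L$ master processors, and forwarding the logical AND of the received token and the local convergence flag so that the value coming back to the first master reflects the state of all clusters:
\begin{verbatim}
/* Illustrative sketch only (not the authors' code): ring-based global
 * convergence detection among the L cluster masters.  It assumes a
 * hypothetical communicator master_comm in which rank l is the master
 * of cluster l+1. */
#include <mpi.h>

int global_convergence(MPI_Comm master_comm, int local_convergence)
{
    int rank, L, token;
    MPI_Comm_rank(master_comm, &rank);
    MPI_Comm_size(master_comm, &L);
    int next = (rank + 1) % L;
    int prev = (rank + L - 1) % L;

    if (rank == 0) {
        /* Master of cluster 1 starts the token with its own state,
         * then waits for it to come back from the master of cluster L. */
        token = local_convergence;
        MPI_Send(&token, 1, MPI_INT, next, 0, master_comm);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, master_comm,
                 MPI_STATUS_IGNORE);
    } else {
        /* Other masters combine the received token with their local
         * convergence flag before forwarding it along the ring. */
        MPI_Recv(&token, 1, MPI_INT, prev, 0, master_comm,
                 MPI_STATUS_IGNORE);
        token = token && local_convergence;
        MPI_Send(&token, 1, MPI_INT, next, 0, master_comm);
    }

    /* The "stop message" of the paper is rendered here as a simple
     * broadcast of the final token from the master of cluster 1. */
    MPI_Bcast(&token, 1, MPI_INT, 0, master_comm);
    return token;
}
\end{verbatim}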
In this work, the ratio of the execution time of the synchronous GMRES algorithm to that of the
asynchronous multisplitting one ($t_\text{GMRES} / t_\text{Multisplitting}$) is defined as the \emph{relative gain}. So,
our objective when running the algorithm in SimGrid is to obtain a relative gain greater than 1.
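As a concrete reading of this metric, the relative gain of more than $2.5$ reported below (Table~\ref{tab.cluster.2x50}) simply means that the synchronous GMRES execution takes more than $2.5$ times longer than the asynchronous multisplitting one:
\[
  \text{relative gain} = \frac{t_\text{GMRES}}{t_\text{Multisplitting}} > 2.5
  \quad\Longleftrightarrow\quad
  t_\text{GMRES} > 2.5\, t_\text{Multisplitting}.
\]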
A priori, obtaining a relative gain greater than 1 would be difficult in a local
area network configuration where the synchronous GMRES method will take advantage of the
rapid exchange of information on such high-speed links. Thus, the methodology
adopted was to launch the application on a clustered network. In this
configuration, degrading the inter-cluster network performance will penalize the
synchronous GMRES algorithm and should favour the asynchronous multisplitting one.
After analyzing the outputs, we generally observe that, for the two-cluster configuration
totaling one hundred hosts (Table~\ref{tab.cluster.2x50}), some combinations of parameters
have given a relative gain of more than 2.5, showing the effectiveness of the
asynchronous multisplitting algorithm compared to GMRES with two distant clusters.
With these settings, Table~\ref{tab.cluster.2x50} shows
that after setting the bandwidth of the inter-cluster network to \np[Mbit/s]{5}, the latency
to about one hundredth of a millisecond, and the processor power
to one GFlops, an efficiency of about \np[\%]{40} is
obtained in asynchronous mode for a matrix size of 62 elements. It is noticed that the result remains
stable even when we vary the residual error precision from \np{E-5} to \np{E-9}.
\section{Conclusion}
The simulation of the execution of parallel asynchronous iterative algorithms on large scale clusters has been presented.
In this work, we have shown that SimGrid is an efficient simulation tool that allows us to
reach the following three objectives:
\begin{enumerate}
\item To have a flexible configurable execution platform that allows us to
  simulate asynchronous iterative algorithms for which the execution of all parts of
  the code is necessary. Using simulations before real executions is a nice
  solution to detect potential scalability problems.

\item To ensure that the algorithm converges within a reasonable time and
  number of iterations;
\item And finally, and more importantly, to find the correct combination of
  cluster and network parameters for which executing the algorithm in asynchronous mode
  outperforms the synchronous GMRES method.
\end{enumerate}
Our results have shown that, under certain conditions, the asynchronous mode is
up to \np[\%]{40} faster than the synchronous GMRES method,
which is not negligible for solving complex practical problems of ever
increasing size.
Several studies have already addressed the execution time performance of
this class of algorithms. The work presented in this paper has
demonstrated an original solution to optimize the use of a simulation
tool to efficiently run a parallel iterative algorithm in asynchronous
mode on a grid architecture.
In future works, we plan to extend our experiments to larger scale platforms by increasing the number of computing cores and the number of clusters.
We will also have to increase the size of the input problem, which will require the use of a more powerful simulation platform. Finally, we expect to compare our simulation results to real execution results on real architectures in order to experimentally validate our study.
\section*{Acknowledgment}
This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01).
% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page