fixed. So one solution consists in first using simulations in order to analyze
which parameters may or may not influence the behavior of an algorithm. In this
paper, we show that it is interesting to use SimGrid to simulate the behaviors
-of asynchronous iterative algorithms. For that, we compare the behaviour of a
+of asynchronous iterative algorithms. For that, we compare the behavior of a
synchronous GMRES algorithm with that of an asynchronous multisplitting one, using
simulations which let us easily choose some parameters. Both codes are real MPI
codes, and simulations allow us to see when the asynchronous multisplitting algorithm can be more
network. The parameters of the cluster architecture are the number of machines and
the computation power of a machine. Simulations show that the asynchronous
multisplitting algorithm can solve the 3D Poisson problem approximately twice
-as fast as GMRES with two distant clusters.
+as fast as GMRES with two distant clusters. In this way, we present an original solution to optimize the use of a simulation
+tool to run an asynchronous iterative parallel algorithm efficiently in a grid architecture
with real data. In fact, from one execution to another, the order of messages will
change and the number of iterations needed to reach convergence will also change.
According to all the parameters of the platform (number of nodes, power of
-nodes, inter and intra clusrters bandwith and latency, etc.) and of the
+nodes, inter and intra clusters bandwidth and latency, etc.) and of the
algorithm (number of splittings with the multisplitting algorithm), the
multisplitting code will obtain the solution more or less quickly. Of course,
the GMRES method also depends on the same parameters. As it is difficult to have
access to many clusters, grids or supercomputers with many different network
parameters, it is interesting to be able to simulate the behaviors of
-asynchronous iterative algoritms before being able to runs real experiments.
+asynchronous iterative algorithms before being able to run real experiments.
interface, SimGrid provides bindings for the C++, Java, Lua and Ruby programming
languages. SMPI is the interface that has been used for the work presented in
this paper. The SMPI interface implements about \np[\%]{80} of the MPI 2.0
-standard~\cite{bedaride:hal-00919507}, and supports applications written in C or
-Fortran, with little or no modifications.
+standard~\cite{bedaride+degomme+genaud+al.2013.toward}, and supports
+applications written in C or Fortran, with little or no modifications.
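Since SMPI runs unmodified MPI codes, a standard MPI program suffices as an
illustration. The sketch below is ours and not taken from the compared codes; it
is compiled with the \texttt{smpicc} wrapper and executed, in simulation, with
\texttt{smpirun} instead of \texttt{mpirun}:
\begin{verbatim}
/* hello.c -- plain MPI code, usable as-is with SMPI */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("process %d out of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}
\end{verbatim}
For instance, \texttt{smpirun -np 100 -hostfile hostfile.txt -platform
platform.xml ./hello} simulates 100 processes on the platform described in
\texttt{platform.xml}.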
-Within SimGrid, the execution of a distributed application is simulated on a
-single machine. The application code is really executed, but some operations
+Within SimGrid, the execution of a distributed application is simulated by a
+single process. The application code is really executed, but some operations
like the communications are intercepted, and their running time is computed
according to the characteristics of the simulated execution platform. The
description of this target platform is given as an input for the execution, by
means of an XML file. It describes the properties of the platform, such as
the computing nodes with their computing power, the interconnection links with
-their bandwidth and latency, and the routing strategy. The simulated running
-time of the application is computed according to these properties.
+their bandwidth and latency, and the routing strategy. The scheduling of the
+simulated processes, as well as the simulated running time of the application, is
+computed according to these properties.
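As an illustration, the sketch below shows what such a platform file could look
like for two homogeneous clusters of 50 one-GFlops nodes each. It is only a
hedged example written against the version~3 platform DTD of SimGrid: the
identifiers and values are ours, and the inter-cluster link and routes are
omitted for brevity.
\begin{verbatim}
<?xml version='1.0'?>
<!DOCTYPE platform SYSTEM "http://simgrid.gforge.inria.fr/simgrid.dtd">
<platform version="3">
  <AS id="AS0" routing="Full">
    <cluster id="cluster1" prefix="c1-" suffix=".me" radical="0-49"
             power="1Gf" bw="125MBps" lat="50us"/>
    <cluster id="cluster2" prefix="c2-" suffix=".me" radical="0-49"
             power="1Gf" bw="125MBps" lat="50us"/>
    <!-- inter-cluster link and ASroute omitted -->
  </AS>
</platform>
\end{verbatim}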
To compute the durations of the operations in the simulated world, and to take
into account resource sharing (e.g. bandwidth sharing between competing
communications), SimGrid uses a fluid model. This allows us to run relatively fast
simulations, while still keeping accurate
-results~\cite{bedaride:hal-00919507,tomacs13}. Moreover, depending on the
+results~\cite{bedaride+degomme+genaud+al.2013.toward,
+ velho+schnorr+casanova+al.2013.validity}. Moreover, depending on the
simulated application, SimGrid/SMPI allows us to skip long-lasting computations and
to only take their duration into account. When the real computations cannot be
skipped, but the results have no importance for the simulation results, there is
also the possibility to share memory areas between several simulated processes,
and thus to reduce the whole memory consumption.
These two techniques can help to run simulations at a very large scale.
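In SMPI, these two techniques are exposed as annotations in the application
source code. The sketch below is only indicative: the macros shown do exist in
SMPI, but their exact signatures and semantics should be checked against the
SimGrid version in use, and \texttt{compute\_kernel} is a placeholder of ours.
\begin{verbatim}
#include <smpi/smpi.h>

extern void compute_kernel(double *data, int n);  /* placeholder */

static void annotated_kernel(int n)
{
  /* Large array whose contents do not influence the simulation
     results: the area can be shared between simulated processes. */
  double *data = SMPI_SHARED_MALLOC(n * sizeof(double));

  /* Benchmark a few executions of the kernel, then skip the
     remaining ones, only injecting their estimated duration. */
  SMPI_SAMPLE_LOCAL(10, 0.01) {
    compute_kernel(data, n);
  }

  SMPI_SHARED_FREE(data);
}
\end{verbatim}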
+The validity of simulations with SimGrid has been asserted by several studies.
+See, for example, \cite{velho+schnorr+casanova+al.2013.validity} and articles
+referenced therein for the validity of the network models. Comparisons between
+real execution of MPI applications on the one hand, and their simulation with
+SMPI on the other hand, are presented in~\cite{guermouche+renard.2010.first,
+ clauss+stillwell+genaud+al.2011.single,
+ bedaride+degomme+genaud+al.2013.toward}. All these works conclude that
+SimGrid is able to simulate the real behavior of the applications quite
+accurately.
+
+
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Simulation of the multisplitting method}
\begin{table}[!t]
\centering
\caption{Relative gain of the multisplitting algorithm compared to GMRES for
- different configurations with 2 clusters, each one composed of 50 nodes.}
+ different configurations with 2 clusters, each one composed of 50 nodes. The latency is fixed to \np[ms]{20}.}
\label{tab.cluster.2x50}
\begin{mytable}{5}
bandwidth (Mbit/s)
& 5 & 5 & 5 & 5 & 5 \\
\hline
- latency (ms)
- & 20 & 20 & 20 & 20 & 20 \\
- \hline
+ % latency (ms)
+ % & 20 & 20 & 20 & 20 & 20 \\
+ %\hline
power (GFlops)
& 1 & 1 & 1 & 1.5 & 1.5 \\
\hline
size $(N)$
- & 62 & 62 & 62 & 100 & 100 \\
+ & $62^3$ & $62^3$ & $62^3$ & $100^3$ & $100^3$ \\
\hline
precision
& \np{E-5} & \np{E-8} & \np{E-9} & \np{E-11} & \np{E-11} \\
bandwidth (Mbit/s)
& 50 & 50 & 50 & 50 & 50 \\ % & 10 & 10 \\
\hline
- latency (ms)
- & 20 & 20 & 20 & 20 & 20 \\ % & 0.03 & 0.01 \\
- \hline
+ %latency (ms)
+ %& 20 & 20 & 20 & 20 & 20 \\ % & 0.03 & 0.01 \\
+ %\hline
power (GFlops)
& 1.5 & 1.5 & 1.5 & 1.5 & 1.5 \\ % & 1 & 1.5 \\
\hline
size $(N)$
- & 110 & 120 & 130 & 140 & 150 \\ % & 171 & 171 \\
+ & $110^3$ & $120^3$ & $130^3$ & $140^3$ & $150^3$ \\ % & 171 & 171 \\
\hline
precision
& \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} \\ % & \np{E-5} & \np{E-5} \\
\end{mytable}
\end{table}
-\RC{Du coup la latence est toujours la même, pourquoi la mettre dans la table?}
+%\RC{Du coup la latence est toujours la même, pourquoi la mettre dans la table?}
%Then we have changed the network configuration using three clusters containing
%respectively 33, 33 and 34 hosts, or again by on hundred hosts for all the
\begin{itemize}
\item HOSTFILE: text file containing the names of the processor units, here 100 hosts;
-\item PLATFORM: XML file description of the platform architecture whith the following characteristics: %two clusters (cluster1 and cluster2) with the following characteristics :
+\item PLATFORM: XML file description of the platform architecture with the
+ following characteristics:
+ % two clusters (cluster1 and cluster2) with the following characteristics:
\begin{itemize}
\item 2 clusters of 50 hosts each;
\item Processor unit power: \np[GFlops]{1} or \np[GFlops]{1.5};
asynchronous multisplitting compared to GMRES with two distant clusters.
With these settings, Table~\ref{tab.cluster.2x50} shows
-that after setting the bandwidth of the inter cluster network to \np[Mbit/s]{5} and a latency in order of one hundredth of millisecond and a processor power
-of one GFlops, an efficiency of about \np[\%]{40} is
+that after setting the bandwidth of the inter cluster network to \np[Mbit/s]{5}, the latency to \np[ms]{20}, and the processor power
+to \np[GFlops]{1}, an efficiency of about \np[\%]{40} is
obtained in asynchronous mode for a matrix size of $62^3$ elements. It can be noticed that the result remains
stable even when we vary the residual error precision from \np{E-5} to \np{E-9}. By
increasing the matrix size up to $100^3$ elements, it was necessary to increase the
%(synchronous and asynchronous) is achieved with an inter cluster of
%\np[Mbit/s]{10} and a latency of \np[ms]{E-1}. To challenge an efficiency greater than 1.2 with a matrix %size of 100 points, it was necessary to degrade the
%inter cluster network bandwidth from 5 to \np[Mbit/s]{2}.
-\AG{Conclusion, on prend une plateforme pourrie pour avoir un bon ratio sync/async ???
- Quelle est la perte de perfs en faisant ça ?}
+%\AG{Conclusion, on prend une plateforme pourrie pour avoir un bon ratio sync/async ???
+ %Quelle est la perte de perfs en faisant ça ?}
%A last attempt was made for a configuration of three clusters but more powerful
%with 200 nodes in total. The convergence with a relative gain around 1.1 was
%\CER{Définitivement, les paramètres réseaux variables ici se rapportent au réseau INTER cluster.}
\section{Conclusion}
The simulation of the execution of parallel asynchronous iterative algorithms on large-scale clusters has been presented.
-In this work, we show that SIMGRID is an efficient simulation tool that allows us to
+In this work, we show that SimGrid is an efficient simulation tool that allows us to
reach the following two objectives:
\begin{enumerate}
mode in a grid architecture.
In future work, we plan to extend our experiments to larger-scale platforms by increasing the number of computing cores and the number of clusters.
-We will also have to increase the size of the input problem which will require the use of a more powerful simulation platform. At last, we expect to compare our simulation results to real execution results on real architectures in order to experimentally validate our study. Finally, we also plan to study other problems with the multisplitting method and other asynchronous iterative methods.
+We will also have to increase the size of the input problem, which will require the use of a more powerful simulation platform. Moreover, we expect to compare our simulation results to real execution results on real architectures in order to better validate our study experimentally. Finally, we also plan to study other problems with the multisplitting method and other asynchronous iterative methods.
\section*{Acknowledgment}
% LocalWords: Ouest Vieille Talence cedex scalability experimentations HPC MPI
% LocalWords: Parallelization AIAC GMRES multi SMPI SISC SIAC SimDAG DAGs Lua
% LocalWords: Fortran GFlops priori Mbit de du fcomte multisplitting scalable
-% LocalWords: SimGrid Belfort parallelize Labex ANR LABX IEEEabrv hpccBib
+% LocalWords: SimGrid Belfort parallelize Labex ANR LABX IEEEabrv hpccBib Gbit
% LocalWords: intra durations nonsingular Waitall discretization discretized
-% LocalWords: InnerSolver Isend Irecv
+% LocalWords: InnerSolver Isend Irecv parallelization