X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/hpcc2014.git/blobdiff_plain/b59046bc07a13cbcd215b2ac5f41664cafccd41d..a6eb13d9e879fac3b6ad0e97391bfea86b6c51a8:/hpcc.tex

diff --git a/hpcc.tex b/hpcc.tex
index 6e262b1..9962b25 100644
--- a/hpcc.tex
+++ b/hpcc.tex
@@ -79,7 +79,7 @@ network parameters is not easy because with supercomputers such parameters are
 fixed. So one solution consists in using simulations first in order to analyze
 what parameters could influence or not the behaviors of an algorithm. In this
 paper, we show that it is interesting to use SimGrid to simulate the behaviors
-of asynchronous iterative algorithms. For that, we compare the behaviour of a
+of asynchronous iterative algorithms. For that, we compare the behavior of a
 synchronous GMRES algorithm with an asynchronous multisplitting one with
 simulations which let us easily choose some parameters. Both codes are real MPI
 codes and simulations allow us to see when the asynchronous multisplitting algorithm can be more
@@ -232,13 +232,13 @@ asynchronous iterative algorithms comes from the fact it is necessary to run the
 with real data. In fact, from an execution to another the order of messages will
 change and the number of iterations to reach the convergence will also change.
 According to all the parameters of the platform (number of nodes, power of
-nodes, inter and intra clusrters bandwith and latency, etc.) and of the
+nodes, inter and intra clusters bandwidth and latency, etc.) and of the
 algorithm (number of splittings with the multisplitting algorithm), the
 multisplitting code will obtain the solution more or less quickly. Of course,
 the GMRES method also depends of the same parameters. As it is difficult to have
 access to many clusters, grids or supercomputers with many different network
 parameters, it is interesting to be able to simulate the behaviors of
-asynchronous iterative algoritms before being able to runs real experiments.
+asynchronous iterative algorithms before being able to run real experiments.
@@ -261,24 +261,26 @@ run real applications written in MPI~\cite{MPI}. Apart from the native C
 interface, SimGrid provides bindings for the C++, Java, Lua and Ruby
 programming languages. SMPI is the interface that has been used for the work
 exposed in this paper. The SMPI interface implements about \np[\%]{80} of the MPI 2.0
-standard~\cite{bedaride:hal-00919507}, and supports applications written in C or
-Fortran, with little or no modifications.
+standard~\cite{bedaride+degomme+genaud+al.2013.toward}, and supports
+applications written in C or Fortran, with little or no modifications.

-Within SimGrid, the execution of a distributed application is simulated on a
-single machine. The application code is really executed, but some operations
+Within SimGrid, the execution of a distributed application is simulated by a
+single process. The application code is really executed, but some operations
 like the communications are intercepted, and their running time is computed
 according to the characteristics of the simulated execution platform. The
 description of this target platform is given as an input for the execution, by
 the mean of an XML file. It describes the properties of the platform, such as
 the computing nodes with their computing power, the interconnection links with
-their bandwidth and latency, and the routing strategy. The simulated running
-time of the application is computed according to these properties.
+their bandwidth and latency, and the routing strategy. The scheduling of
+the simulated processes, as well as the simulated running time of the
+application, is computed according to these properties.

 To compute the durations of the operations in the simulated world, and to take
 into account resource sharing (e.g. bandwidth sharing between competing
 communications), SimGrid uses a fluid model. This allows to run relatively fast
 simulations, while still keeping accurate
-results~\cite{bedaride:hal-00919507,tomacs13}. Moreover, depending on the
+results~\cite{bedaride+degomme+genaud+al.2013.toward,
+ velho+schnorr+casanova+al.2013.validity}. Moreover, depending on the
 simulated application, SimGrid/SMPI allows to skip long lasting computations
 and to only take their duration into account. When the real computations cannot
 be skipped, but the results have no importance for the simulation results, there is
@@ -286,6 +288,17 @@ also the possibility to share dynamically allocated data structures between
 several simulated processes, and thus to reduce the whole memory consumption.
 These two techniques can help to run simulations at a very large scale.

+The validity of simulations with SimGrid has been asserted by several studies.
+See, for example, \cite{velho+schnorr+casanova+al.2013.validity} and articles
+referenced therein for the validity of the network models. Comparisons between
+real execution of MPI applications on the one hand, and their simulation with
+SMPI on the other hand, are presented in~\cite{guermouche+renard.2010.first,
+ clauss+stillwell+genaud+al.2011.single,
+ bedaride+degomme+genaud+al.2013.toward}. All these works conclude that
+SimGrid is able to simulate pretty accurately the real behavior of the
+applications.
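+
+As an illustration of such a platform file, the sketch below describes two
+clusters of 50 hosts each, similar to the setup used in the experiments of
+this paper. The tag and attribute names follow the SimGrid~3 platform
+format but change between SimGrid versions; the inter-cluster link is
+omitted for brevity, and all numeric values are placeholders rather than
+the exact values used in our experiments:
+
+\begin{verbatim}
+<?xml version='1.0'?>
+<!DOCTYPE platform SYSTEM
+ "http://simgrid.gforge.inria.fr/simgrid.dtd">
+<platform version="3">
+  <AS id="AS0" routing="Full">
+    <!-- 50 hosts c1-0 ... c1-49 of 1 GFlops each;
+         placeholder bandwidth and latency -->
+    <cluster id="cluster1" prefix="c1-" suffix=""
+             radical="0-49" power="1E9"
+             bw="1.25E8" lat="5E-5"/>
+    <cluster id="cluster2" prefix="c2-" suffix=""
+             radical="0-49" power="1E9"
+             bw="1.25E8" lat="5E-5"/>
+    <!-- inter-cluster backbone omitted -->
+  </AS>
+</platform>
+\end{verbatim}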
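+
+The two techniques mentioned above are exposed to the application as
+source-level annotations. The sketch below shows roughly how a computation
+kernel could be annotated: the macro names come from SMPI's public
+interface, but their exact signatures vary between SimGrid versions, and
+the kernel itself is a made-up example:
+
+\begin{verbatim}
+#include <stdlib.h>
+#include <mpi.h>
+#include <smpi/smpi.h> /* SMPI annotations */
+
+/* made-up computation kernel */
+static void compute_kernel(double *d, size_t n)
+{
+  for (size_t i = 0; i < n; i++)
+    d[i] = d[i] * 0.5 + 1.0;
+}
+
+int main(int argc, char *argv[])
+{
+  MPI_Init(&argc, &argv);
+  size_t n = 1000000;
+  /* allocated once, shared by all simulated
+     ranks to reduce memory consumption */
+  double *data =
+    SMPI_SHARED_MALLOC(n * sizeof(double));
+  /* benchmark the kernel a few times, then
+     skip it and only inject its duration */
+  SMPI_SAMPLE_LOCAL(10, 0.05) {
+    compute_kernel(data, n);
+  }
+  SMPI_SHARED_FREE(data);
+  MPI_Finalize();
+  return 0;
+}
+\end{verbatim}
+
+The sampling macro also exists in a global variant, for the case where all
+processes run the same kernel and can share the benchmarked durations.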
+
+
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Simulation of the multisplitting method}
@@ -632,7 +645,9 @@ Note that the program was run with the following parameters:
 \begin{itemize}
 \item HOSTFILE: Text file containing the list of the processors units name. Here 100 hosts;
-\item PLATFORM: XML file description of the platform architecture whith the following characteristics: %two clusters (cluster1 and cluster2) with the following characteristics :
+\item PLATFORM: XML file description of the platform architecture with the
+  following characteristics:
+  % two clusters (cluster1 and cluster2) with the following characteristics:
 \begin{itemize}
 \item 2 clusters of 50 hosts each;
 \item Processor unit power: \np[GFlops]{1} or \np[GFlops]{1.5};
@@ -745,6 +760,6 @@ This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-
 % LocalWords: Ouest Vieille Talence cedex scalability experimentations HPC MPI
 % LocalWords: Parallelization AIAC GMRES multi SMPI SISC SIAC SimDAG DAGs Lua
 % LocalWords: Fortran GFlops priori Mbit de du fcomte multisplitting scalable
-% LocalWords: SimGrid Belfort parallelize Labex ANR LABX IEEEabrv hpccBib
+% LocalWords: SimGrid Belfort parallelize Labex ANR LABX IEEEabrv hpccBib Gbit
 % LocalWords: intra durations nonsingular Waitall discretization discretized
-% LocalWords: InnerSolver Isend Irecv
+% LocalWords: InnerSolver Isend Irecv parallelization
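
For reference, the HOSTFILE and PLATFORM parameters listed in the last hunk
are the two files handed to SMPI's launcher. A run matching the described
setup (100 processes on the 100 simulated hosts) would be started roughly as
follows; the executable name is a placeholder:

    smpirun -np 100 -hostfile HOSTFILE -platform PLATFORM ./program

Here HOSTFILE is a plain list of simulated host names, one per line, and
PLATFORM is an XML platform description such as the one sketched earlier.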