1 \documentclass[conference]{IEEEtran}
3 \usepackage[T1]{fontenc}
4 \usepackage[utf8]{inputenc}
5 \usepackage{amsfonts,amssymb}
7 %\usepackage{algorithm}
8 \usepackage{algpseudocode}
11 \usepackage[american]{babel}
% Extension for intra-document links (tagged PDF)
% and correct display of URLs (\url{http://example.com} command)
14 %\usepackage{hyperref}
17 \DeclareUrlCommand\email{\urlstyle{same}}
19 \usepackage[autolanguage,np]{numprint}
\AtBeginDocument{%
  \renewcommand*\npunitcommand[1]{\text{#1}}
  \npthousandthpartsep{}}
25 \usepackage[textsize=footnotesize]{todonotes}
26 \newcommand{\AG}[2][inline]{%
27 \todo[color=green!50,#1]{\sffamily\textbf{AG:} #2}\xspace}
28 \newcommand{\DL}[2][inline]{%
29 \todo[color=yellow!50,#1]{\sffamily\textbf{DL:} #2}\xspace}
30 \newcommand{\LZK}[2][inline]{%
31 \todo[color=blue!10,#1]{\sffamily\textbf{LZK:} #2}\xspace}
32 \newcommand{\RC}[2][inline]{%
33 \todo[color=red!10,#1]{\sffamily\textbf{RC:} #2}\xspace}
34 \newcommand{\CER}[2][inline]{%
35 \todo[color=pink!10,#1]{\sffamily\textbf{CER:} #2}\xspace}
37 \algnewcommand\algorithmicinput{\textbf{Input:}}
38 \algnewcommand\Input{\item[\algorithmicinput]}
40 \algnewcommand\algorithmicoutput{\textbf{Output:}}
41 \algnewcommand\Output{\item[\algorithmicoutput]}
43 \newcommand{\MI}{\mathit{MaxIter}}
44 \newcommand{\Time}[1]{\mathit{Time}_\mathit{#1}}
48 \title{Simulation of Asynchronous Iterative Algorithms Using SimGrid}
52 Charles Emile Ramamonjisoa\IEEEauthorrefmark{1},
53 Lilia Ziane Khodja\IEEEauthorrefmark{2},
54 David Laiymani\IEEEauthorrefmark{1},
55 Arnaud Giersch\IEEEauthorrefmark{1} and
56 Raphaël Couturier\IEEEauthorrefmark{1}
58 \IEEEauthorblockA{\IEEEauthorrefmark{1}%
59 Femto-ST Institute -- DISC Department\\
60 Université de Franche-Comté,
61 IUT de Belfort-Montbéliard\\
62 19 avenue du Maréchal Juin, BP 527, 90016 Belfort cedex, France\\
63 Email: \email{{charles.ramamonjisoa,david.laiymani,arnaud.giersch,raphael.couturier}@univ-fcomte.fr}
65 \IEEEauthorblockA{\IEEEauthorrefmark{2}%
66 Inria Bordeaux Sud-Ouest\\
67 200 avenue de la Vieille Tour, 33405 Talence cedex, France \\
68 Email: \email{lilia.ziane@inria.fr}
Synchronous iterative algorithms are often less scalable than asynchronous
iterative ones. Performing large-scale experiments with different kinds of
network parameters is not easy because, on supercomputers, such parameters are
fixed. One solution therefore consists in using simulations first, in order to
analyze which parameters do or do not influence the behavior of an algorithm.
In this paper, we show that it is worthwhile to use SimGrid to simulate the
behavior of asynchronous iterative algorithms. To do so, we compare the
behavior of a synchronous GMRES algorithm with that of an asynchronous
multisplitting one in simulations where we vary some parameters. Both codes are
real MPI codes. The experiments allow us to see when the multisplitting
algorithm is more efficient than the GMRES one for solving a 3D Poisson problem.
89 % no keywords for IEEE conferences
90 % Keywords: Algorithm distributed iterative asynchronous simulation SimGrid
93 \section{Introduction}
Parallel computing and high performance computing (HPC) are becoming more and
more necessary for solving the various problems raised by researchers in many
scientific disciplines, but also by industry. Indeed, the increasing complexity
of the targeted applications, combined with a continuous increase of their
sizes, leads to writing distributed and parallel algorithms that require
significant hardware resources (grid computing, clusters, broadband networks,
etc.) but also a non-negligible CPU execution time. We consider in this paper a
class of highly efficient parallel algorithms called \emph{iterative
algorithms}, executed in a distributed environment. As their name suggests,
these algorithms solve a given problem by successive iterations ($X_{n+1} =
f(X_{n})$), starting from an initial value $X_{0}$, to find an approximate value
$X^*$ of the solution with a very low residual error. Several well-known methods
demonstrate the convergence of these algorithms~\cite{BT89,Bahi07}.
The parallelization of such algorithms generally involves dividing the problem
into several \emph{blocks} that are solved in parallel on multiple processing
units. The processing units exchange their intermediate results before each new
iteration starts, until the approximate solution is reached. These parallel
computations can be performed either in \emph{synchronous} mode, where a new
iteration begins only when all node communications are completed, or in
\emph{asynchronous} mode, where processors can continue independently with few
or no synchronization points. For instance, in the \textit{Asynchronous
Iterations~-- Asynchronous Communications (AIAC)} model~\cite{bcvc06:ij}, local
computations do not need to wait for required data. Processors can then perform
their iterations with the data present at that time. Even if the number of
iterations required before convergence is generally greater than in the
synchronous case, AIAC algorithms can significantly reduce overall execution
times by suppressing the idle times due to synchronizations, especially in a
grid computing context (see~\cite{Bahi07} for more details).
Parallel (synchronous or asynchronous) applications may have different
configuration and deployment requirements. Quantifying their resource
allocation policies and application scheduling algorithms in grid computing
environments under varying load, CPU power and network speeds is very costly,
labor intensive and time
consuming~\cite{Calheiros:2011:CTM:1951445.1951450}. The case of AIAC
algorithms is even more problematic, since they are very sensitive to the
execution environment. For instance, variations in the network bandwidth
(intra and inter-cluster), in the number and the power of nodes, in the number
of clusters\dots{} can lead to very different numbers of iterations and thus to
very different execution times. It therefore appears that the use of simulation
tools, to explore various platform scenarios and to run large numbers of
experiments quickly, can be very promising. In this respect, using a simulation
environment to execute parallel iterative algorithms is of interest for
reducing the high cost of access to computing resources: (1) during the
application development life cycle and for code debugging, and (2) in
production, to get results in a reasonable execution time with a simulated
infrastructure not accessible with physical resources. Indeed, launching
distributed asynchronous iterative algorithms on a large-scale simulated
environment makes it possible to look for the configurations giving the best
results, with the lowest residual error and the shortest execution time.
To our knowledge, there is no existing work on the large-scale simulation of a
real AIAC application. There are {\bf two contributions} in this paper. First,
we present a first approach to the simulation of AIAC algorithms using a
simulation tool (namely, the SimGrid toolkit~\cite{SimGrid}). Second, we confirm
the effectiveness of the asynchronous multisplitting algorithm by comparing its
performance with that of the synchronous GMRES algorithm. More precisely, we
have implemented a program for solving large linear systems of equations with
the GMRES (Generalized Minimal Residual) numerical method~\cite{ref1}. We show
that, with minor modifications of the initial MPI code, the SimGrid toolkit
allows us to perform a test campaign of a real AIAC application on different
computing architectures. The simulation results we obtained are in line with
the real results presented in ??\AG[]{ref?}. SimGrid allowed us to launch the
application from a modest computing infrastructure, by simulating different
distributed architectures composed of cluster nodes interconnected by networks
of variable speed. With selected parameters for the network platform (bandwidth
and latency of the inter-cluster network) and for the cluster architecture
(number of nodes, computing power), the experiments have demonstrated not only
the convergence of the algorithm within a reasonable time compared with the
performance of a physical environment, but also a saving of up to \np[\%]{40} in
execution time for the asynchronous mode compared with the synchronous one.
\AG{The previous sentence should be reworked (split in two?). As it stands, one
  may get the impression that the \np[\%]{40} gain is between a real execution
  and a simulated one!}
This article is structured as follows. After this introduction, the next section gives a brief
description of the asynchronous iteration model. Then, the SimGrid simulation framework is presented,
together with the settings used to create various distributed architectures. The multisplitting method,
which uses GMRES as its inner solver, \LZK{??? GMRES does not use the multisplitting method! Otherwise, shouldn't we explain the choice of a multisplitting method?} written with MPI primitives, and
its adaptation to SimGrid with SMPI (Simulated MPI) are detailed in the next section. Finally, the results
of the experiments carried out are presented before some concluding remarks and future work.
168 \section{Motivations and scientific context}
As explained in the introduction, parallel iterative methods are now widely used in many scientific
domains. They can be classified into three main classes, depending on how iterations and communications
are managed (for more details, readers can refer to~\cite{bcvc06:ij}). In the \textit{Synchronous
Iterations~-- Synchronous Communications (SISC)} model, data are exchanged at the end of each iteration.
All the processors must begin the same iteration at the same time, which generates significant idle times
on processors. The \textit{Synchronous Iterations~-- Asynchronous Communications (SIAC)} model is similar
to the previous one, except that the data required on another processor are sent asynchronously,
i.e.\ without stopping current computations. This technique allows communications to be partially
overlapped with computations but, unfortunately, the overlapping is only partial and significant idle
times remain. It is clear that, in a grid computing context, where the computational nodes are numerous,
heterogeneous and widely distributed, the idle times generated by synchronizations are very penalizing.
One way to overcome this problem is to use the \textit{Asynchronous Iterations~-- Asynchronous
Communications (AIAC)} model. Here, local computations do not need to wait for required data. Processors
can then perform their iterations with the data present at that time. Figure~\ref{fig:aiac} illustrates
this model, where the gray blocks represent the computation phases, the white spaces the idle
times and the arrows the communications.
184 \AG{There are no ``white spaces'' on the figure.}
185 With this algorithmic model, the number of iterations required before the
186 convergence is generally greater than for the two former classes. But, and as detailed in~\cite{bcvc06:ij}, AIAC
187 algorithms can significantly reduce overall execution times by suppressing idle times due to synchronizations especially
in a grid computing context.\LZK{Repetition with respect to the introduction}
\begin{figure}[!t]
  \centering
  \includegraphics[width=8cm]{AIAC.pdf}
  \caption{The Asynchronous Iterations~-- Asynchronous Communications model}
  \label{fig:aiac}
\end{figure}
198 It is very challenging to develop efficient applications for large scale,
199 heterogeneous and distributed platforms such as computing grids. Researchers and
200 engineers have to develop techniques for maximizing application performance of
201 these multi-cluster platforms, by redesigning the applications and/or by using
202 novel algorithms that can account for the composite and heterogeneous nature of
203 the platform. Unfortunately, the deployment of such applications on these very
204 large scale systems is very costly, labor intensive and time consuming. In this
205 context, it appears that the use of simulation tools to explore various platform
206 scenarios at will and to run enormous numbers of experiments quickly can be very
207 promising. Several works\dots{}
\AG{Several works\dots{} what?\\
  Isn't the following paragraph already in the introduction?}
In the context of AIAC algorithms, the use of simulation tools is even more
relevant. Indeed, this class of applications is very sensitive to the execution
environment. For instance, variations in the network bandwidth (intra
and inter-cluster), in the number and the power of nodes, in the number of
clusters\dots{} can lead to very different numbers of iterations and thus to very
different execution times.
SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid} is a simulation
framework to study the behavior of large-scale distributed systems. As its name
suggests, it emanates from the grid computing community, but it is nowadays used
to study grids, clouds, HPC and peer-to-peer systems. The early versions of
SimGrid date from 1999, but it is still actively developed and distributed as
open source software. Today, it is one of the major generic tools in the field
of simulation of large-scale distributed systems.
231 SimGrid provides several programming interfaces: MSG to simulate Concurrent
232 Sequential Processes, SimDAG to simulate DAGs of (parallel) tasks, and SMPI to
233 run real applications written in MPI~\cite{MPI}. Apart from the native C
234 interface, SimGrid provides bindings for the C++, Java, Lua and Ruby programming
languages. SMPI is the interface that has been used for the work presented in
236 this paper. The SMPI interface implements about \np[\%]{80} of the MPI 2.0
237 standard~\cite{bedaride:hal-00919507}, and supports applications written in C or
238 Fortran, with little or no modifications.
240 Within SimGrid, the execution of a distributed application is simulated on a
241 single machine. The application code is really executed, but some operations
242 like the communications are intercepted, and their running time is computed
243 according to the characteristics of the simulated execution platform. The
244 description of this target platform is given as an input for the execution, by
245 the mean of an XML file. It describes the properties of the platform, such as
246 the computing nodes with their computing power, the interconnection links with
247 their bandwidth and latency, and the routing strategy. The simulated running
248 time of the application is computed according to these properties.
To compute the durations of the operations in the simulated world, and to take
resource sharing into account (e.g.\ bandwidth sharing between competing
communications), SimGrid uses a fluid model. This makes it possible to run
relatively fast simulations while keeping accurate
results~\cite{bedaride:hal-00919507,tomacs13}. Moreover, depending on the
simulated application, SimGrid/SMPI allows long-lasting computations to be
skipped, with only their duration taken into account. When the real computations
cannot be skipped, but their results have no importance for the simulation
results, it is also possible to share dynamically allocated data structures
between several simulated processes, and thus to reduce the overall memory
consumption. These two techniques can help to run simulations at a very large scale.
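As an illustration of these two techniques, the sketch below uses the
\texttt{SMPI\_SHARED\_MALLOC}/\texttt{SMPI\_SHARED\_FREE} and
\texttt{SMPI\_SAMPLE\_LOCAL} macros provided by SimGrid's SMPI interface; it is
only a minimal example of how such a code could look (assuming these macros
behave as described in the SimGrid documentation), and, as explained in the
adaptation section below, our own experiments do not rely on these
optimizations since we want the simulation to produce the real numerical
results.
\begin{verbatim}
/* Illustrative only: compile with smpicc, run with smpirun.
 * SMPI_SHARED_MALLOC lets all simulated ranks share one
 * allocation; SMPI_SAMPLE_LOCAL benchmarks the enclosed
 * kernel a few times, then injects its measured duration
 * instead of re-executing it. */
#include <mpi.h>
#include <smpi/smpi.h>

void relax(double *u, long n);   /* computational kernel */

void iterate(long n, int iters)
{
  /* one allocation shared by every simulated process */
  double *u = SMPI_SHARED_MALLOC(n * sizeof(double));

  for (int k = 0; k < iters; k++) {
    SMPI_SAMPLE_LOCAL(10, 0.01) {  /* sample 10 executions */
      relax(u, n);                 /* then skip the rest   */
    }
  }
  SMPI_SHARED_FREE(u);
}
\end{verbatim}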
262 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
263 \section{Simulation of the multisplitting method}
% Describe the problem (algorithm) treated, as well as the process of adapting it to SimGrid.
265 Let $Ax=b$ be a large sparse system of $n$ linear equations in $\mathbb{R}$, where $A$ is a sparse square and nonsingular matrix, $x$ is the solution vector and $b$ is the right-hand side vector. We use a multisplitting method based on the block Jacobi splitting to solve this linear system on a large scale platform composed of $L$ clusters of processors~\cite{o1985multi}. In this case, we apply a row-by-row splitting without overlapping
\begin{equation*}
  \left(\begin{array}{ccc}
      A_{11} & \cdots & A_{1L} \\
      \vdots & \ddots & \vdots\\
      A_{L1} & \cdots & A_{LL}
    \end{array}\right)
  \times
  \left(\begin{array}{c}
      X_1 \\
      \vdots\\
      X_L
    \end{array}\right)
  =
  \left(\begin{array}{c}
      B_1 \\
      \vdots\\
      B_L
    \end{array}\right)
\end{equation*}
285 in such a way that successive rows of matrix $A$ and both vectors $x$ and $b$
286 are assigned to one cluster, where for all $\ell,m\in\{1,\ldots,L\}$, $A_{\ell
287 m}$ is a rectangular block of $A$ of size $n_\ell\times n_m$, $X_\ell$ and
288 $B_\ell$ are sub-vectors of $x$ and $b$, respectively, of size $n_\ell$ each,
289 and $\sum_{\ell} n_\ell=\sum_{m} n_m=n$.
The multisplitting method proceeds by iteration to solve the linear system in parallel on $L$ clusters of processors, in such a way that each sub-system
\begin{equation}
  \label{eq:4.1}
  \left\{
    \begin{array}{l}
      A_{\ell\ell}X_\ell = Y_\ell \text{, such that}\\
      Y_\ell = B_\ell - \displaystyle\sum_{\substack{m=1\\ m\neq \ell}}^{L}A_{\ell m}X_m
    \end{array}
  \right.
\end{equation}
301 is solved independently by a cluster and communications are required to update
302 the right-hand side sub-vector $Y_\ell$, such that the sub-vectors $X_m$
303 represent the data dependencies between the clusters. As each sub-system
304 (\ref{eq:4.1}) is solved in parallel by a cluster of processors, our
305 multisplitting method uses an iterative method as an inner solver which is
306 easier to parallelize and more scalable than a direct method. In this work, we
use the parallel GMRES algorithm~\cite{ref1}, which is one of the most widely
used iterative methods.
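Equivalently, each outer iteration can be seen as a block Jacobi update: every
cluster $\ell$ solves its diagonal block with the most recent data received from
the other clusters,
\begin{equation*}
  X_\ell^{k+1} = A_{\ell\ell}^{-1}\Bigl(B_\ell -
  \sum_{\substack{m=1\\ m\neq\ell}}^{L} A_{\ell m} X_m^{k}\Bigr),
  \qquad \ell = 1,\ldots,L,
\end{equation*}
where the inner GMRES solver only computes this update approximately and where,
in asynchronous mode, the $X_m^{k}$ are simply the latest versions of the shared
sub-vectors available locally.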
311 %%% IEEE instructions forbid to use an algorithm environment here, use figure
\begin{figure}[!t]
\begin{algorithmic}[1]
314 \Input $A_\ell$ (sparse sub-matrix), $B_\ell$ (right-hand side sub-vector)
315 \Output $X_\ell$ (solution sub-vector)\medskip
317 \State Load $A_\ell$, $B_\ell$
318 \State Set the initial guess $x^0$
319 \For {$k=0,1,2,\ldots$ until the global convergence}
320 \State Restart outer iteration with $x^0=x^k$
321 \State Inner iteration: \Call{InnerSolver}{$x^0$, $k+1$}
322 \State\label{algo:01:send} Send shared elements of $X_\ell^{k+1}$ to neighboring clusters
\State\label{algo:01:recv} Receive shared elements in $\{X_m^{k+1}\}_{m\neq \ell}$
\EndFor
\Statex
328 \Function {InnerSolver}{$x^0$, $k$}
329 \State Compute local right-hand side $Y_\ell$:
\Statex \hspace{\algorithmicindent} $Y_\ell = B_\ell - \sum\nolimits^L_{\substack{m=1\\ m\neq \ell}}A_{\ell m}X_m^0$
\State Solve the sub-system $A_{\ell\ell}X_\ell^k=Y_\ell$ with the parallel GMRES method
\State \Return $X_\ell^k$
\EndFunction
\end{algorithmic}
\caption{A multisplitting solver with GMRES method}
\label{algo:01}
\end{figure}
The algorithm in Figure~\ref{algo:01} shows the key points of the
multisplitting method used to solve a large sparse linear system. This algorithm
is based on an outer-inner iteration scheme, where the parallel synchronous
GMRES method is used to solve the inner iteration. It is executed in parallel by
each cluster of processors. For all $\ell,m\in\{1,\ldots,L\}$, the matrices and
vectors with the subscript $\ell$ represent the local data for cluster $\ell$,
while $\{A_{\ell m}\}_{m\neq \ell}$ are the off-diagonal blocks of the sparse
matrix $A$ and $\{X_m\}_{m\neq \ell}$ contain the elements of the solution
vector $x$ shared with neighboring clusters. At every outer iteration $k$,
asynchronous communications are performed between the processors of the local
cluster and those of distant clusters (lines~\ref{algo:01:send}
and~\ref{algo:01:recv} in Figure~\ref{algo:01}). The shared elements of the
solution vector $x$ are exchanged by message passing using MPI non-blocking
communication routines.
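For illustration, such an exchange can be sketched as follows; this is a
simplified example, not the actual code of our solver, and the names
(\texttt{exchange\_shared}, \texttt{neighbors}, \texttt{shared\_size}) are
hypothetical.
\begin{verbatim}
/* Hypothetical sketch of the exchange performed at the
 * "Send"/"Receive" lines of the algorithm: each cluster
 * posts non-blocking sends/receives of the shared elements
 * of X towards its neighboring clusters. */
#include <mpi.h>

void exchange_shared(double *x_send, double *x_recv,
                     int shared_size, int *neighbors,
                     int nb_neighbors, MPI_Request *reqs)
{
  int r = 0;
  for (int i = 0; i < nb_neighbors; i++) {
    MPI_Isend(x_send, shared_size, MPI_DOUBLE, neighbors[i],
              0, MPI_COMM_WORLD, &reqs[r++]);
    MPI_Irecv(&x_recv[i * shared_size], shared_size,
              MPI_DOUBLE, neighbors[i], 0, MPI_COMM_WORLD,
              &reqs[r++]);
  }
  /* In synchronous mode the iteration then blocks on
   * MPI_Waitall(r, reqs, MPI_STATUSES_IGNORE); in
   * asynchronous mode completion is only polled with
   * MPI_Test (see the adaptation section below). */
}
\end{verbatim}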
\begin{figure}[!t]
  \centering
  \includegraphics[width=60mm,keepaspectratio]{clustering}
  \caption{Example of three clusters of processors interconnected by a virtual unidirectional ring network.}
  \label{fig:4.1}
\end{figure}
362 The global convergence of the asynchronous multisplitting solver is detected
363 when the clusters of processors have all converged locally. We implemented the
364 global convergence detection process as follows. On each cluster a master
365 processor is designated (for example the processor with rank 1) and masters of
366 all clusters are interconnected by a virtual unidirectional ring network (see
367 Figure~\ref{fig:4.1}). During the resolution, a Boolean token circulates around
368 the virtual ring from a master processor to another until the global convergence
369 is achieved. So starting from the cluster with rank 1, each master processor $i$
370 sets the token to \textit{True} if the local convergence is achieved or to
371 \textit{False} otherwise, and sends it to master processor $i+1$. Finally, the
372 global convergence is detected when the master of cluster 1 receives from the
373 master of cluster $L$ a token set to \textit{True}. In this case, the master of
374 cluster 1 broadcasts a stop message to masters of other clusters. In this work,
375 the local convergence on each cluster $\ell$ is detected when the following
376 condition is satisfied
\begin{equation*}
  (k\geq \MI) \text{ or } (\|X_\ell^k - X_\ell^{k+1}\|_{\infty}\leq\epsilon)
\end{equation*}
380 where $\MI$ is the maximum number of outer iterations and $\epsilon$ is the
tolerance threshold of the error computed between two successive local solutions
$X_\ell^k$ and $X_\ell^{k+1}$.
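A minimal sketch of this token-based detection is given below. It is purely
illustrative (cluster identifiers are numbered from 0, the token is AND-ed with
the local convergence flag, and at least two clusters are assumed) and does not
claim to reproduce our exact implementation.
\begin{verbatim}
/* Token ring between the master processes of the L clusters,
 * executed once per round; masters_comm contains one master
 * per cluster. Returns 1 when global convergence is reached. */
#include <mpi.h>

int detect_global_convergence(int cluster_id, int L,
                              int locally_converged,
                              MPI_Comm masters_comm)
{
  int token = 0, stop = 0;

  if (cluster_id == 0) {                /* ring initiator */
    token = locally_converged;
    MPI_Send(&token, 1, MPI_INT, 1, 1, masters_comm);
    MPI_Recv(&token, 1, MPI_INT, L - 1, 1, masters_comm,
             MPI_STATUS_IGNORE);
    stop = token;                       /* all converged? */
  } else {
    MPI_Recv(&token, 1, MPI_INT, cluster_id - 1, 1,
             masters_comm, MPI_STATUS_IGNORE);
    token = token && locally_converged;
    MPI_Send(&token, 1, MPI_INT, (cluster_id + 1) % L, 1,
             masters_comm);
  }
  /* The initiator broadcasts the decision to all masters
   * (each master then informs its own cluster). */
  MPI_Bcast(&stop, 1, MPI_INT, 0, masters_comm);
  return stop;
}
\end{verbatim}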
384 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We did not encounter major blocking problems when adapting the multisplitting
algorithm described above to a simulation environment like SimGrid, apart from
some code debugging. Indeed, apart from reviewing the program sequence for the
asynchronous exchanges between processors within a cluster or between clusters,
the algorithm was executed successfully with SMPI and provided the same outputs
as those obtained with direct execution under MPI. In synchronous mode, the
execution of the program raised no particular issue, but in asynchronous mode
the sequence of MPI\_Isend, MPI\_Irecv and MPI\_Waitall instructions had to be
reviewed, and the MPI\_Test primitive added, in order to avoid a memory fault
caused by an infinite loop resulting from the non-convergence of the algorithm
(see the illustrative sketch below).
\CER{We actually wanted to show how simple it is to adapt the algorithm to SimGrid. The problems described in this paragraph mainly concern the asynchronous mode}\LZK{OK. I would have preferred a bit more detail on the adaptation of the asynchronous version}
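The following fragment is a hypothetical illustration of the change mentioned
above (the names are not those of the actual code): blocking on MPI\_Waitall is
replaced by polling each request with MPI\_Test, so that an iteration can
proceed with the data already received.
\begin{verbatim}
/* Illustration of the asynchronous-mode fix: each pending
 * request is polled with MPI_Test instead of waiting on
 * MPI_Waitall; if a request has not completed, the iteration
 * simply keeps using the previously received values.  The
 * outer loop (not shown) remains bounded by the maximum
 * number of iterations to avoid spinning forever when the
 * method does not converge. */
#include <mpi.h>

void poll_requests(MPI_Request *reqs, int nb_reqs)
{
  for (int i = 0; i < nb_reqs; i++) {
    int flag = 0;
    if (reqs[i] != MPI_REQUEST_NULL)
      MPI_Test(&reqs[i], &flag, MPI_STATUS_IGNORE);
  }
}
\end{verbatim}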
Note that using the SMPI mechanisms that optimize the memory footprint and the
CPU usage is not recommended when one wants the simulation to produce the real
numerical results. As mentioned above, after this adaptation the algorithm is
executed in the simulated environment as in real life, with the following minor
changes. First, all the global variables were moved to local variables in each
subroutine (a small illustrative sketch is given at the end of this section).
In fact, global variables generate side effects arising from concurrent accesses
to the shared memory used by the threads simulating each computing unit in the
SimGrid architecture. Second, the alignment of certain types of variables such
as ``long int'' also had to be reviewed.
\AG{About these alignment problems: say more if it is of interest, or remove it.}
Finally, some compilation errors on the MPI\_Waitall and MPI\_Finalize primitives were fixed with the
latest version of SimGrid. In short, after a very simple adaptation, the initial MPI program run in the
SMPI simulation environment gave the same results as those obtained in a real environment. After a few
modifications, we successfully executed the code with the parallel GMRES algorithm in synchronous mode
and with our multisplitting algorithm in asynchronous mode.
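As a concrete, purely illustrative example of the first change (the variable and
function names are hypothetical), a C global shared by all simulated ranks is
turned into a local variable of the routine that uses it:
\begin{verbatim}
/* Under SMPI, every MPI process is simulated by a thread of a
 * single OS process, so a C global is unintentionally shared
 * by all simulated ranks. */

/* before: one single copy shared by all simulated processes */
/* double residual;                                           */

void compute_residual(int rank)
{
  double residual = 0.0;   /* after: private to each rank */
  /* ... use residual locally ... */
  (void)rank;
  (void)residual;
}
\end{verbatim}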
402 \section{Experimental results}
When the \textit{real} application runs in the simulation environment and produces the expected results,
varying the input parameters and the program arguments allows us to compare the outputs of the different
executions. We have noticed from this study that the results depend on the following parameters:
408 \item At the network level, we found that the most critical values are the
409 bandwidth and the network latency.
\item The hosts' computing power (in GFlops) can also influence the results.
\item Finally, when submitting job batches for execution, the argument values
  passed to the program, such as the maximum number of iterations or the
  external precision, are critical. They not only ensure the convergence of the
  algorithm, but also serve the main objective of these simulation experiments:
  obtaining an execution time in asynchronous mode that is lower than in
  synchronous mode. The ratio between the execution time in synchronous mode and
  that in asynchronous mode, $t_\text{sync}/t_\text{async}$, is defined as the
  \emph{relative gain}. So, our objective when running the algorithm in SimGrid
  is to obtain a relative gain greater than 1.
\AG{$t_\text{async} / t_\text{sync} > 1$: the objective would then be that it takes longer
  (runs slower) in asynchronous than in synchronous mode?
  Isn't it rather the opposite?}
A priori, obtaining a relative gain greater than 1 would be difficult in a local
area network configuration, where the synchronous mode takes advantage of the
rapid exchange of information on such high-speed links. Thus, the methodology
adopted was to launch the application on a clustered network. In this
configuration, degrading the inter-cluster network performance penalizes the
synchronous mode and allows a relative gain greater than 1 to be obtained. This
setting simulates the case of distant clusters linked by a long-distance network
such as the Internet.
\AG{This part about the 3D Poisson problem
% so at least we know it is not a plaice or a sole (/me tired)
is not in the right place. It should appear earlier.}
In this paper, we solve the 3D Poisson problem whose mathematical model is
\begin{equation}
  \label{eq:02}
  \left\{
    \begin{array}{l}
      \nabla^2 u = f \text{~in~} \Omega \\
      u = 0 \text{~on~} \Gamma = \partial\Omega
    \end{array}
  \right.
\end{equation}
where $\nabla^2$ is the Laplace operator, $f$ and $u$ are real-valued functions, and $\Omega=[0,1]^3$. The spatial discretization with a finite difference scheme reduces problem~(\ref{eq:02}) to a system of sparse linear equations. The general iteration scheme of our multisplitting method in a 3D domain using a seven-point stencil can be written as
\begin{equation*}
  \begin{array}{ll}
    u^{k+1}(x,y,z)= & u^k(x,y,z) - \frac{1}{6}\times\\
    & (u^k(x-1,y,z) + u^k(x+1,y,z) + \\
    & u^k(x,y-1,z) + u^k(x,y+1,z) + \\
    & u^k(x,y,z-1) + u^k(x,y,z+1)),
  \end{array}
\end{equation*}
457 where the iteration matrix $A$ of size $N_x\times N_y\times N_z$ of the discretized linear system is sparse, symmetric and positive definite.
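For concreteness, one inner sweep of the seven-point scheme on a local
sub-domain is sketched below. This is not our solver's code: it shows the usual
textbook Jacobi form of the update, $u_{\text{new}} = (\text{sum of the six
neighbors} - h^2 f)/6$, under the assumptions of a uniform mesh step $h$ and
Dirichlet boundary values already stored in \texttt{u}.
\begin{verbatim}
/* One Jacobi sweep of the 7-point stencil on the interior of
 * a local nx x ny x nz sub-domain (boundaries untouched). */
#define IDX(i,j,k,ny,nz) (((i)*(ny)+(j))*(nz)+(k))

void jacobi_sweep(const double *u, double *u_new,
                  const double *f, double h,
                  int nx, int ny, int nz)
{
  for (int i = 1; i < nx - 1; i++)
    for (int j = 1; j < ny - 1; j++)
      for (int k = 1; k < nz - 1; k++)
        u_new[IDX(i,j,k,ny,nz)] =
          (u[IDX(i-1,j,k,ny,nz)] + u[IDX(i+1,j,k,ny,nz)] +
           u[IDX(i,j-1,k,ny,nz)] + u[IDX(i,j+1,k,ny,nz)] +
           u[IDX(i,j,k-1,ny,nz)] + u[IDX(i,j,k+1,ny,nz)] -
           h * h * f[IDX(i,j,k,ny,nz)]) / 6.0;
}
\end{verbatim}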
459 The parallel solving of the 3D Poisson problem with our multisplitting method requires a data partitioning of the problem between clusters and between processors within a cluster. We have chosen the 3D partitioning instead of the row-by-row partitioning in order to reduce the data exchanges at sub-domain boundaries. Figure~\ref{fig:4.2} shows an example of the data partitioning of the 3D Poisson problem between two clusters of processors, where each sub-problem is assigned to a processor. In this context, a processor has at most six neighbors within a cluster or in distant clusters with which it shares data at sub-domain boundaries.
\begin{figure}[!t]
  \centering
  \includegraphics[width=80mm,keepaspectratio]{partition}
  \caption{Example of the 3D data partitioning between two clusters of processors.}
  \label{fig:4.2}
\end{figure}
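One possible way to obtain such a 3D partitioning with standard MPI is sketched
below; this is only an illustration of the idea (a Cartesian communicator giving
each processor its at most six neighbors, which may lie in the same or in a
distant cluster), not necessarily the decomposition used in our code.
\begin{verbatim}
/* Build a 3D Cartesian decomposition of np processes and
 * retrieve the six neighbor ranks; missing neighbors (at the
 * domain boundary) are returned as MPI_PROC_NULL. */
#include <mpi.h>

void neighbours_3d(int np, int *left, int *right, int *down,
                   int *up, int *back, int *front)
{
  MPI_Comm cart;
  int dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};

  MPI_Dims_create(np, 3, dims);            /* split np in 3D */
  MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);
  MPI_Cart_shift(cart, 0, 1, left, right); /* x neighbors */
  MPI_Cart_shift(cart, 1, 1, down, up);    /* y neighbors */
  MPI_Cart_shift(cart, 2, 1, back, front); /* z neighbors */
}
\end{verbatim}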
469 As a first step, the algorithm was run on a network consisting of two clusters
470 containing 50 hosts each, totaling 100 hosts. Various combinations of the above
471 factors have provided the results shown in Table~\ref{tab.cluster.2x50} with a
472 matrix size ranging from $N_x = N_y = N_z = \text{62}$ to 171 elements or from
473 $\text{62}^\text{3} = \text{\np{238328}}$ to $\text{171}^\text{3} =
474 \text{\np{5000211}}$ entries.
\AG{Explain how to read the tables.}
477 % use the same column width for the following three tables
478 \newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
479 \newenvironment{mytable}[1]{% #1: number of columns for data
480 \renewcommand{\arraystretch}{1.3}%
481 \begin{tabular}{|>{\bfseries}r%
482 |*{#1}{>{\centering\arraybackslash}p{\mytablew}|}}}{%
487 \caption{2 clusters, each with 50 nodes}
488 \label{tab.cluster.2x50}
493 & 5 & 5 & 5 & 5 & 5 & 50 \\
496 & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 \\
499 & 1 & 1 & 1 & 1.5 & 1.5 & 1.5 \\
502 & 62 & 62 & 62 & 100 & 100 & 110 \\
505 & \np{E-5} & \np{E-8} & \np{E-9} & \np{E-11} & \np{E-11} & \np{E-11} \\
509 & 2.52 & 2.55 & 2.52 & 2.57 & 2.54 & 2.53 \\
518 & 50 & 50 & 50 & 50 & 10 & 10 \\
521 & 0.02 & 0.02 & 0.02 & 0.02 & 0.03 & 0.01 \\
524 & 1.5 & 1.5 & 1.5 & 1.5 & 1 & 1.5 \\
527 & 120 & 130 & 140 & 150 & 171 & 171 \\
530 & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-5} & \np{E-5} \\
534 & 2.51 & 2.58 & 2.55 & 2.54 & 1.59 & 1.29 \\
Then we changed the network configuration, using three clusters containing
respectively 33, 33 and 34 hosts, i.e.\ one hundred hosts in total again. In the
same way as above, a judicious choice of the key parameters allowed us to get
the results shown in Table~\ref{tab.cluster.3x33}, which shows relative gains
greater than 1 for matrix sizes from 62 to 100 elements.
547 \caption{3 clusters, each with 33 nodes}
548 \label{tab.cluster.3x33}
553 & 10 & 5 & 4 & 3 & 2 & 6 \\
556 & 0.01 & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 \\
559 & 1 & 1 & 1 & 1 & 1 & 1 \\
562 & 62 & 100 & 100 & 100 & 100 & 171 \\
565 & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} \\
569 & 1.003 & 1.01 & 1.08 & 1.19 & 1.28 & 1.01 \\
In a final step, we attempted to scale up the three-cluster configuration to two
hundred hosts in total; the results are recorded in Table~\ref{tab.cluster.3x67}.
580 \caption{3 clusters, each with 66 nodes}
581 \label{tab.cluster.3x67}
593 Prec/Eprec & \np{E-5} \\
596 Relative gain & 1.11 \\
601 Note that the program was run with the following parameters:
603 \paragraph*{SMPI parameters}
~\\{}\AG{Give a bit more detail (the platform in particular).}
607 \item HOSTFILE: Hosts file description.
\item PLATFORM: file describing the platform architecture: clusters (CPU
  power, \dots{}), intra-cluster network description, inter-cluster network
  (bandwidth, latency, \dots{}).
614 \paragraph*{Arguments of the program}
617 \item Description of the cluster architecture;
618 \item Maximum number of internal and external iterations;
619 \item Internal and external precisions;
620 \item Matrix size $N_x$, $N_y$ and $N_z$;
621 \item Matrix diagonal value: \np{6.0};
622 \item Matrix off-diagonal value: \np{-1.0};
623 \item Execution Mode: synchronous or asynchronous.
626 \paragraph*{Interpretations and comments}
After analyzing the outputs, we observe that, in general, for the configurations
with two or three clusters totaling one hundred hosts
(Tables~\ref{tab.cluster.2x50} and~\ref{tab.cluster.3x33}), some combinations of
the parameters give a relative gain of more than 2.5, showing the effectiveness
of the asynchronous mode compared with the synchronous one.
In the case of the two-cluster configuration, Table~\ref{tab.cluster.2x50} shows
that, with a degraded inter-cluster network set to a bandwidth of
\np[Mbit/s]{5}, a latency of about a hundredth of a millisecond and a computing
power of one GFlops, an efficiency of about \np[\%]{40} in asynchronous mode is
obtained for a matrix size of 62 elements. Notice that the result remains stable
even if the external precision is varied from \np{E-5} to \np{E-9}. When the
matrix size is increased up to 100 elements, it was necessary to increase the
CPU power by \np[\%]{50}, to \np[GFlops]{1.5}, for the algorithm to converge
with the same order of asynchronous efficiency. Maintaining such a computing
power but increasing the inter-cluster network throughput up to
\np[Mbit/s]{50}, a relative gain of about 2.5 is obtained with a high external
precision of \np{E-11} for matrix sizes from 110 to 150 elements.
For the three-cluster architecture including a total of 100 hosts,
Table~\ref{tab.cluster.3x33} shows that it was difficult to find a combination
giving a relative gain of the asynchronous mode higher than 1.2. Indeed, for a
matrix size of 62 elements, near equality between the performance of the two
modes (synchronous and asynchronous) is achieved with an inter-cluster bandwidth
of \np[Mbit/s]{10} and a latency of \np[ms]{0.01}. To reach a relative gain
greater than 1.2 with a matrix size of 100 points, it was necessary to degrade the
inter-cluster network bandwidth from 5 to \np[Mbit/s]{2}.
\AG{Conclusion: we take a lousy platform in order to get a good sync/async ratio???
  What is the performance loss in doing so?}
A last attempt was made with a more powerful configuration of three clusters
totaling 200 nodes. Convergence with a relative gain of around 1.1 was obtained
with a bandwidth of \np[Mbit/s]{1}, as shown in Table~\ref{tab.cluster.3x67}.
\RC{Can we explain why there is such a difference between the results with 2 and 3 clusters? With 3 clusters they are not very good... I wonder whether we should remove them...}
\RC{Actually I think I have the answer to my own remark... With the 2 clusters we see that the gain is larger when a higher precision is chosen. So, several options: quickly launch a long test to confirm this, or remove some tests... or we change nothing :-)}
\LZK{My question is: are the bandwidth and latency the inter-cluster ones, or both inter- and intra-cluster??}
The experimental results of executing a parallel iterative algorithm in
asynchronous mode in an environment simulating a large scale of virtual
computers organized as interconnected clusters have been presented.
Our work has demonstrated that using such a simulation tool allows us to
reach the following three objectives:
\item to have a flexible and configurable execution platform, resolving the
  hard problem of accessing very limited but highly solicited physical
  resources;
\item to ensure the convergence of the algorithm within a reasonable time; and
\item finally, and most importantly, to find the right combination of cluster
  and network specifications that saves time when executing the algorithm in
  asynchronous mode.
Our results have shown that, under certain conditions, the asynchronous mode is
up to \np[\%]{40} faster than the synchronous mode, which is far from negligible
when solving complex practical problems of ever-increasing size.
Several studies have already addressed the execution-time performance of this
class of algorithms. The work presented in this paper has demonstrated an
original solution to optimize the use of a simulation tool in order to
efficiently run a parallel iterative algorithm in asynchronous mode on a grid
architecture.
695 \LZK{Perspectives???}
697 \section*{Acknowledgment}
699 This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01).
700 \todo[inline]{The authors would like to thank\dots{}}
702 % trigger a \newpage just before the given reference
703 % number - used to balance the columns on the last page
704 % adjust value as needed - may need to be readjusted if
705 % the document is modified later
706 \bibliographystyle{IEEEtran}
707 \bibliography{IEEEabrv,hpccBib}
717 %%% ispell-local-dictionary: "american"
720 % LocalWords: Ramamonjisoa Laiymani Arnaud Giersch Ziane Khodja Raphaël Femto
721 % LocalWords: Université Franche Comté IUT Montbéliard Maréchal Juin Inria Sud
722 % LocalWords: Ouest Vieille Talence cedex scalability experimentations HPC MPI
723 % LocalWords: Parallelization AIAC GMRES multi SMPI SISC SIAC SimDAG DAGs Lua
724 % LocalWords: Fortran GFlops priori Mbit de du fcomte multisplitting scalable
725 % LocalWords: SimGrid Belfort parallelize Labex ANR LABX IEEEabrv hpccBib
726 % LocalWords: intra durations nonsingular Waitall discretization discretized
727 % LocalWords: InnerSolver Isend Irecv