1 \documentclass[conference]{IEEEtran}
3 \usepackage[T1]{fontenc}
4 \usepackage[utf8]{inputenc}
5 \usepackage{amsfonts,amssymb}
7 %\usepackage{algorithm}
8 \usepackage{algpseudocode}
11 \usepackage[american]{babel}
% Package for intra-document links (tagged PDF)
% and for proper display of URLs (\url{http://example.com} command)
14 %\usepackage{hyperref}
17 \DeclareUrlCommand\email{\urlstyle{same}}
19 \usepackage[autolanguage,np]{numprint}
21 \renewcommand*\npunitcommand[1]{\text{#1}}
\npthousandthpartsep{}
25 \usepackage[textsize=footnotesize]{todonotes}
26 \newcommand{\AG}[2][inline]{%
27 \todo[color=green!50,#1]{\sffamily\textbf{AG:} #2}\xspace}
28 \newcommand{\DL}[2][inline]{%
29 \todo[color=yellow!50,#1]{\sffamily\textbf{DL:} #2}\xspace}
30 \newcommand{\LZK}[2][inline]{%
31 \todo[color=blue!10,#1]{\sffamily\textbf{LZK:} #2}\xspace}
32 \newcommand{\RC}[2][inline]{%
33 \todo[color=red!10,#1]{\sffamily\textbf{RC:} #2}\xspace}
35 \algnewcommand\algorithmicinput{\textbf{Input:}}
36 \algnewcommand\Input{\item[\algorithmicinput]}
38 \algnewcommand\algorithmicoutput{\textbf{Output:}}
39 \algnewcommand\Output{\item[\algorithmicoutput]}
41 \newcommand{\MI}{\mathit{MaxIter}}
46 \title{Simulation of Asynchronous Iterative Numerical Algorithms Using SimGrid}
50 Charles Emile Ramamonjisoa\IEEEauthorrefmark{1},
51 David Laiymani\IEEEauthorrefmark{1},
52 Arnaud Giersch\IEEEauthorrefmark{1},
53 Lilia Ziane Khodja\IEEEauthorrefmark{2} and
54 Raphaël Couturier\IEEEauthorrefmark{1}
56 \IEEEauthorblockA{\IEEEauthorrefmark{1}%
57 Femto-ST Institute -- DISC Department\\
58 Université de Franche-Comté,
59 IUT de Belfort-Montbéliard\\
60 19 avenue du Maréchal Juin, BP 527, 90016 Belfort cedex, France\\
61 Email: \email{{charles.ramamonjisoa,david.laiymani,arnaud.giersch,raphael.couturier}@univ-fcomte.fr}
63 \IEEEauthorblockA{\IEEEauthorrefmark{2}%
64 Inria Bordeaux Sud-Ouest\\
65 200 avenue de la Vieille Tour, 33405 Talence cedex, France \\
66 Email: \email{lilia.ziane@inria.fr}
\RC{Author order not yet final.}
In recent years, the scalability of increasingly complex algorithms
deployed at large scale in distributed environments has constantly been
hampered by the limited capacity of physical computing resources. One
solution is to run such programs in a virtual environment that simulates
a real architecture of interconnected computers. The results are
convincing, and useful solutions can be obtained with far fewer resources
than on a real platform. However, challenges remain for the convergence
and the efficiency of the class of algorithms that concerns us here,
namely parallel numerical iterative algorithms executed in asynchronous
mode, especially at large scale. Indeed, such algorithms require a
balance, and a compromise, between computation and communication times
during execution. Two important factors determine the success of an
experiment: the convergence of the iterative algorithm at large scale,
and the reduction of execution time in asynchronous mode. In this work,
a simulation environment like SimGrid provides accurate results that are
difficult or even impossible to obtain on a physical platform, by
exploiting the flexibility of the simulator in the design of the
computing clusters and of the network structure. Our experiments showed
a saving of up to \np[\%]{40} in execution time for the algorithm in
asynchronous mode compared to the synchronous one, with a residual
precision up to \np{E-11}. These successful results open perspectives
for experiments running the algorithm on simulated environments of
growing scale and with larger problem sizes.
98 % no keywords for IEEE conferences
99 % Keywords: Algorithm distributed iterative asynchronous simulation SimGrid
102 \section{Introduction}
Parallel computing and high performance computing (HPC) have become
increasingly necessary for solving problems raised by researchers in
various scientific disciplines, as well as by industry. Indeed, the
increasing complexity of these applications, combined with the
continuous growth of their sizes, leads to the design of distributed
and parallel algorithms requiring significant hardware resources (grid
computing, clusters, broadband networks, etc.) as well as non-negligible
CPU execution times. In this paper we consider a class of highly
efficient parallel algorithms, called iterative algorithms, executed in
a distributed environment. As their name suggests, these algorithms
solve a given problem, possibly NP-complete, by successive iterations
($X_{n+1} = f(X_{n})$) from an initial value $X_{0}$, in order to find
an approximate value $X^*$ of the solution with a very low residual
error. Several well-known methods demonstrate the convergence of these
algorithms. Generally, to reduce the complexity and the execution time,
the problem is divided into several \emph{pieces} that are solved in
parallel on multiple processing units. The processing units exchange
their intermediate results before a new iteration starts, until the
approximate solution is reached. These distributed parallel computations
can be performed either in \emph{synchronous} mode, where a new
iteration begins only when all node communications are completed, or in
\emph{asynchronous} mode, where processors can continue independently
with no or few synchronization points. Despite the effectiveness of the
iterative approach, a major drawback of the method is its requirement of
huge resources in terms of computing capacity, storage and high-speed
communication networks. Indeed, limited physical resources are blocking
factors for the large-scale deployment of parallel algorithms.
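To fix ideas, the classical Jacobi method for a linear system $Ax=b$ is
a simple instance of this fixed-point scheme: writing $A = D + R$, with
$D$ the diagonal of $A$, each iteration computes
\[
X_{n+1} = f(X_{n}) = D^{-1}\left(b - R\,X_{n}\right),
\]
which converges, for example, when $A$ is strictly diagonally dominant.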
In recent years, the use of simulation environments to execute parallel
iterative algorithms has gained interest as a way to reduce the high
cost of access to computing resources: (1) during the application
development life cycle and for code debugging, and (2) in production,
to obtain results in a reasonable execution time on a simulated
infrastructure not accessible with physical resources. Indeed, running
distributed iterative asynchronous algorithms to solve a given problem
on a large-scale simulated environment raises the challenge of finding
the optimal configurations giving the best results, with the lowest
residual error and the best execution time. To our knowledge, no
large-scale simulation of this class of algorithms producing real
results has been undertaken to date. Within the scope of this work, we
implemented a program for solving large non-symmetric linear systems of
equations with the numerical method GMRES (Generalized Minimal
Residual) in the SimGrid simulation environment. The simulated platform
allowed us to launch the application from a modest computing
infrastructure, by simulating different distributed architectures
composed of cluster nodes interconnected by networks of variable speed.
In addition, it allowed us to show the effectiveness of the asynchronous
mode by comparing its performance with that of the synchronous mode.
With selected parameters for the network platform (bandwidth and latency
of the inter-cluster network) and for the cluster architecture (number
of nodes, computing power) in the simulated environment, the
experimental results demonstrated not only the convergence of the
algorithm within a reasonable time compared with the performance of a
physical environment, but also a time saving of up to \np[\%]{40} in
asynchronous mode.
This article is structured as follows. After this introduction, the
next section gives a brief description of the asynchronous iteration
model. Then, the SimGrid simulation framework is presented, together
with the settings used to create various distributed architectures. The
algorithm of the multisplitting method with GMRES, written with MPI
primitives, and its adaptation to SimGrid with SMPI (Simulated MPI) are
described in the following section. Finally, the experimental results
are presented before the conclusion, which also outlines our future
work.
170 \section{The asynchronous iteration model}
As explained in the introduction, parallel iterative methods are now
widely used in many scientific domains. They can be classified into
three main classes, depending on how iterations and communications are
managed (for more details, readers can refer to~\cite{bcvc02:ip}). In
the \textit{Synchronous Iterations - Synchronous Communications (SISC)}
model, data are exchanged at the end of each iteration. All the
processors must begin the same iteration at the same time, and
important idle times on processors are generated. The
\textit{Synchronous Iterations - Asynchronous Communications (SIAC)}
model is similar to the previous one, except that data required on
another processor are sent asynchronously, i.e. without stopping
current computations. This technique allows communications to be
partially overlapped with computations, but unfortunately the
overlapping is only partial and important idle times remain. It is
clear that, in a grid computing context, where the computational nodes
are numerous, heterogeneous and widely distributed, the idle times
generated by synchronizations are very penalizing. One way to overcome
this problem is to use the \textit{Asynchronous Iterations -
Asynchronous Communications (AIAC)} model. Here, local computations do
not need to wait for required data. Processors can then perform their
iterations with the data available at that time. Figure~\ref{fig:aiac}
illustrates this model, where the grey blocks represent the computation
phases, the white spaces the idle times and the arrows the
communications. With this algorithmic model, the number of iterations
required before convergence is generally greater than for the two
former classes. But, as detailed in \cite{bcvc06:ij}, AIAC algorithms
can significantly reduce overall execution times by suppressing idle
times due to synchronizations, especially in a grid computing context.
204 \includegraphics[width=8cm]{AIAC.pdf}
\caption{The Asynchronous Iterations - Asynchronous Communications model}
\label{fig:aiac}
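To make this model concrete, the following C/MPI fragment sketches how
an AIAC-style main loop can be organized with non-blocking
communications. It is a minimal illustrative sketch under simplifying
assumptions (a single neighbor, a fixed-size shared buffer, a
user-supplied computation kernel), not the actual code simulated in
this work:
\begin{verbatim}
/* Minimal AIAC-style sketch (illustrative, not the code of this
 * work): compute with the most recent data available instead of
 * blocking on communications.  A production version would
 * double-buffer `shared' so an incoming message cannot overwrite
 * data while it is being read. */
#include <mpi.h>

void compute_local_iteration(double *local, const double *shared);

#define N 1024

void aiac_loop(double *local, double *shared, int neighbor,
               int max_iter, MPI_Comm comm)
{
    MPI_Request recv_req, send_req = MPI_REQUEST_NULL;
    int arrived;

    /* Post an initial non-blocking receive of the neighbor's data. */
    MPI_Irecv(shared, N, MPI_DOUBLE, neighbor, 0, comm, &recv_req);

    for (int k = 0; k < max_iter; k++) {
        /* Check, without waiting, whether fresh data has arrived. */
        MPI_Test(&recv_req, &arrived, MPI_STATUS_IGNORE);

        /* Iterate with whatever data is currently in `shared'. */
        compute_local_iteration(local, shared);

        /* Re-arm the receive once the previous one has completed. */
        if (arrived)
            MPI_Irecv(shared, N, MPI_DOUBLE, neighbor, 0, comm,
                      &recv_req);

        /* Send the local update without synchronizing the loop. */
        MPI_Wait(&send_req, MPI_STATUS_IGNORE);  /* buffer reuse only */
        MPI_Isend(local, N, MPI_DOUBLE, neighbor, 0, comm, &send_req);
    }

    MPI_Cancel(&recv_req);
    MPI_Wait(&recv_req, MPI_STATUS_IGNORE);
    MPI_Wait(&send_req, MPI_STATUS_IGNORE);
}
\end{verbatim}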
\section{SimGrid}
SimGrid~\cite{casanova+legrand+quinson.2008.simgrid,SimGrid} is a simulation
framework to study the behavior of large-scale distributed systems. As its
name suggests, it emanates from the grid computing community, but it is
nowadays used to study grids, clouds, HPC and peer-to-peer systems.
218 %- open source, developped since 1999, one of the major solution in the field
220 SimGrid provides several programming interfaces: MSG to simulate Concurrent
221 Sequential Processes, SimDAG to simulate DAGs of (parallel) tasks, and SMPI to
222 run real applications written in MPI~\cite{MPI}. Apart from the native C
223 interface, SimGrid provides bindings for the C++, Java, Lua and Ruby programming
languages. The SMPI interface supports applications written in C or Fortran,
with little or no modification.
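For illustration, a minimal MPI program and the way it would typically
be compiled and run under SMPI are sketched below (the platform and
hostfile names are placeholders):
\begin{verbatim}
/* hello_smpi.c -- a minimal MPI program runnable under SMPI.
 * Compile and run (placeholder file names):
 *   smpicc -O2 -o hello_smpi hello_smpi.c
 *   smpirun -np 4 -platform platform.xml -hostfile hostfile \
 *           ./hello_smpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Simulated process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
\end{verbatim}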
226 %- implements most of MPI-2 \cite{ref} standard [CHECK]
228 %%% explain simulation
229 %- simulated processes folded in one real process
230 %- simulates interactions on the network, fluid model
231 %- able to skip long-lasting computations
235 %- describe resources and their interconnection, with their properties
238 %%% validation + refs
\AG{Describe SimGrid~\cite{casanova+legrand+quinson.2008.simgrid,SimGrid} (Arnaud)}
242 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
243 \section{Simulation of the multisplitting method}
%Describe the problem (algorithm) addressed, as well as the process of adapting it to SimGrid.
Let $Ax=b$ be a large sparse system of $n$ linear equations in $\mathbb{R}$, where $A$ is a sparse square and nonsingular matrix, $x$ is the solution vector and $b$ is the right-hand side vector. We use a multisplitting method based on the block Jacobi splitting to solve this linear system on a large-scale platform composed of $L$ clusters of processors. In this case, we apply a row-by-row splitting without overlapping
\[
\left(\begin{array}{ccc}
A_{11} & \cdots & A_{1L} \\
\vdots & \ddots & \vdots \\
A_{L1} & \cdots & A_{LL}
\end{array}\right)
\left(\begin{array}{c}
X_1 \\
\vdots \\
X_L
\end{array}\right)
=
\left(\begin{array}{c}
B_1 \\
\vdots \\
B_L
\end{array}\right)
\]
in such a way that successive rows of matrix $A$ and both vectors $x$ and $b$ are assigned to one cluster, where for all $l,m\in\{1,\ldots,L\}$, $A_{lm}$ is a rectangular block of $A$ of size $n_l\times n_m$, $X_l$ and $B_l$ are sub-vectors of $x$ and $b$, respectively, of size $n_l$ each, and $\sum_{l} n_l=\sum_{m} n_m=n$.
The multisplitting method proceeds by iteration to solve in parallel the linear system on $L$ clusters of processors, in such a way that each sub-system
\begin{equation}
\label{eq:4.1}
\left\{
\begin{array}{l}
A_{ll}X_l = Y_l \mbox{,~such that}\\
Y_l = B_l - \displaystyle\sum_{\substack{m=1\\ m\neq l}}^{L}A_{lm}X_m
\end{array}
\right.
\end{equation}
is solved independently by a cluster, and communications are required to update the right-hand side sub-vector $Y_l$, such that the sub-vectors $X_m$ represent the data dependencies between the clusters. As each sub-system (\ref{eq:4.1}) is solved in parallel by a cluster of processors, our multisplitting method uses an iterative method as an inner solver, which is easier to parallelize and more scalable than a direct method. In this work, we use the parallel algorithm of the GMRES method~\cite{ref1}, which is one of the most widely used iterative methods.
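For example, with $L=2$ clusters, system~(\ref{eq:4.1}) reduces to two
coupled sub-systems,
\[
A_{11}X_1 = B_1 - A_{12}X_2
\quad\mbox{and}\quad
A_{22}X_2 = B_2 - A_{21}X_1,
\]
where each cluster repeatedly solves its own sub-system with the latest
$X_m$ received from the other cluster.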
279 %%% IEEE instructions forbid to use an algorithm environment here, use figure
281 \begin{algorithmic}[1]
282 \Input $A_l$ (sparse sub-matrix), $B_l$ (right-hand side sub-vector)
283 \Output $X_l$ (solution sub-vector)\vspace{0.2cm}
284 \State Load $A_l$, $B_l$
285 \State Set the initial guess $x^0$
286 \For {$k=0,1,2,\ldots$ until the global convergence}
287 \State Restart outer iteration with $x^0=x^k$
288 \State Inner iteration: \Call{InnerSolver}{$x^0$, $k+1$}
289 \State Send shared elements of $X_l^{k+1}$ to neighboring clusters
\State Receive shared elements in $\{X_m^{k+1}\}_{m\neq l}$
\EndFor
\Statex
295 \Function {InnerSolver}{$x^0$, $k$}
296 \State Compute local right-hand side $Y_l$: \[Y_l = B_l - \sum\nolimits^L_{\substack{m=1 \\m\neq l}}A_{lm}X_m^0\]
\State Solve the sub-system $A_{ll}X_l^k=Y_l$ with the parallel GMRES method
\State \Return $X_l^k$
\EndFunction
\end{algorithmic}
\caption{A multisplitting solver with GMRES method}
\label{algo:01}
The algorithm in Figure~\ref{algo:01} shows the key points of the
multisplitting method used to solve a large sparse linear system. This
algorithm is based on an outer-inner iteration scheme, where the
parallel synchronous GMRES method is used as the inner solver. It is
executed in parallel by each cluster of processors. For all
$l,m\in\{1,\ldots,L\}$, the matrices and vectors with the subscript $l$
represent the local data of cluster $l$, while $\{A_{lm}\}_{m\neq l}$
are the off-diagonal blocks of the sparse matrix $A$ and
$\{X_m\}_{m\neq l}$ contain the elements of the solution $x$ shared
with neighboring clusters. At every outer iteration $k$, asynchronous
communications are performed between the processors of the local
cluster and those of distant clusters (lines $6$ and $7$ in
Figure~\ref{algo:01}). The shared elements of the solution $x$ are
exchanged by message passing using MPI non-blocking communication
routines.
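A possible shape for this exchange, sketched in C with MPI non-blocking
routines, is shown below; the neighbor list and the layout of the
send/receive buffers are illustrative assumptions, not the actual code:
\begin{verbatim}
/* Illustrative sketch of the exchange of shared solution elements
 * (lines 6 and 7 of the algorithm).  Neighbor list and buffer
 * layout are assumptions. */
#include <mpi.h>

void exchange_shared(double *x_send, double **x_recv,
                     const int *neighbors, int nb_neighbors,
                     int n_shared, MPI_Comm comm)
{
    MPI_Request reqs[2 * nb_neighbors];  /* C99 variable-length array */

    for (int i = 0; i < nb_neighbors; i++) {
        MPI_Isend(x_send, n_shared, MPI_DOUBLE, neighbors[i], 0,
                  comm, &reqs[2 * i]);
        MPI_Irecv(x_recv[i], n_shared, MPI_DOUBLE, neighbors[i], 0,
                  comm, &reqs[2 * i + 1]);
    }

    /* In synchronous mode the iteration waits here; in asynchronous
     * mode one would use MPI_Testall instead and keep computing with
     * the data already available. */
    MPI_Waitall(2 * nb_neighbors, reqs, MPI_STATUSES_IGNORE);
}
\end{verbatim}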
321 \includegraphics[width=60mm,keepaspectratio]{clustering}
322 \caption{Example of three clusters of processors interconnected by a virtual unidirectional ring network.}
The global convergence of the asynchronous multisplitting solver is detected when all clusters of processors have converged locally. We implemented the global convergence detection process as follows. On each cluster, a master processor is designated (for example, the processor with rank $1$), and the masters of all clusters are interconnected by a virtual unidirectional ring network (see Figure~\ref{fig:4.1}). During the resolution, a Boolean token circulates around the virtual ring from one master processor to another until global convergence is achieved. Starting from the cluster with rank $1$, each master processor $i$ sets the token to {\it True} if local convergence is achieved, or to {\it False} otherwise, and sends it to master processor $i+1$. Finally, global convergence is detected when the master of cluster $1$ receives from the master of cluster $L$ a token set to {\it True}. In this case, the master of cluster $1$ broadcasts a stop message to the masters of the other clusters. In this work, the local convergence on each cluster $l$ is detected when the following condition is satisfied
\[(k\geq \MI) \mbox{~or~} (\|X_l^k - X_l^{k+1}\|_{\infty}\leq\epsilon)\]
where $\MI$ is the maximum number of outer iterations and $\epsilon$ is the tolerance threshold of the error computed between two successive local solutions $X_l^k$ and $X_l^{k+1}$.
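The token-based detection can be sketched in C/MPI as run by the
cluster masters. The sketch below is a slightly simplified variant
(each master combines its local state into the token with a logical
AND, so that a single round of the ring suffices); the communicator,
rank numbering and tag are illustrative assumptions:
\begin{verbatim}
/* Illustrative sketch of ring-based global convergence detection,
 * run by the master of each cluster.  Simplified variant: the token
 * accumulates a logical AND of the local states. */
#include <mpi.h>

/* `masters' contains only the L cluster masters, ranked 0..L-1. */
int check_global_convergence(int locally_converged, MPI_Comm masters)
{
    int rank, L, token, stop = 0;
    MPI_Comm_rank(masters, &rank);
    MPI_Comm_size(masters, &L);
    int next = (rank + 1) % L;
    int prev = (rank + L - 1) % L;

    if (rank == 0) {
        /* The first master injects its state and waits for the
         * token to come back around the ring. */
        token = locally_converged;
        MPI_Send(&token, 1, MPI_INT, next, 1, masters);
        MPI_Recv(&token, 1, MPI_INT, prev, 1, masters,
                 MPI_STATUS_IGNORE);
        stop = token;              /* true only if all converged */
    } else {
        MPI_Recv(&token, 1, MPI_INT, prev, 1, masters,
                 MPI_STATUS_IGNORE);
        token = token && locally_converged;
        MPI_Send(&token, 1, MPI_INT, next, 1, masters);
    }

    /* The first master broadcasts the decision (the stop message). */
    MPI_Bcast(&stop, 1, MPI_INT, 0, masters);
    return stop;
}
\end{verbatim}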
\LZK{Description of the process of adapting the multisplitting algorithm to SimGrid}
331 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
340 \section{Experimental results}
When the \emph{real} application runs in the simulation environment and
produces the expected results, varying the input parameters and the
program arguments allows us to compare the outputs of different
executions. We have noticed in this study that the results depend on
the following parameters: (1) at the network level, the most critical
values are the bandwidth (bw) and the network latency (lat); (2) the
power of the hosts (GFlops) can also influence the results; and
finally, (3) when submitting job batches for execution, the argument
values passed to the program, such as the maximum number of iterations
or the \emph{external} precision, are critical to ensure not only the
convergence of the algorithm, but also the main objective of these
simulation experiments: obtaining an execution time in asynchronous
mode lower than in synchronous mode, in other words, obtaining a
\emph{speedup} less than 1
({speedup}${}={}${execution time in asynchronous mode}${}/{}${execution
time in synchronous mode}).
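For instance, under this definition an asynchronous execution taking
\np[s]{60} against \np[s]{100} in synchronous mode (illustrative
figures) gives
\[
\mathit{speedup} = \frac{60}{100} = 0.6 < 1,
\]
i.e. a saving of \np[\%]{40} in execution time.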
A priori, obtaining a speedup less than 1 would be difficult in a local
area network configuration, where the synchronous mode benefits from
the rapid exchange of information on such high-speed links. Thus, the
methodology adopted was to launch the application on a clustered
network. In this configuration, degrading the inter-cluster network
performance \emph{penalizes} the synchronous mode, making it possible
to obtain a speedup lower than 1. This setting simulates the case of
clusters linked by a long-distance network such as the Internet.
As a first step, the algorithm was run on a network consisting of two
clusters of fifty hosts each, totaling one hundred hosts. Various
combinations of the above factors have provided the results shown in
Table~\ref{tab.cluster.2x50}, with a matrix size ranging from
$N_x = N_y = N_z = 62 \text{ to } 171$ elements, i.e. from
$62^{3} = \np{238328}$ to $171^{3} = \np{5000211}$ entries.
Then we changed the network configuration, using three clusters
containing respectively 33, 33 and 34 hosts, i.e. again one hundred
hosts in total. In the same way as above, a judicious choice of the key
parameters allowed us to obtain the results in
Table~\ref{tab.cluster.3x33}, which shows speedups less than 1 with a
matrix size from 62 to 100 elements.
In a final step, the results of an attempt to scale up the
three-cluster configuration, this time with two hundred hosts in total,
are recorded in Table~\ref{tab.cluster.3x67}.
380 Note that the program was run with the following parameters:
382 \paragraph*{SMPI parameters}
\item HOSTFILE: hosts description file.
\item PLATFORM: file describing the platform architecture: clusters (CPU power,
\dots{}), intra-cluster network description, inter-cluster network (bandwidth bw,
latency lat, \dots{}).
392 \paragraph*{Arguments of the program}
395 \item Description of the cluster architecture;
396 \item Maximum number of internal and external iterations;
397 \item Internal and external precisions;
398 \item Matrix size $N_x$, $N_y$ and $N_z$;
399 \item Matrix diagonal value: \np{6.0};
400 \item Execution Mode: synchronous or asynchronous.
\caption{2 clusters $\times$ 50 nodes}
406 \label{tab.cluster.2x50}
\AG{These tables (\ref{tab.cluster.2x50}, \ref{tab.cluster.3x33} and
  \ref{tab.cluster.3x67}) are unsightly. Use a vector format (eps or
  pdf) or, better, rewrite them in \LaTeX{}.}
411 \includegraphics[width=209pt]{img1.jpg}
\caption{3 clusters $\times$ 33 nodes}
417 \label{tab.cluster.3x33}
\AG{Redo the table.}
419 \includegraphics[width=209pt]{img2.jpg}
\caption{3 clusters $\times$ 67 nodes}
425 \label{tab.cluster.3x67}
\AG{Redo the table.}
427 % \includegraphics[width=160pt]{img3.jpg}
428 \includegraphics[scale=0.5]{img3.jpg}
431 \paragraph*{Interpretations and comments}
After analyzing the outputs, generally, for the configurations with two
or three clusters including one hundred hosts
(Tables~\ref{tab.cluster.2x50} and~\ref{tab.cluster.3x33}), some
combinations of the parameters under study gave a speedup less than 1,
showing the effectiveness of the asynchronous mode compared to the
synchronous one.
In the case of the two-cluster configuration,
Table~\ref{tab.cluster.2x50} shows that, with a deteriorated
inter-cluster network set to \np[Mbits/s]{5} of bandwidth, a latency in
the order of a hundredth of a millisecond, and a system power of one
GFlops, an efficiency of about \np[\%]{40} is obtained in asynchronous
mode for a matrix size of 62 elements. It is noticed that the result
remains stable even when the external precision varies from \np{E-5} to
\np{E-9}. By increasing the problem size up to 100 elements, it was
necessary to increase the CPU power by \np[\%]{50}, to
\np[GFlops]{1.5}, to get the algorithm to converge with a similar
asynchronous-mode efficiency. Maintaining such a system power but
increasing the inter-cluster network throughput to \np[Mbits/s]{50}, an
efficiency of about \np[\%]{40} is obtained with a high external
precision of \np{E-11} for a matrix size from 110 to 150 elements.
For the three-cluster architecture including a total of 100 hosts,
Table~\ref{tab.cluster.3x33} shows that it was difficult to find a
combination giving an asynchronous-mode efficiency below \np[\%]{80}.
Indeed, for a matrix size of 62 elements, equality between the
performance of the two modes (synchronous and asynchronous) is achieved
with an inter-cluster bandwidth of \np[Mbits/s]{10} and a latency of
\np[ms]{E-1}. To reach an efficiency of \np[\%]{78} with a matrix size
of 100 points, it was necessary to degrade the inter-cluster network
bandwidth from 5 to \np[Mbits/s]{2}.
A last attempt was made with a more powerful three-cluster
configuration, with 200 nodes in total. A convergence with a speedup of
\np[\%]{90} was obtained with a bandwidth of \np[Mbits/s]{1}, as shown
in Table~\ref{tab.cluster.3x67}.
\section{Conclusion}
The experimental results of executing a parallel iterative algorithm in
asynchronous mode in an environment simulating a large scale of virtual
computers organized in interconnected clusters have been presented.
Our work has demonstrated that using such a simulation tool allows us
to reach the following three objectives:
\newcounter{numberedCntD}
\begin{enumerate}
\item to have a flexible and configurable execution platform, solving the
  hard problem of accessing physical resources that are very limited yet
  highly solicited;
\item to ensure the convergence of the algorithm within a reasonable time;
\item and finally, and most importantly, to find the right combination of
  cluster and network specifications allowing time to be saved when
  executing the algorithm in asynchronous mode.
\setcounter{numberedCntD}{\theenumi}
\end{enumerate}
Our results have shown that, under certain conditions, the asynchronous
mode is up to \np[\%]{40} faster than the synchronous mode, which is
not negligible when solving complex practical problems of ever
increasing size.
Several studies have already addressed the execution time performance
of this class of algorithms. The work presented in this paper
demonstrates an original solution for optimizing the use of a
simulation tool to efficiently run an iterative parallel algorithm in
asynchronous mode on a grid architecture.
494 \section*{Acknowledgment}
497 The authors would like to thank\dots{}
500 % trigger a \newpage just before the given reference
501 % number - used to balance the columns on the last page
502 % adjust value as needed - may need to be readjusted if
503 % the document is modified later
504 \bibliographystyle{IEEEtran}
505 \bibliography{IEEEabrv,hpccBib}
513 %%% ispell-local-dictionary: "american"