1 \documentclass[times]{cpeauth}
5 %\usepackage[dvips,colorlinks,bookmarksopen,bookmarksnumbered,citecolor=red,urlcolor=red]{hyperref}
7 %\newcommand\BibTeX{{\rmfamily B\kern-.05em \textsc{i\kern-.025em b}\kern-.08em
8 %T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
16 \usepackage[T1]{fontenc}
17 \usepackage[utf8]{inputenc}
18 \usepackage{amsfonts,amssymb}
20 \usepackage{algorithm}
21 \usepackage{algpseudocode}
24 \usepackage[american]{babel}
% Extension for intra-document links (tagged PDF)
% and for correct display of URLs (\url{http://example.com} command)
27 %\usepackage{hyperref}
30 \DeclareUrlCommand\email{\urlstyle{same}}
32 \usepackage[autolanguage,np]{numprint}
34 \renewcommand*\npunitcommand[1]{\text{#1}}
\npthousandthpartsep{}
38 \usepackage[textsize=footnotesize]{todonotes}
40 \newcommand{\AG}[2][inline]{%
41 \todo[color=green!50,#1]{\sffamily\textbf{AG:} #2}\xspace}
42 \newcommand{\RC}[2][inline]{%
43 \todo[color=red!10,#1]{\sffamily\textbf{RC:} #2}\xspace}
44 \newcommand{\LZK}[2][inline]{%
45 \todo[color=blue!10,#1]{\sffamily\textbf{LZK:} #2}\xspace}
46 \newcommand{\RCE}[2][inline]{%
47 \todo[color=yellow!10,#1]{\sffamily\textbf{RCE:} #2}\xspace}
49 \algnewcommand\algorithmicinput{\textbf{Input:}}
50 \algnewcommand\Input{\item[\algorithmicinput]}
52 \algnewcommand\algorithmicoutput{\textbf{Output:}}
53 \algnewcommand\Output{\item[\algorithmicoutput]}
55 \newcommand{\TOLG}{\mathit{tol_{gmres}}}
56 \newcommand{\MIG}{\mathit{maxit_{gmres}}}
57 \newcommand{\TOLM}{\mathit{tol_{multi}}}
58 \newcommand{\MIM}{\mathit{maxit_{multi}}}
59 \newcommand{\TOLC}{\mathit{tol_{cgls}}}
60 \newcommand{\MIC}{\mathit{maxit_{cgls}}}
63 \usepackage{color, colortbl}
64 \newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
65 \newcolumntype{Z}[1]{>{\raggedleft}m{#1}}
67 \newcolumntype{g}{>{\columncolor{Gray}}c}
68 \definecolor{Gray}{gray}{0.9}
\begin{document} \RCE{Title to be confirmed.} \title{Comparative performance
73 analysis of simulated grid-enabled numerical iterative algorithms}
74 %\itshape{\journalnamelc}\footnotemark[2]}
76 \author{ Charles Emile Ramamonjisoa and
79 Lilia Ziane Khodja and
85 Femto-ST Institute - DISC Department\\
86 Université de Franche-Comté\\
88 Email: \email{{raphael.couturier,arnaud.giersch,david.laiymani,charles.ramamonjisoa}@univ-fcomte.fr}
91 %% Lilia Ziane Khodja: Department of Aerospace \& Mechanical Engineering\\ Non Linear Computational Mechanics\\ University of Liege\\ Liege, Belgium. Email: l.zianekhodja@ulg.ac.be
93 \begin{abstract} The behavior of multi-core applications is always a challenge
94 to predict, especially with a new architecture for which no experiment has been
95 performed. With some applications, it is difficult, if not impossible, to build
96 accurate performance models. That is why another solution is to use a simulation
97 tool which allows us to change many parameters of the architecture (network
98 bandwidth, latency, number of processors) and to simulate the execution of such
99 applications. We have decided to use SimGrid as it enables to benchmark MPI
102 In this paper, we focus our attention on two parallel iterative algorithms based
103 on the Multisplitting algorithm and we compare them to the GMRES algorithm.
These algorithms are used to solve linear systems. Two different variants of
the Multisplitting method are studied: one using synchronous iterations and another
106 one with asynchronous iterations. For each algorithm we have tested different
parameters to see their influence. We strongly recommend that people interested
in investing in a new expensive hardware architecture first benchmark
their applications using a simulation tool.
116 %\keywords{Algorithm; distributed; iterative; asynchronous; simulation; simgrid;
118 \keywords{ Performance evaluation, Simulation, SimGrid, Synchronous and asynchronous iterations, Multisplitting algorithms}
122 \section{Introduction} The use of multi-core architectures to solve large
123 scientific problems seems to become imperative in many situations.
124 Whatever the scale of these architectures (distributed clusters, computational
125 grids, embedded multi-core,~\ldots) they are generally well adapted to execute
126 complex parallel applications operating on a large amount of data.
Unfortunately, users (from industry or academia) who need such computational
resources may not have easy access to such efficient architectures. The cost
of using the platform and/or the cost of testing and deploying an application
are often very high. In this context it is therefore difficult to optimize a
given application for a given architecture. In order to reduce the access cost
to these computing resources, it seems very appealing to use a simulation
environment. The advantages are numerous: shorter development life cycle, easier
code debugging, ability to obtain results quickly, etc. On the other hand, the simulation results need to be consistent with the real ones.
136 In this paper we focus on a class of highly efficient parallel algorithms called
137 \emph{iterative algorithms}. The parallel scheme of iterative methods is quite
138 simple. It generally involves the division of the problem into several
139 \emph{blocks} that will be solved in parallel on multiple processing
140 units. Each processing unit has to compute an iteration, to send/receive some
141 data dependencies to/from its neighbors and to iterate this process until the
convergence of the method. The convergence of these algorithms has been
demonstrated in several well-known studies~\cite{BT89,bahi07}. In this processing mode a
task cannot begin a new iteration until it has received the data dependencies
from its neighbors. We say that the iteration computation follows a synchronous
scheme. In the asynchronous scheme a task can compute a new iteration without
having to wait for the data dependencies coming from its neighbors. Both
communications and computations are asynchronous, so that there is no longer any
idle time between two iterations caused by synchronizations~\cite{bcvc06:ij}.
150 This model presents some advantages and drawbacks that we detail in
Section~\ref{sec:asynchro}, but even if the number of iterations required to
152 converge is generally greater than for the synchronous case, it appears that
153 the asynchronous iterative scheme can significantly reduce overall execution
times by suppressing the idle times caused by synchronizations~(see~\cite{bahi07} for more details).
157 Nevertheless, in both cases (synchronous or asynchronous) it is very time
consuming to find the optimal configuration and deployment requirements for a given
application on a given multi-core architecture. Finding good resource
allocation policies under varying CPU power, network speeds and loads is very
challenging and labor intensive~\cite{Calheiros:2011:CTM:1951445.1951450}. This
problem is even more difficult for the asynchronous scheme, where a small
variation of a parameter of the execution platform can lead to very different numbers
of iterations to reach convergence and thus to very different execution times. In
this challenging context we think that the use of a simulation tool can greatly
facilitate the testing of various platform scenarios.
168 The main contribution of this paper is to show that the use of a simulation tool
169 (i.e. the SimGrid toolkit~\cite{SimGrid}) in the context of real parallel
170 applications (i.e. large linear system solvers) can help developers to better
171 tune their application for a given multi-core architecture. To show the validity
172 of this approach we first compare the simulated execution of the multisplitting
173 algorithm with the GMRES (Generalized Minimal Residual)
174 solver~\cite{saad86} in synchronous mode. The obtained results on different
175 simulated multi-core architectures confirm the real results previously obtained
on non-simulated architectures. We also confirm the efficiency of the
asynchronous multisplitting algorithm compared to the synchronous GMRES. In
this way, and with a simple computing architecture (a laptop), SimGrid allows us
to run a test campaign of real parallel iterative applications on
different simulated multi-core architectures. To our knowledge, there is no
181 related work on the large-scale multi-core simulation of a real synchronous and
182 asynchronous iterative application.
184 This paper is organized as follows. Section~\ref{sec:asynchro} presents the
185 iteration model we use and more particularly the asynchronous scheme. In
Section~\ref{sec:simgrid} the SimGrid simulation toolkit is presented.
Section~\ref{sec:04} details the different solvers that we use. Finally, our
experimental results are presented in Section~\ref{sec:expe}, followed by some
189 concluding remarks and perspectives.
\section{The asynchronous iteration model}
\label{sec:asynchro}
Asynchronous iterative methods have been studied for many years, both theoretically and
practically. Many methods have been considered and convergence results have been
proved. These methods can be used to solve, in parallel, fixed point problems
(i.e. problems for which the solution is $x^\star=f(x^\star)$). In practice,
asynchronous iteration methods can be used to solve, for example, linear and
non-linear systems of equations or optimization problems; interested readers are
invited to consult~\cite{BT89,bahi07}.
203 Before using an asynchronous iterative method, the convergence must be
studied. Otherwise, the application is not guaranteed to converge. An
algorithm that supports both the synchronous and the asynchronous iteration models
requires very few modifications to be executed in either variant. In
practice, only the communications and the convergence detection are different. In
the synchronous mode, iterations are synchronized, whereas in the asynchronous
one they are not. It should be noticed that non-blocking communications can be
used in both modes. Concerning the convergence detection, synchronous variants
can use a global convergence procedure which acts as a global synchronization
point. In the asynchronous model, the convergence detection is more difficult as
213 it must not synchronize all the processors. Interested readers can
214 consult~\cite{myBCCV05c,bahi07,ccl09:ij}.
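To make the difference more concrete, the following sketch shows one possible way to organize an asynchronous iteration loop on top of MPI non-blocking primitives. It is only an illustration under simplifying assumptions (a single receive buffer, a placeholder local update, no separate send buffers); it is not the code of the solvers studied in this paper.
\begin{verbatim}
/* Illustration only (not the solver code): an asynchronous iteration
 * loop built on MPI non-blocking primitives.  Each process keeps
 * iterating with whatever neighbor data has already arrived.        */
#include <mpi.h>

#define MAX_NEIGH 6

void async_iterations(double *x, double *x_neigh, int n,
                      const int *neigh, int nb_neigh, int max_iter)
{
  MPI_Request send_req[MAX_NEIGH];
  for (int i = 0; i < nb_neigh; i++)
    send_req[i] = MPI_REQUEST_NULL;

  for (int k = 0; k < max_iter; k++) {
    /* ... update the local block x from x_neigh (local relaxation);
       there is no barrier, so possibly stale values are reused ...  */

    /* re-post a non-blocking send of the new local block to every
       neighbor whose previous send has completed (a real solver
       would use separate send buffers)                              */
    for (int i = 0; i < nb_neigh; i++) {
      int done;
      MPI_Test(&send_req[i], &done, MPI_STATUS_IGNORE);
      if (done)
        MPI_Isend(x, n, MPI_DOUBLE, neigh[i], 0,
                  MPI_COMM_WORLD, &send_req[i]);
    }

    /* drain all neighbor updates already available, without blocking */
    int flag = 1;
    while (flag) {
      MPI_Status st;
      MPI_Iprobe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &flag, &st);
      if (flag)
        MPI_Recv(x_neigh, n, MPI_DOUBLE, st.MPI_SOURCE, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
  }
}
\end{verbatim}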
219 %%%%%%%%%%%%%%%%%%%%%%%%%
220 %%%%%%%%%%%%%%%%%%%%%%%%%
\section{Two-stage multisplitting methods}
\label{sec:04}
224 \subsection{Synchronous and asynchronous two-stage methods for sparse linear systems}
In this paper we focus on two-stage multisplitting methods in both their synchronous and asynchronous versions~\cite{Frommer92,Szyld92,Bru95}. These iterative methods are based on multisplitting methods~\cite{O'leary85,White86,Alefeld97} and use two nested iterations: the outer iteration and the inner iteration. Let us consider the following sparse linear system of $n$ equations in $\mathbb{R}$:
\begin{equation}
Ax=b,
\label{eq:01}
\end{equation}
231 where $A$ is a sparse square and nonsingular matrix, $b$ is the right-hand side and $x$ is the solution of the system. Our work in this paper is restricted to the block Jacobi splitting method. This approach of multisplitting consists in partitioning the matrix $A$ into $L$ horizontal band matrices of order $\frac{n}{L}\times n$ without overlapping (i.e. sub-vectors $\{x_\ell\}_{1\leq\ell\leq L}$ are disjoint). Two-stage multisplitting methods solve the linear system~(\ref{eq:01}) iteratively as follows
\begin{equation}
x_\ell^{k+1} = A_{\ell\ell}^{-1}(b_\ell - \displaystyle\sum^{L}_{\substack{m=1\\m\neq\ell}}{A_{\ell m}x^k_m}),\mbox{~for~}\ell=1,\ldots,L\mbox{~and~}k=1,2,3,\ldots
\end{equation}
236 where $x_\ell$ are sub-vectors of the solution $x$, $b_\ell$ are the sub-vectors of the right-hand side $b$, and $A_{\ell\ell}$ and $A_{\ell m}$ are diagonal and off-diagonal blocks of matrix $A$ respectively. The iterations of these methods can naturally be computed in parallel such that each processor or cluster of processors is responsible for solving one splitting as a linear sub-system
\begin{equation}
A_{\ell\ell} x_\ell = c_\ell,\mbox{~for~}\ell=1,\ldots,L,
\label{eq:03}
\end{equation}
where the right-hand sides $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m$ are computed using the shared vectors $x_m$. In this paper, we use the well-known iterative method GMRES ({\it Generalized Minimal RESidual})~\cite{saad86} as an inner iteration to approximate the solutions of the different splittings arising from the block Jacobi multisplitting of matrix $A$. The algorithm in Figure~\ref{alg:01} shows the key points of our block Jacobi two-stage method executed by a cluster of processors. In line~\ref{solve}, the linear sub-system~(\ref{eq:03}) is solved in parallel using the GMRES method, where $\MIG$ and $\TOLG$ are respectively the maximum number of inner iterations and the tolerance threshold for GMRES. The convergence of the two-stage multisplitting methods, based on synchronous or asynchronous iterations, has been studied by many authors, for example in~\cite{Bru95,bahi07}.
\begin{figure}[t]
%\begin{algorithm}[t]
%\caption{Block Jacobi two-stage multisplitting method}
\begin{algorithmic}[1]
247 \Input $A_\ell$ (sparse matrix), $b_\ell$ (right-hand side)
248 \Output $x_\ell$ (solution vector)\vspace{0.2cm}
249 \State Set the initial guess $x^0$
250 \For {$k=1,2,3,\ldots$ until convergence}
251 \State $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m^{k-1}$
252 \State $x^k_\ell=Solve_{gmres}(A_{\ell\ell},c_\ell,x^{k-1}_\ell,\MIG,\TOLG)$\label{solve}
253 \State Send $x_\ell^k$ to neighboring clusters\label{send}
254 \State Receive $\{x_m^k\}_{m\neq\ell}$ from neighboring clusters\label{recv}
\EndFor
\end{algorithmic}
\caption{Block Jacobi two-stage multisplitting method}
\label{alg:01}
\end{figure}
In this paper, we propose two two-stage multisplitting algorithms. The first algorithm is based on the asynchronous model, which allows communications to be overlapped by computations and reduces the idle times resulting from the synchronizations. In the asynchronous mode, our two-stage algorithm uses asynchronous outer iterations and asynchronous communications between clusters. The communications (i.e. lines~\ref{send} and~\ref{recv} in Figure~\ref{alg:01}) are performed by message passing using MPI non-blocking communication routines. The convergence of the asynchronous iterations is detected when all clusters have locally converged:
\begin{equation}
k\geq\MIM\mbox{~or~}\|x_\ell^{k+1}-x_\ell^k\|_{\infty}\leq\TOLM,
\end{equation}
267 where $\MIM$ is the maximum number of outer iterations and $\TOLM$ is the tolerance threshold for the two-stage algorithm.
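As an illustration, the local convergence test of one cluster can be written as follows; the function name and signature are illustrative only and are not those of our implementation.
\begin{verbatim}
/* Illustrative local convergence test of one cluster: stop when the
 * maximum number of outer iterations is reached or when the infinity
 * norm of the difference between two successive local iterates falls
 * below the tolerance threshold.                                     */
#include <math.h>

int local_convergence(const double *x_new, const double *x_old, int n,
                      int k, int maxit_multi, double tol_multi)
{
  double norm = 0.0;
  for (int i = 0; i < n; i++) {
    double d = fabs(x_new[i] - x_old[i]);
    if (d > norm)
      norm = d;
  }
  return (k >= maxit_multi) || (norm <= tol_multi);
}
\end{verbatim}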
The second two-stage algorithm is based on synchronous outer iterations. We propose to use a Krylov iteration based on residual minimization to improve the slow convergence of the multisplitting methods. In this case, an $n\times s$ matrix $S$ is formed from the solutions produced by the inner iteration:
\begin{equation}
S=[x^1,x^2,\ldots,x^s],~s\ll n.
\end{equation}
Every $s$ outer iterations, the algorithm computes a new approximation $\tilde{x}=S\alpha$ which minimizes the residual:
\begin{equation}
\min_{\alpha\in\mathbb{R}^s}{\|b-AS\alpha\|_2}.
\label{eq:06}
\end{equation}
The algorithm in Figure~\ref{alg:02} includes the residual minimization procedure, and the outer iteration is restarted with the new approximation $\tilde{x}$ every $s$ iterations. The least-squares problem~(\ref{eq:06}) is solved in parallel by all clusters using the CGLS method~\cite{Hestenes52}, where $\MIC$ is the maximum number of iterations and $\TOLC$ is the tolerance threshold for this method (line~\ref{cgls} in Figure~\ref{alg:02}).
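For completeness, recall the standard fact that the minimizer of the least-squares problem~(\ref{eq:06}) is characterized by the associated normal equations
\[
(AS)^{T}AS\,\alpha=(AS)^{T}b,
\]
which the CGLS method solves iteratively without explicitly forming the matrix $(AS)^{T}AS$.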
\begin{figure}[t]
%\begin{algorithm}[t]
%\caption{Krylov two-stage method using block Jacobi multisplitting}
\begin{algorithmic}[1]
285 \Input $A_\ell$ (sparse matrix), $b_\ell$ (right-hand side)
286 \Output $x_\ell$ (solution vector)\vspace{0.2cm}
287 \State Set the initial guess $x^0$
288 \For {$k=1,2,3,\ldots$ until convergence}
289 \State $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m^{k-1}$
290 \State $x^k_\ell=Solve_{gmres}(A_{\ell\ell},c_\ell,x^{k-1}_\ell,\MIG,\TOLG)$
\State $S_{\ell,k\mod s}=x_\ell^k$
\If{$k\mod s=0$}
\State $\alpha = Solve_{cgls}(AS,b,\MIC,\TOLC)$\label{cgls}
294 \State $\tilde{x_\ell}=S_\ell\alpha$
295 \State Send $\tilde{x_\ell}$ to neighboring clusters
\Else
\State Send $x_\ell^k$ to neighboring clusters
\EndIf
299 \State Receive $\{x_m^k\}_{m\neq\ell}$ from neighboring clusters
\EndFor
\end{algorithmic}
\caption{Krylov two-stage method using block Jacobi multisplitting}
\label{alg:02}
\end{figure}
\subsection{Simulation of two-stage methods using the SimGrid framework}
One of our objectives when simulating the application with SimGrid is, as in real
life, to get accurate results (solutions of the problem) but also to ensure the
reproducibility of the tests under the same conditions. According to our experience,
very few modifications are required to adapt an MPI program to the SimGrid
simulator using SMPI (Simulator MPI). The first modification is to include the SMPI
libraries and related header files (smpi.h). The second modification is to
suppress all global variables, either by replacing them with local variables or by using a
SimGrid selector called ``runtime automatic switching''
(smpi/privatize\_global\_variables). Indeed, global variables can generate side
effects at runtime between the threads that SimGrid runs inside the same process to simulate the grid environment. \RC{Do we drop this sentence?} \RCE{If it is the previous sentence about the threads, I think we can keep it because it explains why SimGrid does not like global variables. If it is badly phrased, we can reword it. If it is the following sentence, we can indeed drop it if it could lead to discussion.} The
last modification to the MPI program, needed in some cases, is to review
the sequence of the MPI\_Isend, MPI\_Irecv and MPI\_Waitall instructions, which
might otherwise cause an infinite loop.
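As a simple illustration of the second modification, the sketch below contrasts a file-scope buffer with a locally allocated one. The identifiers (\texttt{work\_buffer}, \texttt{compute\_step}) are made up for the example and are not part of our solvers.
\begin{verbatim}
/* Illustration of the "suppress global variables" modification
 * required by SMPI (identifiers are made up for the example).
 *
 * Before: a file-scope buffer shared by all simulated MPI processes,
 * because SimGrid folds them into threads of a single process:
 *
 *     double work_buffer[BUFSIZE];   // global: unsafe under SMPI
 *
 * After: the buffer is allocated locally (or passed as an argument),
 * so each simulated process owns its own copy.                       */
#include <stdlib.h>

#define BUFSIZE 1024

void compute_step(void)
{
  double *work_buffer = malloc(BUFSIZE * sizeof(double));
  /* ... local computation using work_buffer ... */
  free(work_buffer);
}
\end{verbatim}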
\paragraph{SimGrid simulator parameters}
\ \\ \noindent Before running a SimGrid benchmark, many parameters of the
computation platform must be defined. For our experiments, we consider platforms
in which several clusters are geographically distant, so there are both intra-
and inter-cluster communications. In the following, these parameters are described:
\begin{itemize}
\item hostfile: hosts description file.
\item platform: file describing the platform architecture: clusters (CPU power,
\dots{}), intra-cluster network description, inter-cluster network (bandwidth bw,
latency lat, \dots{}).
\item archi: description of the grid computation (number of clusters, number of
nodes/processors for each cluster).
\end{itemize}
341 In addition, the following arguments are given to the programs at runtime:
\begin{itemize}
\item maximum number of inner and outer iterations;
\item inner and outer precisions;
\item maximum number of GMRES restarts in the Arnoldi process;
\item maximum number of iterations and the tolerance threshold for the classical GMRES;
\item tolerance threshold for the outer and inner iterations;
\item matrix size (N$_{x}$, N$_{y}$ and N$_{z}$) respectively on the x, y and z axes;
\item matrix diagonal value: 6.0 for the synchronous Krylov multisplitting experiments and 6.2 for the asynchronous block Jacobi experiments; \RC{CE, please check, I am saying this from memory}
\item matrix off-diagonal value;
\item execution mode: synchronous or asynchronous;
\RCE{The list of program arguments is fine, but could Lilia or you give more details about the arguments of CGLS below?}
\item size of the matrix $S$;
\item maximum number of iterations and the tolerance threshold for CGLS.
\end{itemize}
It should also be noticed that both solvers have been executed with the SimGrid option --cfg=smpi/running\_power, which determines the computational power (here 19 GFlops) of the simulator host machine.
360 %%%%%%%%%%%%%%%%%%%%%%%%%
361 %%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experimental results}
\label{sec:expe}
In this section, experiments for both multisplitting algorithms are reported. First, the problem used in our experiments is described.
We use our two-stage algorithms to solve the well-known Poisson problem $\nabla^2\phi=f$~\cite{Polyanin01}. In three-dimensional Cartesian coordinates in $\mathbb{R}^3$, the problem takes the following form:
\begin{equation}
\frac{\partial^2}{\partial x^2}\phi(x,y,z)+\frac{\partial^2}{\partial y^2}\phi(x,y,z)+\frac{\partial^2}{\partial z^2}\phi(x,y,z)=f(x,y,z)\mbox{~in the domain~}\Omega
\end{equation}
such that
\begin{equation}
\phi(x,y,z)=0\mbox{~on the boundary~}\partial\Omega,
\end{equation}
where the real-valued function $\phi(x,y,z)$ is the solution sought, $f(x,y,z)$ is a known function and $\Omega=[0,1]^3$. The 3D discretization of the Laplace operator $\nabla^2$ with the finite difference scheme uses a 7-point stencil on the computational grid. The numerical approximation of the Poisson problem on the three-dimensional grid is repeatedly computed as $\phi=\phi^\star$ such that
\begin{equation}
\begin{array}{rl}
\phi^\star(x,y,z)= & \frac{1}{6}(\phi(x-h,y,z)+\phi(x,y-h,z)+\phi(x,y,z-h)\\
 & +\phi(x+h,y,z)+\phi(x,y+h,z)+\phi(x,y,z+h)\\
 & -h^2f(x,y,z))
\end{array}
\end{equation}
384 until convergence where $h$ is the grid spacing between two adjacent elements in the 3D computational grid.
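For illustration, a straightforward sequential sweep of this 7-point update over an $n_x\times n_y\times n_z$ grid can be sketched as follows; the boundary and halo handling of the real parallel solver is omitted.
\begin{verbatim}
/* One Jacobi sweep of the 7-point stencil above, on a block of
 * nx*ny*nz points stored in row-major order (sketch only: the
 * halo/boundary handling of the real solver is omitted).         */
static inline int idx(int i, int j, int k, int ny, int nz)
{
  return (i * ny + j) * nz + k;
}

void jacobi_sweep(const double *phi, double *phi_new, const double *f,
                  int nx, int ny, int nz, double h)
{
  for (int i = 1; i < nx - 1; i++)
    for (int j = 1; j < ny - 1; j++)
      for (int k = 1; k < nz - 1; k++)
        phi_new[idx(i, j, k, ny, nz)] = (1.0 / 6.0) *
          (phi[idx(i - 1, j, k, ny, nz)] + phi[idx(i + 1, j, k, ny, nz)] +
           phi[idx(i, j - 1, k, ny, nz)] + phi[idx(i, j + 1, k, ny, nz)] +
           phi[idx(i, j, k - 1, ny, nz)] + phi[idx(i, j, k + 1, ny, nz)] -
           h * h * f[idx(i, j, k, ny, nz)]);
}
\end{verbatim}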
In the parallel context, the 3D Poisson problem is partitioned into $L\times p$ sub-problems such that $L$ is the number of clusters and $p$ is the number of processors in each cluster. We apply a three-dimensional partitioning instead of a row-by-row one in order to reduce the size of the data shared at the sub-problem boundaries. In this case, each processor is in charge of a parallelepipedic block of the problem and has at most six neighbors in the same cluster or in distant clusters, with which it shares data at the boundaries.
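One possible way to obtain the six neighbors of such a parallelepipedic block is through an MPI Cartesian topology, as sketched below; this is an illustration only, and the actual decomposition used by our solvers may differ.
\begin{verbatim}
/* Sketch: obtain the six neighbors of a 3D block decomposition with
 * an MPI Cartesian communicator (illustrative only).                */
#include <mpi.h>

void get_neighbors(MPI_Comm comm, int *neigh /* array of 6 ranks */)
{
  int nprocs, dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
  MPI_Comm cart;

  MPI_Comm_size(comm, &nprocs);
  MPI_Dims_create(nprocs, 3, dims);            /* balanced 3D grid   */
  MPI_Cart_create(comm, 3, dims, periods, 1, &cart);

  /* left/right, front/back, bottom/top neighbors; MPI_PROC_NULL is
     returned on the domain boundary                                 */
  MPI_Cart_shift(cart, 0, 1, &neigh[0], &neigh[1]);
  MPI_Cart_shift(cart, 1, 1, &neigh[2], &neigh[3]);
  MPI_Cart_shift(cart, 2, 1, &neigh[4], &neigh[5]);
}
\end{verbatim}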
\subsection{Study setup and simulation methodology}
First, to conduct our study, we propose the following methodology,
which can be reused for any grid-enabled application.\\
393 \textbf{Step 1}: Choose with the end users the class of algorithms or
394 the application to be tested. Numerical parallel iterative algorithms
395 have been chosen for the study in this paper. \\
397 \textbf{Step 2}: Collect the software materials needed for the
experimentation. In our case, we have two variant algorithms for the
resolution of the 3D Poisson problem: (1) the classical GMRES and (2) the Multisplitting method. In addition, the SimGrid simulator has been chosen to simulate the behavior of the
distributed applications. SimGrid runs on the Mésocentre datacenter of the University of Franche-Comté and also in a virtual machine on a simple laptop. \\
402 \textbf{Step 3}: Fix the criteria which will be used for the future
results comparison and analysis. In the scope of this study, we retain,
on the one hand, the algorithm execution mode (synchronous or asynchronous)
and, on the other hand, the execution time and the number of iterations needed to reach convergence. \\
\textbf{Step 4}: Set up the different grid testbed environments that will be
simulated in the simulator tool to run the program. The following architectures
have been configured in SimGrid: 2x16, 4x8, 4x16, 8x8 and 2x50. The first number
represents the number of clusters in the grid and the second number represents
the number of hosts (processors/cores) in each cluster. The network has been
designed to operate with a bandwidth equal to 10Gbit/s (resp. 1Gbit/s) and a
latency of 8.10$^{-6}$ seconds (resp. 5.10$^{-5}$ seconds) for the intra-cluster links
(resp. the inter-cluster backbone links). \\
\textbf{Step 5}: Conduct extensive and comprehensive testing
within these configurations by varying the key parameters, especially
the CPU power capacity, the network parameters and also the size of the input matrix. \\
\textbf{Step 6}: Collect and analyze the output results.
\subsection{Factors impacting distributed applications performance in a grid environment}
426 When running a distributed application in a computational grid, many factors may
have a strong impact on performance. First of all, the architecture of the
grid itself can obviously influence the performance results of the program.
Theoretically, the performance gain might be significant when the number of clusters
and/or the number of nodes (processors/cores) in each individual cluster increases.
Another important factor impacting the overall performance of the application
is the network configuration. Two main network parameters can drastically modify
the program results:
\begin{itemize}
\item the network bandwidth (bw, in bit/s), also known as ``the data-carrying
capacity'' of the network, which is the maximum amount of data that can transit
from one point to another per unit of time;
\item the network latency (lat, in microseconds), which is the delay between the
time a source starts to send data and the time the destination has finished
receiving it.
\end{itemize}
Besides the network characteristics, another impacting factor is the
application-dependent volume of data exchanged between the nodes of a cluster
and between distant clusters. Large volumes of data can be transferred
between clusters and nodes during the code execution.
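To fix ideas, a simple first-order estimate (not the exact SimGrid model) of the time needed to transfer a message of size $m$ bits over a link with bandwidth $\mathit{bw}$ and latency $\mathit{lat}$ combines both parameters:
\[
T_{\mathit{comm}} \approx \mathit{lat}+\frac{m}{\mathit{bw}},
\]
so the latency term dominates for small messages while the bandwidth term dominates for large ones. SimGrid itself relies on a more detailed flow-level network model, but this estimate is enough to see why both parameters are varied separately in the experiments below.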
In a grid environment, it is common to distinguish, on the one hand, the
``intra-network'', which refers to the links between nodes within a cluster and,
on the other hand, the ``inter-network'', which is the backbone link between
452 clusters. In practice, these two networks have different speeds. The
intra-network generally works like a high-speed local network with a high
bandwidth and very low latency. In contrast, the inter-network connects clusters,
sometimes through heterogeneous network components across the Internet, at a lower
speed. The network between distant clusters might thus be a bottleneck for the
global performance of the application.
\subsection{Comparing GMRES and Multisplitting algorithms in synchronous mode}
In the scope of this paper, our first objective is to demonstrate that
Algo-2 (the Multisplitting method) shows a better performance on a grid
architecture than Algo-1 (the classical GMRES), both running in
\textit{synchronous mode}. Better algorithm performance
here means a smaller number of iterations and a shorter execution time
before reaching convergence. For a systematic study, the experiments
should show that, for various grid parameter values, the
simulator confirms the targeted outcomes, particularly for poor and
slow networks, focusing on the impact of the communication performance
on the chosen class of algorithms.
The following paragraphs present the test conditions, the output results and our comments.\\
\textit{3.a Executing the algorithms on various computational grid
architectures and scaling up the input matrix size}
483 \begin{tabular}{r c }
485 Grid & 2x16, 4x8, 4x16 and 8x8\\ %\hline
Network & N2: bw=1Gbit/s, lat=5.10$^{-5}$s \\ %\hline
Input matrix size & N$_{x}$ x N$_{y}$ x N$_{z}$ = 150 x 150 x 150\\ %\hline
- & N$_{x}$ x N$_{y}$ x N$_{z}$ = 170 x 170 x 170 \\ \hline
\end{tabular}\\
Table 1: Clusters x nodes with N$_{x}$=150 or N$_{x}$=170 \\
%\RCE{I wanted to include the data tables but I think it is unnecessary and would overload the paper}
In this section, we compare the performance of the algorithms running on various grid configurations (2x16, 4x8, 4x16 and 8x8). First, the results in Figure 3 show that, for all grid configurations and a given input matrix size, the number of iterations of the classical GMRES does not vary, whereas it does for the multisplitting method.
502 %\begin{wrapfigure}{l}{100mm}
505 \includegraphics[width=100mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
506 \caption{Cluster x Nodes N$_{x}$=150 and N$_{x}$=170}
The execution time difference between the two algorithms is significant when
comparing different grid architectures, even with the same total number of
processors (for example 2x16 and 4x8, both with 32 processors). The
experiment shows the low sensitivity of the multisplitting method
(compared with the classical GMRES) when scaling up the number of processors in the grid: on average, the execution time of the GMRES (resp. Multisplitting) algorithm is reduced by 40\% (resp. 48\%) when going from 2x16=32 to 8x8=64 processors.
\textit{\\3.b Running on two inter-cluster networks of different speeds\\}
521 \begin{tabular}{r c }
523 Grid & 2x16, 4x8\\ %\hline
Network & N1: bw=10Gbit/s, lat=8.10$^{-6}$s \\ %\hline
- & N2: bw=1Gbit/s, lat=5.10$^{-5}$s \\
Input matrix size & N$_{x}$ x N$_{y}$ x N$_{z}$ = 150 x 150 x 150\\ \hline \\
\end{tabular}\\
Table 2: Clusters x nodes - networks N1 x N2 \\
534 %\begin{wrapfigure}{l}{100mm}
537 \includegraphics[width=100mm]{cluster_x_nodes_n1_x_n2.pdf}
538 \caption{Cluster x Nodes N1 x N2}
The experiments compare the behavior of the algorithms running first on
a fast inter-cluster network (N1) and then on a less performant network (N2).
Figure 4 shows that end users would reduce the execution time
of both algorithms by using a grid architecture like 4x16 or 8x8: the
performance is increased by a factor of 2. The results also show that
when the network speed drops down (12.5\%), the difference between the execution
times can reach more than 25\%.
551 \textit{\\3.c Network latency impacts on performance\\}
555 \begin{tabular}{r c }
557 Grid & 2x16\\ %\hline
Network & N1: bw=1Gbit/s \\ %\hline
Input matrix size & N$_{x}$ x N$_{y}$ x N$_{z}$ = 150 x 150 x 150\\ \hline\\
\end{tabular}\\
Table 3: Network latency impact \\
569 \includegraphics[width=100mm]{network_latency_impact_on_execution_time.pdf}
570 \caption{Network latency impact on execution time}
According to the results in Figure 5, a degradation of the network
latency from 8.10$^{-6}$ to 6.10$^{-5}$ seconds implies an execution time
increase of more than 75\% (resp. 82\%) for the classical
GMRES (resp. multisplitting) algorithm. In addition, it appears that the
multisplitting method tolerates the network latency variations better, with
a lower increase rate of its execution time. Consequently, in the worst case
(lat=6.10$^{-5}$ seconds), the execution time for GMRES is almost double the time for
the multisplitting, even though the performance was of the same order
of magnitude with a latency of 8.10$^{-6}$ seconds.
585 \textit{\\3.d Network bandwidth impacts on performance\\}
589 \begin{tabular}{r c }
591 Grid & 2x16\\ %\hline
Network & N1: bw=1Gbit/s, lat=5.10$^{-5}$s \\ %\hline
Input matrix size & N$_{x}$ x N$_{y}$ x N$_{z}$ = 150 x 150 x 150\\ \hline \\
\end{tabular}\\
Table 4: Network bandwidth impact \\
602 \includegraphics[width=100mm]{network_bandwith_impact_on_execution_time.pdf}
\caption{Network bandwidth impact on execution time}
The results of increasing the network bandwidth show an improvement
of the performance of both algorithms by reducing the execution time (Figure 6). However, in this case again, the multisplitting method presents a better performance in the considered bandwidth interval, with a gain of 40\% compared with only around 24\% for the classical GMRES.
612 \textit{\\3.e Input matrix size impacts on performance\\}
616 \begin{tabular}{r c }
Network & N2: bw=1Gbit/s, lat=5.10$^{-5}$s \\ %\hline
Input matrix size & N$_{x}$ = from 40 to 200\\ \hline \\
\end{tabular}\\
Table 5: Input matrix size impact\\
629 \includegraphics[width=100mm]{pb_size_impact_on_execution_time.pdf}
\caption{Problem size impact on execution time}
In this experiment, the input matrix size has been set from
N$_{x}$ = N$_{y}$ = N$_{z}$ = 40 to 200 side elements, that is from 40$^{3}$ = 64,000 to
200$^{3}$ = 8,000,000 points. Obviously, as shown in Figure 7,
the execution time of both algorithms increases with the
input matrix size. But the interesting results are (i) the
drastic increase (300 times) of the number of iterations needed to reach
convergence for the classical GMRES algorithm when the matrix size
goes beyond N$_{x}$=150, and (ii) the fact that the classical GMRES execution time is also almost
double, from N$_{x}$=140, compared with the convergence time of the
multisplitting method. These findings may greatly help end users to set up
the best and optimal targeted environment for the application
deployment when focusing on the problem size scale-up. Note that the
same test has been done with the grid 2x16 and gives the same conclusion.
648 \textit{\\3.f CPU Power impact on performance\\}
652 \begin{tabular}{r c }
654 Grid & 2x16\\ %\hline
Network & N2: bw=1Gbit/s, lat=5.10$^{-5}$s \\ %\hline
Input matrix size & N$_{x}$ x N$_{y}$ x N$_{z}$ = 150 x 150 x 150\\ \hline
\end{tabular}\\
Table 6: CPU power impact \\
665 \includegraphics[width=100mm]{cpu_power_impact_on_execution_time.pdf}
666 \caption{CPU Power impact on execution time}
Using the flexibility of the SimGrid simulator, we have tried to determine the
impact of the CPU power of the cluster nodes on the algorithms' performance,
by varying this power from 1 to 19 GFlops. The results depicted in Figure 8
confirm the performance gain, around 95\% for both methods,
after adding more powerful CPUs.
676 \subsection{Comparing GMRES in native synchronous mode and
677 Multisplitting algorithms in asynchronous mode}
The previous paragraphs put in evidence the interest of simulating the
behavior of the application before any deployment in a real environment.
In that study, we focused on analyzing the performance by varying the
key factors impacting the results, and we compared
the performance of the two proposed algorithms, both in \textit{synchronous
mode}. In this section, following the same methodology, the goal is to
demonstrate the efficiency of the multisplitting method in
\textit{asynchronous mode} compared with the classical GMRES staying in
\textit{synchronous mode}.
Note that the main interest of using the asynchronous mode for data exchange
is, in contrast to the synchronous mode, that the current computation does not
have to wait for the completion of a communication operation, such as sending
some data to other nodes. Each processor can continue its local
calculation without waiting for the end of the communication. Thus, the
asynchronous mode may theoretically reduce the overall execution time and
improve the algorithm performance.
As stated above, the SimGrid simulator tool has been used to prove the
efficiency of the multisplitting algorithm in asynchronous mode and to find the
best combination of the grid resources (CPU, network, input matrix size,
\ldots) leading to the highest \textit{relative gain} (exec\_time$_{GMRES}$ / exec\_time$_{multisplitting}$) in comparison with the classical GMRES time.
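Written as a formula, the metric reported in the rest of this section is
\[
\mbox{relative gain}=\frac{\mbox{exec\_time}_{GMRES}}{\mbox{exec\_time}_{multisplitting}},
\]
so a value greater than 1 means that the asynchronous multisplitting algorithm is faster than the synchronous GMRES.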
703 The test conditions are summarized in the table below : \\
707 \begin{tabular}{r c }
709 Grid & 2x50 totaling 100 processors\\ %\hline
710 Processors Power & 1 GFlops to 1.5 GFlops\\
Intra-Network & bw=1.25 Gbit/s, lat=5.10$^{-5}$s \\ %\hline
Inter-Network & bw=5 Mbit/s, lat=2.10$^{-2}$s\\
Input matrix size & N$_{x}$ = from 62 to 150\\ %\hline
Residual error precision & 10$^{-5}$ to 10$^{-9}$\\ \hline \\
\end{tabular}\\
Again, comprehensive and extensive tests have been conducted by varying the
CPU power and the network parameters (bandwidth and latency) in the
simulator tool, with different problem sizes. The relative gains greater
than 1 between the two algorithms have been recorded after each step of
the test. Table 7 below reports the best grid configurations,
for which the execution and convergence time of the multisplitting method is more than 2.5 times better than
that of the classical GMRES. The experiments have also demonstrated the relative tolerance of the multisplitting algorithm when using a low-speed network, as usually encountered with distant clusters connected through the Internet.
726 % use the same column width for the following three tables
727 \newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
728 \newenvironment{mytable}[1]{% #1: number of columns for data
729 \renewcommand{\arraystretch}{1.3}%
730 \begin{tabular}{|>{\bfseries}r%
|*{#1}{>{\centering\arraybackslash}p{\mytablew}|}}}{%
\end{tabular}}
737 % \caption{Relative gain of the multisplitting algorithm compared with the classical GMRES}
739 Table 7. Relative gain of the multisplitting algorithm compared with
740 the classical GMRES \\
\begin{mytable}{10}
\hline
bandwidth (Mbit/s) & 5 & 5 & 5 & 5 & 5 & 50 & 50 & 50 & 50 & 50 \\
\hline
latency (ms) & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 \\
\hline
power (GFlops) & 1 & 1 & 1 & 1.5 & 1.5 & 1.5 & 1.5 & 1 & 1.5 & 1.5 \\
\hline
size (N$_x$) & 62 & 62 & 62 & 100 & 100 & 110 & 120 & 130 & 140 & 150 \\
\hline
precision & \np{E-5} & \np{E-8} & \np{E-9} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11}\\
\hline
relative gain & 2.52 & 2.55 & 2.52 & 2.57 & 2.54 & 2.53 & 2.51 & 2.58 & 2.55 & 2.54 \\
\hline
\end{mytable}
769 \section*{Acknowledgment}
771 This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01).
774 \bibliographystyle{wileyj}
775 \bibliography{biblio}