1 \documentclass[times]{cpeauth}
5 %\usepackage[dvips,colorlinks,bookmarksopen,bookmarksnumbered,citecolor=red,urlcolor=red]{hyperref}
7 %\newcommand\BibTeX{{\rmfamily B\kern-.05em \textsc{i\kern-.025em b}\kern-.08em
8 %T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
16 \usepackage[T1]{fontenc}
17 \usepackage[utf8]{inputenc}
18 \usepackage{amsfonts,amssymb}
20 \usepackage{algorithm}
21 \usepackage{algpseudocode}
% Package for intra-document links (tagged PDF)
% and proper display of URLs (command \url{http://example.com})
26 %\usepackage{hyperref}
29 \DeclareUrlCommand\email{\urlstyle{same}}
31 \usepackage[autolanguage,np]{numprint}
33 \renewcommand*\npunitcommand[1]{\text{#1}}
\npthousandthpartsep{}
37 \usepackage[textsize=footnotesize]{todonotes}
39 \newcommand{\AG}[2][inline]{%
40 \todo[color=green!50,#1]{\sffamily\textbf{AG:} #2}\xspace}
41 \newcommand{\RC}[2][inline]{%
42 \todo[color=red!10,#1]{\sffamily\textbf{RC:} #2}\xspace}
43 \newcommand{\LZK}[2][inline]{%
44 \todo[color=blue!10,#1]{\sffamily\textbf{LZK:} #2}\xspace}
45 \newcommand{\RCE}[2][inline]{%
46 \todo[color=yellow!10,#1]{\sffamily\textbf{RCE:} #2}\xspace}
47 \newcommand{\DL}[2][inline]{%
48 \todo[color=pink!10,#1]{\sffamily\textbf{DL:} #2}\xspace}
50 \algnewcommand\algorithmicinput{\textbf{Input:}}
51 \algnewcommand\Input{\item[\algorithmicinput]}
53 \algnewcommand\algorithmicoutput{\textbf{Output:}}
54 \algnewcommand\Output{\item[\algorithmicoutput]}
56 \newcommand{\TOLG}{\mathit{tol_{gmres}}}
57 \newcommand{\MIG}{\mathit{maxit_{gmres}}}
58 \newcommand{\TOLM}{\mathit{tol_{multi}}}
59 \newcommand{\MIM}{\mathit{maxit_{multi}}}
60 \newcommand{\TOLC}{\mathit{tol_{cgls}}}
61 \newcommand{\MIC}{\mathit{maxit_{cgls}}}
64 \usepackage{color, colortbl}
65 \newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
66 \newcolumntype{Z}[1]{>{\raggedleft}m{#1}}
68 \newcolumntype{g}{>{\columncolor{Gray}}c}
69 \definecolor{Gray}{gray}{0.9}
74 \title{Grid-enabled simulation of large-scale linear iterative solvers}
75 %\itshape{\journalnamelc}\footnotemark[2]}
77 \author{Charles Emile Ramamonjisoa\affil{1},
78 David Laiymani\affil{1},
79 Arnaud Giersch\affil{1},
80 Lilia Ziane Khodja\affil{2} and
81 Raphaël Couturier\affil{1}
86 Femto-ST Institute, DISC Department,
87 University of Franche-Comté,
89 Email:~\email{{charles.ramamonjisoa,david.laiymani,arnaud.giersch,raphael.couturier}@univ-fcomte.fr}\break
91 Department of Aerospace \& Mechanical Engineering,
92 Non Linear Computational Mechanics,
93 University of Liege, Liege, Belgium.
94 Email:~\email{l.zianekhodja@ulg.ac.be}
\begin{abstract} The behavior of multi-core applications is always a challenge
to predict, especially with a new architecture for which no experiment has yet
been performed. For some applications, it is difficult, if not impossible, to
build accurate performance models. An alternative solution is therefore to use
a simulation tool which allows us to change many parameters of the architecture
(network bandwidth, latency, number of processors) and to simulate the
execution of such applications. The main contribution of this paper is to show
that the use of a simulation tool (here we have decided to use the SimGrid
toolkit) can really help developers to better tune their applications for a
given multi-core architecture.
108 %In particular we focus our attention on two parallel iterative algorithms based
109 %on the Multisplitting algorithm and we compare them to the GMRES algorithm.
110 %These algorithms are used to solve linear systems. Two different variants of
111 %the Multisplitting are studied: one using synchronoous iterations and another
112 %one with asynchronous iterations.
In this paper we focus our attention on the simulation of iterative algorithms for solving sparse linear systems on large clusters. We study the behavior of the widely used GMRES algorithm and of two variants of the multisplitting algorithm: one using synchronous iterations and the other using asynchronous iterations.
For each algorithm we have simulated different architecture parameters to evaluate their influence on the overall execution time.
117 %The obtain simulated results confirm the real results
118 %previously obtained on different real multi-core architectures and also confirm
119 %the efficiency of the asynchronous Multisplitting algorithm compared to the
120 %synchronous GMRES method.
The simulations confirm the results previously obtained on real multi-core architectures and also confirm the efficiency of the asynchronous multisplitting algorithm on distant clusters compared with the synchronous GMRES algorithm.
125 %\keywords{Algorithm; distributed; iterative; asynchronous; simulation; simgrid;
\keywords{Performance evaluation, simulation, SimGrid, synchronous and asynchronous iterations, multisplitting algorithms}
131 \section{Introduction} The use of multi-core architectures to solve large
132 scientific problems seems to become imperative in many situations.
133 Whatever the scale of these architectures (distributed clusters, computational
134 grids, embedded multi-core,~\ldots) they are generally well adapted to execute
135 complex parallel applications operating on a large amount of data.
Unfortunately, users (from industry or academia) who need such computational
resources may not have easy access to such efficient architectures. The cost
of using the platform and/or the cost of testing and deploying an application
are often very high. In this context it is therefore difficult to optimize a
given application for a given architecture. Thus, in order to reduce the access
cost to these computing resources, it seems very interesting to use a
simulation environment. The advantages are numerous: shorter development life
cycle, code debugging, ability to obtain results quickly\dots{} On the other
hand, the simulation results need to be consistent with the real ones.
145 In this paper we focus on a class of highly efficient parallel algorithms called
146 \emph{iterative algorithms}. The parallel scheme of iterative methods is quite
147 simple. It generally involves the division of the problem into several
148 \emph{blocks} that will be solved in parallel on multiple processing
units. Each processing unit computes an iteration, sends/receives data
dependencies to/from its neighbors, and repeats this process until the
convergence of the method. Several well-known studies demonstrate the
convergence of these algorithms~\cite{BT89,bahi07}. In this processing mode a
task cannot begin a new iteration until it has received the data dependencies
from its neighbors. We say that the iteration computation follows a
\textit{synchronous} scheme. In the asynchronous scheme, a task can compute a
new iteration without having to wait for the data dependencies coming from its
neighbors. Both communications and computations are \textit{asynchronous}, so
that there is no idle time between two iterations due to
synchronizations~\cite{bcvc06:ij}. This model presents some advantages and
drawbacks that we detail in Section~\ref{sec:asynchro}, but even if the number
of iterations required to converge is generally greater than in the synchronous
case, the asynchronous iterative scheme can significantly reduce overall
execution times by suppressing the idle times due to
synchronizations~(see~\cite{bahi07} for more details).
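
To make the distinction concrete, the following sketch contrasts the two schemes in C with MPI. It is only an illustration: the routines \texttt{compute\_iteration()} and \texttt{local\_convergence()} are hypothetical placeholders, and a real asynchronous solver would use a more elaborate convergence detection, as discussed in Section~\ref{sec:asynchro}.

\begin{verbatim}
#include <mpi.h>
#include <stdbool.h>

/* Illustrative placeholders, not part of any existing code base. */
extern void compute_iteration(double *x, const double *halo);
extern bool local_convergence(const double *x);

/* Synchronous scheme: blocking exchange, so a task cannot start
 * iteration k+1 before the dependencies of iteration k have arrived. */
void synchronous_scheme(double *x, double *halo, int left, int right, int n)
{
    while (!local_convergence(x)) {
        compute_iteration(x, halo);
        MPI_Sendrecv(x, n, MPI_DOUBLE, right, 0,
                     halo, n, MPI_DOUBLE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
}

/* Asynchronous scheme: the task iterates with whatever halo data is
 * currently available; sends and receives never block the iteration. */
void asynchronous_scheme(double *x, double *halo, int left, int right, int n)
{
    MPI_Request rreq, sreq = MPI_REQUEST_NULL;
    int done;
    MPI_Irecv(halo, n, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &rreq);
    while (!local_convergence(x)) {
        compute_iteration(x, halo);
        MPI_Test(&rreq, &done, MPI_STATUS_IGNORE);
        if (done)   /* a fresher halo arrived: post the next receive */
            MPI_Irecv(halo, n, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &rreq);
        MPI_Test(&sreq, &done, MPI_STATUS_IGNORE);
        if (done)   /* previous send completed: send the new iterate */
            MPI_Isend(x, n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &sreq);
    }
}
\end{verbatim}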
Nevertheless, in both cases (synchronous or asynchronous) it is very time
consuming to find the optimal configuration and deployment requirements for a
given application on a given multi-core architecture. Finding good resource
allocation policies under varying CPU power, network speeds and loads is very
challenging and labor intensive~\cite{Calheiros:2011:CTM:1951445.1951450}. This
problem is even more difficult for the asynchronous scheme, where a small
variation of the execution platform parameters or of the application data can
lead to very different numbers of iterations to reach convergence, and thus to
very different execution times. In this challenging context we think that the
use of a simulation tool can greatly leverage the possibility of testing
various platform scenarios.
178 The {\bf main contribution of this paper} is to show that the use of a
179 simulation tool (i.e. the SimGrid toolkit~\cite{SimGrid}) in the context of real
180 parallel applications (i.e. large linear system solvers) can help developers to
181 better tune their application for a given multi-core architecture. To show the
182 validity of this approach we first compare the simulated execution of the Krylov
183 multisplitting algorithm with the GMRES (Generalized Minimal Residual)
184 solver~\cite{saad86} in synchronous mode. The simulation results allow us to
185 determine which method to choose given a specified multi-core architecture.
Moreover the results obtained on different simulated multi-core architectures
confirm the real results previously obtained on non-simulated architectures.
More precisely the simulated results are in accordance (i.e. of the same order
of magnitude) with the works presented in~\cite{couturier15}, which show that
190 the synchronous multisplitting method is more efficient than GMRES for large
191 scale clusters. Simulated results also confirm the efficiency of the
asynchronous multisplitting algorithm compared to the synchronous GMRES,
especially in the case of geographically distant clusters.
In this way, and with a simple computing architecture (a laptop), SimGrid
allows us to run a test campaign of real parallel iterative applications on
different simulated multi-core architectures. To our knowledge, there is no
related work on the large-scale multi-core simulation of real synchronous and
asynchronous iterative applications.
This paper is organized as follows. Section~\ref{sec:asynchro} presents the
iteration model we use, and more particularly the asynchronous scheme. In
Section~\ref{sec:simgrid} the SimGrid simulation toolkit is presented.
Section~\ref{sec:04} details the different solvers that we use. Finally our
experimental results are presented in Section~\ref{sec:expe}, followed by some
concluding remarks and perspectives.
209 \section{The asynchronous iteration model and the motivations of our work}
Asynchronous iterative methods have been studied theoretically and practically
for many years. Many methods have been considered and convergence results have
been proved. These methods can be used to solve, in parallel, fixed point
problems (i.e. problems for which the solution is $x^\star=f(x^\star)$). In
practice, asynchronous iterative methods can be used to solve, for example,
linear and non-linear systems of equations or optimization problems; interested
readers are invited to read~\cite{BT89,bahi07}.
Before using an asynchronous iterative method, its convergence must be
studied. Otherwise, the application is not guaranteed to converge. An
algorithm that supports both the synchronous and the asynchronous iteration
models requires very few modifications to be executed in both variants. In
practice, only the communications and the convergence detection are different.
In the synchronous mode, iterations are synchronized, whereas in the
asynchronous one they are not. It should be noticed that non-blocking
communications can be used in both modes. Concerning the convergence detection,
synchronous variants can use a global convergence procedure which acts as a
global synchronization point. In the asynchronous model, the convergence
detection is trickier, since it must not synchronize all the processors.
Interested readers can consult~\cite{myBCCV05c,bahi07,ccl09:ij}.
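
For the synchronous variant, the global convergence procedure can be as simple as a reduction, which then also acts as the global synchronization point. A minimal sketch in C with MPI (the local residual computation is assumed to exist elsewhere):

\begin{verbatim}
#include <mpi.h>

/* Sketch of a synchronous global convergence test: every process
 * contributes its local residual norm, and all of them block until
 * the reduction completes (the synchronization point). */
int global_convergence(double local_residual, double tolerance)
{
    double global_residual;
    MPI_Allreduce(&local_residual, &global_residual, 1,
                  MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
    return global_residual <= tolerance;
}
\end{verbatim}

In the asynchronous model such a blocking reduction is precisely what must be avoided, hence the decentralized detection algorithms of~\cite{myBCCV05c,ccl09:ij}.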
The number of iterations required to reach convergence is generally greater
for the asynchronous scheme (this number depends on the delay of the
messages). Note that this is not the case in the synchronous mode, where the
number of iterations is the same as in the sequential mode. Consequently, the
set of parameters of the platform (number of nodes, power of nodes, inter- and
intra-cluster bandwidth and latency, \ldots) and of the application can
drastically change the number of iterations required to reach convergence. It
follows that asynchronous iterative algorithms are difficult to optimize, since
the financial and deployment costs on large-scale multi-core architectures are
often very high. So, prior to deployment and tests, it seems very promising to
be able to simulate the behavior of asynchronous iterative algorithms. The
challenge is then to show that the results produced by simulation are in
accordance with reality, i.e. of the same order of magnitude. To our knowledge,
there is no study on this question.
SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid,casanova+giersch+legrand+al.2014.versatile} is a discrete event simulation framework to study the behavior of large-scale distributed computing platforms such as Grids, Peer-to-Peer systems, Clouds and High Performance Computing systems. It is widely used to simulate and evaluate heuristics, to prototype applications, or even to assess legacy MPI applications. It is still actively developed by the scientific community and distributed as open source software.
252 %%%%%%%%%%%%%%%%%%%%%%%%%
253 % SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid,casanova+giersch+legrand+al.2014.versatile}
254 % is a simulation framework to study the behavior of large-scale distributed
255 % systems. As its name suggests, it emanates from the grid computing community,
256 % but is nowadays used to study grids, clouds, HPC or peer-to-peer systems. The
257 % early versions of SimGrid date back from 1999, but it is still actively
258 % developed and distributed as an open source software. Today, it is one of the
259 % major generic tools in the field of simulation for large-scale distributed
262 SimGrid provides several programming interfaces: MSG to simulate Concurrent
263 Sequential Processes, SimDAG to simulate DAGs of (parallel) tasks, and SMPI to
264 run real applications written in MPI~\cite{MPI}. Apart from the native C
265 interface, SimGrid provides bindings for the C++, Java, Lua and Ruby programming
266 languages. SMPI is the interface that has been used for the work described in
267 this paper. The SMPI interface implements about \np[\%]{80} of the MPI 2.0
268 standard~\cite{bedaride+degomme+genaud+al.2013.toward}, and supports
applications written in C or Fortran, with little or no modifications (see Section~\ref{sec:04}).
271 Within SimGrid, the execution of a distributed application is simulated by a
single process. The application code is actually executed, but some operations,
273 like communications, are intercepted, and their running time is computed
274 according to the characteristics of the simulated execution platform. The
275 description of this target platform is given as an input for the execution, by
276 means of an XML file. It describes the properties of the platform, such as
277 the computing nodes with their computing power, the interconnection links with
278 their bandwidth and latency, and the routing strategy. The scheduling of the
279 simulated processes, as well as the simulated running time of the application
280 are computed according to these properties.
282 To compute the durations of the operations in the simulated world, and to take
283 into account resource sharing (e.g. bandwidth sharing between competing
284 communications), SimGrid uses a fluid model. This allows users to run relatively fast
285 simulations, while still keeping accurate
286 results~\cite{bedaride+degomme+genaud+al.2013.toward,
287 velho+schnorr+casanova+al.2013.validity}. Moreover, depending on the
simulated application, SimGrid/SMPI makes it possible to skip long-lasting
computations and to only take their duration into account. When the real
computations cannot be skipped, but their results are unimportant for the
simulation outcome, it is also possible to share dynamically allocated data
structures between several simulated processes, and thus to reduce the whole
memory consumption.
293 These two techniques can help to run simulations on a very large scale.
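
As an illustration of the second technique, SMPI exposes allocation macros that map a large array onto shared physical pages, so that the many simulated processes of a single simulation consume the memory of only one copy. A minimal sketch, assuming the \texttt{SMPI\_SHARED\_MALLOC}/\texttt{SMPI\_SHARED\_FREE} macros shipped with recent SimGrid versions:

\begin{verbatim}
#include <smpi.h>

/* Sketch: allocate a large matrix whose values are irrelevant to the
 * simulated timing; SMPI backs the area with shared pages so that the
 * whole simulation pays the memory cost of a single copy. */
void run_kernel(int n)
{
    double *A = SMPI_SHARED_MALLOC((size_t)n * n * sizeof(double));
    /* ... computational kernel using A: only its duration matters ... */
    SMPI_SHARED_FREE(A);
}
\end{verbatim}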
295 The validity of simulations with SimGrid has been asserted by several studies.
296 See, for example, \cite{velho+schnorr+casanova+al.2013.validity} and articles
297 referenced therein for the validity of the network models. Comparisons between
298 real execution of MPI applications on the one hand, and their simulation with
299 SMPI on the other hand, are presented in~\cite{guermouche+renard.2010.first,
300 clauss+stillwell+genaud+al.2011.single,
301 bedaride+degomme+genaud+al.2013.toward}. All these works conclude that
SimGrid is able to simulate quite accurately the real behavior of the applications.
304 %%%%%%%%%%%%%%%%%%%%%%%%%
306 \section{Two-stage multisplitting methods}
308 \subsection{Synchronous and asynchronous two-stage methods for sparse linear systems}
In this paper we focus on two-stage multisplitting methods in both their synchronous and asynchronous versions~\cite{Frommer92,Szyld92,Bru95}. These iterative methods are based on multisplitting methods~\cite{O'leary85,White86,Alefeld97} and use two nested iterations: the outer iteration and the inner iteration. Let us consider the following sparse linear system of $n$ equations in $\mathbb{R}$:
\begin{equation}
Ax=b,
\label{eq:01}
\end{equation}
315 where $A$ is a sparse square and nonsingular matrix, $b$ is the right-hand side and $x$ is the solution of the system. Our work in this paper is restricted to the block Jacobi splitting method. This approach of multisplitting consists in partitioning the matrix $A$ into $L$ horizontal band matrices of order $\frac{n}{L}\times n$ without overlapping (i.e. sub-vectors $\{x_\ell\}_{1\leq\ell\leq L}$ are disjoint). Two-stage multisplitting methods solve the linear system~(\ref{eq:01}) iteratively as follows:
\begin{equation}
x_\ell^{k+1} = A_{\ell\ell}^{-1}(b_\ell - \displaystyle\sum^{L}_{\substack{m=1\\m\neq\ell}}{A_{\ell m}x^k_m}),\mbox{~for~}\ell=1,\ldots,L\mbox{~and~}k=1,2,3,\ldots
\label{eq:02}
\end{equation}
320 where $x_\ell$ are sub-vectors of the solution $x$, $b_\ell$ are the sub-vectors of the right-hand side $b$, and $A_{\ell\ell}$ and $A_{\ell m}$ are diagonal and off-diagonal blocks of matrix $A$ respectively. The iterations of these methods can naturally be computed in parallel such that each processor or cluster of processors is responsible for solving one splitting as a linear sub-system:
\begin{equation}
A_{\ell\ell} x_\ell = c_\ell,\mbox{~for~}\ell=1,\ldots,L,
\label{eq:03}
\end{equation}
where right-hand sides $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m$ are computed using the shared vectors $x_m$. In this paper, we use the well-known iterative method GMRES ({\it Generalized Minimal RESidual})~\cite{saad86} as an inner iteration to approximate the solutions of the different splittings arising from the block Jacobi multisplitting of matrix $A$. The algorithm in Figure~\ref{alg:01} shows the main steps of our block Jacobi two-stage method executed by a cluster of processors. In line~\ref{solve}, the linear sub-system~(\ref{eq:03}) is solved in parallel using the GMRES method, where $\MIG$ and $\TOLG$ are respectively the maximum number of inner iterations and the tolerance threshold for GMRES. The convergence of the two-stage multisplitting methods, based on synchronous or asynchronous iterations, has been studied by many authors, for example~\cite{Bru95,bahi07}.
328 %\begin{algorithm}[t]
329 %\caption{Block Jacobi two-stage multisplitting method}
\begin{figure}[htbp]
\begin{algorithmic}[1]
331 \Input $A_\ell$ (sparse matrix), $b_\ell$ (right-hand side)
332 \Output $x_\ell$ (solution vector)\vspace{0.2cm}
333 \State Set the initial guess $x^0$
334 \For {$k=1,2,3,\ldots$ until convergence}
335 \State $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m^{k-1}$
336 \State $x^k_\ell=Solve_{gmres}(A_{\ell\ell},c_\ell,x^{k-1}_\ell,\MIG,\TOLG)$\label{solve}
337 \State Send $x_\ell^k$ to neighboring clusters\label{send}
338 \State Receive $\{x_m^k\}_{m\neq\ell}$ from neighboring clusters\label{recv}
\EndFor
\end{algorithmic}
\caption{Block Jacobi two-stage multisplitting method}
\label{alg:01}
\end{figure}
In this paper, we propose two two-stage multisplitting algorithms. The first algorithm is based on the asynchronous model, which allows communications to be overlapped by computations and reduces the idle times resulting from the synchronizations. So in the asynchronous mode, our two-stage algorithm uses asynchronous outer iterations and asynchronous communications between clusters. The communications (i.e. lines~\ref{send} and~\ref{recv} in Figure~\ref{alg:01}) are performed by message passing using MPI non-blocking communication routines. The convergence of the asynchronous iterations is detected when all clusters have locally converged:
\begin{equation}
k\geq\MIM\mbox{~or~}\|x_\ell^{k+1}-x_\ell^k\|_{\infty }\leq\TOLM,
\label{eq:04}
\end{equation}
351 where $\MIM$ is the maximum number of outer iterations and $\TOLM$ is the tolerance threshold for the two-stage algorithm.
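
A direct transcription of this local test in C might look as follows (a sketch; the vector layout and names are ours):

\begin{verbatim}
#include <math.h>
#include <stdbool.h>

/* Sketch of the local convergence test: stop when the maximum number
 * of outer iterations is reached, or when the infinity norm of the
 * difference between two successive local iterates drops below the
 * tolerance threshold of the two-stage algorithm. */
bool locally_converged(const double *x_new, const double *x_old, int n,
                       int k, int maxit_multi, double tol_multi)
{
    double norm = 0.0;
    for (int i = 0; i < n; i++) {
        double d = fabs(x_new[i] - x_old[i]);
        if (d > norm)
            norm = d;
    }
    return k >= maxit_multi || norm <= tol_multi;
}
\end{verbatim}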
The second two-stage algorithm is based on synchronous outer iterations. We propose to use a Krylov iteration based on residual minimization to improve the slow convergence of the multisplitting methods. In this case, an $n\times s$ matrix $S$ is built from the solutions issued from the inner iteration:
\begin{equation}
S=[x^1,x^2,\ldots,x^s],~s\ll n.
\label{eq:05}
\end{equation}
Every $s$ outer iterations, the algorithm computes a new approximation $\tilde{x}=S\alpha$ which minimizes the residual:
\begin{equation}
\min_{\alpha\in\mathbb{R}^s}{\|b-AS\alpha\|_2}.
\label{eq:06}
\end{equation}
The algorithm in Figure~\ref{alg:02} includes the procedure of the residual minimization, and the outer iteration is restarted with a new approximation $\tilde{x}$ every $s$ iterations. The least-squares problem~(\ref{eq:06}) is solved in parallel by all clusters using the CGLS method~\cite{Hestenes52}, where $\MIC$ is the maximum number of iterations and $\TOLC$ is the tolerance threshold for this method (line~\ref{cgls} in Figure~\ref{alg:02}).
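
Recall that CGLS amounts to applying the conjugate gradient method to the normal equations associated with~(\ref{eq:06}),
\[
(AS)^T AS\,\alpha=(AS)^T b,
\]
without ever forming the matrix $(AS)^TAS$ explicitly, so that each minimization step only requires products by $AS$ and $(AS)^T$.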
366 %\begin{algorithm}[t]
367 %\caption{Krylov two-stage method using block Jacobi multisplitting}
\begin{figure}[htbp]
\begin{algorithmic}[1]
369 \Input $A_\ell$ (sparse matrix), $b_\ell$ (right-hand side)
370 \Output $x_\ell$ (solution vector)\vspace{0.2cm}
371 \State Set the initial guess $x^0$
372 \For {$k=1,2,3,\ldots$ until convergence}
373 \State $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m^{k-1}$
374 \State $x^k_\ell=Solve_{gmres}(A_{\ell\ell},c_\ell,x^{k-1}_\ell,\MIG,\TOLG)$
\State $S_{\ell,k\mod s}=x_\ell^k$
\If{$k\mod s = 0$}
\State $\alpha = Solve_{cgls}(AS,b,\MIC,\TOLC)$\label{cgls}
\State $\tilde{x_\ell}=S_\ell\alpha$
\State Send $\tilde{x_\ell}$ to neighboring clusters
\Else
\State Send $x_\ell^k$ to neighboring clusters
\EndIf
\State Receive $\{x_m^k\}_{m\neq\ell}$ from neighboring clusters
\EndFor
\end{algorithmic}
\caption{Krylov two-stage method using block Jacobi multisplitting}
\label{alg:02}
\end{figure}
\subsection{Simulation of the two-stage methods using the SimGrid toolkit}
One of our objectives when simulating the application with SimGrid is, as in
real life, to get accurate results (solutions of the problem), but also to
ensure the reproducibility of the tests under the same conditions. According to
our experience, very few modifications are required to adapt an MPI program for
the SimGrid simulator using SMPI. The first modification is to include the SMPI
library and related header files (\texttt{smpi.h}). The second modification is
to remove all global variables, either by replacing them with local variables
or by using the SimGrid selector called \emph{runtime automatic switching}
(\texttt{smpi/privatize\_global\_variables}). Indeed, global variables can
generate side effects between the threads that SimGrid runs in the same process
to simulate the grid environment.
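
In practice these two modifications are small. The following contrived fragment (the variable names are ours) illustrates them; the program is then compiled with \texttt{smpicc} and executed with \texttt{smpirun} together with the platform and hostfile parameters described below:

\begin{verbatim}
/* Before: a global variable, unsafe under SMPI because it would be
 * shared by the threads SimGrid creates inside a single process.
 *   double residual;                                               */

#include <smpi.h>   /* first modification: SMPI header              */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    double residual = 0.0;  /* second modification: local variable  */
    /* ... iterative solver ... */
    MPI_Finalize();
    return 0;
}
\end{verbatim}

Alternatively, the global variables can be left in place and the program run with the \texttt{smpi/privatize\_global\_variables} selector mentioned above.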
\paragraph{SimGrid simulator parameters}
\ \\ \noindent Before running a SimGrid benchmark, many parameters of the
computation platform must be defined. For our experiments, we consider
platforms in which several clusters are geographically distant, so there are
both intra- and inter-cluster communications. In the following, these
parameters are described:
\begin{itemize}
\item hostfile: hosts description file;
\item platform: file describing the platform architecture: clusters (CPU
power, \dots{}), intra-cluster network description, inter-cluster network
(bandwidth $bw$, latency $lat$, \dots{});
\item archi: grid computation description (number of clusters, number of
nodes/processors for each cluster).
\end{itemize}
427 In addition, the following arguments are given to the programs at runtime:
\begin{itemize}
\item maximum number of inner iterations $\MIG$ and outer iterations $\MIM$,
\item inner precision $\TOLG$ and outer precision $\TOLM$,
\item matrix sizes of the 3D Poisson problem: $N_x$, $N_y$ and $N_z$ on axes $x$, $y$ and $z$ respectively,
\item matrix diagonal value, fixed to $6.0$ for the synchronous Krylov multisplitting experiments and $6.2$ for the asynchronous block Jacobi experiments,
\item matrix off-diagonal value, fixed to $-1.0$,
\item number of vectors in matrix $S$ (i.e. value of $s$),
\item maximum number of iterations $\MIC$ and precision $\TOLC$ for the CGLS method,
\item maximum number of iterations and precision for the classical GMRES method,
\item maximum number of restarts for the Arnoldi process in the GMRES method,
\item execution mode: synchronous or asynchronous.
\end{itemize}
It should also be noticed that both solvers have been executed with the SimGrid selector \texttt{--cfg=smpi/running\_power}, which determines the computational power (here 19\,GFlops) of the simulator host machine.
444 %%%%%%%%%%%%%%%%%%%%%%%%%
445 %%%%%%%%%%%%%%%%%%%%%%%%%
447 \section{Experimental Results}
In this section, experiments for both multisplitting algorithms are reported. First, the 3D Poisson problem used in our experiments is described.
452 \subsection{The 3D Poisson problem}
455 We use our two-stage algorithms to solve the well-known Poisson problem $\nabla^2\phi=f$~\cite{Polyanin01}. In three-dimensional Cartesian coordinates in $\mathbb{R}^3$, the problem takes the following form:
\begin{equation}
\frac{\partial^2}{\partial x^2}\phi(x,y,z)+\frac{\partial^2}{\partial y^2}\phi(x,y,z)+\frac{\partial^2}{\partial z^2}\phi(x,y,z)=f(x,y,z)\mbox{~in the domain~}\Omega
\end{equation}
with the boundary condition
\begin{equation}
\phi(x,y,z)=0\mbox{~on the boundary~}\partial\Omega,
\end{equation}
where the real-valued function $\phi(x,y,z)$ is the solution sought, $f(x,y,z)$ is a known function and $\Omega=[0,1]^3$. The 3D discretization of the Laplace operator $\nabla^2$ with the finite difference scheme uses a 7-point stencil on the computational grid. The numerical approximation of the Poisson problem on the three-dimensional grid is iteratively computed as $\phi=\phi^\star$ such that:
\begin{equation}
\begin{array}{rl}
\phi^\star(x,y,z)= & \frac{1}{6}(\phi(x-h,y,z)+\phi(x,y-h,z)+\phi(x,y,z-h)\\
& +\phi(x+h,y,z)+\phi(x,y+h,z)+\phi(x,y,z+h)\\
& -h^2f(x,y,z))
\end{array}
\end{equation}
471 until convergence where $h$ is the grid spacing between two adjacent elements in the 3D computational grid.
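
A sequential C transcription of this update rule, given here only as a sketch (the array layout and function names are ours), makes the 7-point stencil explicit:

\begin{verbatim}
/* Sketch: one sweep of the 7-point stencil update on an
 * (nx+2) x (ny+2) x (nz+2) grid padded with boundary points;
 * h is the grid spacing and IDX flattens 3D coordinates. */
#define IDX(i, j, k) (((i) * (ny + 2) + (j)) * (nz + 2) + (k))

void jacobi_sweep(const double *phi, double *phi_star, const double *f,
                  int nx, int ny, int nz, double h)
{
    for (int i = 1; i <= nx; i++)
        for (int j = 1; j <= ny; j++)
            for (int k = 1; k <= nz; k++)
                phi_star[IDX(i,j,k)] =
                    (phi[IDX(i-1,j,k)] + phi[IDX(i,j-1,k)]
                   + phi[IDX(i,j,k-1)] + phi[IDX(i+1,j,k)]
                   + phi[IDX(i,j+1,k)] + phi[IDX(i,j,k+1)]
                   - h * h * f[IDX(i,j,k)]) / 6.0;
}
\end{verbatim}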
In the parallel context, the 3D Poisson problem is partitioned into $L\times p$ sub-problems, where $L$ is the number of clusters and $p$ is the number of processors in each cluster. We apply a three-dimensional partitioning instead of a row-by-row one in order to reduce the size of the data shared at the sub-problem boundaries. In this case, each processor is in charge of a parallelepipedic block of the problem and has at most six neighbors in the same cluster or in distant clusters, with which it shares data at the boundaries.
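
The six neighbors of a parallelepipedic block can be obtained, for example, from an MPI Cartesian topology; the sketch below is illustrative and not taken from our solver:

\begin{verbatim}
#include <mpi.h>

/* Sketch: build a 3D process grid and retrieve the six neighbors a
 * block exchanges boundary data with (MPI_PROC_NULL at the borders
 * of the domain). */
void find_neighbors(int nprocs, int neighbors[6])
{
    MPI_Comm cart;
    int dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
    MPI_Dims_create(nprocs, 3, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);
    for (int d = 0; d < 3; d++)
        MPI_Cart_shift(cart, d, 1, &neighbors[2*d], &neighbors[2*d+1]);
    MPI_Comm_free(&cart);
}
\end{verbatim}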
475 \subsection{Study setup and simulation methodology}
First, to conduct our study, we propose the following methodology,
which can be reused for any grid-enabled application.\\
480 \textbf{Step 1}: Choose with the end users the class of algorithms or
481 the application to be tested. Numerical parallel iterative algorithms
482 have been chosen for the study in this paper. \\
\textbf{Step 2}: Collect the software materials needed for the experimentation.
In our case, we have two algorithm variants for the resolution of the
3D Poisson problem: (1) the classical GMRES; (2) the multisplitting method. In
addition, the SimGrid simulator has been chosen to simulate the behavior of the
distributed applications. SimGrid runs in a virtual machine on a simple
laptop. \\
\textbf{Step 3}: Fix the criteria which will be used for the comparison and
analysis of the results. In the scope of this study, we retain on the one hand
the algorithm execution mode (synchronous and asynchronous), and on the other
hand the execution time and the number of iterations needed to reach
convergence. \\
\textbf{Step 4}: Set up the different grid testbed environments that will be
simulated in the simulator tool to run the program. The following architectures
have been configured in SimGrid: $2\times16$, $4\times8$, $4\times16$,
$8\times8$ and $2\times50$. The first number represents the number of clusters
in the grid and the second number represents the number of hosts
(processors/cores) in each cluster. The network is designed to operate with a
bandwidth equal to 10\,Gbit/s (resp. 1\,Gbit/s) and a latency of \np{8E-6}\,s
(resp. \np{5E-5}\,s) for the intra-cluster links (resp. inter-cluster backbone
links). \\
\textbf{Step 5}: Conduct extensive and comprehensive testing
within these configurations by varying the key parameters, especially
the CPU power capacity, the network parameters and also the size of the
input matrix. \\
\textbf{Step 6}: Collect and analyze the output results.
\subsection{Factors impacting distributed applications performance in a grid environment}
When running a distributed application in a computational grid, many factors
may have a strong impact on the performance. First of all, the architecture of
the grid itself can obviously influence the performance results of the program.
Theoretically, the performance gain can be important when the number of
clusters and/or the number of nodes (processors/cores) in each individual
cluster increases.
Another important factor impacting the overall performance of the application
is the network configuration. Two main network parameters can drastically
modify the program results:
\begin{itemize}
\item the network bandwidth ($bw$, in bit/s), also known as the data-carrying
capacity of the network, defined as the maximum amount of data that can transit
from one point to another per unit of time;
\item the network latency ($lat$, in microseconds), defined as the delay
between the start of a transmission at the source and the arrival of the data
at the destination.
\end{itemize}
Besides the network characteristics, another impacting factor is the volume of
data exchanged between the nodes of a cluster and between distant clusters.
This parameter is application dependent.
In a grid environment, it is common to distinguish, on the one hand, the
``intra-network'', which refers to the links between nodes within a cluster,
and on the other hand, the ``inter-network'', which is the backbone link
between clusters. In practice, these two networks have different speeds. The
intra-network generally works like a high-speed local network with a high
bandwidth and very low latency. In contrast, the inter-network connects
clusters, sometimes via heterogeneous network components over the Internet, at
a lower speed. The network between distant clusters might therefore be a
bottleneck for the global performance of the application.
545 \subsection{Comparison of GMRES and Krylov Multisplitting algorithms in synchronous mode}
In the scope of this paper, our first objective is to analyze when the Krylov
multisplitting method has better performance than the classical GMRES method.
With a synchronous iterative method, better performance means a smaller number
of iterations and a smaller execution time before reaching convergence. For a
systematic study, the experiments should show, for various grid parameter
values, whether the simulator confirms the targeted outcomes, particularly for
poor and slow networks, focusing on the impact of the communication performance
on the chosen class of algorithms.
The following paragraphs present the test conditions, the output results and our comments.\\
560 \subsubsection{Execution of the algorithms on various computational grid
561 architectures and scaling up the input matrix size}
\begin{table}[!t]
\centering
\begin{tabular}{r c }
\hline
Grid architecture & $2\times16$, $4\times8$, $4\times16$ and $8\times8$\\
Network N2 & $bw$=1\,Gbit/s, $lat$=\np{5E-5}\,s \\
Input matrix size & $N_x\times N_y\times N_z=150\times150\times150$\\
 & $N_x\times N_y\times N_z=170\times170\times170$ \\ \hline
\end{tabular}
\caption{Test conditions: various grid configurations with input matrix sizes $N_x=N_y=N_z=150$ and $170$}
\label{tab:01}
\end{table}
In this section, we analyze the performance of the algorithms running on
various grid configurations ($2\times16$, $4\times8$, $4\times16$ and
$8\times8$). First, the results in Figure~\ref{fig:01} show that, for all grid
configurations, the number of iterations of classical GMRES does not vary for a
given input matrix size; this is not the case for the multisplitting method.
\begin{figure}[!t]
\centering
\includegraphics[width=100mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
\caption{Various grid configurations with input matrix sizes $150^3$ and $170^3$}
\label{fig:01}
\end{figure}
The differences in execution time between the two algorithms are significant
for the different grid architectures, even with the same number of processors
(for example, $2\times16$ and $4\times8$). We can observe the lower sensitivity
of the Krylov multisplitting method (compared with the classical GMRES) when
scaling up the number of processors in the grid: on average, the execution time
of the GMRES (resp. multisplitting) algorithm decreases by $40\%$ (resp.
$48\%$) when running from $2\times16=32$ to $8\times8=64$ processors.
\subsubsection{Running on two different inter-cluster network speeds \\}
\begin{table}[!t]
\centering
\begin{tabular}{r c }
\hline
Grid architecture & $2\times16$, $4\times8$\\
Network N1 & $bw$=10\,Gbit/s, $lat$=\np{8E-6}\,s \\
Network N2 & $bw$=1\,Gbit/s, $lat$=\np{5E-5}\,s \\
Input matrix size & $N_x\times N_y\times N_z=150\times150\times150$\\ \hline
\end{tabular}
\caption{Test conditions: grids $2\times16$ and $4\times8$ with networks N1 vs. N2}
\label{tab:02}
\end{table}
These experiments compare the behavior of the algorithms running on a fast
inter-cluster network (N1) and on a less performant network (N2), as defined in
Table~\ref{tab:02}. Figure~\ref{fig:02} shows that end users will reduce the
execution time for both algorithms when using a grid architecture like
$4\times16$ or $8\times8$: the execution time is divided by about $2$. The
results also depict that when the network speed drops (a variation of 12.5\%),
the difference between the two multisplitting execution times can reach more
than 25\%.
635 %\begin{wrapfigure}{l}{100mm}
\begin{figure}[!t]
\centering
\includegraphics[width=100mm]{cluster_x_nodes_n1_x_n2.pdf}
\caption{Grids $2\times16$ and $4\times8$ with networks N1 vs. N2}
\label{fig:02}
\end{figure}
646 \subsubsection{Network latency impacts on performance}
\begin{table}[!t]
\centering
\begin{tabular}{r c }
\hline
Grid architecture & $2\times16$\\
Network & $bw$=1\,Gbit/s, latency varies \\
Input matrix size & $N_x\times N_y\times N_z=150\times150\times150$\\ \hline
\end{tabular}
\caption{Test conditions: network latency impacts}
\label{tab:03}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=100mm]{network_latency_impact_on_execution_time.pdf}
\caption{Network latency impact on execution time}
\label{fig:03}
\end{figure}
According to the results of Figure~\ref{fig:03}, a degradation of the network
latency from \np{8E-6} to \np{6E-5}\,s implies an execution time increase of
more than $75\%$ (resp. $82\%$) for the classical GMRES (resp. Krylov
multisplitting) algorithm. Even though its relative increase is slightly
higher, the Krylov multisplitting method remains much faster in absolute terms:
in the worst case ($lat=$ \np{6E-5}\,s), the execution time of GMRES is almost
double that of the Krylov multisplitting, whereas both were of the same order
of magnitude with a latency of \np{8E-6}\,s.
682 \subsubsection{Network bandwidth impacts on performance}
\begin{table}[!t]
\centering
\begin{tabular}{r c }
\hline
Grid architecture & $2\times16$\\
Network & bandwidth varies, $lat$=\np{5E-5}\,s \\
Input matrix size & $N_x\times N_y\times N_z=150\times150\times150$\\ \hline
\end{tabular}
\caption{Test conditions: network bandwidth impacts}
\label{tab:04}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=100mm]{network_bandwith_impact_on_execution_time.pdf}
\caption{Network bandwidth impact on execution time}
\label{fig:04}
\end{figure}
Increasing the network bandwidth reduces the execution time and thus improves
the performance of both algorithms (see Figure~\ref{fig:04}). However, the
Krylov multisplitting method benefits more in the considered bandwidth
interval, with a gain of $40\%$, compared to only about $24\%$ for the
classical GMRES.
711 \subsubsection{Input matrix size impacts on performance}
\begin{table}[!t]
\centering
\begin{tabular}{r c }
\hline
Grid architecture & $4\times8$\\
Network N2 & $bw$=1\,Gbit/s, $lat$=\np{5E-5}\,s \\
Input matrix size & $N_x=N_y=N_z$, from 40 to 200\\ \hline
\end{tabular}
\caption{Test conditions: input matrix size impacts}
\label{tab:05}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=100mm]{pb_size_impact_on_execution_time.pdf}
\caption{Problem size impact on execution time}
\label{fig:05}
\end{figure}
In these experiments, the input matrix size has been varied from
$N_x=N_y=N_z=40$ to $200$ side elements, that is from $40^3=\np{64000}$ to
$200^3=\np{8000000}$ points. Obviously, as shown in Figure~\ref{fig:05}, the
execution time of both algorithms increases with the input matrix size. But the
interesting results are:
\begin{itemize}
\item the drastic increase (by a factor of $10$) of the number of iterations
needed by the classical GMRES algorithm to reach convergence when the matrix
size goes beyond $N_x=150$;
\item for $N_x=140$, the execution time of classical GMRES is almost double
that of the Krylov multisplitting method.
\end{itemize}
These findings may greatly help end users to set up the best targeted
environment for the application deployment when focusing on scaling up the
problem size. It should be noticed that the same test has been done with the
$2\times16$ grid, leading to the same conclusion.
751 \subsubsection{CPU Power impacts on performance}
\begin{table}[!t]
\centering
\begin{tabular}{r c }
\hline
Grid architecture & $2\times16$\\
Network N2 & $bw$=1\,Gbit/s, $lat$=\np{5E-5}\,s \\
Input matrix size & $N_x\times N_y\times N_z=150\times150\times150$\\ \hline
\end{tabular}
\caption{Test conditions: CPU power impacts}
\label{tab:06}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=100mm]{cpu_power_impact_on_execution_time.pdf}
\caption{CPU power impact on execution time}
\label{fig:06}
\end{figure}
Using the flexibility of the SimGrid simulator, we have tried to determine the
impact of the CPU power of the cluster nodes on the algorithms performance,
varying it from $1$ to $19$ GFlops. The outputs depicted in Figure~\ref{fig:06}
confirm the performance gain, around $95\%$ for both methods, after adding more
powerful CPUs.
To conclude this series of experiments, SimGrid has allowed us to run many
simulations with many parameter variations. Doing all these experiments on a
real platform would most of the time not be possible. Moreover, the behavior of
both the GMRES and the Krylov multisplitting methods is in accordance with
larger real executions on large-scale supercomputers~\cite{couturier15}.
789 \subsection{Comparing GMRES in native synchronous mode and the multisplitting algorithm in asynchronous mode}
The previous paragraphs have highlighted the interest of simulating the
behavior of the application before any deployment in a real environment. In
this section, following the same methodology, our goal is to compare the
efficiency of the multisplitting method in \textit{asynchronous mode} with the
classical GMRES in \textit{synchronous mode}.
The interest of using an asynchronous algorithm is that there are no more
synchronizations. With geographically distant clusters, this may be essential.
In this case, each processor can compute its iterations freely without any
synchronization with the other processors. Thus, an asynchronous algorithm may
theoretically reduce the overall execution time and can improve the algorithm
performance.
The SimGrid simulator is used here to show the efficiency of the multisplitting
in asynchronous mode and to find the best combination of the grid resources
(CPU, network, input matrix size, \ldots) to get the highest \textit{relative
gain} (exec\_time$_{GMRES}$ / exec\_time$_{multisplitting}$) in comparison with
the classical synchronous GMRES time.
The test conditions are summarized in Table~\ref{tab:07}: \\
\begin{table}[!t]
\centering
\begin{tabular}{r c }
\hline
Grid architecture & $2\times50$ totaling 100 processors\\
Processor power & 1 to 1.5 GFlops\\
Intra-cluster network & $bw$=1.25\,Gbit/s, $lat$=\np{5E-5}\,s \\
Inter-cluster network & $bw$=5\,Mbit/s, $lat$=\np{2E-2}\,s\\
Input matrix size & $N_x=N_y=N_z$, from 62 to 150\\
Residual error precision & \np{E-5} to \np{E-9}\\ \hline
\end{tabular}
\caption{Test conditions: GMRES in synchronous mode vs. Krylov multisplitting in asynchronous mode}
\label{tab:07}
\end{table}
Again, comprehensive and extensive tests have been conducted with different
parameters such as the CPU power, the network parameters (bandwidth and
latency) and with different problem sizes. The relative gains greater than $1$
between the two algorithms have been captured after each step of the test.
Table~\ref{tab:08} reports the best grid configurations allowing the
multisplitting method to be more than $2.5$ times faster than the classical
GMRES. These experiments also show the relative tolerance of the multisplitting
algorithm when using a low speed network, as usually observed with
geographically distant clusters communicating through the Internet.
839 % use the same column width for the following three tables
840 \newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
841 \newenvironment{mytable}[1]{% #1: number of columns for data
842 \renewcommand{\arraystretch}{1.3}%
843 \begin{tabular}{|>{\bfseries}r%
|*{#1}{>{\centering\arraybackslash}p{\mytablew}|}}}{%
\end{tabular}}
\begin{table}[!t]
\centering
\begin{mytable}{10}
\hline
bandwidth (Mbit/s) & 5 & 5 & 5 & 5 & 5 & 50 & 50 & 50 & 50 & 50 \\
\hline
latency (ms) & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 & 20 \\
\hline
power (GFlops) & 1 & 1 & 1 & 1.5 & 1.5 & 1.5 & 1.5 & 1 & 1.5 & 1.5 \\
\hline
size ($N_x=N_y=N_z$) & 62 & 62 & 62 & 100 & 100 & 110 & 120 & 130 & 140 & 150 \\
\hline
precision & \np{E-5} & \np{E-8} & \np{E-9} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11}\\
\hline
relative gain & 2.52 & 2.55 & 2.52 & 2.57 & 2.54 & 2.53 & 2.51 & 2.58 & 2.55 & 2.54 \\
\hline
\end{mytable}
\caption{Relative gain of the multisplitting algorithm compared with the classical GMRES}
\label{tab:08}
\end{table}
884 %\section*{Acknowledgment}
886 This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01).
888 \bibliographystyle{wileyj}
889 \bibliography{biblio}
898 %%% ispell-local-dictionary: "american"