X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/rce2015.git/blobdiff_plain/18767a457a4de2e57af831774829e95aa27adae1..d82538505eacc4b261cdfde4170ad69f2956c048:/paper.tex?ds=sidebyside

diff --git a/paper.tex b/paper.tex
index 397decc..1b7b9eb 100644
--- a/paper.tex
+++ b/paper.tex
@@ -21,7 +21,6 @@
 \usepackage{algpseudocode}
 %\usepackage{amsthm}
 \usepackage{graphicx}
-\usepackage[american]{babel}
 % Extension for intra-document links (tagged PDF)
 % and for correct display of URLs (command \url{http://example.com})
 %\usepackage{hyperref}
@@ -45,6 +44,8 @@
   \todo[color=blue!10,#1]{\sffamily\textbf{LZK:} #2}\xspace}
 \newcommand{\RCE}[2][inline]{%
   \todo[color=yellow!10,#1]{\sffamily\textbf{RCE:} #2}\xspace}
+\newcommand{\DL}[2][inline]{%
+  \todo[color=pink!10,#1]{\sffamily\textbf{DL:} #2}\xspace}

 \algnewcommand\algorithmicinput{\textbf{Input:}}
 \algnewcommand\Input{\item[\algorithmicinput]}
@@ -69,52 +70,56 @@



-\begin{document} \RCE{Titre a confirmer.} \title{Comparative performance
-analysis of simulated grid-enabled numerical iterative algorithms}
+\begin{document}
+\title{Grid-enabled simulation of large-scale linear iterative solvers}
 %\itshape{\journalnamelc}\footnotemark[2]}

-\author{ Charles Emile Ramamonjisoa and
-    David Laiymani and
-    Arnaud Giersch and
-    Lilia Ziane Khodja and
-    Raphaël Couturier
+\author{Charles Emile Ramamonjisoa\affil{1},
+    David Laiymani\affil{1},
+    Arnaud Giersch\affil{1},
+    Lilia Ziane Khodja\affil{2} and
+    Raphaël Couturier\affil{1}
 }

 \address{
-        \centering
-    Femto-ST Institute - DISC Department\\
-    Université de Franche-Comté\\
-    Belfort\\
-    Email: \email{{raphael.couturier,arnaud.giersch,david.laiymani,charles.ramamonjisoa}@univ-fcomte.fr}
+  \affilnum{1}%
+  Femto-ST Institute, DISC Department,
+  University of Franche-Comté,
+  Belfort, France.
+  Email:~\email{{charles.ramamonjisoa,david.laiymani,arnaud.giersch,raphael.couturier}@univ-fcomte.fr}\break
+  \affilnum{2}
+  Department of Aerospace \& Mechanical Engineering,
+  Non Linear Computational Mechanics,
+  University of Liege, Liege, Belgium.
+  Email:~\email{l.zianekhodja@ulg.ac.be}
 }

-%% Lilia Ziane Khodja: Department of Aerospace \& Mechanical Engineering\\ Non Linear Computational Mechanics\\ University of Liege\\ Liege, Belgium. Email: l.zianekhodja@ulg.ac.be
-
 \begin{abstract}   The behavior of multi-core applications is always a challenge
 to predict, especially with a new architecture for which no experiment has been
 performed. With some applications, it is difficult, if not impossible, to build
 accurate performance models. That is why another solution is to use a simulation
 tool which allows us to change many parameters of the architecture (network
 bandwidth, latency, number of processors) and to simulate the execution of such
-applications. We have decided to use SimGrid as it enables to benchmark MPI
-applications.
+applications. The main contribution of this paper is to show that the use of a
+simulation tool (here we have decided to use the SimGrid toolkit) can really
+help developers to better tune their applications for a given multi-core
+architecture.

-In this paper, we focus our attention on two parallel iterative algorithms based
+In particular we focus our attention on two parallel iterative algorithms based
 on the Multisplitting algorithm and we compare them to the GMRES algorithm.
-These algorithms are used to solve libear systems. Two different variants of
+These algorithms are used to solve linear systems. Two different variants of
 the Multisplitting are studied: one using synchronous iterations and another
-one with asynchronous iterations. For each algorithm we have tested different
-parameters to see their influence.  We strongly recommend people interested
-by investing into a new expensive hardware architecture to benchmark
-their applications using a simulation tool before.
-
-
-
+one with asynchronous iterations. For each algorithm we have simulated
+different architecture parameters to evaluate their influence on the overall
+execution time. The obtained simulation results confirm the results previously
+obtained on real multi-core architectures and also confirm the efficiency of
+the asynchronous multisplitting algorithm compared to the synchronous GMRES
+method.
 \end{abstract}

 %\keywords{Algorithm; distributed; iterative; asynchronous; simulation; simgrid;
-%performance}
+%performance}
 \keywords{Performance evaluation, Simulation, SimGrid, Synchronous and asynchronous iterations, Multisplitting algorithms}

 \maketitle

@@ -131,28 +136,28 @@ are often very important. So, in this context it is difficult to optimize a
 given application for a given architecture. In this way and in order to reduce
 the access cost to these computing resources it seems very interesting to use a
 simulation environment. The advantages are numerous: development life cycle,
-code debugging, ability to obtain results quickly,~\ldots. In counterpart, the simulation results need to be consistent with the real ones.
+code debugging, ability to obtain results quickly\dots{} In return, the simulation results need to be consistent with the real ones.

 In this paper we focus on a class of highly efficient parallel algorithms called
 \emph{iterative algorithms}. The parallel scheme of iterative methods is quite
 simple. It generally involves the division of the problem into several
 \emph{blocks} that will be solved in parallel on multiple processing
-units. Each processing unit has to compute an iteration, to send/receive some
+units. Each processing unit has to compute an iteration, to send/receive some
 data dependencies to/from its neighbors and to iterate this process until the
-convergence of the method. Several well-known methods demonstrate the
+convergence of the method. Several well-known studies demonstrate the
 convergence of these algorithms~\cite{BT89,bahi07}. In this processing mode a
 task cannot begin a new iteration while it has not received data dependencies
-from its neighbors. We say that the iteration computation follows a synchronous
-scheme. In the asynchronous scheme a task can compute a new iteration without
-having to wait for the data dependencies coming from its neighbors. Both
-communication and computations are asynchronous inducing that there is no more
-idle time, due to synchronizations, between two iterations~\cite{bcvc06:ij}.
-This model presents some advantages and drawbacks that we detail in
-section~\ref{sec:asynchro} but even if the number of iterations required to
-converge is generally greater than for the synchronous case, it appears that
-the asynchronous iterative scheme can significantly reduce overall execution
-times by suppressing idle times due to synchronizations~(see~\cite{bahi07}
-for more details).
+from its neighbors. We say that the iteration computation follows a
+\textit{synchronous} scheme. In the asynchronous scheme a task can compute a new
+iteration without having to wait for the data dependencies coming from its
+neighbors. Both communication and computations are \textit{asynchronous}
+implying that there is no idle time, due to synchronizations, between two
+iterations~\cite{bcvc06:ij}. This model presents some advantages and drawbacks
+that we detail in section~\ref{sec:asynchro} but even if the number of
+iterations required to converge is generally greater than for the synchronous
+case, it appears that the asynchronous iterative scheme can significantly
+reduce overall execution times by suppressing idle times due to
+synchronizations~(see~\cite{bahi07} for more details).

 Nevertheless, in both cases (synchronous or asynchronous) it is very time
 consuming to find the optimal configuration and deployment requirements for a given
@@ -160,22 +165,30 @@ application on a given multi-core architecture. Finding good resource
 allocation policies under varying CPU power, network speeds and loads is very
 challenging and labor intensive~\cite{Calheiros:2011:CTM:1951445.1951450}. This
 problem is even more difficult for the asynchronous scheme where a small
-parameter variation of the execution platform can lead to very different numbers
-of iterations to reach the converge and so to very different execution times. In
-this challenging context we think that the use of a simulation tool can greatly
-leverage the possibility of testing various platform scenarios.
-
-The main contribution of this paper is to show that the use of a simulation tool
-(i.e. the SimGrid toolkit~\cite{SimGrid}) in the context of real parallel
-applications (i.e. large linear system solvers) can help developers to better
-tune their application for a given multi-core architecture. To show the validity
-of this approach we first compare the simulated execution of the multisplitting
-algorithm with the GMRES (Generalized Minimal Residual)
-solver~\cite{saad86} in synchronous mode. The obtained results on different
-simulated multi-core architectures confirm the real results previously obtained
-on non simulated architectures. We also confirm the efficiency of the
-asynchronous multisplitting algorithm compared to the synchronous GMRES. In
-this way and with a simple computing architecture (a laptop) SimGrid allows us
+parameter variation of the execution platform and of the application data can
+lead to very different numbers of iterations to reach convergence and so to
+very different execution times. In this challenging context we think that the
+use of a simulation tool can greatly leverage the possibility of testing various
+platform scenarios.
+
+The {\bf main contribution of this paper} is to show that the use of a
+simulation tool (i.e. the SimGrid toolkit~\cite{SimGrid}) in the context of real
+parallel applications (i.e. large linear system solvers) can help developers to
+better tune their applications for a given multi-core architecture. To show the
+validity of this approach we first compare the simulated execution of the Krylov
+multisplitting algorithm with the GMRES (Generalized Minimal Residual)
+solver~\cite{saad86} in synchronous mode. The simulation results allow us to
+determine which method to choose for a given multi-core architecture.
+Moreover the results obtained on different simulated multi-core architectures
+confirm the real results previously obtained on non-simulated architectures.
+More precisely the simulated results are in accordance (i.e. with the same order
+of magnitude) with the works presented in~\cite{couturier15}, which show that
+the synchronous multisplitting method is more efficient than GMRES for large
+scale clusters. Simulated results also confirm the efficiency of the
+asynchronous multisplitting algorithm compared to the synchronous GMRES,
+especially in the case of geographically distant clusters.
+
+In this way and with a simple computing architecture (a laptop) SimGrid allows us
 to run a test campaign of real parallel iterative applications on
 different simulated multi-core architectures. To our knowledge, there is no
 related work on the large-scale multi-core simulation of a real synchronous and
@@ -189,7 +202,7 @@ experimental results are presented in section~\ref{sec:expe} followed by some
 concluding remarks and perspectives.


-\section{The asynchronous iteration model}
+\section{The asynchronous iteration model and the motivations of our work}
 \label{sec:asynchro}

 Asynchronous iterative methods have been studied for many years theoretically and
@@ -213,32 +226,99 @@ point. In the asynchronous model, the convergence detection is more tricky as
 it must not synchronize all the processors. Interested readers can
 consult~\cite{myBCCV05c,bahi07,ccl09:ij}.

+The number of iterations required to reach convergence is generally greater
+for the asynchronous scheme (this number depends on the delay of the
+messages). Note that it is not the case in the synchronous mode where the
+number of iterations is the same as in the sequential mode. In this way, the
+set of the parameters of the platform (number of nodes, power of nodes,
+inter- and intra-cluster bandwidth and latency, \ldots) and of the
+application can drastically change the number of iterations required to get the
+convergence. It follows that asynchronous iterative algorithms are difficult to
+optimize since the financial and deployment costs on large scale multi-core
+architectures are often very important. So, prior to deployment and tests it
+seems very promising to be able to simulate the behavior of asynchronous
+iterative algorithms. The problem is then to show that the results produced
+by simulation are in accordance with reality, i.e. of the same order of
+magnitude. To our knowledge, there is no study on this question.
+
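+Before moving to the simulation framework, let us illustrate the difference
+between the two iteration schemes discussed above with an MPI-like C sketch of
+the data exchange of one outer iteration. This skeleton is purely illustrative
+(hypothetical function and variable names, a single neighbor, no convergence
+detection); it is not the actual code of the solvers studied in this paper:
+
+\begin{verbatim}
+/* synchronous scheme: the update cannot start before the
+   data dependencies of the current iteration are exchanged */
+MPI_Sendrecv(x, n, MPI_DOUBLE, neigh, 0,
+             x_neigh, n, MPI_DOUBLE, neigh, 0,
+             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+update_block(x, x_neigh);
+
+/* asynchronous scheme: communications are posted with
+   non-blocking routines and the update proceeds at once,
+   using the most recently received (possibly older) data */
+MPI_Isend(x, n, MPI_DOUBLE, neigh, 0, MPI_COMM_WORLD, &rs);
+MPI_Irecv(x_neigh, n, MPI_DOUBLE, neigh, 0, MPI_COMM_WORLD, &rr);
+update_block(x, x_neigh);   /* no idle time between iterations */
+\end{verbatim}
+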
 \section{SimGrid}
- \label{sec:simgrid}
+\label{sec:simgrid}

+SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid,casanova+giersch+legrand+al.2014.versatile} is a discrete event simulation framework to study the behavior of large-scale distributed computing platforms such as Grids, Peer-to-Peer systems, Clouds and High Performance Computing systems. It is widely used to simulate and evaluate heuristics, prototype applications or even assess legacy MPI applications. It is still actively developed by the scientific community and distributed as open source software.

 %%%%%%%%%%%%%%%%%%%%%%%%%
+% SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid,casanova+giersch+legrand+al.2014.versatile}
+% is a simulation framework to study the behavior of large-scale distributed
+% systems. As its name suggests, it emanates from the grid computing community,
+% but is nowadays used to study grids, clouds, HPC or peer-to-peer systems. The
+% early versions of SimGrid date back from 1999, but it is still actively
+% developed and distributed as an open source software. Today, it is one of the
+% major generic tools in the field of simulation for large-scale distributed
+% systems.
+
+SimGrid provides several programming interfaces: MSG to simulate Concurrent
+Sequential Processes, SimDAG to simulate DAGs of (parallel) tasks, and SMPI to
+run real applications written in MPI~\cite{MPI}. Apart from the native C
+interface, SimGrid provides bindings for the C++, Java, Lua and Ruby programming
+languages. SMPI is the interface that has been used for the work described in
+this paper. The SMPI interface implements about \np[\%]{80} of the MPI 2.0
+standard~\cite{bedaride+degomme+genaud+al.2013.toward}, and supports
+applications written in C or Fortran, with little or no modification (see
+Section~\ref{sec:04.02}).
+
+Within SimGrid, the execution of a distributed application is simulated by a
+single process. The application code is really executed, but some operations,
+like communications, are intercepted, and their running time is computed
+according to the characteristics of the simulated execution platform. The
+description of this target platform is given as an input for the execution, by
+means of an XML file. It describes the properties of the platform, such as
+the computing nodes with their computing power, the interconnection links with
+their bandwidth and latency, and the routing strategy. The scheduling of the
+simulated processes, as well as the simulated running time of the application
+are computed according to these properties.
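+
+As an illustration, a minimal platform file describing a single cluster could
+be sketched as follows (the tags follow the SimGrid~3 platform format, but the
+identifiers and values are purely illustrative and are not those used in our
+experiments; inter-cluster links and routes are declared in a similar way):
+
+\begin{verbatim}
+<?xml version='1.0'?>
+<!DOCTYPE platform SYSTEM
+  "http://simgrid.gforge.inria.fr/simgrid.dtd">
+<platform version="3">
+  <AS id="AS0" routing="Full">
+    <cluster id="cluster1" prefix="node-" suffix=""
+             radical="0-15" power="1Gf"
+             bw="1.25GBps" lat="50us"/>
+  </AS>
+</platform>
+\end{verbatim}
+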
+To compute the durations of the operations in the simulated world, and to take
+into account resource sharing (e.g. bandwidth sharing between competing
+communications), SimGrid uses a fluid model. This allows users to run relatively fast
+simulations, while still keeping accurate
+results~\cite{bedaride+degomme+genaud+al.2013.toward,
+  velho+schnorr+casanova+al.2013.validity}. Moreover, depending on the
+simulated application, SimGrid/SMPI makes it possible to skip long-lasting computations and
+to only take their duration into account. When the real computations cannot be
+skipped, but the results are unimportant for the simulation results, it is
+also possible to share dynamically allocated data structures between
+several simulated processes, and thus to reduce the whole memory consumption.
+These two techniques can help to run simulations on a very large scale.
+
+The validity of simulations with SimGrid has been asserted by several studies.
+See, for example, \cite{velho+schnorr+casanova+al.2013.validity} and articles
+referenced therein for the validity of the network models. Comparisons between
+real executions of MPI applications on the one hand, and their simulation with
+SMPI on the other hand, are presented in~\cite{guermouche+renard.2010.first,
+  clauss+stillwell+genaud+al.2011.single,
+  bedaride+degomme+genaud+al.2013.toward}. All these works conclude that
+SimGrid is able to simulate quite accurately the real behavior of the
+applications.
 %%%%%%%%%%%%%%%%%%%%%%%%%

 \section{Two-stage multisplitting methods}
 \label{sec:04}
 \subsection{Synchronous and asynchronous two-stage methods for sparse linear systems}
 \label{sec:04.01}
-In this paper we focus on two-stage multisplitting methods in their both versions (synchronous and asynchronous)~\cite{Frommer92,Szyld92,Bru95}. These iterative methods are based on multisplitting methods~\cite{O'leary85,White86,Alefeld97} and use two nested iterations: the outer iteration and the inner iteration. Let us consider the following sparse linear system of $n$ equations in $\mathbb{R}$
+In this paper we focus on two-stage multisplitting methods in both their versions (synchronous and asynchronous)~\cite{Frommer92,Szyld92,Bru95}. These iterative methods are based on multisplitting methods~\cite{O'leary85,White86,Alefeld97} and use two nested iterations: the outer iteration and the inner iteration. Let us consider the following sparse linear system of $n$ equations in $\mathbb{R}$:
 \begin{equation}
 Ax=b,
 \label{eq:01}
 \end{equation}
-where $A$ is a sparse square and nonsingular matrix, $b$ is the right-hand side and $x$ is the solution of the system. Our work in this paper is restricted to the block Jacobi splitting method. This approach of multisplitting consists in partitioning the matrix $A$ into $L$ horizontal band matrices of order $\frac{n}{L}\times n$ without overlapping (i.e. sub-vectors $\{x_\ell\}_{1\leq\ell\leq L}$ are disjoint). Two-stage multisplitting methods solve the linear system~(\ref{eq:01}) iteratively as follows
+where $A$ is a sparse square and nonsingular matrix, $b$ is the right-hand side and $x$ is the solution of the system. Our work in this paper is restricted to the block Jacobi splitting method. This approach of multisplitting consists in partitioning the matrix $A$ into $L$ horizontal band matrices of order $\frac{n}{L}\times n$ without overlapping (i.e. sub-vectors $\{x_\ell\}_{1\leq\ell\leq L}$ are disjoint). Two-stage multisplitting methods solve the linear system~(\ref{eq:01}) iteratively as follows:
 \begin{equation}
 x_\ell^{k+1} = A_{\ell\ell}^{-1}(b_\ell - \displaystyle\sum^{L}_{\substack{m=1\\m\neq\ell}}{A_{\ell m}x^k_m}),\mbox{~for~}\ell=1,\ldots,L\mbox{~and~}k=1,2,3,\ldots
 \label{eq:02}
 \end{equation}
-where $x_\ell$ are sub-vectors of the solution $x$, $b_\ell$ are the sub-vectors of the right-hand side $b$, and $A_{\ell\ell}$ and $A_{\ell m}$ are diagonal and off-diagonal blocks of matrix $A$ respectively. The iterations of these methods can naturally be computed in parallel such that each processor or cluster of processors is responsible for solving one splitting as a linear sub-system
+where $x_\ell$ are sub-vectors of the solution $x$, $b_\ell$ are the sub-vectors of the right-hand side $b$, and $A_{\ell\ell}$ and $A_{\ell m}$ are diagonal and off-diagonal blocks of matrix $A$ respectively. The iterations of these methods can naturally be computed in parallel such that each processor or cluster of processors is responsible for solving one splitting as a linear sub-system:
 \begin{equation}
 A_{\ell\ell} x_\ell = c_\ell,\mbox{~for~}\ell=1,\ldots,L,
 \label{eq:03}
 \end{equation}
-where right-hand sides $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m$ are computed using the shared vectors $x_m$. In this paper, we use the well-known iterative method GMRES ({\it Generalized Minimal RESidual})~\cite{saad86} as an inner iteration to approximate the solutions of the different splittings arising from the block Jacobi multisplitting of matrix $A$. The algorithm in Figure~\ref{alg:01} shows the main key points of our block Jacobi two-stage method executed by a cluster of processors. In line~\ref{solve}, the linear sub-system~(\ref{eq:03}) is solved in parallel using GMRES method where $\MIG$ and $\TOLG$ are the maximum number of inner iterations and the tolerance threshold for GMRES respectively. The convergence of the two-stage multisplitting methods, based on synchronous or asynchronous iterations, is studied by many authors for example~\cite{Bru95,bahi07}.
+where right-hand sides $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m$ are computed using the shared vectors $x_m$. In this paper, we use the well-known iterative method GMRES ({\it Generalized Minimal RESidual})~\cite{saad86} as an inner iteration to approximate the solutions of the different splittings arising from the block Jacobi multisplitting of matrix $A$. The algorithm in Figure~\ref{alg:01} shows the main key points of our block Jacobi two-stage method executed by a cluster of processors. In line~\ref{solve}, the linear sub-system~(\ref{eq:03}) is solved in parallel using the GMRES method, where $\MIG$ and $\TOLG$ are the maximum number of inner iterations and the tolerance threshold for GMRES respectively. The convergence of the two-stage multisplitting methods, based on synchronous or asynchronous iterations, has been studied by many authors, see for example~\cite{Bru95,bahi07}.

 \begin{figure}[t]
 %\begin{algorithm}[t]
@@ -259,19 +339,19 @@ where right-hand sides $c_\ell=b_\ell-\sum_{m\neq\ell}A_{\ell m}x_m$ are compute
 %\end{algorithm}
 \end{figure}

-In this paper, we propose two algorithms of two-stage multisplitting methods. The first algorithm is based on the asynchronous model which allows the communications to be overlapped by computations and reduces the idle times resulting from the synchronizations. So in the asynchronous mode, our two-stage algorithm uses asynchronous outer iterations and asynchronous communications between clusters. The communications (i.e. lines~\ref{send} and~\ref{recv} in Figure~\ref{alg:01}) are performed by message passing using MPI non-blocking communication routines. The convergence of the asynchronous iterations is detected when all clusters have locally converged
+In this paper, we propose two two-stage multisplitting algorithms. The first algorithm is based on the asynchronous model which allows communications to be overlapped by computations and reduces the idle times resulting from the synchronizations. So in the asynchronous mode, our two-stage algorithm uses asynchronous outer iterations and asynchronous communications between clusters. The communications (i.e. lines~\ref{send} and~\ref{recv} in Figure~\ref{alg:01}) are performed by message passing using MPI non-blocking communication routines. The convergence of the asynchronous iterations is detected when all clusters have locally converged:
 \begin{equation}
 k\geq\MIM\mbox{~or~}\|x_\ell^{k+1}-x_\ell^k\|_{\infty }\leq\TOLM,
 \label{eq:04}
 \end{equation}
-where $\MIM$ is the maximum number of outer iterations and $\TOLM$ is the tolerance threshold for the two-stage algorithm.
+where $\MIM$ is the maximum number of outer iterations and $\TOLM$ is the tolerance threshold for the two-stage algorithm.

-The second two-stage algorithm is based on synchronous outer iterations. We propose to use the Krylov iteration based on residual minimization to improve the slow convergence of the multisplitting methods. In this case, a $n\times s$ matrix $S$ is set using solutions issued from the inner iteration
+The second two-stage algorithm is based on synchronous outer iterations. We propose to use the Krylov iteration based on residual minimization to improve the slow convergence of the multisplitting methods. In this case, an $n\times s$ matrix $S$ is set using solutions issued from the inner iteration:
 \begin{equation}
 S=[x^1,x^2,\ldots,x^s],~s\ll n.
 \label{eq:05}
 \end{equation}
-At each $s$ outer iterations, the algorithm computes a new approximation $\tilde{x}=S\alpha$ which minimizes the residual
+Every $s$ outer iterations, the algorithm computes a new approximation $\tilde{x}=S\alpha$ which minimizes the residual:
 \begin{equation}
 \min_{\alpha\in\mathbb{R}^s}{\|b-AS\alpha\|_2}.
 \label{eq:06}
 \end{equation}
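+Note that the minimization~(\ref{eq:06}) is a standard least-squares problem
+whose size is independent of $n$: a minimizer $\alpha$ satisfies the $s\times s$
+normal equations
+\begin{equation*}
+(AS)^T AS\alpha=(AS)^T b,
+\end{equation*}
+which the CGLS method solves implicitly, by applying the conjugate gradient
+iteration to the least-squares problem without forming $(AS)^T AS$ explicitly.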
@@ -304,11 +384,11 @@ The algorithm in Figure~\ref{alg:02} includes the procedure of the residual mini
 %\end{algorithm}
 \end{figure}

-\subsection{Simulation of two-stage methods using SimGrid framework}
+\subsection{Simulation of the two-stage methods using the SimGrid toolkit}
 \label{sec:04.02}

 One of our objectives when simulating the application in Simgrid is, as in real
-life, to get accurate results (solutions of the problem) but also ensure the
+life, to get accurate results (solutions of the problem) but also to ensure the
 test reproducibility under the same conditions. According to our experience,
 very few modifications are required to adapt an MPI program for the Simgrid
 simulator using SMPI (Simulator MPI). The first modification is to include SMPI
@@ -316,11 +396,13 @@ libraries and related header files (smpi.h). The second modification is to
 suppress all global variables by replacing them with local variables or using a
 Simgrid selector called "runtime automatic switching"
 (smpi/privatize\_global\_variables). Indeed, global variables can generate side
-effects on runtime between the threads running in the same process, generated by
-Simgrid to simulate the grid environment. \RC{On vire cette phrase ?} \RCE {Si c'est la phrase d'avant sur les threads, je pense qu'on peut la retenir car c'est l'explication du pourquoi Simgrid n'aime pas les variables globales. Si c'est pas bien dit, on peut la reformuler. Si c'est la phrase ci-apres, effectivement, on peut la virer si elle preterais a discussion}The
-last modification on the MPI program pointed out for some cases, the review of
-the sequence of the MPI\_Isend, MPI\_Irecv and MPI\_Waitall instructions which
-might cause an infinite loop.
+effects on runtime between the threads running in the same process and generated by
+Simgrid to simulate the grid environment.
+
+%\RC{Do we drop this sentence?} \RCE{If it is the previous sentence about the threads, I think we can keep it since it explains why Simgrid dislikes global variables. If it is badly phrased, we can reword it. If it is the sentence below, indeed, we can drop it if it could raise discussion}The
+%last modification on the MPI program pointed out for some cases, the review of
+%the sequence of the MPI\_Isend, MPI\_Irecv and MPI\_Waitall instructions which
+%might cause an infinite loop.

 \paragraph{Simgrid Simulator parameters}

@@ -341,21 +423,19 @@ nodes/processors for each cluster).
 In addition, the following arguments are given to the programs at runtime:

 \begin{itemize}
-	\item maximum number of inner and outer iterations;
-	\item inner and outer precisions;
-	\item maximum number of the gmres's restarts in the Arnorldi process;
-	\item maximum number of iterations qnd the tolerance threshold in classical GMRES;
-	\item tolerance threshold for outer and inner-iterations;
-	\item matrix size (N$_{x}$, N$_{y}$ and N$_{z}$) respectively on x, y, z axis;
-	\item matrix diagonal value = 6.0 for synchronous Krylov multisplitting experiments and 6.2 for asynchronous block Jacobi experiments; \RC{CE tu vérifies, je dis ca de tête}
-	\item matrix off-diagonal value;
-	\item execution mode: synchronous or asynchronous;
-	\RCE {C'est ok la liste des arguments du programme mais si Lilia ou toi pouvez preciser pour les arguments pour CGLS ci dessous}
-	\item Size of matrix S;
-	\item Maximum number of iterations and tolerance threshold for CGLS.
+  \item maximum number of inner iterations $\MIG$ and outer iterations $\MIM$,
+  \item inner precision $\TOLG$ and outer precision $\TOLM$,
+  \item matrix sizes of the 3D Poisson problem: N$_{x}$, N$_{y}$ and N$_{z}$ on axis $x$, $y$ and $z$ respectively,
+  \item matrix diagonal value, fixed to $6.0$ for synchronous Krylov multisplitting experiments and to $6.2$ for asynchronous block Jacobi experiments,
+  \item matrix off-diagonal value, fixed to $-1.0$,
+  \item number of vectors in matrix $S$ (i.e. value of $s$),
+  \item maximum number of iterations $\MIC$ and precision $\TOLC$ for the CGLS method,
+  \item maximum number of iterations and precision for the classical GMRES method,
+  \item maximum number of restarts for the Arnoldi process in the GMRES method,
+  \item execution mode: synchronous or asynchronous.
 \end{itemize}

-It should also be noticed that both solvers have been executed with the Simgrid selector
-cfg=smpi/running\_power which determines the computational power (here 19GFlops) of the simulator host machine.
+It should also be noticed that both solvers have been executed with the Simgrid selector \texttt{-cfg=smpi/running\_power} which determines the computational power (here 19 GFlops) of the simulator host machine.
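+For the record, a typical simulated run is launched from the command line in
+the following way (the file and program names are hypothetical, and the flags
+are those of the SMPI toolchain shipped with SimGrid):
+
+\begin{verbatim}
+smpirun -np 64 -platform platform.xml \
+        -hostfile deployment.txt \
+        --cfg=smpi/running_power:19E9 \
+        ./solver <runtime arguments listed above>
+\end{verbatim}
+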
 %%%%%%%%%%%%%%%%%%%%%%%%%
 %%%%%%%%%%%%%%%%%%%%%%%%%

 \section{Experimental Results}
 \label{sec:expe}

-In this section, experiments for both Multisplitting algorithms are reported. First the problem used in our experiments is described.
+In this section, experiments for both Multisplitting algorithms are reported. First the 3D Poisson problem used in our experiments is described.
+
+\subsection{The 3D Poisson problem}
+
-We use our two-stage algorithms to solve the well-known Poisson problem $\nabla^2\phi=f$~\cite{Polyanin01}. In three-dimensional Cartesian coordinates in $\mathbb{R}^3$, the problem takes the following form
+We use our two-stage algorithms to solve the well-known Poisson problem $\nabla^2\phi=f$~\cite{Polyanin01}. In three-dimensional Cartesian coordinates in $\mathbb{R}^3$, the problem takes the following form:
 \begin{equation}
 \frac{\partial^2}{\partial x^2}\phi(x,y,z)+\frac{\partial^2}{\partial y^2}\phi(x,y,z)+\frac{\partial^2}{\partial z^2}\phi(x,y,z)=f(x,y,z)\mbox{~in the domain~}\Omega
 \label{eq:07}
 \end{equation}
-such that
+such that:
 \begin{equation*}
 \phi(x,y,z)=0\mbox{~on the boundary~}\partial\Omega
 \end{equation*}
-where the real-valued function $\phi(x,y,z)$ is the solution sought, $f(x,y,z)$ is a known function and $\Omega=[0,1]^3$. The 3D discretization of the Laplace operator $\nabla^2$ with the finite difference scheme includes 7 points stencil on the computational grid. The numerical approximation of the Poisson problem on three-dimensional grid is repeatedly computed as $\phi=\phi^\star$ such that
+where the real-valued function $\phi(x,y,z)$ is the solution sought, $f(x,y,z)$ is a known function and $\Omega=[0,1]^3$. The 3D discretization of the Laplace operator $\nabla^2$ with the finite difference scheme leads to a 7-point stencil on the computational grid. The numerical approximation of the Poisson problem on the three-dimensional grid is repeatedly computed as $\phi=\phi^\star$ such that:
 \begin{equation}
 \begin{array}{ll}
-\phi^\star(x,y,z)= & \frac{1}{6}(\phi(x-h,y,z)+\phi(x+h,y,z) \\
-                   & +\phi(x,y-h,z)+\phi(x,y+h,z) \\
-                   & +\phi(x,y,z-h)+\phi(x,y,z+h)\\
-                   & -h^2f(x,y,z))
+\phi^\star(x,y,z)=&\frac{1}{6}(\phi(x-h,y,z)+\phi(x,y-h,z)+\phi(x,y,z-h)\\&+\phi(x+h,y,z)+\phi(x,y+h,z)+\phi(x,y,z+h)\\&-h^2f(x,y,z))
 \end{array}
 \label{eq:08}
 \end{equation}
-until convergence where $h$ is the grid spacing between two adjacent elements in the 3D computational grid.
+until convergence, where $h$ is the grid spacing between two adjacent elements in the 3D computational grid.
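+As an illustration, one sweep of this update can be written in C as follows (a
+purely illustrative sequential kernel with hypothetical names, not the actual
+code of our solvers; phi, phi_new and f are (n+2)^3 arrays with a boundary
+layer kept at zero, and h is the grid spacing):
+
+\begin{verbatim}
+#define IDX(i,j,k,n) ((((i)*((n)+2))+(j))*((n)+2)+(k))
+
+void jacobi_sweep(const double *phi, double *phi_new,
+                  const double *f, int n, double h)
+{
+  for (int i = 1; i <= n; i++)
+    for (int j = 1; j <= n; j++)
+      for (int k = 1; k <= n; k++)
+        phi_new[IDX(i,j,k,n)] =
+          (phi[IDX(i-1,j,k,n)] + phi[IDX(i,j-1,k,n)]
+         + phi[IDX(i,j,k-1,n)] + phi[IDX(i+1,j,k,n)]
+         + phi[IDX(i,j+1,k,n)] + phi[IDX(i,j,k+1,n)]
+         - h*h*f[IDX(i,j,k,n)]) / 6.0;
+}
+\end{verbatim}
+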
-In the parallel context, the 3D Poisson problem is partitioned into $L\times p$ sub-problems such that $L$ is the number of clusters and $p$ is the number of processors in each cluster. We apply the three-dimensional partitioning instead of the row-by-row one in order to reduce the size of the data shared at the sub-problems boundaries. In this case, each processor is in charge of parallelepipedic sub-problem and has at most six neighbors in the same cluster or in distant clusters with which it shares data at boundaries.
+In the parallel context, the 3D Poisson problem is partitioned into $L\times p$ sub-problems such that $L$ is the number of clusters and $p$ is the number of processors in each cluster. We apply the three-dimensional partitioning instead of the row-by-row one in order to reduce the size of the data shared at the sub-problem boundaries. In this case, each processor is in charge of a parallelepipedic block of the problem and has at most six neighbors in the same cluster or in distant clusters with which it shares data at boundaries.

-\subsection{Study setup and Simulation Methodology}
+\subsection{Study setup and simulation methodology}

 First, to conduct our study, we propose the following methodology
 which can be reused for any grid-enabled application.\\

@@ -397,10 +477,12 @@ which can be reused for any grid-enabled applications.\\
 the application to be tested. Numerical parallel iterative algorithms have
 been chosen for the study in this paper. \\

-\textbf{Step 2}: Collect the software materials needed for the
-experimentation. In our case, we have two variants algorithms for the
-resolution of the 3D-Poisson problem: (1) using the classical GMRES; (2) and the Multisplitting method. In addition, the Simgrid simulator has been chosen to simulate the behaviors of the
-distributed applications. Simgrid is running on the Mesocentre datacenter in the University of Franche-Comte and also in a virtual machine on a simple laptop. \\
+\textbf{Step 2}: Collect the software materials needed for the experimentation.
+In our case, we have two algorithm variants for the resolution of the
+3D Poisson problem: (1) the classical GMRES, and (2) the Multisplitting
+method. In addition, the Simgrid simulator has been chosen to simulate the
+behaviors of the distributed applications. Simgrid is running in a virtual
+machine on a simple laptop. \\

 \textbf{Step 3}: Fix the criteria which will be used for the future
 results comparison and analysis. In the scope of this study, we retain
@@ -427,13 +509,13 @@ input data. \\
 a grid environment}

 When running a distributed application in a computational grid, many factors may
-have a strong impact on the performances. First of all, the architecture of the
+have a strong impact on the performance. First of all, the architecture of the
 grid itself can obviously influence the performance results of the program. The
 performance gain might be important theoretically when the number of clusters
 and/or the number of nodes (processors/cores) in each individual cluster increase.

-Another important factor impacting the overall performances of the application
+Another important factor impacting the overall performance of the application
 is the network configuration. Two main network parameters can modify drastically
 the program output results:
 \begin{enumerate}
@@ -441,290 +523,310 @@ the program output results:
   capacity" of the network is defined as the maximum of data that can transit
   from one point to another in a unit of time.
 \item the network latency (lat: microsecond) defined as the delay from the
-  start time to send  the data from a source  and the  final time the destination
-  have finished to receive it.
+  sending of a piece of data at the source to its full reception at the
+  destination.
 \end{enumerate}
-Upon  the network  characteristics,  another impacting  factor  is the
-application dependent volume of data exchanged  between the nodes in the cluster
-and  between distant  clusters.  Large volume  of data  can  be transferred and
-transit between the clusters and nodes during the code execution.
+Besides the network characteristics, another impacting factor is the volume of data exchanged between the nodes in the cluster
+and between distant clusters. This parameter is application dependent.

 In a grid environment, it is common to distinguish, on the one hand, the
 "intra-network" which refers to the links between nodes within a cluster and,
 on the other hand, the "inter-network" which is the backbone link between
-clusters. In practice, these two networks have different speeds. The
-intra-network  generally works  like a  high speed  local network  with a  high
-bandwith and  very low  latency. In opposite,  the inter-network connects clusters
-sometime via  heterogeneous networks components  throuth internet with  a lower
-speed. The network between distant clusters might be a bottleneck for the
-global performance of the application.
-\subsection{Comparing GMRES and Multisplitting algorithms in
-synchronous mode}
-
-In the scope of this paper, our first objective is to demonstrate the
-Algo-2 (Multisplitting method) shows a better performance in grid
-architecture compared with Algo-1 (Classical GMRES) both running in
-\textit{synchronous mode}. Better algorithm performance
-should means a less number of iterations output and a less execution time
-before reaching the convergence. For a systematic study, the experiments
-should figure out that, for various grid parameters values, the
-simulator will confirm the targeted outcomes, particularly for poor and
-slow networks, focusing on the impact on the communication performance
-on the chosen class of algorithm.
+ clusters. In practice, these two networks have different speeds.
+ The intra-network generally works like a high speed local network with a
+ high bandwidth and very low latency. In contrast, the inter-network connects
+ clusters, sometimes via heterogeneous network components, through the Internet
+ with a lower speed. The network between distant clusters might be a bottleneck
+ for the global performance of the application.
+
+\subsection{Comparison of GMRES and Krylov Multisplitting algorithms in synchronous mode}
+
+In the scope of this paper, our first objective is to analyze when the Krylov
+Multisplitting method has better performance than the classical GMRES
+method. With a synchronous iterative method, better performance means a
+smaller number of iterations and a shorter execution time before reaching the
+convergence. For a systematic study, the experiments should figure out that,
+for various grid parameter values, the simulator will confirm the targeted
+outcomes, particularly for poor and slow networks, focusing on the impact of
+the communication performance on the chosen class of algorithms.

 The following paragraphs present the test conditions, the output results
 and our comments.\\

-\textit{3.a Executing the algorithms on various computational grid
-architecture and scaling up the input matrix size}
-\\
-
+\subsubsection{Execution of the algorithms on various computational grid
+architectures and scaling up the input matrix size}
+\ \\
 % environment
-\begin{footnotesize}
+
+\begin{table} [ht!]
+\begin{center}
 \begin{tabular}{r c }
  \hline
- Grid                & 2x16, 4x8, 4x16 and 8x8\\ %\hline
+ Grid Architecture   & 2x16, 4x8, 4x16 and 8x8\\ %\hline
  Network             & N2 : bw=1Gbits/s - lat=5.10$^{-5}$ \\ %\hline
  Input matrix size   & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ %\hline
- & N$_{x}$ x N$_{y}$ x N$_{z}$ =170 x 170 x 170    \\ \hline
+                     & N$_{x}$ x N$_{y}$ x N$_{z}$ =170 x 170 x 170 \\ \hline
 \end{tabular}
-Table 1 : Clusters x Nodes with N$_{x}$=150 or N$_{x}$=170 \\
+\caption{Test conditions: various grid configurations with the input matrix size N$_{x}$=150 or N$_{x}$=170 \RC{N2 is not defined..}\RC{Nx is defined, what about Ny and Nz?} \AG{The letter 'x' is not the multiplication symbol. Use \texttt{\textbackslash times}. Same in the text, the figures, etc.}}
+\label{tab:01}
+\end{center}
+\end{table}

-\end{footnotesize}
 %\RCE{I wanted to include the data tables but I think it is useless and would overload the paper}

-In this section, we compare the algorithms performance running on various grid configuration (2x16, 4x8, 4x16 and 8x8). First, the results in figure 3 show for all grid configuration the non-variation of the number of iterations of classical GMRES for a given input matrix size; it is not
-the case for the multisplitting method.
+In this section, we analyze the performance of the algorithms running on various
+grid configurations (2x16, 4x8, 4x16 and 8x8). First, the results in Figure~\ref{fig:01}
+show that, for all grid configurations, the number of iterations of the
+classical GMRES does not vary for a given input matrix size; it is not the case
+for the multisplitting method.
+\RC{CE beware, you did not put labels in your figures, so it is a mess; I am adding some but please double-check...}
+\RC{The figure captions are not explicit enough...}

-%\begin{wrapfigure}{l}{100mm}
 \begin{figure} [ht!]
-\centering
-\includegraphics[width=100mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
-\caption{Cluster x Nodes N$_{x}$=150 and N$_{x}$=170}
-%\label{overflow}}
+  \begin{center}
+    \includegraphics[width=100mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
+  \end{center}
+  \caption{Various grid configurations with the input matrix size N$_{x}$=150 and N$_{x}$=170\RC{idem}
+\AG{Use the point as decimal separator, not the comma. Same in the other figures.}}
+  \label{fig:01}
 \end{figure}
-%\end{wrapfigure}

-The execution time difference between the two algorithms is important when
-comparing between different grid architectures, even with the same number of
-processors (like 2x16 and 4x8 = 32 processors for example). The
-experiment concludes the low sensitivity of the multisplitting method
-(compared with the classical GMRES) when scaling up the number of the processors in the grid: in average, the GMRES (resp. Multisplitting) algorithm performs 40\% better (resp. 48\%) less when running from 2x16=32 to 8x8=64 processors.
+The difference in execution times between the two algorithms is significant for
+different grid architectures, even with the same number of processors (for
+example, 2x16 and 4x8). We can observe the low sensitivity of the Krylov
+multisplitting method (compared with the classical GMRES) when scaling up the
+number of processors in the grid: on average, the GMRES (resp. Multisplitting)
+algorithm performs $40\%$ (resp. $48\%$) better when running from 2x16=32 to
+8x8=64 processors. \RC{not very clear; it is not precise to say that one algorithm performs better than another, according to which criterion?}

-\textit{\\3.b Running on two different speed cluster inter-networks\\}
+\subsubsection{Running on two different inter-cluster network speeds}
+\ \\

-% environment
-\begin{footnotesize}
+\begin{table} [ht!]
+\begin{center}
 \begin{tabular}{r c }
  \hline
- Grid                & 2x16, 4x8\\ %\hline
+ Grid Architecture   & 2x16, 4x8\\ %\hline
  Network             & N1 : bw=10Gbs-lat=8.10$^{-6}$ \\ %\hline
-                     & N2 : bw=1Gbs-lat=5.10$^{-5}$ \\
- Input matrix size   & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ \hline \\
+                     & N2 : bw=1Gbs-lat=5.10$^{-5}$ \\
+ Input matrix size   & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ \hline
 \end{tabular}
-Table 2 : Clusters x Nodes - Networks N1 x N2 \\
-
- \end{footnotesize}
+\caption{Test conditions: grid 2x16 and 4x8 with networks N1 vs N2}
+\label{tab:02}
+\end{center}
+\end{table}

+These experiments compare the behavior of the algorithms running first on a
+fast inter-cluster network (N1) and then on a slower network (N2). \RC{This must be defined before...}
+Figure~\ref{fig:02} shows that end users will reduce the execution time
+for both algorithms when using a grid architecture like 4x16 or 8x8: the
+execution time is reduced by a factor of about $2$. The results also show that
+when the network speed drops down (variation of 12.5\%), the difference between
+the execution times of the two Multisplitting algorithms can reach more than 25\%.
+%\RC{not clear: the difference between what and what?}
+%\DL{not clear}
+%\RCE{Modified}

 %\begin{wrapfigure}{l}{100mm}
 \begin{figure} [ht!]
 \centering
 \includegraphics[width=100mm]{cluster_x_nodes_n1_x_n2.pdf}
-\caption{Cluster x Nodes N1 x N2}
-%\label{overflow}}
+\caption{Grid 2x16 and 4x8 with networks N1 vs N2
+\AG{\np{8E-6} and \np{5E-6} instead of 8E-6 and 5E-6}}
+\label{fig:02}
 \end{figure}
 %\end{wrapfigure}

-The experiments  compare the  behavior of  the algorithms  running first on
-a speed inter- cluster  network (N1) and  also on  a less  performant network (N2).
-Figure 4 shows that end users will gain to  reduce the execution time
-for  both  algorithms  in using  a  grid  architecture  like  4x16 or  8x8: the
-performance was increased  in a factor of 2. The results depict  also that
-when the network speed drops down (12.5\%), the difference between the execution
-times can reach more than 25\%.
-
-\textit{\\3.c Network latency impacts on performance\\}
-
-% environment
-\begin{footnotesize}
+\subsubsection{Network latency impacts on performance}
+\ \\
+\begin{table} [ht!]
+\centering
 \begin{tabular}{r c }
  \hline
- Grid                & 2x16\\ %\hline
+ Grid Architecture   & 2x16\\ %\hline
  Network             & N1 : bw=1Gbs \\ %\hline
- Input matrix size   & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ \hline\\
+ Input matrix size   & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ \hline
 \end{tabular}
-Table 3 : Network latency impact \\
-
-\end{footnotesize}
+\caption{Test conditions: network latency impacts}
+\label{tab:03}
+\end{table}

 \begin{figure} [ht!]
 \centering
 \includegraphics[width=100mm]{network_latency_impact_on_execution_time.pdf}
-\caption{Network latency impact on execution time}
-%\label{overflow}}
+\caption{Network latency impacts on execution time
+\AG{\np{E-6}}}
+\label{fig:03}
 \end{figure}

-According the results in figure 5, degradation of the network
-latency from 8.10$^{-6}$ to 6.10$^{-5}$ implies an absolute time
-increase more than 75\% (resp. 82\%) of the execution for the classical
-GMRES (resp. multisplitting) algorithm. In addition, it appears that the
-multisplitting method tolerates more the network latency variation with
-a less rate increase of the execution time. Consequently, in the worst case (lat=6.10$^{-5
-}$), the execution time for GMRES is almost the double of the time for
-the multisplitting, even though, the performance was on the same order
-of magnitude with a latency of 8.10$^{-6}$.
-
-\textit{\\3.d Network bandwidth impacts on performance\\}
-
-% environment
-\begin{footnotesize}
+According to the results of Figure~\ref{fig:03}, a degradation of the network
+latency from $8.10^{-6}$ to $6.10^{-5}$ implies an absolute execution time
+increase of more than $75\%$ (resp. $82\%$) for the classical GMRES
+(resp. Krylov multisplitting) algorithm. In addition, it appears that the
+Krylov multisplitting method better tolerates network latency variations, with
+a lower rate of increase of its execution time.\RC{The two previous sentences
+  seem contradictory to me....} Consequently, in the worst case ($lat=6.10^{-5
+}$), the execution time for GMRES is almost the double of the time of the
+Krylov multisplitting, even though the performance was of the same order of
+magnitude with a latency of $8.10^{-6}$.

+\subsubsection{Network bandwidth impacts on performance}
+\ \\
+\begin{table} [ht!]
+\centering
 \begin{tabular}{r c }
  \hline
- Grid                & 2x16\\ %\hline
+ Grid Architecture   & 2x16\\ %\hline
  Network             & N1 : bw=1Gbs - lat=5.10$^{-5}$ \\ %\hline
 Input matrix size   & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ \hline
 \end{tabular}
-Table 4 : Network bandwidth impact \\
-
-\end{footnotesize}
+\caption{Test conditions: network bandwidth impacts\RC{What varies here? There is no variation in the table}}
+\label{tab:04}
+\end{table}

 \begin{figure} [ht!]
 \centering
 \includegraphics[width=100mm]{network_bandwith_impact_on_execution_time.pdf}
-\caption{Network bandwith impact on execution time}
-%\label{overflow}
+\caption{Network bandwidth impacts on execution time
+\AG{``Execution time'' with a lowercase 't'. Same in the other figures.}}
+\label{fig:04}
 \end{figure}

+The results of increasing the network bandwidth show the improvement of the
+performance for both algorithms by reducing the execution time (see
+Figure~\ref{fig:04}). However, in this case, the Krylov multisplitting method
+presents a better performance in the considered bandwidth interval with a gain
+of $40\%$, compared with only around $24\%$ for the classical GMRES.

-
-The results of increasing the network bandwidth show the improvement
-of the performance for both of the two algorithms by reducing the execution time (Figure 6). However, and again in this case, the multisplitting method presents a better performance in the considered bandwidth interval with a gain of 40\% which is only around 24\% for classical GMRES.
-
-\textit{\\3.e Input matrix size impacts on performance\\}
-
-% environment
-\begin{footnotesize}
+\subsubsection{Input matrix size impacts on performance}
+\ \\
+\begin{table} [ht!]
+\centering
 \begin{tabular}{r c }
  \hline
- Grid                & 4x8\\ %\hline
- Network             & N2 : bw=1Gbs - lat=5.10$^{-5}$ \\ %\hline
- Input matrix size   & N$_{x}$ = From 40 to 200\\ \hline \\
+ Grid Architecture   & 4x8\\ %\hline
+ Network             & N2 : bw=1Gbs - lat=5.10$^{-5}$ \\
+ Input matrix size   & N$_{x}$ = From 40 to 200\\ \hline
 \end{tabular}
-Table 5 : Input matrix size impact\\
-
-\end{footnotesize}
+\caption{Test conditions: input matrix size impacts}
+\label{tab:05}
+\end{table}

 \begin{figure} [ht!]
 \centering
 \includegraphics[width=100mm]{pb_size_impact_on_execution_time.pdf}
-\caption{Pb size impact on execution time}
-%\label{overflow}}
+\caption{Problem size impacts on execution time}
+\label{fig:05}
 \end{figure}

-In this experimentation, the input matrix size has been set from
-N$_{x}$ = N$_{y}$ = N$_{z}$ = 40 to 200 side elements that is from 40$^{3}$ = 64.000 to
-200$^{3}$ = 8.000.000 points. Obviously, as shown in the figure 7,
-the execution time for the two algorithms convergence increases with the
-iinput matrix size. But the interesting results here direct on (i) the
-drastic increase (300 times) of the number of iterations needed before
-the convergence for the classical GMRES algorithm when the matrix size
-go beyond N$_{x}$=150; (ii) the classical GMRES execution time also almost
-the double from N$_{x}$=140 compared with the convergence time of the
-multisplitting method. These findings may help a lot end users to setup
-the best and the optimal targeted environment for the application
-deployment when focusing on the problem size scale up. Note that the
-same test has been done with the grid 2x16 getting the same conclusion.
-\textit{\\3.f CPU Power impact on performance\\}
+In these experiments, the input matrix size has been set from $N_{x} = N_{y}
+= N_{z} = 40$ to $200$ side elements, that is from $40^{3} = 64,000$ to $200^{3}
+= 8,000,000$ points. Obviously, as shown in Figure~\ref{fig:05}, the execution
+time for both algorithms increases when the input matrix size also increases.
+But the interesting results are:
+\begin{enumerate}
+  \item the drastic increase ($10$ times) of the number of iterations needed to
+    reach the convergence for the classical GMRES algorithm when the matrix size
+    goes beyond $N_{x}=150$; \RC{Still not clear... ok the number of iterations is 10 times larger, but the rest of the sentence does not mean anything}
+\item the classical GMRES execution time, which is almost the double of that of
+  the Krylov multisplitting method for $N_{x}=140$.
+\end{enumerate}

-% environment
-\begin{footnotesize}
+These findings may greatly help end users to set up the best and optimal
+targeted environment for the application deployment when focusing on the
+problem size scale up. It should be noticed that the same test has been done
+with the grid 2x16, leading to the same conclusion.
+
+\subsubsection{CPU Power impacts on performance}
+
+\begin{table} [ht!]
+\centering
 \begin{tabular}{r c }
  \hline
- Grid                & 2x16\\ %\hline
+ Grid Architecture   & 2x16\\ %\hline
  Network             & N2 : bw=1Gbs - lat=5.10$^{-5}$ \\ %\hline
 Input matrix size   & N$_{x}$ x N$_{y}$ x N$_{z}$ = 150 x 150 x 150\\ \hline
 \end{tabular}
-Table 6 : CPU Power impact \\
-
-\end{footnotesize}
-
+\caption{Test conditions: CPU Power impacts}
+\label{tab:06}
+\end{table}

 \begin{figure} [ht!]
 \centering
 \includegraphics[width=100mm]{cpu_power_impact_on_execution_time.pdf}
-\caption{CPU Power impact on execution time}
-%\label{overflow}}
-s\end{figure}
+\caption{CPU Power impacts on execution time}
+\label{fig:06}
+\end{figure}

-Using the Simgrid simulator flexibility, we have tried to determine the
-impact on the algorithms performance in varying the CPU power of the
-clusters nodes from 1 to 19 GFlops. The outputs depicted in the figure 6
-confirm the performance gain, around 95\% for both of the two methods,
-after adding more powerful CPU.
-
-\subsection{Comparing GMRES in native synchronous mode and
-Multisplitting algorithms in asynchronous mode}
-
-The previous paragraphs put in evidence the interests to simulate the
-behavior of the application before any deployment in a real environment.
-We have focused the study on analyzing the performance in varying the
-key factors impacting the results. The study compares
-the performance of the two proposed algorithms both in \textit{synchronous mode
-}. In this section, following the same previous methodology, the goal is to
-demonstrate the efficiency of the multisplitting method in \textit{
-asynchronous mode} compared with the classical GMRES staying in
-\textit{synchronous mode}.
-
-Note that the interest of using the asynchronous mode for data exchange
-is mainly, in opposite of the synchronous mode, the non-wait aspects of
-the current computation after a communication operation like sending
-some data between nodes. Each processor can continue their local
-calculation without waiting for the end of the communication. Thus, the
-asynchronous may theoretically reduce the overall execution time and can
-improve the algorithm performance.
-As stated supra, Simgrid simulator tool has been used to prove the
-efficiency of the multisplitting in asynchronous mode and to find the
-best combination of the grid resources (CPU, Network, input matrix size,
-\ldots ) to get the highest \textit{"relative gain"} (exec\_time$_{GMRES}$ / exec\_time$_{multisplitting}$) in comparison with the classical GMRES time.
-
-
-The test conditions are summarized in the table below : \\
-
-% environment
-\begin{footnotesize}
+Using the Simgrid simulator flexibility, we have tried to determine the impact
+of the CPU power of the cluster nodes on the algorithms performance, by varying
+this power from $1$ to $19$ GFlops. The outputs depicted in Figure~\ref{fig:06}
+confirm the performance gain, around $95\%$ for both methods, when adding more
+powerful CPUs.
+
+\DL{a conclusion is needed on these tests: they confirm the results already
+obtained at real scale. So this is a precious help for developers. No need to
+deploy on a real architecture}
+
+
+\subsection{Comparing GMRES in native synchronous mode and the multisplitting algorithm in asynchronous mode}
+
+The previous paragraphs have highlighted the interest of simulating the behavior
+of the application before any deployment in a real environment. In this
+section, following the same methodology, our goal is to compare the
+efficiency of the multisplitting method in \textit{asynchronous mode} with the
+classical GMRES in \textit{synchronous mode}.
+
+The interest of using an asynchronous algorithm is that synchronizations are no
+longer needed. With geographically distant clusters, this may be essential.
+In this case, each processor can compute its iterations freely without any
+synchronization with the other processors. Thus, the asynchronous scheme may
+theoretically reduce the overall execution time and can improve the algorithm
+performance.
+
+\RC{the following sentence is weird, I do not understand why it comes here}
+In this section, the Simgrid simulator has been successfully used to show
+the efficiency of the multisplitting algorithm in asynchronous mode and to find
+the best combination of the grid resources (CPU, network, input matrix size,
+\ldots) to get the highest \textit{"relative gain"} (exec\_time$_{GMRES}$ /
+exec\_time$_{multisplitting}$) in comparison with the classical GMRES time.
+
+
+The test conditions are summarized in Table~\ref{tab:07}: \\
+
+\begin{table} [ht!]
+\centering
 \begin{tabular}{r c }
  \hline
- Grid                & 2x50 totaling 100 processors\\ %\hline
+ Grid Architecture   & 2x50 totaling 100 processors\\ %\hline
  Processors Power    & 1 GFlops to 1.5 GFlops\\
   Intra-Network      & bw=1.25 Gbits - lat=5.10$^{-5}$ \\ %\hline
   Inter-Network      & bw=5 Mbits - lat=2.10$^{-2}$\\
 Input matrix size   & N$_{x}$ = From 62 to 150\\ %\hline
 Residual error precision & 10$^{-5}$ to 10$^{-9}$\\ \hline
 \end{tabular}
+\caption{Test conditions: GMRES in synchronous mode vs Krylov Multisplitting in asynchronous mode}
+\label{tab:07}
+\end{table}

-Again, comprehensive and extensive tests have been conducted varying the
-CPU power and the network parameters (bandwidth and latency) in the
-simulator tool with different problem size. The relative gains greater
-than 1 between the two algorithms have been captured after each step of
-the test. Table 7 below has recorded the best grid configurations
-allowing the multisplitting method execution time more performant 2.5 times than
-the classical GMRES execution and convergence time. The experimentation has demonstrated the relative multisplitting algorithm tolerance when using a low speed network that we encounter usually with distant clusters thru the internet.
+Again, comprehensive and extensive tests have been conducted with different
+parameters such as the CPU power, the network parameters (bandwidth and latency)
+and with different problem sizes. The relative gains greater than $1$ between the
+two algorithms have been captured after each step of the test. In
+Figure~\ref{fig:07} are reported the best grid configurations allowing
+the multisplitting method to be more than $2.5$ times faster than the
+classical GMRES. These experiments also show the relative tolerance of the
+multisplitting algorithm when using a low speed network as usually observed with
+geographically distant clusters through the Internet.

 % use the same column width for the following three tables
 \newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
 \newenvironment{mytable}[1]{% #1: number of columns for data
 \end{tabular}}


-\begin{table}[!t]
-  \centering
+\begin{figure}[!t]
+\centering
+%\begin{table}
 %  \caption{Relative gain of the multisplitting algorithm compared with the classical GMRES}
 %  \label{"Table 7"}
-Table 7. Relative gain of the multisplitting algorithm compared with
-the classical GMRES \\
-
-  \begin{mytable}{11}
+  \begin{mytable}{11}
     \hline
     bandwidth (Mbit/s)
    & 5     & 5     & 5     & 5     & 5     & 50    & 50    & 50    & 50    & 50 \\
     \hline
    & 2.52  & 2.55  & 2.52  & 2.57  & 2.54  & 2.53  & 2.51  & 2.58  & 2.55  & 2.54 \\
     \hline
   \end{mytable}
-\end{table}
+%\end{table}
  \caption{Relative gain of the multisplitting algorithm compared with the classical GMRES
+\AG{This is a table, not a figure}}
  \label{fig:07}
+\end{figure}


 \section{Conclusion}
 CONCLUSION


-\section*{Acknowledgment}
-
+%\section*{Acknowledgment}
+\ack
 This work is partially funded by the Labex ACTION program (contract
 ANR-11-LABX-01-01).

-
 \bibliographystyle{wileyj}
 \bibliography{biblio}
+

 \end{document}

 %%% Local Variables: