\todo[color=blue!10,#1]{\sffamily\textbf{LZK:} #2}\xspace}
\newcommand{\RC}[2][inline]{%
\todo[color=red!10,#1]{\sffamily\textbf{RC:} #2}\xspace}
\newcommand{\CER}[2][inline]{%
  \todo[color=pink!10,#1]{\sffamily\textbf{CER:} #2}\xspace}
\algnewcommand\algorithmicinput{\textbf{Input:}}
\algnewcommand\Input{\item[\algorithmicinput]}
\newcommand{\MI}{\mathit{MaxIter}}
\begin{document}
\title{Simulation of Asynchronous Iterative Numerical Algorithms Using SimGrid}
\maketitle
\RC{Author order not yet final.}
\begin{abstract}
\AG{IMHO the abstract is incomprehensible and does not make one want to read further.}
In recent years, the large-scale implementation of increasingly complex algorithms in
distributed environments has been hampered by the limits of physical computing resources. The
growing complexity of these applications, combined with a continuous increase of their sizes, leads to
distributed and parallel algorithms requiring significant hardware resources (grid computing, clusters, broadband
networks, etc.) but also a non-negligible CPU execution time. We consider in this paper a class of highly efficient
parallel algorithms called \emph{numerical iterative algorithms} executed in a distributed environment. As their name
suggests, these algorithms solve a given problem by successive iterations ($X_{n+1} = f(X_{n})$) from an initial value
$X_{0}$ to find an approximate value $X^*$ of the solution with a very low residual error. Several well-known methods
demonstrate the convergence of these algorithms~\cite{BT89,Bahi07}.
Parallelization of such algorithms generally involves the division of the problem into several \emph{blocks} that will
be solved in parallel on multiple processing units. The latter communicate their intermediate results before a new
iteration starts, until the approximate solution is reached. These parallel computations can be performed either in
\emph{synchronous} mode, where a new iteration begins only when all node communications are completed,
or in \emph{asynchronous} mode, where processors can continue independently with few or no synchronization points. For
instance, in the \textit{Asynchronous Iterations~-- Asynchronous Communications (AIAC)} model~\cite{bcvc06:ij}, local
computations do not need to wait for required data. Processors can then perform their iterations with the data present
at that time. Even if the number of iterations required before the convergence is generally greater than for the
synchronous case, AIAC algorithms can significantly reduce overall execution times by suppressing idle times due to
synchronizations, especially in a grid computing context (see~\cite{Bahi07} for more details).

Parallel numerical applications (synchronous or asynchronous) may have different
configuration and deployment requirements. Quantifying the performance of their
resource allocation policies and application scheduling algorithms in grid
computing environments, under varying load, CPU power and network speeds, is
very costly, labor intensive and time
consuming~\cite{Calheiros:2011:CTM:1951445.1951450}. The case of AIAC
algorithms is even more problematic since they are very sensitive to the
execution environment context. For instance, variations in the network bandwidth
(intra and inter-clusters), in the number and the power of nodes, in the number
of clusters\dots{} can lead to very different numbers of iterations and so to
very different execution times. Then, it appears that the use of simulation
tools to explore various platform scenarios and to run large numbers of
experiments quickly can be very promising. In this way, using a simulation
environment to execute parallel iterative algorithms is of interest to reduce
the high cost of access to computing resources: (1) during the application
development life cycle and for code debugging, and (2) in production, to get
results in a reasonable execution time with a simulated infrastructure not
accessible with physical resources. Indeed, running distributed asynchronous
iterative algorithms on a large-scale simulated environment makes it possible
to search for the optimal configurations giving the best results, with the
lowest residual error, in the best execution time.

To our knowledge, there is no existing work on the large-scale simulation of a
real AIAC application. The aim of this paper is twofold. First we give a first
approach to the simulation of AIAC algorithms using a simulation tool (i.e.\ the
SimGrid toolkit~\cite{SimGrid}). Second, we confirm the effectiveness of
asynchronous mode algorithms by comparing their performance with the synchronous
mode. More precisely, we have implemented a program for solving large
linear systems of equations with the GMRES (Generalized
Minimal Residual) numerical method~\cite{ref1}. We show that, with minor
modifications of the initial MPI code, the SimGrid toolkit allows us to perform
a test campaign of a real AIAC application on different computing
architectures. The simulated results we obtained are in line with real results
exposed in ??\AG[]{ref?}. SimGrid allowed us to launch the application from a
modest computing infrastructure by simulating different distributed
architectures composed of clusters of nodes interconnected by variable speed
networks. With selected parameters on the network platforms (bandwidth, latency
of the inter-cluster network) and on the cluster architectures (number of
nodes, computing power) in the simulated environment, the experimental results
have demonstrated the convergence of the algorithm within a reasonable time,
compared with the performance in a physical environment. Moreover, within the
simulated environment, the asynchronous mode saved up to \np[\%]{40} of
execution time compared with the synchronous mode.

This article is structured as follows: after this introduction, the next section will give a brief description of
the asynchronous iteration model. Then, the simulation framework SimGrid is presented, with the settings to create various
distributed architectures. The algorithm of the multisplitting method used by GMRES, written with MPI primitives, and
its adaptation to SimGrid with SMPI (Simulated MPI) are detailed in the next section. At last, the experimental results
carried out will be presented before some concluding remarks and future work.
As exposed in the introduction, parallel iterative methods are now widely used in many scientific domains. They can be
classified into three main classes depending on how iterations and communications are managed (for more details readers
can refer to~\cite{bcvc06:ij}). In the \textit{Synchronous Iterations~-- Synchronous Communications (SISC)} model data
are exchanged at the end of each iteration. All the processors must begin the same iteration at the same time and
significant idle times on processors are generated. The \textit{Synchronous Iterations~-- Asynchronous Communications
(SIAC)} model can be compared to the previous one except that data required on another processor are sent asynchronously,
i.e.\ without stopping current computations. This technique makes it possible to partially overlap communications with computations
but unfortunately, the overlapping is only partial and significant idle times remain. It is clear that, in a grid
computing context, where the number of computational nodes is large, heterogeneous and widely distributed, the idle
times generated by synchronizations are very penalizing. One way to overcome this problem is to use the
\textit{Asynchronous Iterations~-- Asynchronous Communications (AIAC)} model. Here, local computations do not need to
wait for required data. Processors can then perform their iterations with the data present at that time. Figure~\ref{fig:aiac}
illustrates this model where the gray blocks represent the computation phases
and the arrows the communications.
With this algorithmic model, the number of iterations required before the
convergence is generally greater than for the two former classes. But, as detailed in~\cite{bcvc06:ij}, AIAC
algorithms can significantly reduce overall execution times by suppressing idle times due to synchronizations especially
in a grid computing context.
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{AIAC.pdf}
  \caption{The Asynchronous Iterations~-- Asynchronous Communications model}
\label{fig:aiac}
\end{figure}
It is very challenging to develop efficient applications for large scale,
heterogeneous and distributed platforms such as computing grids. Researchers and
engineers have to develop techniques for maximizing application performance on
these multi-cluster platforms, by redesigning the applications and/or by using
novel algorithms that can account for the composite and heterogeneous nature of
the platform. Unfortunately, the deployment of such applications on these very
large scale systems is very costly, labor intensive and time consuming. In this
context, it appears that the use of simulation tools to explore various platform
scenarios at will and to run enormous numbers of experiments quickly can be very
promising. Several works\dots{}
\AG{Several works\dots{} what?\\
  Isn't the following paragraph already in the introduction?}
In the context of AIAC algorithms, the use of simulation tools is even more
relevant. Indeed, this class of applications is very sensitive to the execution
environment context. For instance, variations in the network bandwidth (intra
and inter-clusters), in the number and the power of nodes, in the number of
clusters\dots{} can lead to very different numbers of iterations and so to very
different execution times.
\section{SimGrid}
SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid} is a simulation
framework to study the behavior of large-scale distributed systems. As its name
says, it emanates from the grid computing community, but is nowadays used to
study grids, clouds, HPC or peer-to-peer systems. The early versions of SimGrid
date from 1999, but it is still actively developed and distributed as open
source software. Today, it is one of the major generic tools in the field of
simulation for large-scale distributed systems.

SimGrid provides several programming interfaces: MSG to simulate Concurrent
Sequential Processes, SimDAG to simulate DAGs of (parallel) tasks, and SMPI to
run real applications written in MPI~\cite{MPI}. Apart from the native C
interface, SimGrid provides bindings for the C++, Java, Lua and Ruby programming
languages. SMPI is the interface that has been used for the work presented in
this paper. The SMPI interface implements about \np[\%]{80} of the MPI 2.0
standard~\cite{bedaride:hal-00919507}, and supports applications written in C or
Fortran, with little or no modifications.

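As an illustration, consider the following minimal MPI program, in which each
process exchanges its rank with its neighbors on a ring. Under SMPI, the very
same source is, in principle, compiled with the \texttt{smpicc} wrapper and
executed with \texttt{smpirun} on a simulated platform (the file names below
are assumptions of this example):
\begin{verbatim}
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int rank, size, token;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  /* send own rank to the successor, receive from the predecessor */
  MPI_Sendrecv(&rank, 1, MPI_INT, (rank + 1) % size, 0,
               &token, 1, MPI_INT, (rank + size - 1) % size, 0,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  printf("process %d received %d\n", rank, token);
  MPI_Finalize();
  return 0;
}
\end{verbatim}
A typical invocation would then look like \texttt{smpicc ring.c -o ring}
followed by \texttt{smpirun -np 4 -platform platform.xml -hostfile hosts.txt
./ring}, where \texttt{platform.xml} is the XML description of the simulated
platform discussed below.
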
Within SimGrid, the execution of a distributed application is simulated on a
single machine. The application code is really executed, but some operations,
like the communications, are intercepted, and their running time is computed
according to the characteristics of the simulated execution platform. The
description of this target platform is given as an input for the execution, by
means of an XML file. It describes the properties of the platform, such as
the computing nodes with their computing power, the interconnection links with
their bandwidth and latency, and the routing strategy. The simulated running
time of the application is computed according to these properties.

To compute the durations of the operations in the simulated world, and to take
into account resource sharing (e.g.\ bandwidth sharing between competing
communications), SimGrid uses a fluid model. This makes it possible to run
relatively fast simulations, while still keeping accurate
results~\cite{bedaride:hal-00919507,tomacs13}. Moreover, depending on the
simulated application, SimGrid/SMPI allows one to skip long-lasting computations
and to only take their duration into account. When the real computations cannot
be skipped, but their results have no importance for the simulation results,
there is also the possibility to share dynamically allocated data structures
between several simulated processes, and thus to reduce the whole memory
consumption. These two techniques can help to run simulations at a very large scale.
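For instance, a large data structure whose content does not influence the
simulation results could be allocated as in the following minimal sketch,
assuming the \texttt{SMPI\_SHARED\_MALLOC} and \texttt{SMPI\_SHARED\_FREE}
macros and the include path of the SimGrid version at hand:
\begin{verbatim}
#include <smpi/smpi.h>  /* SMPI extensions (assumed include path) */

/* Sketch: the matrix content is irrelevant for the simulated
 * timings, so the same physical memory can be mapped into every
 * simulated process to reduce the overall memory consumption. */
double *allocate_shared_matrix(size_t n)
{
  return SMPI_SHARED_MALLOC(n * n * sizeof(double));
}

void free_shared_matrix(double *m)
{
  SMPI_SHARED_FREE(m);
}
\end{verbatim}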
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Simulation of the multisplitting method}
%Décrire le problème (algo) traité ainsi que le processus d'adaptation à SimGrid.
Let $Ax=b$ be a large sparse system of $n$ linear equations in $\mathbb{R}$, where $A$ is a sparse square and nonsingular matrix, $x$ is the solution vector and $b$ is the right-hand side vector. We use a multisplitting method based on the block Jacobi splitting to solve this linear system on a large scale platform composed of $L$ clusters of processors~\cite{o1985multi}. In this case, we apply a row-by-row splitting without overlapping
\begin{equation*}
  \left(\begin{array}{ccc}
    A_{11} & \cdots & A_{1L} \\
    \vdots & \ddots & \vdots\\
    A_{L1} & \cdots & A_{LL}
  \end{array} \right)
  \times
  \left(\begin{array}{c}
    X_1 \\
    \vdots\\
    X_L
  \end{array} \right)
  =
  \left(\begin{array}{c}
    B_1 \\
    \vdots\\
    B_L
  \end{array} \right)
\end{equation*}
in such a way that successive rows of matrix $A$ and both vectors $x$ and $b$ are assigned to one cluster, where for all $l,m\in\{1,\ldots,L\}$ $A_{lm}$ is a rectangular block of $A$ of size $n_l\times n_m$, $X_l$ and $B_l$ are sub-vectors of $x$ and $b$, respectively, of size $n_l$ each and $\sum_{l} n_l=\sum_{m} n_m=n$.
The multisplitting method proceeds by iteration to solve in parallel the linear system on $L$ clusters of processors, in such a way that each sub-system
\begin{equation}
  \label{eq:4.1}
  \left\{
    \begin{array}{l}
      A_{ll}X_l = Y_l \text{, such that}\\
      Y_l = B_l - \displaystyle\sum_{\substack{m=1\\ m\neq l}}^{L}A_{lm}X_m
    \end{array}
  \right.
\end{equation}
is solved independently by a cluster and communications are required to update the right-hand side sub-vector $Y_l$, such that the sub-vectors $X_m$ represent the data dependencies between the clusters. As each sub-system (\ref{eq:4.1}) is solved in parallel by a cluster of processors, our multisplitting method uses an iterative method as an inner solver, which is easier to parallelize and more scalable than a direct method. In this work, we use the parallel GMRES method~\cite{ref1}, which is one of the most widely used iterative methods.
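Eliminating $Y_l$ in~(\ref{eq:4.1}), each outer iteration of the multisplitting method thus computes
\begin{equation*}
  X_l^{k+1} = A_{ll}^{-1}\biggl(B_l - \sum_{\substack{m=1\\ m\neq l}}^{L}A_{lm}X_m^k\biggr),
  \quad l=1,\ldots,L,
\end{equation*}
where, in asynchronous mode, the sub-vectors $X_m^k$ are replaced by the last received versions of the shared data.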
\For {$k=0,1,2,\ldots$ until the global convergence}
\State Restart outer iteration with $x^0=x^k$
\State Inner iteration: \Call{InnerSolver}{$x^0$, $k+1$}
\State\label{algo:01:send} Send shared elements of $X_l^{k+1}$ to neighboring clusters
\State\label{algo:01:recv} Receive shared elements in $\{X_m^{k+1}\}_{m\neq l}$
\EndFor
\Statex
\Function {InnerSolver}{$x^0$, $k$}
\State Compute local right-hand side $Y_l$:
  \begin{equation*}
    Y_l = B_l - \sum\nolimits^L_{\substack{m=1\\ m\neq l}}A_{lm}X_m^0
  \end{equation*}
\State Solve sub-system $A_{ll}X_l^k=Y_l$ with the parallel GMRES method
\State \Return $X_l^k$
\EndFunction
\label{algo:01}
\end{figure}
The algorithm in Figure~\ref{algo:01} shows the key points of the
multisplitting method to solve a large sparse linear system. This algorithm is
based on an outer-inner iteration method where the parallel synchronous GMRES
method is used to solve the inner iteration. It is executed in parallel by each
cluster of processors. For all $l,m\in\{1,\ldots,L\}$, the matrices and vectors
with the subscript $l$ represent the local data for cluster $l$, while
$\{A_{lm}\}_{m\neq l}$ are off-diagonal matrices of sparse matrix $A$ and
$\{X_m\}_{m\neq l}$ contain vector elements of solution $x$ shared with
neighboring clusters. At every outer iteration $k$, asynchronous communications
are performed between processors of the local cluster and those of distant
clusters (lines~\ref{algo:01:send} and~\ref{algo:01:recv} in
Figure~\ref{algo:01}). The shared vector elements of the solution $x$ are
exchanged by message passing using MPI non-blocking communication routines.
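As a sketch, the exchanges of lines~\ref{algo:01:send} and~\ref{algo:01:recv}
could be written as follows, where \texttt{nb\_neighbors}, \texttt{neighbor},
\texttt{send\_buf}, \texttt{recv\_buf} and \texttt{count} are hypothetical
names used only for this illustration:
\begin{verbatim}
#include <mpi.h>

/* post the sends and receives of the shared vector elements
 * to/from the neighboring clusters (illustrative names) */
void exchange_shared(int nb_neighbors, const int *neighbor,
                     double **send_buf, double **recv_buf,
                     const int *count, MPI_Request *reqs)
{
  int nr = 0;
  for (int i = 0; i < nb_neighbors; i++) {
    MPI_Isend(send_buf[i], count[i], MPI_DOUBLE, neighbor[i], 0,
              MPI_COMM_WORLD, &reqs[nr++]);
    MPI_Irecv(recv_buf[i], count[i], MPI_DOUBLE, neighbor[i], 0,
              MPI_COMM_WORLD, &reqs[nr++]);
  }
  /* synchronous mode: wait until all exchanges are completed */
  MPI_Waitall(nr, reqs, MPI_STATUSES_IGNORE);
}
\end{verbatim}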
\begin{figure}[!t]
\centering
\label{fig:4.1}
\end{figure}
The global convergence of the asynchronous multisplitting solver is detected
when the clusters of processors have all converged locally. We implemented the
global convergence detection process as follows. On each cluster a master
processor is designated (for example the processor with rank 1) and masters of
all clusters are interconnected by a virtual unidirectional ring network (see
Figure~\ref{fig:4.1}). During the resolution, a Boolean token circulates around
the virtual ring from one master processor to another until the global convergence
is achieved. So starting from the cluster with rank 1, each master processor $i$
sets the token to \textit{True} if the local convergence is achieved or to
\textit{False} otherwise, and sends it to master processor $i+1$. Finally, the
global convergence is detected when the master of cluster 1 receives from the
master of cluster $L$ a token set to \textit{True}. In this case, the master of
cluster 1 broadcasts a stop message to the masters of the other clusters. In this
work, the local convergence on each cluster $l$ is detected when the following
condition is satisfied
\begin{equation*}
  (k\geq \MI) \text{ or } (\|X_l^k - X_l^{k+1}\|_{\infty}\leq\epsilon)
\end{equation*}
where $\MI$ is the maximum number of outer iterations and $\epsilon$ is the
tolerance threshold of the error computed between two successive local solutions
$X_l^k$ and $X_l^{k+1}$.
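A minimal sketch of one round of this token circulation on the master
processors could be the following, where \texttt{masters} (a communicator
grouping the $L$ masters) and \texttt{local\_cv} are hypothetical names:
\begin{verbatim}
#include <mpi.h>

/* One round of the Boolean token circulating on the virtual ring;
 * local_cv is the local convergence flag of this cluster.  The
 * returned value is the global decision on rank 0 only, which then
 * broadcasts a stop message to the other masters. */
int global_convergence(MPI_Comm masters, int L, int rank, int local_cv)
{
  int token;
  int next = (rank + 1) % L;      /* successor on the ring   */
  int prev = (rank + L - 1) % L;  /* predecessor on the ring */
  if (rank == 0) {                /* master of cluster 1 starts */
    token = local_cv;
    MPI_Send(&token, 1, MPI_INT, next, 0, masters);
    MPI_Recv(&token, 1, MPI_INT, prev, 0, masters, MPI_STATUS_IGNORE);
  } else {
    MPI_Recv(&token, 1, MPI_INT, prev, 0, masters, MPI_STATUS_IGNORE);
    token = token && local_cv;    /* propagate False on failure */
    MPI_Send(&token, 1, MPI_INT, next, 0, masters);
  }
  return token;
}
\end{verbatim}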
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We did not encounter major blocking problems when adapting the multisplitting algorithm previously described to a simulation environment like SimGrid, apart from some code
debugging. Indeed, apart from the review of the program sequence for asynchronous exchanges between processors within a cluster or between clusters, the algorithm was executed successfully with SMPI and provided identical outputs as those obtained with direct execution under MPI. In synchronous
mode, the execution of the program raised no particular issue, but in asynchronous mode, the sequence of MPI\_Isend, MPI\_Irecv and MPI\_Waitall instructions
had to be reviewed, and the MPI\_Test primitive added, to avoid a memory fault due to an infinite loop resulting from the non-convergence of the algorithm.
\CER{We actually wanted to show the simplicity of adapting the algorithm to SimGrid. The problems described in this paragraph mainly concern the asynchronous mode.}\LZK{OK. I would have preferred a few more details on the adaptation of the asynchronous version.}
Note here that the use of the SMPI functions that optimize the memory footprint and the CPU usage is not recommended when one wants to get realistic results from the simulation.
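As an illustration of this change, the blocking completion of the exchanges
shown in the previous sketch can be replaced in asynchronous mode by a
non-blocking test (here with \texttt{MPI\_Testall}, a variant of
\texttt{MPI\_Test} operating on the whole request array):
\begin{verbatim}
#include <mpi.h>

/* Asynchronous mode: test the pending exchanges without blocking,
 * so that iterations can go on with the data available at that
 * time (a blocking MPI_Waitall could spin forever if the algorithm
 * does not converge).  Returns 1 once all exchanges completed. */
int exchanges_completed(int nr, MPI_Request *reqs)
{
  int flag;
  MPI_Testall(nr, reqs, &flag, MPI_STATUSES_IGNORE);
  return flag;
}
\end{verbatim}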
As mentioned, upon this adaptation, the algorithm is executed as in real life on the simulated environment after the following minor changes. First, all declared
global variables have been moved to local variables in each subroutine. In fact, global variables generate side effects arising from the concurrent accesses to the
shared memory used by the threads simulating each computing unit in the SimGrid architecture. Second, the alignment of certain types of variables such as ``long int'' had
also to be reviewed. Finally, some compilation errors on the MPI\_Waitall and MPI\_Finalize primitives have been fixed with the latest version of SimGrid.
Overall, after a very simple adaptation, the initial MPI program running in the SMPI simulation environment gave the same results as those obtained in a real
environment. After a few modifications, we have successfully executed the code with the GMRES algorithm in synchronous mode, to be compared with the multisplitting method in asynchronous mode.
\section{Experimental results}
When the \textit{real} application runs in the simulation environment and produces the expected results, varying the input
parameters and the program arguments allows us to compare outputs from the code execution. We have noticed from this
study that the results depend on the following parameters:
\begin{itemize}
\item At the network level, we found that the most critical values are the
  bandwidth (bw) and the network latency (lat).
\item Hosts power (GFlops) can also influence the results.
\item Finally, when submitting job batches for execution, the argument values
  passed to the program, like the maximum number of iterations or the
  \textit{external} precision, are critical. They ensure not only the
  convergence of the algorithm but also the main objective of the
  experimentation: obtaining an execution time in asynchronous mode lower than
  in synchronous mode. The ratio between the execution time in synchronous mode
  and the execution time in asynchronous mode is defined as the \emph{relative
  gain} (see the definition below). So, our objective running the algorithm in
  SimGrid is to obtain a relative gain greater than 1.
\end{itemize}
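In other words, the relative gain reported in the tables below is
\begin{equation*}
  \text{relative gain} = \frac{T_{\text{sync}}}{T_{\text{async}}},
\end{equation*}
where $T_{\text{sync}}$ and $T_{\text{async}}$ are the execution times in
synchronous and asynchronous mode, respectively.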
\LZK{Proposals to replace the term ``speedup'': acceleration ratio or relative gain}
\CER{Done. Consequently, the tables and the comments have been modified as well.}
A priori, obtaining a relative gain greater than 1 would be difficult in a local
area network configuration, where the synchronous mode takes advantage of the
rapid exchange of information on such high-speed links. Thus, the methodology
adopted was to launch the application on a clustered network. In this last
configuration, degrading the inter-cluster network performance
\textit{penalizes} the synchronous mode, making it possible to get a relative
gain greater than 1. This action simulates the case of distant clusters linked
with a long-distance network such as the Internet.

In this paper, we solve the 3D Poisson problem whose mathematical model is
\begin{equation}
\left\{
\begin{array}{l}
\nabla^2 u = f \text{~in~} \Omega \\
u =0 \text{~on~} \Gamma =\partial\Omega
\end{array}
\right.
\label{eq:02}
\end{equation}
where $\nabla^2$ is the Laplace operator, $f$ and $u$ are real-valued functions, and $\Omega=[0,1]^3$. The spatial discretization with a finite difference scheme reduces problem~(\ref{eq:02}) to a system of sparse linear equations. The general iteration scheme of our multisplitting method in a 3D domain using a seven-point stencil can be written as
\begin{equation}
\begin{array}{ll}
u^{k+1}(x,y,z)= & u^k(x,y,z) - \frac{1}{6}\times\\
                & (u^k(x-1,y,z) + u^k(x+1,y,z) + \\
                & u^k(x,y-1,z) + u^k(x,y+1,z) + \\
                & u^k(x,y,z-1) + u^k(x,y,z+1)),
\end{array}
\label{eq:03}
\end{equation}
where the iteration matrix $A$ of size $N_x\times N_y\times N_z$ of the discretized linear system is sparse, symmetric and positive definite.
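To make the scheme concrete, a single Jacobi sweep over a local sub-domain
could be sketched as follows, assuming a linearized array with a one-point
ghost layer and the matrix of the experiments (diagonal value \np{6.0},
off-diagonal values \np{-1.0}), so that the pointwise update reads
$u^{k+1} = (b + \text{sum of the six neighbors})/6$; the array layout and
function name are assumptions of this example:
\begin{verbatim}
/* index into a (nx+2) x (ny+2) x (nz+2) array with ghost layer */
#define IDX(x, y, z) (((z) * (ny + 2) + (y)) * (nx + 2) + (x))

/* one Jacobi sweep over the nx*ny*nz interior points */
void jacobi_sweep(int nx, int ny, int nz,
                  const double *u, const double *b, double *u_new)
{
  for (int z = 1; z <= nz; z++)
    for (int y = 1; y <= ny; y++)
      for (int x = 1; x <= nx; x++)
        u_new[IDX(x, y, z)] = (b[IDX(x, y, z)]
          + u[IDX(x - 1, y, z)] + u[IDX(x + 1, y, z)]
          + u[IDX(x, y - 1, z)] + u[IDX(x, y + 1, z)]
          + u[IDX(x, y, z - 1)] + u[IDX(x, y, z + 1)]) / 6.0;
}
\end{verbatim}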
The parallel solving of the 3D Poisson problem with our multisplitting method requires a data partitioning of the problem between clusters and between processors within a cluster. We have chosen the 3D partitioning instead of the row-by-row partitioning in order to reduce the data exchanges at sub-domain boundaries. Figure~\ref{fig:4.2} shows an example of the data partitioning of the 3D Poisson problem between two clusters of processors, where each sub-problem is assigned to a processor. In this context, a processor has at most six neighbors within a cluster or in distant clusters with which it shares data at sub-domain boundaries.
\begin{figure}[!t]
\centering
  \includegraphics[width=80mm,keepaspectratio]{partition}
\caption{Example of the 3D data partitioning between two clusters of processors.}
\label{fig:4.2}
\end{figure}
As a first step, the algorithm was run on a network consisting of two clusters
containing 50 hosts each, totaling 100 hosts. Various combinations of the above
factors have provided the results shown in Table~\ref{tab.cluster.2x50} with a
matrix size ranging from $N_x = N_y = N_z = \text{62}$ to 171 elements or from
$\text{62}^\text{3} = \text{\np{238328}}$ to $\text{171}^\text{3} =
\text{\np{5000211}}$ entries.

% use the same column width for the following three tables
\newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
\newenvironment{mytable}[1]{% #1: number of columns for data
  \renewcommand{\arraystretch}{1.3}%
  \begin{tabular}{|>{\bfseries}r%
                  |*{#1}{>{\centering\arraybackslash}p{\mytablew}|}}}{%
    \end{tabular}}
\begin{table}[!t]
\centering
\caption{2 clusters, each with 50 nodes}
\label{tab.cluster.2x50}
  \begin{mytable}{6}
    \hline
    bw (Mbit/s)
    & 5 & 5 & 5 & 5 & 5 & 50 \\
    \hline
    lat
    & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 \\
    \hline
    power (GFlops)
    & 1 & 1 & 1 & 1.5 & 1.5 & 1.5 \\
    \hline
    size
    & 62 & 62 & 62 & 100 & 100 & 110 \\
    \hline
    Prec/Eprec
    & \np{E-5} & \np{E-8} & \np{E-9} & \np{E-11} & \np{E-11} & \np{E-11} \\
    \hline
    Relative gain
    & 2.52 & 2.55 & 2.52 & 2.57 & 2.54 & 2.53 \\
    \hline
  \end{mytable}

  \smallskip

  \begin{mytable}{6}
    \hline
    bw (Mbit/s)
    & 50 & 50 & 50 & 50 & 10 & 10 \\
    \hline
    lat
    & 0.02 & 0.02 & 0.02 & 0.02 & 0.03 & 0.01 \\
    \hline
    power (GFlops)
    & 1.5 & 1.5 & 1.5 & 1.5 & 1 & 1.5 \\
    \hline
    size
    & 120 & 130 & 140 & 150 & 171 & 171 \\
    \hline
    Prec/Eprec
    & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-5} & \np{E-5} \\
    \hline
    Relative gain
    & 2.51 & 2.58 & 2.55 & 2.54 & 1.59 & 1.29 \\
    \hline
  \end{mytable}
\end{table}
Then we have changed the network configuration using three clusters containing
respectively 33, 33 and 34 hosts, that is again one hundred hosts for all the
clusters. In the same way as above, a judicious choice of key parameters has
permitted to get the results in Table~\ref{tab.cluster.3x33}, which shows
relative gains greater than 1 with a matrix size from 62 to 100 elements.
\begin{table}[!t]
\centering
\caption{3 clusters, each with 33 nodes}
\label{tab.cluster.3x33}
  \begin{mytable}{6}
    \hline
    bw (Mbit/s)
    & 10 & 5 & 4 & 3 & 2 & 6 \\
    \hline
    lat
    & 0.01 & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 \\
    \hline
    power (GFlops)
    & 1 & 1 & 1 & 1 & 1 & 1 \\
    \hline
    size
    & 62 & 100 & 100 & 100 & 100 & 171 \\
    \hline
    Prec/Eprec
    & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} \\
    \hline
    Relative gain
    & 1.003 & 1.01 & 1.08 & 0.19 & 1.28 & 1.01 \\
    \hline
  \end{mytable}
\end{table}
In a final step, the results of an execution attempting to scale up the
three-cluster configuration, but with two hundred hosts in total, have been
recorded in
\caption{3 clusters, each with 66 nodes}
\label{tab.cluster.3x67}
  \begin{mytable}{1}
    \hline
    bw (Mbit/s) & 1 \\
    \hline
    lat & 0.02 \\
    \hline
    power (GFlops) & 1 \\
    \hline
    size & 62 \\
    \hline
    Prec/Eprec & \np{E-5} \\
    \hline
    Relative gain & 1.11 \\
    \hline
  \end{mytable}
\end{table}
Note that the program was run with the following parameters:
\item Internal and external precisions;
\item Matrix size $N_x$, $N_y$ and $N_z$;
\item Matrix diagonal value: \np{6.0};
  \item Matrix off-diagonal value: \np{-1.0};
\item Execution Mode: synchronous or asynchronous.
\end{itemize}
After analyzing the outputs, generally, for the configuration with two or three
clusters including one hundred hosts (Tables~\ref{tab.cluster.2x50}
and~\ref{tab.cluster.3x33}), some combinations of the used parameters affecting
the results have given a relative gain greater than 1 (and more than 2.5 in the
two-cluster case), showing the effectiveness of the asynchronous mode compared
to the synchronous one.
In the case of a two clusters configuration, Table~\ref{tab.cluster.2x50} shows
that with a deterioration of the inter-cluster network set to \np[Mbit/s]{5} of
bandwidth, a latency on the order of a hundredth of a millisecond and a system
power of one GFlops, a relative gain of about 2.5 is obtained in asynchronous
mode (i.e.\ the asynchronous execution takes about \np[\%]{40} of the
synchronous execution time) for a matrix size of 62 elements. It is noticed
that the result remains stable even if we vary the external precision from
\np{E-5} to \np{E-9}. By
increasing the matrix size up to 100 elements, it was necessary to increase the
CPU power by \np[\%]{50}, to \np[GFlops]{1.5}, to achieve the convergence of
the algorithm with the same order of asynchronous mode efficiency. Maintaining
such a system power but, this time, increasing the inter-cluster network
throughput up to
\np[Mbit/s]{50}, a relative gain of about 2.5 is obtained with a high external
precision of \np{E-11} for matrix sizes ranging from 110 to 150 side elements.
For the 3 clusters architecture including a total of 100 hosts,
Table~\ref{tab.cluster.3x33} shows that it was difficult to have a combination
which gives a relative gain of the asynchronous mode of more than 1.2. Indeed, for a
matrix size of 62 elements, equality between the performance of the two modes
(synchronous and asynchronous) is achieved with an inter-cluster bandwidth of
\np[Mbit/s]{10} and a latency of \np[ms]{E-1}. To reach a relative gain greater
than 1.2 with a matrix size of 100 points, it was necessary to degrade the
inter-cluster network bandwidth from 5 to \np[Mbit/s]{2}.
A last attempt was made for a configuration of three clusters but more powerful
with 200 nodes in total. The convergence with a relative gain of around 1.1 was
obtained with a bandwidth of \np[Mbit/s]{1}, as shown in
Table~\ref{tab.cluster.3x67}.
\LZK{In the paper, we compare the synchronous and asynchronous versions of the multisplitting method. Are there results comparing the classical parallel GMRES with the asynchronous multisplitting? That would show the interest of the asynchronous multisplitting on distant clusters.}
\CER{Actually, the results were obtained by comparing the execution times of the classical GMRES algorithm in synchronous mode with the multisplitting method in asynchronous mode, all on an environment of distant clusters.}

\section{Conclusion}
The experimental results on executing a parallel iterative algorithm in
asynchronous mode on an environment simulating a large scale of virtual
Our work has demonstrated that using such a simulation tool allows us to
reach the following three objectives:
\begin{enumerate}
\item To have a flexible configurable execution platform that resolves the
    hard exercise of accessing very limited but much solicited physical
    resources;
\item to ensure the algorithm convergence within a reasonable time and
    iteration number;
\item and, finally and more importantly, to find the correct combination
    of the cluster and network specifications permitting us to save time
    when executing the algorithm in asynchronous mode.
\end{enumerate}
Our results have shown that in certain conditions, the asynchronous mode is
up to \np[\%]{40} faster than executing the algorithm in synchronous mode
tool to efficiently run an iterative parallel algorithm in asynchronous
mode in a grid architecture.
\LZK{Perspectives???}

\section*{Acknowledgment}
This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01).
\todo[inline]{The authors would like to thank\dots{}}
% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,hpccBib}
\end{document}
%%% Local Variables:
%%% fill-column: 80
%%% ispell-local-dictionary: "american"
%%% End:
% LocalWords:  Ramamonjisoa Laiymani Arnaud Giersch Ziane Khodja Raphaël Femto
% LocalWords:  Université Franche Comté IUT Montbéliard Maréchal Juin Inria Sud
% LocalWords:  Ouest Vieille Talence cedex scalability experimentations HPC MPI
% LocalWords:  Parallelization AIAC GMRES multi SMPI SISC SIAC SimDAG DAGs Lua
% LocalWords:  Fortran GFlops priori Mbit de du fcomte multisplitting scalable
% LocalWords:  SimGrid Belfort parallelize Labex ANR LABX IEEEabrv hpccBib