\algnewcommand\Output{\item[\algorithmicoutput]}
\newcommand{\MI}{\mathit{MaxIter}}
\newcommand{\Time}[1]{\mathit{Time}_\mathit{#1}}
\begin{document}
perspectives on experiments for running the algorithm on a
simulated large-scale growing environment and with larger problem sizes.
\LZK{Long\ldots}

% no keywords for IEEE conferences
% Keywords: Algorithm distributed iterative asynchronous simulation SimGrid
\end{abstract}
This article is structured as follows: after this introduction, the next section gives a brief description of
the asynchronous iteration model. Then, the simulation framework SimGrid is presented, along with the settings used to create various
distributed architectures. The algorithm of the multisplitting method, which uses GMRES as an inner solver\LZK{Shouldn't we explain the choice of a multisplitting method?}, written with MPI primitives, and
its adaptation to SimGrid with SMPI (Simulated MPI) are detailed in the next section. Finally, the results of the experiments
carried out are presented before some concluding remarks and future work.
With this algorithmic model, the number of iterations required before
convergence is generally greater than for the two former classes. But, as detailed in~\cite{bcvc06:ij}, AIAC
algorithms can significantly reduce overall execution times by suppressing idle times due to synchronizations, especially
in a grid computing context.\LZK{This repeats the introduction.}
\begin{figure}[!t]
\centering
according to the characteristics of the simulated execution platform. The
description of this target platform is given as an input for the execution, by
means of an XML file. It describes the properties of the platform, such as
the computing nodes with their computing power, the interconnection links with
their bandwidth and latency, and the routing strategy. The simulated running
time of the application is computed according to these properties.
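As an illustration, here is a minimal sketch of such a platform description.
It is only an example under our own assumptions: host powers, bandwidths,
latencies and naming are illustrative, and the exact tags and attributes
depend on the SimGrid version.
\begin{verbatim}
<?xml version='1.0'?>
<!DOCTYPE platform SYSTEM
  "http://simgrid.gforge.inria.fr/simgrid.dtd">
<platform version="3">
  <AS id="AS0" routing="Full">
    <!-- one cluster of 50 hosts; values are illustrative -->
    <cluster id="cluster0" prefix="node-" suffix=".cluster0"
             radical="0-49" power="1Gf" bw="125MBps" lat="50us"/>
  </AS>
</platform>
\end{verbatim}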
B_L
\end{array} \right)
\end{equation*}
in such a way that successive rows of matrix $A$ and both vectors $x$ and $b$
are assigned to one cluster, where for all $\ell,m\in\{1,\ldots,L\}$, $A_{\ell
  m}$ is a rectangular block of $A$ of size $n_\ell\times n_m$, $X_\ell$ and
$B_\ell$ are sub-vectors of $x$ and $b$, respectively, of size $n_\ell$ each,
and $\sum_{\ell} n_\ell=\sum_{m} n_m=n$.
The multisplitting method proceeds by iteration to solve the linear system in parallel on $L$ clusters of processors, in such a way that each sub-system
\begin{equation}
\label{eq:4.1}
\left\{
\begin{array}{l}
  A_{\ell\ell}X_\ell = Y_\ell \text{, such that}\\
  Y_\ell = B_\ell - \displaystyle\sum_{\substack{m=1\\ m\neq \ell}}^{L}A_{\ell m}X_m
\end{array}
\right.
\end{equation}
is solved independently by a cluster and communications are required to update
the right-hand side sub-vector $Y_\ell$, such that the sub-vectors $X_m$
represent the data dependencies between the clusters. As each sub-system
(\ref{eq:4.1}) is solved in parallel by a cluster of processors, our
multisplitting method uses an iterative method as an inner solver, which is
easier to parallelize and more scalable than a direct method. In this work, we
use the parallel algorithm of the GMRES method~\cite{ref1}, which is one of the
most widely used iterative methods.
\begin{figure}[!t]
%%% IEEE instructions forbid to use an algorithm environment here, use figure
%%% instead
\begin{algorithmic}[1]
\Input $A_\ell$ (sparse sub-matrix), $B_\ell$ (right-hand side sub-vector)
\Output $X_\ell$ (solution sub-vector)\medskip

\State Load $A_\ell$, $B_\ell$
\State Set the initial guess $x^0$
\For {$k=0,1,2,\ldots$ until the global convergence}
\State Restart outer iteration with $x^0=x^k$
\State Inner iteration: \Call{InnerSolver}{$x^0$, $k+1$}
\State\label{algo:01:send} Send shared elements of $X_\ell^{k+1}$ to neighboring clusters
\State\label{algo:01:recv} Receive shared elements in $\{X_m^{k+1}\}_{m\neq \ell}$
\EndFor
\Statex
\Function {InnerSolver}{$x^0$, $k$}
\State Compute local right-hand side $Y_\ell$:
\begin{equation*}
  Y_\ell = B_\ell - \sum\nolimits^L_{\substack{m=1\\ m\neq \ell}}A_{\ell m}X_m^0
\end{equation*}
\State Solve the sub-system $A_{\ell\ell}X_\ell^k=Y_\ell$ with the parallel GMRES method
\State \Return $X_\ell^k$
\EndFunction
\end{algorithmic}
\caption{A multisplitting solver with GMRES method}
\label{algo:01}
\end{figure}
The algorithm in Figure~\ref{algo:01} shows the key points of the
multisplitting method used to solve a large sparse linear system. This
algorithm is based on an outer-inner iteration method, where the parallel
synchronous GMRES method is used to solve the inner iteration. It is executed
in parallel by each cluster of processors. For all $\ell,m\in\{1,\ldots,L\}$,
the matrices and vectors with the subscript $\ell$ represent the local data
for cluster $\ell$, while $\{A_{\ell m}\}_{m\neq \ell}$ are off-diagonal
matrices of the sparse matrix $A$ and $\{X_m\}_{m\neq \ell}$ contain the
vector elements of the solution $x$ shared with neighboring clusters. At every
outer iteration $k$, asynchronous communications are performed between the
processors of the local cluster and those of distant clusters
(lines~\ref{algo:01:send} and~\ref{algo:01:recv} in Figure~\ref{algo:01}). The
shared vector elements of the solution $x$ are exchanged by message passing
using MPI non-blocking communication routines.
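As a concrete illustration, the exchange at lines~\ref{algo:01:send}
and~\ref{algo:01:recv} could be written with MPI non-blocking primitives as
sketched below. This is only a sketch under our own assumptions: the function
name, the buffer layout and the neighbor list are hypothetical, not the actual
code of the solver.
\begin{verbatim}
#include <mpi.h>

/* Sketch: post non-blocking sends/receives of the shared
   elements of the local solution to/from each neighboring
   cluster (names and layout are illustrative). */
void exchange_shared(double *x_send, double *x_recv,
                     int n_shared, const int *neighbors,
                     int n_neigh, MPI_Request *reqs)
{
    for (int i = 0; i < n_neigh; i++) {
        MPI_Isend(x_send, n_shared, MPI_DOUBLE,
                  neighbors[i], 0, MPI_COMM_WORLD,
                  &reqs[2 * i]);
        MPI_Irecv(x_recv + i * n_shared, n_shared,
                  MPI_DOUBLE, neighbors[i], 0,
                  MPI_COMM_WORLD, &reqs[2 * i + 1]);
    }
    /* Synchronous mode would wait on all requests here
       (MPI_Waitall); in asynchronous mode the next inner
       solve proceeds with the latest received values. */
}
\end{verbatim}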
\begin{figure}[!t]
\centering
global convergence is detected when the master of cluster 1 receives from the
master of cluster $L$ a token set to \textit{True}. In this case, the master of
cluster 1 broadcasts a stop message to masters of other clusters. In this work,
the local convergence on each cluster $\ell$ is detected when the following
condition is satisfied
\begin{equation*}
  (k\geq \MI) \text{ or } (\|X_\ell^k - X_\ell^{k+1}\|_{\infty}\leq\epsilon)
\end{equation*}
where $\MI$ is the maximum number of outer iterations and $\epsilon$ is the
tolerance threshold of the error computed between two successive local
solutions $X_\ell^k$ and $X_\ell^{k+1}$.
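As a small illustration, the local test performed at the end of each outer
iteration could look as follows. This is a sketch with hypothetical names, not
the authors' implementation.
\begin{verbatim}
#include <math.h>

/* Sketch: local convergence is reached when the iteration
   budget is exhausted or when the infinity norm of the
   difference between two successive local solutions is
   below the tolerance threshold. */
int local_convergence(int k, int max_iter, double eps,
                      const double *x_old,
                      const double *x_new, int n)
{
    double err = 0.0;
    for (int i = 0; i < n; i++) {
        double d = fabs(x_new[i] - x_old[i]);
        if (d > err)
            err = d;
    }
    return (k >= max_iter) || (err <= eps);
}
\end{verbatim}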
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We did not encounter major blocking problems when adapting the multisplitting algorithm previously described to a simulation environment like SimGrid, apart from some minor code adjustments.
As mentioned, upon this adaptation, the algorithm is executed as in real life on the simulated environment after the following minor changes. First, all declared
global variables were moved to local variables in each subroutine. Indeed, global variables generate side effects arising from the concurrent accesses to the
shared memory used by the threads simulating each computing unit in the SimGrid architecture. Second, the alignment of certain types of variables, such as ``long int'', also had
to be reviewed.
\AG{Regarding these alignment problems, say more if it is of interest, or remove this.}
Finally, some compilation errors on the MPI\_Waitall and MPI\_Finalize primitives have been fixed with the latest version of SimGrid.
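To illustrate the first change, the following hypothetical example (the names
are ours) shows the kind of refactoring involved. In SMPI, each MPI process is
simulated by a thread inside a single system process, so any file-scope
variable is silently shared between all simulated ranks.
\begin{verbatim}
/* Before: every simulated rank (a thread under SMPI) sees
   and overwrites the same file-scope buffer -- a race. */
static double *work_buffer;

/* After: the buffer is local to the subroutine, so each
   simulated rank gets its own copy. */
#include <stdlib.h>

void inner_solver_step(int n)
{
    double *work_buffer = malloc(n * sizeof *work_buffer);
    /* ... use work_buffer locally ... */
    free(work_buffer);
}
\end{verbatim}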
All in all, after a very simple adaptation, the initial MPI program running in
the SMPI simulation environment gave the same results as those obtained in a
real environment. With a few modifications, we have also successfully executed
the code in synchronous mode using the parallel GMRES algorithm, in order to
compare it with our multisplitting algorithm in asynchronous mode.

\section{Experimental results}
study that the results depend on the following parameters:
\begin{itemize}
\item At the network level, we found that the most critical values are the
  bandwidth and the network latency.
\item The host processing power (GFlops) can also influence the results.
\item Finally, when submitting job batches for execution, the argument values
  passed to the program, like the maximum number of iterations or the external
  precision, are critical. They not only ensure the convergence of the
  algorithm, but also serve the main objective of these simulation
  experiments: obtaining an execution time in asynchronous mode lower than in
  synchronous mode. The ratio between the execution time in synchronous mode
  and that in asynchronous mode is defined as the \emph{relative gain} (see
  the formula after this list). So, our objective when running the algorithm
  in SimGrid is to obtain a relative gain greater than 1.
\end{itemize}
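Formally, using the notation defined in the preamble, with $\Time{sync}$ and
$\Time{async}$ denoting the execution times in synchronous and asynchronous
mode respectively, the relative gain reads
\begin{equation*}
  \text{relative gain} = \frac{\Time{sync}}{\Time{async}},
\end{equation*}
so that a value greater than 1 means that the asynchronous execution is faster
than the synchronous one.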

A priori, obtaining a relative gain greater than 1 would be difficult in a local
area network configuration, where the synchronous mode will take advantage of the
rapid exchange of information on such high-speed links. Thus, the methodology
adopted was to launch the application on a clustered network. In this last
configuration, degrading the inter-cluster network performance will penalize the
synchronous mode, allowing us to get a relative gain greater than 1. This
setting simulates the case of distant clusters linked with a long distance
network such as the Internet.

\AG{This part on the 3D Poisson problem is not in the right place. It should
  appear earlier.}
In this paper, we solve the 3D Poisson problem whose mathematical model is
\begin{equation}
\left\{
As a first step, the algorithm was run on a network consisting of two clusters
containing 50 hosts each, totaling 100 hosts. Various combinations of the above
factors have provided the results shown in Table~\ref{tab.cluster.2x50} with a
matrix size ranging from $N_x = N_y = N_z = \text{62}$ to 171 elements or from
$\text{62}^\text{3} = \text{\np{238328}}$ to $\text{171}^\text{3} =
\text{\np{5000211}}$ entries.
\AG{Explain how to read the tables.}
% use the same column width for the following three tables
\newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
\begin{mytable}{6}
\hline
    bandwidth (Mbit/s)
& 5 & 5 & 5 & 5 & 5 & 50 \\
\hline
    latency
& 0.02 & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 \\
\hline
power
& 62 & 62 & 62 & 100 & 100 & 110 \\
\hline
Prec/Eprec
    & \np{E-5} & \np{E-8} & \np{E-9} & \np{E-11} & \np{E-11} & \np{E-11} \\
    \hline
\hline
Relative gain
& 2.52 & 2.55 & 2.52 & 2.57 & 2.54 & 2.53 \\
\hline
\end{mytable}
  \bigskip
\begin{mytable}{6}
\hline
    bandwidth (Mbit/s)
& 50 & 50 & 50 & 50 & 10 & 10 \\
\hline
    latency
& 0.02 & 0.02 & 0.02 & 0.02 & 0.03 & 0.01 \\
\hline
power
Prec/Eprec
& \np{E-11} & \np{E-11} & \np{E-11} & \np{E-11} & \np{E-5} & \np{E-5} \\
\hline
    \hline
Relative gain
& 2.51 & 2.58 & 2.55 & 2.54 & 1.59 & 1.29 \\
\hline
\begin{mytable}{6}
\hline
    bandwidth (Mbit/s)
& 10 & 5 & 4 & 3 & 2 & 6 \\
\hline
    latency
& 0.01 & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 \\
\hline
power
Prec/Eprec
& \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} & \np{E-5} \\
\hline
    \hline
Relative gain
    & 1.003 & 1.01 & 1.08 & 1.19 & 1.28 & 1.01 \\
\hline
\end{mytable}
\end{table}
\begin{mytable}{1}
\hline
    bandwidth (Mbit/s) & 1 \\
\hline
    latency & 0.02 \\
\hline
power & 1 \\
\hline
\hline
Prec/Eprec & \np{E-5} \\
\hline
    \hline
Relative gain & 1.11 \\
\hline
\end{mytable}
\paragraph*{SMPI parameters}
~\\{}\AG{Give a bit more detail (on the platform in particular).}
\begin{itemize}
\item HOSTFILE: hosts description file.
\item PLATFORM: description file of the platform architecture: clusters (CPU
  power, \dots{}), intra-cluster network description, inter-cluster network
  (bandwidth, latency, \dots{}).
\end{itemize}
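For instance, a hostfile is simply a list of simulated host names, one per
line; the names below are illustrative and must match those declared in the
platform file.
\begin{verbatim}
node-0.cluster0
node-1.cluster0
node-2.cluster0
...
\end{verbatim}
A run is then typically launched with a command such as
\texttt{smpirun -hostfile hosts.txt -platform platform.xml -np 100 ./solver},
followed by the program arguments; the file names here are ours.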
\item Internal and external precisions;
\item Matrix size $N_x$, $N_y$ and $N_z$;
\item Matrix diagonal value: \np{6.0};
  \item Matrix off-diagonal value: \np{-1.0} (together with the diagonal,
    this is the classical 7-point stencil; see the sketch after this list);
  \item Execution mode: synchronous or asynchronous.
\end{itemize}
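To illustrate what these matrix parameters encode, here is a sketch (with our
own function name and data layout) of the matrix-vector product for the
discretized 3D Poisson operator: a 7-point stencil with diagonal value 6.0 and
off-diagonal values $-1.0$.
\begin{verbatim}
/* Sketch: y = A*x for the 7-point 3D Poisson stencil
   (diagonal 6.0, off-diagonals -1.0) on an nx*ny*nz grid
   with homogeneous Dirichlet boundaries. */
void poisson3d_matvec(int nx, int ny, int nz,
                      const double *x, double *y)
{
    for (int k = 0; k < nz; k++)
      for (int j = 0; j < ny; j++)
        for (int i = 0; i < nx; i++) {
            int idx = (k * ny + j) * nx + i;
            double v = 6.0 * x[idx];
            if (i > 0)      v -= x[idx - 1];       /* west  */
            if (i < nx - 1) v -= x[idx + 1];       /* east  */
            if (j > 0)      v -= x[idx - nx];      /* south */
            if (j < ny - 1) v -= x[idx + nx];      /* north */
            if (k > 0)      v -= x[idx - nx*ny];   /* down  */
            if (k < nz - 1) v -= x[idx + nx*ny];   /* up    */
            y[idx] = v;
        }
}
\end{verbatim}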
CPU power of \np[\%]{50} to \np[GFlops]{1.5} for a convergence of the algorithm
with the same order of asynchronous mode efficiency. Maintaining such a system
power but increasing the inter-cluster network throughput up to
\np[Mbit/s]{50}, a relative gain of about 2.5 is obtained with a high external
precision of \np{E-11} for a matrix size from 110 to 150 side elements.
(synchronous and asynchronous) is achieved with an inter-cluster bandwidth of
\np[Mbit/s]{10} and a latency of \np[ms]{E-1}. To reach a relative gain greater than 1.2 with a matrix size of 100 points, it was necessary to degrade the
inter-cluster network bandwidth from 5 to \np[Mbit/s]{2}.
\AG{So, in conclusion, we use a degraded platform to get a good sync/async
  ratio??? What is the performance loss in doing so?}
A last experiment was carried out on a more powerful configuration of three
clusters totaling 200 nodes. The convergence with a relative gain of around 1.1 was
obtained with a bandwidth of \np[Mbit/s]{1} as shown in
Table~\ref{tab.cluster.3x67}.
\RC{Do we know how to explain why there is such a difference between the results with 2 and 3 clusters? With 3 clusters they are not very good... I wonder whether we should remove them...}
\RC{Actually, I think I have the answer to my own remark... With 2 clusters, we see that the gain is all the greater when a good precision is chosen. So, several options: quickly launch a long test to confirm this, or remove some tests... or change nothing :-)}
\LZK{My question is: are the bandwidth and latency values inter-cluster only, or both inter- and intra-cluster??}
\section{Conclusion}
The experimental results on executing a parallel iterative algorithm in
% LocalWords: Parallelization AIAC GMRES multi SMPI SISC SIAC SimDAG DAGs Lua
% LocalWords: Fortran GFlops priori Mbit de du fcomte multisplitting scalable
% LocalWords: SimGrid Belfort parallelize Labex ANR LABX IEEEabrv hpccBib
% LocalWords: intra durations nonsingular Waitall discretization discretized
% LocalWords: InnerSolver Isend Irecv