To our knowledge, there is no existing work on the large-scale simulation of a
real asynchronous iterative application. {\bf The contribution of the present
paper is twofold.} First, we give a first approach to the simulation of
asynchronous iterative algorithms using a simulation tool
(i.e. the SimGrid toolkit~\cite{SimGrid}). Second, we confirm the
effectiveness of the asynchronous multisplitting algorithm by comparing its
performance with that of the synchronous GMRES method~\cite{ref1}. Both codes
can be used to solve large linear systems. In this paper, we focus on a 3D
Poisson problem. We show that, with minor modifications of the initial MPI
code, the SimGrid toolkit allows us to perform such simulations.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Simulation of the multisplitting method}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Let $Ax=b$ be a large sparse system of $n$ linear equations in $\mathbb{R}$, where $A$ is a sparse square and nonsingular matrix, $x$ is the solution vector and $b$ is the right-hand side vector. We use a multisplitting method based on the block Jacobi splitting to solve this linear system on a large-scale platform composed of $L$ clusters of processors~\cite{o1985multi}. In this case, we apply a row-by-row splitting without overlapping:
\begin{equation*}
\left(\begin{array}{ccc}
A_{11} & \cdots & A_{1L} \\
\vdots & \ddots & \vdots \\
A_{L1} & \cdots & A_{LL}
\end{array}\right)
\times
\left(\begin{array}{c}
X_1 \\ \vdots \\ X_L
\end{array}\right)
=
\left(\begin{array}{c}
B_1 \\ \vdots \\ B_L
\end{array}\right)
\end{equation*}
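Each cluster $\ell$ is thus in charge of one block row of the system. Under this splitting, the block Jacobi iteration carried out by cluster $\ell$ can be written as
\begin{equation*}
X_\ell^{k+1} = A_{\ell\ell}^{-1}\Bigl(B_\ell - \sum_{m \neq \ell} A_{\ell m} X_m^{k}\Bigr),
\end{equation*}
where the products involving the off-diagonal blocks use the most recent values of $X_m$ received from the other clusters: the current ones in synchronous mode, and possibly outdated ones in asynchronous mode.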
To detect the global convergence of the iterative methods, all clusters are interconnected by a virtual unidirectional ring network (see Figure~\ref{fig:4.1}). During the resolution, a Boolean token circulates around the virtual ring from one master processor to another until the global convergence is achieved.
The global convergence is detected when the master of cluster 1 receives from the master of cluster $L$ a token set to \textit{True}. In this case, the master of cluster 1 broadcasts a stop message to the masters of the other clusters.
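To make this protocol concrete, the following is a minimal sketch of the token exchange among the cluster masters; the names (\texttt{ring\_convergence}, \texttt{TAG\_TOKEN}, the \texttt{masters} communicator) are our own illustrative choices, not the authors' code.
\begin{verbatim}
#include <mpi.h>

#define TAG_TOKEN 99  /* hypothetical message tag */

/* Runs on each cluster master, over a communicator containing only
 * the L masters ordered along the virtual ring.  The master of
 * cluster 1 (ring_rank == 0) returns 1 when the token comes back
 * still set to True, i.e. when every cluster has locally converged. */
int ring_convergence(int locally_converged, int ring_rank,
                     int ring_size, MPI_Comm masters)
{
    int prev = (ring_rank - 1 + ring_size) % ring_size;
    int next = (ring_rank + 1) % ring_size;
    int token;

    if (ring_rank == 0) {
        /* master of cluster 1 injects its local state ... */
        token = locally_converged;
        MPI_Send(&token, 1, MPI_INT, next, TAG_TOKEN, masters);
        /* ... and detects global convergence when the token returns */
        MPI_Recv(&token, 1, MPI_INT, prev, TAG_TOKEN, masters,
                 MPI_STATUS_IGNORE);
        return token;
    }
    /* the other masters AND the incoming token with their state */
    MPI_Recv(&token, 1, MPI_INT, prev, TAG_TOKEN, masters,
             MPI_STATUS_IGNORE);
    token = token && locally_converged;
    MPI_Send(&token, 1, MPI_INT, next, TAG_TOKEN, masters);
    return 0;
}
\end{verbatim}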
In this work, we solve the 3D Poisson problem whose mathematical model is
\begin{equation}
\left\{
\begin{array}{ll}
\nabla^2 u = f & \text{in } \Omega, \\
u = 0 & \text{on } \partial\Omega,
\end{array}
\right.
\label{eq:02}
\end{equation}
where $\nabla^2$ is the Laplace operator, $f$ and $u$ are real-valued functions, and $\Omega=[0,1]^3$. The spatial discretization with a finite difference scheme reduces problem~(\ref{eq:02}) to a system of sparse linear equations. Our multisplitting method solves the 3D Poisson problem using a seven-point stencil whose general expression can be written as
\begin{equation}
\begin{array}{l}
u(x-1,y,z) + u(x,y-1,z) + u(x,y,z-1)\\
\quad{}+u(x+1,y,z)+u(x,y+1,z)+u(x,y,z+1) \\
\quad{}-6u(x,y,z)=h^2f(x,y,z),
\end{array}
\label{eq:03}
\end{equation}
where $h$ is the grid spacing.
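Solving~(\ref{eq:03}) for $u(x,y,z)$ gives the update applied by a Jacobi-type sweep over the local subdomain, as the sketch below illustrates; the array layout and the \texttt{IDX} macro are our own hypothetical choices, not the authors' code.
\begin{verbatim}
/* Map grid point (x,y,z) to a 1-D index; ny and nz are taken from
 * the enclosing scope where the macro is expanded. */
#define IDX(x, y, z) ((((x) * ny) + (y)) * nz + (z))

/* One Jacobi sweep of the seven-point stencil: each interior point
 * becomes the average of its six neighbors minus the source term,
 * following u = (sum of neighbors - h^2 f) / 6. */
void jacobi_sweep(const double *u, double *unew, const double *f,
                  int nx, int ny, int nz, double h)
{
    for (int x = 1; x < nx - 1; x++)
        for (int y = 1; y < ny - 1; y++)
            for (int z = 1; z < nz - 1; z++)
                unew[IDX(x, y, z)] =
                    (u[IDX(x - 1, y, z)] + u[IDX(x + 1, y, z)]
                   + u[IDX(x, y - 1, z)] + u[IDX(x, y + 1, z)]
                   + u[IDX(x, y, z - 1)] + u[IDX(x, y, z + 1)]
                   - h * h * f[IDX(x, y, z)]) / 6.0;
}
\end{verbatim}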
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We did not encounter any major blocking problem when adapting the multisplitting algorithm previously described to a simulation environment like SimGrid, apart from some code
debugging. Indeed, apart from the review of the program sequence for asynchronous exchanges between processors within a cluster or between clusters, the algorithm was executed successfully with SMPI and produced the same outputs as those obtained with direct execution under MPI. For the synchronous GMRES method, the execution of the program raised no particular issue, but for the asynchronous multisplitting method, the sequence of \texttt{MPI\_Isend}, \texttt{MPI\_Irecv} and \texttt{MPI\_Waitall} instructions had to be reviewed, and the primitive \texttt{MPI\_Test} added, to avoid a memory fault due to an infinite loop resulting from the non-convergence of the algorithm.
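The resulting communication pattern is sketched below with our own hypothetical function and variable names (\texttt{async\_halo\_exchange}, \texttt{fresh\_data}), not the authors' code: each iteration posts non-blocking sends and receives, then polls with \texttt{MPI\_Test} instead of blocking in \texttt{MPI\_Waitall}, so that a neighbor that has not yet converged cannot block the local iteration.
\begin{verbatim}
#include <mpi.h>

/* Post the halo exchange for one iteration and poll for completion.
 * If the data has not arrived yet (*fresh_data == 0), the solver
 * keeps iterating with the previously received values, which is
 * exactly the asynchronous iteration model. */
void async_halo_exchange(double *send_buf, double *recv_buf, int n,
                         int neighbor, MPI_Comm comm,
                         MPI_Request req[2], int *fresh_data)
{
    MPI_Isend(send_buf, n, MPI_DOUBLE, neighbor, 0, comm, &req[0]);
    MPI_Irecv(recv_buf, n, MPI_DOUBLE, neighbor, 0, comm, &req[1]);

    /* Poll instead of waiting: this call never blocks. */
    MPI_Test(&req[1], fresh_data, MPI_STATUS_IGNORE);
    /* Both requests are tested again before the buffers are
     * reused in the next iteration. */
}
\end{verbatim}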
Note that using the SMPI functions that optimize the memory footprint and the CPU usage is not recommended when one wants to obtain realistic simulation results.

Overall, after a very simple adaptation, the initial MPI program running in the SMPI simulation environment gave the same results as those obtained in a real environment.
\item The computational power of the hosts (in GFlops) can also influence the results.
\item Finally, when submitting batches of jobs for execution, the argument values
  passed to the program, such as the maximum number of iterations or the precision, are critical. They allow us to ensure not only the convergence of the
  algorithm, but also our main objective: obtaining an execution time with the asynchronous multisplitting method that is smaller than with the synchronous GMRES method.
\end{itemize}
The ratio between the simulated execution time of the synchronous GMRES algorithm
and that of the asynchronous multisplitting algorithm ($t_\text{GMRES} / t_\text{Multisplitting}$) is defined as the \emph{relative gain}. Our objective when running the algorithms in SimGrid is thus to obtain a relative gain greater than 1.
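Stated as a formula:
\begin{equation*}
\text{relative gain} = \frac{t_\text{GMRES}}{t_\text{Multisplitting}},
\end{equation*}
so that, for example, a relative gain of 2.5 means that the simulated synchronous GMRES execution took 2.5 times as long as the asynchronous multisplitting one.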
rapid exchange of information on such high-speed links. Thus, the methodology
adopted was to launch the application on a clustered network. In this
configuration, degrading the inter-cluster network performance will penalize the
Both codes were simulated on a network of two clusters with 50 hosts each, i.e.\ 100 hosts in total. Various combinations of the above
factors have provided the results shown in Table~\ref{tab.cluster.2x50}. The problem size of the 3D Poisson problem ranges from $N_x = N_y = N_z = 62$ to $150$ elements (that is, from $\np{238328}$ to $\np{3375000}$ entries). With the asynchronous multisplitting algorithm, the simulated execution time is on average 2.5 times faster than with the synchronous GMRES one.
% use the same column width for the following three tables
\newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
\newenvironment{mytable}[1]{% #1: number of columns for data
  \item 2 clusters of 50 hosts each;
  \item Processor unit power: \np[GFlops]{1} or \np[GFlops]{1.5};
  \item Intra-cluster network bandwidth: \np[Gbit/s]{1.25} and latency: \np[$\mu$s]{0.05};
  \item Inter-cluster network bandwidth: \np[Mbit/s]{5} or \np[Mbit/s]{50} and latency: \np[$\mu$s]{20};
\item Maximum number of iterations;
\item Precision of the residual error;
\item Matrix size: $N_x$, $N_y$ and $N_z$;
\item Matrix diagonal value: $6$ (see Equation~(\ref{eq:03}));
\item Matrix off-diagonal value: $-1$;
After analyzing the outputs, generally, for the configuration with two clusters totaling one hundred hosts (Table~\ref{tab.cluster.2x50}), some combinations of parameters affecting
the results have given a relative gain of more than 2.5, showing the effectiveness of the
of one GFlops, an efficiency of about \np[\%]{40} is
obtained in asynchronous mode for a matrix size of 62 elements. We notice that the result remains
stable even when we vary the residual error precision from \np{E-5} to \np{E-9}. By
\section{Conclusion}
The simulation of the execution of parallel asynchronous iterative algorithms on large-scale clusters has been presented.
In this work, we show that SimGrid is an efficient simulation tool that allows us to
executing the algorithm in asynchronous mode.
\end{enumerate}
Our results have shown that, in certain conditions, the asynchronous mode is
this class of algorithms. The work presented in this paper demonstrates an original solution for optimizing the use of a simulation tool to run an iterative parallel algorithm efficiently in asynchronous mode on a grid architecture.

In future work, we plan to extend our experiments to larger-scale platforms by increasing the number of computing cores and the number of clusters. We will also have to increase the size of the input problem, which will require the use of a more powerful simulation platform. Finally, we expect to compare our simulation results with results from real executions on real architectures in order to experimentally validate our study.
\section*{Acknowledgment}
This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01).

% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page