asynchronous mode algorithms by comparing their performance with the synchronous
mode. More precisely, we implemented a program for solving large sparse linear
systems of equations with the GMRES (Generalized Minimal Residual) numerical
method~\cite{ref1}. We show that, with minor modifications of the
initial MPI code, the SimGrid toolkit allows us to perform a test campaign of a
real AIAC application on different computing architectures. The simulated
results we obtained are in line with the real results presented in ??.
\label{algo:01}
\end{figure}
The algorithm in Figure~\ref{algo:01} shows the main key points of the
multisplitting method to solve a large sparse linear system. This algorithm is
based on an outer-inner iteration method, where the parallel synchronous GMRES
method is used to solve the inner iteration. It is executed in parallel by each
cluster of processors. For all $l,m\in\{1,\ldots,L\}$, the matrices and vectors
with the subscript $l$ represent the local data for cluster $l$, while
$\{A_{lm}\}_{m\neq l}$ are the off-diagonal blocks of the sparse matrix $A$ and
$\{X_m\}_{m\neq l}$ contain the vector elements of the solution $x$ shared with
neighboring clusters. At every outer iteration $k$, asynchronous communications
are performed between the processors of the local cluster and those of distant
clusters (lines~\ref{algo:01:send} and~\ref{algo:01:recv} in
Figure~\ref{algo:01}). The shared vector elements of the solution $x$ are
exchanged by message passing using MPI non-blocking communication routines.
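To make this structure more concrete, the following sketch in C with MPI shows one possible shape of such an outer loop, with the non-blocking exchange of the shared vector elements around the inner GMRES solve. The functions \texttt{inner\_gmres} and \texttt{local\_convergence} as well as the buffer layout are hypothetical placeholders and do not reproduce the actual code of Figure~\ref{algo:01}.
\begin{verbatim}
/* Sketch of the outer loop executed by each cluster: the shared vector
 * elements are exchanged with non-blocking MPI routines around the inner
 * GMRES solve. inner_gmres() and local_convergence() are placeholders. */
#include <mpi.h>
#include <stdlib.h>

void inner_gmres(double *x_local, const double *halo);      /* placeholder */
int local_convergence(const double *x_local, int n_local);  /* placeholder */

void outer_loop(double *x_local, int n_local, int n_shared,
                const int *neighbors, int nb_neighbors,
                double *halo, int max_iter, MPI_Comm comm)
{
    MPI_Request *reqs = malloc(2 * nb_neighbors * sizeof *reqs);
    for (int k = 0; k < max_iter; k++) {
        /* receive the shared elements updated by the neighboring clusters */
        for (int i = 0; i < nb_neighbors; i++)
            MPI_Irecv(halo + i * n_shared, n_shared, MPI_DOUBLE,
                      neighbors[i], 0, comm, &reqs[i]);
        /* send the local shared elements to the neighboring clusters */
        for (int i = 0; i < nb_neighbors; i++)
            MPI_Isend(x_local, n_shared, MPI_DOUBLE,
                      neighbors[i], 0, comm, &reqs[nb_neighbors + i]);
        /* inner iteration: solve the local sub-system with GMRES */
        inner_gmres(x_local, halo);
        /* the synchronous mode completes all exchanges here; the
         * asynchronous mode tests them instead (see below) */
        MPI_Waitall(2 * nb_neighbors, reqs, MPI_STATUSES_IGNORE);
        if (local_convergence(x_local, n_local))
            break;
    }
    free(reqs);
}
\end{verbatim}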
\begin{figure}[!t]
\centering
\label{fig:4.1}
\end{figure}
The global convergence of the asynchronous multisplitting solver is detected
when all the clusters of processors have converged locally. We implemented the
global convergence detection process as follows. On each cluster a master
processor is designated (for example the processor with rank 1) and the masters
of all clusters are interconnected by a virtual unidirectional ring network (see
Figure~\ref{fig:4.1}). During the resolution, a Boolean token circulates around
the virtual ring from one master processor to another until the global
convergence is achieved. Starting from the cluster with rank 1, each master
processor $i$ sets the token to \textit{True} if the local convergence is
achieved or to \textit{False} otherwise, and sends it to master processor $i+1$.
Finally, the global convergence is detected when the master of cluster 1
receives from the master of cluster $L$ a token set to \textit{True}. In this
case, the master of cluster 1 broadcasts a stop message to the masters of the
other clusters. In this work, the local convergence on each cluster $l$ is
detected when the following condition is satisfied
\begin{equation*}
(k\geq \MI) \text{ or } (\|X_l^k - X_l^{k+1}\|_{\infty}\leq\epsilon)
\end{equation*}
where $\MI$ is the maximum number of outer iterations and $\epsilon$ is the tolerance threshold of the error computed between two successive local solutions $X_l^k$ and $X_l^{k+1}$.
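As an illustration, the following C sketch shows one way the masters could circulate the token on the virtual ring; the use of a dedicated communicator of masters and the combination of the incoming token with the local convergence state are assumptions of this sketch, not details prescribed above.
\begin{verbatim}
/* Sketch of the global convergence detection over the virtual ring of
 * master processors (one master per cluster, ranks 0..nb_clusters-1 in
 * the hypothetical communicator masters_comm). */
#include <mpi.h>

int ring_convergence_step(int cluster_rank, int nb_clusters,
                          int locally_converged, MPI_Comm masters_comm)
{
    int token, stop = 0;
    int next = (cluster_rank + 1) % nb_clusters;
    int prev = (cluster_rank + nb_clusters - 1) % nb_clusters;

    if (cluster_rank == 0) {
        /* the master of cluster 1 initiates the token with its local state */
        token = locally_converged;
        MPI_Send(&token, 1, MPI_INT, next, 1, masters_comm);
        /* global convergence when the token comes back still set to True */
        MPI_Recv(&token, 1, MPI_INT, prev, 1, masters_comm,
                 MPI_STATUS_IGNORE);
        stop = token;
    } else {
        MPI_Recv(&token, 1, MPI_INT, prev, 1, masters_comm,
                 MPI_STATUS_IGNORE);
        token = token && locally_converged;  /* combine with local state */
        MPI_Send(&token, 1, MPI_INT, next, 1, masters_comm);
    }
    /* the master of cluster 1 then notifies the other masters to stop */
    MPI_Bcast(&stop, 1, MPI_INT, 0, masters_comm);
    return stop;
}
\end{verbatim}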
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We did not encounter any major blocking problem, beyond some code debugging,
when adapting the multisplitting algorithm previously described to a simulation
environment like SimGrid. Indeed, apart from the review of the program sequence
for the asynchronous exchanges between processors within a cluster or between
clusters, the algorithm was executed successfully with SMPI and provided outputs
identical to those obtained with a direct execution under MPI. In synchronous
mode, the execution of the program raised no particular issue, but in
asynchronous mode the sequence of MPI\_Isend, MPI\_Irecv and MPI\_Waitall
instructions had to be reviewed and the MPI\_Test primitive added, in order to
avoid a memory fault caused by an infinite loop resulting from the
non-convergence of the algorithm.
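The following sketch illustrates the kind of polling, based on MPI\_Test, that replaces a blocking MPI\_Waitall on the reception side in asynchronous mode; buffer and variable names are illustrative and the re-posting of completed receives is omitted.
\begin{verbatim}
/* Sketch of the asynchronous reception: each pending receive is tested
 * with MPI_Test so that the solver goes on with the most recent data
 * instead of blocking in MPI_Waitall. */
#include <mpi.h>

void poll_neighbor_data(MPI_Request *recv_reqs, int nb_reqs,
                        const double *recv_buf, double *halo, int n_shared)
{
    for (int i = 0; i < nb_reqs; i++) {
        int done = 0;
        MPI_Test(&recv_reqs[i], &done, MPI_STATUS_IGNORE);
        if (done) {
            /* fresh data arrived: make it visible to the solver */
            for (int j = 0; j < n_shared; j++)
                halo[i * n_shared + j] = recv_buf[i * n_shared + j];
        }
        /* otherwise keep the previously received values */
    }
}
\end{verbatim}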
Note that the use of the SMPI functions that optimize the memory footprint and
the CPU usage of the simulation is not recommended when one wants the simulation
to reproduce the results of a real execution.
As mentioned, upon this adaptation the algorithm is executed in the simulated
environment as in real life, after the following minor changes. First, all
declared global variables have been moved to local variables in each subroutine.
In fact, global variables generate side effects arising from the concurrent
access of the simulated processes which, under SMPI, run as threads sharing the
same address space, so the parts of the code relying on them had also to be
reviewed. Finally, some compilation errors on the MPI\_Waitall and MPI\_Finalize
primitives have been fixed with the latest version of SimGrid.
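As an illustration of the first change, the fragment below contrasts a file-scope variable, which under SMPI would be shared by all simulated ranks since they run as threads of a single process, with its adapted local counterpart; the names are purely illustrative.
\begin{verbatim}
/* Before the adaptation (problematic under SMPI, where the simulated MPI
 * ranks run as threads of one process and share file-scope variables):
 *
 *     static int nb_outer_iterations;  // written concurrently by all ranks
 */

/* After the adaptation: each subroutine keeps its own local copy. */
int solve_local_subsystem(void)
{
    int nb_outer_iterations = 0;  /* private to the calling simulated rank */
    /* ... iterate until local convergence, incrementing the counter ... */
    return nb_outer_iterations;
}
\end{verbatim}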
In total, after this very simple adaptation, the initial MPI program run in the
SMPI simulation environment gave the same results as those obtained in a real
environment. We tested the synchronous mode on simulated platforms ranging from
a modest grid of 2 or 3 clusters up to a larger configuration similar to
Grid5000, with more than 1500 hosts and 5000 cores~\cite{bolze2006grid}.
experimentation of the simulation in having an execution time in asynchronous
mode less than in synchronous mode (i.e. a speed-up less than 1).
\end{itemize}
A priori, obtaining a speed-up less than 1 would be difficult in a local area
network configuration where the synchronous mode will take advantage of the
This action simulates the case of clusters linked with a long-distance network
like the Internet.
In this paper, we solve the 3D Poisson problem whose mathematical model is
\begin{equation}
\left\{
\begin{array}{l}
\nabla^2 u = f \text{~in~} \Omega \\
u = 0 \text{~on~} \Gamma =\partial\Omega
\end{array}
\right.
\label{eq:02}
\end{equation}
where $\nabla^2$ is the Laplace operator, $f$ and $u$ are real-valued functions, and $\Omega=[0,1]^3$. The spatial discretization with a finite difference scheme reduces problem~(\ref{eq:02}) to a system of sparse linear equations. The general iteration scheme of our multisplitting method in a 3D domain using a seven-point stencil can be written as
\begin{equation}
\begin{array}{ll}
u^{k+1}(x,y,z)= & \frac{1}{6}\times(u^k(x-1,y,z) + u^k(x+1,y,z) + \\
 & u^k(x,y-1,z) + u^k(x,y+1,z) + \\
 & u^k(x,y,z-1) + u^k(x,y,z+1) - h^2 f(x,y,z)),
\end{array}
\label{eq:03}
\end{equation}
where $h$ is the discretization step and the matrix $A$ of the discretized linear system, with $N_x\times N_y\times N_z$ unknowns, is sparse, symmetric and positive definite.

The parallel solving of the 3D Poisson problem with our multisplitting method requires a data partitioning of the problem between the clusters and between the processors within a cluster. We have chosen a 3D partitioning instead of a row-by-row partitioning in order to reduce the amount of data exchanged at the sub-domain boundaries. Figure~\ref{fig:4.2} shows an example of the data partitioning of the 3D Poisson problem between two clusters of processors, where each sub-problem is assigned to a processor. In this context, a processor has at most six neighbors, within its own cluster or in distant clusters, with which it shares data at the sub-domain boundaries.

\begin{figure}[!t]
\centering
 \includegraphics[width=80mm,keepaspectratio]{partition}
\caption{Example of the 3D data partitioning between two clusters of processors.}
\label{fig:4.2}
\end{figure}

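As a minimal sketch, the following C code shows how a processor could determine its (at most) six neighbors in such a regular 3D decomposition, here with an MPI Cartesian communicator; the actual program may organize the partitioning between clusters and processors differently.
\begin{verbatim}
/* Sketch of the 3D partitioning: a Cartesian communicator gives each
 * processor its six neighbors (left/right, front/behind, down/top);
 * MPI_PROC_NULL is returned at the domain boundary. */
#include <mpi.h>

void find_3d_neighbors(MPI_Comm comm, int neighbors[6])
{
    int nb_procs, dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
    MPI_Comm cart;

    MPI_Comm_size(comm, &nb_procs);
    MPI_Dims_create(nb_procs, 3, dims);           /* split procs over x,y,z */
    MPI_Cart_create(comm, 3, dims, periods, 0, &cart);
    MPI_Cart_shift(cart, 0, 1, &neighbors[0], &neighbors[1]); /* left,right */
    MPI_Cart_shift(cart, 1, 1, &neighbors[2], &neighbors[3]); /* front,back */
    MPI_Cart_shift(cart, 2, 1, &neighbors[4], &neighbors[5]); /* down,top   */
    MPI_Comm_free(&cart);
}
\end{verbatim}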
As a first step, the algorithm was run on a network consisting of two clusters
containing 50 hosts each, totaling 100 hosts. Various combinations of the above
factors have provided the results shown in Table~\ref{tab.cluster.2x50} with a
matrix size ranging from $N_x = N_y = N_z = 62$ to $171$ elements, that is from
$62^3 = \np{238328}$ to $171^3 = \np{5000211}$ entries.
% use the same column width for the following three tables
\newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
\newenvironment{mytable}[1]{% #1: number of columns for data
\item Maximum number of internal and external iterations;
\item Internal and external precisions;
\item Matrix size $N_x$, $N_y$ and $N_z$;
\item Matrix diagonal value: \np{6.0};
 \item Matrix off-diagonal value: \np{-1.0};
\item Execution Mode: synchronous or asynchronous.
\end{itemize}
tool to efficiently run an iterative parallel algorithm in asynchronous mode on
a grid architecture.
+
\section*{Acknowledgment}
This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01).