To our knowledge, there is no existing work on the large-scale simulation of a
real asynchronous iterative application. {\bf The contribution of the present
paper can be summarized in two main points}. First, we give a first approach
to the simulation of asynchronous iterative algorithms using a simulation tool
(i.e. the SimGrid toolkit~\cite{SimGrid}). Second, we confirm the
effectiveness of the asynchronous multisplitting algorithm by comparing its
performance with that of the synchronous GMRES (Generalized Minimal Residual) method
\cite{ref1}. Both these codes can be used to solve large linear systems. In
this paper, we focus on a 3D Poisson problem. We show that, with minor
modifications of the initial MPI code, the SimGrid toolkit allows us to perform
such simulations of a real asynchronous iterative application.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We did not encounter any major blocking problem when adapting the multisplitting
algorithm previously described to a simulation environment like SimGrid, beyond
some code debugging. Indeed, apart from the review of the program sequence for
asynchronous exchanges between processors within a cluster or between clusters,
the algorithm was executed successfully with SMPI and produced the same outputs
as those obtained with direct execution under MPI. For the synchronous GMRES
method, the execution of the program raised no particular issue, but for the
asynchronous multisplitting method, the sequence of \texttt{MPI\_Isend},
\texttt{MPI\_Irecv}, and \texttt{MPI\_Waitall} instructions had to be reviewed,
and the primitive \texttt{MPI\_Test} added, in order to avoid a memory fault due
to an infinite loop resulting from the non-convergence of the algorithm.
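As an illustration, the following sketch shows the kind of exchange sequence
described above; the function and variable names are hypothetical and do not
come from our actual code. Since \texttt{MPI\_Test} returns immediately, a
process whose neighbor has not yet converged keeps computing with the last
received data instead of blocking indefinitely in \texttt{MPI\_Waitall}.
\begin{verbatim}
#include <mpi.h>

/* Sketch of an asynchronous halo exchange: requests are probed with
 * MPI_Test instead of being waited on, so the iteration never blocks
 * on a neighbor that has not converged yet. */
void exchange(double *sendbuf, double *recvbuf, int n, int neighbor,
              MPI_Comm comm, MPI_Request req[2], int *pending)
{
  int send_done = 0, recv_done = 0;

  if (*pending) {
    MPI_Test(&req[0], &send_done, MPI_STATUS_IGNORE);
    MPI_Test(&req[1], &recv_done, MPI_STATUS_IGNORE);
    if (!send_done || !recv_done)
      return;                   /* reuse the previous halo values */
    *pending = 0;
  }

  /* post the next round of non-blocking exchanges */
  MPI_Isend(sendbuf, n, MPI_DOUBLE, neighbor, 0, comm, &req[0]);
  MPI_Irecv(recvbuf, n, MPI_DOUBLE, neighbor, 0, comm, &req[1]);
  *pending = 1;
}
\end{verbatim}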
Note that using the SMPI facilities that optimize the memory footprint and CPU
usage of a simulation is not recommended when one wants the simulation to
produce the real numerical results.
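For instance, assuming SimGrid's SMPI interface, \texttt{SMPI\_SHARED\_MALLOC}
makes all simulated processes alias the same physical buffer in order to fold
the memory footprint; the following minimal sketch (with a hypothetical buffer
size and usage) illustrates why such an optimization corrupts the numerical
outputs.
\begin{verbatim}
#include <mpi.h>
#include <smpi/smpi.h>

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);
  /* Every simulated rank aliases the same physical memory, which
   * shrinks the footprint of large simulations but makes the
   * numerical outputs meaningless: all ranks overwrite the same
   * bytes. Hence the recommendation above. */
  double *a = SMPI_SHARED_MALLOC(1000000 * sizeof(double));
  /* ... iterations of the solver using a ... */
  SMPI_SHARED_FREE(a);
  MPI_Finalize();
  return 0;
}
\end{verbatim}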
In total, the initial MPI program running on the simulation environment SMPI
gave, after a very simple adaptation, the same results as those obtained in a
real environment. We have successfully executed the code for the synchronous
GMRES algorithm, as well as our asynchronous multisplitting algorithm, after a
few modifications.