From: lilia
Date: Mon, 28 Apr 2014 15:36:45 +0000 (+0200)
Subject: v8
X-Git-Tag: hpcc2014_submission~18
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/hpcc2014.git/commitdiff_plain/ee24b6afced350626b57accf9d10690bf9667aca

v8
---

diff --git a/hpcc.tex b/hpcc.tex
index 6a39362..c48f1f6 100644
--- a/hpcc.tex
+++ b/hpcc.tex
@@ -450,7 +450,7 @@ The parallel solving of the 3D Poisson problem with our multisplitting method re
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 We did not encounter major blocking problems when adapting the multisplitting algorithm previously described to a simulation environment like SimGrid unless some code
-debugging. Indeed, apart from the review of the program sequence for asynchronous exchanges between processors within a cluster or between clusters, the algorithm was executed successfully with SMPI and provided identical outputs as those obtained with direct execution under MPI. For the synchronous GMRES method, the execution of the program raised no particular issue but in the asynchronous multisplitting method , the review of the sequence of \texttt{MPI\_Isend, MPI\_Irecv} and \texttt{MPI\_Waitall} instructions
+debugging. Indeed, apart from the review of the program sequence for asynchronous exchanges between processors within a cluster or between clusters, the algorithm was executed successfully with SMPI and provided identical outputs as those obtained with direct execution under MPI. For the synchronous GMRES method, the execution of the program raised no particular issue but in the asynchronous multisplitting method, the review of the sequence of \texttt{MPI\_Isend, MPI\_Irecv} and \texttt{MPI\_Waitall} instructions
 and with the addition of the primitive \texttt{MPI\_Test} was needed to avoid a memory fault due to an infinite loop resulting from the non-convergence of the algorithm.
 %\CER{We actually wanted to show how straightforward it is to adapt the algorithm to SimGrid. The problems described in this paragraph mainly concern the asynchronous mode}\LZK{OK. I would have preferred a bit more detail on the adaptation of the asynchronous version}
 %\CER{The major problem in adapting the MPI code to SMPI for the asynchronous part of the algorithm was the crash, under SMPI, of Waitall after an Isend and Irecv. I had proposed a workaround using a separate MPI\_Wait for each exchange instead of a single Waitall for ALL the exchanges, an instruction that seems to work fine in MPI. This workaround also works well. But afterwards, you modified the program by adding an MPI\_Test in the convergence-detection routine, and as a result the global exchange with Waitall also worked.}
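
Editor's note: to make the communication pattern discussed in the hunk above easier to follow, here is a minimal, hypothetical sketch in C (not taken from hpcc.tex or the authors' solver) of the sequence being described: non-blocking \texttt{MPI\_Isend}/\texttt{MPI\_Irecv} exchanges whose completion is polled with \texttt{MPI\_Test} from within the convergence-detection step, followed by a global \texttt{MPI\_Waitall} before the buffers are reused. The ring of neighbours, buffer names and sizes, the iteration bound and the stopping test are all placeholder assumptions.

/* Hypothetical sketch of the Isend/Irecv + MPI_Test + Waitall pattern;
 * names, sizes and the convergence test are placeholders, not the paper's code. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024          /* size of one exchanged boundary block (assumption) */
#define MAX_ITERS 10000 /* safety bound so a non-converging run still stops  */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;   /* hypothetical ring of neighbours */
    int right = (rank + 1) % size;

    double *sendl = calloc(N, sizeof(double)), *sendr = calloc(N, sizeof(double));
    double *recvl = calloc(N, sizeof(double)), *recvr = calloc(N, sizeof(double));
    MPI_Request reqs[4];

    int converged = 0;
    for (int iter = 0; iter < MAX_ITERS && !converged; iter++) {
        /* post the asynchronous exchanges for this iteration */
        MPI_Isend(sendr, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendl, N, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Irecv(recvl, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[2]);
        MPI_Irecv(recvr, N, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[3]);

        /* ... local computation and residual update would go here ... */

        /* poll the pending exchanges from the convergence-detection step:
         * MPI_Test returns immediately, so the loop can still decide to stop
         * even when some messages are late */
        int done = 0;
        for (int r = 0; r < 4; r++) {
            int flag = 0;
            MPI_Test(&reqs[r], &flag, MPI_STATUS_IGNORE);
            done += flag;
        }

        /* placeholder stopping test; a real solver would reduce a residual norm */
        double residual = 1.0 / (iter + 1);
        converged = (residual < 1e-3) && (done == 4);

        /* make sure every request has finished before the buffers are reused;
         * requests already completed by MPI_Test are MPI_REQUEST_NULL here */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    }

    if (rank == 0)
        printf("stopped after %s\n", converged ? "convergence" : "MAX_ITERS");

    free(sendl); free(sendr); free(recvl); free(recvr);
    MPI_Finalize();
    return 0;
}

Polling with \texttt{MPI\_Test} instead of blocking straight away in \texttt{MPI\_Waitall} keeps the convergence-detection routine running, which matches what the second comment above reports: once \texttt{MPI\_Test} was added at that point, the global exchange with Waitall also worked under SMPI.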