From: lilia
Date: Thu, 24 Apr 2014 15:07:15 +0000 (+0200)
Subject: 24-04-2014bbb
X-Git-Tag: hpcc2014_submission~74
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/hpcc2014.git/commitdiff_plain/f9bb0366521948860427dbe75c159008da521ac3?ds=inline;hp=580c1f07165fe15586922daa61ff46ff216c6965

24-04-2014bbb
---

diff --git a/hpcc.tex b/hpcc.tex
index 11c39db..e1d916e 100644
--- a/hpcc.tex
+++ b/hpcc.tex
@@ -339,9 +339,7 @@ where $\MI$ is the maximum number of outer iterations and $\epsilon$ is the tole
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 We did not encounter major blocking problems when adapting the multisplitting algorithm previously described to a simulation environment like SimGrid, except for some code
-debugging. Indeed, apart from the review of the program sequence for asynchronous exchanges between the six neighbors of each point (left, right, front, behind, top, down) in a cubic partitioned submatrix within a cluster or between clusters, \CER{I have added some details, but would it be necessary to describe the 3D discretization at this level?}
-\LZK{No, it is not necessary. At this level we describe the general multisplitting algorithm, so I think it is preferable not to specify the communication scheme, which may change depending on the type of problem. \\ {\bf For example: Indeed, apart from the review of the program sequence for asynchronous exchanges between processors within a cluster or between clusters}}
-the algorithm was executed successfully with SMPI and provided outputs identical to those obtained with direct execution under MPI. In synchronous
+debugging. Indeed, apart from the review of the program sequence for asynchronous exchanges between processors within a cluster or between clusters, the algorithm was executed successfully with SMPI and provided outputs identical to those obtained with direct execution under MPI. In synchronous
 mode, the execution of the program raised no particular issue, but in asynchronous
 mode, the review of the sequence of MPI\_Isend, MPI\_Irecv and MPI\_Waitall instructions,
 together with the addition of the primitive MPI\_Test, was needed to avoid a memory fault due to an infinite loop resulting from the non-convergence of the algorithm. \CER{We actually wanted to show the simplicity of adapting the algorithm to SimGrid. The problems encountered, described in this paragraph, mainly concern the asynchronous mode.}\LZK{OK. I would have preferred a bit more detail on the adaptation of the asynchronous version.}
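
For illustration, below is a minimal sketch of the asynchronous exchange pattern the patched paragraph describes: non-blocking MPI_Isend/MPI_Irecv calls polled with MPI_Test (here MPI_Testall, its array variant) rather than a blocking MPI_Waitall. The function name async_exchange and the MAX_POLLS guard are hypothetical illustrations, not taken from the paper's source; the guard stands in for whatever stopping test the authors added to break the infinite loop caused by non-convergence.

/* Sketch (assumptions labeled above): one non-blocking halo exchange
 * with a single neighbor, completed by polling instead of blocking. */
#include <mpi.h>

#define MAX_POLLS 1000000  /* hypothetical bound on polling attempts */

/* Returns 1 if both the send and the receive completed, 0 if the
 * polling cap was reached, letting the caller abort cleanly. */
int async_exchange(double *sendbuf, double *recvbuf, int count,
                   int neighbor, MPI_Comm comm)
{
    MPI_Request reqs[2];
    int completed = 0;
    long polls = 0;

    MPI_Irecv(recvbuf, count, MPI_DOUBLE, neighbor, 0, comm, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, neighbor, 0, comm, &reqs[1]);

    /* MPI_Testall returns immediately, so the caller could interleave
     * local relaxation sweeps here; a blocking MPI_Waitall would give
     * no opportunity to detect divergence and stop. */
    while (!completed && polls++ < MAX_POLLS)
        MPI_Testall(2, reqs, &completed, MPI_STATUSES_IGNORE);

    return completed;
}

Because SMPI intercepts standard MPI calls, a routine written in this style runs unchanged under SimGrid, which is consistent with the paragraph's claim that the adaptation required only a review of the communication sequence rather than a rewrite.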