From: Arnaud Giersch
Date: Fri, 25 Apr 2014 15:09:30 +0000 (+0200)
Subject: Typo + remarks.
X-Git-Tag: hpcc2014_submission~63
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/hpcc2014.git/commitdiff_plain/b78099805e44ef25710d8168d3d886021e646084?ds=sidebyside;hp=-c

Typo + remarks.
---

b78099805e44ef25710d8168d3d886021e646084
diff --git a/hpcc.tex b/hpcc.tex
index 9dbbb3f..28d08c0 100644
--- a/hpcc.tex
+++ b/hpcc.tex
@@ -254,7 +254,7 @@ like the communications are intercepted, and their running time is computed
 according to the characteristics of the simulated execution platform. The
 description of this target platform is given as an input for the execution, by
 means of an XML file. It describes the properties of the platform, such as
-the computing node with their computing power, the interconnection links with
+the computing nodes with their computing power, the interconnection links with
 their bandwidth and latency, and the routing strategy. The simulated running
 time of the application is computed according to these properties.
 
@@ -378,7 +378,9 @@ Note here that the use of SMPI functions optimizer for memory footprint and CPU
 As mentioned, upon this adaptation, the algorithm is executed as in real life
 in the simulated environment after the following minor changes. First, all declared global variables have been moved to local variables for each subroutine.
 In fact, global variables generate side effects arising from the concurrent access of shared memory used by threads simulating each computing unit in the
 SimGrid architecture. Second, the alignment of certain types of variables such as ``long int'' had
-also to be reviewed. Finally, some compilation errors on MPI\_Waitall and MPI\_Finalize primitives have been fixed with the latest version of SimGrid.
+also to be reviewed.
+\AG{Regarding these alignment problems, say more about them if it is of interest, or remove this remark.}
+ Finally, some compilation errors on MPI\_Waitall and MPI\_Finalize primitives have been fixed with the latest version of SimGrid.
 In total, the initial MPI program running on the simulation environment SMPI gave, after a very simple adaptation, the same results as those obtained in a real environment. We have successfully executed the code in synchronous mode using the GMRES algorithm and compared it with a multisplitting method in asynchronous mode after a few modifications.
 
@@ -448,6 +450,7 @@ factors have providing the results shown in Table~\ref{tab.cluster.2x50} with a
 matrix size ranging from $N_x = N_y = N_z = \text{62}$ to 171 elements or from
 $\text{62}^\text{3} = \text{\np{238328}}$ to $\text{171}^\text{3} =
 \text{\np{5000211}}$ entries.
+\AG{Explain how to read the tables.}
 
 % use the same column width for the following three tables
 \newlength{\mytablew}\settowidth{\mytablew}{\footnotesize\np{E-11}}
@@ -573,6 +576,7 @@ Note that the program was run with the following parameters:
 
 \paragraph*{SMPI parameters}
 
+~\\{}\AG{Give a bit more detail (the platform in particular).}
 \begin{itemize}
 \item HOSTFILE: Hosts file description.
 \item PLATFORM: file description of the platform architecture: clusters (CPU power,
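The second hunk explains that global variables had to be moved into the subroutines because SimGrid/SMPI simulates each computing unit with a thread inside a single process, so file-scope variables end up shared between the simulated ranks. The fragment below is only a minimal sketch of that kind of adaptation, not code from the paper; the names (solve_step, residual) and the computation are invented for illustration.

    #include <mpi.h>
    #include <stdio.h>

    /* Before the adaptation the variable lived at file scope, e.g.:
     *     static double residual;
     * Under SMPI all simulated ranks run as threads of one process, so such a
     * global is shared between ranks instead of being private to each of them. */

    static void solve_step(int rank)
    {
        /* After the adaptation: a local variable, one private copy per
         * simulated rank, since each thread has its own stack. */
        double residual = 0.0;

        residual += (double)rank;   /* placeholder for the real computation */
        printf("rank %d, residual %g\n", rank, residual);
    }

    int main(int argc, char *argv[])
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        solve_step(rank);
        MPI_Finalize();
        return 0;
    }

Such an adapted program would typically be built with smpicc and launched with smpirun, passing it the hostfile and platform file that the last hunk's SMPI parameters refer to.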
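The first hunk describes the XML platform file that SimGrid takes as input (computing nodes with their computing power, interconnection links with their bandwidth and latency, and the routing strategy), and the last hunk lists the HOSTFILE and PLATFORM parameters given to SMPI. Purely as an illustration, a minimal platform description in the SimGrid 3.x syntax of that period might look like the sketch below; the host and link names and all numeric values are invented, not the ones used for the paper's experiments.

    <?xml version='1.0'?>
    <!DOCTYPE platform SYSTEM "http://simgrid.gforge.inria.fr/simgrid.dtd">
    <platform version="3">
      <AS id="AS0" routing="Full">
        <!-- computing nodes with their computing power (flop/s) -->
        <host id="node-0" power="1E9"/>
        <host id="node-1" power="1E9"/>
        <!-- an interconnection link with its bandwidth (B/s) and latency (s) -->
        <link id="link-0" bandwidth="1.25E8" latency="5E-5"/>
        <!-- routing strategy: which link the traffic between the nodes uses -->
        <route src="node-0" dst="node-1">
          <link_ctn id="link-0"/>
        </route>
      </AS>
    </platform>

The hostfile passed alongside it is usually just the list of host identifiers declared in the platform, one per line (here node-0 and node-1).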