X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy2.git/blobdiff_plain/fe2e14acbe9cfb21323577b06ba412e8ea6b2a75..HEAD:/mpi-energy2-extension/Heter_paper.tex

diff --git a/mpi-energy2-extension/Heter_paper.tex b/mpi-energy2-extension/Heter_paper.tex
index fabd22e..9108cb1 100644
--- a/mpi-energy2-extension/Heter_paper.tex
+++ b/mpi-energy2-extension/Heter_paper.tex
@@ -392,7 +392,7 @@ where $N$ is the number of clusters in the grid, $M_i$ is the number of nodes
 and $\Tcm[hj]$ is the communication time of processor $j$ in the cluster $h$ during the first iteration.
 The execution time for one iteration is equal to the sum of the maximum computation time for all nodes with the new scaling factors
 and the communication time of the slowest node without slack time during one iteration.
-The slowest node $h$ is the node which takes the maximum execution time to execute an iteration before scaling down its frequency.
+The slowest node in cluster $h$ is the node which takes the maximum execution time to execute an iteration before scaling down its frequency.
 It means that only the communication time without any slack time is taken into account.
 Therefore, the execution time of the application is equal to
 the execution time of one iteration as in Equation (\ref{eq:perf}) multiplied by the
@@ -1266,7 +1266,8 @@ the global convergence of the iterative system. Finally, it would be interesting
 \section*{Acknowledgment}
 
 This work has been partially supported by the Labex ACTION project (contract
-``ANR-11-LABX-01-01''). Computations have been performed on the Grid'5000 platform. As a PhD student,
+``ANR-11-LABX-01-01''). Computations have been performed on the Grid'5000
+platform and on the mésocentre of Franche-Comté. As a PhD student,
 Mr. Ahmed Fanfakh, would like to thank the University of Babylon (Iraq) for
 supporting his work.
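
Note on the first hunk: it edits the prose surrounding the paper's per-iteration performance model, Equation (\ref{eq:perf}), but the equation itself lies outside the diff. Going only by the definitions visible in the context lines ($N$ clusters, $M_i$ nodes in cluster $i$, $\Tcm[hj]$ the communication time of processor $j$ in cluster $h$, and the new frequency scaling factors), the model described would take roughly the form sketched below; the names $T_{\mathit{new}}$, $T^{cp}_{ij}$, $S_{ij}$ and the index $(h, j^{*})$ of the slowest node are assumed notation, not symbols taken from the paper.

% Hedged sketch only: Equation (\ref{eq:perf}) itself is not part of this diff.
% T_new, T^{cp}_{ij} (computation time of node j in cluster i before scaling),
% S_{ij} (its new frequency scaling factor) and (h, j*) (the slowest node, i.e.
% the one with maximum execution time before scaling) are placeholder names;
% T^{cm}_{hj} matches the definition of \Tcm[hj] in the context lines above.
% Requires amsmath for \substack and equation*.
\begin{equation*}
  T_{\mathit{new}}
    = \max_{\substack{i = 1,\dots,N\\ j = 1,\dots,M_i}}
        \bigl( T^{cp}_{ij} \cdot S_{ij} \bigr)
      + T^{cm}_{h j^{*}}
\end{equation*}

Under this reading, the first term is the maximum scaled computation time over all nodes of the grid and the second term is the communication time of the slowest node, which by construction contains no slack time, matching the sentence edited by the hunk.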