From: jean-claude
Date: Wed, 23 Sep 2015 15:01:39 +0000 (+0200)
Subject: correction
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy2.git/commitdiff_plain/e190696e80767b1bc629701f186939c868d24395?ds=inline;hp=-c

correction
---

e190696e80767b1bc629701f186939c868d24395
diff --git a/mpi-energy2-extension/Heter_paper.tex b/mpi-energy2-extension/Heter_paper.tex
index f5917fa..8ada2a0 100644
--- a/mpi-energy2-extension/Heter_paper.tex
+++ b/mpi-energy2-extension/Heter_paper.tex
@@ -675,52 +675,53 @@ The benchmarks have seven different classes, S, W, A, B, C, D and E, that repres
 \subsection{The experimental results of the scaling algorithm}
 \label{sec.res}
 
-In this section, the scaling factor selection algorithm \ref{HSA}, is applied
-to NAS parallel benchmarks. Seven benchmarks, CG, MG, EP, LU, BT, SP and FT, of the class D
-are executed over grid'5000 computing clusters. As mentioned previously, the experiments
-of this paper obtained from a collection of many clusters distributed in two sites, Lyon and Nancy sites,
-of grid'5000. Four different clusters are selected from these two sites to generate two
-different scenarios. Each of these two scenarios used three clusters. The first scenario,
-is composed from three clusters that located in two sites, Lyon and Nancy sites. One of these three
-clusters is from Lyon site, Taurus cluster and the other two clusters are form Nancy site,
-Graphene and Griffon clusters. The second scenario, is composed from three clusters that are
-located in one site, Nancy site. These cluster are Graphite, Graphene and griffon. The main reason
-behind using these two scenarios is because the first one is executing the NAS parllel benchmarks over
-two sites that are connected via long distance network, then the computations to communications ratio
-is very low due to the increase in communication times, while in the second scenario, all of the three clusters are
-located in one site and they are connected via high speed local area networks, where the computations
-to communications ratio is higher. Therefore, it is very interested to know the performance behaviour
-and the energy consumption of NAS parallel benchmarks using the proposed method, when they run
-over these two different platform scenarios. Moreover, The NAS parallel benchmarks are executed over
+In this section, the results of applying the scaling factors selection algorithm \ref{HSA}
+to the NAS parallel benchmarks are presented.
+
+As mentioned previously, the experiments
+were conducted over two sites of Grid'5000, the Lyon and Nancy sites.
+Two scenarios were considered while selecting the clusters from these two sites:
+\begin{itemize}
+\item In the first scenario, nodes from three heterogeneous clusters spread over the two sites
+were selected. The two sites are connected via a long distance network.
+\item In the second scenario, nodes from three clusters located in a single site, the Nancy site,
+were selected.
+\end{itemize}
+
+The main reason
+behind using these two scenarios is to evaluate the influence of long distance communications (higher latency) on the performance of the
+scaling factors selection algorithm. Indeed, in the first scenario the computations to communications ratio
+is very low due to the higher communication times, which reduces the effect of the DVFS operations, as sketched below.
+
+The NAS parallel benchmarks are executed over
 16 and 32 nodes for each scenario.
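To make the ratio argument in the rewritten paragraph concrete, here is a minimal sketch that is not part of the patch, using notation common to DVFS energy models: $S = F_{max}/F_{new} \geq 1$ is an assumed frequency scaling factor of a node, $T_{cp}$ and $T_{cm}$ its computation and communication times, and $P_d$ its dynamic power at the maximum frequency. Scaling down the frequency stretches only the computation part of the execution time, while the dynamic energy drops quadratically:

\[
T(S) \approx T_{cp} \cdot S + T_{cm}, \qquad
E_{dyn}(S) \approx P_d \cdot S^{-2} \cdot T_{cp}.
\]

When $T_{cm} \gg T_{cp}$, as in the two sites scenario, the dynamic term is a small share of the total consumption while the static power is drawn during the whole of $T(S)$, so even an aggressive scaling factor recovers little energy overall.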
 The number of participating computing nodes from each cluster
-are different, this depends on the available number of nodes in each cluster.
-Table \ref{tab:sc} shows the details of these two scenarios and the number of nodes
-used from each cluster.
+differs from one scenario to another because the clusters do not all have the same
+number of available nodes and the benchmarks do not all require the same number of computing nodes.
+Table \ref{tab:sc} shows the number of nodes used from each cluster for each scenario.
 
 \begin{table}[h]
 \caption{The different cluster scenarios}
 \centering
-\begin{tabular}{|*{3}{c|}}
+\begin{tabular}{|*{4}{c|}}
 \hline
-\multirow{2}{*}{Scenario name} & \multicolumn{2}{c|} {The participating clusters} \\ \cline{2-3}
- & Cluster name & No. of nodes of each cluster \\
+\multirow{2}{*}{Scenario name} & \multicolumn{3}{c|}{The participating clusters} \\ \cline{2-4}
+ & Cluster & Site & No. of nodes \\
 \hline
-\multirow{3}{*}{Two sites / 16 nodes} & Taurus & 5 \\ \cline{2-3}
- & Graphene & 5 \\ \cline{2-3}
- & Griffon & 6 \\
+\multirow{3}{*}{Two sites / 16 nodes} & Taurus & Lyon & 5 \\ \cline{2-4}
+ & Graphene & Nancy & 5 \\ \cline{2-4}
+ & Griffon & Nancy & 6 \\
 \hline
-\multirow{3}{*}{Tow sites / 32 nodes} & Taurus & 10 \\ \cline{2-3}
- & Graphene & 10 \\ \cline{2-3}
- & Griffon & 12 \\
+\multirow{3}{*}{Two sites / 32 nodes} & Taurus & Lyon & 10 \\ \cline{2-4}
+ & Graphene & Nancy & 10 \\ \cline{2-4}
+ & Griffon & Nancy & 12 \\
 \hline
-\multirow{3}{*}{One site / 16 nodes} & Graphite & 4 \\ \cline{2-3}
- & Graphene & 6 \\ \cline{2-3}
- & Griffon & 6 \\
+\multirow{3}{*}{One site / 16 nodes} & Graphite & Nancy & 4 \\ \cline{2-4}
+ & Graphene & Nancy & 6 \\ \cline{2-4}
+ & Griffon & Nancy & 6 \\
 \hline
-\multirow{3}{*}{One site / 32 nodes} & Graphite & 4 \\ \cline{2-3}
- & Graphene & 12 \\ \cline{2-3}
- & Griffon & 12 \\
+\multirow{3}{*}{One site / 32 nodes} & Graphite & Nancy & 4 \\ \cline{2-4}
+ & Graphene & Nancy & 12 \\ \cline{2-4}
+ & Griffon & Nancy & 12 \\
 \hline
 \end{tabular}
 \label{tab:sc}
@@ -743,23 +744,25 @@ used from each cluster.
 \end{figure}
 
 The NAS parallel benchmarks are executed over these two platform
-scenarios with different number of nodes, as in Table \ref{tab:sc}.
-The overall energy consumption of all benchmark, class D, with
-applying the proposed frequency selection algorithm is measured
+scenarios with different numbers of nodes, as shown in Table \ref{tab:sc}.
+The overall energy consumption of all the benchmarks solving the class D instance and
+using the proposed frequency selection algorithm is measured
 using the equation of the reduced energy consumption, equation
 (\ref{eq:energy}). This model uses the measured dynamic and static
-power values that showed in Table \ref{table:grid5000}. The execution
-time is measured for all benchmarks over these different scenarios.
-The energy consumptions and the execution times for all benchmarks are
-demonstrated in the plots \ref{fig:eng_sen} and \ref{fig:time_sen} respectively.
-In general, the energy consumptions of NAS benchmarks over one site scenario
-for 16 and 32 nodes are less than those executed over the two sites
-scenarios. This because in the two sites scenario the communication times
-are higher, due to long distance communications between the two distributed sites.
-This leading to more static energy consumption which is linearly related to the
-increased in the communication time.
-The execution times of these benchmarks
-over one sites for 16 and 32 nodes are less comparing to the two sites
-scenario according to the increase in communications times.
+power values shown in Table \ref{table:grid5000}. The execution
+time is measured for all the benchmarks over these different scenarios.
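Equation (\ref{eq:energy}) itself is not reproduced in this hunk. As a point of reference only, the following is a sketch of the form such reduced-energy models usually take, assuming $Pd_i$ and $Ps_i$ are the measured dynamic and static powers of node $i$, $S_i$ its selected scaling factor, $Tcp_i$ its computation time, $N$ the number of nodes and $T$ the overall execution time; the exact equation used in the paper may differ:

\[
E \approx \sum_{i=1}^{N} Pd_i \cdot S_i^{-2} \cdot Tcp_i \;+\; \sum_{i=1}^{N} Ps_i \cdot T.
\]

Under a model of this shape the static term grows linearly with the execution time, which is why the longer communication and idle times of the two sites scenario translate directly into more static energy consumption, as the following paragraphs observe.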
+
+The energy consumptions and the execution times for all the benchmarks are
+presented in the plots \ref{fig:eng_sen} and \ref{fig:time_sen} respectively.
+
+In general, the energy consumed while executing the NAS benchmarks over the one site scenario
+for 16 and 32 nodes is lower than the energy consumed over the two sites scenario.
+The long distance communications between the two distributed sites increase the idle time,
+which leads to more static energy consumption.
+The execution times of these benchmarks
+over one site with 16 and 32 nodes are also lower than those of the two sites
+scenario.
+
 The EP and MG benchmarks, where there is little or no communication, showed that their execution times and the energy consumptions are not affected
diff --git a/mpi-energy2-extension/my_reference.bib b/mpi-energy2-extension/my_reference.bib
index de5f548..ae351be 100644
--- a/mpi-energy2-extension/my_reference.bib
+++ b/mpi-energy2-extension/my_reference.bib
@@ -805,7 +805,6 @@ ISSN={1045-9219},}
 address = {Hyderabad, India},
 booktitle = {PDSEC 2015, 16th IEEE Int. Workshop on Parallel and Distributed Scientific and Engineering Computing (in conjunction with IPDPS 2015)},
 month = {May},
-%pages = {***--***},
 publisher = {IEEE}
 }