X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy2.git/blobdiff_plain/9bf808f8e110db33a501c0202fb6470145106354..08498d4d730017544934563b4b7ba98f160ae3a2:/mpi-energy2-extension/Heter_paper.tex?ds=sidebyside

diff --git a/mpi-energy2-extension/Heter_paper.tex b/mpi-energy2-extension/Heter_paper.tex
index 6b3fbb1..caa294f 100644
--- a/mpi-energy2-extension/Heter_paper.tex
+++ b/mpi-energy2-extension/Heter_paper.tex
@@ -214,7 +214,7 @@ scaling (DVFS) is one of them. It can be used to reduce the power consumption of
  The proposed algorithm is evaluated on a real grid, the grid'5000 platform, while running the NAS parallel benchmarks. The experiments show that it reduces the energy consumption on average by \np[\%]{30} while the performance is only degraded
- on average by \np[\%]{3}. Finally, the algorithm is
+ on average by \np[\%]{3.2}. Finally, the algorithm is
  compared to an existing method. The comparison results show that it outperforms the latter in terms of energy consumption reduction and performance.
 \end{abstract}
@@ -662,9 +662,9 @@ equation, as follows:
 \begin{figure}
   \centering
   \subfloat[Homogeneous cluster]{%
-  \includegraphics[width=.33\textwidth]{fig/homo}\label{fig:r1}} \hspace{2cm}%
+  \includegraphics[width=.4\textwidth]{fig/homo}\label{fig:r1}} \hspace{2cm}%
   \subfloat[Heterogeneous grid]{%
-  \includegraphics[width=.33\textwidth]{fig/heter}\label{fig:r2}}
+  \includegraphics[width=.4\textwidth]{fig/heter}\label{fig:r2}}
   \label{fig:rel}
   \caption{The energy and performance relation}
 \end{figure}
@@ -999,16 +999,7 @@ Table \ref{tab:sc} shows the number of nodes used from each cluster for each sce
 \end{table}
-\begin{figure}
-  \centering
-  \subfloat[The energy consumption by the nodes wile executing the NAS benchmarks over different scenarios
-  ]{%
-  \includegraphics[width=.4\textwidth]{fig/eng_con_scenarios.eps}\label{fig:eng_sen}} \hspace{1cm}%
-  \subfloat[The execution times of the NAS benchmarks over different scenarios]{%
-  \includegraphics[width=.4\textwidth]{fig/time_scenarios.eps}\label{fig:time_sen}}
-  \label{fig:exp-time-energy}
-  \caption{The energy consumption and execution time of NAS Benchmarks over different scenarios}
-\end{figure}
+
 The NAS parallel benchmarks are executed over these two platforms with different number of nodes, as in Table \ref{tab:sc}.
@@ -1034,18 +1025,6 @@ However, the execution times and the energy consumptions of EP and MG benchmark
 in both scenarios. Even when the number of nodes is doubled. On the other hand, the communications of the rest of the benchmarks increases when using long distance communications between two sites or increasing the number of computing nodes.
-\begin{figure}
-  \centering
-  \subfloat[The energy reduction while executing the NAS benchmarks over different scenarios ]{%
-  \includegraphics[width=.33\textwidth]{fig/eng_s.eps}\label{fig:eng_s}} \hspace{0.08cm}%
-  \subfloat[The performance degradation of the NAS benchmarks over different scenarios]{%
-  \includegraphics[width=.33\textwidth]{fig/per_d.eps}\label{fig:per_d}}\hspace{0.08cm}%
-  \subfloat[The tradeoff distance between the energy reduction and the performance of the NAS benchmarks
-  over different scenarios]{%
-  \includegraphics[width=.33\textwidth]{fig/dist.eps}\label{fig:dist}}
-  \label{fig:exp-res}
-  \caption{The experimental results of different scenarios}
-\end{figure}
 The energy saving percentage is computed as the ratio between the reduced energy consumption, equation (\ref{eq:energy}), and the original energy consumption,
@@ -1059,8 +1038,31 @@ is exponentially related to the CPU's frequency value. On the other side, the in
 increase the communication times and thus produces less energy saving depending on the benchmarks being executed. The results of the benchmarks CG, MG, BT and FT show more energy saving percentage in one site scenario when executed over 16 nodes comparing to 32 nodes. While, LU and SP consume more energy with 16 nodes than 32 in one site because their computations to communications ratio is not affected by the increase of the number of local communications.
+\begin{figure}
+  \centering
+  \subfloat[The energy consumption by the nodes while executing the NAS benchmarks over different scenarios
+  ]{%
+  \includegraphics[width=.4\textwidth]{fig/eng_con_scenarios.eps}\label{fig:eng_sen}} \hspace{1cm}%
+  \subfloat[The execution times of the NAS benchmarks over different scenarios]{%
+  \includegraphics[width=.4\textwidth]{fig/time_scenarios.eps}\label{fig:time_sen}}
+  \label{fig:exp-time-energy}
+  \caption{The energy consumption and execution time of NAS Benchmarks over different scenarios}
+\end{figure}
+\begin{figure}
+  \centering
+  \subfloat[The energy reduction while executing the NAS benchmarks over different scenarios ]{%
+  \includegraphics[width=.4\textwidth]{fig/eng_s.eps}\label{fig:eng_s}} \hspace{2cm}%
+  \subfloat[The performance degradation of the NAS benchmarks over different scenarios]{%
+  \includegraphics[width=.4\textwidth]{fig/per_d.eps}\label{fig:per_d}}\hspace{2cm}%
+  \subfloat[The tradeoff distance between the energy reduction and the performance of the NAS benchmarks
+  over different scenarios]{%
+  \includegraphics[width=.4\textwidth]{fig/dist.eps}\label{fig:dist}}
+  \label{fig:exp-res}
+  \caption{The experimental results of different scenarios}
+\end{figure}
+
 The energy saving percentage is reduced for all the benchmarks because of the long distance communications in the two sites scenario, except for the EP benchmark which has no communications. Therefore, the energy saving percentage of this benchmark is dependent on the maximum difference between the computing powers of the heterogeneous computing nodes, for example
@@ -1077,9 +1079,9 @@ The best energy saving percentage was obtained in the one site scenario with 16
 Figure \ref{fig:per_d} presents the performance degradation percentages for all benchmarks over the two scenarios. The performance degradation percentage for the benchmarks running on two sites with
-16 or 32 nodes is on average equal to 8\% or 4\% respectively.
+16 or 32 nodes is on average equal to 8.3\% or 4.7\% respectively.
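For reference while reading these hunks, here is a minimal sketch of how the plotted percentages are presumably obtained, assuming that ``the reduced energy consumption'' denotes the amount of energy saved by the frequency-scaled run; the symbols $E_{original}$, $E_{scaled}$, $T_{original}$ and $T_{scaled}$ are illustrative names only, the authoritative definitions being equation (\ref{eq:energy}) and the paper's performance model:

\begin{align*}
  % Illustrative forms only, not the paper's notation.
  \text{energy saving (\%)} &= \frac{E_{original} - E_{scaled}}{E_{original}} \times 100,\\
  \text{performance degradation (\%)} &= \frac{T_{scaled} - T_{original}}{T_{original}} \times 100.
\end{align*}

Read this way, the values quoted in the surrounding text (for example an average saving of 30\% against a degradation of 3.2\%) are simply the relative changes of the measured energy consumption and execution time with respect to the run at the maximum frequencies.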
 For this scenario, the proposed scaling algorithm selects smaller frequencies for the executions with 32 nodes without significantly degrading their performance because the communication times are higher with 32 nodes which results in smaller computations to communications ratio. On the other hand, the performance degradation percentage for the benchmarks running on one site with
-16 or 32 nodes is on average equal to 3\% or 10\% respectively. In opposition to the two sites scenario, when the number of computing nodes is increased in the one site scenario, the performance degradation percentage is increased. Therefore, doubling the number of computing
+16 or 32 nodes is on average equal to 3.2\% or 10.6\% respectively. In opposition to the two sites scenario, when the number of computing nodes is increased in the one site scenario, the performance degradation percentage is increased. Therefore, doubling the number of computing
 nodes when the communications occur in high speed network does not decrease the computations to communication ratio.
@@ -1091,7 +1093,7 @@ when the communication times increase and vice versa.
 Figure \ref{fig:dist} presents the distance percentage between the energy saving and the performance degradation for each benchmark over both scenarios. The tradeoff distance percentage can be computed as in equation \ref{eq:max}. The one site scenario with 16 nodes gives the best energy and performance
-tradeoff, on average it is equal to 26\%. The one site scenario using both 16 and 32 nodes had better energy and performance
+tradeoff, on average it is equal to 26.8\%. The one site scenario using both 16 and 32 nodes had better energy and performance
 tradeoff comparing to the two sites scenario because the former has high speed local communications which increase the computations to communications ratio and the latter uses long distance communications which decrease this ratio.
@@ -1177,12 +1179,12 @@ in the figure \ref{fig:dist-mc}.
 These tradeoff distance between energy consump
 \begin{figure}
   \centering
   \subfloat[The energy saving of running NAS benchmarks over one core and multicores scenarios]{%
-  \includegraphics[width=.33\textwidth]{fig/eng_s_mc.eps}\label{fig:eng-s-mc}} \hspace{0.08cm}%
+  \includegraphics[width=.4\textwidth]{fig/eng_s_mc.eps}\label{fig:eng-s-mc}} \hspace{2cm}%
   \subfloat[The performance degradation of running NAS benchmarks over one core and multicores scenarios ]{%
-  \includegraphics[width=.33\textwidth]{fig/per_d_mc.eps}\label{fig:per-d-mc}}\hspace{0.08cm}%
+  \includegraphics[width=.4\textwidth]{fig/per_d_mc.eps}\label{fig:per-d-mc}}\hspace{2cm}%
   \subfloat[The tradeoff distance of running NAS benchmarks over one core and multicores scenarios]{%
-  \includegraphics[width=.33\textwidth]{fig/dist_mc.eps}\label{fig:dist-mc}}
+  \includegraphics[width=.4\textwidth]{fig/dist_mc.eps}\label{fig:dist-mc}}
   \label{fig:exp-res}
   \caption{The experimental results of one core and multi-cores scenarios}
 \end{figure}
@@ -1203,11 +1205,12 @@ In these experiments, the class D of the NAS parallel benchmarks are executed ov
 \begin{figure}
   \centering
   \subfloat[The energy saving percentages for the nodes executing the NAS benchmarks over the three power scenarios]{%
-  \includegraphics[width=.33\textwidth]{fig/eng_pow.eps}\label{fig:eng-pow}} \hspace{0.08cm}%
+  \includegraphics[width=.4\textwidth]{fig/eng_pow.eps}\label{fig:eng-pow}} \hspace{2cm}%
   \subfloat[The performance degradation percentages for the NAS benchmarks over the three power scenarios]{%
-  \includegraphics[width=.33\textwidth]{fig/per_pow.eps}\label{fig:per-pow}}\hspace{0.08cm}%
+  \includegraphics[width=.4\textwidth]{fig/per_pow.eps}\label{fig:per-pow}}\hspace{2cm}%
   \subfloat[The tradeoff distance between the energy reduction and the performance of the NAS benchmarks over the three power scenarios]{%
-  \includegraphics[width=.33\textwidth]{fig/dist_pow.eps}\label{fig:dist-pow}}
+
+  \includegraphics[width=.4\textwidth]{fig/dist_pow.eps}\label{fig:dist-pow}}
   \label{fig:exp-pow}
   \caption{The experimental results of different static power scenarios}
 \end{figure}
@@ -1267,11 +1270,11 @@ presented in the figures \ref{fig:edp-eng}, \ref{fig:edp-perf} and \ref{fig:edp-
 \begin{figure}
   \centering
   \subfloat[The energy reduction induced by the Maxdist method and the EDP method]{%
-  \includegraphics[width=.33\textwidth]{fig/edp_eng}\label{fig:edp-eng}} \hspace{0.08cm}%
+  \includegraphics[width=.4\textwidth]{fig/edp_eng}\label{fig:edp-eng}} \hspace{2cm}%
   \subfloat[The performance degradation induced by the Maxdist method and the EDP method]{%
-  \includegraphics[width=.33\textwidth]{fig/edp_per}\label{fig:edp-perf}}\hspace{0.08cm}%
+  \includegraphics[width=.4\textwidth]{fig/edp_per}\label{fig:edp-perf}}\hspace{2cm}%
   \subfloat[The tradeoff distance between the energy consumption reduction and the performance for the Maxdist method and the EDP method]{%
-  \includegraphics[width=.33\textwidth]{fig/edp_dist}\label{fig:edp-dist}}
+  \includegraphics[width=.4\textwidth]{fig/edp_dist}\label{fig:edp-dist}}
   \label{fig:edp-comparison}
   \caption{The comparison results}
 \end{figure}
@@ -1300,7 +1303,7 @@ of the distributed iterative message passing application running over a grid arc
 To evaluate the proposed method on a real heterogeneous grid platform, it was applied on the NAS parallel benchmarks and the class D instance was executed over the grid'5000 testbed platform.
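Before the concluding hunk, a minimal sketch of the two objective functions compared in figure \ref{fig:edp-comparison}, under the assumption that Maxdist maximizes the tradeoff distance of equation \ref{eq:max} (the gap between the predicted energy saving and the predicted performance degradation) while the EDP method minimizes an energy-delay product; here $\vec{F}$ denotes a candidate vector of frequencies and $E(\vec{F})$, $T(\vec{F})$ the predicted energy consumption and execution time (illustrative notation, not the paper's):

\begin{align*}
  % Illustrative sketch of the assumed objectives.
  \text{Maxdist:} \quad & \max_{\vec{F}} \bigl(\text{energy saving (\%)} - \text{performance degradation (\%)}\bigr),\\
  \text{EDP:} \quad & \min_{\vec{F}} \; E(\vec{F}) \cdot T(\vec{F}).
\end{align*}

The EDP objective is shown in its common energy $\times$ delay form; the method compared against may weight the delay term differently (for example, delay squared).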
 The experimental results showed that the algorithm reduces on average 30\% of the energy consumption
-for all the NAS benchmarks while only degrading by 3\% on average the performance.
+for all the NAS benchmarks while degrading the performance by only 3.2\% on average.
 The Maxdist algorithm was also evaluated in different scenarios that vary in the distribution of the computing nodes between different clusters' sites or \textcolor{blue}{between using one core and multi-cores per node} or in the values of the consumed static power. The algorithm selects different vector of frequencies according to the computations and communication times ratios, and the values of the static and measured dynamic powers of the CPUs. Finally, the proposed algorithm was compared to another method that uses