X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy2.git/blobdiff_plain/51adf98ec4e478adc16d4bcfa4e8f52b80af889b..fb3df08c1b03ad24159fa23c77212a53ae95a42d:/Heter_paper.tex

diff --git a/Heter_paper.tex b/Heter_paper.tex
index 00f96d4..8579901 100644
--- a/Heter_paper.tex
+++ b/Heter_paper.tex
@@ -421,7 +421,7 @@ The plots~(\ref{fig:r1} and \ref{fig:r2}) illustrate the normalized performance
 \section{Experimental results}
 \label{sec.expe}
-To evaluate the efficiency and the overall energy consumption reduction of algorithm~\ref{HSA}), it was applied to the NAS parallel benchmarks NPB v3.3
+To evaluate the efficiency and the overall energy consumption reduction of algorithm~(\ref{HSA}), it was applied to the NAS parallel benchmarks NPB v3.3
 \cite{44}. The experiments were executed on the simulator SimGrid/SMPI v3.10~\cite{casanova+giersch+legrand+al.2014.versatile}, which offers easy tools to create a heterogeneous platform and to run message passing applications over it. The heterogeneous platform used in the experiments had one core per node, because only one process was executed per node. It was composed of four types of nodes, and each type of node had different characteristics, such as the maximum CPU frequency, the number of available frequencies and the computational power, see table
@@ -631,36 +631,27 @@ The overall energy consumption was computed for each instance according to the energy consumption model~(\ref{eq:energy}), with and without applying the algorithm. The execution time was also measured for all these experiments. Then, the energy saving and performance degradation percentages were computed for each instance. The results are presented in tables (\ref{table:res_4n}, \ref{table:res_8n}, \ref{table:res_16n}, \ref{table:res_32n}, \ref{table:res_64n} and \ref{table:res_128n}).
 All these results are the average values from many experiments for energy savings and performance degradation.
-The tables show the experimental results for running the NAS parallel benchmarks on different number of nodes. The experiments show that the algorithm reduce significantly the energy consumption (up to 35\%) and tries to limit the performance degradation. They also show that the energy saving percentage is decreased when the number of the computing nodes is increased. This reduction is due to the increase of the communication times compared to the execution times when the benchmarks are run over a high number of nodes. Indeed, the benchmarks with the same class, C, are executed on different number of nodes, so the computation required for each iteration is divided by the number of computing nodes. On the other hand, more communications are required when increasing the number of nodes so the static energy is increased linearly according to the communication time and the dynamic power is less relevant in the overall energy consumption. Therefore, reducing the frequency with algorithm~\ref{HSA}) have less effect in reducing the overall energy savings. It can also be noticed that for the benchmarks EP and SP that contain little or no communications, the energy savings are not significantly affected with the high number of nodes. No experiments were conducted using bigger classes such as D, because they require a lot of memory(more than 64GB) when being executed by the simulator on one machine.
+The tables show the experimental results for running the NAS parallel benchmarks on different numbers of nodes. The experiments show that the algorithm significantly reduces the energy consumption (up to 35\%) while trying to limit the performance degradation. They also show that the energy saving percentage decreases when the number of computing nodes increases.
This reduction is due to the increase of the communication times compared to the execution times when the benchmarks are run on a high number of nodes. Indeed, benchmarks of the same class, C, are executed on different numbers of nodes, so the computation required for each iteration is divided by the number of computing nodes. On the other hand, more communications are required when the number of nodes increases, so the static energy increases linearly with the communication time and the dynamic power becomes less relevant in the overall energy consumption. Therefore, reducing the frequency with algorithm~(\ref{HSA}) has less effect on the overall energy savings. It can also be noticed that for the benchmarks EP and SP, which contain little or no communications, the energy savings are not significantly affected by the high number of nodes. No experiments were conducted using bigger classes, such as D, because they require a lot of memory (more than 64~GB) when executed by the simulator on one machine.
 The maximum distance between the normalized energy curve and the normalized performance curve for each instance is also shown in the result tables. It decreases in the same way as the energy saving percentage. The tables also show that the performance degradation percentage is not significantly increased when the number of computing nodes increases, because the computation times are small compared to the communication times.
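The argument above (static energy accrues over the whole execution, including communications, while frequency scaling only shrinks the dynamic, computation-bound part) can be checked with a small numeric sketch. The model below is not the paper's energy model~(\ref{eq:energy}); it is a generic DVFS energy model assumed purely for illustration, using the 80\%/20\% dynamic/static power split of these experiments:

```python
# Hedged sketch, NOT the paper's exact model (eq:energy). Assumptions:
#  - dynamic power scales quadratically with the scaling factor
#    s = f_max / f_new, and computation time stretches by s;
#  - static power is drawn over the whole run (computation +
#    communication), so static energy grows with communication time.

def energy(t_comp, t_comm, s, p_dyn=0.8, p_static=0.2):
    """Energy of one node under scaling factor s >= 1 (assumed model)."""
    dynamic = p_dyn * s**-2 * (t_comp * s)       # dynamic energy, computation only
    static = p_static * (t_comp * s + t_comm)    # static energy, whole execution
    return dynamic + static

def saving_percent(t_comp, t_comm, s):
    """Energy saving of scaling factor s versus no scaling, in percent."""
    e_ref = energy(t_comp, t_comm, 1.0)
    e_new = energy(t_comp, t_comm, s)
    return 100.0 * (e_ref - e_new) / e_ref

# Fixed scaling factor, growing communication share (as when class C
# benchmarks are spread over more nodes): the saving percentage drops.
for t_comm in (0.0, 1.0, 4.0):
    print(f"t_comm={t_comm}: saving = {saving_percent(1.0, t_comm, 1.5):.1f}%")
```

Under these assumptions the saving falls from about 16.7\% with no communications to about 9.3\% when the communication time is four times the computation time, mirroring the trend reported in the tables.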
\begin{figure}
  \centering
-  \subfloat[CG, MG, LU and FT benchmarks]{%
-    \includegraphics[width=.23185\textwidth]{fig/avg_eq}\label{fig:avg_eq}}%
+  \subfloat[Energy saving]{%
+    \includegraphics[width=.2315\textwidth]{fig/energy}\label{fig:energy}}%
  \quad%
-  \subfloat[BT and SP benchmarks]{%
-    \includegraphics[width=.23185\textwidth]{fig/avg_neq}\label{fig:avg_neq}}
+  \subfloat[Performance degradation]{%
+    \includegraphics[width=.2315\textwidth]{fig/per_deg}\label{fig:per_deg}}
  \label{fig:avg}
-  \caption{The average of energy and performance for all NAS benchmarks running with difference number of nodes}
+  \caption{The energy savings and performance degradation for all NAS benchmarks running with different numbers of nodes}
\end{figure}
- The average of values of these three objectives are plotted to the number of
-nodes as in plots (\ref{fig:avg_eq} and \ref{fig:avg_neq}). In CG, MG, LU, and
-FT benchmarks the average of energy saving is decreased when the number of nodes
-is increased because the communication times is increased as mentioned
-before. Thus, the average of distances (our objective function) is decreased
-linearly with energy saving while keeping the average of performance degradation approximately is
-the same. In BT and SP benchmarks, the average of the energy saving is not decreased
-significantly compare to other benchmarks when the number of nodes is
-increased. Nevertheless, the average of performance degradation approximately
-still the same ratio. This difference is depends on the characteristics of the
-benchmark such as the computations to communications ratio that has.
-
-\textbf{All the previous paragraph should be deleted, we need to talk about it}
+ \textbf{The energy saving and performance degradation of all benchmarks are plotted against the number of
+nodes in plots (\ref{fig:energy} and \ref{fig:per_deg}). As shown in the plots, the energy saving percentage of the benchmarks MG, LU, BT and FT decreases linearly when the number of nodes increases.
In the EP benchmark, the energy saving percentage remains approximately the same when the number of computing nodes increases, because there are no communications in this benchmark. In the SP benchmark, the energy saving percentage decreases when it runs on a small number of nodes, while this percentage increases when it runs on a big number of nodes. The energy saving of the CG benchmark decreases significantly when the number of nodes increases, because this benchmark has more communications than the others. The performance degradation percentage of the benchmarks CG, EP, LU and BT decreases when they run on a big number of nodes, while the MG benchmark has a higher performance degradation percentage when it runs on a big number of nodes. The inverse happens in the SP benchmark, which has a smaller performance degradation percentage when it runs on a big number of nodes.}
+
+
 \subsection{The results for different power consumption scenarios}
 The results of the previous section were obtained while using processors that consume, during computation, an overall power composed of 80\% dynamic power and 20\% static power. In this