became the top of the Green500 list in November 2014 \cite{Green500_List}.
This heterogeneous platform achieves more than 5 GFLOPS per watt while consuming 57.15 kilowatts.
Besides platform improvements, there are many software and hardware techniques to lower the energy consumption of these platforms,
such as scheduling and DVFS. DVFS is a widely used technique to reduce the energy consumption of a processor by lowering
its frequency \cite{Rizvandi_Some.Observations.on.Optimal.Frequency}. However, it also reduces the number of FLOPS
executed by the processor, which might increase the execution time of the application running on that processor.
DVFS is a technique enabled
in modern processors to scale down both the voltage and the frequency of
the CPU while computing, in order to reduce the energy consumption of the processor. DVFS is
also available in GPUs to achieve the same goal. Reducing the frequency of a processor lowers its number of FLOPS and might degrade the performance of the application running on that processor, especially if it is compute bound. Therefore, selecting the appropriate frequency for a processor to satisfy some objectives while taking into account all the constraints is not a trivial operation. Many researchers have used different strategies to tackle this problem. Some of them developed online methods that compute the new frequency while executing the application, such as~\cite{Hao_Learning.based.DVFS,Spiliopoulos_Green.governors.Adaptive.DVFS}. Others used offline methods that might need to run the application and profile it before selecting the new frequency, such as~\cite{Rountree_Bounding.energy.consumption.in.MPI,Cochran_Pack_and_Cap_Adaptive_DVFS}. The methods could be heuristic, exact or brute-force methods that satisfy varied objectives such as energy reduction or performance. They also could be adapted to the execution environment and the type of the application: sequential, parallel or distributed architecture, homogeneous or heterogeneous platform, synchronous or asynchronous application, etc.
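The energy/performance tension behind DVFS can be illustrated with a minimal sketch. It uses the classic CMOS approximation that dynamic power grows roughly cubically with frequency (since $P = C V^2 f$ and $V$ scales with $f$); the numbers and the function are illustrative assumptions, not the models used later in this paper:

```python
def dynamic_energy(freq_ghz, base_freq_ghz, base_power_w, time_s):
    """Toy DVFS model: dynamic power scales ~f^3, while the time of a
    compute-bound task scales ~1/f, so dynamic energy scales ~f^2."""
    scale = freq_ghz / base_freq_ghz
    power = base_power_w * scale ** 3   # P proportional to f^3 (CMOS approximation)
    time = time_s / scale               # compute bound: t proportional to 1/f
    return power * time                 # E = P * t

# Halving the frequency quarters the dynamic energy but doubles the time:
e_full = dynamic_energy(2.0, 2.0, 80.0, 10.0)   # 800.0 J in 10 s
e_half = dynamic_energy(1.0, 2.0, 80.0, 10.0)   # 200.0 J, but now 20 s
```

This is exactly the tradeoff discussed above: the energy gain comes at the price of a longer execution time, especially for compute-bound applications.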
In this paper, we are interested in reducing energy for message passing iterative synchronous applications running over heterogeneous platforms.
Some works have already been done for such platforms and they can be classified into two types of heterogeneous platforms:
\begin{figure}[t]
\centering
  \includegraphics[scale=0.6]{fig/commtasks}
\caption{Parallel tasks on a heterogeneous platform}
\label{fig:heter}
\end{figure}
if it is communicating with slower nodes, see figure (\ref{fig:heter}). Therefore, all nodes do
not have equal communication times. While the dynamic energy is computed according to the frequency
scaling factor and the dynamic power of each node as in (\ref{eq:Edyn}), the static energy is
computed as the sum of the execution time of one iteration multiplied by the static power of each processor.
The overall energy consumption of a message passing distributed application executed over a
heterogeneous platform during one iteration is the summation of all dynamic and static energies
for each processor. It is computed as follows:
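As a hedged sketch of such a summation (the exact model is the paper's equation (\ref{eq:energy}); the symbols below are assumptions: $S_i$ the frequency scaling factor, $\mathit{Pd}_i$ and $\mathit{Ps}_i$ the dynamic and static powers, $\mathit{Tcp}_i$ the computation time and $\mathit{Tcm}_i$ the communication time of node $i$), the overall energy over $N$ nodes could take the form:
\[
E \;=\; \sum_{i=1}^{N} S_i^{-2} \cdot \mathit{Pd}_i \cdot \mathit{Tcp}_i
\;+\; \Bigl( \max_{i=1,\dots,N} \bigl( S_i \cdot \mathit{Tcp}_i + \mathit{Tcm}_i \bigr) \Bigr) \cdot \sum_{i=1}^{N} \mathit{Ps}_i
\]
where the first term is the dynamic energy of all nodes and the second multiplies the iteration time, set by the slowest node, by the total static power.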
toward lower frequencies. The algorithm iterates over the remaining frequencies, from the upper bound until all
nodes reach their minimum frequencies, computing the overall energy consumption and performance at each step and selecting
the optimal frequency scaling factors vector. At each iteration the algorithm determines the slowest node
according to equation (\ref{eq:perf}) and keeps its frequency unchanged, while it lowers the frequency of
all other nodes by one gear.
The new overall energy consumption and execution time are computed according to the new scaling factors.
The optimal set of frequency scaling factors is the set that gives the highest distance according to the objective
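A hedged sketch of this greedy loop is given below. The callbacks `node_time`, `energy`, and `perf` are stand-ins, assumed here for illustration, for the paper's per-node execution time, energy (\ref{eq:energy}), and performance (\ref{eq:perf}) models; the distance objective is taken as `perf - energy`:

```python
def select_scaling_factors(freqs, node_time, energy, perf):
    """freqs: per-node lists of available frequencies, highest first.
    node_time(i, f): assumed per-node execution-time model.
    energy(vec), perf(vec): assumed overall model callbacks.
    Returns the frequency vector maximizing the distance perf - energy."""
    gear = [0] * len(freqs)                    # start at the highest frequencies
    vec = [f[0] for f in freqs]
    best_vec, best_dist = list(vec), perf(vec) - energy(vec)
    while True:
        # the slowest node keeps its frequency unchanged this iteration
        slowest = max(range(len(freqs)), key=lambda i: node_time(i, vec[i]))
        changed = False
        for i in range(len(freqs)):            # lower all other nodes by one gear
            if i != slowest and gear[i] < len(freqs[i]) - 1:
                gear[i] += 1
                changed = True
        if not changed:                        # every other node is at its minimum
            break
        vec = [f[g] for f, g in zip(freqs, gear)]
        dist = perf(vec) - energy(vec)
        if dist > best_dist:                   # keep the highest-distance vector
            best_dist, best_vec = dist, list(vec)
    return best_vec
```

With toy models, a node holding twice the work keeps its top frequency while the lighter node is scaled down, as the text describes.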
The precision of the proposed algorithm mainly depends on the execution time prediction model defined in
(\ref{eq:perf}) and the energy model computed by (\ref{eq:energy}).
The energy model is also significantly dependent on the execution time model because the static energy is
linearly related to the execution time and the dynamic energy is related to the computation time. So, all of
the work presented in this paper is based on the execution time model. To verify this model, the predicted
execution time was compared to the real execution time over the SimGrid/SMPI simulator, v3.10~\cite{casanova+giersch+legrand+al.2014.versatile},
for all the NAS parallel benchmarks NPB v3.3
\begin{figure}
\centering
\subfloat[Comparison of the results on 8 nodes]{%
  \includegraphics[width=.33\textwidth]{fig/sen_comp}\label{fig:sen_comp}}%
\subfloat[Comparison of the selected frequency scaling factors of the MG benchmark class C running on 8 nodes]{%
  \includegraphics[width=.33\textwidth]{fig/three_scenarios}\label{fig:scales_comp}}
\caption{The comparison of the three power scenarios}
\label{fig:comp}
\end{figure}
\subsection{The comparison of the proposed scaling algorithm}
\label{sec.compare_EDP}
In this section, the scaling factors selection algorithm
is compared to Spiliopoulos et al. algorithm \cite{Spiliopoulos_Green.governors.Adaptive.DVFS}.
They developed a green governor that regularly applies an online frequency selecting algorithm to reduce the energy consumed by a multicore architecture without significantly degrading its performance. The algorithm selects the frequencies that minimize the energy and delay product, $EDP = \mathit{Energy} \times \mathit{Delay}$, using the predicted overall energy consumption and execution time delay for each frequency.
To fairly compare both algorithms, the same energy and execution time models, equations (\ref{eq:energy}) and (\ref{eq:fnew}), were used for both algorithms to predict the energy consumption and the execution times. Also, Spiliopoulos et al. algorithm was adapted to start the search from the
initial frequencies computed using equation (\ref{eq:Fint}). The resulting algorithm is an exhaustive search algorithm that minimizes the EDP and has the initial frequency values as an upper bound.
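The adapted exhaustive EDP search could be sketched as follows; `energy` and `time` are again assumed stand-ins for the shared prediction models, and each per-node candidate list is bounded above by the initial frequencies:

```python
from itertools import product

def min_edp(freqs_below_initial, energy, time):
    """Exhaustive EDP minimization over all frequency combinations.
    freqs_below_initial: per-node candidate lists, with the initial
    frequencies of equation (Fint) as the upper bound.
    energy(vec), time(vec): assumed model callbacks."""
    best_vec, best_edp = None, float("inf")
    for vec in product(*freqs_below_initial):
        edp = energy(vec) * time(vec)      # EDP = Energy * Delay
        if edp < best_edp:
            best_edp, best_vec = edp, list(vec)
    return best_vec
```

Unlike the greedy scaling algorithm, this search visits every combination, so its cost grows exponentially with the number of nodes; it only stays tractable because the initial frequencies prune the search space from above.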
Both algorithms were applied to the parallel NAS benchmarks to compare their efficiency. Table \ref{table:compare_EDP} presents the results of comparing the execution times and the energy consumptions for both versions of the NAS benchmarks while running the class C of each benchmark over 8 or 9 heterogeneous nodes. The results show that our algorithm gives better energy savings than Spiliopoulos et al. algorithm,
on average it results in 29.76\% energy saving while their algorithm returns just 25.75\%. The average performance degradation percentage is approximately the same for both algorithms, about 4\%.
For all benchmarks, our algorithm outperforms
Spiliopoulos et al. algorithm in terms of energy and performance tradeoff, see figure (\ref{fig:compare_EDP}), because it maximizes the distance between the energy saving and the performance degradation values while giving the same weight for both metrics.
\begin{table}[h]
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{fig/compare_EDP.pdf}
outperforms their algorithm in terms of energy-time tradeoff.
In the near future, this method will be applied to real heterogeneous platforms to evaluate its performance in a real case study. It would also be interesting to evaluate its scalability over a large-scale heterogeneous platform and measure the energy consumption reduction it can produce. Afterward, we would like to develop a similar method that is adapted to asynchronous iterative applications
where each task does not wait for the other tasks to finish their work. The development of such a method might require a new
energy model because the number of iterations is not
known in advance and depends on the global convergence of the iterative system.
\section*{Acknowledgment}
This work has been partially supported by the Labex
ACTION project (contract “ANR-11-LABX-01-01”). As a PhD student,
Mr. Ahmed Fanfakh, would like to thank the University of
Babylon (Iraq) for supporting his work.
% trigger a \newpage just before the given reference
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,my_reference}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t