From: afanfakh
Date: Mon, 24 Nov 2014 15:04:10 +0000 (+0100)
Subject: some corrections
X-Git-Tag: pdsec15_submission~54
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy2.git/commitdiff_plain/109d44a56a58d5b6e903f01d68d196d66bb8c42f?ds=inline;hp=83482b6bebc94e878b3aa52c290509321fdca09c

some corrections
---

diff --git a/Heter_paper.tex b/Heter_paper.tex
index a88d174..e0c0cf6 100644
--- a/Heter_paper.tex
+++ b/Heter_paper.tex
@@ -139,7 +139,7 @@ Finally, in Section~\ref{sec.concl} the paper is ended with a summary and some f
 DVFS is a technique enabled in modern processors to scale down both the voltage and the frequency of the CPU while computing, in order to reduce the energy consumption of the processor. DVFS is
-also allowed in the GPUs to achieve the same goal. Reducing the frequency of a processor lowers its number of FLOPS and might degrade the performance of the application running on that processor, especially if it is compute bound. Therefore selecting the appropriate frequency for a processor to satisfy some objectives and while taking into account all the constraints, is not a trivial operation. Many researchers used different strategies to tackle this problem. Some of them used online methods that compute the new frequency while executing the application \textbf{add a reference for an online method here}. Others used offline methods that might need to run the application and profile it before selecting the new frequency \textbf{add a reference for an offline method}. The methods could be heuristics, exact or brute force methods that satisfy varied objectives such as energy reduction or performance. They also could be adapted to the execution's environment and the type of the application such as sequential, parallel or distributed architecture, homogeneous or heterogeneous platform, synchronous or asynchronous application, ...
+also allowed in the GPUs to achieve the same goal. Reducing the frequency of a processor lowers its number of FLOPS and might degrade the performance of the application running on that processor, especially if it is compute bound. Therefore selecting the appropriate frequency for a processor to satisfy some objectives and while taking into account all the constraints, is not a trivial operation. Many researchers used different strategies to tackle this problem. Some of them used online methods that compute the new frequency while executing the application, \textbf{such as~\cite{Hao_Learning.based.DVFS,Dhiman_Online.Learning.Power.Management}}. Others used offline methods that might need to run the application and profile it before selecting the new frequency, \textbf{such as~\cite{Rountree_Bounding.energy.consumption.in.MPI,Cochran_Pack_and_Cap_Adaptive_DVFS}}. The methods could be heuristics, exact or brute force methods that satisfy varied objectives such as energy reduction or performance. They also could be adapted to the execution's environment and the type of the application such as sequential, parallel or distributed architecture, homogeneous or heterogeneous platform, synchronous or asynchronous application, ...
 In this paper, we are interested in reducing energy for message passing iterative synchronous applications running over heterogeneous platforms. Some works have already been done for such platforms and it can be classified into two types of heterogeneous platforms:
@@ -161,19 +161,19 @@ In~\cite{Rong_Effects.of.DVFS.on.K20.GPU}, Rong et al.
 showed that a heterogeneous (GPUs and CPUs) cluster that enables DVFS gave better energy and performance efficiency than other clusters only composed of CPUs.
-The work presented in this paper concerns the second type of platform,, with heterogeneous CPUs.
+The work presented in this paper concerns the second type of platform, with heterogeneous CPUs.
 Many methods were conceived to reduce the energy consumption of this type of platform. Naveen et al.~\cite{Naveen_Power.Efficient.Resource.Scaling}
-developed a method that minimize the value of $energy*delay^2$ by dynamically assigning new frequencies to the CPUs of the heterogeneous cluster. \textbf{should define the delay} Lizhe et al.~\cite{Lizhe_Energy.aware.parallel.task.scheduling} propose
+developed a method that minimizes the value of $energy*delay^2$ by dynamically assigning new frequencies to the CPUs of the heterogeneous cluster, \textbf{where the delay is defined as the slack time that occurs between the synchronous tasks}. Lizhe et al.~\cite{Lizhe_Energy.aware.parallel.task.scheduling} propose
 an algorithm that divides the executed tasks into two types: the critical and non critical tasks. The algorithm scales down the frequency of non critical tasks proportionally to their slack and communication times while limiting the performance degradation percentage to less than 10\%. In~\cite{Joshi_Blackbox.prediction.of.impact.of.DVFS} and \cite{Spiliopoulos_Green.governors.Adaptive.DVFS}, a heterogeneous cluster composed of two types of Intel and AMD processors. The consumed energy and the performance for each frequency gear were predicted, then the algorithm selected the best gear that gave
-the best tradeoff. \textbf{what energy model they used? what method they used? }
+the best tradeoff. \textbf{The energy model computes the energy consumption from the voltage and frequency values, while the performance is predicted using a regression method.}
 In~\cite{Shelepov_Scheduling.on.Heterogeneous.Multicore} and \cite{Li_Minimizing.Energy.Consumption.for.Frame.Based.Tasks}, the best frequencies for a specified heterogeneous cluster are selected offline using some
-heuristic. Chen et al.~\cite{Chen_DVFS.under.quality.of.service.requirements} used a greedy dynamic approach to
-minimize the power consumption of heterogeneous severs with time/space complexity \textbf{what does it mean}. This approach
+heuristic. Chen et al.~\cite{Chen_DVFS.under.quality.of.service.requirements} \textbf{used a greedy dynamic programming approach to
+minimize the power consumption of heterogeneous servers while respecting the time requirements.} This approach
 had considerable overhead. In contrast to the above described papers, this paper presents the following contributions :
 \begin{enumerate}
@@ -213,7 +213,7 @@ task which have the highest computation time and no slack time.
 \begin{figure}[t]
   \centering
-  \includegraphics[scale=0.6]{fig/commtasks} 
+  \includegraphics[scale=0.6]{fig/commtasks}
   \caption{Parallel tasks on a heterogeneous platform}
   \label{fig:heter}
 \end{figure}
diff --git a/my_reference.bib b/my_reference.bib
index 721155e..929e65c 100644
--- a/my_reference.bib
+++ b/my_reference.bib
@@ -83,7 +83,7 @@ year = {2013}
 }
-@INPROCEEDINGS{8,
+@INPROCEEDINGS{Rountree_Bounding.energy.consumption.in.MPI,
 author={Rountree, B. and Lowenthal, D.K. and Funk, S. and Freeh, Vincent W. and De Supinski, B.R. and Schulz, M.},
 booktitle={Supercomputing, 2007. SC '07.
 Proceedings of the 2007 ACM/IEEE Conference on},
 title={Bounding energy consumption in large-scale {MPI} programs},
@@ -512,7 +512,7 @@ author = {Wei Liu and Wei Du and Jing Chen and Wei Wang and GuoSun Zeng}
 address = {Vancouver, Canada}
 }
-@inproceedings{38,
+@inproceedings{Cochran_Pack_and_Cap_Adaptive_DVFS,
 author = {Cochran, Ryan and Hankendi, Can and Coskun, Ayse K. and Reda, Sherief},
 title = {Pack \& Cap: Adaptive {DVFS} and Thread Packing Under Power Caps},
 booktitle = {Proceedings of the 44th Annual IEEE/ACM International Symposium on Microarchitecture},
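
For illustration only: the energy*delay^2 objective mentioned for Naveen et al. in the first hunk above can be pictured as a brute-force selection over the available frequency gears of one CPU. The sketch below is not the method or the data of any cited paper; the gear table, power values and workload are invented placeholders, and the delay here is simply the execution time at the chosen gear.

/* Minimal sketch (C99): pick the frequency gear of a single CPU that
 * minimizes energy * delay^2.  All numbers below are hypothetical
 * examples, not measurements from the paper or from Naveen et al. */
#include <stdio.h>

#define NB_GEARS 5

int main(void)
{
    /* Hypothetical DVFS gears (GHz) and the corresponding power draw (W). */
    const double freq[NB_GEARS]  = {1.2, 1.6, 2.0, 2.4, 2.8};
    const double power[NB_GEARS] = {25.0, 35.0, 50.0, 80.0, 130.0};
    const double cycles = 3.0e9;   /* assumed amount of work, in CPU cycles */

    int best = 0;
    double best_ed2 = -1.0;

    for (int g = 0; g < NB_GEARS; g++) {
        double delay  = cycles / (freq[g] * 1.0e9); /* execution time (s)        */
        double energy = power[g] * delay;           /* E = P * t, in joules      */
        double ed2    = energy * delay * delay;     /* the energy*delay^2 metric */
        if (best_ed2 < 0.0 || ed2 < best_ed2) {
            best_ed2 = ed2;
            best     = g;
        }
    }

    printf("selected gear: %.1f GHz (energy*delay^2 = %.2f J.s^2)\n",
           freq[best], best_ed2);
    return 0;
}

In the cited work the selection is done dynamically and per node of the heterogeneous cluster, and the delay accounts for the slack between the synchronous tasks; this sketch only shows the shape of the objective being minimized.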