1 \documentclass[conference]{IEEEtran}
3 \usepackage[T1]{fontenc}
4 \usepackage[utf8]{inputenc}
5 \usepackage[english]{babel}
6 \usepackage{algpseudocode}
12 \DeclareUrlCommand\email{\urlstyle{same}}
14 \usepackage[autolanguage,np]{numprint}
16 \renewcommand*\npunitcommand[1]{\text{#1}}
\npthousandthpartsep{}
20 \usepackage[textsize=footnotesize]{todonotes}
21 \newcommand{\AG}[2][inline]{%
22 \todo[color=green!50,#1]{\sffamily\textbf{AG:} #2}\xspace}
23 \newcommand{\JC}[2][inline]{%
24 \todo[color=red!10,#1]{\sffamily\textbf{JC:} #2}\xspace}
26 \newcommand{\Xsub}[2]{{\ensuremath{#1_\mathit{#2}}}}
28 %% used to put some subscripts lower, and make them more legible
29 \newcommand{\fxheight}[1]{\ifx#1\relax\relax\else\rule{0pt}{1.52ex}#1\fi}
31 \newcommand{\CL}{\Xsub{C}{L}}
32 \newcommand{\Dist}{\mathit{Dist}}
33 \newcommand{\EdNew}{\Xsub{E}{dNew}}
34 \newcommand{\Eind}{\Xsub{E}{ind}}
35 \newcommand{\Enorm}{\Xsub{E}{Norm}}
36 \newcommand{\Eoriginal}{\Xsub{E}{Original}}
37 \newcommand{\Ereduced}{\Xsub{E}{Reduced}}
38 \newcommand{\Es}{\Xsub{E}{S}}
39 \newcommand{\Fdiff}[1][]{\Xsub{F}{diff}_{\!#1}}
40 \newcommand{\Fmax}[1][]{\Xsub{F}{max}_{\fxheight{#1}}}
41 \newcommand{\Fnew}{\Xsub{F}{new}}
42 \newcommand{\Ileak}{\Xsub{I}{leak}}
43 \newcommand{\Kdesign}{\Xsub{K}{design}}
44 \newcommand{\MaxDist}{\mathit{Max}\Dist}
45 \newcommand{\MinTcm}{\mathit{Min}\Tcm}
46 \newcommand{\Ntrans}{\Xsub{N}{trans}}
47 \newcommand{\Pd}[1][]{\Xsub{P}{d}_{\fxheight{#1}}}
48 \newcommand{\PdNew}{\Xsub{P}{dNew}}
49 \newcommand{\PdOld}{\Xsub{P}{dOld}}
50 \newcommand{\Pnorm}{\Xsub{P}{Norm}}
51 \newcommand{\Ps}[1][]{\Xsub{P}{s}_{\fxheight{#1}}}
52 \newcommand{\Scp}[1][]{\Xsub{S}{cp}_{#1}}
53 \newcommand{\Sopt}[1][]{\Xsub{S}{opt}_{#1}}
54 \newcommand{\Tcm}[1][]{\Xsub{T}{cm}_{\fxheight{#1}}}
55 \newcommand{\Tcp}[1][]{\Xsub{T}{cp}_{#1}}
56 \newcommand{\TcpOld}[1][]{\Xsub{T}{cpOld}_{#1}}
57 \newcommand{\Tnew}{\Xsub{T}{New}}
58 \newcommand{\Told}{\Xsub{T}{Old}}
62 \title{Energy Consumption Reduction for Message Passing Iterative Applications in Heterogeneous Architecture Using DVFS}
72 FEMTO-ST Institute, University of Franche-Comte\\
73 IUT de Belfort-Montbéliard,
74 19 avenue du Maréchal Juin, BP 527, 90016 Belfort cedex, France\\
75 % Telephone: \mbox{+33 3 84 58 77 86}, % Raphaël
76 % Fax: \mbox{+33 3 84 58 77 81}\\ % Dept Info
77 Email: \email{{jean-claude.charr,raphael.couturier,ahmed.fanfakh_badri_muslim,arnaud.giersch}@univ-fcomte.fr}
84 Computing platforms are consuming more and more energy due to the increasing
85 number of nodes composing them. To minimize the operating costs of these
platforms, many techniques have been used. Dynamic voltage and frequency scaling
87 (DVFS) is one of them. It reduces the frequency of a CPU to lower its energy
88 consumption. However, lowering the frequency of a CPU might increase the
89 execution time of an application running on that processor. Therefore, the
90 frequency that gives the best trade-off between the energy consumption and the
91 performance of an application must be selected.
In this paper, a new online frequency selecting algorithm for heterogeneous
platforms is presented. For each node computing the message passing iterative
application, it selects the frequencies that give the best trade-off between
energy saving and performance degradation. The algorithm has a small
97 overhead and works without training or profiling. It uses a new energy model for
98 message passing iterative applications running on a heterogeneous platform. The
99 proposed algorithm is evaluated on the SimGrid simulator while running the NAS
100 parallel benchmarks. The experiments show that it reduces the energy
101 consumption by up to 35\% while limiting the performance degradation as much as
possible. Finally, the algorithm is compared to an existing method; the
comparison results show that it outperforms the latter.
107 \section{Introduction}
109 The need for more computing power is continually increasing. To partially
satisfy this need, most supercomputer constructors simply add more computing
nodes to their platforms. The resulting platforms might achieve higher floating
112 point operations per second (FLOPS), but the energy consumption and the heat
113 dissipation are also increased. As an example, the Chinese supercomputer
114 Tianhe-2 had the highest FLOPS in November 2014 according to the Top500 list
115 \cite{TOP500_Supercomputers_Sites}. However, it was also the most power hungry
116 platform with its over 3 million cores consuming around 17.8 megawatts.
Moreover, according to the U.S. Annual Energy Outlook 2014
\cite{U.S_Annual.Energy.Outlook.2014}, the price of energy for 1 megawatt-hour
was approximately \$70. Therefore, the energy consumed by the Tianhe-2
platform costs more than \$10 million each year. Computing platforms must
therefore be more energy efficient and offer the highest possible number of
FLOPS per watt, such as the L-CSC from the GSI Helmholtz Center, which topped
the Green500 list in November 2014 \cite{Green500_List}. This heterogeneous
platform executes more than 5 GFLOPS per watt.

127 Besides platform improvements, there are many software and hardware techniques
128 to lower the energy consumption of these platforms, such as scheduling, DVFS,
129 \dots{} DVFS is a widely used process to reduce the energy consumption of a
130 processor by lowering its frequency
131 \cite{Rizvandi_Some.Observations.on.Optimal.Frequency}. However, it also reduces
132 the number of FLOPS executed by the processor which might increase the execution
133 time of the application running over that processor. Therefore, researchers use
134 different optimization strategies to select the frequency that gives the best
135 trade-off between the energy reduction and performance degradation ratio. In
136 \cite{Our_first_paper}, a frequency selecting algorithm was proposed to reduce
137 the energy consumption of message passing iterative applications running over
138 homogeneous platforms. The results of the experiments show significant energy
139 consumption reductions. In this paper, a new frequency selecting algorithm
adapted to heterogeneous platforms is presented. It selects the vector of
141 frequencies, for a heterogeneous platform running a message passing iterative
142 application, that simultaneously tries to offer the maximum energy reduction and
143 minimum performance degradation ratio. The algorithm has a very small overhead,
144 works online and does not need any training or profiling.
146 This paper is organized as follows: Section~\ref{sec.relwork} presents some
147 related works from other authors. Section~\ref{sec.exe} describes how the
148 execution time of message passing programs can be predicted. It also presents an energy
149 model that predicts the energy consumption of an application running over a heterogeneous platform. Section~\ref{sec.compet} presents
150 the energy-performance objective function that maximizes the reduction of energy
151 consumption while minimizing the degradation of the program's performance.
Section~\ref{sec.optim} details the proposed frequency selecting algorithm and verifies its precision.
153 Section~\ref{sec.expe} presents the results of applying the algorithm on the NAS parallel benchmarks and executing them
on a heterogeneous platform. It shows the results of running three
different power scenarios and compares them. It also compares the results of
the proposed method with those of an existing method.
Finally, Section~\ref{sec.concl} concludes the paper with a summary and directions for future work.
159 \section{Related works}
161 DVFS is a technique used in modern processors to scale down both the voltage and
162 the frequency of the CPU while computing, in order to reduce the energy
consumption of the processor. DVFS is also available in GPUs to achieve the
same goal. Reducing the frequency of a processor lowers its number of FLOPS and
might degrade the performance of the application running on that processor,
especially if it is compute bound. Therefore, selecting the appropriate
frequency for a processor to satisfy some objectives, while taking into account
all the constraints, is not a trivial operation. Many researchers used different
169 strategies to tackle this problem. Some of them developed online methods that
170 compute the new frequency while executing the application, such
171 as~\cite{Hao_Learning.based.DVFS,Spiliopoulos_Green.governors.Adaptive.DVFS}.
172 Others used offline methods that might need to run the application and profile
173 it before selecting the new frequency, such
174 as~\cite{Rountree_Bounding.energy.consumption.in.MPI,Cochran_Pack_and_Cap_Adaptive_DVFS}.
These methods could be heuristic, exact or brute force methods that satisfy
varied objectives such as energy reduction or performance. They could also be
adapted to the execution environment and the type of the application:
sequential, parallel or distributed architecture, homogeneous or heterogeneous
platform, synchronous or asynchronous application, \dots{}
181 In this paper, we are interested in reducing energy for message passing iterative synchronous applications running over heterogeneous platforms.
182 Some works have already been done for such platforms and they can be classified into two types of heterogeneous platforms:
185 \item the platform is composed of homogeneous GPUs and homogeneous CPUs.
186 \item the platform is only composed of heterogeneous CPUs.
190 For the first type of platform, the computing intensive parallel tasks are
191 executed on the GPUs and the rest are executed on the CPUs. Luley et
al.~\cite{Luley_Energy.efficiency.evaluation.and.benchmarking} proposed a
193 heterogeneous cluster composed of Intel Xeon CPUs and NVIDIA GPUs. Their main
194 goal was to maximize the energy efficiency of the platform during computation by
195 maximizing the number of FLOPS per watt generated.
196 In~\cite{KaiMa_Holistic.Approach.to.Energy.Efficiency.in.GPU-CPU}, Kai Ma et
al. developed a scheduling algorithm that distributes workloads proportionally
to the computing power of the nodes, which could be GPUs or CPUs, so that all
the tasks complete at the same time. In~\cite{Rong_Effects.of.DVFS.on.K20.GPU},
Rong et al. showed that a heterogeneous (GPUs and CPUs) cluster that enables
DVFS gave better energy and performance efficiency than clusters only composed
of CPUs.

204 The work presented in this paper concerns the second type of platform, with
205 heterogeneous CPUs. Many methods were conceived to reduce the energy
206 consumption of this type of platform. Naveen et
207 al.~\cite{Naveen_Power.Efficient.Resource.Scaling} developed a method that
208 minimizes the value of $\mathit{energy}\times \mathit{delay}^2$ (the delay is
209 the sum of slack times that happen during synchronous communications) by
210 dynamically assigning new frequencies to the CPUs of the heterogeneous
211 cluster. Lizhe et al.~\cite{Lizhe_Energy.aware.parallel.task.scheduling}
212 proposed an algorithm that divides the executed tasks into two types: the
213 critical and non critical tasks. The algorithm scales down the frequency of non
214 critical tasks proportionally to their slack and communication times while
215 limiting the performance degradation percentage to less than
10\%. In~\cite{Joshi_Blackbox.prediction.of.impact.of.DVFS}, the authors built a
heterogeneous cluster composed of two types of Intel and AMD processors and
used a gradient method to predict the impact of DVFS operations on performance.
219 In~\cite{Shelepov_Scheduling.on.Heterogeneous.Multicore} and
220 \cite{Li_Minimizing.Energy.Consumption.for.Frame.Based.Tasks}, the best
221 frequencies for a specified heterogeneous cluster are selected offline using
222 some heuristic. Chen et
223 al.~\cite{Chen_DVFS.under.quality.of.service.requirements} used a greedy dynamic
224 programming approach to minimize the power consumption of heterogeneous servers
225 while respecting given time constraints. This approach had considerable
overhead. In contrast to the above-described papers, this paper presents the
following contributions:
229 \item two new energy and performance models for message passing iterative synchronous applications running over
230 a heterogeneous platform. Both models take into account communication and slack times. The models can predict the required energy and the execution time of the application.
232 \item a new online frequency selecting algorithm for heterogeneous platforms. The algorithm has a very small
233 overhead and does not need any training or profiling. It uses a new optimization function which simultaneously maximizes the performance and minimizes the energy consumption of a message passing iterative synchronous application.
237 \section{The performance and energy consumption measurements on heterogeneous architecture}
242 \subsection{The execution time of message passing distributed
243 iterative applications on a heterogeneous platform}
245 In this paper, we are interested in reducing the energy consumption of message
246 passing distributed iterative synchronous applications running over
247 heterogeneous platforms. A heterogeneous platform is defined as a collection of
248 heterogeneous computing nodes interconnected via a high speed homogeneous
249 network. Therefore, each node has different characteristics such as computing
250 power (FLOPS), energy consumption, CPU's frequency range, \dots{} but they all
251 have the same network bandwidth and latency.
253 The overall execution time of a distributed iterative synchronous application
254 over a heterogeneous platform consists of the sum of the computation time and
255 the communication time for every iteration on a node. However, due to the
256 heterogeneous computation power of the computing nodes, slack times might occur
257 when fast nodes have to wait, during synchronous communications, for the slower
258 nodes to finish their computations (see Figure~\ref{fig:heter}). Therefore, the
259 overall execution time of the program is the execution time of the slowest task
260 which has the highest computation time and no slack time.
264 \includegraphics[scale=0.6]{fig/commtasks}
265 \caption{Parallel tasks on a heterogeneous platform}
269 Dynamic Voltage and Frequency Scaling (DVFS) is a process, implemented in
270 modern processors, that reduces the energy consumption of a CPU by scaling
271 down its voltage and frequency. Since DVFS lowers the frequency of a CPU
272 and consequently its computing power, the execution time of a program running
273 over that scaled down processor might increase, especially if the program is
274 compute bound. The frequency reduction process can be expressed by the scaling
factor $S$, which is the ratio between the maximum and the new frequency of a CPU
279 S = \frac{\Fmax}{\Fnew}
281 The execution time of a compute bound sequential program is linearly proportional
282 to the frequency scaling factor $S$. On the other hand, message passing
283 distributed applications consist of two parts: computation and communication.
284 The execution time of the computation part is linearly proportional to the
285 frequency scaling factor $S$ but the communication time is not affected by the
286 scaling factor because the processors involved remain idle during the
287 communications~\cite{Freeh_Exploring.the.Energy.Time.Tradeoff}.
288 The communication time for a task is the summation of periods of
289 time that begin with an MPI call for sending or receiving a message
290 until the message is synchronously sent or received.
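
For example (an illustrative calculation, not taken from the experiments),
scaling a node down from a maximum frequency of \np[GHz]{2.5} to a new
frequency of \np[GHz]{2} gives $S=2.5/2=1.25$; a task that computes for 10
seconds and communicates for 5 seconds at the maximum frequency would then
take $10 \cdot 1.25 + 5 = 17.5$ seconds instead of 15 seconds, since only the
computation part is slowed down.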
292 Since in a heterogeneous platform each node has different characteristics,
293 especially different frequency gears, when applying DVFS operations on these
294 nodes, they may get different scaling factors represented by a scaling vector:
295 $(S_1, S_2,\dots, S_N)$ where $S_i$ is the scaling factor of processor $i$. To
296 be able to predict the execution time of message passing synchronous iterative
297 applications running over a heterogeneous platform, for different vectors of
298 scaling factors, the communication time and the computation time for all the
299 tasks must be measured during the first iteration before applying any DVFS
300 operation. Then the execution time for one iteration of the application with any
301 vector of scaling factors can be predicted using (\ref{eq:perf}).
304 \Tnew = \max_{i=1,2,\dots,N} ({\TcpOld[i]} \cdot S_{i}) + \MinTcm
309 \MinTcm = \min_{i=1,2,\dots,N} (\Tcm[i])
311 where $\TcpOld[i]$ is the computation time of processor $i$ during the first
312 iteration and $\MinTcm$ is the communication time of the slowest processor from
the first iteration. The model computes the maximum of the scaled computation
times over all nodes and adds it to the communication time of the slowest
node; this means that only the communication time without any slack time is
taken into account. Therefore, the execution time of the iterative application is equal to
317 the execution time of one iteration as in (\ref{eq:perf}) multiplied by the
318 number of iterations of that application.
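
This prediction is straightforward to implement. The following Python sketch
(our illustration with hypothetical variable names, not the code used in the
experiments) computes the predicted time of one iteration, as in
(\ref{eq:perf}), from the values measured during the first iteration:
\begin{verbatim}
# Predicted time of one iteration:
#   Tnew = max_i(TcpOld[i] * S[i]) + MinTcm
def predict_iteration_time(tcp_old, tcm, s):
    # tcp_old, tcm: per-node computation/communication times
    # measured at maximum frequency; s: scaling factors vector
    min_tcm = min(tcm)  # communication time of the slowest node
    return max(t * si for t, si in zip(tcp_old, s)) + min_tcm
\end{verbatim}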
320 This prediction model is developed from the model to predict the execution time
321 of message passing distributed applications for homogeneous
322 architectures~\cite{Our_first_paper}. The execution time prediction model is
323 used in the method to optimize both the energy consumption and the performance of
324 iterative methods, which is presented in the following sections.
327 \subsection{Energy model for heterogeneous platform}
328 Many researchers~\cite{Malkowski_energy.efficient.high.performance.computing,
329 Rauber_Analytical.Modeling.for.Energy,Zhuo_Energy.efficient.Dynamic.Task.Scheduling,
330 Rizvandi_Some.Observations.on.Optimal.Frequency} divide the power consumed by a processor into
331 two power metrics: the static and the dynamic power. While the first one is
332 consumed as long as the computing unit is turned on, the latter is only consumed during
333 computation times. The dynamic power $\Pd$ is related to the switching
334 activity $\alpha$, load capacitance $\CL$, the supply voltage $V$ and
335 operational frequency $F$, as shown in (\ref{eq:pd}).
338 \Pd = \alpha \cdot \CL \cdot V^2 \cdot F
340 The static power $\Ps$ captures the leakage power as follows:
343 \Ps = V \cdot \Ntrans \cdot \Kdesign \cdot \Ileak
where $V$ is the supply voltage, $\Ntrans$ is the number of transistors,
346 $\Kdesign$ is a design dependent parameter and $\Ileak$ is a
347 technology dependent parameter. The energy consumed by an individual processor
348 to execute a given program can be computed as:
351 \Eind = \Pd \cdot \Tcp + \Ps \cdot T
353 where $T$ is the execution time of the program, $\Tcp$ is the computation
354 time and $\Tcp \le T$. $\Tcp$ may be equal to $T$ if there is no
355 communication and no slack time.
357 The main objective of DVFS operation is to reduce the overall energy consumption~\cite{Le_DVFS.Laws.of.Diminishing.Returns}.
The operational frequency $F$ depends linearly on the supply voltage $V$, i.e., $V = \beta \cdot F$ with some
constant $\beta$. This equation is used to study the change of the supply
voltage with respect to various frequency values in~\cite{Rauber_Analytical.Modeling.for.Energy}. The reduction
361 process of the frequency can be expressed by the scaling factor $S$ which is the
362 ratio between the maximum and the new frequency as in (\ref{eq:s}).
363 The CPU governors are power schemes supplied by the operating
364 system's kernel to lower a core's frequency. The new frequency
365 $\Fnew$ from (\ref{eq:s}) can be calculated as follows:
368 \Fnew = S^{-1} \cdot \Fmax
370 Replacing $\Fnew$ in (\ref{eq:pd}) as in (\ref{eq:fnew}) gives the following
371 equation for dynamic power consumption:
\PdNew = \alpha \cdot \CL \cdot V^2 \cdot \Fnew = \alpha \cdot \CL \cdot \beta^2 \cdot \Fnew^3 \\
{} = \alpha \cdot \CL \cdot \beta^2 \cdot \Fmax^3 \cdot S^{-3} = \PdOld \cdot S^{-3}
377 where $\PdNew$ and $\PdOld$ are the dynamic power consumed with the
378 new frequency and the maximum frequency respectively.
According to (\ref{eq:pdnew}), the dynamic power is reduced by a factor of $S^3$ when
381 reducing the frequency by a factor of $S$~\cite{Rauber_Analytical.Modeling.for.Energy}. Since the FLOPS of a CPU is proportional
382 to the frequency of a CPU, the computation time is increased proportionally to $S$.
383 The new dynamic energy is the dynamic power multiplied by the new time of computation
384 and is given by the following equation:
387 \EdNew = \PdOld \cdot S^{-3} \cdot (\Tcp \cdot S)= S^{-2}\cdot \PdOld \cdot \Tcp
389 The static power is related to the power leakage of the CPU and is consumed during computation
390 and even when idle. As in~\cite{Rauber_Analytical.Modeling.for.Energy,Zhuo_Energy.efficient.Dynamic.Task.Scheduling},
391 the static power of a processor is considered as constant
392 during idle and computation periods, and for all its available frequencies.
393 The static energy is the static power multiplied by the execution time of the program.
394 According to the execution time model in (\ref{eq:perf}), the execution time of the program
395 is the sum of the computation and the communication times. The computation time is linearly related
396 to the frequency scaling factor, while this scaling factor does not affect the communication time.
397 The static energy of a processor after scaling its frequency is computed as follows:
400 \Es = \Ps \cdot (\Tcp \cdot S + \Tcm)
403 In the considered heterogeneous platform, each processor $i$ might have
404 different dynamic and static powers, noted as $\Pd[i]$ and $\Ps[i]$
405 respectively. Therefore, even if the distributed message passing iterative
406 application is load balanced, the computation time of each CPU $i$ noted
407 $\Tcp[i]$ might be different and different frequency scaling factors might be
408 computed in order to decrease the overall energy consumption of the application
409 and reduce slack times. The communication time of a processor $i$ is noted as
410 $\Tcm[i]$ and could contain slack times when communicating with slower
411 nodes, see Figure~\ref{fig:heter}. Therefore, all nodes do not have equal
412 communication times. While the dynamic energy is computed according to the
413 frequency scaling factor and the dynamic power of each node as in
414 (\ref{eq:Edyn}), the static energy is computed as the sum of the execution time
415 of one iteration multiplied by the static power of each processor. The overall
416 energy consumption of a message passing distributed application executed over a
417 heterogeneous platform during one iteration is the summation of all dynamic and
418 static energies for each processor. It is computed as follows:
E = \sum_{i=1}^{N} {(S_i^{-2} \cdot \Pd[i] \cdot \Tcp[i])} + {} \\
\sum_{i=1}^{N} (\Ps[i] \cdot (\max_{j=1,2,\dots,N} (\Tcp[j] \cdot S_{j}) + \MinTcm))
426 Reducing the frequencies of the processors according to the vector of
427 scaling factors $(S_1, S_2,\dots, S_N)$ may degrade the performance of the
428 application and thus, increase the static energy because the execution time is
429 increased~\cite{Kim_Leakage.Current.Moore.Law}. The overall energy consumption for the iterative
application can be computed by multiplying the energy consumed during one
iteration, as in (\ref{eq:energy}), by the number of iterations of that application.
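
As an illustration, the energy model can be evaluated with a few lines of
code. The following Python sketch (ours, with hypothetical names; not the code
used in the experiments) implements (\ref{eq:energy}) for one iteration:
\begin{verbatim}
# Dynamic energy scaled by S^-2 plus static power integrated
# over the predicted time of one iteration.
def iteration_energy(pd, ps, tcp, tcm, s):
    # pd, ps: per-node dynamic/static powers; tcp, tcm: times
    # measured at maximum frequency; s: scaling factors vector
    t_new = max(t * si for t, si in zip(tcp, s)) + min(tcm)
    dynamic = sum(si ** -2 * p * t
                  for si, p, t in zip(s, pd, tcp))
    static = sum(p * t_new for p in ps)
    return dynamic + static
\end{verbatim}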
434 \section{Optimization of both energy consumption and performance}
437 Using the lowest frequency for each processor does not necessarily give the most
438 energy efficient execution of an application. Indeed, even though the dynamic
439 power is reduced while scaling down the frequency of a processor, its
440 computation power is proportionally decreased. Hence, the execution time might
441 be drastically increased and during that time, dynamic and static powers are
being consumed. Therefore, the longer execution time might cancel any gains achieved by scaling down
443 the frequency of all nodes to the minimum and the overall energy consumption of
444 the application might not be the optimal one. It is not trivial to select the
445 appropriate frequency scaling factor for each processor while considering the
446 characteristics of each processor (computation power, range of frequencies,
447 dynamic and static powers) and the task executed (computation/communication
448 ratio). The aim being to reduce the overall energy consumption and to avoid
449 increasing significantly the execution time. In our previous
450 work~\cite{Our_first_paper}, we proposed a method that selects the optimal
451 frequency scaling factor for a homogeneous cluster executing a message passing
452 iterative synchronous application while giving the best trade-off between the
453 energy consumption and the performance for such applications. In this work we
454 are interested in heterogeneous clusters as described above. Due to the
455 heterogeneity of the processors, a vector of scaling factors should
be selected and it must give the best trade-off between energy consumption and
performance.

The relation between the energy consumption and the execution time of an
application is complex and nonlinear. Thus, unlike the relation between the
execution time and the scaling factor, the relation between the energy and the
frequency scaling factors is nonlinear; for more details refer
to~\cite{Freeh_Exploring.the.Energy.Time.Tradeoff}. Moreover, these relations
464 are not measured using the same metric. To solve this problem, the execution
465 time is normalized by computing the ratio between the new execution time (after
466 scaling down the frequencies of some processors) and the initial one (with
467 maximum frequency for all nodes) as follows:
470 \Pnorm = \frac{\Tnew}{\Told}\\
471 {} = \frac{ \max_{i=1,2,\dots,N} (\Tcp[i] \cdot S_{i}) +\MinTcm}
472 {\max_{i=1,2,\dots,N}{(\Tcp[i]+\Tcm[i])}}
476 In the same way, the energy is normalized by computing the ratio between the consumed energy
477 while scaling down the frequency and the consumed energy with maximum frequency for all nodes:
480 \Enorm = \frac{\Ereduced}{\Eoriginal} \\
481 {} = \frac{ \sum_{i=1}^{N}{(S_i^{-2} \cdot \Pd[i] \cdot \Tcp[i])} +
482 \sum_{i=1}^{N} {(\Ps[i] \cdot \Tnew)}}{\sum_{i=1}^{N}{( \Pd[i] \cdot \Tcp[i])} +
483 \sum_{i=1}^{N} {(\Ps[i] \cdot \Told)}}
where $\Ereduced$ and $\Eoriginal$ are computed using (\ref{eq:energy}) and
486 $\Tnew$ and $\Told$ are computed as in (\ref{eq:pnorm}).

While the main goal is to optimize the energy and execution time at the same
time, the normalized energy and execution time curves do not evolve in the
same direction. According to equations~(\ref{eq:pnorm}) and (\ref{eq:enorm}),
increasing the frequency scaling factors $S_1,S_2,\dots,S_N$ reduces the
energy consumption but also increases the execution time. The main objective,
however, is to produce maximum energy reduction with minimum performance
degradation.
This problem can be solved by making the optimization process for energy and
execution time follow the same direction. Therefore, the equation of the
normalized execution time is inverted, which gives the normalized performance equation, as follows:
501 \Pnorm = \frac{\Told}{\Tnew}\\
502 = \frac{\max_{i=1,2,\dots,N}{(\Tcp[i]+\Tcm[i])}}
503 { \max_{i=1,2,\dots,N} (\Tcp[i] \cdot S_{i}) + \MinTcm}
509 \subfloat[Homogeneous platform]{%
510 \includegraphics[width=.33\textwidth]{fig/homo}\label{fig:r1}}%
513 \subfloat[Heterogeneous platform]{%
514 \includegraphics[width=.33\textwidth]{fig/heter}\label{fig:r2}}
516 \caption{The energy and performance relation}
519 Then, the objective function can be modeled in order to find the maximum
520 distance between the energy curve (\ref{eq:enorm}) and the performance curve
(\ref{eq:pnorm_inv}) over all available sets of scaling factors. This
represents the best trade-off between minimum energy consumption and minimum
execution time (maximum performance), see Figure~\ref{fig:r1} or
Figure~\ref{fig:r2}. The objective function then has the following form:
\mathop{\max_{i=1,\dots,F}}_{j=1,\dots,N}
529 (\overbrace{\Pnorm(S_{ij})}^{\text{Maximize}} -
530 \overbrace{\Enorm(S_{ij})}^{\text{Minimize}} )
532 where $N$ is the number of nodes and $F$ is the number of available frequencies for each node.
533 Then, the optimal set of scaling factors that satisfies (\ref{eq:max}) can be selected.
534 The objective function can work with any energy model or any power values for each node
535 (static and dynamic powers). However, the most important energy reduction gain can be achieved when
536 the energy curve has a convex form as shown in~\cite{Zhuo_Energy.efficient.Dynamic.Task.Scheduling,Rauber_Analytical.Modeling.for.Energy,Hao_Learning.based.DVFS}.
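
The objective function is cheap to evaluate. The following Python sketch (our
illustration, not the authors' code) computes the distance maximized in
(\ref{eq:max}) for one candidate vector of scaling factors:
\begin{verbatim}
# Distance between normalized performance and normalized
# energy for a candidate scaling factors vector s.
def trade_off_distance(pd, ps, tcp, tcm, s):
    t_old = max(c + m for c, m in zip(tcp, tcm))
    t_new = max(c * si for c, si in zip(tcp, s)) + min(tcm)
    e_orig = (sum(p * c for p, c in zip(pd, tcp))
              + sum(p * t_old for p in ps))
    e_red = (sum(si ** -2 * p * c
                 for si, p, c in zip(s, pd, tcp))
             + sum(p * t_new for p in ps))
    return t_old / t_new - e_red / e_orig  # Pnorm - Enorm
\end{verbatim}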
\section{The scaling factors selection algorithm for heterogeneous platforms}
541 \subsection{The algorithm details}
542 In this section, Algorithm~\ref{HSA} is presented. It selects the frequency
543 scaling factors vector that gives the best trade-off between minimizing the
544 energy consumption and maximizing the performance of a message passing
545 synchronous iterative application executed on a heterogeneous platform. It works
546 online during the execution time of the iterative message passing program. It
547 uses information gathered during the first iteration such as the computation
548 time and the communication time in one iteration for each node. The algorithm is
549 executed after the first iteration and returns a vector of optimal frequency
550 scaling factors that satisfies the objective function (\ref{eq:max}). The
551 program applies DVFS operations to change the frequencies of the CPUs according
552 to the computed scaling factors. This algorithm is called just once during the
553 execution of the program. Algorithm~\ref{dvfs} shows where and when the proposed
554 scaling algorithm is called in the iterative MPI program.
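
For concreteness, the following mpi4py sketch shows how the two algorithms fit
into an MPI iterative solver. It is a hedged illustration, not the authors'
implementation: \texttt{select\_scaling\_factors} is a Python rendering of
Algorithm~\ref{HSA} sketched later in this section, and
\texttt{set\_frequency} stands for a platform-specific DVFS call (e.g. through
the Linux cpufreq interface); it is not a real library function.
\begin{verbatim}
from mpi4py import MPI

def iterate_with_dvfs(compute, communicate, iterations,
                      fmax, freqs, pd, ps):
    comm = MPI.COMM_WORLD
    for k in range(iterations):
        t0 = MPI.Wtime()
        compute()        # computations section
        t1 = MPI.Wtime()
        communicate()    # communications section
        t2 = MPI.Wtime()
        if k == 0:       # after the first iteration only:
            # gather all computation/communication times
            tcp = comm.allgather(t1 - t0)
            tcm = comm.allgather(t2 - t1)
            # run the scaling algorithm once and apply DVFS
            s = select_scaling_factors(tcp, tcm, pd, ps,
                                       fmax, freqs)
            set_frequency(fmax[comm.rank] / s[comm.rank])
\end{verbatim}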
556 The nodes in a heterogeneous platform have different computing powers, thus
557 while executing message passing iterative synchronous applications, fast nodes
558 have to wait for the slower ones to finish their computations before being able
559 to synchronously communicate with them as in Figure~\ref{fig:heter}. These
560 periods are called idle or slack times. The algorithm takes into account this
561 problem and tries to reduce these slack times when selecting the frequency
562 scaling factors vector. At first, it selects initial frequency scaling factors
563 that increase the execution times of fast nodes and minimize the differences
564 between the computation times of fast and slow nodes. The value of the initial
565 frequency scaling factor for each node is inversely proportional to its
566 computation time that was gathered from the first iteration. These initial
567 frequency scaling factors are computed as a ratio between the computation time
568 of the slowest node and the computation time of the node $i$ as follows:
\Scp[i] = \frac{\max_{j=1,2,\dots,N}(\Tcp[j])}{\Tcp[i]}
573 Using the initial frequency scaling factors computed in (\ref{eq:Scp}), the algorithm computes
574 the initial frequencies for all nodes as a ratio between the maximum frequency of node $i$
575 and the computation scaling factor $\Scp[i]$ as follows:
578 F_{i} = \frac{\Fmax[i]}{\Scp[i]},~{i=1,2,\dots,N}
580 If the computed initial frequency for a node is not available in the gears of
581 that node, it is replaced by the nearest available frequency. In
582 Figure~\ref{fig:st_freq}, the nodes are sorted by their computing power in
583 ascending order and the frequencies of the faster nodes are scaled down
584 according to the computed initial frequency scaling factors. The resulting new
585 frequencies are colored in blue in Figure~\ref{fig:st_freq}. This set of
586 frequencies can be considered as a higher bound for the search space of the
optimal vector of frequencies because selecting frequencies higher
than this bound will not improve the performance of the application and it
589 will increase its overall energy consumption. Therefore the algorithm that
590 selects the frequency scaling factors starts the search method from these
591 initial frequencies and takes a downward search direction toward lower
frequencies. The algorithm iterates over all remaining frequencies, from the higher
593 bound until all nodes reach their minimum frequencies, to compute their overall
594 energy consumption and performance, and select the optimal frequency scaling
595 factors vector. At each iteration the algorithm determines the slowest node
596 according to the equation (\ref{eq:perf}) and keeps its frequency unchanged,
597 while it lowers the frequency of all other nodes by one gear. The new overall
598 energy consumption and execution time are computed according to the new scaling
599 factors. The optimal set of frequency scaling factors is the set that gives the
600 highest distance according to the objective function (\ref{eq:max}).
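
A compact Python rendering of this search (a sketch under the assumptions
stated above, not the authors' implementation; \texttt{trade\_off\_distance}
is defined earlier and \texttt{freqs[i]} is assumed to hold the available
gears of node $i$) could look as follows:
\begin{verbatim}
def nearest(gears, f):
    # closest available frequency to the computed one
    return min(gears, key=lambda g: abs(g - f))

def one_gear_below(gears, f):
    # next lower available frequency, or f at the bottom
    lower = [g for g in gears if g < f]
    return max(lower) if lower else f

def select_scaling_factors(tcp, tcm, pd, ps, fmax, freqs):
    n = len(tcp)
    scp = [max(tcp) / t for t in tcp]         # initial factors
    f = [nearest(freqs[i], fmax[i] / scp[i])  # initial freqs,
         for i in range(n)]                   # rounded to gears
    best_s, best_d = [1.0] * n, 0.0
    while any(fi > min(g) for fi, g in zip(f, freqs)):
        # slowest node (largest scaled computation time)
        slow = max(range(n),
                   key=lambda i: tcp[i] * fmax[i] / f[i])
        changed = False
        for i in range(n):   # lower the others by one gear
            if i != slow:
                new = one_gear_below(freqs[i], f[i])
                changed = changed or new != f[i]
                f[i] = new
        if not changed:      # guard: no gear left to lower
            break
        s = [fmax[i] / f[i] for i in range(n)]
        d = trade_off_distance(pd, ps, tcp, tcm, s)
        if d > best_d:
            best_s, best_d = s[:], d
    return best_s
\end{verbatim}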
602 Figures~\ref{fig:r1} and \ref{fig:r2} illustrate the normalized performance and
603 consumed energy for an application running on a homogeneous platform and a
604 heterogeneous platform respectively while increasing the scaling factors. It can
605 be noticed that in a homogeneous platform the search for the optimal scaling
606 factor should start from the maximum frequency because the performance and the
607 consumed energy decrease from the beginning of the plot. On the other hand,
608 in the heterogeneous platform the performance is maintained at the beginning of
609 the plot even if the frequencies of the faster nodes decrease until the
computing power of the scaled down nodes is lower than that of the slowest
node, in other words, until they reach the higher bound. It can also be
noticed that the bigger the difference between the fast and the slow nodes,
the bigger the maximum distance between the energy curve and the performance
curve while the scaling factors vary, which results in bigger energy savings.
617 \includegraphics[scale=0.5]{fig/start_freq}
618 \caption{Selecting the initial frequencies}
626 \begin{algorithmic}[1]
630 \item[{$\Tcp[i]$}] array of all computation times for all nodes during one iteration and with highest frequency.
631 \item[{$\Tcm[i]$}] array of all communication times for all nodes during one iteration and with highest frequency.
632 \item[{$\Fmax[i]$}] array of the maximum frequencies for all nodes.
633 \item[{$\Pd[i]$}] array of the dynamic powers for all nodes.
634 \item[{$\Ps[i]$}] array of the static powers for all nodes.
635 \item[{$\Fdiff[i]$}] array of the difference between two successive frequencies for all nodes.
637 \Ensure $\Sopt[1],\Sopt[2] \dots, \Sopt[N]$ is a vector of optimal scaling factors
\State $\Scp[i] \gets \frac{\max_{j=1,2,\dots,N}(\Tcp[j])}{\Tcp[i]} $
640 \State $F_{i} \gets \frac{\Fmax[i]}{\Scp[i]},~{i=1,2,\cdots,N}$
641 \State Round the computed initial frequencies $F_i$ to the closest one available in each node.
642 \If{(not the first frequency)}
643 \State $F_i \gets F_i+\Fdiff[i],~i=1,\dots,N.$
645 \State $\Told \gets \max_{i=1,\dots,N} (\Tcp[i]+\Tcm[i])$
646 % \State $\Eoriginal \gets \sum_{i=1}^{N}{( \Pd[i] \cdot \Tcp[i])} +\sum_{i=1}^{N} {(\Ps[i] \cdot \Told)}$
647 \State $\Eoriginal \gets \sum_{i=1}^{N}{( \Pd[i] \cdot \Tcp[i] + \Ps[i] \cdot \Told)}$
648 \State $\Sopt[i] \gets 1,~i=1,\dots,N. $
649 \State $\Dist \gets 0 $
\While {(not all nodes have reached their minimum frequency)}
651 \If{(not the last freq. \textbf{and} not the slowest node)}
652 \State $F_i \gets F_i - \Fdiff[i],~i=1,\dots,N.$
653 \State $S_i \gets \frac{\Fmax[i]}{F_i},~i=1,\dots,N.$
655 \State $\Tnew \gets \max_{i=1,\dots,N} (\Tcp[i] \cdot S_{i}) + \MinTcm $
656 % \State $\Ereduced \gets \sum_{i=1}^{N}{(S_i^{-2} \cdot \Pd[i] \cdot \Tcp[i])} + \sum_{i=1}^{N} {(\Ps[i] \cdot \rlap{\Tnew)}} $
657 \State $\Ereduced \gets \sum_{i=1}^{N}{(S_i^{-2} \cdot \Pd[i] \cdot \Tcp[i] + \Ps[i] \cdot \rlap{\Tnew)}} $
658 \State $\Pnorm \gets \frac{\Told}{\Tnew}$
659 \State $\Enorm\gets \frac{\Ereduced}{\Eoriginal}$
660 \If{$(\Pnorm - \Enorm > \Dist)$}
661 \State $\Sopt[i] \gets S_{i},~i=1,\dots,N. $
662 \State $\Dist \gets \Pnorm - \Enorm$
665 \State Return $\Sopt[1],\Sopt[2],\dots,\Sopt[N]$
\caption{Frequency scaling factors selection algorithm}
672 \begin{algorithmic}[1]
674 \For {$k=1$ to \textit{some iterations}}
675 \State Computations section.
676 \State Communications section.
678 \State Gather all times of computation and\newline\hspace*{3em}%
679 communication from each node.
680 \State Call Algorithm \ref{HSA}.
681 \State Compute the new frequencies from the\newline\hspace*{3em}%
682 returned optimal scaling factors.
683 \State Set the new frequencies to nodes.
687 \caption{DVFS algorithm}
691 \subsection{The evaluation of the proposed algorithm}
692 \label{sec.verif.algo}
693 The precision of the proposed algorithm mainly depends on the execution time
694 prediction model defined in (\ref{eq:perf}) and the energy model computed by
695 (\ref{eq:energy}). The energy model is also significantly dependent on the
696 execution time model because the static energy is linearly related to the
697 execution time and the dynamic energy is related to the computation time. So,
all the work presented in this paper is based on the execution time model. To
699 verify this model, the predicted execution time was compared to the real
700 execution time over SimGrid/SMPI simulator,
701 v3.10~\cite{casanova+giersch+legrand+al.2014.versatile}, for all the NAS
702 parallel benchmarks NPB v3.3 \cite{NAS.Parallel.Benchmarks}, running class B on
8 or 9 nodes. The comparison showed that the proposed execution time model is
very precise: the maximum normalized difference between the predicted execution
time and the real execution time was equal to 0.03 for all the NAS benchmarks.
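
This precision metric can be expressed as follows (our formulation of the
comparison, with hypothetical names):
\begin{verbatim}
# Largest normalized gap between predicted and measured
# execution times over all benchmark runs.
def max_normalized_difference(predicted, real):
    return max(abs(p - r) / r
               for p, r in zip(predicted, real))
\end{verbatim}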
Since the proposed algorithm is not an exact method, it does not test all the
708 possible solutions (vectors of scaling factors) in the search space. To prove
709 its efficiency, it was compared on small instances to a brute force search
710 algorithm that tests all the possible solutions. The brute force algorithm was
applied to different NAS benchmark classes with different numbers of nodes. The
712 solutions returned by the brute force algorithm and the proposed algorithm were
713 identical and the proposed algorithm was on average 10 times faster than the
714 brute force algorithm. It has a small execution time: for a heterogeneous
715 cluster composed of four different types of nodes having the characteristics
presented in Table~\ref{table:platform}, it takes on average \np[ms]{0.04} for 4
nodes and \np[ms]{0.15} for 144 nodes to compute the best scaling
factors vector. The algorithm complexity is $O(F\cdot N)$, where $F$
is the number of iterations of the search, bounded by the maximum number of
available frequencies per node, and $N$ is the number of computing nodes. The
algorithm needs from 12 to 20 iterations to select the best vector of frequency
scaling factors that gives the results of the next sections.
723 \section{Experimental results}
725 To evaluate the efficiency and the overall energy consumption reduction of
726 Algorithm~\ref{HSA}, it was applied to the NAS parallel benchmarks NPB v3.3. The
727 experiments were executed on the simulator SimGrid/SMPI which offers easy tools
728 to create a heterogeneous platform and run message passing applications over it.
The heterogeneous platform that was used in the experiments had one core per
node because just one process was executed per node. It
was composed of four types of nodes. Each type of node had different
characteristics such as the maximum CPU frequency, the number of available
frequencies and the computational power, see Table~\ref{table:platform}. The
characteristics of these different types of nodes are inspired by the
specifications of real Intel processors. The heterogeneous platform had up to
144 nodes with the four types in equal proportions; for example, if
a benchmark was executed on 8 nodes, 2 nodes from each type were used. Since the
738 constructors of CPUs do not specify the dynamic and the static power of their
739 CPUs, for each type of node they were chosen proportionally to its computing
power (FLOPS). In the initial heterogeneous platform, while computing with the
highest frequency, each node consumed an amount of power proportional to its
computing power, of which 80\% corresponds to the dynamic power and the
remaining 20\% to the static power; the same assumption was made in
\cite{Our_first_paper,Rauber_Analytical.Modeling.for.Energy}. Finally, these
nodes were connected via an Ethernet network with 1 Gbit/s bandwidth.
749 \caption{Heterogeneous nodes characteristics}
752 \begin{tabular}{|*{7}{r|}}
754 Node &Simulated & Max & Min & Diff. & Dynamic & Static \\
755 type &GFLOPS & Freq. & Freq. & Freq. & power & power \\
756 & & GHz & GHz &GHz & & \\
758 1 &40 & 2.50 & 1.20 & 0.100 & \np[W]{20} &\np[W]{4} \\
761 2 &50 & 2.66 & 1.60 & 0.133 & \np[W]{25} &\np[W]{5} \\
764 3 &60 & 2.90 & 1.20 & 0.100 & \np[W]{30} &\np[W]{6} \\
767 4 &70 & 3.40 & 1.60 & 0.133 & \np[W]{35} &\np[W]{7} \\
771 \label{table:platform}
775 %\subsection{Performance prediction verification}
778 \subsection{The experimental results of the scaling algorithm}
782 The proposed algorithm was applied to the seven parallel NAS benchmarks (EP, CG,
783 MG, FT, BT, LU and SP) and the benchmarks were executed with the three classes:
784 A, B and C. However, due to the lack of space in this paper, only the results of
the biggest class, C, are presented while being run on different numbers of
786 nodes, ranging from 4 to 128 or 144 nodes depending on the benchmark being
787 executed. Indeed, the benchmarks CG, MG, LU, EP and FT had to be executed on $1,
788 2, 4, 8, 16, 32, 64, 128$ nodes. The other benchmarks such as BT and SP had to
789 be executed on $1, 4, 9, 16, 36, 64, 144$ nodes.
794 \caption{Running NAS benchmarks on 4 nodes }
797 \begin{tabular}{|*{7}{r|}}
800 Program & Execution & Energy & Energy & Performance & Distance \\
801 name & time/s & consumption/J & saving\% & degradation\% & \\
803 CG & 64.64 & 3560.39 &34.16 &6.72 &27.44 \\
805 MG & 18.89 & 1074.87 &35.37 &4.34 &31.03 \\
807 EP &79.73 &5521.04 &26.83 &3.04 &23.79 \\
809 LU &308.65 &21126.00 &34.00 &6.16 &27.84 \\
811 BT &360.12 &21505.55 &35.36 &8.49 &26.87 \\
813 SP &234.24 &13572.16 &35.22 &5.70 &29.52 \\
815 FT &81.58 &4151.48 &35.58 &0.99 &34.59 \\
823 \caption{Running NAS benchmarks on 8 and 9 nodes }
826 \begin{tabular}{|*{7}{r|}}
829 Program & Execution & Energy & Energy & Performance & Distance \\
830 name & time/s & consumption/J & saving\% & degradation\% & \\
832 CG &36.11 &3263.49 &31.25 &7.12 &24.13 \\
834 MG &8.99 &953.39 &33.78 &6.41 &27.37 \\
836 EP &40.39 &5652.81 &27.04 &0.49 &26.55 \\
838 LU &218.79 &36149.77 &28.23 &0.01 &28.22 \\
840 BT &166.89 &23207.42 &32.32 &7.89 &24.43 \\
842 SP &104.73 &18414.62 &24.73 &2.78 &21.95 \\
844 FT &51.10 &4913.26 &31.02 &2.54 &28.48 \\
852 \caption{Running NAS benchmarks on 16 nodes }
855 \begin{tabular}{|*{7}{r|}}
858 Program & Execution & Energy & Energy & Performance & Distance \\
859 name & time/s & consumption/J & saving\% & degradation\% & \\
861 CG &31.74 &4373.90 &26.29 &9.57 &16.72 \\
863 MG &5.71 &1076.19 &32.49 &6.05 &26.44 \\
865 EP &20.11 &5638.49 &26.85 &0.56 &26.29 \\
867 LU &144.13 &42529.06 &28.80 &6.56 &22.24 \\
869 BT &97.29 &22813.86 &34.95 &5.80 &29.15 \\
871 SP &66.49 &20821.67 &22.49 &3.82 &18.67 \\
873 FT &37.01 &5505.60 &31.59 &6.48 &25.11 \\
876 \label{table:res_16n}
881 \caption{Running NAS benchmarks on 32 and 36 nodes }
884 \begin{tabular}{|*{7}{r|}}
887 Program & Execution & Energy & Energy & Performance & Distance \\
888 name & time/s & consumption/J & saving\% & degradation\% & \\
890 CG &32.35 &6704.21 &16.15 &5.30 &10.85 \\
892 MG &4.30 &1355.58 &28.93 &8.85 &20.08 \\
894 EP &9.96 &5519.68 &26.98 &0.02 &26.96 \\
896 LU &99.93 &67463.43 &23.60 &2.45 &21.15 \\
898 BT &48.61 &23796.97 &34.62 &5.83 &28.79 \\
900 SP &46.01 &27007.43 &22.72 &3.45 &19.27 \\
902 FT &28.06 &7142.69 &23.09 &2.90 &20.19 \\
905 \label{table:res_32n}
910 \caption{Running NAS benchmarks on 64 nodes }
913 \begin{tabular}{|*{7}{r|}}
916 Program & Execution & Energy & Energy & Performance & Distance \\
917 name & time/s & consumption/J & saving\% & degradation\% & \\
919 CG &46.65 &17521.83 &8.13 &1.68 &6.45 \\
921 MG &3.27 &1534.70 &29.27 &14.35 &14.92 \\
923 EP &5.05 &5471.1084 &27.12 &3.11 &24.01 \\
925 LU &73.92 &101339.16 &21.96 &3.67 &18.29 \\
927 BT &39.99 &27166.71 &32.02 &12.28 &19.74 \\
929 SP &52.00 &49099.28 &24.84 &0.03 &24.81 \\
931 FT &25.97 &10416.82 &20.15 &4.87 &15.28 \\
934 \label{table:res_64n}
939 \caption{Running NAS benchmarks on 128 and 144 nodes }
942 \begin{tabular}{|*{7}{r|}}
945 Program & Execution & Energy & Energy & Performance & Distance \\
946 name & time/s & consumption/J & saving\% & degradation\% & \\
948 CG &56.92 &41163.36 &4.00 &1.10 &2.90 \\
950 MG &3.55 &2843.33 &18.77 &10.38 &8.39 \\
952 EP &2.67 &5669.66 &27.09 &0.03 &27.06 \\
954 LU &51.23 &144471.90 &16.67 &2.36 &14.31 \\
956 BT &37.96 &44243.82 &23.18 &1.28 &21.90 \\
958 SP &64.53 &115409.71 &26.72 &0.05 &26.67 \\
960 FT &25.51 &18808.72 &12.85 &2.84 &10.01 \\
963 \label{table:res_128n}
965 The overall energy consumption was computed for each instance according to the
966 energy consumption model (\ref{eq:energy}), with and without applying the
967 algorithm. The execution time was also measured for all these experiments. Then,
968 the energy saving and performance degradation percentages were computed for each
969 instance. The results are presented in Tables~\ref{table:res_4n},
970 \ref{table:res_8n}, \ref{table:res_16n}, \ref{table:res_32n},
971 \ref{table:res_64n} and \ref{table:res_128n}. All these results are the average
972 values from many experiments for energy savings and performance degradation.
973 The tables show the experimental results for running the NAS parallel benchmarks
974 on different number of nodes. The experiments show that the algorithm
975 significantly reduces the energy consumption (up to 35\%) and tries to limit the
976 performance degradation. They also show that the energy saving percentage
977 decreases when the number of computing nodes increases. This reduction is due
978 to the increase of the communication times compared to the execution times when
979 the benchmarks are run over a high number of nodes. Indeed, the benchmarks with
980 the same class, C, are executed on different numbers of nodes, so the
981 computation required for each iteration is divided by the number of computing
982 nodes. On the other hand, more communications are required when increasing the
983 number of nodes so the static energy increases linearly according to the
984 communication time and the dynamic power is less relevant in the overall energy
985 consumption. Therefore, reducing the frequency with Algorithm~\ref{HSA} is
less effective in reducing the overall energy consumption. It can also be noticed
987 that for the benchmarks EP and SP that contain little or no communications, the
988 energy savings are not significantly affected by the high number of nodes. No
experiments were conducted using classes bigger than C, such as class D, because they require a
lot of memory (more than 64 GB) when being executed by the simulator on one
991 machine. The maximum distance between the normalized energy curve and the
992 normalized performance for each instance is also shown in the result tables. It
decreases in the same way as the energy saving percentage. The tables also show
994 that the performance degradation percentage is not significantly increased when
995 the number of computing nodes is increased because the computation times are
996 small when compared to the communication times.
1002 \subfloat[Energy saving]{%
1003 \includegraphics[width=.33\textwidth]{fig/energy}\label{fig:energy}}%
1005 \subfloat[Performance degradation ]{%
1006 \includegraphics[width=.33\textwidth]{fig/per_deg}\label{fig:per_deg}}
1008 \caption{The energy and performance for all NAS benchmarks running with a different number of nodes}
1011 Figures~\ref{fig:energy} and \ref{fig:per_deg} present the energy saving and
1012 performance degradation respectively for all the benchmarks according to the
1013 number of used nodes. As shown in the first plot, the energy saving percentages
of the benchmarks MG, LU, BT and FT decrease linearly when the number of nodes
increases, while for the EP and SP benchmarks the energy saving percentage is
not affected by the increase of the number of computing nodes because these
benchmarks have little or no communications. Finally, the energy saving of
the CG benchmark decreases significantly when the number of nodes increases
because this benchmark has more communications than the others. The second plot
1020 shows that the performance degradation percentages of most of the benchmarks
1021 decrease when they run on a big number of nodes because they spend more time
communicating than computing; thus, scaling down the frequencies of some nodes
1023 has less effect on the performance.
1028 \subsection{The results for different power consumption scenarios}
1030 The results of the previous section were obtained while using processors that
consume during computation an overall power composed of 80\% dynamic
power and 20\% static power. In this section, these ratios are changed and
1033 two new power scenarios are considered in order to evaluate how the proposed
1034 algorithm adapts itself according to the static and dynamic power values. The
1035 two new power scenarios are the following:
1038 \item 70\% of dynamic power and 30\% of static power
1039 \item 90\% of dynamic power and 10\% of static power
1042 The NAS parallel benchmarks were executed again over processors that follow the
1043 new power scenarios. The class C of each benchmark was run over 8 or 9 nodes
1044 and the results are presented in Tables~\ref{table:res_s1} and
1045 \ref{table:res_s2}. These tables show that the energy saving percentage of the
1046 70\%-30\% scenario is smaller for all benchmarks compared to the energy saving
1047 of the 90\%-10\% scenario. Indeed, in the latter more dynamic power is consumed
when nodes are running on their maximum frequencies; thus, scaling down the
1049 frequency of the nodes results in higher energy savings than in the 70\%-30\%
1050 scenario. On the other hand, the performance degradation percentage is smaller
1051 in the 70\%-30\% scenario compared to the 90\%-10\% scenario. This is due to the
1052 higher static power percentage in the first scenario which makes it more
1053 relevant in the overall consumed energy. Indeed, the static energy is related
1054 to the execution time and if the performance is degraded the amount of consumed
static energy directly increases. Therefore, the proposed algorithm does not
significantly scale down the frequencies of the nodes, in order to
limit the increase of the execution time and thus the growth of the
consumed static energy.
1060 Both new power scenarios are compared to the old one in
1061 Figure~\ref{fig:sen_comp}. It shows the average of the performance degradation, the
1062 energy saving and the distances for all NAS benchmarks of class C running on 8
1063 or 9 nodes. The comparison shows that the energy saving ratio is proportional
1064 to the dynamic power ratio: it is increased when applying the 90\%-10\% scenario
1065 because at maximum frequency the dynamic energy is the most relevant in the
1066 overall consumed energy and can be reduced by lowering the frequency of some
1067 processors. On the other hand, the energy saving decreases when the 70\%-30\%
1068 scenario is used because the dynamic energy is less relevant in the overall
consumed energy, so lowering the frequency does not yield big energy savings.
Moreover, the average performance degradation decreases when using a
higher ratio for static power (e.g. the 70\%-30\% and 80\%-20\%
scenarios). Since the proposed algorithm optimizes the energy consumption,
when using a higher ratio for dynamic power it selects bigger frequency
scaling factors that result in more energy saving but less performance, see
for example Figure~\ref{fig:scales_comp}. The opposite happens when using a
higher ratio for static power: the algorithm proportionally selects smaller
scaling values which result in less energy saving but also less performance
degradation.
1082 \caption{The results of the 70\%-30\% power scenario}
1085 \begin{tabular}{|*{6}{r|}}
1087 Program & Energy & Energy & Performance & Distance \\
1088 name & consumption/J & saving\% & degradation\% & \\
1090 CG &4144.21 &22.42 &7.72 &14.70 \\
1092 MG &1133.23 &24.50 &5.34 &19.16 \\
1094 EP &6170.30 &16.19 &0.02 &16.17 \\
1096 LU &39477.28 &20.43 &0.07 &20.36 \\
1098 BT &26169.55 &25.34 &6.62 &18.71 \\
1100 SP &19620.09 &19.32 &3.66 &15.66 \\
1102 FT &6094.07 &23.17 &0.36 &22.81 \\
1105 \label{table:res_s1}
1111 \caption{The results of the 90\%-10\% power scenario}
1114 \begin{tabular}{|*{6}{r|}}
1116 Program & Energy & Energy & Performance & Distance \\
1117 name & consumption/J & saving\% & degradation\% & \\
1119 CG &2812.38 &36.36 &6.80 &29.56 \\
1121 MG &825.427 &38.35 &6.41 &31.94 \\
1123 EP &5281.62 &35.02 &2.68 &32.34 \\
1125 LU &31611.28 &39.15 &3.51 &35.64 \\
1127 BT &21296.46 &36.70 &6.60 &30.10 \\
1129 SP &15183.42 &35.19 &11.76 &23.43 \\
1131 FT &3856.54 &40.80 &5.67 &35.13 \\
1134 \label{table:res_s2}
1138 \caption{Comparing the proposed algorithm}
1140 \begin{tabular}{|*{7}{r|}}
1142 Program & \multicolumn{2}{c|}{Energy saving \%} & \multicolumn{2}{c|}{Perf. degradation \%} & \multicolumn{2}{c|}{Distance} \\ \cline{2-7}
1143 name & EDP & MaxDist & EDP & MaxDist & EDP & MaxDist \\ \hline
1144 CG & 27.58 & 31.25 & 5.82 & 7.12 & 21.76 & 24.13 \\ \hline
1145 MG & 29.49 & 33.78 & 3.74 & 6.41 & 25.75 & 27.37 \\ \hline
1146 LU & 19.55 & 28.33 & 0.0 & 0.01 & 19.55 & 28.22 \\ \hline
1147 EP & 28.40 & 27.04 & 4.29 & 0.49 & 24.11 & 26.55 \\ \hline
1148 BT & 27.68 & 32.32 & 6.45 & 7.87 & 21.23 & 24.43 \\ \hline
1149 SP & 20.52 & 24.73 & 5.21 & 2.78 & 15.31 & 21.95 \\ \hline
1150 FT & 27.03 & 31.02 & 2.75 & 2.54 & 24.28 & 28.48 \\ \hline
1153 \label{table:compare_EDP}
1158 \subfloat[Comparison between the results on 8 nodes]{%
1159 \includegraphics[width=.33\textwidth]{fig/sen_comp}\label{fig:sen_comp}}%
1161 \subfloat[Comparison the selected frequency scaling factors of MG benchmark class C running on 8 nodes]{%
1162 \includegraphics[width=.33\textwidth]{fig/three_scenarios}\label{fig:scales_comp}}
1164 \caption{The comparison of the three power scenarios}
1169 \includegraphics[scale=0.5]{fig/compare_EDP.pdf}
1170 \caption{Trade-off comparison for NAS benchmarks class C}
1171 \label{fig:compare_EDP}
1175 \subsection{The comparison of the proposed scaling algorithm }
1176 \label{sec.compare_EDP}
1177 In this section, the scaling factors selection algorithm, called MaxDist, is
1178 compared to Spiliopoulos et al. algorithm
1179 \cite{Spiliopoulos_Green.governors.Adaptive.DVFS}, called EDP. They developed a
1180 green governor that regularly applies an online frequency selecting algorithm to
1181 reduce the energy consumed by a multicore architecture without degrading much
its performance. The algorithm selects the frequencies that minimize the energy
and delay product, $\mathit{EDP}=\mathit{energy}\times \mathit{delay}$, using
the predicted overall energy consumption and execution time delay for each
frequency. To fairly compare both algorithms, the same energy and execution
time models, equations (\ref{eq:energy}) and (\ref{eq:perf}), were used for both
algorithms to predict the energy consumption and the execution times. Also,
1188 Spiliopoulos et al. algorithm was adapted to start the search from the initial
1189 frequencies computed using the equation (\ref{eq:Fint}). The resulting algorithm
1190 is an exhaustive search algorithm that minimizes the EDP and has the initial
frequency values as an upper bound.
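
For reference, the EDP criterion can be expressed with the same prediction
helpers used earlier (our sketch, not Spiliopoulos et al.'s code); the adapted
algorithm exhaustively searches, below the initial frequencies, for the vector
minimizing this value:
\begin{verbatim}
# Energy-delay product for a candidate scaling factors vector.
def edp(pd, ps, tcp, tcm, s):
    t_new = max(c * si for c, si in zip(tcp, s)) + min(tcm)
    e = (sum(si ** -2 * p * c
             for si, p, c in zip(s, pd, tcp))
         + sum(p * t_new for p in ps))
    return e * t_new
\end{verbatim}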
1193 Both algorithms were applied to the parallel NAS benchmarks to compare their
1194 efficiency. Table~\ref{table:compare_EDP} presents the results of comparing the
execution times and the energy consumption for both algorithms on the NAS
benchmarks while running the class C of each benchmark over 8 or 9 heterogeneous
1197 nodes. The results show that our algorithm provides better energy savings than
1198 Spiliopoulos et al. algorithm, on average it results in 29.76\% energy saving
while their algorithm returns just 25.75\%. The average performance
1200 degradation percentage is approximately the same for both algorithms, about 4\%.
1203 For all benchmarks, our algorithm outperforms Spiliopoulos et al. algorithm in
1204 terms of energy and performance trade-off, see Figure~\ref{fig:compare_EDP},
1205 because it maximizes the distance between the energy saving and the performance
1206 degradation values while giving the same weight for both metrics.
1209 \section{Conclusion}
1211 In this paper, a new online frequency selecting algorithm has been presented. It
1212 selects the best possible vector of frequency scaling factors that gives the
1213 maximum distance (optimal trade-off) between the predicted energy and the
1214 predicted performance curves for a heterogeneous platform. This algorithm uses a
1215 new energy model for measuring and predicting the energy of distributed
1216 iterative applications running over heterogeneous platforms. To evaluate the
proposed method, it was applied to the NAS parallel benchmarks and executed over
a heterogeneous platform simulated by SimGrid. The results of the experiments
showed that the algorithm reduces the energy consumption of a message
passing iterative method by up to 35\% while limiting the performance degradation. The
1221 algorithm also selects different scaling factors according to the percentage of
1222 the computing and communication times, and according to the values of the static
1223 and dynamic powers of the CPUs. Finally, the algorithm was compared to
1224 Spiliopoulos et al. algorithm and the results showed that it outperforms their
1225 algorithm in terms of energy-time trade-off.
1227 In the near future, this method will be applied to real heterogeneous platforms
1228 to evaluate its performance in a real study case. It would also be interesting
1229 to evaluate its scalability over large scale heterogeneous platforms and measure
1230 the energy consumption reduction it can produce. Afterward, we would like to
1231 develop a similar method that is adapted to asynchronous iterative applications
where each task does not wait for the other tasks to finish their work. The
1233 development of such a method might require a new energy model because the number
1234 of iterations is not known in advance and depends on the global convergence of
1235 the iterative system.
1237 \section*{Acknowledgment}
1239 This work has been partially supported by the Labex
ACTION project (contract ``ANR-11-LABX-01-01''). As a PhD student,
Mr. Ahmed Fanfakh would like to thank the University of
1242 Babylon (Iraq) for supporting his work.
1245 % trigger a \newpage just before the given reference
1246 % number - used to balance the columns on the last page
1247 % adjust value as needed - may need to be readjusted if
1248 % the document is modified later
1249 %\IEEEtriggeratref{15}
1251 \bibliographystyle{IEEEtran}
1252 \bibliography{IEEEabrv,my_reference}
1255 %%% Local Variables:
1259 %%% ispell-local-dictionary: "american"
1262 % LocalWords: Fanfakh Charr FIXME Tianhe DVFS HPC NAS NPB SMPI Rauber's Rauber
1263 % LocalWords: CMOS EPSA Franche Comté Tflop Rünger IUT Maréchal Juin cedex GPU
1264 % LocalWords: de badri muslim MPI SimGrid GFlops Xeon EP BT GPUs CPUs AMD
1265 % LocalWords: Spiliopoulos scalability