1 \documentclass[conference]{IEEEtran}
3 \usepackage[T1]{fontenc}
4 \usepackage[utf8]{inputenc}
5 \usepackage[english]{babel}
6 \usepackage{algpseudocode}
13 \DeclareUrlCommand\email{\urlstyle{same}}
15 \usepackage[autolanguage,np]{numprint}
17 \renewcommand*\npunitcommand[1]{\text{#1}}
18 \npthousandthpartsep{}}
21 \usepackage[textsize=footnotesize]{todonotes}
22 \newcommand{\AG}[2][inline]{%
23 \todo[color=green!50,#1]{\sffamily\textbf{AG:} #2}\xspace}
24 \newcommand{\JC}[2][inline]{%
25 \todo[color=red!10,#1]{\sffamily\textbf{JC:} #2}\xspace}
27 \newcommand{\Xsub}[2]{{\ensuremath{#1_\mathit{#2}}}}
29 %% used to put some subscripts lower, and make them more legible
30 \newcommand{\fxheight}[1]{\ifx#1\relax\relax\else\rule{0pt}{1.52ex}#1\fi}
32 \newcommand{\CL}{\Xsub{C}{L}}
33 \newcommand{\Dist}{\mathit{Dist}}
34 \newcommand{\EdNew}{\Xsub{E}{dNew}}
35 \newcommand{\Eind}{\Xsub{E}{ind}}
36 \newcommand{\Enorm}{\Xsub{E}{Norm}}
37 \newcommand{\Eoriginal}{\Xsub{E}{Original}}
38 \newcommand{\Ereduced}{\Xsub{E}{Reduced}}
39 \newcommand{\Es}{\Xsub{E}{S}}
40 \newcommand{\Fdiff}[1][]{\Xsub{F}{diff}_{\!#1}}
41 \newcommand{\Fmax}[1][]{\Xsub{F}{max}_{\fxheight{#1}}}
42 \newcommand{\Fnew}{\Xsub{F}{new}}
43 \newcommand{\Ileak}{\Xsub{I}{leak}}
44 \newcommand{\Kdesign}{\Xsub{K}{design}}
45 \newcommand{\MaxDist}{\mathit{Max}\Dist}
46 \newcommand{\MinTcm}{\mathit{Min}\Tcm}
47 \newcommand{\Ntrans}{\Xsub{N}{trans}}
48 \newcommand{\Pd}[1][]{\Xsub{P}{d}_{\fxheight{#1}}}
49 \newcommand{\PdNew}{\Xsub{P}{dNew}}
50 \newcommand{\PdOld}{\Xsub{P}{dOld}}
51 \newcommand{\Pnorm}{\Xsub{P}{Norm}}
52 \newcommand{\Ps}[1][]{\Xsub{P}{s}_{\fxheight{#1}}}
53 \newcommand{\Scp}[1][]{\Xsub{S}{cp}_{#1}}
54 \newcommand{\Sopt}[1][]{\Xsub{S}{opt}_{#1}}
55 \newcommand{\Tcm}[1][]{\Xsub{T}{cm}_{\fxheight{#1}}}
56 \newcommand{\Tcp}[1][]{\Xsub{T}{cp}_{#1}}
57 \newcommand{\Pmax}[1][]{\Xsub{P}{max}_{\fxheight{#1}}}
58 \newcommand{\Pidle}[1][]{\Xsub{P}{idle}_{\fxheight{#1}}}
59 \newcommand{\TcpOld}[1][]{\Xsub{T}{cpOld}_{#1}}
60 \newcommand{\Tnew}{\Xsub{T}{New}}
61 \newcommand{\Told}{\Xsub{T}{Old}}
65 \title{Energy Consumption Reduction with DVFS for \\
66 Message Passing Iterative Applications on \\
67 Heterogeneous Architectures}
77 FEMTO-ST Institute, University of Franche-Comté\\
78 IUT de Belfort-Montbéliard,
79 19 avenue du Maréchal Juin, BP 527, 90016 Belfort cedex, France\\
80 % Telephone: \mbox{+33 3 84 58 77 86}, % Raphaël
81 % Fax: \mbox{+33 3 84 58 77 81}\\ % Dept Info
82 Email: \email{{jean-claude.charr,raphael.couturier,ahmed.fanfakh_badri_muslim,arnaud.giersch}@univ-fcomte.fr}
93 \section{Introduction}
98 \section{Related works}
\section{The performance and energy consumption measurements on heterogeneous architectures}
105 \subsection{The execution time of message passing distributed iterative
106 applications on a heterogeneous platform}
108 In this paper, we are interested in reducing the energy consumption of message
109 passing distributed iterative synchronous applications running over
110 heterogeneous grid platforms. A heterogeneous grid platform could be defined as a collection of
111 heterogeneous computing clusters interconnected via a long distance network which has lower bandwidth
and higher latency than the local networks of the clusters. Each computing cluster in the grid is composed of homogeneous nodes that are connected together via a high speed network. However, nodes from distinct clusters may have different characteristics such as computing power (FLOPS), energy consumption, CPU frequency range, network bandwidth and latency.
116 \includegraphics[scale=0.6]{fig/commtasks}
117 \caption{Parallel tasks on a heterogeneous platform}
121 The overall execution time of a distributed iterative synchronous application
122 over a heterogeneous grid consists of the sum of the computation time and
123 the communication time for every iteration on a node. However, due to the
124 heterogeneous computation power of the computing clusters, slack times may occur
125 when fast nodes have to wait, during synchronous communications, for the slower
126 nodes to finish their computations (see Figure~\ref{fig:heter}). Therefore, the
127 overall execution time of the program is the execution time of the slowest task
128 which has the highest computation time and no slack time.
130 Dynamic Voltage and Frequency Scaling (DVFS) is a process, implemented in
131 modern processors, that reduces the energy consumption of a CPU by scaling
132 down its voltage and frequency. Since DVFS lowers the frequency of a CPU
133 and consequently its computing power, the execution time of a program running
134 over that scaled down processor may increase, especially if the program is
135 compute bound. The frequency reduction process can be expressed by the scaling
factor $S$, which is the ratio between the maximum and the new frequency of a CPU
140 S = \frac{\Fmax}{\Fnew}
142 The execution time of a compute bound sequential program is linearly
143 proportional to the frequency scaling factor $S$. On the other hand, message
144 passing distributed applications consist of two parts: computation and
145 communication. The execution time of the computation part is linearly
146 proportional to the frequency scaling factor $S$ but the communication time is
147 not affected by the scaling factor because the processors involved remain idle
148 during the communications~\cite{Freeh_Exploring.the.Energy.Time.Tradeoff}. The
communication time for a task is the summation of the periods of time that begin
with an MPI call for sending or receiving a message and end when the message is
synchronously sent or received.
153 Since in a heterogeneous grid each cluster has different characteristics,
154 especially different frequency gears, when applying DVFS operations on the nodes
155 of these clusters, they may get different scaling factors represented by a scaling vector:
$(S_{11}, S_{12},\dots, S_{NM})$ where $S_{ij}$ is the scaling factor of processor $j$ in cluster $i$. To
157 be able to predict the execution time of message passing synchronous iterative
158 applications running over a heterogeneous grid, for different vectors of
159 scaling factors, the communication time and the computation time for all the
160 tasks must be measured during the first iteration before applying any DVFS
161 operation. Then the execution time for one iteration of the application with any
162 vector of scaling factors can be predicted using (\ref{eq:perf}).
165 \Tnew = \mathop{\max_{i=1,\dots N}}_{j=1,\dots,M}({\TcpOld[ij]} \cdot S_{ij})
166 +\mathop{\min_{j=1,\dots,M}} (\Tcm[hj])
169 where $N$ is the number of clusters in the grid, $M$ is the number of nodes in
170 each cluster, $\TcpOld[ij]$ is the computation time of processor $j$ in the cluster $i$
171 and $\Tcm[hj]$ is the communication time of processor $j$ in the cluster $h$ during the
first iteration. The model adds the maximum of the scaled computation times
over all nodes to the communication time of the slowest node in the slowest cluster $h$.
This means that only the communication time without any slack time is taken into account.
175 Therefore, the execution time of the iterative application is equal to
176 the execution time of one iteration as in (\ref{eq:perf}) multiplied by the
177 number of iterations of that application.
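To make the model concrete, the following minimal Python sketch evaluates (\ref{eq:perf}) for a given vector of scaling factors. It is only an illustration: the function name and the array layout are our own assumptions, not part of the proposed method.
\begin{verbatim}
# Illustrative sketch of the execution time prediction model.
# tcp[i][j]: computation time of node j in cluster i, measured
# during the first iteration at maximum frequency;
# tcm_h[j]: communication time of node j in the slowest cluster h;
# s[i][j]: frequency scaling factors.
def predict_tnew(tcp, tcm_h, s):
    # Maximum scaled computation time over all nodes of all clusters.
    max_comp = max(tcp[i][j] * s[i][j]
                   for i in range(len(tcp))
                   for j in range(len(tcp[i])))
    # The minimum communication time of the slowest cluster
    # contains no slack time.
    return max_comp + min(tcm_h)
\end{verbatim}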
179 This prediction model is developed from the model to predict the execution time
of message passing distributed applications for homogeneous and heterogeneous
clusters~\cite{Our_first_paper,pdsec2015}. The execution time prediction model is
182 used in the method to optimize both the energy consumption and the performance
183 of iterative methods, which is presented in the following sections.
186 \subsection{Energy model for heterogeneous platform}
188 Many researchers~\cite{Malkowski_energy.efficient.high.performance.computing,
189 Rauber_Analytical.Modeling.for.Energy,Zhuo_Energy.efficient.Dynamic.Task.Scheduling,
190 Rizvandi_Some.Observations.on.Optimal.Frequency} divide the power consumed by
191 a processor into two power metrics: the static and the dynamic power. While the
192 first one is consumed as long as the computing unit is turned on, the latter is
193 only consumed during computation times. The dynamic power $\Pd$ is related to
194 the switching activity $\alpha$, load capacitance $\CL$, the supply voltage $V$
195 and operational frequency $F$, as shown in (\ref{eq:pd}).
198 \Pd = \alpha \cdot \CL \cdot V^2 \cdot F
200 The static power $\Ps$ captures the leakage power as follows:
203 \Ps = V \cdot \Ntrans \cdot \Kdesign \cdot \Ileak
where $V$ is the supply voltage, $\Ntrans$ is the number of transistors,
206 $\Kdesign$ is a design dependent parameter and $\Ileak$ is a
207 technology dependent parameter. The energy consumed by an individual processor
208 to execute a given program can be computed as:
211 \Eind = \Pd \cdot \Tcp + \Ps \cdot T
213 where $T$ is the execution time of the program, $\Tcp$ is the computation
214 time and $\Tcp \le T$. $\Tcp$ may be equal to $T$ if there is no
215 communication and no slack time.
217 The main objective of DVFS operation is to reduce the overall energy
218 consumption~\cite{Le_DVFS.Laws.of.Diminishing.Returns}. The operational
219 frequency $F$ depends linearly on the supply voltage $V$, i.e., $V = \beta \cdot
F$ with some constant $\beta$. This equation is used to study the change of the
221 dynamic voltage with respect to various frequency values
222 in~\cite{Rauber_Analytical.Modeling.for.Energy}. The reduction process of the
223 frequency can be expressed by the scaling factor $S$ which is the ratio between
224 the maximum and the new frequency as in (\ref{eq:s}). The CPU governors are
225 power schemes supplied by the operating system's kernel to lower a core's
226 frequency. The new frequency $\Fnew$ from (\ref{eq:s}) can be calculated as
230 \Fnew = S^{-1} \cdot \Fmax
232 Replacing $\Fnew$ in (\ref{eq:pd}) as in (\ref{eq:fnew}) gives the following
233 equation for dynamic power consumption:
\PdNew = \alpha \cdot \CL \cdot V^2 \cdot \Fnew = \alpha \cdot \CL \cdot \beta^2 \cdot \Fnew^3 \\
{} = \alpha \cdot \CL \cdot \beta^2 \cdot \Fmax^3 \cdot S^{-3} = \PdOld \cdot S^{-3}
239 where $\PdNew$ and $\PdOld$ are the dynamic power consumed with the
240 new frequency and the maximum frequency respectively.
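For example, under this model, reducing the frequency by a factor $S=1.25$ divides the dynamic power by $1.25^{3} \approx 1.95$. Since, as explained below, the computation time grows by the same factor $S$, the dynamic energy is divided by $1.25^{2} \approx 1.56$, i.e., reduced by about 36\%.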
According to (\ref{eq:pdnew}), the dynamic power is multiplied by
$S^{-3}$ when the frequency is reduced by a factor of
$S$~\cite{Rauber_Analytical.Modeling.for.Energy}. Since the FLOPS of a CPU is
proportional to its frequency, the computation time is increased
246 proportionally to $S$. The new dynamic energy is the dynamic power multiplied
247 by the new time of computation and is given by the following equation:
250 \EdNew = \PdOld \cdot S^{-3} \cdot (\Tcp \cdot S)= S^{-2}\cdot \PdOld \cdot \Tcp
252 The static power is related to the power leakage of the CPU and is consumed
253 during computation and even when idle. As
254 in~\cite{Rauber_Analytical.Modeling.for.Energy,Zhuo_Energy.efficient.Dynamic.Task.Scheduling},
255 the static power of a processor is considered as constant during idle and
256 computation periods, and for all its available frequencies. The static energy
257 is the static power multiplied by the execution time of the program. According
258 to the execution time model in (\ref{eq:perf}), the execution time of the
259 program is the sum of the computation and the communication times. The
260 computation time is linearly related to the frequency scaling factor, while this
261 scaling factor does not affect the communication time. The static energy of a
262 processor after scaling its frequency is computed as follows:
265 \Es = \Ps \cdot (\Tcp \cdot S + \Tcm)
268 In the considered heterogeneous grid platform, each node $j$ in cluster $i$ may have
269 different dynamic and static powers from the nodes of the other clusters,
270 noted as $\Pd[ij]$ and $\Ps[ij]$ respectively. Therefore, even if the distributed
271 message passing iterative application is load balanced, the computation time of each CPU $j$
272 in cluster $i$ noted $\Tcp[ij]$ may be different and different frequency scaling factors may be
273 computed in order to decrease the overall energy consumption of the application
274 and reduce slack times. The communication time of a processor $j$ in cluster $i$ is noted as
275 $\Tcm[ij]$ and could contain slack times when communicating with slower nodes,
276 see Figure~\ref{fig:heter}. Therefore, all nodes do not have equal
277 communication times. While the dynamic energy is computed according to the
278 frequency scaling factor and the dynamic power of each node as in
279 (\ref{eq:Edyn}), the static energy is computed as the sum of the execution time
280 of one iteration multiplied by the static power of each processor. The overall
281 energy consumption of a message passing distributed application executed over a
282 heterogeneous grid platform during one iteration is the summation of all dynamic and
283 static energies for $M$ processors in $N$ clusters. It is computed as follows:
E = \sum_{i=1}^{N} \sum_{j=1}^{M} {(S_{ij}^{-2} \cdot \Pd[ij] \cdot \Tcp[ij])} +
\sum_{i=1}^{N} \sum_{j=1}^{M} (\Ps[ij] \cdot {} \\
(\mathop{\max_{i=1,\dots N}}_{j=1,\dots,M}({\Tcp[ij]} \cdot S_{ij})
+\mathop{\min_{j=1,\dots,M}} (\Tcm[hj]) ))
292 Reducing the frequencies of the processors according to the vector of scaling
293 factors $(S_{11}, S_{12},\dots, S_{NM})$ may degrade the performance of the application
and thus increase the static energy because the execution time is
295 increased~\cite{Kim_Leakage.Current.Moore.Law}. The overall energy consumption
296 for the iterative application can be measured by measuring the energy
297 consumption for one iteration as in (\ref{eq:energy}) multiplied by the number
298 of iterations of that application.
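As an illustration, the overall energy model (\ref{eq:energy}) can be sketched as follows; the array names and layout are assumptions made for readability only.
\begin{verbatim}
# Illustrative sketch of the energy model for one iteration.
# pd[i][j], ps[i][j]: dynamic and static powers of node j in
# cluster i; tcp[i][j]: computation times at maximum frequency;
# tcm_h: communication times of the slowest cluster h;
# s[i][j]: frequency scaling factors.
def predict_energy(pd, ps, tcp, tcm_h, s):
    nodes = [(i, j) for i in range(len(tcp))
                    for j in range(len(tcp[i]))]
    # Predicted duration of one iteration (performance model).
    t_iter = (max(tcp[i][j] * s[i][j] for i, j in nodes)
              + min(tcm_h))
    dynamic = sum(s[i][j] ** -2 * pd[i][j] * tcp[i][j]
                  for i, j in nodes)
    static = sum(ps[i][j] * t_iter for i, j in nodes)
    return dynamic + static
\end{verbatim}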
300 \section{Optimization of both energy consumption and performance}
303 Using the lowest frequency for each processor does not necessarily give the most
304 energy efficient execution of an application. Indeed, even though the dynamic
305 power is reduced while scaling down the frequency of a processor, its
306 computation power is proportionally decreased. Hence, the execution time might
307 be drastically increased and during that time, dynamic and static powers are
308 being consumed. Therefore, it might cancel any gains achieved by scaling down
309 the frequency of all nodes to the minimum and the overall energy consumption of
310 the application might not be the optimal one. It is not trivial to select the
311 appropriate frequency scaling factor for each processor while considering the
312 characteristics of each processor (computation power, range of frequencies,
313 dynamic and static powers) and the task executed (computation/communication
ratio). The aim is to reduce the overall energy consumption while avoiding a
significant increase in the execution time. In our previous
work~\cite{Our_first_paper,pdsec2015}, we proposed a method that selects the optimal
frequency scaling factors for homogeneous and heterogeneous clusters executing a message passing
iterative synchronous application while giving the best trade-off between the
energy consumption and the performance for such applications. In this work we
are interested in heterogeneous grids as described above. Due to the
321 heterogeneity of the processors, a vector of scaling factors should be selected
322 and it must give the best trade-off between energy consumption and performance.
324 The relation between the energy consumption and the execution time for an
application is complex and nonlinear. Thus, unlike the relation between the
execution time and the scaling factor, the relation between the energy and the
frequency scaling factors is nonlinear; for more details refer
to~\cite{Freeh_Exploring.the.Energy.Time.Tradeoff}. Moreover, these relations
329 are not measured using the same metric. To solve this problem, the execution
330 time is normalized by computing the ratio between the new execution time (after
331 scaling down the frequencies of some processors) and the initial one (with
332 maximum frequency for all nodes) as follows:
335 \Pnorm = \frac{\Tnew}{\Told}
where $\Tnew$ is computed as in (\ref{eq:perf}) and $\Told$ is computed as in (\ref{eq:told}):
342 \Told = \mathop{\max_{i=1,2,\dots,N}}_{j=1,2,\dots,M} (\Tcp[ij]+\Tcm[ij])
344 In the same way, the energy is normalized by computing the ratio between the
345 consumed energy while scaling down the frequency and the consumed energy with
346 maximum frequency for all nodes:
349 \Enorm = \frac{\Ereduced}{\Eoriginal}
where $\Ereduced$ is computed using (\ref{eq:energy}) and $\Eoriginal$ is
353 computed as in (\ref{eq:eorginal}).
358 \Eoriginal = \sum_{i=1}^{N} \sum_{j=1}^{M} ( \Pd[ij] \cdot \Tcp[ij]) +
359 \mathop{\sum_{i=1}^{N}} \sum_{j=1}^{M} (\Ps[ij] \cdot \Told)
362 While the main goal is to optimize the energy and execution time at the same
363 time, the normalized energy and execution time curves do not evolve (increase/decrease) in the same way.
According to equations~(\ref{eq:pnorm}) and (\ref{eq:enorm}), the
vector of frequency scaling factors $(S_{11}, S_{12},\dots, S_{NM})$ reduces the energy
consumption and increases the execution time simultaneously. But the main objective is to produce
maximum energy reduction with minimum execution time increase.
369 This problem can be solved by making the optimization process for energy and
370 execution time follow the same evolution according to the vector of scaling factors
371 $(S_{11}, S_{12},\dots, S_{NM})$. Therefore, the equation of the
372 normalized execution time is inverted which gives the normalized performance
373 equation, as follows:
376 \Pnorm = \frac{\Told}{\Tnew}
381 \subfloat[Homogeneous cluster]{%
382 \includegraphics[width=.33\textwidth]{fig/homo}\label{fig:r1}}%
384 \subfloat[Heterogeneous grid]{%
385 \includegraphics[width=.33\textwidth]{fig/heter}\label{fig:r2}}
387 \caption{The energy and performance relation}
390 Then, the objective function can be modeled in order to find the maximum
391 distance between the energy curve (\ref{eq:enorm}) and the performance curve
392 (\ref{eq:pnorm_inv}) over all available sets of scaling factors. This
393 represents the minimum energy consumption with minimum execution time (maximum
394 performance) at the same time, see Figure~\ref{fig:r1} or
395 Figure~\ref{fig:r2}. Then the objective function has the following form:
399 \mathop{ \mathop{\max_{i=1,\dots N}}_{j=1,\dots,M}}_{k=1,\dots,F}
400 (\overbrace{\Pnorm(S_{ijk})}^{\text{Maximize}} -
401 \overbrace{\Enorm(S_{ijk})}^{\text{Minimize}} )
403 where $N$ is the number of clusters, $M$ is the number of nodes in each cluster and
404 $F$ is the number of available frequencies for each node. Then, the optimal set
405 of scaling factors that satisfies (\ref{eq:max}) can be selected.
406 The objective function can work with any energy model or any power
407 values for each node (static and dynamic powers). However, the most important
408 energy reduction gain can be achieved when the energy curve has a convex form as shown
409 in~\cite{Zhuo_Energy.efficient.Dynamic.Task.Scheduling,Rauber_Analytical.Modeling.for.Energy,Hao_Learning.based.DVFS}.
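Reusing the two prediction sketches given earlier, the distance that (\ref{eq:max}) maximizes can be evaluated for one candidate vector of scaling factors as follows (again a purely illustrative sketch):
\begin{verbatim}
# Illustrative sketch: tradeoff distance of one candidate scaling
# vector s; t_old and e_original are obtained with all nodes
# running at maximum frequency.
def tradeoff_distance(s, t_old, e_original, pd, ps, tcp, tcm_h):
    p_norm = t_old / predict_tnew(tcp, tcm_h, s)
    e_norm = predict_energy(pd, ps, tcp, tcm_h, s) / e_original
    return p_norm - e_norm  # to be maximized over all vectors
\end{verbatim}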
\section{The scaling factors selection algorithm for grids}
415 \begin{algorithmic}[1]
419 \item [{$N$}] number of clusters in the grid.
420 \item [{$M$}] number of nodes in each cluster.
421 \item[{$\Tcp[ij]$}] array of all computation times for all nodes during one iteration and with the highest frequency.
422 \item[{$\Tcm[ij]$}] array of all communication times for all nodes during one iteration and with the highest frequency.
423 \item[{$\Fmax[ij]$}] array of the maximum frequencies for all nodes.
424 \item[{$\Pd[ij]$}] array of the dynamic powers for all nodes.
425 \item[{$\Ps[ij]$}] array of the static powers for all nodes.
426 \item[{$\Fdiff[ij]$}] array of the differences between two successive frequencies for all nodes.
\Ensure $\Sopt[11],\Sopt[12],\dots,\Sopt[NM_i]$, a vector of scaling factors that gives the optimal tradeoff between energy consumption and execution time
430 \State $\Scp[ij] \gets \frac{\max_{i=1,2,\dots,N}(\max_{j=1,2,\dots,M_i}(\Tcp[ij]))}{\Tcp[ij]} $
\State $F_{ij} \gets \frac{\Fmax[ij]}{\Scp[ij]},~{i=1,2,\cdots,N},~{j=1,2,\dots,M_i}.$
\State Round the computed initial frequencies $F_{ij}$ to the closest available frequency for each node.
433 \If{(not the first frequency)}
434 \State $F_{ij} \gets F_{ij}+\Fdiff[ij],~i=1,\dots,N,~{j=1,\dots,M_i}.$
\State $\Told \gets$ computed as in equation (\ref{eq:told}).
\State $\Eoriginal \gets$ computed as in equation (\ref{eq:eorginal}).
438 \State $\Sopt[ij] \gets 1,~i=1,\dots,N,~{j=1,\dots,M_i}. $
439 \State $\Dist \gets 0 $
440 \While {(all nodes have not reached their minimum \newline\hspace*{2.5em} frequency \textbf{or} $\Pnorm - \Enorm < 0 $)}
441 \If{(not the last freq. \textbf{and} not the slowest node)}
442 \State $F_{ij} \gets F_{ij} - \Fdiff[ij],~{i=1,\dots,N},~{j=1,\dots,M_i}$.
443 \State $S_{ij} \gets \frac{\Fmax[ij]}{F_{ij}},~{i=1,\dots,N},~{j=1,\dots,M_i}.$
\State $\Tnew \gets$ computed as in equation (\ref{eq:perf}).
\State $\Ereduced \gets$ computed as in equation (\ref{eq:energy}).
447 \State $\Pnorm \gets \frac{\Told}{\Tnew}$
448 \State $\Enorm\gets \frac{\Ereduced}{\Eoriginal}$
449 \If{$(\Pnorm - \Enorm > \Dist)$}
450 \State $\Sopt[ij] \gets S_{ij},~i=1,\dots,N,~j=1,\dots,M_i. $
451 \State $\Dist \gets \Pnorm - \Enorm$
454 \State Return $\Sopt[11],\Sopt[12],\dots,\Sopt[NM_i]$
456 \caption{Scaling factors selection algorithm}
461 \begin{algorithmic}[1]
463 \For {$k=1$ to \textit{some iterations}}
464 \State Computations section.
465 \State Communications section.
467 \State Gather all times of computation and\newline\hspace*{3em}%
468 communication from each node.
469 \State Call Algorithm \ref{HSA}.
470 \State Compute the new frequencies from the\newline\hspace*{3em}%
471 returned optimal scaling factors.
472 \State Set the new frequencies to nodes.
476 \caption{DVFS algorithm}
In this section, the scaling factors selection algorithm for grids, Algorithm~\ref{HSA}, is presented. It selects the vector of the frequency
483 scaling factors that gives the best trade-off between minimizing the
484 energy consumption and maximizing the performance of a message passing
485 synchronous iterative application executed on a grid. It works
486 online during the execution time of the iterative message passing program. It
487 uses information gathered during the first iteration such as the computation
488 time and the communication time in one iteration for each node. The algorithm is
489 executed after the first iteration and returns a vector of optimal frequency
490 scaling factors that satisfies the objective function (\ref{eq:max}). The
491 program applies DVFS operations to change the frequencies of the CPUs according
492 to the computed scaling factors. This algorithm is called just once during the
493 execution of the program. Algorithm~\ref{dvfs} shows where and when the proposed
494 scaling algorithm is called in the iterative MPI program.
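For illustration only, the following Python sketch shows how such an online selection step could be embedded in an MPI iterative solver. It is not the implementation used in the experiments: \texttt{compute\_section}, \texttt{communicate\_section}, \texttt{select\_scaling\_factors} and the frequency values are placeholders, and writing to the cpufreq \texttt{scaling\_setspeed} file assumes the Linux userspace governor and root rights.
\begin{verbatim}
from mpi4py import MPI

comm = MPI.COMM_WORLD
max_iterations = 100          # placeholder
f_max_khz = 2300000           # placeholder, e.g. 2.3 GHz

def compute_section(): pass        # application specific
def communicate_section(): pass    # application specific

def select_scaling_factors(times):
    # Placeholder: run Algorithm 1 on the gathered timings.
    return [1.0] * comm.size

def set_frequency(freq_khz, cpu=0):
    # Assumes the userspace cpufreq governor and root rights.
    path = ("/sys/devices/system/cpu/cpu%d/cpufreq/"
            "scaling_setspeed" % cpu)
    with open(path, "w") as f:
        f.write(str(freq_khz))

for k in range(max_iterations):
    t0 = MPI.Wtime()
    compute_section()
    t1 = MPI.Wtime()
    communicate_section()
    t2 = MPI.Wtime()
    if k == 0:
        # After the first iteration only: gather the timings, run
        # the selection algorithm once on the root process and
        # apply the returned scaling factor locally.
        times = comm.gather((t1 - t0, t2 - t1), root=0)
        s_opt = (select_scaling_factors(times)
                 if comm.rank == 0 else None)
        s_opt = comm.bcast(s_opt, root=0)
        set_frequency(int(f_max_khz / s_opt[comm.rank]))
\end{verbatim}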
498 \includegraphics[scale=0.45]{fig/init_freq}
499 \caption{Selecting the initial frequencies}
503 Nodes from distinct clusters in a grid have different computing powers, thus
504 while executing message passing iterative synchronous applications, fast nodes
505 have to wait for the slower ones to finish their computations before being able
506 to synchronously communicate with them as in Figure~\ref{fig:heter}. These
507 periods are called idle or slack times. The algorithm takes into account this
508 problem and tries to reduce these slack times when selecting the vector of the frequency
509 scaling factors. At first, it selects initial frequency scaling factors
510 that increase the execution times of fast nodes and minimize the differences
511 between the computation times of fast and slow nodes. The value of the initial
512 frequency scaling factor for each node is inversely proportional to its
513 computation time that was gathered from the first iteration. These initial
514 frequency scaling factors are computed as a ratio between the computation time
of the slowest node and the computation time of node $j$ in cluster $i$ as follows:
518 \Scp[ij] = \frac{ \mathop{\max_{i=1,\dots N}}_{j=1,\dots,M}(\Tcp[ij])} {\Tcp[ij]}
520 Using the initial frequency scaling factors computed in (\ref{eq:Scp}), the
521 algorithm computes the initial frequencies for all nodes as a ratio between the
maximum frequency of node $j$ in cluster $i$ and its computation scaling factor $\Scp[ij]$ as
526 F_{ij} = \frac{\Fmax[ij]}{\Scp[ij]},~{i=1,2,\dots,N},~{j=1,\dots,M}
528 If the computed initial frequency for a node is not available in the gears of
529 that node, it is replaced by the nearest available frequency. In
530 Figure~\ref{fig:st_freq}, the nodes are sorted by their computing powers in
531 ascending order and the frequencies of the faster nodes are scaled down
532 according to the computed initial frequency scaling factors. The resulting new
533 frequencies are highlighted in Figure~\ref{fig:st_freq}. This set of
frequencies can be considered as an upper bound for the search space of the
optimal vector of frequencies, because selecting higher frequencies
than this upper bound will not improve the performance of the application and it
537 will increase its overall energy consumption. Therefore the algorithm that
538 selects the frequency scaling factors starts the search method from these
539 initial frequencies and takes a downward search direction toward lower
540 frequencies until reaching the nodes' minimum frequencies or lower bounds. A node's frequency is considered its lower bound if the computed distance between the energy and performance at this frequency is less than zero.
541 A negative distance means that the performance degradation ratio is higher than the energy saving ratio.
542 In this situation, the algorithm must stop the downward search because it has reached the lower bound and it is useless to test the lower frequencies. Indeed, they will all give worse distances.
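As an illustration, the computation of the initial frequencies and their rounding to the available gears can be sketched as follows (the gear lists and array layout are assumptions):
\begin{verbatim}
# Illustrative sketch of the initialization step: compute the
# initial scaling factors (eq. Scp), derive the target frequencies
# (eq. Fi) and round them to the closest available gear.
def initial_frequencies(tcp, fmax, gears):
    t_slowest = max(max(row) for row in tcp)
    freqs = {}
    for i, row in enumerate(tcp):
        for j, t in enumerate(row):
            scp = t_slowest / t          # initial scaling factor
            target = fmax[i][j] / scp    # initial frequency
            freqs[i, j] = min(gears[i][j],
                              key=lambda g: abs(g - target))
    return freqs
\end{verbatim}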
Therefore, the algorithm iterates on all remaining frequencies, from the upper
bound until all nodes reach their minimum frequencies or their lower bounds, to compute the overall
546 energy consumption and performance and selects the optimal vector of the frequency scaling
547 factors. At each iteration the algorithm determines the slowest node
548 according to the equation (\ref{eq:perf}) and keeps its frequency unchanged,
549 while it lowers the frequency of all other nodes by one gear. The new overall
550 energy consumption and execution time are computed according to the new scaling
551 factors. The optimal set of frequency scaling factors is the set that gives the
552 highest distance according to the objective function (\ref{eq:max}).
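The downward search itself can be sketched as follows. This is a simplified, illustrative rendering of Algorithm~\ref{HSA}: \texttt{distance} is a callable computing $\Pnorm - \Enorm$ with the prediction models sketched earlier, \texttt{slowest} identifies the node whose frequency is kept unchanged, and the per-iteration re-detection of the slowest node is omitted.
\begin{verbatim}
# Illustrative, simplified sketch of the downward search.
# freqs: initial (upper bound) frequencies, indexed by node key;
# fmax, fmin, fdiff: maximum/minimum frequencies and gear steps.
def downward_search(freqs, fmax, fmin, fdiff, slowest, distance):
    s_opt = {k: fmax[k] / freqs[k] for k in freqs}
    best = 0.0
    while any(k != slowest and freqs[k] > fmin[k] for k in freqs):
        for k in freqs:  # one gear down, except the slowest node
            if k != slowest and freqs[k] > fmin[k]:
                freqs[k] -= fdiff[k]
        s = {k: fmax[k] / freqs[k] for k in freqs}
        d = distance(s)  # Pnorm(s) - Enorm(s)
        if d < 0:
            break        # lower bound reached, stop the search
        if d > best:
            s_opt, best = dict(s), d
    return s_opt
\end{verbatim}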
554 Figures~\ref{fig:r1} and \ref{fig:r2} illustrate the normalized performance and
555 consumed energy for an application running on a homogeneous cluster and a
556 grid platform respectively while increasing the scaling factors. It can
557 be noticed that in a homogeneous cluster the search for the optimal scaling
558 factor should start from the maximum frequency because the performance and the
559 consumed energy decrease from the beginning of the plot. On the other hand, in
560 the grid platform the performance is maintained at the beginning of the
plot even if the frequencies of the faster nodes decrease, until the computing
powers of the scaled down nodes become lower than that of the slowest node, in other
words, until they reach the upper bound. It can also be noticed that the higher the
564 difference between the faster nodes and the slower nodes is, the bigger the
565 maximum distance between the energy curve and the performance curve is, which results in bigger energy savings.
568 \section{Experimental results}
570 While in~\cite{pdsec2015} the energy model and the scaling factors selection algorithm were applied to a heterogeneous cluster and evaluated over the SimGrid simulator~\cite{SimGrid},
in this paper real experiments were conducted over the Grid'5000 platform.
\subsection{Grid'5000 architecture and power consumption}
Grid'5000~\cite{grid5000} is a large-scale testbed that consists of ten sites distributed over metropolitan France and Luxembourg. All the sites are connected together via a special long distance network called RENATER,
which is the French National Telecommunication Network for Technology.
Each site of the grid is composed of a few heterogeneous
computing clusters and each cluster contains many homogeneous nodes. In total,
Grid'5000 has about one thousand heterogeneous nodes and eight thousand cores. In each site,
the clusters and their nodes are connected via high speed local area networks.
Two types of local networks are used, Ethernet and Infiniband, which have different characteristics in terms of bandwidth and latency.
Since Grid'5000 is dedicated to testing, contrary to production grids it allows a user to deploy their own customized operating system on all the booked nodes. The user can have root rights and thus apply DVFS operations while executing a distributed application. Moreover, the Grid'5000 testbed provides at some sites a power measurement tool that captures
the power consumption of each node at those sites. The measured power is the overall power consumed by all the components of a node at a given instant, such as the CPU, hard drive, motherboard, memory, etc. For more details, refer to
\cite{Energy_measurement}. To measure the CPU power of one core of a node $j$,
firstly the power consumed by the node while being idle at instant $y$, noted $\Pidle[jy]$, was measured. Then, the power was measured while running a single-thread benchmark with no communication (no idle time) on the same node with its CPU scaled to the maximum available frequency. The latter power, measured at time $x$ with maximum frequency for one core of node $j$, is noted $\Pmax[jx]$. The difference between the two measured powers represents the
dynamic power consumption of that core at the maximum frequency, see Figure~\ref{fig:power_cons}.
The dynamic power $\Pd[j]$ is computed as in equation (\ref{eq:pdyn}):
593 \Pd[j] = \max_{x=\beta_1,\dots \beta_2} (\Pmax[jx]) - \min_{y=\Theta_1,\dots \Theta_2} (\Pidle[jy])
596 where $\Pd[j]$ is the dynamic power consumption for one core of node $j$,
597 $\lbrace \beta_1,\beta_2 \rbrace$ is the time interval for the measured maximum power values,
598 $\lbrace\Theta_1,\Theta_2\rbrace$ is the time interval for the measured idle power values.
Therefore, the dynamic power of one core is computed as the difference between the maximum
value measured in the vector of maximum powers and the minimum value measured in the vector of idle powers.
On the other hand, the static power consumed by one core is a part of the measured idle power consumption of the node. Since there is no way in Grid'5000 to precisely measure the consumed static power, and since in~\cite{Our_first_paper,pdsec2015,Rauber_Analytical.Modeling.for.Energy} the static power was assumed to be a fixed ratio of the dynamic power, the static power is set to 20\% of the dynamic power consumption of the core.
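The corresponding computation on the measured power traces can be sketched as follows (the trace format is an assumption):
\begin{verbatim}
# Illustrative sketch of equation (pdyn): pmax_trace holds the
# node power samples while one core runs a CPU-bound benchmark at
# maximum frequency; pidle_trace holds samples of the idle node.
def dynamic_power(pmax_trace, pidle_trace):
    return max(pmax_trace) - min(pidle_trace)

def static_power(pmax_trace, pidle_trace):
    # Assumed, as in the text, to be 20% of the dynamic power.
    return 0.2 * dynamic_power(pmax_trace, pidle_trace)
\end{verbatim}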
In the experiments presented in the following sections, two sites of Grid'5000 were used, the Lyon and Nancy sites. These two sites have in total seven different clusters, as shown in Figure~\ref{fig:grid5000}.
606 Four clusters from the two sites were selected in the experiments: one cluster from
607 Lyon's site, Taurus cluster, and three clusters from Nancy's site, Graphene,
Griffon and Graphite. Each one of these clusters has homogeneous nodes inside, while nodes from different clusters are heterogeneous in many aspects such as computing power, power consumption, available
frequency ranges and local network features (bandwidth and latency). Table \ref{table:grid5000} shows
the detailed characteristics of these four clusters. Moreover, the dynamic powers were computed using equation (\ref{eq:pdyn}) for all the nodes in the
selected clusters and are presented in Table \ref{table:grid5000}.
618 \includegraphics[scale=1]{fig/grid5000}
\caption{The two selected sites of Grid'5000}
The energy model and the scaling factors selection algorithm were applied to the NAS parallel benchmarks v3.3 \cite{NAS.Parallel.Benchmarks} and evaluated over Grid'5000.
The benchmark suite contains seven applications: CG, MG, EP, LU, BT, SP and FT. These applications have different computation to communication ratios and different communication strategies, which makes them good testbed applications to evaluate the proposed algorithm and energy model.
The benchmarks have seven different classes, S, W, A, B, C, D and E, that represent the size of the problem to be solved. In this work, class D was used for all benchmarks in all the experiments presented in the next sections.
633 \includegraphics[scale=0.6]{fig/power_consumption.pdf}
\caption{The power consumption of one core of the Taurus cluster}
635 \label{fig:power_cons}
\caption{CPU characteristics of the selected clusters}
645 \begin{tabular}{|*{7}{c|}}
Cluster & CPU & Max & Min & Diff. & No. of cores & Dynamic power \\
648 Name & model & Freq. & Freq. & Freq. & per CPU & of one core \\
649 & & GHz & GHz & GHz & & \\
651 Taurus & Intel & 2.3 & 1.2 & 0.1 & 6 & \np[W]{35} \\
653 & E5-2630 & & & & & \\
655 Graphene & Intel & 2.53 & 1.2 & 0.133 & 4 & \np[W]{23} \\
659 Griffon & Intel & 2.5 & 2 & 0.5 & 4 & \np[W]{46} \\
663 Graphite & Intel & 2 & 1.2 & 0.1 & 8 & \np[W]{35} \\
665 & E5-2650 & & & & & \\
668 \label{table:grid5000}
673 \subsection{The experimental results of the scaling algorithm}
In this section, the results of applying the scaling factors selection algorithm (Algorithm~\ref{HSA})
to the NAS parallel benchmarks are presented.
678 As mentioned previously, the experiments
were conducted over two sites of Grid'5000, the Lyon and Nancy sites.
Two scenarios were considered while selecting the clusters from these two sites:
\item In the first scenario, nodes from two sites and three heterogeneous clusters were selected. The two sites
are connected via a long distance network.
\item In the second scenario, nodes from three clusters located in a single site, the Nancy site, were selected.
The main reason behind using these two scenarios is to evaluate the influence of long distance communications (higher latency) on the performance of the
scaling factors selection algorithm. Indeed, in the first scenario the computations to communications ratio
is very low due to the higher communication times, which reduces the effect of DVFS operations.
The NAS parallel benchmarks are executed over
16 and 32 nodes for each scenario. The number of participating computing nodes from each cluster
differs because the selected clusters do not all have the same number of available nodes and the benchmarks do not all require the same number of computing nodes.
696 Table \ref{tab:sc} shows the number of nodes used from each cluster for each scenario.
700 \caption{The different clusters scenarios}
702 \begin{tabular}{|*{4}{c|}}
704 \multirow{2}{*}{Scenario name} & \multicolumn{3}{c|} {The participating clusters} \\ \cline{2-4}
705 & Cluster & Site & No. of nodes \\
707 \multirow{3}{*}{Two sites / 16 nodes} & Taurus & Lyon & 5 \\ \cline{2-4}
708 & Graphene & Nancy & 5 \\ \cline{2-4}
709 & Griffon & Nancy & 6 \\
\multirow{3}{*}{Two sites / 32 nodes} & Taurus & Lyon & 10 \\ \cline{2-4}
712 & Graphene & Nancy & 10 \\ \cline{2-4}
713 & Griffon &Nancy & 12 \\
715 \multirow{3}{*}{One site / 16 nodes} & Graphite & Nancy & 4 \\ \cline{2-4}
716 & Graphene & Nancy & 6 \\ \cline{2-4}
717 & Griffon & Nancy & 6 \\
719 \multirow{3}{*}{One site / 32 nodes} & Graphite & Nancy & 4 \\ \cline{2-4}
720 & Graphene & Nancy & 12 \\ \cline{2-4}
721 & Griffon & Nancy & 12 \\
729 \includegraphics[scale=0.5]{fig/eng_con_scenarios.eps}
\caption{The energy consumption of the NAS benchmarks over the different scenarios}
738 \includegraphics[scale=0.5]{fig/time_scenarios.eps}
\caption{The execution times of the NAS benchmarks over the different scenarios}
The NAS parallel benchmarks are executed over these two platforms
with different numbers of nodes, as shown in Table \ref{tab:sc}.
The overall energy consumption of all the benchmarks solving the class D instance and
using the proposed frequency selection algorithm is computed
with the equation of the reduced energy consumption, equation
(\ref{eq:energy}). This model uses the measured dynamic and static
power values shown in Table \ref{table:grid5000}. The execution
time is measured for all the benchmarks over these different scenarios.
The energy consumptions and the execution times of all the benchmarks are
presented in Figures \ref{fig:eng_sen} and \ref{fig:time_sen} respectively.
For the majority of the benchmarks, the energy consumed while executing the NAS benchmarks over the one site scenario
with 16 or 32 nodes is lower than the energy consumed while using two sites.
The long distance communications between the two distributed sites increase the idle time, which leads to more static energy consumption.
The execution times of these benchmarks
over one site with 16 and 32 nodes are also lower than those of the two sites
scenario. However, the execution times and the energy consumptions of the EP and MG benchmarks, which have no or small communications, are not significantly affected
in either scenario, even when the number of nodes is doubled. On the other hand, the communication times of the other benchmarks increase when using long distance communications between the two sites or when increasing the number of computing nodes.
767 \includegraphics[scale=0.5]{fig/eng_s.eps}
\caption{The energy saving of the NAS benchmarks over the different scenarios}
775 \includegraphics[scale=0.5]{fig/per_d.eps}
\caption{The performance degradation of the NAS benchmarks over the different scenarios}
783 \includegraphics[scale=0.5]{fig/dist.eps}
\caption{The tradeoff distance of the NAS benchmarks over the different scenarios}
The energy saving percentage is computed as the ratio between the reduced
energy consumption, equation (\ref{eq:energy}), and the original energy consumption,
equation (\ref{eq:eorginal}), for all the benchmarks, as shown in Figure \ref{fig:eng_s}.
This figure shows that the energy saving percentages of the one site scenario for
16 and 32 nodes are bigger than those of the two sites scenario, which is due
to the higher computations to communications ratio in the one site scenario.
Moreover, the frequency selection algorithm selects smaller frequencies when the
computation times are higher than the communication times, which
results in a lower energy consumption. Indeed, the dynamic power consumed
is cubically related to the CPU's frequency value, see (\ref{eq:pdnew}). On the other side, increasing the number of computing nodes can
increase the communication times and thus produce less energy saving depending on the
benchmarks being executed. The results of the benchmarks CG, MG, BT and FT show a bigger
energy saving percentage in the one site scenario when executed over 16 nodes rather than 32 nodes, while LU and SP consume more energy with 16 nodes than with 32 nodes in one site because their computations to
communications ratio is not affected by the increase in the number of local communications.
In the two sites scenario, the energy saving percentage is reduced for all the benchmarks because of the long distance communications,
except for the EP benchmark which has no communications. Therefore, the energy saving percentage of this benchmark only
depends on the maximum difference between the computing powers of the heterogeneous computing nodes. For example,
in the one site scenario the Graphite cluster is selected, but in the two sites scenario
this cluster is replaced with the Taurus cluster, which is more powerful.
Therefore, the energy saving of the EP benchmark is bigger in the two sites scenario due
to the higher maximum difference between the computing powers of the nodes. Indeed, bigger
differences between the nodes' computing powers make the proposed frequency selection
algorithm select smaller frequencies for the powerful nodes, which
produces less energy consumption and thus more energy saving.
The best energy saving percentage was obtained in the one site scenario with 16 nodes: the energy consumption was reduced on average by up to 30\%.
Figure \ref{fig:per_d} presents the performance degradation percentages for all the benchmarks.
The performance degradation percentage for the benchmarks running on two sites with
16 or 32 nodes is on average equal to 8\% and 4\% respectively.
In the two sites scenario, the proposed scaling algorithm selects smaller frequencies when the number of nodes increases,
because the computations to communications ratio decreases; since the computations then
represent a smaller share of the execution time, this leads to a smaller performance degradation percentage.
In contrast, the performance degradation percentage for the benchmarks running on one site with
16 or 32 nodes is on average equal to 3\% and 10\% respectively.
The inverse happens in this scenario: when the number of computing nodes is increased,
the performance degradation percentage is increased. Indeed, doubling the number of computing
nodes when the communications occur over a high speed network does not decrease the computations to
communications ratio. Moreover, as shown in Figure \ref{fig:time_sen}, the execution times of the one site scenario with 32 nodes
are approximately halved (linear speed-up) for most of the benchmarks compared to the one site scenario with 16 nodes.
This increases the number of critical nodes, any one of which may increase the overall execution time of the benchmarks when its frequency is scaled down.
The EP benchmark gives the biggest performance degradation ratio because it has no
communications and no slack times, so its performance is governed solely by the computation speed.
The tradeoff between the energy saving and the performance degradation of these scenarios can be computed with the tradeoff function (\ref{eq:max}).
Figure \ref{fig:dist} presents the tradeoff distance for all the benchmarks over all the
platform scenarios. The one site scenarios with 16 and 32 nodes give the best tradeoff distances
compared to the two sites scenarios, due to the increase or decrease in the communications as mentioned before.
The one site scenario with 16 nodes is the best scenario in terms of energy and performance tradeoff,
with an average distance of up to 26\%; the tradeoff distance is thus linearly related to the energy saving
percentage. Finally, the best energy and performance tradeoff depends on all of the following:
1) the computations to communications ratio when there are communications and slack times, 2) the differences in computing powers
between the computing nodes and 3) the differences in the static and dynamic powers of the nodes.}
\subsection{The experimental results for multicore clusters}
The Grid'5000 clusters have different numbers of cores embedded in their nodes,
as shown in Table \ref{table:grid5000}. Moreover, the cores of each node
communicate via a shared memory model: the data transfers between the cores' local
memories are achieved via the global memory \cite{rauber_book}. Therefore, in
this section the proposed scaling algorithm is evaluated over the Grid'5000
clusters using multiple cores per node, with the same
two platform scenarios as in Section \ref{sec.res}.
The two platform scenarios, the two sites and one site scenarios, with 32
nodes are reconfigured to use multiple cores per node. For example, if
the number of participating nodes from a certain cluster was equal to 12,
in the multicore scenario only 3 nodes are selected and
4 cores are used on each of them, which also produces 12 cores. The scenarios with one
core and with multicores are described in Table \ref{table:sen-mc}.
The energy consumptions and execution times of the NAS parallel
benchmarks, class D, over these four different scenarios are presented
in Figures \ref{fig:eng-cons-mc} and \ref{fig:time-mc} respectively.
The execution times of the NAS benchmarks over the one site multicore scenario
are higher than those over the one site one core scenario. Indeed, in
the one site multicore scenario the communications increase significantly
and all the cores of a node share the same network link, which increases
the communication times. Whereas the execution times of the NAS benchmarks over
the two sites multicore scenario are lower than those executed over the two
sites one core scenario, because using multicores decreases the amount of long
distance communications: the cores of a node share the same link, but the
communications between the cores are still cheaper than the communications
between the nodes over the long distance
network, and thus the overall execution time decreases. Generally, executing
the NAS benchmarks over the one site one core scenario gives smaller execution times
than the other scenarios, because each node in this scenario has its own
dedicated network link that is used independently by one core, while in the other
scenarios the communication times are higher when using long distance communication
links or the link shared between the cores of each node.
On the other hand, the energy consumption of the NAS benchmarks over the
one site one core scenario is lower than over the one site multicore scenario because
the execution time is lower, as mentioned before. Also, in the
one site one core scenario the computations to communications ratio is
higher, so the newly selected frequencies decrease the dynamic energy
consumption, which decreases as $S^{-2}$ according to (\ref{eq:Edyn}).
These experiments also show that the energy
consumption and the execution times of the EP and MG benchmarks do not change
significantly over these four
scenarios, because they have no or very small communications
that could increase or decrease the static energy consumption.
The energy consumptions and execution times of the other benchmarks
change according to the decrease or increase of the communication
times, which differ from one scenario to another and depend on the amount of
communications in each of them.
The energy saving percentages of all the NAS benchmarks running over these four
scenarios are presented in Figure \ref{fig:eng-s-mc}. The figure
shows that the energy saving percentages of the NAS benchmarks over the two sites multicore scenario are higher
than over the two sites one core scenario, because the computation
times in the former are higher, so a bigger reduction in the
dynamic energy can be obtained, as mentioned previously. In contrast, in the one site one
core and one site multicore scenarios the energy saving percentages
are approximately equivalent; on average they are up to 25\%. In both scenarios there is only a small difference in the
computations to communications ratio, leading the proposed scaling algorithm
to select frequencies proportionally to these ratios and to keep
the energy saving percentages almost the same. The
performance degradation percentages of the NAS benchmarks are presented in
Figure \ref{fig:per-d-mc}. This figure indicates that the performance
degradation percentage of the NAS benchmarks over the two sites
multicore scenario, on average equal to 7\%, is higher
than over the two sites one core scenario, on average equal to 4\%.
Indeed, the two sites multicore scenario increases
the computations to communications ratio, which may increase
the overall execution time when the proposed scaling algorithm scales down the frequencies.
The inverse happens over one site: the performance degradation percentages
of the benchmarks executed over the one site one core scenario, on average
equal to 10\%, are higher than those executed over the one site multicore scenario,
which on average are equal to 7\%. Indeed, in the one site
multicore scenario the computations to communications ratio is decreased,
as mentioned before, so selecting lower frequencies does not significantly increase
the overall execution time. The tradeoff distances of all the NAS
benchmarks over all the scenarios are presented in Figure \ref{fig:dist-mc}.
These tradeoff distances are used to determine which scenario is the best in terms of
the energy and performance ratio. The one site multicore scenario gives the best
energy and performance tradeoff, on average equal to 17.6\%, compared to the one site one core
scenario, on average equal to 15.3\%: the one site multicore scenario
has the same energy saving percentages as the one site one core scenario but
with less performance degradation. The two sites multicore scenario also gives a better
energy and performance tradeoff, on average equal to 14.7\%, than the two sites
one core scenario, on average equal to 13.3\%.
Finally, using multicores in both scenarios increases the energy and performance tradeoff
distance. This is generally because using multicores increases the computations to communications
ratio in the two sites scenario, so the energy saving percentage increases more than the performance degradation percentage, whereas this ratio decreases
in the one site scenario, so the performance degradation percentage decreases more than the energy saving percentage.
\caption{The multicore scenarios}
940 \begin{tabular}{|*{4}{c|}}
942 Scenario name & Cluster name & \begin{tabular}[c]{@{}c@{}}No. of nodes\\ in each cluster\end{tabular} &
943 \begin{tabular}[c]{@{}c@{}}No. of cores\\ for each node\end{tabular} \\ \hline
944 \multirow{3}{*}{Two sites/ one core} & Taurus & 10 & 1 \\ \cline{2-4}
945 & Graphene & 10 & 1 \\ \cline{2-4}
946 & Griffon & 12 & 1 \\ \hline
947 \multirow{3}{*}{Two sites/ multicores} & Taurus & 3 & 3 or 4 \\ \cline{2-4}
948 & Graphene & 3 & 3 or 4 \\ \cline{2-4}
949 & Griffon & 3 & 4 \\ \hline
950 \multirow{3}{*}{One site/ one core} & Graphite & 4 & 1 \\ \cline{2-4}
951 & Graphene & 12 & 1 \\ \cline{2-4}
952 & Griffon & 12 & 1 \\ \hline
953 \multirow{3}{*}{One site/ multicores} & Graphite & 3 & 3 or 4 \\ \cline{2-4}
954 & Graphene & 3 & 3 or 4 \\ \cline{2-4}
955 & Griffon & 3 & 4 \\ \hline
962 \includegraphics[scale=0.5]{fig/eng_con.eps}
\caption{Comparing the energy consumption of the NAS benchmarks over the one core and multicore scenarios}
964 \label{fig:eng-cons-mc}
970 \includegraphics[scale=0.5]{fig/time.eps}
\caption{Comparing the execution times of the NAS benchmarks over the one core and multicore scenarios}
977 \includegraphics[scale=0.5]{fig/eng_s_mc.eps}
\caption{The energy saving of the NAS benchmarks over the one core and multicore scenarios}
984 \includegraphics[scale=0.5]{fig/per_d_mc.eps}
\caption{The performance degradation of the NAS benchmarks over the one core and multicore scenarios}
991 \includegraphics[scale=0.5]{fig/dist_mc.eps}
\caption{The tradeoff distance of the NAS benchmarks over the one core and multicore scenarios}
996 \subsection{The results for different power consumption scenarios}
\subsection{The comparison of the proposed scaling algorithm}
1003 \label{sec.compare_EDP}
1007 \section{Conclusion}
1012 \section*{Acknowledgment}
1014 This work has been partially supported by the Labex ACTION project (contract
1015 ``ANR-11-LABX-01-01''). Computations have been performed on the supercomputer
1016 facilities of the Mésocentre de calcul de Franche-Comté. As a PhD student,
Mr. Ahmed Fanfakh would like to thank the University of Babylon (Iraq) for
1018 supporting his work.
1020 % trigger a \newpage just before the given reference
1021 % number - used to balance the columns on the last page
1022 % adjust value as needed - may need to be readjusted if
1023 % the document is modified later
1024 %\IEEEtriggeratref{15}
1026 \bibliographystyle{IEEEtran}
1027 \bibliography{IEEEabrv,my_reference}
1030 %%% Local Variables:
1034 %%% ispell-local-dictionary: "american"
1037 % LocalWords: Fanfakh Charr FIXME Tianhe DVFS HPC NAS NPB SMPI Rauber's Rauber
1038 % LocalWords: CMOS EPSA Franche Comté Tflop Rünger IUT Maréchal Juin cedex GPU
1039 % LocalWords: de badri muslim MPI SimGrid GFlops Xeon EP BT GPUs CPUs AMD
1040 % LocalWords: Spiliopoulos scalability