\newcommand{\JC}[2][inline]{%
\todo[color=red!10,#1]{\sffamily\textbf{JC:} #2}\xspace}
-\newcommand{\Xsub}[2]{\ensuremath{#1_\textit{#2}}}
+\newcommand{\Xsub}[2]{{\ensuremath{#1_\mathit{#2}}}}
-\newcommand{\Dist}{\textit{Dist}}
+\newcommand{\CL}{\Xsub{C}{L}}
+\newcommand{\Dist}{\mathit{Dist}}
+\newcommand{\EdNew}{\Xsub{E}{dNew}}
\newcommand{\Eind}{\Xsub{E}{ind}}
\newcommand{\Enorm}{\Xsub{E}{Norm}}
\newcommand{\Eoriginal}{\Xsub{E}{Original}}
\newcommand{\Ereduced}{\Xsub{E}{Reduced}}
+\newcommand{\Es}{\Xsub{E}{S}}
\newcommand{\Fdiff}{\Xsub{F}{diff}}
\newcommand{\Fmax}{\Xsub{F}{max}}
\newcommand{\Fnew}{\Xsub{F}{new}}
\newcommand{\Ileak}{\Xsub{I}{leak}}
\newcommand{\Kdesign}{\Xsub{K}{design}}
-\newcommand{\MaxDist}{\textit{Max Dist}}
+\newcommand{\MaxDist}{\mathit{Max}\Dist}
+\newcommand{\MinTcm}{\mathit{Min}\Tcm}
\newcommand{\Ntrans}{\Xsub{N}{trans}}
-\newcommand{\Pdyn}{\Xsub{P}{dyn}}
-\newcommand{\PnormInv}{\Xsub{P}{NormInv}}
+\newcommand{\PdNew}{\Xsub{P}{dNew}}
+\newcommand{\PdOld}{\Xsub{P}{dOld}}
+%\newcommand{\Pdyn}{\Xsub{P}{dyn}}
+\newcommand{\Pd}{\Xsub{P}{d}}
+%\newcommand{\PnormInv}{\Xsub{P}{NormInv}}
\newcommand{\Pnorm}{\Xsub{P}{Norm}}
-\newcommand{\Tnorm}{\Xsub{T}{Norm}}
-\newcommand{\Pstates}{\Xsub{P}{states}}
-\newcommand{\Pstatic}{\Xsub{P}{static}}
+%\newcommand{\Pstates}{\Xsub{P}{states}}
+%\newcommand{\Pstatic}{\Xsub{P}{static}}
+\newcommand{\Ps}{\Xsub{P}{s}}
+\newcommand{\Scp}{\Xsub{S}{cp}}
\newcommand{\Sopt}{\Xsub{S}{opt}}
-\newcommand{\Tcomp}{\Xsub{T}{comp}}
-\newcommand{\TmaxCommOld}{\Xsub{T}{Max Comm Old}}
-\newcommand{\TmaxCompOld}{\Xsub{T}{Max Comp Old}}
-\newcommand{\Tmax}{\Xsub{T}{max}}
+\newcommand{\Tcm}{\Xsub{T}{cm}}
+%\newcommand{\Tcomp}{\Xsub{T}{comp}}
+\newcommand{\TcpOld}{\Xsub{T}{cpOld}}
+\newcommand{\Tcp}{\Xsub{T}{cp}}
+%\newcommand{\TmaxCommOld}{\Xsub{T}{Max Comm Old}}
+%\newcommand{\TmaxCompOld}{\Xsub{T}{Max Comp Old}}
+%\newcommand{\Tmax}{\Xsub{T}{max}}
\newcommand{\Tnew}{\Xsub{T}{New}}
+\newcommand{\Tnorm}{\Xsub{T}{Norm}}
\newcommand{\Told}{\Xsub{T}{Old}}
+
\begin{document}
\title{Energy Consumption Reduction for Message Passing Iterative Applications in Heterogeneous Architecture Using DVFS}
heterogeneous CPUs. Many methods were conceived to reduce the energy
consumption of this type of platform. Naveen et
al.~\cite{Naveen_Power.Efficient.Resource.Scaling} developed a method that
-minimizes the value of $energy\cdot delay^2$ (the delay is the sum of slack
-times that happen during synchronous communications) by dynamically assigning
-new frequencies to the CPUs of the heterogeneous cluster. Lizhe et
-al.~\cite{Lizhe_Energy.aware.parallel.task.scheduling} proposed an algorithm
-that divides the executed tasks into two types: the critical and non critical
-tasks. The algorithm scales down the frequency of non critical tasks
-proportionally to their slack and communication times while limiting the
-performance degradation percentage to less than
+minimizes the value of $\mathit{energy}\times \mathit{delay}^2$ (the delay is
+the sum of slack times that happen during synchronous communications) by
+dynamically assigning new frequencies to the CPUs of the heterogeneous
+cluster. Lizhe et al.~\cite{Lizhe_Energy.aware.parallel.task.scheduling}
+proposed an algorithm that divides the executed tasks into two types:
+critical and non-critical tasks. The algorithm scales down the frequency of
+non-critical tasks proportionally to their slack and communication times while
+limiting the performance degradation percentage to less than
10\%. In~\cite{Joshi_Blackbox.prediction.of.impact.of.DVFS}, the authors developed a
heterogeneous cluster composed of two types of processors, Intel and AMD. They
used a gradient method to predict the impact of DVFS operations on performance.
as in (\ref{eq:s}).
\begin{equation}
\label{eq:s}
- S = \frac{F_\textit{max}}{F_\textit{new}}
+ S = \frac{\Fmax}{\Fnew}
\end{equation}
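As a purely illustrative example, scaling a core down from a maximum frequency of
$2.5$~GHz to a new frequency of $2$~GHz gives $S=2.5/2=1.25$; since
$\Fnew \leq \Fmax$, the scaling factor is always greater than or equal to $1$.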
The execution time of a compute-bound sequential program is linearly proportional
to the frequency scaling factor $S$. On the other hand, message passing
vector of scaling factors can be predicted using (\ref{eq:perf}).
\begin{equation}
\label{eq:perf}
- \textit T_\textit{new} =
- \max_{i=1,2,\dots,N} ({TcpOld_{i}} \cdot S_{i}) + MinTcm
+ \Tnew = \max_{i=1,2,\dots,N} ({\TcpOld_{i}} \cdot S_{i}) + \MinTcm
\end{equation}
Where:
\begin{equation}
\label{eq:perf2}
- MinTcm = \min_{i=1,2,\dots,N} (Tcm_i)
+ \MinTcm = \min_{i=1,2,\dots,N} (\Tcm_i)
\end{equation}
-where $TcpOld_i$ is the computation time of processor $i$ during the first
-iteration and $MinTcm$ is the communication time of the slowest processor from
+where $\TcpOld_i$ is the computation time of processor $i$ during the first
+iteration and $\MinTcm$ is the communication time of the slowest processor from
the first iteration. The model computes the maximum computation time with
scaling factor from each node added to the communication time of the slowest
node. It means only the communication time without any slack time is taken into
Rizvandi_Some.Observations.on.Optimal.Frequency} divide the power consumed by a processor into
two power metrics: the static and the dynamic power. While the former is
consumed as long as the computing unit is turned on, the latter is only consumed during
-computation times. The dynamic power $Pd$ is related to the switching
-activity $\alpha$, load capacitance $C_L$, the supply voltage $V$ and
+computation times. The dynamic power $\Pd$ is related to the switching
+activity $\alpha$, load capacitance $\CL$, the supply voltage $V$ and
operational frequency $F$, as shown in (\ref{eq:pd}).
\begin{equation}
\label{eq:pd}
- Pd = \alpha \cdot C_L \cdot V^2 \cdot F
+ \Pd = \alpha \cdot \CL \cdot V^2 \cdot F
\end{equation}
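In DVFS-capable processors, the supply voltage is usually lowered together with
the operational frequency. If the voltage is assumed to be roughly proportional
to the frequency, $V=\beta \cdot F$ for some constant $\beta$, then the dynamic
power grows cubically with the frequency; this assumption is used in
(\ref{eq:pdnew}) below.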
-The static power $Ps$ captures the leakage power as follows:
+The static power $\Ps$ captures the leakage power as follows:
\begin{equation}
\label{eq:ps}
- Ps = V \cdot N_{trans} \cdot K_{design} \cdot I_{leak}
+ \Ps = V \cdot \Ntrans \cdot \Kdesign \cdot \Ileak
\end{equation}
-where V is the supply voltage, $N_{trans}$ is the number of transistors,
-$K_{design}$ is a design dependent parameter and $I_{leak}$ is a
+where $V$ is the supply voltage, $\Ntrans$ is the number of transistors,
+$\Kdesign$ is a design-dependent parameter and $\Ileak$ is a
technology-dependent parameter. The energy consumed by an individual processor
to execute a given program can be computed as:
\begin{equation}
\label{eq:eind}
- E_\textit{ind} = Pd \cdot Tcp + Ps \cdot T
+ \Eind = \Pd \cdot \Tcp + \Ps \cdot T
\end{equation}
-where $T$ is the execution time of the program, $Tcp$ is the computation
-time and $Tcp \le T$. $Tcp$ may be equal to $T$ if there is no
+where $T$ is the execution time of the program, $\Tcp$ is the computation
+time and $\Tcp \le T$. $\Tcp$ may be equal to $T$ if there is no
communication and no slack time.
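As a small numerical illustration with assumed values, a processor consuming a
dynamic power of $20$~W and a static power of $4$~W, and computing during $2$~s
out of a total execution time of $3$~s, would consume
$\Eind = 20 \cdot 2 + 4 \cdot 3 = 52$~J.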
The main objective of DVFS operation is to reduce the overall energy consumption~\cite{Le_DVFS.Laws.of.Diminishing.Returns}.
ratio between the maximum and the new frequency as in (\ref{eq:s}).
The CPU governors are power schemes supplied by the operating
system's kernel to lower a core's frequency. The new frequency
-$F_{new}$ from (\ref{eq:s}) can be calculated as follows:
+$\Fnew$ from (\ref{eq:s}) can be calculated as follows:
\begin{equation}
\label{eq:fnew}
- F_\textit{new} = S^{-1} \cdot F_\textit{max}
+ \Fnew = S^{-1} \cdot \Fmax
\end{equation}
-Replacing $F_{new}$ in (\ref{eq:pd}) as in (\ref{eq:fnew}) gives the following
+Replacing $\Fnew$ in (\ref{eq:pd}) as in (\ref{eq:fnew}) gives the following
equation for dynamic power consumption:
\begin{multline}
\label{eq:pdnew}
- {P}_\textit{dNew} = \alpha \cdot C_L \cdot V^2 \cdot F_{new} = \alpha \cdot C_L \cdot \beta^2 \cdot F_{new}^3 \\
- {} = \alpha \cdot C_L \cdot V^2 \cdot F_{max} \cdot S^{-3} = P_{dOld} \cdot S^{-3}
+ \PdNew = \alpha \cdot \CL \cdot V^2 \cdot \Fnew = \alpha \cdot \CL \cdot \beta^2 \cdot \Fnew^3 \\
+ {} = \alpha \cdot \CL \cdot V^2 \cdot \Fmax \cdot S^{-3} = \PdOld \cdot S^{-3}
\end{multline}
-where $ {P}_\textit{dNew}$ and $P_{dOld}$ are the dynamic power consumed with the
+where $\PdNew$ and $\PdOld$ are the dynamic power consumed with the
new frequency and the maximum frequency respectively.
According to (\ref{eq:pdnew}) the dynamic power is reduced by a factor of $S^{-3}$ when
and is given by the following equation:
\begin{equation}
\label{eq:Edyn}
- E_\textit{dNew} = P_{dOld} \cdot S^{-3} \cdot (Tcp \cdot S)= S^{-2}\cdot P_{dOld} \cdot Tcp
+ \EdNew = \PdOld \cdot S^{-3} \cdot (\Tcp \cdot S)= S^{-2}\cdot \PdOld \cdot \Tcp
\end{equation}
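Continuing the illustrative example above with $S=1.25$, the dynamic power is
divided by $S^{3} \approx 1.95$ while the computation time is multiplied by
$S=1.25$, so the dynamic energy is divided by $S^{2} \approx 1.56$: the dynamic
power is roughly halved, but the dynamic energy is only reduced by about $36\%$.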
The static power is related to the power leakage of the CPU and is consumed during computation
and even when idle. As in~\cite{Rauber_Analytical.Modeling.for.Energy,Zhuo_Energy.efficient.Dynamic.Task.Scheduling},
The static energy of a processor after scaling its frequency is computed as follows:
\begin{equation}
\label{eq:Estatic}
- E_\textit{s} = Ps \cdot (Tcp \cdot S + Tcm)
+ \Es = \Ps \cdot (\Tcp \cdot S + \Tcm)
\end{equation}
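With the same assumed values ($\Ps=4$~W, $\Tcp=2$~s, $S=1.25$) and a
communication time $\Tcm=1$~s, the static energy would be
$\Es = 4 \cdot (2 \cdot 1.25 + 1) = 14$~J instead of the $12$~J consumed at the
maximum frequency: scaling the frequency down always increases the static energy
in this model because it lengthens the execution time.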
In the considered heterogeneous platform, each processor $i$ might have
-different dynamic and static powers, noted as $Pd_{i}$ and $Ps_{i}$
+different dynamic and static powers, noted as $\Pd_{i}$ and $\Ps_{i}$
respectively. Therefore, even if the distributed message passing iterative
application is load balanced, the computation time of each CPU $i$ noted
-$Tcp_{i}$ might be different and different frequency scaling factors might be
+$\Tcp_{i}$ might be different and different frequency scaling factors might be
computed in order to decrease the overall energy consumption of the application
and reduce slack times. The communication time of a processor $i$ is noted as
-$Tcm_{i}$ and could contain slack times when communicating with slower
+$\Tcm_{i}$ and could contain slack times when communicating with slower
nodes, see Figure~\ref{fig:heter}. Therefore, all nodes do not have equal
communication times. While the dynamic energy is computed according to the
frequency scaling factor and the dynamic power of each node as in
static energies for each processor. It is computed as follows:
\begin{multline}
\label{eq:energy}
- E = \sum_{i=1}^{N} {(S_i^{-2} \cdot Pd_{i} \cdot Tcp_i)} + {} \\
- \sum_{i=1}^{N} (Ps_{i} \cdot (\max_{i=1,2,\dots,N} (Tcp_i \cdot S_{i}) +
- {MinTcm))}
+ E = \sum_{i=1}^{N} {(S_i^{-2} \cdot \Pd_{i} \cdot \Tcp_i)} + {} \\
+ \sum_{i=1}^{N} (\Ps_{i} \cdot (\max_{i=1,2,\dots,N} (\Tcp_i \cdot S_{i}) +
+ {\MinTcm))}
\end{multline}
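To make the two models concrete, the following short Python sketch shows how the
execution time of (\ref{eq:perf}) and the energy of (\ref{eq:energy}) can be
predicted for a candidate vector of scaling factors; all variable names and
numerical values are purely illustrative and are not part of the implementation
used in the experiments.
\begin{verbatim}
# Illustrative sketch of the execution time (eq:perf) and
# energy (eq:energy) models; all values are hypothetical.

def predict_time(tcp, tcm, s):
    # max scaled computation time + communication time of the
    # slowest node (the minimum, i.e. without slack)
    return max(t * si for t, si in zip(tcp, s)) + min(tcm)

def predict_energy(tcp, tcm, pd, ps, s):
    # scaled dynamic energy + static energy over the new time
    t_new = predict_time(tcp, tcm, s)
    dyn = sum(si ** -2 * p * t for si, p, t in zip(s, pd, tcp))
    stat = sum(p * t_new for p in ps)
    return dyn + stat

# Three heterogeneous nodes (times in s, powers in W).
tcp = [4.0, 3.0, 2.0]     # computation times at max frequency
tcm = [1.0, 1.5, 2.0]     # communication times (with slack)
pd  = [20.0, 17.0, 15.0]  # dynamic powers
ps  = [4.0, 3.0, 3.0]     # static powers
s   = [1.0, 4 / 3, 2.0]   # candidate scaling factors

print(predict_time(tcp, tcm, s))            # 5.0
print(predict_energy(tcp, tcm, pd, ps, s))  # about 166.2
\end{verbatim}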
Reducing the frequencies of the processors according to the vector of
maximum frequency for all nodes) as follows:
\begin{multline}
\label{eq:pnorm}
- P_\textit{Norm} = \frac{T_\textit{New}}{T_\textit{Old}}\\
- {} = \frac{ \max_{i=1,2,\dots,N} (Tcp_{i} \cdot S_{i}) +MinTcm}
- {\max_{i=1,2,\dots,N}{(Tcp_i+Tcm_i)}}
+ \Tnorm = \frac{\Tnew}{\Told}\\
+ {} = \frac{ \max_{i=1,2,\dots,N} (\Tcp_{i} \cdot S_{i}) +\MinTcm}
+ {\max_{i=1,2,\dots,N}{(\Tcp_i+\Tcm_i)}}
\end{multline}
while scaling down the frequency and the consumed energy with maximum frequency for all nodes:
\begin{multline}
\label{eq:enorm}
- E_\textit{Norm} = \frac{E_\textit{Reduced}}{E_\textit{Original}} \\
- {} = \frac{ \sum_{i=1}^{N}{(S_i^{-2} \cdot Pd_i \cdot Tcp_i)} +
- \sum_{i=1}^{N} {(Ps_i \cdot T_{New})}}{\sum_{i=1}^{N}{( Pd_i \cdot Tcp_i)} +
- \sum_{i=1}^{N} {(Ps_i \cdot T_{Old})}}
+ \Enorm = \frac{\Ereduced}{\Eoriginal} \\
+ {} = \frac{ \sum_{i=1}^{N}{(S_i^{-2} \cdot \Pd_i \cdot \Tcp_i)} +
+ \sum_{i=1}^{N} {(\Ps_i \cdot \Tnew)}}{\sum_{i=1}^{N}{( \Pd_i \cdot \Tcp_i)} +
+ \sum_{i=1}^{N} {(\Ps_i \cdot \Told)}}
\end{multline}
-Where $E_\textit{Reduced}$ and $E_\textit{Original}$ are computed using (\ref{eq:energy}) and
- $T_{New}$ and $T_{Old}$ are computed as in (\ref{eq:pnorm}).
+where $\Ereduced$ and $\Eoriginal$ are computed using (\ref{eq:energy}) and
+$\Tnew$ and $\Told$ are computed as in (\ref{eq:pnorm}).
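For the illustrative three-node example used in the sketch above, the original
energy (all $S_i=1$) would be $211$~J and the reduced energy about $166$~J,
giving $\Enorm \approx 0.79$, while $\Tnew=\Told=5$~s so that the predicted
performance is not degraded at all; vectors of scaling factors that save energy
at little or no performance cost are precisely what the selection algorithm
looks for.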
While the main
goal is to optimize the energy and execution time at the same time, the normalized
normalized execution time is inverted which gives the normalized performance equation, as follows:
\begin{multline}
\label{eq:pnorm_inv}
- P_\textit{Norm} = \frac{T_\textit{Old}}{T_\textit{New}}\\
- = \frac{\max_{i=1,2,\dots,N}{(Tcp_i+Tcm_i)}}
- { \max_{i=1,2,\dots,N} (Tcp_{i} \cdot S_{i}) + MinTcm}
+ \Pnorm = \frac{\Told}{\Tnew}\\
+ = \frac{\max_{i=1,2,\dots,N}{(\Tcp_i+\Tcm_i)}}
+ { \max_{i=1,2,\dots,N} (\Tcp_{i} \cdot S_{i}) + \MinTcm}
\end{multline}
Figure~\ref{fig:r2}. Then the objective function has the following form:
\begin{equation}
\label{eq:max}
- Max Dist =
- \max_{i=1,\dots F, j=1,\dots,N}
- (\overbrace{P_\textit{Norm}(S_{ij})}^{\text{Maximize}} -
- \overbrace{E_\textit{Norm}(S_{ij})}^{\text{Minimize}} )
+ \MaxDist =
+ \mathop{\max_{i=1,\dots,F}}_{j=1,\dots,N}
+ (\overbrace{\Pnorm(S_{ij})}^{\text{Maximize}} -
+ \overbrace{\Enorm(S_{ij})}^{\text{Minimize}} )
\end{equation}
where $N$ is the number of nodes and $F$ is the number of available frequencies for each node.
Then, the optimal set of scaling factors that satisfies (\ref{eq:max}) can be selected.
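To make the objective concrete, if one candidate vector of scaling factors
yields, for example, $\Pnorm=0.97$ and $\Enorm=0.82$, its distance is $0.15$,
whereas a vector yielding $\Pnorm=0.99$ and $\Enorm=0.90$ only reaches $0.09$;
the first vector is therefore preferred even though it degrades the performance
slightly more, because it saves proportionally more energy.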
of the slowest node and the computation time of the node $i$ as follows:
\begin{equation}
\label{eq:Scp}
- Scp_{i} = \frac{\max_{i=1,2,\dots,N}(Tcp_i)}{Tcp_i}
+ \Scp_{i} = \frac{\max_{i=1,2,\dots,N}(\Tcp_i)}{\Tcp_i}
\end{equation}
Using the initial frequency scaling factors computed in (\ref{eq:Scp}), the algorithm computes
the initial frequencies for all nodes as a ratio between the maximum frequency of node $i$
-and the computation scaling factor $Scp_i$ as follows:
+and the computation scaling factor $\Scp_i$ as follows:
\begin{equation}
\label{eq:Fint}
- F_{i} = \frac{Fmax_i}{Scp_i},~{i=1,2,\cdots,N}
+ F_{i} = \frac{\Fmax_i}{\Scp_i},~{i=1,2,\dots,N}
\end{equation}
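As an illustration with assumed values, consider three nodes whose computation
times at their maximum frequencies ($2.4$, $2.4$ and $2$~GHz) are $4$, $3$ and
$2$~s respectively: equation (\ref{eq:Scp}) gives $\Scp = 1$, $4/3$ and $2$, and
equation (\ref{eq:Fint}) then gives initial frequencies of $2.4$, $1.8$ and
$1$~GHz, so the faster nodes are immediately slowed down to the pace of the
slowest one.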
If the computed initial frequency for a node is not available in the gears of
that node, it is replaced by the nearest available frequency. In
% \footnotesize
\Require ~
\begin{description}
- \item[$Tcp_i$] array of all computation times for all nodes during one iteration and with highest frequency.
- \item[$Tcm_i$] array of all communication times for all nodes during one iteration and with highest frequency.
- \item[$Fmax_i$] array of the maximum frequencies for all nodes.
- \item[$Pd_i$] array of the dynamic powers for all nodes.
- \item[$Ps_i$] array of the static powers for all nodes.
- \item[$Fdiff_i$] array of the difference between two successive frequencies for all nodes.
+ \item[$\Tcp_i$] array of all computation times for all nodes during one iteration and with highest frequency.
+ \item[$\Tcm_i$] array of all communication times for all nodes during one iteration and with highest frequency.
+ \item[$\Fmax_i$] array of the maximum frequencies for all nodes.
+ \item[$\Pd_i$] array of the dynamic powers for all nodes.
+ \item[$\Ps_i$] array of the static powers for all nodes.
+ \item[$\Fdiff_i$] array of the difference between two successive frequencies for all nodes.
\end{description}
- \Ensure $Sopt_1,Sopt_2 \dots, Sopt_N$ is a vector of optimal scaling factors
+ \Ensure $\Sopt_1,\Sopt_2 \dots, \Sopt_N$ is a vector of optimal scaling factors
- \State $ Scp_i \gets \frac{\max_{i=1,2,\dots,N}(Tcp_i)}{Tcp_i} $
- \State $F_{i} \gets \frac{Fmax_i}{Scp_i},~{i=1,2,\cdots,N}$
+ \State $\Scp_i \gets \frac{\max_{i=1,2,\dots,N}(\Tcp_i)}{\Tcp_i} $
+ \State $F_{i} \gets \frac{\Fmax_i}{\Scp_i},~{i=1,2,\dots,N}$
\State Round the computed initial frequencies $F_i$ to the closest one available in each node.
\If{(not the first frequency)}
- \State $F_i \gets F_i+Fdiff_i,~i=1,\dots,N.$
+ \State $F_i \gets F_i+\Fdiff_i,~i=1,\dots,N.$
\EndIf
- \State $T_\textit{Old} \gets max_{~i=1,\dots,N } (Tcp_i+Tcm_i)$
- \State $E_\textit{Original} \gets \sum_{i=1}^{N}{( Pd_i \cdot Tcp_i)} +\sum_{i=1}^{N} {(Ps_i \cdot T_{Old})}$
- \State $Sopt_{i} \gets 1,~i=1,\dots,N. $
- \State $Dist \gets 0 $
+ \State $\Told \gets \max_{i=1,\dots,N} (\Tcp_i+\Tcm_i)$
+ % \State $\Eoriginal \gets \sum_{i=1}^{N}{( \Pd_i \cdot \Tcp_i)} +\sum_{i=1}^{N} {(\Ps_i \cdot \Told)}$
+ \State $\Eoriginal \gets \sum_{i=1}^{N}{( \Pd_i \cdot \Tcp_i + \Ps_i \cdot \Told)}$
+ \State $\Sopt_{i} \gets 1,~i=1,\dots,N. $
+ \State $\Dist \gets 0 $
\While {(all nodes not reach their minimum frequency)}
\If{(not the last freq. \textbf{and} not the slowest node)}
- \State $F_i \gets F_i - Fdiff_i,~i=1,\dots,N.$
- \State $S_i \gets \frac{Fmax_i}{F_i},~i=1,\dots,N.$
+ \State $F_i \gets F_i - \Fdiff_i,~i=1,\dots,N.$
+ \State $S_i \gets \frac{\Fmax_i}{F_i},~i=1,\dots,N.$
\EndIf
- \State $T_{New} \gets max_\textit{~i=1,\dots,N} (Tcp_{i} \cdot S_{i}) + MinTcm $
- \State $E_\textit{Reduced} \gets \sum_{i=1}^{N}{(S_i^{-2} \cdot Pd_i \cdot Tcp_i)} + $ \hspace*{43 mm}
- $\sum_{i=1}^{N} {(Ps_i \cdot T_{New})} $
- \State $ P_\textit{Norm} \gets \frac{T_\textit{Old}}{T_\textit{New}}$
- \State $E_\textit{Norm}\gets \frac{E_\textit{Reduced}}{E_\textit{Original}}$
+ \State $\Tnew \gets \max_{i=1,\dots,N} (\Tcp_{i} \cdot S_{i}) + \MinTcm$
+% \State $\Ereduced \gets \sum_{i=1}^{N}{(S_i^{-2} \cdot \Pd_i \cdot \Tcp_i)} + \sum_{i=1}^{N} {(\Ps_i \cdot \rlap{\Tnew)}} $
+ \State $\Ereduced \gets \sum_{i=1}^{N}{(S_i^{-2} \cdot \Pd_i \cdot \Tcp_i + \Ps_i \cdot \rlap{\Tnew)}} $
+ \State $\Pnorm \gets \frac{\Told}{\Tnew}$
+ \State $\Enorm\gets \frac{\Ereduced}{\Eoriginal}$
\If{$(\Pnorm - \Enorm > \Dist)$}
- \State $Sopt_{i} \gets S_{i},~i=1,\dots,N. $
+ \State $\Sopt_{i} \gets S_{i},~i=1,\dots,N. $
\State $\Dist \gets \Pnorm - \Enorm$
\EndIf
\EndWhile
- \State Return $Sopt_1,Sopt_2,\dots,Sopt_N$
+ \State Return $\Sopt_1,\Sopt_2,\dots,\Sopt_N$
\end{algorithmic}
\caption{Frequency scaling factors selection algorithm}
\label{HSA}
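To complement the pseudocode, a compact Python sketch of the selection loop is
given below. It simplifies the per-node gear lists into a minimum frequency and
a uniform frequency step per node, and it omits the rounding of the initial
frequencies; all names and values are illustrative.
\begin{verbatim}
# Illustrative sketch of the MaxDist selection loop; per-node gear
# lists are simplified to a minimum frequency and a uniform step.

def select_scaling_factors(tcp, tcm, fmax, fmin, fdiff, pd, ps):
    n = len(tcp)
    scp = [max(tcp) / t for t in tcp]            # eq:Scp
    f = [fmax[i] / scp[i] for i in range(n)]     # eq:Fint
    t_old = max(tcp[i] + tcm[i] for i in range(n))
    e_orig = sum(pd[i] * tcp[i] + ps[i] * t_old for i in range(n))
    s_opt, best = [1.0] * n, 0.0
    while True:
        s = [fmax[i] / f[i] for i in range(n)]
        t_new = max(tcp[i] * s[i] for i in range(n)) + min(tcm)
        e_red = sum(s[i] ** -2 * pd[i] * tcp[i] + ps[i] * t_new
                    for i in range(n))
        dist = t_old / t_new - e_red / e_orig    # P_Norm - E_Norm
        if dist > best:
            s_opt, best = s[:], dist
        # scale down every node that is neither the slowest one
        # nor already at its lowest available frequency
        changed = False
        for i in range(n):
            if scp[i] != 1.0 and f[i] - fdiff[i] >= fmin[i]:
                f[i] -= fdiff[i]
                changed = True
        if not changed:
            return s_opt

# Example call with the same illustrative values as before:
# select_scaling_factors([4, 3, 2], [1, 1.5, 2], [2.4, 2.4, 2.0],
#                        [1.2, 1.2, 1.0], [0.1] * 3,
#                        [20, 17, 15], [4, 3, 3])
\end{verbatim}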
\subsection{The comparison of the proposed scaling algorithm }
\label{sec.compare_EDP}
-In this section, the scaling factors selection algorithm, called MaxDist,
-is compared to Spiliopoulos et al. algorithm \cite{Spiliopoulos_Green.governors.Adaptive.DVFS}, called EDP.
-They developed a green governor that regularly applies an online frequency selecting algorithm to reduce the energy consumed by a multicore architecture without degrading much its performance. The algorithm selects the frequencies that minimize the energy and delay products, $EDP=Energy\cdot Delay$ using the predicted overall energy consumption and execution time delay for each frequency.
-To fairly compare both algorithms, the same energy and execution time models, equations (\ref{eq:energy}) and (\ref{eq:fnew}), were used for both algorithms to predict the energy consumption and the execution times. Also Spiliopoulos et al. algorithm was adapted to start the search from the
-initial frequencies computed using the equation (\ref{eq:Fint}). The resulting algorithm is an exhaustive search algorithm that minimizes the EDP and has the initial frequencies values as an upper bound.
+In this section, the scaling factors selection algorithm, called MaxDist, is
+compared to the algorithm of Spiliopoulos et
+al.~\cite{Spiliopoulos_Green.governors.Adaptive.DVFS}, called EDP. They
+developed a green governor that regularly applies an online frequency-selecting
+algorithm to reduce the energy consumed by a multicore architecture without
+significantly degrading its performance. The algorithm selects the frequencies
+that minimize the energy and delay product,
+$\mathit{EDP}=\mathit{energy}\times \mathit{delay}$, using the predicted overall
+energy consumption and execution time delay for each frequency. To fairly
+compare both algorithms, the same energy and execution time models, equations
+(\ref{eq:energy}) and (\ref{eq:perf}), were used for both algorithms to predict
+the energy consumption and the execution times. Moreover, the algorithm of
+Spiliopoulos et al. was adapted to start the search from the initial frequencies
+computed using equation (\ref{eq:Fint}). The resulting algorithm is an
+exhaustive search that minimizes the EDP and uses the initial frequency values
+as an upper bound.
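For comparison, the EDP objective that the adapted algorithm minimizes can be
sketched in the same illustrative style; the helper below simply multiplies the
energy predicted by (\ref{eq:energy}) with the execution time predicted by
(\ref{eq:perf}) for one candidate vector of scaling factors.
\begin{verbatim}
# Illustrative EDP objective: predicted energy times predicted time.
def edp(tcp, tcm, pd, ps, s):
    t_new = max(t * si for t, si in zip(tcp, s)) + min(tcm)
    energy = sum(si ** -2 * p * t for si, p, t in zip(s, pd, tcp)) \
             + sum(p * t_new for p in ps)
    return energy * t_new

# The adapted exhaustive search keeps the candidate vector with the
# smallest edp(...) value, starting from the initial frequencies
# of eq:Fint as an upper bound.
\end{verbatim}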
Both algorithms were applied to the parallel NAS benchmarks to compare their
efficiency. Table~\ref{table:compare_EDP} presents the results of comparing the
% LocalWords: Fanfakh Charr FIXME Tianhe DVFS HPC NAS NPB SMPI Rauber's Rauber
% LocalWords: CMOS EPSA Franche Comté Tflop Rünger IUT Maréchal Juin cedex GPU
-% LocalWords: de badri muslim MPI TcpOld TcmOld dNew dOld cp Sopt Tcp Tcm Ps
-% LocalWords: Scp Fmax Fdiff SimGrid GFlops Xeon EP BT GPUs CPUs AMD
+% LocalWords: de badri muslim MPI SimGrid GFlops Xeon EP BT GPUs CPUs AMD
% LocalWords: Spiliopoulos scalability