predict both energy consumption and execution time over all available scaling
factors. The prediction relies on the computation and communication times
measured at the beginning of the execution. We apply this algorithm to the NAS parallel benchmarks (NPB v3.3)~\cite{44}. Our experiments are executed using the simulator
SimGrid/SMPI v3.10~\cite{Casanova:2008:SGF:1397760.1398183} over a homogeneous
distributed memory architecture. Furthermore, we compare the proposed algorithm
with the methods of Rauber and Rünger~\cite{3}. The comparison results show that
our algorithm gives a better energy-time trade-off.
\section{Energy model for a homogeneous platform}
\label{sec.exe}
Many researchers~\cite{9,3,15,26} divide the power consumed by a processor into
two power metrics: the static and the dynamic power. While the first one is
consumed as long as the computing unit is turned on, the latter is only consumed
during computation times. With this model, the energy consumed by the parallel
application after applying a vector of scaling factors to its sorted tasks can be
expressed as:
\begin{equation}
  \label{eq:energy}
  E = \Pdyn \cdot S_1^{-2} \cdot
      \left( T_1 + \sum_{i=2}^{N} \frac{T_i^3}{T_1^2} \right) +
      \Pstatic \cdot T_1 \cdot S_1 \cdot N
\end{equation}
where $N$ is the number of parallel nodes, $T_i$ for $i=1,\dots,N$ are
the execution times of the sorted tasks. Therefore, $T_1$ is
the time of the slowest task, and $S_1$ its scaling factor, which is the smallest
one because the scaling factors are inversely proportional to the time values $T_i$. The scaling
factors $S_i$ are computed as in EQ~\eqref{eq:si}.
\begin{equation}
\label{eq:si}
  S_i = S \cdot \frac{T_1}{T_i}
\end{equation}
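To make these formulas concrete, the short Python sketch below computes the
scaling factors of EQ~\eqref{eq:si} and the normalized energy ratio derived from
EQ~\eqref{eq:energy} that the selection algorithm presented next evaluates. It is
only an illustration of the model, not part of the proposed method, and the
variable and function names are ours.
\begin{verbatim}
# Illustrative sketch: scaling factors of EQ (eq:si) and the
# normalized energy ratio E(S) / E(S=1) built from the model above.
def scaling_factors(S, times):
    T1 = max(times)                       # time of the slowest task
    return [S * T1 / Ti for Ti in times]

def normalized_energy(S, times, p_dyn, p_static):
    ts = sorted(times, reverse=True)      # sorted tasks, T_1 first
    T1, rest, N = ts[0], ts[1:], len(ts)
    load = T1 + sum(Ti ** 3 / T1 ** 2 for Ti in rest)
    s1 = S                                # S_1 = S * T_1 / T_1 = S
    e_new = p_dyn * s1 ** -2 * load + p_static * T1 * s1 * N
    e_old = p_dyn * load + p_static * T1 * N
    return e_new / e_old                  # Enorm
\end{verbatim}
For instance, with the illustrative values $T=(4,3,2)$~s, $\Pdyn=20$~W,
$\Pstatic=4$~W and $S=1.25$, \texttt{scaling\_factors} returns
$(1.25, 1.67, 2.5)$, so the slowest task keeps the smallest factor, and
\texttt{normalized\_energy} returns about $0.81$, i.e., a predicted energy
reduction of roughly 19\%.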
\begin{figure}[tp]
\begin{algorithmic}[1]
% \footnotesize
  \Require ~
  \begin{description}
  \item[$\Pstatic$] static power value
  \item[$\Pdyn$] dynamic power value
  \item[$\Pstates$] number of available frequencies
  \item[$\Fmax$] maximum frequency
  \item[$\Fdiff$] difference between two successive freq.
  \end{description}
  \Ensure $\Sopt$ is the optimal scaling factor

  \State $\Sopt \gets 1$
  \State $\Dist \gets 0$
  \State $\Fnew \gets \Fmax$
  \For {$j = 2$ to $\Pstates$}
  \State $\Fnew \gets \Fnew - \Fdiff$
  \State $S \gets \Fmax / \Fnew$
  \State $S_i \gets S \cdot \frac{T_1}{T_i}
= \frac{\Fmax}{\Fnew} \cdot \frac{T_1}{T_i}$
for $i=1,\dots,N$
  \State $\Enorm \gets
\frac{\Pdyn \cdot S_1^{-2} \cdot
\left( T_1 + \sum_{i=2}^{N}\frac{T_i^3}{T_1^2}\right) +
\Pstatic \cdot T_1 \cdot S_1 \cdot N }{
\Pdyn \cdot
\left(T_1+\sum_{i=2}^{N}\frac{T_i^3}{T_1^2}\right) +
\Pstatic \cdot T_1 \cdot N }$
  \State $\PnormInv \gets \Told / \Tnew$
\If{$(\PnormInv - \Enorm > \Dist)$}
  \State $\Sopt \gets S$
  \State $\Dist \gets \PnormInv - \Enorm$
\EndIf
\EndFor
  \State Return $\Sopt$
\end{algorithmic}
\caption{Scaling factor selection algorithm}
\end{figure}
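The same selection loop can be transcribed in a few lines of Python. The sketch
below is an illustration rather than the authors' implementation; it reuses
\texttt{normalized\_energy} from the previous sketch and leaves the performance
prediction as a \texttt{predict\_time} parameter, since the time prediction model
itself is not restated here.
\begin{verbatim}
# Illustrative transcription of the selection loop above (not the
# authors' code).  normalized_energy() is the helper from the
# previous sketch; predict_time(S) stands for the performance model
# returning the predicted execution time for scaling factor S.
def select_scaling_factor(times, p_dyn, p_static, f_max, f_diff,
                          p_states, t_old, predict_time):
    s_opt, dist, f_new = 1.0, 0.0, f_max
    for _ in range(2, p_states + 1):      # j = 2 .. Pstates
        f_new -= f_diff
        s = f_max / f_new
        e_norm = normalized_energy(s, times, p_dyn, p_static)
        p_norm_inv = t_old / predict_time(s)
        if p_norm_inv - e_norm > dist:    # larger trade-off distance
            s_opt, dist = s, p_norm_inv - e_norm
    return s_opt
\end{verbatim}
The returned factor maximizes the difference $\PnormInv - \Enorm$, i.e., the
distance tracked by $\Dist$ in the algorithm above.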
\begin{figure}[tp]
\begin{algorithmic}[1]
% \footnotesize
  \For {$k=1$ to \textit{some iterations}}
\State Computations section.
\State Communications section.
  \If {$(k=1)$}
  \State Gather the computation and communication times of all nodes, measured during this first iteration.
  \State Call the scaling factor selection algorithm with these times to compute $\Sopt$.
  \State Set the new frequency $\Fnew = \Fmax / \Sopt$.
  \EndIf
  \EndFor
\end{algorithmic}
\end{figure}
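As a usage illustration only, the following snippet shows how the selection would
be invoked once, right after the first iteration. All numerical values, as well
as \texttt{predict\_time} and its linear time model, are assumptions made for the
example, not measurements or definitions taken from this work.
\begin{verbatim}
# Hypothetical first-iteration measurements and platform values.
times = [4.0, 3.0, 2.0]                  # per-node computation times (s)
t_old = 4.5                              # measured first-iteration time (s)
p_dyn, p_static = 20.0, 4.0              # dynamic / static power (W)
f_max, f_diff, p_states = 2.5, 0.1, 10   # GHz, GHz step, number of P-states

# Assumed prediction model: computation scales with S, a fixed
# 0.5 s of communication does not (this work defines its own model).
predict_time = lambda S: max(times) * S + 0.5

s_opt = select_scaling_factor(times, p_dyn, p_static, f_max,
                              f_diff, p_states, t_old, predict_time)
f_new = f_max / s_opt                    # frequency to set: Fnew = Fmax / Sopt
\end{verbatim}
The remaining iterations then run at the selected frequency.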
\section{Conclusion}
In this paper, we have presented a new online scaling factor selection method
that simultaneously optimizes the energy consumption and the performance of a distributed
application running on a homogeneous cluster. It uses the computation and
communication times measured at the first iteration to predict energy
consumption and the performance of the parallel application at every available
frequency. Then, it selects the scaling factor that gives the best trade-off