predict both the energy consumption and the execution time over all available scaling
factors. The prediction relies on computation time information
gathered at the beginning of the run. We apply this algorithm to the NAS Parallel Benchmarks (NPB v3.3)~\cite{44}. Our experiments are executed using the simulator
distributed memory architecture. Furthermore, we compare the proposed algorithm
with the Rauber and Rünger method~\cite{3}. The comparison results show that our
algorithm provides a better energy-time trade-off.
\label{sec.exe}
Many researchers~\cite{9,3,15,26} divide the power consumed by a processor into
two metrics: the static power and the dynamic power. While the first one is
where $N$ is the number of parallel nodes, $T_i$ for $i=1,\dots,N$ are
the execution times of the sorted tasks. Therefore, $T_1$ is
the time of the slowest task, and $S_1$ is its scaling factor, which should be the
highest because the scaling factors are proportional to the time values $T_i$. The scaling
\Pdyn \cdot \left(T_1+\sum_{i=2}^{N}\frac{T_i^3}{T_1^2}\right) +
\Pstatic \cdot T_1 \cdot N }
\end{multline}
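The denominator of the normalized-energy expression above, $\Pdyn \cdot \left(T_1+\sum_{i=2}^{N} T_i^3/T_1^2\right) + \Pstatic \cdot T_1 \cdot N$, can be evaluated directly from the sorted task times. A minimal sketch (the function name and the convention that the input is sorted with the slowest task first are our own, not from the paper):

```python
def original_energy(times, p_dyn, p_static):
    """Evaluate Pdyn*(T1 + sum_{i=2..N} Ti^3 / T1^2) + Pstatic * T1 * N,
    where `times` is sorted so times[0] = T1 is the slowest task."""
    t1 = times[0]
    n = len(times)
    dyn = p_dyn * (t1 + sum(t ** 3 / t1 ** 2 for t in times[1:]))
    return dyn + p_static * t1 * n
```

For instance, with $T = (4, 2, 1)$, $\Pdyn = 1$ and $\Pstatic = 2$, the dynamic term is $4 + 8/16 + 1/16 = 4.5625$ and the static term is $2 \cdot 4 \cdot 3 = 24$.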
In the same way we can normalize the performance as follows:
\begin{equation}
\label{eq:pnorm}
\TmaxCompOld + \TmaxCommOld}
\end{equation}
The second problem is that the optimization operation for both energy and
\For {$j = 2$ to $\Pstates$}
\State $\Fnew \gets \Fnew - \Fdiff$
\State $S \gets \Fmax / \Fnew$
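The loop above walks down the available P-states from the maximum frequency, deriving the scaling factor $S = \Fmax / \Fnew$ at each step. A hedged Python sketch of this frequency scan, with a hypothetical \texttt{objective} callback standing in for the energy/performance trade-off evaluation (the callback and the return convention are illustrative, not from the algorithm):

```python
def select_scaling_factor(f_max, f_diff, p_states, objective):
    """Scan the P-states as in the algorithm: for j = 2..Pstates,
    decrement the frequency by f_diff and compute S = Fmax / Fnew.
    `objective(s)` is a hypothetical callback scoring each candidate;
    the factor with the highest score is returned."""
    f_new = f_max
    best_s, best_val = 1.0, objective(1.0)   # S = 1 is the maximum frequency
    for _ in range(2, p_states + 1):          # j = 2 .. Pstates
        f_new -= f_diff
        s = f_max / f_new
        val = objective(s)
        if val > best_val:
            best_s, best_val = s, val
    return best_s
```

For example, with $\Fmax = 4$, $\Fdiff = 1$ and three P-states, the candidate factors are $1$, $4/3$ and $2$.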
In this paper, we have presented a new online scaling factor selection method
that simultaneously optimizes the energy consumption and performance of a distributed
communication times measured at the first iteration to predict the energy
consumption and the performance of the parallel application at every available
frequency. Then, it selects the scaling factor that gives the best trade-off