application must be selected.
In this paper, a new online frequency selection algorithm for heterogeneous platforms is presented.
for each node computing the message passing iterative application. The algorithm has a small overhead and
works without training or profiling. It uses a new energy model for message passing iterative applications
running the NAS parallel benchmarks. The experiments demonstrated that it reduces the energy consumption
by up to 35\% while limiting the performance degradation as much as possible.
\end{abstract}
The computing platforms must be more energy efficient and offer the highest number of FLOPS per watt possible,
such as the L-CSC from the GSI Helmholtz Center, which
topped the Green500 list in November 2014~\cite{Green500_List}.
Besides hardware improvements, there are many software techniques to lower the energy consumption of these platforms,
such as scheduling and DVFS. DVFS is a widely used technique that reduces the energy consumption of a processor by lowering its frequency and voltage.
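On Linux platforms, DVFS is usually exposed to user space through the cpufreq subsystem. The following sketch is given for illustration only and is not part of the proposed algorithm; it assumes a node running Linux with the \texttt{userspace} governor available, root privileges, and hypothetical core and frequency values.
\begin{verbatim}
from pathlib import Path

def set_cpu_frequency_khz(cpu: int, freq_khz: int) -> None:
    """Pin one core to a fixed frequency via the Linux cpufreq interface.

    Requires root privileges and the 'userspace' governor; frequencies
    are given in kHz, as listed in scaling_available_frequencies.
    """
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq")
    (base / "scaling_governor").write_text("userspace\n")
    (base / "scaling_setspeed").write_text(f"{freq_khz}\n")

# Example: scale core 0 down to the lowest frequency it advertises.
avail = Path("/sys/devices/system/cpu/cpu0/cpufreq/"
             "scaling_available_frequencies").read_text().split()
set_cpu_frequency_khz(0, min(int(f) for f in avail))
\end{verbatim}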
Moreover, they are not measured using the same metric. To solve this problem, the
execution time is normalized by computing the ratio between the new execution time (after
scaling down the frequencies of some processors) and the initial one (with maximum frequencies).
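For illustration, a minimal sketch of this normalization is given below; the variable names are ours and do not come from the model, and the time and energy values are assumed to be measured or predicted elsewhere.
\begin{verbatim}
def normalized_metrics(t_new, t_old, e_new, e_original):
    """Express both metrics relative to the run at maximum frequencies.

    t_old / t_new is the normalized performance (the inverse of the
    normalized execution time) and e_new / e_original is the normalized
    energy; both ratios are dimensionless and therefore comparable.
    """
    return t_old / t_new, e_new / e_original
\end{verbatim}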
While the main
goal is to optimize the energy and execution time at the same time, the normalized
energy and execution time curves do not evolve in the same direction. According to the underlying models, the
scaling factors $S_1,S_2,\dots,S_N$ affect both the energy consumption and the execution
time simultaneously. However, the main objective is to obtain the maximum energy
reduction with the minimum performance degradation.
The best trade-off corresponds to the maximum distance
between the energy curve (\ref{eq:enorm}) and the performance
curve (\ref{eq:pnorm_inv}) over all available sets of scaling factors. This
represents the minimum energy consumption with minimum execution time (maximum
performance) at the same time, see Figure~\ref{fig:r1} or Figure~\ref{fig:r2}. Then the objective
function has the following form:
\begin{equation}
  \label{eq:max}
  \textit{MaxDist} =
  \mathop{\max_{i=1,\dots,F}}_{j=1,\dots,N}
      (\overbrace{P_\textit{Norm}(S_{ij})}^{\text{Maximize}} -
       \overbrace{E_\textit{Norm}(S_{ij})}^{\text{Minimize}} )
\end{equation}
where $N$ is the number of nodes and $F$ is the number of available frequencies for each node.
Then, the optimal set of scaling factors that satisfies (\ref{eq:max}) can be selected.
The objective function can work with any energy model or any power values for each node
(static and dynamic powers). However, the greatest energy reduction can be achieved when
the energy curve has a convex shape, as shown in~\cite{Zhuo_Energy.efficient.Dynamic.Task.Scheduling,Rauber_Analytical.Modeling.for.Energy,Hao_Learning.based.DVFS}.
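As an illustration of how this objective could be evaluated, the following sketch selects, among a set of candidate scaling factor vectors, the one maximizing the distance between the normalized performance and the normalized energy. The names \texttt{predicted\_time} and \texttt{predicted\_energy} stand for the execution time and energy models and are assumptions of this sketch, not the actual implementation.
\begin{verbatim}
def best_scaling_vector(candidates, predicted_time, predicted_energy,
                        t_old, e_original):
    """Return the candidate vector (S_1, ..., S_N) maximizing
    P_norm - E_norm, i.e. the distance used as objective function."""
    best, best_dist = None, float("-inf")
    for s in candidates:
        p_norm = t_old / predicted_time(s)          # normalized performance
        e_norm = predicted_energy(s) / e_original   # normalized energy
        dist = p_norm - e_norm                      # objective function
        if dist > best_dist:
            best, best_dist = s, dist
    return best
\end{verbatim}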
The proposed algorithm selects the frequency scaling factors
vector that gives the best trade-off between minimizing the energy consumption and maximizing
the performance of a message passing synchronous iterative application executed on a heterogeneous
platform. It works online, during the execution of the iterative message passing program.
\State Round the computed initial frequencies $F_i$ to the closest available frequency of each node.
\If{(not the first frequency)}
\State $F_i \gets F_i+Fdiff_i,~i=1,\dots,N.$
\EndIf
\State $T_\textit{Old} \gets \max_{i=1,\dots,N} (Tcp_i+Tcm_i)$
\State $E_\textit{Original} \gets \sum_{i=1}^{N}{(Pd_i \cdot Tcp_i)} + \sum_{i=1}^{N}{(Ps_i \cdot T_\textit{Old})}$
\State $Sopt_{i} \gets 1,~i=1,\dots,N.$
\State $Dist \gets 0$
\While {(not all nodes have reached their minimum frequency)}
\If{(not the last frequency \textbf{and} not the slowest node)}
\State $F_i \gets F_i - Fdiff_i,~i=1,\dots,N.$
\label{sec.verif.algo}
The precision of the proposed algorithm mainly depends on the execution time prediction model defined in
(\ref{eq:perf}) and the energy model computed by (\ref{eq:energy}).
The proposed algorithm was compared to a brute force algorithm for different numbers of nodes. The solutions returned by the brute force algorithm and the proposed algorithm were identical,
and the proposed algorithm was on average 10 times faster. It has a small execution time:
for a heterogeneous cluster composed of four different types of nodes having the characteristics presented in
to compute the best scaling factors vector. The algorithm complexity is $O(F \cdot N \cdot 4)$, where $F$ is the number
of iterations and $N$ is the number of computing nodes. The algorithm needs from 12 to 20 iterations to select the best
vector of frequency scaling factors that is used to obtain the results presented in the next sections.
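To make the cost of the selection loop concrete, the following simplified sketch mirrors the structure of the algorithm above: at each iteration, the frequency of every node except the slowest one is decreased by one step and the vector giving the largest distance is kept, so at most $F$ iterations of $O(N)$ work are performed. The per-node arrays and the two model functions are illustrative assumptions, not the actual implementation.
\begin{verbatim}
def select_scaling_factors(f_max, f_min, f_diff, f_init, slowest,
                           predicted_time, predicted_energy,
                           t_old, e_original):
    """Simplified sketch of the online scaling factors selection loop."""
    n = len(f_max)
    freqs = list(f_init)                 # current frequency of each node
    s_opt, best_dist = [1.0] * n, 0.0    # initial scaling factors and distance
    while any(freqs[i] > f_min[i] for i in range(n) if i != slowest):
        for i in range(n):
            if i != slowest and freqs[i] > f_min[i]:
                # scale this node one frequency step down
                freqs[i] = max(f_min[i], freqs[i] - f_diff[i])
        s = [f_max[i] / freqs[i] for i in range(n)]   # scaling factors
        p_norm = t_old / predicted_time(s)            # normalized performance
        e_norm = predicted_energy(s) / e_original     # normalized energy
        if p_norm - e_norm > best_dist:               # keep the best trade-off
            s_opt, best_dist = s, p_norm - e_norm
    return s_opt
\end{verbatim}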