that predicts the energy consumed with each frequency gear and selects the one that gives
the best ratio between energy consumption reduction and performance. Furthermore, the main
objective of HPC systems is to execute the application as fast as possible. Therefore, our
algorithm selects the scaling factor online with a very small overhead. The proposed algorithm
takes into account both the computation and communication times of the Message Passing
Interface (MPI) programs to choose the frequency scaling factor.
This algorithm can predict both the energy consumption and the execution time over
all available scaling factors. The prediction depends on some computing time information
gathered at the beginning of the runtime. We have applied this algorithm to the NAS parallel
benchmarks (NPB v3.3) developed by NASA~\cite{ref65}. Our experiments are executed using the simulator
SimGrid/SMPI v3.10~\cite{ref66} over a homogeneous
distributed memory architecture.
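The selection procedure sketched above (predict energy and time for every available frequency gear, then pick the best trade-off) can be illustrated as follows. This is a minimal sketch under assumptions, not the exact implementation: the function name, the power values, the sample timings and the normalization are all illustrative, with the prediction formulas following the models presented later in the chapter:

```python
# Hedged sketch of an online scaling-factor selection loop.  For each
# available factor S it predicts the new execution time (only computations
# are stretched) and the new energy, then keeps the S that maximizes the
# distance between normalized energy reduction and performance degradation.
# All names and the sample values below are illustrative assumptions.

def select_scaling_factor(t_comp, t_comm, factors, p_dyn=20.0, p_static=4.0):
    """t_comp, t_comm: first-iteration computation/communication times (s);
    factors: the available frequency scaling factors S >= 1."""
    t_old = t_comp + t_comm
    e_old = p_dyn * t_comp + p_static * t_old      # assumed baseline energy
    best, best_dist = None, float("-inf")
    for s in factors:
        t_new = t_comp * s + t_comm                # communications unchanged
        e_new = p_dyn * s ** -2 * t_comp + p_static * t_new
        # normalized energy reduction minus performance degradation
        dist = (1 - e_new / e_old) - (t_new / t_old - 1)
        if dist > best_dist:
            best, best_dist = s, dist
    return best

print(select_scaling_factor(8.0, 2.0, [1.0, 1.25, 1.5, 2.0]))   # 1.25
```

The overhead is one pass over the available gears using timings from a single iteration, which is why the selection can run online with very little cost.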
This chapter is composed of two parts. In the first part, the proposed frequency scaling selection algorithm uses the energy model of Rauber and Rünger \cite{ref47} and is compared to Rauber and Rünger's method. The comparison results show that our algorithm gives a better energy-time trade-off. In the second part, a new energy model that takes into account both the communication and computation times of the MPI programs running over a homogeneous cluster is developed.
It also shows the new results obtained using the new energy model. The results are compared to the ones given by Rauber and Rünger's energy model.
This chapter is organized as follows: Section~\ref{ch2:2} explains how the execution
performance of MPI programs can be predicted. Section~\ref{ch2:4} presents
the energy-performance objective function that maximizes the reduction of energy
consumption while minimizing the degradation of the program's performance.
Section~\ref{ch2:5} details the algorithm that returns the scaling factor that gives the best energy-performance
trade-off for a parallel application with iterations.
Section~\ref{ch2:6} verifies the accuracy of the performance prediction model
and presents the results of the proposed algorithm. It also shows the
comparison results between our method and other existing methods.
Section~\ref{ch2:7} describes the new proposed energy consumption model for
homogeneous platforms. Section~\ref{ch2:8} presents the experimental results
of using the new energy model. Finally, Section~\ref{ch2:9} summarizes this chapter.
\subsection{Energy consumption model for a homogeneous platform}
\label{ch2:2:2}
The total energy consumption of a parallel homogeneous
platform, as presented by Rauber and Rünger~\cite{ref47}, can be written as a
function of the scaling factor $S$, as in EQ~\ref{eq:energy}.
Moreover, the scaling factor $S_1$ is the scaling factor which should be the
\begin{equation}
\label{eq:si}
S_i = S \cdot \frac{T_1}{T_i}
  = \frac{F_{max}}{F_{new}} \cdot \frac{T_1}{T_i},~{i=1,2,\dots,N}
\end{equation}
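As a small worked illustration of the equation above, the per-task scaling factors $S_i$ can be computed from the global factor $S = F_{max}/F_{new}$ and the measured task times. The frequencies and timings below are made-up sample values, not measurements from the chapter:

```python
# Illustrative computation of the per-task scaling factors from the
# equation above: S_i = S * T_1 / T_i, where T_1 is the slowest task's
# time.  Frequencies and timings are assumed sample values.
f_max, f_new = 2.5e9, 2.0e9          # Hz, assumed max and new frequencies
s = f_max / f_new                    # global scaling factor S = 1.25
times = [4.0, 3.2, 2.5, 2.0]         # T_1..T_4, with T_1 the slowest
scaling = [s * times[0] / t for t in times]
print(scaling)                       # S_1 equals S itself for the slowest task
```

Note that faster tasks receive larger scaling factors: they can be slowed down more without lengthening the overall execution, which is bounded by the slowest task.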
Rauber and Rünger's scaling factor selection
\section{Performance evaluation of MPI programs}
\label{ch2:3}
The execution time of a parallel synchronous application with iterations is equal
to the execution time of its slowest task as in figure~(\ref{fig:homo}).
If there is no communication in the application and it is not data bound, the execution time of this
parallel application is inversely proportional to the operational frequency. Any
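The slowest-task rule above, combined with the assumption that frequency scaling stretches only the computation part, can be sketched as follows. The function name and the task timings are illustrative assumptions; the formula mirrors the chapter's performance prediction model:

```python
# Hedged sketch of the performance prediction described above: the parallel
# execution time equals the slowest task's time, and a scaling factor S
# stretches only the computation part of each task, not its communications.
def predict_time(tasks, s):
    """tasks: list of (t_comp, t_comm) pairs per MPI rank; s: scaling factor."""
    return max(t_comp * s + t_comm for t_comp, t_comm in tasks)

# Assumed first-iteration timings (seconds) for three ranks.
tasks = [(6.0, 1.0), (5.0, 2.5), (4.0, 1.5)]
print(predict_time(tasks, 1.0))   # 7.5 -> the slowest task dominates
print(predict_time(tasks, 1.5))   # 10.0 -> only computations are stretched
```

Gathering the `(t_comp, t_comm)` pairs during the first iteration is enough to evaluate this prediction for every available scaling factor.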
\end{tabular}
\label{table:platform-homo}
\end{table}
\subsection{Performance prediction verification}
\label{ch2:6:1}
In this section, the precision of the proposed performance prediction method
%see Figure~\ref{fig:pred}
In our cluster there are 18 available frequency states for each processor. This
leads to 18 run states for each program. Seven MPI programs of the NAS
parallel benchmarks were used: CG, MG, EP, FT, BT, LU and SP. Table~\ref{table:NAS-dec} describes
these seven benchmarks. Some of these benchmarks are MPI parallel applications with synchronous iterations, i.e. iterative applications that repeat the same block of instructions until convergence.
However, the proposed method can be applied to any application that executes the same block
of instructions many times and is not limited to iterative methods.
Figure~(\ref{fig:pred}) presents plots of the real execution times compared to the simulated ones. The maximum
normalized error between these two execution times varies between 0.0073 and
0.031, depending on the executed benchmark. The smallest prediction error
was for CG and the worst one was for LU.
\begin{table}[!t]
\centering
\caption{NAS Benchmarks description}
\label{table:NAS-dec}
\begin{tabular}{|l|l|l|}
\hline
Benchmark & Full Name & Description \\ \hline
CG & Conjugate Gradient & \begin{tabular}[c]{@{}l@{}}It solves a system of linear equations by estimating\\ the smallest eigenvalue of a large sparse matrix\end{tabular} \\ \hline
MG & MultiGrid & \begin{tabular}[c]{@{}l@{}}It uses the multigrid method to approximate the solution\\ of a three-dimensional discrete Poisson equation\end{tabular} \\ \hline
EP & Embarrassingly Parallel & \begin{tabular}[c]{@{}l@{}}It applies the Marsaglia polar method to randomly\\ generate independent Gaussian variates\end{tabular} \\ \hline
FT & Fast Fourier Transform & \begin{tabular}[c]{@{}l@{}}It uses the fast Fourier transform to solve a\\ three-dimensional partial differential equation\end{tabular} \\ \hline
BT & Block Tridiagonal & \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\\They solve nonlinear partial differential equations\end{tabular}} \\ \cline{1-2}
LU & \begin{tabular}[c]{@{}l@{}}Lower-Upper symmetric\\ Gauss-Seidel\end{tabular} & \\ \cline{1-2}
SP & Scalar Pentadiagonal & \\ \hline
\end{tabular}
\end{table}
\subsection{The experimental results for the scaling algorithm}
\label{ch2:6:2}
\subsection{Results comparison}
\label{ch2:6:3}
In this section, we compare our scaling factor selection method with Rauber and
Rünger's method~\cite{ref47}. They proposed two scenarios: the first reduces the energy
to the optimal level without considering the performance as in
EQ~(\ref{eq:sopt}). We refer to this scenario as $R_{E}$. The second scenario
is similar to the first except that the slowest task is set to the maximum
possible frequency. We refer to this scenario as $R_{E-P}$ and to our
algorithm as EPSA (Energy to Performance Scaling Algorithm). The comparison is
made in Table~\ref{table:compareC}. This table shows the results of our method and
Rauber and Rünger's scenarios for all the NAS benchmark programs for class C.
\begin{table}[h!]
\caption{Comparing results for the NAS class C}
that our algorithm returns the best scaling factor that satisfies our objective:
the largest distance between energy reduction and performance
degradation. Figure~\ref{fig:compare} illustrates even better the distance between the energy reduction and performance degradation. The negative values mean that one of
the two objectives (energy or performance) has been degraded more than the
other. The positive trade-offs with the highest values lead to maximum energy
savings while keeping the performance degradation as low as possible. Our
algorithm always gives the highest positive energy to performance trade-offs
while Rauber and Rünger's method ($R_{E-P}$) sometimes gives negative
trade-offs such as for BT and EP.
time and $T_{Comp} \leq T$. $T_{Comp}$ may be equal to $T$ if there is no
communication, no slack time and no synchronization.
Applying a DVFS operation leads to a new frequency state which is represented by the frequency scaling factor $S$, computed as in EQ~(\ref{eq:s}). According to Rauber and Rünger's energy model, EQ~(\ref{eq:energy}), the dynamic energy is consumed during the overall program's execution time. This assumption is not precise because the CPU only consumes the dynamic power during computation time. Moreover, the CPU involved remains idle during the communication times and only consumes the static power, see \cite{ref53}. We have also conducted some experiments over a real homogeneous cluster where some MPI programs of the NAS benchmarks were executed while varying the CPUs' frequencies at each execution. The results prove that changing the frequency does not affect the communication times of these programs. Therefore, the frequency scaling factor $S$ increases the computation times proportionally to its value and does not affect the communication times. This assumption is consistent with the used performance prediction model, EQ~(\ref{eq:tnew}). This model is evaluated and its prediction accuracy is shown in Section~\ref{ch2:6:1}. Therefore, the new dynamic energy is the dynamic power multiplied by the new computation time and is given by the following equation:
\begin{equation}
\label{eq:Edyn_new}
E_{dynNew} = P_{dyn} \cdot S^{-3} \cdot (T_{Comp} \cdot S) = S^{-2} \cdot P_{dyn} \cdot T_{Comp}
\end{equation}
According to this model, the frequency scaling factor $S$ reduces the energy consumption of the homogeneous architecture by a factor of $S^{-2}$ and increases the execution time by a factor of $S$.
This model can be used to predict the energy consumption of message passing applications with synchronous iterations after gathering the computation and communication times of the first iteration.
Furthermore, it can be used to measure the energy consumption of a parallel application with iterations by multiplying the energy consumed by all tasks during one iteration by the number of iterations.
This model is used by Algorithm~\ref{EPSA} to predict the energy consumption and to select the optimal frequency scaling factor. The new frequency $F_i$ can be computed as in EQ~(\ref{eq:fi}) while using the new selected frequency scaling factor.
In the next section, Algorithm~\ref{EPSA} is re-evaluated using this new energy model and the new results are presented.
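The claim that the dynamic energy shrinks by $S^{-2}$ while the computation time grows by $S$ can be checked numerically. The function name and the power/time values below are illustrative assumptions, not the chapter's measured parameters:

```python
# Minimal numerical check of the dynamic energy model discussed above:
# the new dynamic power is P_dyn * S^-3 and the new computation time is
# T_comp * S, so their product equals P_dyn * S^-2 * T_comp.
# P_dyn and T_comp below are assumed sample values.
def dyn_energy_new(t_comp, s, p_dyn=20.0):
    """New dynamic energy after applying a frequency scaling factor s."""
    return (p_dyn * s ** -3) * (t_comp * s)

# With S = 2, the dynamic energy drops to a quarter of the original
# P_dyn * T_comp = 160 J, while the computation time doubles.
print(dyn_energy_new(8.0, 2.0))   # 40.0
```

This is only the dynamic part; the static energy in the full model depends on the whole (stretched) execution time, which is why programs with long communication phases behave differently under the two models, as discussed in the next section.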
\label{ch2:8}
This section presents the results of applying the frequency selection algorithm (Algorithm~\ref{EPSA}) using the new proposed energy model, EQ~(\ref{eq:e-new}), to the NAS parallel benchmarks.
The class C of the benchmarks was executed on a homogeneous architecture composed of 16 nodes and simulated by SimGrid. The same static and dynamic power values were used as in Section~\ref{ch2:6:2}. Figure~\ref{fig:energy_con} presents the energy consumption of the NAS benchmarks class C using the new energy model and Rauber and Rünger's model. The energy consumptions of both models are computed using the same parameters: frequency scaling factors, dynamic and static power values. As shown in this figure, the majority of the benchmarks consume less energy using the new model than when using Rauber and Rünger's model.
Two reasons explain these differences in energy consumption. The first one is related to the dynamic power, which in the new energy model is only consumed during the computation time, whereas the other model assumes that the dynamic power is consumed during both computation and communication times, which increases the dynamic energy consumption.
The second reason is related to the execution time. In the new model only the computation times are increased when the frequency of a processor is scaled down, while
Rauber and Rünger's model indicates that both the computation and communication times
are increased according to the scaling factor and hence more static energy is consumed. Therefore, the MPI programs that have long communication times have higher energy consumption values when using Rauber and Rünger's model than when using the new model, as for the CG, SP, LU and FT benchmarks. Whereas, if the MPI programs have very short communication times, their computed energy values have very small differences between both models, such as for the MG and BT benchmarks, or are identical, such as for the EP benchmark where there is no communication and no idle times.
\begin{figure*}[h!]
\label{table:new-res}
\end{table}
Table~\ref{table:new-res} shows the energy saving and performance degradation percentages when applying the frequency selection algorithm using the new proposed energy model. It also presents the newly selected frequency scaling factors and compares them to the ones used by Rauber and Rünger's model. It shows that the newly selected frequency scaling factors are smaller than those selected using the other model because the energies predicted by the new energy model are smaller.
Consequently, smaller energy saving and performance degradation percentages are produced by
these smaller frequency scaling factors, as for the CG, MG, LU, SP and FT benchmarks. For the BT and EP benchmarks, where there are very small or no communication times, similar scaling factors are selected because the energies predicted by the two models are approximately equivalent.
frequency. Then, it selects the scaling factor that gives the best trade-off
between energy reduction and performance which is the maximum distance between
the energy and the performance curves. To evaluate this method, we
have applied it to the NAS benchmarks and it was compared to Rauber and Rünger's
method while being executed on the SimGrid simulator. The results showed that
our method outperforms Rauber and Rünger's method in terms of energy-performance
ratio. Finally, this chapter presents a new energy consumption model for parallel
applications with synchronous iterations running on homogeneous clusters. This model takes into consideration
both the computation and communication times and their relation with the frequency scaling
factor. The results obtained using the new energy model have shown that different frequency scaling factors
were selected, which gave new experimental results that are more accurate and realistic.
\ No newline at end of file