\begin{document}
\title{Simultaneous Dynamic Frequency Scaling for Energy-Performance of Parallel MPI Programs}
\author{%
\IEEEauthorblockN{%
\maketitle
\AG{Complete affiliation, add an email address, etc.}
\begin{abstract}
An important technique for reducing the energy consumption of parallel systems is CPU
new frequency value~(\emph{P-state}) in the governor; the CPU governor is an
interface driver supplied by the operating system kernel (e.g., Linux) to
lower the core's frequency. The scaling factor is equal to 1 when the new frequency is
set to the maximum frequency. The energy model that depends on
the frequency scaling factor for a homogeneous platform and any
number of concurrent tasks was developed by Rauber and Rünger~\cite{3}. This
model considers two power metrics for measuring the energy of the parallel
tasks, as in EQ~(\ref{eq:energy}). The scaling factor reduces the dynamic power
quadratically; it also increases the static energy
linearly because the execution time is increased~\cite{36}.
\begin{equation}
\label{eq:energy}
  E = P_\textit{dyn} \cdot S_1^{-2} \cdot
      \left( T_1 + \sum_{i=2}^{N} \frac{T_i^3}{T_1^2} \right)
      + P_\textit{static} \cdot T_1 \cdot S_1 \cdot N
\end{equation}
where $T_1$ is the time of the slowest task, $T_i$ is the time of task $i$, and
$N$ is the number of tasks. The optimal scaling factor $S_\textit{opt}$ that
minimizes this energy is obtained by differentiating EQ~(\ref{eq:energy}) with
respect to $S_1$, as in EQ~(\ref{eq:sopt}).
\begin{equation}
\label{eq:sopt}
  S_\textit{opt} = \sqrt[3]{\frac{2}{N} \cdot \frac{P_\textit{dyn}}{P_\textit{static}} \cdot
\left( 1 + \sum_{i=2}^{N} \frac{T_i^3}{T_1^3} \right) }
\end{equation}
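The optimal factor in EQ~(\ref{eq:sopt}) can be checked with a short derivation sketch (it assumes the energy model of EQ~(\ref{eq:energy}), i.e. a dynamic term scaling as $S_1^{-2}$ and a static term growing linearly in $S_1$):

```latex
\begin{align*}
\frac{\partial E}{\partial S_1}
  &= -2 \cdot P_\textit{dyn} \cdot S_1^{-3}
     \left( T_1 + \sum_{i=2}^{N} \frac{T_i^3}{T_1^2} \right)
     + P_\textit{static} \cdot T_1 \cdot N = 0 \\
\Rightarrow \quad
S_1^3 &= \frac{2}{N} \cdot \frac{P_\textit{dyn}}{P_\textit{static}}
         \cdot \left( 1 + \sum_{i=2}^{N} \frac{T_i^3}{T_1^3} \right)
\end{align*}
```

The cube root of the right-hand side is exactly $S_\textit{opt}$, and since the second derivative is positive for $S_1 > 0$, this stationary point is a minimum of the energy.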
\section{Performance Evaluation of MPI Programs}
\label{sec.mpip}
The performance (execution time) of parallel MPI applications depends on
the time of the slowest task, as in Figure~(\ref{fig:homo}). Normally, the
execution time of a parallel program is inversely proportional to the operational
frequency. Therefore, any DVFS operation for energy reduction increases the
execution time of the program.
\end{algorithmic}
\end{algorithm}
The proposed EPSA algorithm works online during the execution of the MPI
program. This algorithm has a small execution time
(from 0.00152~$ms$ for 4 nodes to 0.00665~$ms$ for 32 nodes) and a complexity of O(F$\cdot$N),
where F is the number of available frequencies and N is the number of computing nodes. It selects the optimal scaling factor by gathering the computation and communication times
from the program after one iteration.
When these times are measured, the MPI program calls the EPSA algorithm to choose the new frequency using the
optimal scaling factor, and then sets the system to this new frequency. The algorithm is called only once during the execution of the program. The DVFS algorithm~(\ref{dvfs}) shows where and when the EPSA algorithm is called in the MPI program.
%\begin{minipage}{\textwidth}
%\AG{Use the same format as for Algorithm~\ref{$EPSA$}}
show that this method keeps or improves the energy saving, because the energy
consumption decreases when the execution time decreases as the frequency value
increases.
\begin{figure}[t]
  \centering
  \includegraphics[width=.33\textwidth]{compare_class_A.pdf}
  \includegraphics[width=.33\textwidth]{compare_class_B.pdf}
  \includegraphics[width=.33\textwidth]{compare_class_c.pdf}
  \caption{Comparing our EPSA with the Rauber and Rünger methods}
  \label{fig:compare}
\end{figure}
Figure~(\ref{fig:compare}) shows the maximum distance between the energy saving
percent and the performance degradation percent, which is the resultant of our
objective function in EQ~(\ref{eq:max}). Our algorithm always
paper. A negative trade-off, in contrast, refers to improving the energy saving
(or possibly the performance) while degrading the performance (or, respectively,
the energy) more than the first one.
\section{Conclusion}
\label{sec.concl}
In this paper we developed the simultaneous energy-performance algorithm, which works based on the trade-off relation between the energy and the performance. The results showed that a bigger scaling factor gives more energy saving, while a smaller scaling factor has a bigger impact on the performance than on the energy. The algorithm optimizes the energy saving and the performance at the same time to obtain a positive trade-off. The optimal trade-off represents the maximum distance between the energy and the inverted performance curves. The results also showed that setting the slowest task to the maximum frequency usually does not yield a big improvement in performance. In the future, we will apply the EPSA algorithm to heterogeneous platforms.