\usepackage[english]{babel}
\usepackage{algpseudocode}
\usepackage{graphicx}
\usepackage{algorithm}
\usepackage{subfig}
\usepackage{amsmath}
\begin{equation}
  \label{eq:perf}
  Tnew = \max_{i=1,2,\dots,N} (TcpOld_{i} \cdot S_{i}) + MinTcm
\end{equation}
where $TcpOld_i$ is the computation time of processor $i$ during the first iteration and $MinTcm$ is the communication time of the slowest processor from the first iteration. The model adds the maximum computation time, scaled by its frequency scaling factor, to the communication time of the slowest node, i.e. only the communication time without any slack time. Therefore, the execution time of the iterative application can be considered equal to the execution time of one iteration as in EQ(\ref{eq:perf}) multiplied by the number of iterations of that application.
This prediction model is based on our model for predicting the execution time of message passing distributed applications for homogeneous architectures~\cite{45}. The execution time prediction model is used in our method for optimizing both energy consumption and performance of iterative methods, which is presented in the following sections.
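As a purely illustrative example with invented values, consider three nodes whose computation times during the first iteration are $TcpOld = (4, 3, 2)$ seconds, with scaling factors $S = (1, \tfrac{4}{3}, 2)$ and $MinTcm = 0.5$ seconds: EQ(\ref{eq:perf}) then predicts $Tnew = \max(4 \cdot 1,\, 3 \cdot \tfrac{4}{3},\, 2 \cdot 2) + 0.5 = 4.5$ seconds per iteration, and an application running 100 iterations would be predicted to take about $450$ seconds.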
scaling factors $(S_1, S_2,\dots, S_N)$ may degrade the performance of the
application and thus increase the static energy, because the execution time is
increased~\cite{36}. We can measure the overall energy consumption of the iterative
application by measuring the energy consumption of one iteration as in EQ(\ref{eq:energy}) multiplied by
the number of iterations of that application.
\section{Optimization of both energy consumption and performance}
work with any energy model or any power values for each node (static and dynamic powers).
However, the largest energy reduction can be achieved when the energy curve has a convex form, as shown in~\cite{15,3,19}.
\section{The scaling factors selection algorithm for heterogeneous platforms}
\label{sec.optim}
\begin{figure}[t]
  \centering
  \includegraphics[scale=0.5]{fig/start_freq}
  \caption{Selecting the initial frequencies}
  \label{fig:st_freq}
\end{figure}
In this section we propose algorithm~(\ref{HSA}), which selects the vector of frequency scaling factors that gives the best trade-off between minimizing the energy consumption and maximizing the performance of a message passing synchronous iterative application executed on a heterogeneous platform.
It works online, during the execution of the iterative message passing program, and uses information gathered during the first iteration such as the computation time and the communication time of each node in that iteration. The algorithm is executed after the first iteration and returns a vector of optimal frequency scaling factors that satisfies the objective function EQ(\ref{eq:max}). The program then applies DVFS operations to change the frequencies of the CPUs according to the computed scaling factors. This algorithm is called just once during the execution of the program. Algorithm~(\ref{dvfs}) shows where and when the proposed scaling algorithm is called in the iterative MPI program.
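As a rough C sketch of how such a call could be embedded in a synchronous iterative MPI program, the fragment below shows the overall structure; the variables and the helpers \texttt{compute\_local\_work}, \texttt{exchange\_halos\_mpi}, \texttt{select\_scaling\_factor} and \texttt{set\_cpu\_frequency} are hypothetical placeholders, not the actual implementation.
\begin{verbatim}
/* Sketch only: the scaling algorithm is invoked once, after the first iteration. */
double all_tcp[NB_NODES], all_tcm[NB_NODES];
for (int k = 0; k < max_iters; k++) {
    double tcp = compute_local_work();   /* computation part of iteration k   */
    double tcm = exchange_halos_mpi();   /* synchronous communication part    */
    if (k == 0) {                        /* only after the first iteration    */
        MPI_Allgather(&tcp, 1, MPI_DOUBLE, all_tcp, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        MPI_Allgather(&tcm, 1, MPI_DOUBLE, all_tcm, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        double s = select_scaling_factor(all_tcp, all_tcm, NB_NODES, my_rank);
        set_cpu_frequency(f_max / s);    /* apply DVFS for the remaining iterations */
    }
}
\end{verbatim}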
The nodes in a heterogeneous platform have different computing powers, thus while executing message passing iterative synchronous applications, fast nodes have to wait for the slower ones to finish their computations before being able to synchronously communicate with them, as in figure (\ref{fig:heter}). These periods are called idle or slack times.

Our algorithm takes this problem into account and tries to reduce these slack times when selecting the vector of frequency scaling factors. At first, it selects initial frequency scaling factors that increase the execution times of the fast nodes and minimize the differences between their computation times and that of the slowest node. The value of the initial frequency scaling factor of each node is inversely proportional to its computation time measured during the first iteration. These initial frequency scaling factors are computed as a ratio between the computation time of the slowest node and the computation time of node $i$ as follows:
\begin{equation}
\label{eq:Scp}
  Scp_{i} = \frac{\max_{j=1,2,\dots,N}(Tcp_j)}{Tcp_i}
\end{equation}
Using the initial frequency scaling factors computed in EQ(\ref{eq:Scp}), the algorithm computes the initial frequencies for all nodes as a ratio between the
maximum frequency of node $i$ and its computation scaling factor $Scp_i$, as follows:
\begin{equation}
\label{eq:Fint}
F_{i} = \frac{Fmax_i}{Scp_i},~{i=1,2,\cdots,N}
\end{equation}
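For illustration only, the small C fragment below applies EQ(\ref{eq:Scp}) and EQ(\ref{eq:Fint}) to the measured computation times and the maximum frequencies of the nodes; the variable names are hypothetical.
\begin{verbatim}
/* Illustrative sketch: initial scaling factors and initial frequencies. */
double tcp_max = tcp[0];
for (int i = 1; i < nb_nodes; i++)       /* computation time of the slowest node */
    if (tcp[i] > tcp_max) tcp_max = tcp[i];

for (int i = 0; i < nb_nodes; i++) {
    scp[i]    = tcp_max / tcp[i];        /* EQ (eq:Scp)  */
    f_init[i] = f_max[i] / scp[i];       /* EQ (eq:Fint) */
}
\end{verbatim}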
If the computed initial frequency of a node is not available among the frequency gears of that node, it is replaced by the nearest available frequency.
In figure (\ref{fig:st_freq}), the nodes are sorted by their computing powers in ascending order and the frequencies of the faster nodes are scaled down according to the computed initial frequency scaling factors. The resulting new frequencies are colored in blue in figure (\ref{fig:st_freq}). This set of frequencies can be considered as an upper bound for the search space of the optimal frequency vector: selecting frequencies higher than this bound would not improve the performance of the application, while it would increase its overall energy consumption. Therefore the scaling factors selection algorithm starts its search from these initial frequencies and moves downwards. The algorithm iterates over all remaining frequencies, from this upper bound until all nodes reach their minimum frequencies, computes at each step the overall energy consumption and performance, and selects the optimal vector of frequency scaling factors. At each iteration the algorithm determines the slowest node according to EQ(\ref{eq:perf}) and keeps its frequency unchanged, while it lowers the frequency of all the other nodes by one gear. The new overall energy consumption and execution time are then computed according to the new scaling factors. The optimal vector of frequency scaling factors is the one that gives the highest distance according to the objective function EQ(\ref{eq:max}).
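A rough C sketch of this downward search is given below; \texttt{predicted\_time}, \texttt{predicted\_energy}, \texttt{tradeoff\_distance} and \texttt{next\_lower\_gear} are hypothetical helpers standing for EQ(\ref{eq:perf}), EQ(\ref{eq:energy}), the objective function EQ(\ref{eq:max}) and the gear table of each node. The fragment illustrates the search strategy under these assumptions and is not the exact implementation.
\begin{verbatim}
#include <float.h>
#include <string.h>

/* hypothetical helpers standing for EQ (eq:perf), EQ (eq:energy) and EQ (eq:max) */
extern double predicted_time(const double freq[], int n);
extern double predicted_energy(const double freq[], int n);
extern double tradeoff_distance(double energy, double time);
extern double next_lower_gear(int node, double freq);

/* Select the frequencies giving the best energy/performance trade-off.
   tcp[i]: first-iteration computation time, f_max[i]/f_min[i]: frequency range,
   f_init[i]: initial frequencies (the upper bound), best_freq[i]: result.      */
void select_frequencies(int n, const double tcp[], const double f_max[],
                        const double f_min[], const double f_init[],
                        double best_freq[])
{
    double freq[n];
    memcpy(freq, f_init, n * sizeof(double));    /* start from the upper bound */
    double best_dist = -DBL_MAX;

    for (;;) {
        double t = predicted_time(freq, n);      /* EQ (eq:perf)   */
        double e = predicted_energy(freq, n);    /* EQ (eq:energy) */
        double d = tradeoff_distance(e, t);      /* EQ (eq:max)    */
        if (d > best_dist) {
            best_dist = d;
            memcpy(best_freq, freq, n * sizeof(double));
        }

        /* the slowest node according to EQ (eq:perf) keeps its frequency */
        int slowest = 0;
        for (int i = 1; i < n; i++)
            if (tcp[i] * f_max[i] / freq[i] >
                tcp[slowest] * f_max[slowest] / freq[slowest])
                slowest = i;

        /* every other node is scaled down by one frequency gear */
        int changed = 0;
        for (int i = 0; i < n; i++)
            if (i != slowest && freq[i] > f_min[i]) {
                freq[i] = next_lower_gear(i, freq[i]);
                changed = 1;
            }
        if (!changed)                            /* all nodes reached their minimum */
            break;
    }
    /* the optimal scaling factors are then Sopt_i = f_max[i] / best_freq[i] */
}
\end{verbatim}
Note that the slowest node changes as the other nodes are slowed down during the search, which is why it is re-evaluated with EQ(\ref{eq:perf}) at every step.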
This algorithm has a small
execution time: for a heterogeneous cluster composed of four different types of
nodes having the characteristics presented in table~(\ref{table:platform}), it
takes \np[ms]{0.04} on average for 4 nodes and \np[ms]{0.15} on average for 144
nodes. The algorithm complexity is $O(F\cdot (N \cdot 4))$, where $F$ is the
number of iterations and $N$ is the number of computing nodes. The algorithm
needs on average from 12 to 20 iterations to select the best vector of frequency scaling factors that gives the results presented in the next section. \textbf{put the last paragraph in experiments}
\begin{algorithm}
\begin{algorithmic}[1]
% \footnotesize
\Require ~ the computation times $Tcp_i$ and the communication time $MinTcm$ measured during the first iteration, and the available frequency gears of each node $i$ (from its maximum frequency $Fmax_i$ down to its minimum frequency)
\Ensure ~ the vector of optimal frequency scaling factors $(Sopt_1, Sopt_2, \dots, Sopt_N)$
\end{algorithmic}
\caption{Scaling factors selection algorithm}
\label{HSA}
\end{algorithm}

\begin{algorithm}
\begin{algorithmic}[1]
 % \footnotesize
 \For {$k=1$ to \textit{some iterations}}
   \State Execute the computations and communications of iteration $k$.
   \If {$k=1$} \State Gather the measured computation and communication times, call algorithm~(\ref{HSA}) and apply the returned frequency scaling factors to the CPUs using DVFS. \EndIf
 \EndFor
\end{algorithmic}
\caption{DVFS algorithm}
\label{dvfs}
\end{algorithm}
\section{Experimental results}
\label{sec.expe}