X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy2.git/blobdiff_plain/a7b2d3ad0eebcf3146e7022e26bce6a992def500..86af3c2806c5bf9a5677cc2157db3c08c6282141:/Heter_paper.tex?ds=sidebyside

diff --git a/Heter_paper.tex b/Heter_paper.tex
index 052aea9..11e9475 100644
--- a/Heter_paper.tex
+++ b/Heter_paper.tex
@@ -5,6 +5,7 @@
 \usepackage[english]{babel}
 \usepackage{algpseudocode}
 \usepackage{graphicx}
+\usepackage{algorithm}
 \usepackage{subfig}
 \usepackage{amsmath}

@@ -49,9 +50,8 @@
 \newcommand{\TmaxCompOld}{\Xsub{T}{Max Comp Old}}
 \newcommand{\Tmax}{\Xsub{T}{max}}
 \newcommand{\Tnew}{\Xsub{T}{New}}
-\newcommand{\Told}{\Xsub{T}{Old}}
-
-\begin{document}
+\newcommand{\Told}{\Xsub{T}{Old}}
+\begin{document}

 \title{Energy Consumption Reduction in heterogeneous architecture using DVFS}

@@ -143,9 +143,9 @@ vector of scaling factors can be predicted using EQ (\ref{eq:perf}).
 \textit  T_\textit{new} = \max_{i=1,2,\dots,N} (TcpOld_{i} \cdot S_{i}) + MinTcm
 \end{equation}
-where $TcpOld_i$ is the computation time of processor $i$ during the first iteration and $MinT_{c}m$ is the communication time of the slowest processor from the first iteration. The model computes the maximum computation time
+where $TcpOld_i$ is the computation time of processor $i$ during the first iteration and $MinTcm$ is the communication time of the slowest processor from the first iteration. The model computes the maximum computation time
 with scaling factor from each node added to the communication time of the slowest node, it means only the
- communication time without any slack time. Therefore, we can consider the execution time of the iterative application is the execution time of one iteration as in EQ(\ref{eq:perf}) multiply by the number of iterations of the application.
+ communication time without any slack time. Therefore, we can consider that the execution time of the iterative application is equal to the execution time of one iteration as in EQ(\ref{eq:perf}) multiplied by the number of iterations of that application.

 This prediction model is based on our model for predicting the execution time of message passing distributed applications for homogeneous architectures~\cite{45}. The execution time prediction model is used in our method for optimizing both energy consumption and performance of iterative methods, which is presented in the following sections.

@@ -198,7 +198,7 @@ power consumption:
 \begin{multline}
   \label{eq:pdnew}
  {P}_\textit{dNew} = \alpha \cdot C_L \cdot V^2 \cdot F_{new} = \alpha \cdot C_L \cdot \beta^2 \cdot F_{new}^3 \\
- {} = \alpha \cdot C_L \cdot V^2 \cdot F \cdot S^{-3} = P_{dOld} \cdot S^{-3}
+ {} = \alpha \cdot C_L \cdot V^2 \cdot F_{max} \cdot S^{-3} = P_{dOld} \cdot S^{-3}
 \end{multline}
 where $ {P}_\textit{dNew}$ and $P_{dOld}$ are the dynamic power consumed with the new frequency and the maximum frequency respectively.

@@ -206,7 +206,7 @@ According to EQ(\ref{eq:pdnew}) the dynamic power is reduced by a factor of $S^{
 reducing the frequency by a factor of $S$~\cite{3}. Since the FLOPS of a CPU is proportional to the frequency of a CPU, the computation time is increased proportionally to $S$.
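+As a purely illustrative numerical example (the value of $S$ here is not taken from the experiments), reducing the frequency of a node by 20\% corresponds to a scaling factor $S=1.25$ and, according to EQ(\ref{eq:pdnew}), reduces its dynamic power to
+\[
+  P_\textit{dNew} = P_{dOld} \cdot 1.25^{-3} \approx 0.51 \cdot P_{dOld}
+\]
+while its computation time only grows by the factor $1.25$.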
The new dynamic energy is the dynamic power multiplied by the new time of computation and is given by the following equation: \begin{equation} \label{eq:Edyn} - E_\textit{dNew} = P_{dOld} \cdot S^{-3} \cdot (T_{cp} \cdot S)= S^{-2}\cdot P_{dOld} \cdot Tcp + E_\textit{dNew} = P_{dOld} \cdot S^{-3} \cdot (Tcp \cdot S)= S^{-2}\cdot P_{dOld} \cdot Tcp \end{equation} The static power is related to the power leakage of the CPU and is consumed during computation and even when idle. As in~\cite{3,46}, we assume that the static power of a processor is constant during idle and computation periods, and for all its available frequencies. The static energy is the static power multiplied by the execution time of the program. According to the execution time model in EQ(\ref{eq:perf}), @@ -230,8 +230,8 @@ Reducing the frequencies of the processors according to the vector of scaling factors $(S_1, S_2,\dots, S_N)$ may degrade the performance of the application and thus, increase the static energy because the execution time is increased~\cite{36}. We can measure the overall energy consumption for the iterative -application by measuring the energy consumption from one iteration as in EQ(\ref{eq:energy}) multiply by -the number of iterations of the iterative application. +application by measuring the energy consumption for one iteration as in EQ(\ref{eq:energy}) multiplied by +the number of iterations of that application. \section{Optimization of both energy consumption and performance} @@ -320,50 +320,46 @@ Then we can select the optimal set of scaling factors that satisfies EQ~(\ref{eq work with any energy model or any power values for each node (static and dynamic powers). However, the most energy reduction gain can be achieved when the energy curve has a convex form as shown in~\cite{15,3,19}. -\section{The heterogeneous scaling algorithm } +\section{The scaling factors selection algorithm for heterogeneous platforms } \label{sec.optim} -In this section we are proposed a heterogeneous scaling algorithm, -(figure~\ref{HSA}), that selects the optimal vector of the frequency scaling factors from each -node. The algorithm is numerates the suitable range of available frequency scaling -factors for each node in a heterogeneous cluster, returns a vector of optimal -frequency scaling factors for all node define as $Sopt_1,Sopt_2,\dots,Sopt_N$. Using heterogeneous cluster -has different computing powers is produces different workloads for each node. Therefore, the fastest nodes waiting at the -synchronous barrier for the slowest nodes to finish there work as in figure -(\ref{fig:heter}). Our algorithm is takes into account these imbalanced workloads -when is starts to search for selecting the best vector of the frequency scaling factors. So, the -algorithm is selects the initial frequencies values for each node proportional -to the times of computations that gathered from the first iteration. As an -example in figure (\ref{fig:st_freq}), the algorithm don't tests the first -frequencies of the computing nodes until it is converge their frequencies to the -frequency of the slowest node. If the algorithm is starts to test changing the -frequency of the slowest node from the first gear, we are loosing the performance and -then the best trade-off relation (the maximum distance) be not reachable. This case will be similar -to a homogeneous cluster when all nodes scales their frequencies together from -the first gear. 
Therefore, there is a small distance between the energy and
-the performance curves in a homogeneous cluster compare to heterogeneous one, for example see the figure(\ref{fig:r1}). Then the
-algorithm starts to search for the optimal vector of the frequency scaling factors from the selected initial
-frequencies until all node reach their minimum frequencies.
-\begin{figure}[t]
-  \centering
-  \includegraphics[scale=0.5]{fig/start_freq}
-  \caption{Selecting the initial frequencies}
-  \label{fig:st_freq}
-\end{figure}
+In this section we propose algorithm~(\ref{HSA}) which selects the frequency scaling factors vector that gives the best trade-off between minimizing the energy consumption and maximizing the performance of a message passing synchronous iterative application executed on a heterogeneous platform.
+It works online during the execution time of the iterative message passing program. It uses information gathered during the first iteration, such as the computation time and the communication time of each node during that iteration. The algorithm is executed after the first iteration and returns a vector of optimal frequency scaling factors that satisfies the objective function EQ(\ref{eq:max}). The program applies DVFS operations to change the frequencies of the CPUs according to the computed scaling factors. This algorithm is called just once during the execution of the program. Algorithm~(\ref{dvfs}) shows where and when the proposed scaling algorithm is called in the iterative MPI program.

-To compute the initial frequencies in each node, the algorithm firstly needs to compute the computation scaling factors $Scp_i$ of the node $i$. Each one of these factors is represents a ratio between the computation time of the slowest node and the computation time of the node $i$ as follow:
+The nodes in a heterogeneous platform have different computing powers. Thus, while executing message passing iterative synchronous applications, fast nodes have to wait for the slower ones to finish their computations before being able to synchronously communicate with them, as in figure (\ref{fig:heter}). These periods are called idle or slack times.
+Our algorithm takes this problem into account and tries to reduce these slack times when selecting the frequency scaling factors vector. First, it selects initial frequency scaling factors that increase the execution times of the fast nodes and minimize the differences between the computation times of the fast and slow nodes. The value of the initial frequency scaling factor of each node is inversely proportional to its computation time gathered from the first iteration.
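To make the selection procedure described in this section more concrete, the following Python sketch follows the textual description given here. It is only an illustration under simplifying assumptions, not the authors' implementation: the variable names are illustrative, and the exact forms of the one-iteration energy and of the distance of EQ(\ref{eq:max}) are assumed (sum of the scaled dynamic energies plus the static powers multiplied by the predicted execution time, and normalized performance minus normalized energy, respectively), since they are defined elsewhere in the paper.

# Illustrative sketch (not the authors' implementation); the energy and
# distance formulas below are assumptions based on the surrounding text.
def select_scaling_factors(tcp, min_tcm, gears, pd, ps):
    # tcp[i]: computation time of node i during the first iteration (at Fmax)
    # min_tcm: communication time of the slowest node in the first iteration
    # gears[i]: available frequencies of node i, sorted in descending order
    # pd[i], ps[i]: dynamic and static power of node i
    n = len(tcp)
    fmax = [g[0] for g in gears]

    def scaling(freq):
        return [fmax[i] / freq[i] for i in range(n)]

    def predicted_time(s):          # EQ (eq:perf)
        return max(tcp[i] * s[i] for i in range(n)) + min_tcm

    def energy(s):                  # assumed overall energy of one iteration
        t = predicted_time(s)
        return sum(s[i] ** -2 * pd[i] * tcp[i] + ps[i] * t for i in range(n))

    # Initial scaling factors (EQ (eq:Scp)) and initial frequencies
    # (EQ (eq:Fint)), snapped to the nearest available gear of each node.
    scp = [max(tcp) / tcp[i] for i in range(n)]
    freq = []
    for i, g in enumerate(gears):
        target = fmax[i] / scp[i]
        freq.append(min(g, key=lambda f: abs(f - target)))

    t_ref, e_ref = predicted_time(scaling(freq)), energy(scaling(freq))
    best, best_dist = scaling(freq), 0.0

    while True:
        s = scaling(freq)
        # Keep the slowest node (the one that fixes the predicted time)
        # unchanged and lower every other node by one gear, when possible.
        slowest = max(range(n), key=lambda i: tcp[i] * s[i])
        moved = False
        for i in range(n):
            idx = gears[i].index(freq[i])
            if i != slowest and idx + 1 < len(gears[i]):
                freq[i] = gears[i][idx + 1]
                moved = True
        if not moved:               # every other node reached its minimum gear
            return best
        s = scaling(freq)
        # Assumed distance of EQ (eq:max): normalized performance - normalized energy.
        dist = t_ref / predicted_time(s) - energy(s) / e_ref
        if dist > best_dist:
            best, best_dist = list(s), dist

The initial scaling factors and frequencies used above are defined formally below, and the complete pseudocode is given in algorithm~(\ref{HSA}).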
These initial frequency scaling factors are computed as a ratio between the computation time of the slowest node and the computation time of node $i$ as follows:
 \begin{equation}
   \label{eq:Scp}
  Scp_{i} = \frac{\max_{i=1,2,\dots,N}(Tcp_i)}{Tcp_i}
 \end{equation}
-Depending on the initial computation scaling factors EQ(\ref{eq:Scp}), the algorithm computes the initial frequencies for all nodes as a ratio between the
-maximum frequency of node $i$  and the computation scaling factor $Scp_i$ as follow:
+Using the initial frequency scaling factors computed in EQ(\ref{eq:Scp}), the algorithm computes the initial frequencies for all nodes as a ratio between the
+maximum frequency of node $i$ and the computation scaling factor $Scp_i$ as follows:
 \begin{equation}
   \label{eq:Fint}
  F_{i} = \frac{Fmax_i}{Scp_i},~{i=1,2,\cdots,N}
 \end{equation}
-\begin{figure}[tp]
+If the computed initial frequency of a node is not available in its gears, it is replaced by the nearest available frequency.
+In figure (\ref{fig:st_freq}), the nodes are sorted by their computing powers in ascending order and the frequencies of the faster nodes are scaled down according to the computed initial frequency scaling factors. The resulting new frequencies are colored in blue in figure (\ref{fig:st_freq}). This set of frequencies can be considered as an upper bound for the search space of the optimal vector of frequencies, because selecting frequencies higher than this upper bound will not improve the performance of the application while it will increase its overall energy consumption. Therefore, the scaling factors selection algorithm starts its search from these initial frequencies and takes a downward search direction. The algorithm iterates over all remaining frequencies, from this upper bound down until all nodes reach their minimum frequencies, computing the overall energy consumption and performance at each step in order to select the optimal frequency scaling factors vector. At each iteration the algorithm determines the slowest node according to EQ(\ref{eq:perf}) and keeps its frequency unchanged, while it lowers the frequency of all other nodes by one gear. The new overall energy consumption and execution time are computed according to the new scaling factors. The optimal set of frequency scaling factors is the set that gives the highest distance according to the objective function EQ(\ref{eq:max}).
+
+
+
+
+This algorithm has a small
+execution time: for a heterogeneous cluster composed of four different types of
+nodes having the characteristics presented in table~(\ref{table:platform}), it
+takes \np[ms]{0.04} on average for 4 nodes and \np[ms]{0.15} on average for 144
+nodes. The algorithm complexity is $O(F \cdot (N \cdot 4))$, where $F$ is the
+number of iterations and $N$ is the number of computing nodes. The algorithm
+needs on average from 12 to 20 iterations to select the best vector of frequency scaling factors that gives the results of the next section. \textbf{put the last paragraph in experiments}
+
+
+
+
+
+
+\begin{algorithm}
 \begin{algorithmic}[1]
   % \footnotesize
   \Require ~
@@ -375,7 +371,7 @@ maximum frequency of node $i$ and the computation scaling factor $Scp_i$ as fol
   \item[$Ps_i$] array of the static powers for all nodes.
   \item[$Fdiff_i$] array of the difference between two successive frequencies for all nodes.
 \end{description}
-  \Ensure  $Sopt_1, \dots, Sopt_N$ is a set of optimal scaling factors
+  \Ensure  $Sopt_1, Sopt_2, \dots, Sopt_N$ is a vector of optimal scaling factors

  \State $ Scp_i \gets \frac{\max_{i=1,2,\dots,N}(Tcp_i)}{Tcp_i} $
  \State $F_{i} \gets \frac{Fmax_i}{Scp_i},~{i=1,2,\cdots,N}$
@@ -406,24 +402,9 @@ maximum frequency of node $i$ and the computation scaling factor $Scp_i$ as fol
 \end{algorithmic}
  \caption{Heterogeneous scaling algorithm}
  \label{HSA}
-\end{figure}
-When the initial frequencies are computed, the algorithm numerates all available
-frequency scaling factors starting from the initial frequencies until all nodes reach their
-minimum frequencies.  At each iteration the algorithm determine the slowest node according to EQ(\ref{eq:perf}).
-It is remains the frequency of the slowest node without change, while it is scale down the frequency of the other
-nodes. This is improved the execution time degradation and energy saving in the same time.
-The proposed algorithm works online during the execution time of the iterative MPI program. It is
-returns a vector of optimal frequency scaling factors depending on the
-objective function EQ(\ref{eq:max}). The program changes the new frequencies of
-the CPUs according to the computed scaling factors. This algorithm has a small
-execution time: for a heterogeneous cluster composed of four different types of
-nodes having the characteristics presented in table~(\ref{table:platform}), it is
-takes \np[ms]{0.04} on average for 4 nodes and \np[ms]{0.15} on average for 144
-nodes. The algorithm complexity is $O(F\cdot (N \cdot4) )$, where $F$ is the
-number of iterations and $N$ is the number of computing nodes. The algorithm
-needs on average from 12 to 20 iterations to selects the best vector of frequency scaling factors that give the results of the next section. It is called just once during the execution of the program. The DVFS algorithm in figure~(\ref{dvfs}) shows where
-and when the proposed scaling algorithm is called in the iterative MPI program.
-\begin{figure}[tp]
+\end{algorithm}
+
+\begin{algorithm}
 \begin{algorithmic}[1]
  % \footnotesize
  \For {$k=1$ to \textit{some iterations}}
@@ -441,7 +422,7 @@ and when the proposed scaling algorithm is called in the iterative MPI program.
 \end{algorithmic}
  \caption{DVFS algorithm}
  \label{dvfs}
-\end{figure}
+\end{algorithm}

 \section{Experimental results}
 \label{sec.expe}
@@ -500,7 +481,7 @@ The proposed algorithm was applied to seven MPI programs of the NAS benchmarks (
 \cite{44}, which were run with three classes (A, B and C).
 In this experiments we are interested to run the class C, the biggest class compared to A and B, on different number of nodes, from 4 to 128 or 144 nodes according to the type of the iterative MPI program.  Depending on the proposed energy consumption model EQ(\ref{eq:energy}),
-  we are measure the energy consumption for all NAS MPI programs. The dynamic and static power values are used under the same assumption used by  \cite{45,3}. We are used a percentage of 80\% for dynamic power  and 20\% for static of the total power consumption of a CPU. The heterogeneous nodes in table (\ref{table:platform}) have different simulated computing power (FLOPS), ranked from the node of type 1 with smaller computing power (FLOPS) to the highest computing power (FLOPS) for node of type 4. Therefore, the power values are used proportionally increased from nodes of type 1 to nodes of type 4 that with highest computing power.
Then, we are used an assumption that the power consumption is increased linearly when the computing power (FLOPS) of the processor is increased, see \cite{48}.
+  we measure the energy consumption of all the NAS MPI programs. The dynamic and static power values are set according to the same assumption used in \cite{45,3}: the dynamic power represents 80\% and the static power 20\% of the total power consumption of a CPU. The heterogeneous nodes in table (\ref{table:platform}) have different simulated computing powers (FLOPS), ranked from the nodes of type 1, which have the smallest computing power, to the nodes of type 4, which have the highest computing power. Therefore, the power values are increased proportionally from the nodes of type 1 to the nodes of type 4. Moreover, we assume that the power consumption increases linearly with the computing power (FLOPS) of the processor, as in \cite{48}.

 \begin{table}[htb]
   \caption{Running NAS benchmarks on 4 nodes }