X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy.git/blobdiff_plain/84592eb087b0638fc8464a8b0332a4707f6c4c96..068765674898390f6a849e6426474b76e4ebf291:/paper.tex

diff --git a/paper.tex b/paper.tex
index c38368c..bfa1bc1 100644
--- a/paper.tex
+++ b/paper.tex
@@ -1,5 +1,4 @@
-\documentclass[12pt]{article}
-%\documentclass[12pt,twocolumn]{article}
+\documentclass[conference]{IEEEtran}

 \usepackage[T1]{fontenc}
 \usepackage[utf8]{inputenc}
@@ -15,28 +14,42 @@
 % \usepackage{secdot}
 %\usepackage[font={footnotesize,bt}]{caption}
 %\usepackage[font=scriptsize,labelfont=bf]{caption}
-\usepackage{lmodern}
-\usepackage{todonotes}
-\newcommand{\AG}[2][inline]{\todo[color=green!50,#1]{\sffamily\small\textbf{AG:} #2}}
+\usepackage[textsize=footnotesize]{todonotes}
+\newcommand{\AG}[2][inline]{\todo[color=green!50,#1]{\sffamily\textbf{AG:} #2}}

 \begin{document}

 \title{Optimal Dynamic Frequency Scaling for Energy-Performance of Parallel MPI Programs}
-\author{A. Badri \and J.-C. Charr \and R. Couturier \and A. Giersch}
+
+\author{%
+  \IEEEauthorblockN{%
+    Ahmed Badri,
+    Jean-Claude Charr,
+    Raphaël Couturier and
+    Arnaud Giersch
+  }
+  \IEEEauthorblockA{%
+    FEMTO-ST Institute\\
+    University of Franche-Comté
+  }
+}
+
 \maketitle
-\AG{``Optimal'' is a bit pretentious in the title}
+\AG{``Optimal'' is a bit pretentious in the title.\\
+  Complete affiliation, add an email address, etc.}

 \begin{abstract}
-  \AG{FIXME}
+  \AG{complete the abstract\dots}
 \end{abstract}

 \section{Introduction}
+\label{sec.intro}

The need for computing power is still increasing and it is not expected to slow
down in the coming years. To satisfy this demand, researchers and supercomputer
manufacturers have been regularly increasing the number of computing cores in
-supercomputers (for example in November 2013, according to the top 500
+supercomputers (for example in November 2013, according to the TOP500
list~\cite{43}, the Tianhe-2 was the fastest supercomputer. It has more than 3
million cores and delivers more than 33 Pflop/s while consuming 17808 kW).
This large increase in the number of computing cores has led to large energy
@@ -71,10 +84,22 @@ this algorithm to seven MPI benchmarks. These MPI programs are the NAS parallel
benchmarks (NPB v3.3) developed by NASA~\cite{44}. Our experiments are executed
using the simulator SimGrid/SMPI v3.10~\cite{Casanova:2008:SGF:1397760.1398183} over a homogeneous
distributed-memory architecture. Furthermore, we compare the
-proposed algorithm with Rauber's methods. The comparison's results show that our
+proposed algorithm with Rauber's methods.
+\AG{Add citation for Rauber's methods. Moreover, Rauber was not alone to do this work (use ``Rauber et al.'', or ``Rauber and Gudula'', or \dots)}
+The comparison results show that our
 algorithm gives a better energy-time trade-off.
+%
+\AG{Correctly reword the following}%
+In Section~\ref{sec.relwork} we present related works from other
+authors. Then, in Sections~\ref{sec.ptasks} and~\ref{sec.energy}, we
+introduce our execution and energy models. [\dots] Finally, we conclude in
+Section~\ref{sec.concl}.

 \section{Related Works}
+\label{sec.relwork}
+
+\AG{Consider introducing the models (sec.~\ref{sec.ptasks},
+  maybe~\ref{sec.energy}) before related works}

In this section, some heuristics to compute the scaling factor are
presented and classified into two parts: offline and online methods.
@@ -131,6 +156,7 @@ paper.
However, the primary contributions of this paper are:
\end{enumerate}
\section{Parallel Tasks Execution on Homogeneous Platform}
+\label{sec.ptasks}

A homogeneous cluster consists of identical nodes in terms of the hardware and
the software. Each node has its own memory and at least one processor which can
@@ -138,13 +164,13 @@ be a multi-core. The nodes are connected via a high bandwidth network. Tasks
executed on this model can be either synchronous or asynchronous. In this paper,
we consider the execution of synchronous tasks on a distributed homogeneous
platform. These tasks can exchange data via synchronous message passing.
-\begin{figure}[h]
+\begin{figure*}[t]
  \centering
  \subfloat[Synch. Imbalanced Communications]{\includegraphics[scale=0.67]{synch_tasks}\label{fig:h1}}
  \subfloat[Synch. Imbalanced Computations]{\includegraphics[scale=0.67]{compt}\label{fig:h2}}
  \caption{Parallel Tasks on Homogeneous Platform}
  \label{fig:homo}
-\end{figure}
+\end{figure*}
Therefore, the execution time of a task consists of the computation time and the
communication time. Moreover, the synchronous communications between tasks can
lead to idle time while tasks wait at the synchronization point for other tasks to
@@ -161,6 +187,7 @@ of the program is the execution time of the slowest task as :
where $T_i$ is the execution time of process $i$.

\section{Energy Model for Homogeneous Platform}
+\label{sec.energy}

The energy consumed by the processor consists of two power metrics: the
dynamic and the static power. This general power formulation is used by many
@@ -245,6 +272,7 @@ scaling factor as in EQ~(\ref{eq:sopt}).
\end{equation}

\section{Performance Evaluation of MPI Programs}
+\label{sec.mpip}

The performance (execution time) of a parallel MPI application depends on
the time of its slowest task, as shown in figure~(\ref{fig:homo}). Normally, the
@@ -284,6 +312,7 @@ method as we will show in the coming sections. In the next section we make an
investigation of EQ~(\ref{eq:tnew}).

\section{Performance Prediction Verification}
+\label{sec.verif}

In this section, we evaluate the precision of our performance prediction method
on the NAS benchmarks. We use EQ~(\ref{eq:tnew}), which predicts the execution
@@ -293,17 +322,15 @@ with all available scaling factors on 8 or 9 nodes to produce real execution
time values. These scaling factors are computed by dividing the maximum
frequency by the new one, see EQ~(\ref{eq:s}). In all tests, we use the simulator
SimGrid/SMPI v3.10 to run the NAS programs.
-\AG{Fig.~\ref{fig:pred} is hard to read when printed in black and white,
-  especially the ``Normalize Real Perf.'' curve.}
-\begin{figure}[width=\textwidth,height=\textheight,keepaspectratio]
+\begin{figure*}[t]
  \centering
-  \includegraphics[scale=0.60]{cg_per.eps}
-  \includegraphics[scale=0.60]{mg_pre.eps}
-  \includegraphics[scale=0.60]{bt_pre.eps}
-  \includegraphics[scale=0.60]{lu_pre.eps}
+  \includegraphics[width=.4\textwidth]{cg_per.eps}\qquad%
+  \includegraphics[width=.4\textwidth]{mg_pre.eps}
+  \includegraphics[width=.4\textwidth]{bt_pre.eps}\qquad%
+  \includegraphics[width=.4\textwidth]{lu_pre.eps}
  \caption{Fitting Predicted to Real Execution Time}
  \label{fig:pred}
-\end{figure}
+\end{figure*}
%see Figure~\ref{fig:pred}
In our cluster there are 18 available frequency states for each processor, from
2.5 GHz down to 800 MHz, with a 100 MHz difference between two successive
@@ -316,6 +343,8 @@ example, we are present the execution times of the NAS benchmarks as in the
figure~(\ref{fig:pred}).
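To make this verification step easier to reproduce, the short C sketch below enumerates the same 18 frequency states (2.5 GHz down to 800 MHz in 100 MHz steps), derives the scaling factor $S$ of each state as in EQ~(\ref{eq:s}), and applies a prediction of the assumed form $T_{new} = T_{comp} \cdot S + T_{comm}$, in which only the computation part is slowed down by $S$; EQ~(\ref{eq:tnew}) gives the exact expression used in the paper. The timing values are placeholders chosen for illustration, not measurements from the NAS benchmarks, and this sketch is not the script used for the experiments.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
    /* 18 frequency states: 2.5 GHz down to 800 MHz, 100 MHz apart. */
    const double f_max  = 2.5e9;
    const double f_min  = 0.8e9;
    const double f_step = 0.1e9;
    /* Placeholder times measured at the maximum frequency (seconds). */
    const double t_comp = 10.0;   /* computation time of the slowest task */
    const double t_comm = 2.0;    /* communication time                   */

    for (double f_new = f_max; f_new >= f_min - 1e-6; f_new -= f_step) {
        double s     = f_max / f_new;        /* scaling factor, EQ (s)   */
        double t_new = t_comp * s + t_comm;  /* predicted execution time */
        printf("F = %.1f GHz  S = %.2f  T_new = %.2f s\n",
               f_new / 1e9, s, t_new);
    }
    return 0;
}
\end{verbatim}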
\section{Performance to Energy Competition}
+\label{sec.compet}
+
This section demonstrates our approach for choosing the optimal scaling
factor. This factor gives the maximum energy reduction while taking into account
both the computation and the communication times. The relation
@@ -326,15 +355,15 @@ is not straightforward. Moreover, they are not measured using the same metric.
To solve this problem, we normalize the energy by calculating the ratio
between the energy consumed with a scaled frequency and the energy consumed
without frequency scaling:
-\begin{equation}
+\begin{multline}
  \label{eq:enorm}
-  E_\textit{Norm} = \frac{E_{Reduced}}{E_{Original}}
-  = \frac{ P_{dyn} \cdot S_i^{-2} \cdot
+  E_\textit{Norm} = \frac{E_{Reduced}}{E_{Original}}\\
+  {} = \frac{ P_{dyn} \cdot S_i^{-2} \cdot
               \left( T_1 + \sum_{i=2}^{N}\frac{T_i^3}{T_1^2}\right) +
               P_{static} \cdot T_1 \cdot S_i \cdot N  }{
              P_{dyn} \cdot \left(T_1+\sum_{i=2}^{N}\frac{T_i^3}{T_1^2}\right) +
              P_{static} \cdot T_1 \cdot N }
-\end{equation}
+\end{multline}
\AG{Use \texttt{\textbackslash{}text\{xxx\}} or
  \texttt{\textbackslash{}textit\{xxx\}} for all subscripted words in equations
  (e.g. \mbox{\texttt{E\_\{\textbackslash{}text\{Norm\}\}}}).
@@ -370,13 +399,16 @@ performance as follows :
  = \frac{T_{Old}}{T_{\textit{Max Comp Old}} \cdot S +
           T_{\textit{Max Comm Old}}}
\end{equation}
-\begin{figure}
+\begin{figure*}
  \centering
-  \subfloat[Converted Relation.]{\includegraphics[scale=0.70]{file.eps}\label{fig:r1}}
-  \subfloat[Real Relation.]{\includegraphics[scale=0.70]{file3.eps}\label{fig:r2}}
+  \subfloat[Converted Relation.]{%
+    \includegraphics[width=.33\textwidth]{file.eps}\label{fig:r1}}%
+  \qquad%
+  \subfloat[Real Relation.]{%
+    \includegraphics[width=.33\textwidth]{file3.eps}\label{fig:r2}}
  \label{fig:rel}
  \caption{The Energy and Performance Relation}
-\end{figure}
+\end{figure*}
Then, we can model our objective function as finding the maximum distance
between the energy curve EQ~(\ref{eq:enorm}) and the inverse of the performance
curve EQ~(\ref{eq:pnorm_en}) over all available scaling factors. This represents
@@ -397,13 +429,14 @@ objective of this paper and we choose Rauber's model as an example with two
reasons mentioned before.

\section{Optimal Scaling Factor for Performance and Energy}
+\label{sec.optim}

In the previous section, we described the objective function that satisfies our
goal of discovering the optimal scaling factor for both performance and energy at
the same time. Therefore, we develop an energy to performance scaling algorithm
(EPSA). This algorithm is simple and directly computes the optimal
scaling factor for both energy and performance at the same time.
-\begin{algorithm}[t]
+\begin{algorithm}[tp]
  \caption{EPSA}
  \label{EPSA}
  \begin{algorithmic}[1]
@@ -437,16 +470,16 @@ for each task from the first iteration only. When these times are measured, the
MPI program calls the EPSA algorithm to choose the new frequency using the
optimal scaling factor. Then the program sets the new frequency on the system.
The algorithm is called only once during the execution of the
-program. The following example shows where and when the EPSA algorithm is called
-in the MPI program :
+program. The DVFS algorithm~(\ref{dvfs}) shows where and when the EPSA algorithm is called
+in the MPI program.
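Before presenting that algorithm, the following C sketch illustrates, under our assumptions, the selection loop that EPSA (Algorithm~\ref{EPSA}) is built on: for every available scaling factor it evaluates the normalized energy of EQ~(\ref{eq:enorm}) and the inverse of the normalized performance of EQ~(\ref{eq:pnorm_en}), and keeps the factor with the maximum distance between the two. The number of nodes, the power values and the task times are placeholders, the helper names are ours, and the last loop assumes that EQ~(\ref{eq:fi}) scales the frequency of each node proportionally to its measured time. It is an illustration of the method, not the code used in the experiments.

\begin{verbatim}
#include <stdio.h>

#define N 8                              /* number of nodes (placeholder) */

/* Normalized energy, EQ (enorm): t[0] is T_1, the time of the slowest task. */
static double e_norm(double s, double p_dyn, double p_stat, const double t[N])
{
    double sum = 0.0;
    for (int i = 1; i < N; i++)
        sum += (t[i] * t[i] * t[i]) / (t[0] * t[0]);
    double reduced  = p_dyn / (s * s) * (t[0] + sum) + p_stat * t[0] * s * N;
    double original = p_dyn * (t[0] + sum) + p_stat * t[0] * N;
    return reduced / original;
}

/* Inverse of the normalized performance, EQ (pnorm_en). */
static double p_norm_inv(double s, double t_old, double t_comp, double t_comm)
{
    return t_old / (t_comp * s + t_comm);
}

int main(void)
{
    const double f_max = 2.5e9, f_min = 0.8e9, f_step = 0.1e9;
    const double p_dyn = 20.0, p_stat = 4.0;      /* watts, placeholders  */
    const double t[N]  = { 10.0, 9.5, 9.0, 8.5, 9.2, 8.8, 9.7, 9.1 };
    const double t_comp = 8.0, t_comm = 2.0, t_old = 10.0;

    /* EPSA selection loop: keep the scaling factor with the maximum
     * distance between the performance curve and the energy curve.       */
    double best_s = 1.0, best_dist = 0.0;
    for (double f = f_max; f >= f_min - 1e-6; f -= f_step) {
        double s    = f_max / f;
        double dist = p_norm_inv(s, t_old, t_comp, t_comm)
                    - e_norm(s, p_dyn, p_stat, t);
        if (dist > best_dist) { best_dist = dist; best_s = s; }
    }
    printf("optimal scaling factor: %.2f\n", best_s);

    /* New frequency of node i, assumed form of EQ (fi):
     * F_i = F_max * T_i / (S_opt * T_1), i.e. proportional to T_i.       */
    for (int i = 0; i < N; i++)
        printf("F_%d = %.2f GHz\n", i + 1,
               f_max * t[i] / (best_s * t[0]) / 1e9);
    return 0;
}
\end{verbatim}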
%\begin{minipage}{\textwidth}
%\AG{Use the same format as for Algorithm~\ref{EPSA}}
-\begin{algorithm}[d]
+\begin{algorithm}[tp]
  \caption{DVFS}
  \label{dvfs}
  \begin{algorithmic}
-    \For {$J:=1$ to $Some_iterations Do$}
+    \For {$J:=1$ to $Some-Iterations \; $}
    \State -Computations Section.
    \State -Communications Section.
    \If {$(J==1)$}
@@ -459,7 +492,7 @@ in the MPI program :
    \EndFor
  \end{algorithmic}
\end{algorithm}
-\clearpage
+
After obtaining the optimal scaling factor from the EPSA algorithm, the program
calculates the new frequency $F_i$ for each task proportionally to its time
value $T_i$. By substituting EQ~(\ref{eq:s}) in EQ~(\ref{eq:si}), we
@@ -474,6 +507,7 @@ have imbalanced workloads. Then EQ~(\ref{eq:fi}) works in adaptive way to
change the frequency according to the nodes' workloads.

\section{Experimental Results}
+\label{sec.expe}

The proposed EPSA algorithm was applied to seven MPI programs of the NAS
benchmarks (EP, CG, MG, FT, BT, LU and SP). We work on three classes (A, B and
@@ -488,16 +522,15 @@ detailed characteristics of our platform file are shown in the
table~(\ref{table:platform}). Each node in the cluster has 18 frequency values,
from 2.5 GHz to 800 MHz, with a 100 MHz difference between two successive
frequencies.
-\begin{table}[ht]
+\begin{table}[htb]
  \caption{Platform File Parameters}
  % title of Table
  \centering
-  \AG{Use e.g. $5\times 10^{-7}$ instead of 5E-7}
  \begin{tabular}{ | l | l | l |l | l |l |l | p{2cm} |}
    \hline
    Max & Min & Backbone & Backbone&Link &Link& Sharing \\
    Freq. & Freq. & Bandwidth & Latency & Bandwidth& Latency&Policy \\ \hline
-    2.5 &800 & 2.25 GBps &5E-7 s & 1 GBps & 5E-5 s&Full \\
+    2.5 &800 & 2.25 GBps &$5\times 10^{-7}$ s& 1 GBps & $5\times 10^{-5}$ s&Full \\
    GHz& MHz& & & & &Duplex \\\hline
  \end{tabular}
  \label{table:platform}
@@ -521,18 +554,18 @@ programs. In table~(\ref{table:factors results}), we record all optimal scaling
factor results for each program on class C. These factors give the maximum
energy saving percentage and the minimum performance degradation percentage at the
same time over all available scales.
-\begin{figure}[width=\textwidth,height=\textheight,keepaspectratio]
+\begin{figure*}[t]
  \centering
-  \includegraphics[scale=0.47]{ep.eps}
-  \includegraphics[scale=0.47]{cg.eps}
-  \includegraphics[scale=0.47]{sp.eps}
-  \includegraphics[scale=0.47]{lu.eps}
-  \includegraphics[scale=0.47]{bt.eps}
-  \includegraphics[scale=0.47]{ft.eps}
+  \includegraphics[width=.33\textwidth]{ep.eps}\hfill%
+  \includegraphics[width=.33\textwidth]{cg.eps}\hfill%
+  \includegraphics[width=.33\textwidth]{sp.eps}
+  \includegraphics[width=.33\textwidth]{lu.eps}\hfill%
+  \includegraphics[width=.33\textwidth]{bt.eps}\hfill%
+  \includegraphics[width=.33\textwidth]{ft.eps}
  \caption{Optimal scaling factors for The NAS MPI Programs}
  \label{fig:nas}
-\end{figure}
-\begin{table}[width=\textwidth,height=\textheight,keepaspectratio]
+\end{figure*}
+\begin{table}[htb]
  \caption{Optimal Scaling Factors Results}
  % title of Table
  \centering
  \begin{tabular}{| l | l | l | l | l |}
  \hline Program  & Optimal        & Energy    & Performance    & Energy-Perf. \\
         Name     & Scaling Factor & Saving \% & Degradation \% & Distance     \\ \hline
-  CG       & 1.56 &39.23 & 14.88   & 24.35\\ \hline
-  MG       & 1.47 &34.97&21.7& 13.27 \\ \hline
+  CG       & 1.56 &39.23&14.88 &24.35\\ \hline
+  MG       & 1.47 &34.97&21.70 &13.27 \\ \hline
   EP       & 1.04 &22.14&20.73 &1.41\\ \hline
   LU       & 1.38 &35.83&22.49 &13.34\\ \hline
   BT       & 1.31 &29.60&21.28 &8.32\\ \hline
-  SP       & 1.38 &33.48 &21.36&12.12\\ \hline
-  FT       & 1.47 &34.72 &19.00&15.72\\ \hline
+  SP       & 1.38 &33.48&21.36 &12.12\\ \hline
+  FT       & 1.47 &34.72&19.00 &15.72\\ \hline
   \end{tabular}
   \label{table:factors results}
   % is used to refer this table in the text
@@ -564,6 +597,7 @@ cases. In EP there are no communications inside the iterations. This make our
EPSA select smaller scaling factor values (inducing smaller energy savings).

\section{Comparing Results}
+\label{sec.compare}

In this section, we compare the results of our EPSA algorithm with Rauber's
methods~\cite{3}. He considered two scenarios: the first is to reduce energy to optimal
@@ -575,7 +609,7 @@ scenario as $Rauber_{E-P}$.
The comparison is made in tables~(\ref{table:compare Class A},\ref{table:compare Class B},\ref{table:compare Class C}). These
tables show the results of our EPSA and Rauber's two scenarios for all the NAS
benchmark programs for classes A, B and C.
-\begin{table}[ht]
+\begin{table*}[p]
  \caption{Comparing Results for The NAS Class A}
  % title of Table
  \centering
@@ -615,8 +649,8 @@ benchmarks programs for classes A,B and C.
  \end{tabular}
  \label{table:compare Class A}
  % is used to refer this table in the text
-\end{table}
-\begin{table}[ht]
+\end{table*}
+\begin{table*}[p]
  \caption{Comparing Results for The NAS Class B}
  % title of Table
  \centering
@@ -656,9 +690,9 @@ benchmarks programs for classes A,B and C.
  \end{tabular}
  \label{table:compare Class B}
  % is used to refer this table in the text
-\end{table}
+\end{table*}

-\begin{table}[ht]
+\begin{table*}[p]
  \caption{Comparing Results for The NAS Class C}
  % title of Table
  \centering
@@ -698,7 +732,7 @@ benchmarks programs for classes A,B and C.
  \end{tabular}
  \label{table:compare Class C}
  % is used to refer this table in the text
-\end{table}
+\end{table*}
As shown in these tables, our scaling factor is not optimal for energy saving
alone, as Rauber's scaling factor EQ~(\ref{eq:sopt}) is, but it is optimal for both
the energy and the performance simultaneously. Our EPSA optimal scaling factors
@@ -720,20 +754,28 @@ concatenating with less performance degradation and this the objective of this
paper. The negative trade-offs refer to improving the energy saving (or maybe
the performance) while degrading the performance (or maybe the energy) more
than in the first case.
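To make the trade-off metric concrete: in table~(\ref{table:factors results}), the energy-performance distance reported in the last column is the energy saving percentage minus the performance degradation percentage, as a worked instance for CG shows (the other rows follow the same arithmetic):
\[
  \textit{Distance}_{CG} = 39.23\% - 14.88\% = 24.35\%
\]
A larger distance thus corresponds to a larger energy saving obtained for a smaller performance degradation.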
-\begin{figure}[width=\textwidth,height=\textheight,keepaspectratio] +\begin{figure}[t] \centering - \includegraphics[scale=0.60]{compare_class_A.pdf} - \includegraphics[scale=0.60]{compare_class_B.pdf} - \includegraphics[scale=0.60]{compare_class_c.pdf} - % use scale 35 for all to be in the same line + \includegraphics[width=.33\textwidth]{compare_class_A.pdf} + \includegraphics[width=.33\textwidth]{compare_class_B.pdf} + \includegraphics[width=.33\textwidth]{compare_class_c.pdf} \caption{Comparing Our EPSA with Rauber's Methods} \label{fig:compare} \end{figure} -\AG{\texttt{bibtex} gives many errors, please correct them} -\clearpage -\bibliographystyle{plain} -\bibliography{my_reference} +\section{Conclusion} +\label{sec.concl} + +\AG{the conclusion needs to be written\dots{} one day} + +\section*{Acknowledgment} + +\AG{Right?} +Computations have been performed on the supercomputer facilities of the +Mésocentre de calcul de Franche-Comté. + +\bibliographystyle{IEEEtran} +\bibliography{IEEEabrv,my_reference} \end{document} %%% Local Variables: