+In this section, some heuristics to compute the scaling factor are presented and
+classified into two categories: offline and online methods.
+
+\subsection{Offline scaling factor selection methods}
+
+The offline scaling factor selection methods are executed before the runtime of
+the program. They return static scaling factor values to the processors
+participating in the execution of the parallel program. On the one hand, the
+scaling factor values could be computed based on information retrieved by
+analyzing the code of the program and the computing system that will execute it.
+In~\cite{40}, Azevedo et al. detect, at compile time, the dependency points
+between tasks in a multi-task program. This information is then used to lower
+the frequency of some processors in order to eliminate slack times. A slack
+time is the period during which a processor that has finished its computation
+waits for other processors to finish their computations and send their results,
+because its next task depends on those results. Freeh et al. showed in~\cite{17} that the
+communication times of MPI programs do not change when the frequency is scaled
+down. On the other hand, some offline scaling factor selection methods use the
+information gathered from previous full or partial executions of the program. The whole program, or a
+part of it, is usually executed over all the available frequency
+gears, and the execution time and the energy consumed with each frequency
+gear are measured. Then a heuristic or an exact method uses the retrieved
+information to compute the values of the scaling factor for the processors.
+In~\cite{8}, Rountree et al. use a linear programming algorithm, while in~\cite{34}, Cochran et
+al. use a multi-logistic regression algorithm for the same goal. The main
+drawback of these methods is that they all require executing the
+whole program, or a part of it, on all frequency gears for each new instance of the same program.
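+
+As a simple illustration of the slack-time elimination idea described above,
+assume, as a first-order approximation, that the computation time of a task
+scales inversely with the frequency, and consider two processors whose
+computation times at the highest frequency $f_{max}$ are $T_1 > T_2$. The
+processor executing the shorter task can be slowed down, without delaying the
+other one, by selecting
+\[
+  f_2 \approx f_{max}\cdot\frac{T_2}{T_1},
+\]
+so that its scaled computation time becomes
+$T_2\cdot\frac{f_{max}}{f_2}\approx T_1$ and the slack time is eliminated. The
+symbols $T_1$, $T_2$ and $f_2$ are used here only for illustration and are not
+taken from the cited works.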
+
+\subsection{Online scaling factor selection methods}
+
+The online scaling factor selection methods are executed during the runtime of
+the program. They are usually integrated into iterative programs where the same
+block of instructions is executed many times. During the first few iterations,
+a lot of information is measured, such as the execution time, the energy consumed
+(using a multimeter), the slack times, \dots{} Then a method exploits these
+measurements to compute the scaling factor values for each processor. This
+operation, measuring and computing new scaling factors, can be repeated as
+often as needed if the iterations are not regular. In~\cite{2,31}, Peraza et al.
+and Yu-Liang et al. used various heuristics to select appropriate scaling
+factor values to eliminate the slack times during runtime. However, as shown
+in~\cite{19}, machine learning methods take a long time to converge
+when the number of available gears is large. To reduce the impact of slack times,
+in~\cite{1}, Lim et al. developed an algorithm that detects the communication
+sections and changes the frequency during these sections only. This approach
+might change the frequency of each processor many times per iteration if an
+iteration contains more than one communication section. In~\cite{3}, Rauber and
+Rünger used an analytical model that predicts the consumed energy for every frequency gear, after measuring the consumed energy at the highest frequency gear. They
+maintain the performance as much as possible by assigning the highest frequency gear to the slowest task.
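+
+To give a concrete flavour of the communication-section approach of~\cite{1},
+the following sketch (in C with MPI) lowers the frequency gear only while the
+processor waits inside a communication section and restores it before the next
+computation phase. It is only an illustration and not the algorithm of~\cite{1}:
+the function \texttt{set\_frequency\_gear()}, the gear constants and
+\texttt{compute\_phase()} are hypothetical placeholders for whatever DVFS
+interface and computation kernel the platform and the application provide (on
+Linux, the frequency is typically set through the cpufreq subsystem).
+\begin{verbatim}
+#include <mpi.h>
+
+/* Hypothetical platform-specific DVFS helper and gear constants
+   (e.g., thin wrappers around the Linux cpufreq interface).    */
+void set_frequency_gear(int gear);
+enum { HIGHEST_GEAR = 0, LOWEST_GEAR = 5 };
+
+/* One iteration of a synchronous iterative application: the gear
+   is lowered only during the communication section, where the
+   processor is mostly idle, and restored before the next
+   computation phase.                                            */
+void iteration(double *local, double *global, int n,
+               void (*compute_phase)(double *, int), MPI_Comm comm)
+{
+    compute_phase(local, n);           /* computation at full speed */
+
+    set_frequency_gear(LOWEST_GEAR);   /* entering communication    */
+    MPI_Allreduce(local, global, n, MPI_DOUBLE, MPI_SUM, comm);
+    set_frequency_gear(HIGHEST_GEAR);  /* back to full speed        */
+}
+\end{verbatim}
+As noted above, when an iteration contains several communication sections, the
+gear may be switched many times per iteration and the switching overhead may
+then offset part of the energy gain.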
+
+The primary contribution of
+our paper is to present a new online scaling factor selection method that has the
+following characteristics:
+\begin{enumerate}
+\item It is based on Rauber and Rünger's analytical model to predict the energy
+  consumption of the application with different frequency gears.
+\item It selects the frequency scaling factor that simultaneously reduces the
+  energy consumption and maintains the performance.
+\item It is well adapted to distributed architectures because it takes into
+  account the communication time.
+\item It is well adapted to distributed applications with imbalanced tasks.
+\item It has a very small overhead when compared to other methods
+  (e.g.,~\cite{19}) and does not require profiling or training as
+  in~\cite{34}.
+\end{enumerate}
+
+
+
+
+\section{Energy model for a homogeneous platform}
+\label{sec.exe}
+Many researchers~\cite{9,3,15,26} divide the power consumed by a processor into
+two parts: the static power and the dynamic power. While the former is
+consumed as long as the computing unit is on, the latter is only consumed during
+computation times. The dynamic power $P_{dyn}$ is related to the switching
+activity $\alpha$, the load capacitance $C_L$, the supply voltage $V$ and the
+operational frequency $f$, as shown in EQ~(\ref{eq:pd}).