+\subsection{Performance Prediction Verification}
+
+In this section, we evaluate the accuracy of our performance prediction method
+on the NAS benchmarks. We use Equation~(\ref{eq:tnew}), which predicts the
+execution time for any scaling factor value. The NAS programs are run with
+class B in order to compare the real execution times with the predicted ones.
+Each program is run offline with all available scaling factors on 8 or 9 nodes
+to produce the real execution time values. These scaling factors are computed
+by dividing the maximum frequency by the new one, see Equation~(\ref{eq:s}).
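+As an illustration, the prediction step can be sketched as follows. The
+scaling factor computation matches Equation~(\ref{eq:s}); the decomposition of
+the execution time into a frequency-dependent computation part and a
+frequency-independent communication part is an assumption made for this sketch
+and may differ from the exact form of Equation~(\ref{eq:tnew}).

```python
def scaling_factor(f_max, f_new):
    """Scaling factor S = Fmax / Fnew, as in Eq. (s)."""
    return f_max / f_new

def predicted_time(t_comp, t_comm, f_max, f_new):
    """Sketch of a predicted execution time for a new frequency.

    Assumed model (may differ from Eq. (tnew)): the communication time is
    frequency-independent, while the computation time scales linearly
    with the scaling factor S.
    """
    s = scaling_factor(f_max, f_new)
    return t_comp * s + t_comm

# Example: scaling a 2.5 GHz processor down to 2.0 GHz (S = 1.25)
print(predicted_time(10.0, 4.0, 2.5e9, 2.0e9))  # 10 * 1.25 + 4 = 16.5
```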
+\begin{figure*}[t]
+ \centering
+ \includegraphics[width=.4\textwidth]{cg_per.eps}\qquad%
+ \includegraphics[width=.4\textwidth]{mg_pre.eps}
+ \includegraphics[width=.4\textwidth]{bt_pre.eps}\qquad%
+ \includegraphics[width=.4\textwidth]{lu_pre.eps}
+ \caption{Fitting Predicted to Real Execution Time}
+ \label{fig:pred}
+\end{figure*}
+%see Figure~\ref{fig:pred}
+In our cluster, there are 18 available frequency states for each processor,
+which leads to 18 execution states for each program. We use seven MPI programs
+of the NAS parallel benchmarks: CG, MG, EP, FT, BT, LU and SP. The average
+normalized error between the predicted execution time and the real time
+(SimGrid time) for all programs lies between 0.0032 and 0.0133. As an example,
+we present the execution times of the NAS benchmarks in
+Figure~\ref{fig:pred}.
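+A plausible way to compute the average normalized error reported above is
+sketched below; the exact error metric used in the experiments is an
+assumption here (relative error per scaling factor, averaged over all runs).

```python
def normalized_error(t_real, t_pred):
    """Relative deviation of the predicted time from the real time."""
    return abs(t_real - t_pred) / t_real

def average_normalized_error(real_times, predicted_times):
    """Average the per-run normalized errors over all scaling factors."""
    errors = [normalized_error(r, p)
              for r, p in zip(real_times, predicted_times)]
    return sum(errors) / len(errors)

# Example with two hypothetical runs: errors 0.10 and 0.05, average 0.075
print(average_normalized_error([10.0, 20.0], [11.0, 19.0]))
```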
+
+\subsection{The EPSA Results}
+The proposed EPSA algorithm was applied to seven MPI programs of the NAS
+benchmarks (EP, CG, MG, FT, BT, LU and SP), using three classes (A, B and C)
+for each program. Each program is run on a number of processors proportional
+to its class size, since the problem size increases from class A to class C.
+Depending on the speed-up points of each class, we run classes A, B and C on
+4, 8 or 9, and 16 nodes respectively.
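+The class-to-nodes mapping described above can be captured in a small lookup
+table. The helper below is hypothetical; the choice between 8 and 9 nodes for
+class B is assumed to depend on the benchmark's process-grid requirement
+(e.g. BT and SP require a square number of MPI processes).

```python
# Node counts used for each NAS class in these experiments:
# class A -> 4 nodes, class B -> 8 or 9 nodes, class C -> 16 nodes.
NODES_PER_CLASS = {
    "A": 4,
    "B": (8, 9),  # 8 or 9, depending on the benchmark's process grid
    "C": 16,
}

def nodes_for(nas_class, needs_square_grid=False):
    """Pick the node count for a class; for class B, benchmarks assumed to
    need a square process grid (e.g. BT, SP) use 9 nodes, the others 8."""
    n = NODES_PER_CLASS[nas_class]
    if isinstance(n, tuple):
        return 9 if needs_square_grid else 8
    return n
```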
+Using Equation~(\ref{eq:energy}), we measure the energy consumption for all