+\label{sec.res-mc}
+The clusters of Grid'5000 have different numbers of cores embedded in their nodes,
+as shown in Table \ref{table:grid5000}. The cores of a node can exchange
+data via the shared memory \cite{rauber_book}. In
+this section, the proposed scaling algorithm is evaluated over Grid'5000 while using multi-core nodes
+selected according to the two platform scenarios described in Section \ref{sec.res}.
+Both platform scenarios, the two sites and the one site scenarios, use 32
+cores taken from multi-core nodes instead of 32 distinct nodes. For example, if
+12 cores should participate from a certain cluster,
+the multi-core scenario selects 3 nodes from that cluster and uses
+4 cores from each of them. The platforms with one
+core per node and with multi-core nodes are described in Table \ref{table:sen-mc}.
+The energy consumptions and execution times of the NAS parallel
+benchmarks, class D, running over these four scenarios are presented
+in Figures \ref{fig:eng-cons-mc} and \ref{fig:time-mc} respectively.
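+
+To make this mapping concrete, the short Python sketch below distributes a
+requested number of cores over as few nodes as possible and spreads them
+evenly, reproducing the allocations of Table \ref{table:sen-mc}. The helper
+\texttt{allocate} is hypothetical and only illustrates an assumed
+even-allocation policy; the exact node selection procedure is not detailed here.
+\begin{verbatim}
+# Hypothetical sketch (assumption): pack the requested cores onto as few
+# nodes as possible, spreading them evenly over the selected nodes.
+def allocate(cores_needed, max_cores_per_node):
+    nodes = -(-cores_needed // max_cores_per_node)  # ceiling division
+    base, extra = divmod(cores_needed, nodes)
+    return [base + 1] * extra + [base] * (nodes - extra)
+
+print(allocate(12, 4))  # [4, 4, 4] -> e.g. Griffon: 3 nodes, 4 cores each
+print(allocate(10, 4))  # [4, 3, 3] -> e.g. Taurus: 3 nodes, 3 or 4 cores
+\end{verbatim}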
+
+The execution times of most of the NAS benchmarks are higher over the one site multi-core scenario
+than over the one site one core scenario. Indeed,
+the communication times are higher in the former scenario because all the cores of a node share the same network link, which can be saturated when running communication bound applications.
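+
+A back-of-the-envelope sketch of this saturation effect, under the simplifying
+assumption that a node's link bandwidth is divided evenly among the cores
+communicating through it (the bandwidth and message size below are made-up
+values used only for illustration):
+\begin{verbatim}
+# Toy model (assumption): cores on the same node split the node's link
+# bandwidth evenly, so sending the same message takes longer when more
+# cores per node communicate at once.
+link_bw = 1.0e9   # bytes/s, hypothetical node link bandwidth
+msg = 64.0e6      # bytes sent by each core, hypothetical message size
+
+for cores in (1, 4):
+    t = msg / (link_bw / cores)
+    print(f"{cores} core(s) per node: {t:.2f} s per message")
+# 1 core: 0.06 s; 4 cores: 0.26 s -> the shared link becomes the bottleneck
+\end{verbatim}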
+
+ \textcolor{blue}{On the other hand, the execution times of most of the NAS benchmarks are lower over
+the two sites multi-core scenario than over the two sites one core scenario. Indeed, gathering the participating cores on fewer multi-core nodes reduces the number of messages that must cross the slow wide area network between the two sites, since the cores of a node exchange data through the shared memory.
+}
+
+The experiments showed that, for most of the NAS benchmarks, the one site one core scenario gives the best execution times among the four scenarios because its communication times are the lowest.
+Indeed, in this scenario each core has a dedicated network link and all the communications are local.
+Moreover, the energy consumptions of the NAS benchmarks are lower over the
+one site one core scenario than over the one site multi-core scenario because
+the former has shorter execution times, which results in less static energy being consumed.
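+
+This relation can be made explicit with the usual decomposition of the
+consumed energy into a dynamic and a static part (a standard model, assuming a
+constant static power $P_{static}$ over the whole execution time $T$):
+\[
+E = E_{dynamic} + E_{static}, \qquad E_{static} = P_{static} \cdot T
+\]
+so, for a given static power, a shorter execution time directly yields a
+proportionally smaller static energy.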
+
+The computations to communications ratios of the NAS benchmarks are higher over
+the one site one core scenario than over the other scenarios.
+More energy reduction is achieved when this ratio increases because the proposed scaling algorithm can then select smaller frequencies, which decrease the dynamic power consumption.
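+
+The Python sketch below illustrates this effect with a toy model, not the
+exact energy model used in this work: dynamic power is assumed to scale with
+the cube of the frequency and to be consumed only during computations, while
+static power is consumed during the whole execution. For a fixed frequency
+scaling factor, the achievable energy saving then grows with the computations
+to communications ratio, which is why the scaling algorithm can afford smaller
+frequencies when this ratio is high. All power values are arbitrary.
+\begin{verbatim}
+P_DYN, P_STATIC, S = 1.0, 0.2, 0.8  # arbitrary powers, scaling factor 0.8
+
+def saving_percent(t_comp, t_comm):
+    e_ref = (P_DYN + P_STATIC) * t_comp + P_STATIC * t_comm
+    # scaled run: computations stretched by 1/S, dynamic power scaled by S**3
+    e_scaled = P_DYN * S**2 * t_comp + P_STATIC * (t_comp / S + t_comm)
+    return 100.0 * (e_ref - e_scaled) / e_ref
+
+for t_comp, t_comm in ((9, 1), (5, 5), (1, 9)):
+    print(f"ratio {t_comp}:{t_comm} -> {saving_percent(t_comp, t_comm):.1f}%")
+# ratio 9:1 -> 25.4%; 5:5 -> 22.1%; 1:9 -> 10.3%
+\end{verbatim}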
+
+ \textcolor{blue}{On the other hand, the energy consumption in the two sites one core scenario is higher than in the two sites multi-core scenario. This is due to the longer execution time of the two sites one core scenario, which increases the consumed static energy.}
+
+
+These experiments also showed that the energy
+consumptions and the execution times of the EP and MG benchmarks do not change significantly over these four
+scenarios because they have no or very small communications,
+whose times could otherwise vary from one scenario to another and modify the consumed static energy. Contrary to EP and MG, the energy consumptions
+and the execution times of the rest of the benchmarks vary according to the communication times, which differ from one scenario to the other.
+
+
+The energy saving percentages of all the NAS benchmarks running over these four scenarios are presented in Figure \ref{fig:eng-s-mc}. It shows that the energy saving percentages over the two sites multi-core scenario
+and over the two sites one core scenario are on average equal to 22\% and 18\%
+respectively. The energy saving percentages are higher in the former scenario because its computations to communications ratio is higher, as mentioned previously.
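+
+These percentages are assumed here to follow the usual definition, comparing
+the energy consumed with the frequencies selected by the scaling algorithm to
+the energy consumed at the maximum frequencies:
+\[
+E_{saving}(\%) = 100 \cdot \frac{E_{\mathrm{max}} - E_{scaled}}{E_{\mathrm{max}}}
+\]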
+
+In contrast, in the one site one
+core and one site multi-core scenarios the energy saving percentages
+are approximately equal, around 25\% on average. In both scenarios there
+is only a small difference in the computations to communications ratios, which leads
+the proposed scaling algorithm to select similar frequencies for both scenarios.
+
+The performance degradation percentages of the NAS benchmarks are presented in
+Figure \ref{fig:per-d-mc}. It shows that the performance degradation percentages are higher over the two sites
+multi-core scenario than over the two sites one core scenario, on average equal to 7\% and 4\% respectively.
+Indeed, the two sites multi-core scenario has a higher
+computations to communications ratio, which leads the proposed scaling algorithm to select smaller frequencies and thus may increase
+the overall execution time.
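+
+Analogously to the energy saving percentage, the performance degradation
+percentage is assumed to compare the execution time obtained with the selected
+frequencies to the execution time at the maximum frequencies:
+\[
+T_{degradation}(\%) = 100 \cdot \frac{T_{scaled} - T_{\mathrm{max}}}{T_{\mathrm{max}}}
+\]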
+
+
+When the benchmarks are executed over the one
+site one core scenario, their performance degradation percentages are on average equal
+to 10\%, higher than those obtained over the one site multi-core scenario,
+which are on average equal to 7\%.
+
+\textcolor{blue}{
+The performance degradation percentages are lower over the one site multi-core scenario because its computations to communications ratio is smaller. The frequency reductions selected
+by the scaling algorithm are proportional to this ratio, so milder reductions are applied and the execution time does not increase significantly.}
+
+
+The tradeoff distance percentages of the NAS
+benchmarks over all scenarios are presented in Figure \ref{fig:dist-mc}.
+These tradeoff distance percentages are used to determine which scenario is the best in terms of both energy reduction and performance. The figure shows that using multi-core nodes in the one site and two sites scenarios gives bigger tradeoff distance percentages, on average equal to 17.6\% and 15.3\% respectively, than using one core per node in the one site and two sites scenarios, on average equal to 14.7\% and 13.3\% respectively.
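+
+Assuming the tradeoff distance is simply the difference between the energy
+saving and the performance degradation percentages (an assumed definition,
+consistent with the averages above), the figures can be cross-checked:
+$25\% - 7\% = 18\%$ and $22\% - 7\% = 15\%$ for the one site and two sites
+multi-core scenarios approximate the reported $17.6\%$ and $15.3\%$, while
+$25\% - 10\% = 15\%$ and $18\% - 4\% = 14\%$ approximate the $14.7\%$ and
+$13.3\%$ of the one core scenarios.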
+
+\begin{table}
+\centering
+\caption{The multi-core scenarios}
+
+\begin{tabular}{|*{4}{c|}}
+\hline
+Scenario name & Cluster name & \begin{tabular}[c]{@{}c@{}}No. of nodes\\ in each cluster\end{tabular} &
+ \begin{tabular}[c]{@{}c@{}}No. of cores\\ for each node\end{tabular} \\ \hline
+\multirow{3}{*}{Two sites/ one core} & Taurus & 10 & 1 \\ \cline{2-4}
+ & Graphene & 10 & 1 \\ \cline{2-4}
+ & Griffon & 12 & 1 \\ \hline
+\multirow{3}{*}{Two sites/ multi-core} & Taurus & 3 & 3 or 4 \\ \cline{2-4}
+ & Graphene & 3 & 3 or 4 \\ \cline{2-4}
+ & Griffon & 3 & 4 \\ \hline
+\multirow{3}{*}{One site/ one core} & Graphite & 4 & 1 \\ \cline{2-4}
+ & Graphene & 12 & 1 \\ \cline{2-4}
+ & Griffon & 12 & 1 \\ \hline
+\multirow{3}{*}{One site/ multi-core} & Graphite & 3 & 3 or 4 \\ \cline{2-4}
+ & Graphene & 3 & 3 or 4 \\ \cline{2-4}
+ & Griffon & 3 & 4 \\ \hline
+\end{tabular}
+\label{table:sen-mc}
+\end{table}
+
+\begin{figure}
+ \centering
+ \includegraphics[scale=0.5]{fig/eng_con.eps}
+  \caption{Comparing the energy consumptions of the NAS benchmarks running over the one core and multi-core scenarios}
+ \label{fig:eng-cons-mc}
+\end{figure}
+
+
+ \begin{figure}
+ \centering
+ \includegraphics[scale=0.5]{fig/time.eps}
+  \caption{Comparing the execution times of the NAS benchmarks running over the one core and multi-core scenarios}
+ \label{fig:time-mc}
+\end{figure}