%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\Tnorm}{\Xsub{T}{Norm}}
\newcommand{\Ltcm}[1][]{\Xsub{L}{tcm}_{\fxheight{#1}}}
\newcommand{\Etcm}[1][]{\Xsub{E}{tcm}_{\fxheight{#1}}}
\newcommand{\Niter}[1][]{\Xsub{N}{iter}_{\fxheight{#1}}}
\chapter{Energy Optimization of Asynchronous Applications}
\label{ch4}

\section{Introduction}
A grid is composed of heterogeneous clusters: CPUs from distinct clusters might have different computing powers, energy consumptions or frequency ranges.
Running synchronous parallel applications on grids results in long slack times where the fast nodes have to wait for the slower ones to finish their computations before synchronously exchanging data with them. Therefore, it is widely accepted that asynchronous parallel methods are more suitable than synchronous ones for such architectures because there is no slack time and the asynchronous communications are overlapped by computations. However, they usually execute more iterations than the synchronous ones and thus consume more energy.
In order to make the asynchronous method a good alternative to the synchronous one, it should not only be competitive in performance but also in energy consumption.
To reduce the energy consumption of a CPU executing an asynchronous iterative method, the dynamic voltage and frequency scaling (DVFS) technique can be used. Modern operating systems automatically adjust the frequency of the processor according to their needs using DVFS operations. However, the user can also scale down the frequency of the CPU by hand, through the userspace governor \cite{ref96}. Lowering the frequency of a CPU reduces its energy consumption, but it also decreases its computing power and thus might increase the execution time of an application running on that processor. Therefore, the frequency that gives the best trade-off between energy consumption and performance must be selected. For parallel asynchronous methods running over a grid, a different frequency might be selected for each CPU in the grid depending on its characteristics.
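As an illustration, on a Linux system the frequency of a core can be set by hand through the cpufreq sysfs interface. The following minimal sketch (an assumption on our part: it requires root privileges and a cpufreq driver that exposes the \texttt{userspace} governor and the \texttt{scaling\_setspeed} file) pins a core to a chosen frequency gear:

\begin{verbatim}
# Minimal sketch: pin one core to a chosen frequency gear through the
# Linux cpufreq sysfs interface (requires root privileges).
from pathlib import Path

def available_frequencies(cpu):
    base = Path("/sys/devices/system/cpu/cpu%d/cpufreq" % cpu)
    # list of supported gears in kHz, as exposed by the driver
    return [int(f) for f in
            (base / "scaling_available_frequencies").read_text().split()]

def set_frequency(cpu, freq_khz):
    base = Path("/sys/devices/system/cpu/cpu%d/cpufreq" % cpu)
    # hand frequency control to the user instead of the kernel governor
    (base / "scaling_governor").write_text("userspace")
    # request the target gear; the driver rounds to a supported value
    (base / "scaling_setspeed").write_text(str(freq_khz))

if __name__ == "__main__":
    gears = available_frequencies(0)
    set_frequency(0, min(gears))  # scale core 0 down to its lowest gear
\end{verbatim}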
In Chapters \ref{ch2} and \ref{ch3}, three frequency selection algorithms were proposed to reduce the energy consumption of synchronous message passing iterative applications running over homogeneous and heterogeneous platforms respectively. In this chapter, a new frequency selection algorithm for asynchronous iterative message passing applications running over grids is presented. An adaptation for hybrid methods, with synchronous and asynchronous communications, is also proposed.
The algorithm and its adaptation select the vector of frequencies which simultaneously offers the maximum energy reduction and the minimum performance degradation. The algorithm has a very small overhead and works online without needing any training or profiling.
This chapter is organized as follows: Section~\ref{ch4:2} presents some related works from other authors. Models for predicting the performance and the energy consumption of both synchronous and asynchronous message passing programs running over a grid are explained in Section~\ref{ch4:3}. It also presents the objective function, used to select the frequencies, that maximizes the reduction of energy consumption while minimizing the degradation of the program's performance.
Section~\ref{ch4:5} details the proposed frequency selection algorithm.
Section~\ref{ch4:6} presents the iterative multi-splitting application, a hybrid method that was used as a benchmark to evaluate the efficiency of the proposed algorithm.
Section~\ref{ch4:7} presents the simulation results of applying the algorithm to the multi-splitting application executed over different grid scenarios. It also shows the results of running three different power scenarios and comparing them. Moreover, in the last subsection, the proposed algorithm is compared to the energy and delay product (EDP) method. Section~\ref{ch4:8} shows the results of real experiments, applying the proposed algorithm over the Grid'5000 platform, and compares them to the results of the EDP method. Finally, the chapter ends with a summary.
\section{Related works}
\label{ch4:2}
A message passing application is in general composed of two types of sections: the computation and the communication sections. The communications can be done synchronously or asynchronously. In a synchronous message passing application, when a process synchronously sends a message to another node, it is blocked until the latter receives the message. During that time, there is no computation on either node and this waiting period is called slack time.
On the contrary, in an asynchronous message passing application, the asynchronous communications are overlapped by computations, thus there is no slack time.
Many techniques have been used to reduce the energy consumption of message passing applications, such as scheduling, heuristics and DVFS. For example, different scheduling techniques that switch off the idle nodes to save energy were presented in \cite{ref83,ref84,ref85} and \cite{ref86}. In \cite{ref87} and \cite{ref88}, a heuristic that manages the workloads between the computing resources of a cluster in order to reduce its energy consumption was published.
However, dynamic voltage and frequency scaling (DVFS) is the most popular technique to reduce the energy consumption of computing processors.
As shown in the related works of Chapter \ref{ch2}, most of the works in this field targeted synchronous message passing applications because they are more common than the asynchronous ones and easier to work on. Some researchers tried to reduce the slack times in synchronous applications running over homogeneous clusters. These slack times can occur on such architectures if the workloads distributed over the computing nodes are imbalanced.
Other works focused on reducing the energy consumption of synchronous applications running over heterogeneous architectures such as heterogeneous clusters or grids. When executing synchronous message passing applications on these architectures, slack times are generated when fast nodes have to communicate with slower ones. Indeed, the fast nodes have to wait for the slower ones to finish their computations before being able to communicate with them. In this case, some energy can be saved, as in the work of Chapter \ref{ch3} and its related works, by reducing the frequencies of the fast nodes with DVFS operations while minimizing the slack times.
On the other hand, no work has been conducted to optimize the energy consumption of asynchronous message passing applications. Some works use asynchronous communications when applying DVFS operations on synchronous applications. For example, Hsu et al. \cite{ref92} proposed an online adaptive algorithm that divides the synchronous message passing application into several time periods and selects the suitable frequency for each one. The algorithm asynchronously applies the newly computed frequencies in order to overlap the multiple DVFS switching times with computation. Similarly, Zhu et al. \cite{ref93} studied the difference between applying the frequency changing algorithm synchronously or asynchronously during the execution time of the program. The proposed asynchronous scheduler was more energy efficient than the synchronous one. In \cite{ref94}, Vishnu et al. presented an energy efficient asynchronous agent that reduces the slack times in a parallel program in order to reduce its energy consumption. They used asynchronous communications in the proposed algorithm, which calls the DVFS algorithm many times during the execution of the program. The three previously presented works were applied to applications running over homogeneous platforms.
In \cite{ref95}, the energy consumption of an asynchronous iterative linear solver running over a heterogeneous platform is evaluated. The results showed that the asynchronous version of the application had a shorter execution time than the synchronous one. Therefore, according to their energy model, the asynchronous method consumes less energy.
However, their model does not take into account that, during synchronous communications, only static power, which is significantly lower than dynamic power, is consumed.
This chapter presents the following contributions:
\begin{itemize}
\item a new model to predict the energy consumption and the execution time of asynchronous iterative message passing applications running over a grid platform.

\item a new online algorithm that selects the vector of frequencies which gives the best trade-off between energy consumption and performance for asynchronous iterative message passing applications running over a grid platform. The algorithm has a very small overhead and does not need any training or profiling. The new algorithm can be applied synchronously or asynchronously on an iterative message passing application.
\end{itemize}
\section{The performance and the energy consumption measurement models}
\label{ch4:3}

\subsection{The execution time of iterative asynchronous message passing applications}
In this chapter, we are interested in running asynchronous iterative message passing distributed applications over a grid while reducing the energy consumption of the CPUs during the execution.
Figure \ref{fig:heter} is an example of a grid with four different clusters. Inside each cluster, all the nodes are homogeneous and have the same specifications, but they are different from the nodes of the other clusters.
To reduce the energy consumption of these applications while they are running on a grid, the heterogeneity of the clusters' nodes, such as the nodes' computing powers (FLOPS), energy consumptions and CPU frequency ranges, must be taken into account. To reduce the complexity of the experiments and focus on the heterogeneity of the nodes, the local networks of all the clusters are assumed to be identical, with the same latency and bandwidth. The networks connecting the clusters are also assumed to be homogeneous, but slower than the local networks.
\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.9]{fig/ch4/GRID}
  \caption{A grid platform composed of heterogeneous clusters}
  \label{fig:heter}
\end{figure}
An iterative application consists of a block of instructions that is repeatedly executed until convergence. A distributed iterative application with interdependent tasks requires, at each iteration, exchanging data between nodes in order to compute the distributed tasks. The communications between the nodes can be done synchronously or asynchronously. In the synchronous model, each node has to wait to receive data from all its neighbors before computing its next iteration, see Figures \ref{fig:ch1:15} and \ref{fig:ch1:16}.
Since the tasks are synchronized, all the nodes execute the same number of iterations.
Then, the overall execution time of an iterative synchronous message passing application with balanced tasks, running on the grid described above, is equal to the execution time of the slowest node in the slowest cluster running a task, as presented in Equation (\ref{eq:perf_heter}).

In the asynchronous model, on the contrary, the fast nodes do not have to wait for the slower nodes to finish their computations to exchange data, see Figure \ref{fig:ch1:17}. Therefore, there are no idle times between successive iterations: each node executes its computations with the last data received from its neighbors and the communications are overlapped by computations. Since there are no synchronizations between the nodes, they do not all execute the same number of iterations.
The difference in the number of executed iterations between the nodes depends on the heterogeneity of their computing powers. The execution time of an asynchronous iterative message passing application is not equal to the execution time of the slowest node as in the synchronous mode, because each node executes a different number of iterations. Moreover, the overall execution time directly depends on the method used to detect the global convergence of the asynchronous iterative application. The global convergence detection method might be synchronous or asynchronous, centralized or distributed.
In a grid, the nodes of each cluster have different characteristics, especially different frequency gears.
Therefore, when applying DVFS operations on these nodes, they may get different scaling factors represented by a scaling vector: $(S_{11}, S_{12},\dots, S_{NM_i})$ where $S_{ij}$ is the scaling factor of processor $j$ in cluster $i$.
To be able to predict the execution time of asynchronous iterative message passing applications running over a grid for different vectors of scaling factors, the communication times and the computation times of all the tasks must be measured during the first iteration, before applying any DVFS operation. Then, the execution time of one iteration of an asynchronous iterative message passing application, running on a grid after applying a vector of scaling factors, is equal to the execution time of the synchronous application but without its communication times: the communication times are overlapped by computations, and the execution time of the whole application can be evaluated as the average of the execution times of all the parallel tasks, as presented in Equation (\ref{eq:asyn_time}).
\begin{equation}
  \label{eq:asyn_time}
  \Tnew = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M_i}({\TcpOld[ij]} \cdot S_{ij})} {N \cdot M_i }
\end{equation}
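As a check of the formula, the following short Python sketch (with illustrative variable names: \texttt{tcp\_old[i][j]} holds $\TcpOld[ij]$ and \texttt{s[i][j]} holds $S_{ij}$) evaluates Equation (\ref{eq:asyn_time}):

\begin{verbatim}
def predict_tnew_async(tcp_old, s):
    # Equation (eq:asyn_time): average of the scaled computation times
    # of all the tasks, communications being overlapped by computations.
    times = [t * f
             for row_t, row_s in zip(tcp_old, s)
             for t, f in zip(row_t, row_s)]
    return sum(times) / len(times)
\end{verbatim}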
In this work, a hybrid (synchronous/asynchronous) message passing application \cite{ref99} is used. It is composed of two loops:
\begin{itemize}
\item In the inner loop, at each iteration, the nodes of a cluster synchronously exchange data between them. There is no communication between nodes from different clusters.
\item In the outer loop, at each iteration, the nodes from different clusters asynchronously exchange their data because the network interconnecting the clusters has a high latency.
\end{itemize}
Therefore, the execution time of one outer iteration of such a hybrid application can be evaluated by computing the average, over all the clusters, of the execution time of the slowest node in each cluster. The overall execution time of the asynchronous iterative application can be evaluated as follows:
\begin{equation}
  \label{eq:asyn_perf}
  \Tnew = \frac{\sum_{i=1}^{N} (\max_{j=1,\dots, M_i} ({\TcpOld[ij]} \cdot S_{ij}) +
  \min_{j=1,\dots,M_i} ({\Ltcm[ij]}))}{N}
\end{equation}
In Equation (\ref{eq:asyn_perf}), the communication times $\Ltcm[ij]$ are only the communications between the local nodes because the communications between the clusters are asynchronous and overlapped by computations.
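A minimal Python rendering of Equation (\ref{eq:asyn_perf}) is given below; \texttt{tcp\_old}, \texttt{ltcm} and \texttt{s} are illustrative names for the measured computation times, the local communication times and the candidate scaling factors:

\begin{verbatim}
def predict_tnew_hybrid(tcp_old, ltcm, s):
    # Equation (eq:asyn_perf): average over the clusters of the slowest
    # scaled computation time plus the smallest local communication time.
    n = len(tcp_old)                      # number of clusters
    total = 0.0
    for i in range(n):
        slowest = max(t * f for t, f in zip(tcp_old[i], s[i]))
        total += slowest + min(ltcm[i])   # local communications only
    return total / n
\end{verbatim}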
\subsection{The energy model and tradeoff optimization}
The energy consumption of an asynchronous application running over a heterogeneous grid is the summation of the dynamic and static power of each node multiplied by the computation time of that node, as in Equation (\ref{eq:asyn_energy1}). The computation time of each node is equal to the overall execution time of the node because the asynchronous communications are overlapped by computations.
\begin{equation}
  \label{eq:asyn_energy1}
  E = \sum_{i=1}^{N} \sum_{j=1}^{M_i} {(S_{ij}^{-2} \cdot \Tcp[ij] \cdot (\Pd[ij]+\Ps[ij]) )}
\end{equation}
It is common for distributed algorithms running over grids to have asynchronous external communications between the clusters and synchronous ones between the nodes of the same cluster. In this hybrid communication scheme, the dynamic energy consumption can be computed in the same way as for the synchronous application, with Equation (\ref{eq:Edyn_new}).
However, since the nodes of different clusters are not synchronized and do not have the same execution time as in the synchronous application, the static energy consumption is computed differently. The execution time of a cluster is equal to the execution time of its slowest task. The energy consumption of the asynchronous iterative message passing application running on a heterogeneous grid platform during one iteration can then be computed as follows:
\begin{equation}
  \label{eq:asyn_energy}
  E = \sum_{i=1}^{N} \sum_{j=1}^{M_i} {(S_{ij}^{-2} \cdot \Pd[ij] \cdot \Tcp[ij])} + \sum_{i=1}^{N} \sum_{j=1}^{M_i} (\Ps[ij] \cdot
  ( \mathop{\max_{j=1,\dots,M_i}} ({\Tcp[ij]} \cdot S_{ij}) + \mathop{\min_{j=1,\dots,M_i}} ({\Ltcm[ij]})))
\end{equation}
where $\Ltcm[ij]$ is the local communication time of node $j$ in cluster $i$.
Reducing the frequencies of the processors according to the vector of scaling factors $(S_{11}, S_{12},\dots, S_{NM_i})$ may degrade the performance of the application and thus increase the static energy consumed, because the execution time is increased~\cite{ref78}. The overall energy consumption of the asynchronous application can be computed by multiplying the energy consumed during one iteration in each cluster by the number of iterations executed by that cluster, $\Niter[i]$, as in Equation (\ref{eq:asyn_energy_it}).
\begin{multline}
  \label{eq:asyn_energy_it}
  E = \sum_{i=1}^{N} (\sum_{j=1}^{M_i} {(S_{ij}^{-2} \cdot \Pd[ij] \cdot \Tcp[ij])}) \cdot \Niter[i] \\
  + \sum_{i=1}^{N} (\sum_{j=1}^{M_i} (\Ps[ij] \cdot
  ( \mathop{\max_{j=1,\dots,M_i}} ({\Tcp[ij]} \cdot S_{ij}) + \mathop{\min_{j=1,\dots,M_i}} ({\Ltcm[ij]})))) \cdot \Niter[i]
\end{multline}
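The per-iteration energy of Equation (\ref{eq:asyn_energy}) can be evaluated with the same kind of sketch as above (illustrative names again; \texttt{pd} and \texttt{ps} hold the dynamic and static powers):

\begin{verbatim}
def predict_energy(tcp, ltcm, s, pd, ps):
    # First term of Equation (eq:asyn_energy): dynamic energy of every node.
    dynamic = sum(s[i][j] ** -2 * pd[i][j] * tcp[i][j]
                  for i in range(len(tcp))
                  for j in range(len(tcp[i])))
    # Second term: static energy, paid by every node of a cluster during
    # the time of its slowest task plus the smallest local communication.
    static = 0.0
    for i in range(len(tcp)):
        t_cluster = max(t * f for t, f in zip(tcp[i], s[i])) + min(ltcm[i])
        static += sum(ps[i]) * t_cluster
    return dynamic + static
\end{verbatim}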
In order to optimize the energy consumption and the performance of the asynchronous iterative applications at the same time, the maximum distance between the two metrics can be computed, as in the previous chapters.
However, both the energy consumption and the performance must first be normalized, as in Equations (\ref{eq:enorm-heter}) and (\ref{eq:pnorm-heter}) respectively.
Hence, $\Tnew$ should be computed as in Equation (\ref{eq:asyn_perf}) and $\Told$ as follows:
\begin{equation}
  \Told = \frac{\sum_{i=1}^{N} (\max_{j=1,\dots, M_i} ({\TcpOld[ij]}) +
  \min_{j=1,\dots,M_i} ({\Ltcm[ij]}))}{N}
\end{equation}
The original energy consumption of the asynchronous application, $\Eoriginal$, is computed as in Equation (\ref{eq:asyn_energy_original}).
\begin{equation}
  \label{eq:asyn_energy_original}
  E_{original} = \sum_{i=1}^{N} \sum_{j=1}^{M_i} {( \Pd[ij] \cdot \TcpOld[ij])} + \sum_{i=1}^{N} \sum_{j=1}^{M_i} (\Ps[ij] \cdot
  ( \mathop{\max_{j=1,\dots,M_i}} ({\TcpOld[ij]} ) + \mathop{\min_{j=1,\dots,M_i}} ({\Ltcm[ij]})))
\end{equation}
Then, the objective function can be modeled as the maximum distance between the normalized energy curve and the normalized performance curve over all the available vectors of scaling factors, and it is computed as in Objective Function (\ref{eq:max-grid}).
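In code, the quantity maximized by the objective function is simply the difference between the two normalized metrics, as in this small sketch:

\begin{verbatim}
def tradeoff_distance(t_old, t_new, e_original, e_reduced):
    p_norm = t_old / t_new            # normalized performance
    e_norm = e_reduced / e_original   # normalized energy consumption
    # the selection algorithm keeps the vector maximizing this distance
    return p_norm - e_norm
\end{verbatim}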
\section[The scaling algorithm of asynchronous applications]{The scaling factors selection algorithm of asynchronous applications over grids}
\label{ch4:5}
The frequency selection algorithm (Algorithm~\ref{HSA-asyn}) works online during the first iteration of the asynchronous iterative message passing program running over a grid. It selects the vector of frequency scaling factors $\Sopt[11],\Sopt[12],\dots,\Sopt[NM_i]$ which maximizes the distance, computed by the tradeoff function (\ref{eq:max-grid}), between the predicted normalized energy consumption and the normalized performance of the program. The algorithm is called just once in the iterative program and it uses information gathered during the first iteration to approximate the vector of frequency scaling factors that gives the best tradeoff.
According to the returned vector of scaling factors, the DVFS algorithm (Algorithm~\ref{dvfs-heter}) computes the new frequency for each node in the grid. It also shows where and when the proposed scaling algorithm is called in the iterative message passing program.
\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.65]{fig/ch4/init_freq}
  \caption{Selecting the initial frequencies in a grid composed of four clusters}
  \label{fig:st_freq}
\end{figure}
In contrast to the scaling factors selection algorithm for synchronous applications running on a grid (Algorithm \ref{HSA-grid}), this algorithm computes the initial frequencies according to Equations (\ref{eq:Scp-grid}) and (\ref{eq:Fint-grid}). Figure~\ref{fig:st_freq} shows the initial frequencies selected for a grid composed of the four different types of clusters presented in Figure \ref{fig:heter}.
Apart from the energy and performance models that are used, the main difference between the two algorithms is that this algorithm scales down the frequencies of all the nodes at each iteration of the search, while the algorithm for synchronous applications does not scale down the frequency of the slowest node. Indeed, the performance of an asynchronous application does not depend only on the performance of the slowest node: it depends on the performance of all the nodes.
\begin{algorithm}[h!]
  \begin{algorithmic}[1]
    \Require ~
    \begin{description}
    \item[{$N$}] number of clusters in the grid.
    \item[{$M_i$}] number of nodes in each cluster.
    \item[{$\Tcp[ij]$}] array of all the computation times for all the nodes during one iteration and with the highest frequency.
    \item[{$\Tcm[ij]$}] array of all the communication times for all the nodes during one iteration and with the highest frequency.
    \item[{$\Fmax[ij]$}] array of the maximum frequencies for all the nodes.
    \item[{$\Pd[ij]$}] array of the dynamic powers for all the nodes.
    \item[{$\Ps[ij]$}] array of the static powers for all the nodes.
    \item[{$\Fdiff[ij]$}] array of the differences between two successive frequencies for all the nodes.
    \end{description}
    \Ensure $\Sopt[11],\Sopt[12], \dots, \Sopt[NM_i]$, a vector of scaling factors that gives the optimal tradeoff between energy consumption and execution time

    \State $\Scp[ij] \gets \frac{\max_{i=1,2,\dots,N}(\max_{j=1,2,\dots,M_i}(\Tcp[ij]))}{\Tcp[ij]} $
    \State $F_{ij} \gets \frac{\Fmax[ij]}{\Scp[ij]},~{i=1,2,\cdots,N},~{j=1,2,\dots,M_i}.$
    \State Round the computed initial frequencies $F_{ij}$ to the closest available frequency for each node.
    \If{(not the first frequency)}
    \State $F_{ij} \gets F_{ij}+\Fdiff[ij],~i=1,\dots,N,~{j=1,\dots,M_i}.$
    \EndIf
    \State $\Told \gets \frac{\sum_{i=1}^{N} (\max\limits_{j=1,\dots, M_i} ({\TcpOld[ij]}) +
      \min\limits_{j=1,\dots,M_i} ({\Ltcm[ij]}))}{N} $
    \State $\Eoriginal \gets \sum\limits_{i=1}^{N} \sum\limits_{j=1}^{M_i} {( \Pd[ij] \cdot \TcpOld[ij])} + \sum\limits_{i=1}^{N} \sum\limits_{j=1}^{M_i} (\Ps[ij] \cdot
      (\mathop{\max\limits_{j=1,\dots,M_i}} ({\TcpOld[ij]} ) + \mathop{\min\limits_{j=1,\dots,M_i}} ({\Ltcm[ij]})))$
    \State $\Sopt[ij] \gets 1,~i=1,\dots,N,~{j=1,\dots,M_i}. $
    \State $\Dist \gets 0 $
    \While {(all nodes have not reached their minimum frequency \textbf{or} $\Pnorm - \Enorm < 0 $)}
    \If{(not the last frequency)}
    \State $F_{ij} \gets F_{ij} - \Fdiff[ij],~{i=1,\dots,N},~{j=1,\dots,M_i}$.
    \State $S_{ij} \gets \frac{\Fmax[ij]}{F_{ij}},~{i=1,\dots,N},~{j=1,\dots,M_i}.$
    \EndIf
    \State $\Tnew \gets \frac{\sum\limits_{i=1}^{N} (\max\limits_{j=1,\dots, M_i} ({\TcpOld[ij]} \cdot S_{ij}) + \min\limits_{j=1,\dots,M_i} ({\Ltcm[ij]}))}{N} $
    \State $\Ereduced \gets \sum\limits_{i=1}^{N} \sum\limits_{j=1}^{M_i} {(S_{ij}^{-2} \cdot \Pd[ij] \cdot \Tcp[ij])} + \sum\limits_{i=1}^{N} \sum\limits_{j=1}^{M_i} (\Ps[ij] \cdot
      ( \mathop{\max\limits_{j=1,\dots,M_i}} ({\Tcp[ij]} \cdot S_{ij}) + \mathop{\min\limits_{j=1,\dots,M_i}} ({\Ltcm[ij]}))) $
    \State $\Pnorm \gets \frac{\Told}{\Tnew}$
    \State $\Enorm\gets \frac{\Ereduced}{\Eoriginal}$
    \If{$(\Pnorm - \Enorm > \Dist)$}
    \State $\Sopt[ij] \gets S_{ij},~i=1,\dots,N,~j=1,\dots,M_i. $
    \State $\Dist \gets \Pnorm - \Enorm$
    \EndIf
    \EndWhile
    \State Return $\Sopt[11],\Sopt[12],\dots,\Sopt[NM_i]$
  \end{algorithmic}
  \caption{Scaling factors selection algorithm of asynchronous applications over a grid}
  \label{HSA-asyn}
\end{algorithm}
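For readers who prefer code, the following Python sketch mirrors the structure of Algorithm~\ref{HSA-asyn}. It reuses the illustrative predictors \texttt{predict\_tnew\_hybrid} and \texttt{predict\_energy} defined earlier and, as a simplification of our own, decrements the frequencies continuously instead of rounding them to the available gears:

\begin{verbatim}
def select_scaling_factors(fmax, fmin, fdiff, tcp_old, ltcm, pd, ps):
    n = len(fmax)
    ones = [[1.0] * len(row) for row in fmax]
    # reference time and energy, measured at the maximum frequencies
    t_old = predict_tnew_hybrid(tcp_old, ltcm, ones)
    e_original = predict_energy(tcp_old, ltcm, ones, pd, ps)
    # initial frequencies proportional to the computation times, so that
    # all nodes would ideally finish their computations at the same time
    t_max = max(max(row) for row in tcp_old)
    f = [[fmax[i][j] * tcp_old[i][j] / t_max for j in range(len(fmax[i]))]
         for i in range(n)]
    best_s, best_dist = ones, 0.0
    while True:
        s = [[fmax[i][j] / f[i][j] for j in range(len(f[i]))]
             for i in range(n)]
        p_norm = t_old / predict_tnew_hybrid(tcp_old, ltcm, s)
        e_norm = predict_energy(tcp_old, ltcm, s, pd, ps) / e_original
        if p_norm - e_norm > best_dist:
            best_s, best_dist = s, p_norm - e_norm
        all_min = all(f[i][j] <= fmin[i][j]
                      for i in range(n) for j in range(len(f[i])))
        if all_min or p_norm - e_norm < 0:
            return best_s
        # scale every node one gear down, never below its minimum
        f = [[max(f[i][j] - fdiff[i][j], fmin[i][j])
              for j in range(len(f[i]))] for i in range(n)]
\end{verbatim}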
\section{The iterative multi-splitting method}
\label{ch4:6}
Multi-splitting algorithms were initially studied to solve linear systems of equations in parallel \cite{ref97}. Thereafter, they were used to design non-linear iterative algorithms and asynchronous iterative algorithms~\cite{ref98}. The principle of multi-splitting algorithms lies in splitting the system of equations, solving each sub-system using a direct or an iterative method, and then combining the results in order to build a global solution. Since a multi-splitting method is iterative, it requires executing several iterations in order to reach global convergence.
In this chapter, we have used an asynchronous iterative multi-splitting method to solve a 3D Poisson problem, as described in~\cite{ref99}. The problem is divided into small 3D sub-problems and each one is solved by a parallel GMRES method. For more information about multi-splitting algorithms, interested readers are invited to consult the previous references.
\section{The experimental results over SimGrid}
\label{ch4:7}
In this section, the heterogeneous scaling algorithm (HSA), Algorithm~\ref{HSA-asyn}, is applied to the parallel iterative multi-splitting method. The performance of this algorithm is evaluated by executing the iterative multi-splitting method on the SimGrid/SMPI simulator v3.10 \cite{ref66}. This simulator offers flexible tools to create a grid architecture and run the iterative application over it. The grid used in these experiments has four different types of nodes. Each type of node has a different computing power, frequency range, and static and dynamic powers. Table \ref{table:platform} presents the characteristics of the four types of nodes. The specifications of the simulated nodes are similar to those of real Intel processors.
Many grid configurations have been used in the experiments, where the number of clusters and the number of nodes per cluster are equal to 4 or 8.
For the grids composed of 8 clusters, two clusters of each type of node were used. The number of nodes per cluster is the same for all the clusters of a given grid.
\begin{table}[h!]
  \caption{The characteristics of the four types of nodes}
  \centering
  \begin{tabular}{|*{7}{r|}}
    \hline
    node & Simulated   & Max   & Min   & Diff. & Dynamic & Static \\
    type & GFLOPS      & Freq. & Freq. & Freq. & power   & power  \\
         & of one node & GHz   & GHz   & GHz   &         &        \\
    \hline
    A & 40 & 2.50 & 1.20 & 0.100 & \np[W]{20} & \np[W]{4} \\
    \hline
    B & 50 & 2.66 & 1.60 & 0.133 & \np[W]{25} & \np[W]{5} \\
    \hline
    C & 60 & 2.90 & 1.20 & 0.100 & \np[W]{30} & \np[W]{6} \\
    \hline
    D & 70 & 3.40 & 1.60 & 0.133 & \np[W]{35} & \np[W]{7} \\
    \hline
  \end{tabular}
  \label{table:platform}
\end{table}
The CPU constructors do not specify the amounts of static and dynamic power their CPUs consume.
Therefore, the maximum power consumption of each node's CPU was chosen to be proportional to its computing power (FLOPS). The dynamic power was assumed to represent \np[\%]{80} of the overall power consumption and the rest (\np[\%]{20}) is the static power. Similar assumptions were made in the last two chapters and in \cite{ref47}.
The clusters of the grid are connected via a long distance Ethernet network with \np[Gbit/s]{1} bandwidth, while inside each cluster the nodes are connected via a high-speed \np[Gbit/s]{10} local Ethernet network. The local networks have ten times less latency than the network connecting the clusters.
\subsection{The energy consumption and the execution time of the multi-splitting application}
The multi-splitting (MS) method solves a three dimensional problem of size $N=N_x \cdot N_y \cdot N_z$. The problem is divided into equal sub-problems which are distributed to the computing nodes of the grid and then solved.
The experiments were conducted on problems of size $N=400^3$ or $N=500^3$, which require more than 12 and 24 Gigabytes of memory, respectively.
Table \ref{table:comp} presents the different experiment scenarios with different numbers of clusters, nodes per cluster and problem sizes. A name, composed of the values of these parameters, was given to each scenario.
\begin{table}[h!]
  \caption{The different experiment scenarios}
  \centering
  \begin{tabular}{|*{5}{r|}}
    \hline
    Platform     & Clusters & Number of nodes & Vector  & Total number of \\
    scenario     & number   & in cluster      & size    & nodes in grid   \\
    \hline
    Grid.4*4.400 & 4 & 4 & $400^3$ & 16 \\
    \hline
    Grid.4*8.400 & 4 & 8 & $400^3$ & 32 \\
    \hline
    Grid.8*4.400 & 8 & 4 & $400^3$ & 32 \\
    \hline
    Grid.8*8.400 & 8 & 8 & $400^3$ & 64 \\
    \hline
    Grid.4*4.500 & 4 & 4 & $500^3$ & 16 \\
    \hline
    Grid.4*8.500 & 4 & 8 & $500^3$ & 32 \\
    \hline
    Grid.8*4.500 & 8 & 4 & $500^3$ & 32 \\
    \hline
    Grid.8*8.500 & 8 & 8 & $500^3$ & 64 \\
    \hline
  \end{tabular}
  \label{table:comp}
\end{table}
This section focuses on the execution time and the energy consumed by the MS application while running over the grid platform without using DVFS operations.
The energy consumptions of the synchronous and asynchronous MS were measured using the energy Equations \ref{eq:energy-grid} and \ref{eq:asyn_energy} respectively.
Figures \ref{fig:eng_time_ms} (a) and (b) show the energy consumption and the execution time, respectively, of the multi-splitting application running over a heterogeneous grid with different numbers of clusters and nodes per cluster.
The synchronous and the asynchronous versions of the MS application were executed over each scenario in Table \ref{table:comp}.
As shown in Figure \ref{fig:eng_time_ms} (a), the asynchronous MS consumes more energy than the synchronous one. Indeed, the asynchronous application overlaps the asynchronous communications with computations, and thus it executes more iterations than the synchronous one and has no slack times. More computations result in more dynamic energy consumption by the CPU in the asynchronous MS, and since the dynamic power is chosen to be four times higher than the static power, the asynchronous MS method consumes more overall energy than the synchronous one. However, the execution times of the experiments, presented in Figure \ref{fig:eng_time_ms} (b), show that the execution times of the asynchronous MS are smaller than those of the synchronous one. Indeed, in the asynchronous application the fast nodes do not have to wait for the slower ones in order to exchange data. So there are no slack times, and the fast nodes execute more iterations, which accelerates the convergence to the final solution.
\begin{figure}[h!]
  \centering
  \includegraphics[width=.80\textwidth]{fig/ch4/energy_ms.eps}\\~~~~~~(a)\\
  \includegraphics[width=.82\textwidth]{fig/ch4/time_ms.eps}\\~~~~~~~~(b)
  \caption{(a) energy consumption and (b) execution time of the multi-splitting application without applying the HSA algorithm}
  \label{fig:eng_time_ms}
\end{figure}
The synchronous and asynchronous MS methods scale well. The execution times of both methods decrease linearly with the increase of the number of computing nodes in the grid, whereas the energy consumption remains approximately the same when the number of computing nodes increases. Therefore, the energy consumption of this application is not directly related to the number of computing nodes.
\subsection{The results of the scaling factor selection algorithm}
\label{ch4:7:2}
The scaling factor selection algorithm (Algorithm~\ref{HSA-asyn}) was applied to both the synchronous and asynchronous MS applications, which were executed over the 8 possible scenarios presented in Table~\ref{table:comp}.
The DVFS algorithm (Algorithm~\ref{dvfs-heter}) needs to send and receive some information before calling the scaling factor selection algorithm (Algorithm~\ref{HSA-asyn}). The communications of the DVFS algorithm can be done synchronously or asynchronously, which results in four different versions of the application: synchronous or asynchronous MS with synchronous or asynchronous DVFS communications. Figures \ref{fig:eng_time_dvfs} (a) and (b) present the energy consumption and the execution time of the four different versions of the application running on all the scenarios of Table \ref{table:comp}.
\begin{figure}[h!]
  \centering
  \includegraphics[width=.82\textwidth]{fig/ch4/energy_dvfs.eps}\\~~~~~~~(a)\\
  \includegraphics[width=.80\textwidth]{fig/ch4/time_dvfs.eps}\\~~~~~~~~(b)
  \caption{(a) energy consumption and (b) execution time of the different versions of the multi-splitting application after applying the HSA algorithm}
  \label{fig:eng_time_dvfs}
\end{figure}
Figure \ref{fig:eng_time_dvfs} (a) shows that the energy consumption of all four versions of the method, running over the 8 grid scenarios described in Table \ref{table:comp}, is not affected by the increase in the number of computing nodes. The MS method without DVFS operations had the same behavior. On the other hand, Figure \ref{fig:eng_time_dvfs} (b) shows that the execution time of the MS application with DVFS operations decreases in inverse proportion to the number of nodes. Moreover, it can be noticed that the asynchronous MS with synchronous DVFS consumes less energy than the other versions of the method. Two reasons explain this energy consumption reduction:
\begin{itemize}
\item The asynchronous MS with synchronous DVFS version uses synchronous DVFS communications, which allow it to apply the newly computed frequencies at the beginning of the second iteration, thus reducing the dynamic energy consumed by the application from the second iteration until the end of the application. On the contrary, in the asynchronous DVFS versions, where the DVFS communications are asynchronous, the new frequencies cannot be computed at the end of the first iteration and consequently cannot be applied at the beginning of the second iteration.
Indeed, since the performance information gathered during the first iteration is not sent synchronously at the end of the first iteration, fast nodes might execute many iterations before receiving the performance information, computing the new frequencies based on this information and applying the newly computed frequencies. Therefore, many iterations might be computed by CPUs running at their highest frequency, consuming more dynamic energy than scaled down processors.

\item As shown in Figure \ref{fig:eng_time_ms} (b), the execution time of the asynchronous MS version is lower than the execution time of the synchronous MS version because there is no idle time in the asynchronous version and the communications are overlapped by computations. Since the consumption of static energy is proportional to the execution time, the asynchronous MS version consumes less static energy than the synchronous version.
\end{itemize}
\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.7]{fig/ch4/energy_saving.eps}
  \caption{The energy saving percentages after applying the HSA algorithm to the different versions and scenarios}
  \label{fig:energy_saving}
\end{figure}
The energy saving percentage is the ratio between the energy consumption reduction obtained after applying the HSA algorithm and the original energy consumption of the synchronous MS without DVFS.
Similarly, the performance degradation percentage is the ratio between the increase in the execution time after applying the HSA algorithm and the original execution time of the synchronous MS without DVFS.
Therefore, in this section, the synchronous MS method without DVFS serves as the reference when comparing the other methods in terms of energy saving, performance degradation and the distance between these two metrics.
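Concretely, the three reported metrics can be computed from the measured times and energies as in this small sketch (names are illustrative):

\begin{verbatim}
def report_metrics(e_ref, e_new, t_ref, t_new):
    # reference = synchronous MS without DVFS
    energy_saving = 100.0 * (e_ref - e_new) / e_ref      # positive: saves energy
    perf_degradation = 100.0 * (t_new - t_ref) / t_ref   # negative: speedup
    distance = energy_saving - perf_degradation          # tradeoff metric
    return energy_saving, perf_degradation, distance
\end{verbatim}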
In Figure \ref{fig:energy_saving}, the energy saving is computed for the four versions of the MS method, i.e. the synchronous or asynchronous MS applying the HSA algorithm synchronously or asynchronously.
The fifth version is the asynchronous MS without any DVFS operations. Figure \ref{fig:energy_saving} shows that some versions have positive or negative energy saving percentages, which means that the corresponding version respectively consumes less or more energy than the reference method.
As in Figure \ref{fig:eng_time_dvfs} (a), and for the same reasons presented above, the asynchronous MS with synchronous DVFS version gives the best energy saving percentage when compared to the other versions.
\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.7]{fig/ch4/perf_degra.eps}
  \caption{The results of the performance degradation}
  \label{fig:perf_degr}
\end{figure}

\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.7]{fig/ch4/dist.eps}
  \caption{The results of the tradeoff distance}
  \label{fig:dist}
\end{figure}
Figure \ref{fig:perf_degr} shows that some versions have negative performance degradation percentages, which means that the new execution time of a given version of the application is smaller than the execution time of the synchronous MS without DVFS.
Therefore, the version with the smallest negative performance degradation percentage actually has the best speedup when compared to the other versions. The version that gives the best execution time is the asynchronous MS without DVFS, which on average outperforms the synchronous MS without DVFS version by $16.9\%$. The worst case is the synchronous MS with synchronous DVFS, whose performance is on average degraded by $2.9\%$ when compared to the reference method.
The energy consumption and performance tradeoff between these five versions is presented in Figure \ref{fig:dist}.
These distance values are computed as the differences between the energy saving and the performance degradation percentages, as in the optimization Function (\ref{eq:max-grid}). Thus, the best MS version is the one that has the maximum distance between the energy saving and the performance degradation. The distance can be negative if the energy saving percentage is smaller than the performance degradation percentage.
The asynchronous MS applying the HSA algorithm synchronously gives the best distance, which is on average equal to $27.72\%$.
This version saves on average up to $22\%$ of energy and on average speeds up the application by $5.72\%$. This overall improvement is due to combining asynchronous computing and the synchronous application of the HSA algorithm.
The two platform scenarios, Grid 4*8 and Grid 8*4, use the same number of computing nodes but give different trade-off results.
The versions applying the HSA algorithm and running over the Grid 4*8 platform give higher distance percentages than those running on the Grid 8*4 platform. In the Grid 8*4 platform scenario, more clusters are used than in the Grid 4*8 platform, and thus the global system is divided into 8 small subsystems instead of 4. Indeed, each subsystem is assigned to a cluster and synchronously solved by the nodes of that cluster. Dividing the global system into smaller subsystems increases the number of outer iterations required for the global convergence of the system, because for a multi-splitting system the more the system is decomposed, the higher its spectral radius is. For example, the asynchronous MS applying the HSA algorithm synchronously requires on average 135 outer iterations when running over the Grid 4*8 platform and 148 outer iterations when running over the Grid 8*4 platform. The increase in the number of executed iterations over the Grid 8*4 platform explains the increase in energy consumption of the applications running over that platform.
\subsection{Comparing the number of iterations executed by the different MS versions}
The heterogeneity of the computing powers of the nodes in the grid has a direct effect on the number of iterations executed by the nodes of each cluster when running an asynchronous iterative message passing method. The fast nodes execute more iterations than the slower ones because the iterations are not synchronized.
On the other hand, in the synchronous versions, all the nodes of all the clusters execute the same number of iterations and have to wait for the slowest node to finish its iteration before starting the next one, because the iterations are synchronized.

When the fast nodes asynchronously execute more iterations than the slower ones, they consume more energy without significantly improving the global convergence of the system. Reducing the frequency of the fast nodes decreases the number of iterations they execute. If all the nodes, the fast and the slow ones, execute close numbers of iterations, the asynchronous application consumes less energy and its performance is not significantly affected.
Therefore, applying the HSA algorithm to asynchronous applications is very promising. In this section, the number of iterations executed by the asynchronous MS method, while solving a 3D problem of size $400^3$ with and without applying the HSA algorithm, is evaluated. Table \ref{table:sd} presents the standard deviation of the number of iterations executed by the asynchronous application over all the grid platform scenarios.
\begin{table}[h!]
  \caption{The standard deviation of the numbers of iterations for the different asynchronous MS versions running over different grid platforms}
  \centering
  \begin{tabular}{|l|l|l|l|}
    \hline
    \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Grid\\platform \end{tabular}}
    & \multicolumn{3}{c|}{Standard deviation} \\ \cline{2-4}
    & \begin{tabular}[c]{@{}l@{}}Asyn. MS without \\ HSA\end{tabular}
    & \begin{tabular}[c]{@{}l@{}}Asyn. MS with \\ Asyn. HSA\end{tabular}
    & \begin{tabular}[c]{@{}l@{}}Asyn. MS with \\ Syn. HSA\end{tabular} \\ \hline
    Grid.4*4.400 & 60.43 & 13.86 & 1.12 \\ \hline
    Grid.4*8.400 & 58.06 & 27.43 & 1.22 \\ \hline
    Grid.8*4.400 & 50.97 & 20.76 & 1.15 \\ \hline
    Grid.8*8.400 & 52.46 & 48.40 & 2.38 \\ \hline
  \end{tabular}
  \label{table:sd}
\end{table}
A small standard deviation value means that there is a very small difference between the numbers of iterations executed by the nodes, which means the fast nodes did not uselessly execute more iterations than the slower ones, and thus the application does not waste a lot of energy. As shown in Table \ref{table:sd}, the asynchronous MS applying the HSA algorithm synchronously has the best standard deviation values when compared to the other versions. Two reasons explain the advantage of this method:
\begin{itemize}
\item The applied HSA algorithm selects new frequencies that reduce the computation power of the fast nodes while maintaining the computation power of the slower nodes. Therefore, it tries to balance the computation powers of the heterogeneous nodes as much as possible.

\item Applying the HSA algorithm synchronously scales down the frequencies of the CPUs at the end of the first iteration of the application. Therefore, the computation powers of all the nodes are balanced as much as possible from the beginning of the application. On the other hand, applying the HSA algorithm asynchronously to the asynchronous MS application only changes the frequencies of the nodes after many iterations have been executed. Therefore, before the frequencies are scaled down, the fast nodes have enough time to execute many more iterations than the slower ones, which increases the overall energy consumption of the application.
\end{itemize}

Finally, the asynchronous MS version that does not apply the HSA algorithm gives the worst standard deviation values because there is a big difference between the numbers of iterations executed by the heterogeneous nodes. Therefore, this version consumes more energy than the other versions and thus saves less energy, as shown in Figure \ref{fig:eng_time_dvfs} (a).
\subsection{Comparing different power scenarios}
In the previous sections, all the results were obtained by assuming that the dynamic and the static powers are respectively equal to 80\% and 20\% of the total power consumed by a CPU during computation at its highest frequency. The goal of this section is to evaluate the proposed frequency scaling factors selection algorithm when these two power ratios are changed. Two new power scenarios are proposed in this section:
\begin{itemize}
\item The dynamic and the static powers are respectively equal to 90\% and 10\% of the total power consumed by a CPU during computation at its highest frequency.
\item The dynamic and the static powers are respectively equal to 70\% and 30\% of the total power consumed by a CPU during computation at its highest frequency.
\end{itemize}
The asynchronous MS method solving a 3D problem of size $400^3$ was executed over two platform scenarios, Grid 4*4 and Grid 8*4. Two versions of the asynchronous MS method, applying the HSA algorithm synchronously or asynchronously, were evaluated on each platform scenario.
The energy saving, performance degradation and distance percentages of both versions, over both platform scenarios and with the three power scenarios, are presented in Figures \ref{fig:three_power_syn} and \ref{fig:three_power_asyn}.
\begin{figure}[h!]
  \centering
  \includegraphics[width=.7\textwidth]{fig/ch4/three_powers_syn.eps}
  \caption{The results of the three power scenarios: synchronous application of the HSA algorithm}
  \label{fig:three_power_syn}
\end{figure}

\begin{figure}[h!]
  \centering
  \includegraphics[width=.7\textwidth]{fig/ch4/three_powers_Asyn.eps}
  \caption{The results of the three power scenarios: asynchronous application of the HSA algorithm}
  \label{fig:three_power_asyn}
\end{figure}

\begin{figure}[h!]
  \centering
  \includegraphics[scale=.7]{fig/ch4/three_scenarios.pdf}
  \caption{Comparison of the frequency scaling factors selected by the HSA algorithm for the three power scenarios}
  \label{fig:three_scenarios}
\end{figure}
The displayed results are the averages of the percentages obtained from multiple runs.
Both figures show that the \np[\%]{90}-\np[\%]{10} power scenario gives the biggest energy saving percentages.
The high dynamic power ratio pushes the HSA algorithm to select bigger scaling factors, which sharply decreases the dynamic energy consumption, since the latter scales as $S_{ij}^{-2}$. Figure \ref{fig:three_scenarios} shows that, for the same application, the HSA algorithm selects higher frequency scaling factors in the \np[\%]{90}-\np[\%]{10} power scenario than in the other power scenarios. Moreover, the \np[\%]{90}-\np[\%]{10} power scenario has the smallest static power consumption per CPU, which reduces the effect of the performance degradation, due to scaling down the frequencies of the CPUs, on the total energy consumption of the application. Finally, the \np[\%]{90}-\np[\%]{10} power scenario gives higher distance percentages than the other two scenarios, which means that the difference between the energy reduction and the performance degradation percentages is the highest in this scenario. From these observations, it can be concluded that in a platform with CPUs that consume low static power and high dynamic power, a lot of energy consumption can be saved by applying the HSA algorithm, but the performance degradation might be significant.

The energy saving percentages are the smallest with the \np[\%]{70}-\np[\%]{30} power scenario. The high static power consumption in this scenario forces the HSA algorithm to select small scaling factors in order not to significantly decrease the performance of the application. Indeed, scaling down the frequencies of the CPUs further would significantly increase the total execution time and consequently increase the static energy consumption, which would outweigh the reduction of the dynamic energy consumption. Finally, since the dynamic power consumption ratio is relatively small in this power scenario, less dynamic energy reduction can be gained by lowering the frequencies of the CPUs than in the other power scenarios. On the other hand, the main advantage of the \np[\%]{70}-\np[\%]{30} power scenario is that its performance suffers the least from the application of the HSA algorithm. From these observations, it can be concluded that in a high static power model, just a small percentage of energy can be saved by applying the HSA algorithm.
The asynchronous application of the HSA algorithm on average improves the performance of the application more than the synchronous application of the HSA algorithm. This difference can be explained by the fact that applying the HSA algorithm synchronously scales down the frequencies of the CPUs right after the first iteration, while applying the HSA algorithm asynchronously only scales them down after many iterations, depending on the heterogeneity of the platform.
However, for the same reasons as above, the synchronous application of the HSA algorithm reduces the energy consumption more than the asynchronous one, even though the version applying the HSA algorithm synchronously has a longer execution time than the one applying it asynchronously.
\subsection{Comparing the HSA algorithm to the energy and delay product method}
\label{ch4:7:5}
Many methods have been proposed to optimize the trade-off between the energy consumption and the performance of message passing applications. A well known optimization model used to solve this problem is the energy and delay product, $\mathit{EDP}=\mathit{energy}\times \mathit{delay}$.
In \cite{ref100,ref60,ref55}, the researchers used equal weights for the energy and delay factors.
However, others added weights to the factors in order to direct the optimization towards more energy saving or less performance degradation. For example, in~\cite{ref71} the product $\mathit{EDP}=\mathit{energy}\times \mathit{delay}^2$ was used, which favours performance over energy consumption reduction.

In this work, the proposed scaling factors selection algorithm optimizes both the energy consumption and the performance at the same time and gives the same weight to both factors, as in Equation \ref{eq:max-grid}. In this section, to evaluate the performance of the HSA algorithm, it is compared to the algorithm proposed by Spiliopoulos et al. \cite{ref67}. The latter is an online method that selects, for each processor, the frequency that minimizes the energy and delay product in order to reduce the energy consumption of a parallel application running over a homogeneous multi-core platform. It gives the same weight to both metrics and predicts both the energy consumption and the execution time for each frequency gear, as does the HSA algorithm.
To fairly compare the HSA algorithm with the algorithm of Spiliopoulos et al., the same energy models, Equation (\ref{eq:energy-grid}) or (\ref{eq:asyn_energy}), and execution time models, Equation (\ref{eq:perf-grid}) or (\ref{eq:asyn_perf}), are used to predict the energy consumptions and the execution times.
The EDP objective function can be equal to zero when the predicted delay is equal to zero. Moreover, this product is equal to zero before applying any DVFS operation. To eliminate the zero values, the EDP function must take the following form:
\begin{equation}
  EDP = E_{Norm} \times (1+ D_{Norm})
\end{equation}
where $E_{Norm}$ is the normalized energy consumption, computed as in Equation (\ref{eq:enorm}), and $D_{Norm}$ is the normalized delay of the execution time, computed as follows:
\begin{equation}
  D_{Norm}= 1 -P_{Norm}= 1- (\frac{T_{old}}{T_{new}})
\end{equation}
where $P_{Norm}$ is computed as in Equation (\ref{eq:pnorm}). Furthermore, the EDP algorithm starts the search process from the initial frequencies computed as in Equation (\ref{eq:Fint}). It stops the search process when it reaches the minimum available frequency for each processor. The EDP algorithm was applied to the synchronous and asynchronous MS algorithm solving a 3D problem of size $400^3$. Two platform scenarios, Grid 4*4 and Grid 4*8, were chosen for this experiment. The EDP method was applied synchronously and asynchronously to the MS application, as was done for the HSA algorithm. The comparison results of the EDP and HSA algorithms are presented in Figures \ref{fig:compare_syndvfs_synms}, \ref{fig:compare_syndvfs_asynms}, \ref{fig:compare_asyndvfs_synms} and \ref{fig:compare_asyndvfs_asynms}. Each of these figures presents the energy saving, performance degradation and distance percentages for one version of the MS algorithm. The results shown in these figures are also the averages of the results obtained from running each version of the MS method over the two platform scenarios described above.
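For clarity, the shifted EDP metric used for this comparison can be written as the following small sketch:

\begin{verbatim}
def edp_metric(e_original, e_reduced, t_old, t_new):
    e_norm = e_reduced / e_original   # normalized energy, Equation (eq:enorm)
    d_norm = 1.0 - t_old / t_new      # normalized delay (1 - Pnorm)
    # shifted by 1 so the product cannot vanish when the delay is zero
    return e_norm * (1.0 + d_norm)
\end{verbatim}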
\begin{figure}[h!]
  \centering
  \includegraphics[width=.7\textwidth]{fig/ch4/compare_syndvfs_synms.eps}
  \caption{Synchronous application of the frequency scaling selection method on the synchronous MS version}
  \label{fig:compare_syndvfs_synms}
\end{figure}

\begin{figure}[h!]
  \centering
  \includegraphics[width=.7\textwidth]{fig/ch4/compare_syndvfs_asynms.eps}
  \caption{Synchronous application of the frequency scaling selection method on the asynchronous MS version}
  \label{fig:compare_syndvfs_asynms}
\end{figure}

\begin{figure}[h!]
  \centering
  \includegraphics[width=.7\textwidth]{fig/ch4/compare_asyndvfs_synms.eps}
  \caption{Asynchronous application of the frequency scaling selection method on the synchronous MS version}
  \label{fig:compare_asyndvfs_synms}
\end{figure}

\begin{figure}[h!]
  \centering
  \includegraphics[width=.7\textwidth]{fig/ch4/compare_asyndvfs_asynms.eps}
  \caption{Asynchronous application of the frequency scaling selection method on the asynchronous MS version}
  \label{fig:compare_asyndvfs_asynms}
\end{figure}
All the figures show that the proposed HSA algorithm outperforms the EDP algorithm in terms of energy saving and performance degradation. The EDP algorithm gave negative trade-off values for some scenarios, which means that the performance degradation percentages are higher than the energy saving percentages, while the HSA algorithm gives positive trade-off values over all the scenarios.
The frequency scaling factors selected by the EDP algorithm are most of the time higher than those selected by the HSA algorithm, as shown in Figure \ref{fig:three_methods}.
These results confirm that higher frequency scaling factors do not always give more energy saving, especially when the overall execution time is drastically increased. Therefore, the HSA method, which computes the maximum distance between the energy saving and the performance degradation, is an effective method to optimize these two metrics at the same time.
\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.6]{fig/ch4/compare_scales.eps}
  \caption{Comparison of the frequency scaling factors selected by the two algorithms over the Grid 4*4 platform scenario}
  \label{fig:three_methods}
\end{figure}
\section{The experimental results over Grid'5000}
\label{ch4:8}
The performance of Algorithm~\ref{HSA-asyn} was evaluated by executing the iterative multi-splitting method on the Grid'5000 testbed \cite{ref21}.
This testbed is a large-scale platform that consists of ten sites distributed all over metropolitan France and Luxembourg. Moreover, some sites are equipped with power measurement tools that capture the power consumption of each node on those sites. The same method for computing the dynamic power consumption as described in Section \ref{ch3:4} is used.
Table \ref{table:grid5000} presents the characteristics of the selected clusters, which are located on four different sites.
\begin{table}[h!]
  \caption{CPU characteristics of the selected clusters}
  \centering
  \begin{tabular}{|*{7}{c|}}
    \hline
    Cluster  & CPU     & Max Freq. & Min Freq. & Diff. Freq. & Site   & Dynamic power \\
    Name     & model   & GHz       & GHz       & GHz         &        & of one core   \\
    \hline
    Taurus   & Intel   & 2.3       & 1.2       & 0.1         & Lyon   & \np[W]{35} \\
             & E5-2630 &           &           &             &        &            \\
    \hline
    Graphene & Intel   & 2.53      & 1.2       & 0.133       & Nancy  & \np[W]{23} \\
             & X3440   &           &           &             &        &            \\
    \hline
    Parapide & Intel   & 2.93      & 1.6       & 0.133       & Rennes & \np[W]{23} \\
             & X5570   &           &           &             &        &            \\
    \hline
    StRemi   & AMD     & 1.7       & 0.8       & 0.2         & Reims  & \np[W]{6} \\
             & 6164 HE &           &           &             &        &           \\
    \hline
  \end{tabular}
  \label{table:grid5000}
\end{table}
The dynamic power of each core at maximum frequency is computed as the difference between the power measured when the core is computing at maximum frequency and the power measured when that core is idle, as in Equation (\ref{eq:pdyn}). The CPU constructors do not specify the amount of static power their CPUs consume. Therefore, the static power consumption is assumed to be equal to \np[\%]{20} of the dynamic power consumption.
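As a small worked sketch of this measurement step (the two input values would come from the testbed's power probes; the 20\% ratio is the assumption stated above):

\begin{verbatim}
def core_powers(p_computing_at_fmax, p_idle):
    # Equation (eq:pdyn): dynamic power is the measured power while the
    # core computes at its maximum frequency minus its idle power.
    pd = p_computing_at_fmax - p_idle
    ps = 0.20 * pd   # static power assumed to be 20% of the dynamic power
    return pd, ps
\end{verbatim}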
The experiments were conducted on problems of size $N=400^3$ and $N=500^3$ over the 4 distributed clusters described in Table \ref{table:grid5000}. Each cluster is composed of 8 homogeneous nodes.
688 Algorithm~\ref{HSA-asyn} was applied synchronously and asynchronously to both synchronous and asynchronous MS applications.
689 Figures \ref{fig:time-compare} and \ref{fig:energy-compare} show the energy consumption and the execution time of the multi-splitting application with and without the application of the HSA algorithm respectively.
690 The asynchronous MS consumes more energy than the synchronous one.
691 Also, it can be noticed that both the asynchronous and synchronous MS with synchronous application of the HSA algorithm consume less energy than the other versions of the application. Synchronously applying the HSA algorithm allows them to scale down the CPUs' frequencies at the beginning of the second iteration. Thus, the consumption of dynamic energy by the application is reduced from the second iteration until the end of the application. On the contrary, with the asynchronous application of the HSA algorithm, the new frequencies cannot be computed at the end of the first iteration and consequently cannot be applied at the beginning of the second iteration. Indeed, since the performance information gathered during the first iteration is not sent synchronously at the end of the first iteration, fast nodes might execute many iterations before receiving the performance information, computing the new frequencies based on this information and applying the new computed frequencies. Therefore, many iterations might be computed by CPUs running on their highest frequency and consuming more dynamic energy than the scaled down processors.
Moreover, the execution time of the asynchronous MS version is shorter than that of the synchronous MS version because there is no idle time in the asynchronous version and its communications are overlapped by computations. Since the consumption of static energy is proportional to the execution time, the asynchronous MS version consumes less static energy than the synchronous one.
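Since the static energy is the product of the static power and the execution time, a shorter run translates directly into static energy savings, as the following toy computation with hypothetical execution times illustrates.
\begin{verbatim}
# The static energy is proportional to the execution time, so the shorter
# asynchronous run consumes less static energy (illustrative numbers).
p_static = 7.0                   # W per core (20% of 35 W dynamic power)
t_sync, t_async = 100.0, 80.0    # hypothetical execution times in seconds
print(p_static * t_sync, p_static * t_async)  # 700.0 J vs 560.0 J
\end{verbatim}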
\includegraphics[width=.8\textwidth]{fig/ch4/time-compare.eps}
\caption{Comparison of the execution times}
\label{fig:time-compare}
\includegraphics[width=.8\textwidth]{fig/ch4/energy-compare.eps}
\caption{Comparison of the energy consumption}
\label{fig:energy-compare}
\begin{tabular}{|l|l|l|l|l|}
Size & Method &\begin{tabular}[c]{@{}l@{}}Energy\\ saving \%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Perf. \\ degra. \%\end{tabular} & Distance \% \\ \hline
\multirow{4}{*}{400} & Sync MS with Sync DVFS & 23.16 & 4.12 & 19.04 \\ \cline{2-5}
& Sync MS with Async DVFS & 18.36 & 2.59 & 15.77 \\ \cline{2-5}
& Async MS with Sync DVFS & 26.93 & -21.48 & 48.41 \\ \cline{2-5}
& Async MS with Async DVFS & 14.90 & -26.41 & 41.31 \\ \hline
\multirow{4}{*}{500} & Sync MS with Sync DVFS & 24.57 & 3.15 & 21.42 \\ \cline{2-5}
& Sync MS with Async DVFS & 19.97 & 0.60 & 19.37 \\ \cline{2-5}
& Async MS with Sync DVFS & 20.69 & -10.95 & 31.64 \\ \cline{2-5}
& Async MS with Async DVFS & 9.06 & -18.22 & 27.28 \\ \hline
\caption{The experimental results of the HSA algorithm}
Table \ref{table:exper} shows that there are positive and negative performance
degradation percentages. A negative value means that the execution time of the given version of the application is shorter than the execution time of the synchronous MS without DVFS.
Therefore, the version with the lowest (most negative) performance degradation percentage actually has the best speedup compared to the other versions.
The energy consumption and performance trade-offs between these four versions can be computed with the optimization function
(\ref{eq:max-grid}). The asynchronous MS with the synchronous application of the HSA algorithm gives the best distance, which is equal to $48.41\%$.
This version saves up to $26.93\%$ of the energy consumption and even reduces the execution time of the application by
$21.48\%$. This overall improvement is due to combining asynchronous computing with the synchronous application of the HSA algorithm.
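As a sanity check, the distances in Table~\ref{table:exper} can be recomputed directly from the energy saving and performance degradation columns; the short sketch below does this for the $N=400^3$ rows.
\begin{verbatim}
# Distance = energy saving % - performance degradation %, so a negative
# degradation (a speedup) enlarges the distance. N = 400^3 rows of the table.
results = {
    "Sync MS with Sync DVFS":   (23.16,   4.12),
    "Sync MS with Async DVFS":  (18.36,   2.59),
    "Async MS with Sync DVFS":  (26.93, -21.48),
    "Async MS with Async DVFS": (14.90, -26.41),
}
distances = {name: saving - degradation
             for name, (saving, degradation) in results.items()}
best = max(distances, key=distances.get)
print(best, round(distances[best], 2))  # Async MS with Sync DVFS 48.41
\end{verbatim}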
Finally, this section shows that the results obtained over Grid'5000 are comparable to
the simulation results of Section \ref{ch4:7:2}: in both cases, the asynchronous MS with the synchronous application of the HSA algorithm is the best version. Moreover, the Grid'5000 results are better
than the simulation ones because its computing clusters are more heterogeneous in terms of computing power and network characteristics. For example, the StRemi cluster has a smaller computing power than the other three clusters of the Grid'5000 platform.
As a result, the increased idle times force the proposed algorithm to select bigger scaling factors, and thus to save more energy.
\subsection{Comparing the HSA algorithm to the energy and delay product method}
The EDP algorithm, described in Section \ref{ch4:7:5}, was applied synchronously and asynchronously to both the synchronous and asynchronous MS applications of size $N=400^3$. The experiments were conducted over the 4 distributed clusters described in Table \ref{table:grid5000}, and 8 homogeneous nodes were used from each cluster.
Table \ref{table:comapre} presents the energy saving, performance degradation and distance percentages obtained when applying the EDP method to the four MS versions.
Figure \ref{fig:compare} compares the distance percentages, computed as the difference between the energy saving and performance degradation percentages, of the EDP and HSA
algorithms. This comparison shows that the proposed HSA algorithm gives a better trade-off between energy reduction and performance than the EDP method. The results of the EDP method over Grid'5000 are better than those obtained by simulation because of the higher heterogeneity between the computing clusters of Grid'5000, as mentioned before.
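To make the contrast between the two objective functions concrete, the sketch below evaluates both criteria over a one-dimensional set of candidate scaling factors. The toy models \texttt{E(s)} and \texttt{T(s)} are illustrative stand-ins for the chapter's energy and performance predictions, not the actual models: the EDP method minimizes the energy-delay product, whereas the HSA algorithm maximizes the distance between the normalized performance and the normalized energy.
\begin{verbatim}
# Toy stand-ins for the predicted energy and time at scaling factor s
# (illustrative only; the real models are multi-node and per-cluster).
def E(s):
    return 0.8 * s**2 + 0.2 / s   # dynamic part shrinks, static part grows

def T(s):
    return 1.0 / s                # lower frequency -> longer execution

candidates = [0.5 + 0.05 * i for i in range(11)]       # s in [0.5, 1.0]

best_edp = min(candidates, key=lambda s: E(s) * T(s))  # EDP: minimize E*T
best_hsa = max(candidates,                             # HSA: maximize the
               key=lambda s: T(1.0) / T(s) - E(s) / E(1.0))  # distance
print(round(best_edp, 2), round(best_hsa, 2))
\end{verbatim}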
\caption{Results of the EDP algorithm over the Grid'5000 testbed}
\label{table:comapre}
\begin{tabular}{|l|l|l|l|}
Method name & Energy saving \% & Perf. degra. \% & Distance \% \\ \hline
Sync MS with Sync DVFS & 21.83 & 12.78 & 9.05 \\ \hline
Sync MS with Async DVFS & 18.26 & 7.68 & 10.58 \\ \hline
Async MS with Sync DVFS & 24.95 & -12.24 & 37.19 \\ \hline
Async MS with Async DVFS & 10.32 & -17.04 & 27.36 \\ \hline
\includegraphics[scale=0.65]{fig/ch4/compare.eps}
\caption{Comparison of the trade-off percentages of the HSA and EDP methods over the Grid'5000 testbed}
\section{Conclusions}

This chapter presents a new online frequency selection algorithm for asynchronous iterative
applications running over a grid. It selects the best vector of frequencies, the one that maximizes
the distance between the predicted energy consumption and the predicted execution time.
The algorithm uses new
energy and performance models to predict the energy consumption and the execution time of asynchronous or hybrid message passing iterative applications running over grids.
The proposed algorithm was evaluated with both the SimGrid simulator and the Grid'5000 testbed while running a multi-splitting (MS) application that solves 3D problems.
The experiments were executed over different
grid scenarios composed of different numbers of clusters and different numbers of nodes per cluster.
The HSA algorithm was applied synchronously and asynchronously to a synchronous and an asynchronous version of the MS application. Both the simulation and the real experiment results show that applying the HSA algorithm synchronously to an asynchronous MS application gives the best trade-off between energy consumption reduction and performance compared to the other scenarios.
In the simulation results, this scenario reduces the energy consumption by 22\% on average and reduces the execution time of the application by 5.72\%. This version reduces both the dynamic energy consumption, by synchronously applying the HSA algorithm at the end of the first iteration, and the static energy consumption, by using asynchronous communications between nodes from different clusters, which are overlapped by computations.

The HSA algorithm was also evaluated over three power scenarios. As expected, the algorithm selects a different vector of frequencies for each power scenario. The highest reduction in energy consumption was achieved in the power scenario with the highest dynamic power, and the lowest performance degradation was obtained in the power scenario with the highest static power.
The proposed algorithm was compared to another method that
uses the well-known energy and delay product as an objective function.
The comparison results showed that the proposed algorithm outperforms the latter
by selecting a vector of frequencies that gives a better trade-off between the energy
consumption reduction and the performance.
The experiments conducted over Grid'5000 showed that applying the HSA algorithm synchronously to an asynchronous MS application reduces the energy consumption by 26.93\% and the execution time of the application by 21.48\%. These results are better than the simulation ones because of the higher level of heterogeneity between the clusters of Grid'5000 compared to the simulated grid platform.