%% %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% declaration of the new block
\algblock{ParFor}{EndParFor}
% customising the new block
\algnewcommand\algorithmicparfor{\textbf{parfor}}
\algnewcommand\algorithmicpardo{\textbf{do}}
\algnewcommand\algorithmicendparfor{\textbf{end\ parfor}}
\algrenewtext{ParFor}[1]{\algorithmicparfor\ #1\ \algorithmicpardo}
\algrenewtext{EndParFor}{\algorithmicendparfor}
\chapter{Parallel Architectures and Iterative Applications}
\label{ch1}
%% Introduction
With each new generation of microprocessors, users of sequential applications expected their applications to run faster on the new processors than on the previous ones.
Nowadays, this expectation is no longer valid because recent microprocessors embed many computing units in one chip, whereas a sequential program only runs over one of these computing units.
Indeed, only applications that have been parallelized for the new architectures significantly improve their performance compared to traditional sequential applications.
To improve the performance of applications, they should be parallelized and executed simultaneously over all the available computing units.
Moreover, parallel applications should be optimized for the parallel hardware that will execute them.
Therefore, parallel applications and parallel architectures are closely tied together.
For example, the energy consumption of a parallel system mainly depends on both: (1) the parallel applications and (2) the parallel architectures. Indeed, an energy consumption model or any measurement system depends on many specifications, some of them are related to the parallel hardware features such as: (1) the frequency of the processor, (2) the power consumption of the processor and (3) the communication model. Others are related to the parallel application such as: (1) the computation time and (2) the communication time of the application.
The work of this thesis focuses on studying parallel iterative applications, where different parallel architectures
are used to execute them in parallel, while optimizing their energy consumption.
In this context, this chapter gives a brief overview of parallel hardware architectures and parallel iterative applications. It also discusses an energy model proposed by other authors that is used to measure the energy consumption of these applications.
The remainder of this chapter is organized as follows: section \ref{ch1:2} describes the different types of parallelism and the different types of parallel platforms. It also explains some models of parallel programming. Section \ref{ch1:3} discusses both types of parallel iterative methods, synchronous and asynchronous ones, and compares them. Section \ref{ch1:4} presents a well-accepted energy model from the state of the art that can be used to measure the energy consumption of parallel iterative applications when the frequency of the processor is changed. Finally, section \ref{ch1:5} summarizes this chapter.
\section{Parallel Computing Architectures}
\label{ch1:2}
The process of executing the calculations simultaneously over many computing units is called parallel computing.
Its main principle refers to the ability of dividing a large problem into smaller sub-problems that can be solved at the same time \cite{ref2}.
The sub-problems of a main problem are then solved in parallel on multiple processors.
Indeed, a parallel architecture can be defined as a computing system that is composed of many processing elements, which are connected via a network model and some tools that are used to make the processing units work together \cite{ref1}.
In other words, the parallel computing architecture consists of software and hardware resources.
Hardware resources are: (1) the processing units, (2) the memory model and (3) the network system that connects them. Software resources include: (1) the specific operating system, (2) the programming language and (3) the compiler or the runtime libraries. Besides, parallel computing may have different levels of parallelism that can be performed at a software or a hardware level. Five types of parallelism levels have been defined as follows:
\begin{itemize}
\item \textbf{Bit-level parallelism (BLP)}: The appearance of very-large-scale integration (VLSI) in the 1970s has been viewed as the first step towards parallel computing. It is used to increase the number of bits in the word size processed by a processor, as illustrated in figure~\ref{fig:ch1:1}. For many successive years, the number of bits has been increased, starting from 4-bit to 64-bit microprocessors. Nowadays, for example, the x86-64 architecture is the most common one. For a given application, the bigger the word size is, the fewer instructions the processor has to execute.
\begin{figure}[h!]
\centering
\label{fig:ch1:1}
\end{figure}
\item \textbf{Data-level parallelism (DLP)}: Data parallelism is the process of distributing a data vector between processors, where each one performs the same operations on its data sub-vector. Therefore, many arithmetic operations can be performed on the same data vector in a simultaneous manner. This type of parallelism can be used in many programs, especially in the area of scientific computing. Usually, data-parallel operations are only provided for array operations, as shown for example in figure \ref{fig:ch1:2}. Vector multiplication, image and signal processing are examples of applications that use this type of parallelism.
\begin{figure}[h!]
\centering
\item \textbf{Thread-level parallelism (TLP)}: It is also known as task-level parallelism.
According to Moore's law \cite{ref9}, the number of transistors in a processor doubles every two years to increase its performance. Cache and main memory sizes must also be increased in order to avoid data bottlenecks.
However, increasing the number of transistors may generate some issues: (1) the first issue is related to the drastic increase in cache size, which leads to a large access time; (2) the second issue is related to the huge increase in the number of transistors per CPU, which can significantly increase the heat dissipation.
Thus, CPU constructors could not increase the frequency of the processor anymore due to these reasons. Therefore, they created multi-core processors. With multi-core processors, programmers subdivide their programs into multiple tasks which can then be executed in parallel over the cores to improve the performance, see figure~\ref{fig:ch1:4}. Each processor can have individual threads or multiple threads dedicated to each task. A thread can be defined as a part of the parallel program that shares processor resources with other threads.
\begin{figure}[h!]
\centering
Sequential~execution~time = \sum_{i=1}^{N} T_i
\end{equation}
Whereas, if the tasks are executed synchronously over multiple processing units in parallel, the execution time of the program is defined as the execution time of the task that has the maximum execution time (the slowest task) as follows:
\begin{equation}
\label{ch1:eq2}
The main goal behind using a parallel architecture is to solve a big problem faster.
A collection of processing elements must work together to compute the final solution of the main problem. Many different architectures have been proposed
and classified according to parallelism in instruction and data
streams. In 1966, Michael Flynn proposed a simple model to categorize all computer architectures that is still useful nowadays \cite{ref10}. His taxonomy is based on considering the data and the operations performed on this data to classify computing systems into four types as follows:
\begin{itemize}
   The first version of MPI was designed by a group of researchers in
   1991. It is a specification that has been implemented in many programming
   languages such as C, Fortran and
   Java.
   The MPI functions are not limited to point-to-point operations for
   sending and receiving messages; there are also many collective
   operations such as gathering and reduction operations.
   While MPI is not designed for grids,
   it is widely used as the communication interface for grid applications
   \cite{ref52}.
   In this work, MPI was used to program our algorithms and applications, which are
   implemented in both the Fortran and C programming languages.
\end{itemize}
\item \textbf{GPU programming models}
\begin{itemize}
\item \textbf{CUDA} \cite{ref37} Modern graphical processing units (GPUs) have increased their chip-level
      parallelism. Current NVIDIA GPUs are many-core processors that have
      thousands of cores. To make their GPUs general purpose computing processors,
      NVIDIA developed CUDA, a parallel programming language, in 2007.
      A CUDA program has two parts: host and kernels. The host code is sequentially
      executed over the CPU,
      while the kernels are executed in parallel over the GPU.
\item \textbf{OpenCL}\cite{ref38} stands for Open Computing Language. It is a parallel
      programming language dedicated to heterogeneous platforms composed
      of CPUs and GPUs. The first release of this language was initially developed
      by Apple in 2008. Functions that are executed over OpenCL devices are called kernels.
      They are portable and can be executed on any computing hardware such as CPU or GPU
      cores.
\section{Iterative Methods}
\label{ch1:3}
In this work, we are interested in solving systems of linear equations, which are well known in the scientific area.
Such a system is generally expressed in the following form:
\begin{equation}
The first type of methods is called \textbf{Direct methods}, which consist of a finite number of steps, depending on the
size of the linear system, to give the exact solution. If the problem size is very big, these methods are expensive or their
solutions are impossible to compute in some cases. The second type is called \textbf{Iterative methods}, which compute
the same block of operations several times, starting from an initial vector until reaching an acceptable
approximation of the exact solution. Moreover, iterative methods can be effectively parallelized and can be used to solve both linear and non-linear equations.
The sequential iterative algorithm is typically organized as a series of steps essentially of the form:
\State Initialize the sub-vectors $(X_1^0,\dots,X_M^0)$
\For {$k:=1$ step $1$ to \textit{convergence}}
    \ParFor {$i:=1$ to \textit{M}}
        \State $X_i^{k+1} = F_i(X^k)$
    \EndParFor
\EndFor
The algorithm \ref{spia} represents the synchronous parallel iterative algorithm. Similarly to
the sequential iterative algorithm \ref{spia}, this algorithm stops iterating when the convergence condition is satisfied.
We consider that the keyword \textbf{parfor} is used to execute a for loop in parallel.
This algorithm needs to satisfy a convergence condition, which is called the global convergence condition. In order to detect the global convergence over all the computing units, we first need to compute
at each iteration the local residual. Then at the end of each iteration, all the local residuals from $M$ computing units must be reduced to one maximum value represented by the global residual.
For example, in MPI this operation is directly applied using a high level communication procedure called \textit{AllReduce}. The goal of this communication procedure is to apply the reduction operation on all local residuals computed by the computing units.
\label{fig:ch1:16}
\end{figure}
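As an illustration, the following C/MPI fragment is a minimal sketch of such a global convergence test (the function and variable names are chosen for illustration and are not taken from the applications used in this work):
\begin{verbatim}
#include <mpi.h>

/* Each computing unit calls this function at the end of an iteration with
   its local residual.  MPI_Allreduce reduces all the local residuals to
   their maximum value (the global residual) and returns it to every
   process, which can then test the global convergence condition. */
int global_convergence(double local_residual, double epsilon)
{
    double global_residual;
    MPI_Allreduce(&local_residual, &global_residual, 1, MPI_DOUBLE,
                  MPI_MAX, MPI_COMM_WORLD);
    return global_residual < epsilon;   /* 1 when the application converged */
}
\end{verbatim}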
Furthermore, the communications of the synchronous iterative algorithm can be replaced by asynchronous ones. The resulting algorithm is called Synchronous Iterations with Asynchronous
Communications and is denoted as the \textbf{SIAC} algorithm. The main principle of this algorithm is to use synchronized iterations while exchanging the data between the computing units asynchronously.
Moreover, each computing unit does not need to wait for its neighbours to receive the data messages
it has sent, it only waits to receive data from them. This can be implemented by modifying the SISC algorithm programmed in MPI: the synchronous sends of the messages are replaced by asynchronous ones, while keeping
with computations, see figure \ref{fig:ch1:16}. The idle times are not totally eliminated because the
\subsection{Asynchronous Parallel Iterative method}
\label{ch1:3:2}
The asynchronous iterations mean that all processors perform their iterations without considering the work of the other processors. Each processor does not have to wait to receive
data messages from other processors and continues to compute the next iteration using the last data received from its neighbours. Therefore, there are no idle times at all between the iterations, as shown in Figure \ref{fig:ch1:17}. This figure indicates that fast processors can perform more iterations than the slower ones at the same time.
The asynchronous iterative algorithm that uses asynchronous communications is called the \textbf{AIAC} algorithm. Similarly to the SISC algorithm, the AIAC algorithm subdivides the global vector $X$ into $M$ sub-vectors distributed between the computing units. The main difference between the two algorithms is that these $M$ sub-vectors are not updated at each iteration in the AIAC algorithm because both iterations and communications are asynchronous.
\label{fig:ch1:17}
\end{figure}
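To illustrate how such asynchronous communications can be organized, the following C/MPI sketch shows a non-blocking reception where the computation always uses the last data that has actually arrived from a neighbour (the names are illustrative and this is not the actual implementation used in this work):
\begin{verbatim}
#include <mpi.h>

/* req must have been initialized by a first MPI_Irecv on recv_buf.
   If new data has arrived, it is copied into neighbour_data and the
   non-blocking receive is re-posted; otherwise the computation simply
   keeps using the previously received values. */
void refresh_neighbour_data(double *neighbour_data, double *recv_buf,
                            int n, int neighbour, MPI_Request *req)
{
    int arrived = 0;
    MPI_Test(req, &arrived, MPI_STATUS_IGNORE);
    if (arrived) {
        for (int i = 0; i < n; i++)
            neighbour_data[i] = recv_buf[i];
        MPI_Irecv(recv_buf, n, MPI_DOUBLE, neighbour, 0,
                  MPI_COMM_WORLD, req);
    }
}
\end{verbatim}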
The global convergence detection of the asynchronous parallel iterative methods is not trivial.
For more information about the convergence detection techniques of asynchronous iterative methods, refer to \cite{ref40,ref41,ref42,ref43}.
to receive the data messages from its neighbours to compute the next iteration.
\item Less sensitive to heterogeneous communications and to differences in the nodes' computing powers. In a heterogeneous
      platform, the fast nodes do not need to wait for the slow ones, and they can perform more iterations
      than them. In contrast, in the traditional synchronous iterative methods, the fast computing nodes perform
      the same number of iterations as the slow ones because they are blocked.
\item In the grid architecture, the local clusters from different sites are
connected via a slow network with a high latency. The use of the AIAC model
reduces the delay of sending the data message over such slow network link and thus the performance
- of the applications is not effected.
+ of the applications is not affected.
\end{itemize}
\end{itemize}
In the work of this thesis, we are interested in optimizing the energy consumption of parallel iterative
methods running over clusters or grids.
This energy model has the following two drawbacks:
\begin{itemize}
\item The message passing iterative program consists of communication and computation times.
This energy model assumes that the dynamic power is consumed during both of these times.
However, the processor remains idle during the communication times and only consumes the static
power, for more details see \cite{ref53}.
In the second section, the two types of parallel iterative methods, synchronous and asynchronous ones, were presented. The synchronous iterative methods are well adapted to local homogeneous clusters with a high speed network link, while the asynchronous iterative methods are more suited to distributed heterogeneous clusters.
Finally, in the third section, an energy consumption model proposed in the state of the art to measure the energy consumption of parallel applications was explained. This model cannot be used for all types of parallel architectures since it assumes that the dynamic power is consumed during both the communication and computation times, while the processor involved remains idle during the communication times and only consumes the static power. Moreover, it is not well adapted to heterogeneous architectures where there are different types of processors that consume different dynamic and static powers.
For these reasons, in the next chapters of this thesis, new energy consumption models are developed to efficiently predict the energy consumed by parallel iterative methods running on both homogeneous and heterogeneous architectures. Additionally, these energy models are used in a method that optimizes both the energy consumption and the performance of an iterative message passing application.
Dynamic Voltage and Frequency Scaling (DVFS) can be applied to modern CPUs. This technique is
usually used to reduce the energy consumed by a CPU while computing. Indeed,
power consumption by a processor is exponentially related to its frequency.
Thus, decreasing the frequency reduces the power consumed by the CPU. However, it can also
significantly affect the performance of the executed program if it is compute bound. The performance degradation ratio can even be higher
than the saved energy ratio. Therefore, the chosen frequency scaling factor must give the best possible
trade-off between energy reduction and performance. This chapter presents an algorithm
that predicts the energy consumed with each frequency gear and selects the one that gives
programs (MPI) to choose the frequency scaling factor.
This algorithm has the ability to predict both energy consumption and execution time over
all available scaling factors. The prediction achieved depends on some computing time information,
gathered at the beginning of the runtime. We have applied this algorithm to the NAS parallel
benchmarks (NPB v3.3) developed by NASA~\cite{ref65}. Our experiments are executed using the simulator
SimGrid/SMPI v3.10~\cite{ref66} over a homogeneous
distributed memory architecture.
This chapter is composed of two parts. In the first part, the proposed frequency scaling factor selection algorithm uses the energy model of Rauber and Rünger \cite{ref47} and is compared to their method. The comparison results show that our algorithm gives a better energy-time trade-off. In the second part, a new energy model that takes into account both the communication and computation times of MPI programs running over a homogeneous cluster is developed.
The new results obtained using this energy model are compared to the ones given by Rauber and Rünger's energy model.
This chapter is organized as follows: Section~\ref{ch2:2} explains the execution
of parallel tasks and the sources of slack times. It also presents an energy
model for homogeneous platforms from other researchers. Section~\ref{ch2:3} describes how the
performance of MPI programs can be predicted. Section~\ref{ch2:4} presents
the energy-performance objective function that maximizes the reduction of energy
consumption while minimizing the degradation of the program's performance.
Section~\ref{ch2:5} details the algorithm that returns the scaling factor giving the best energy-performance
trade-off for a parallel iterative application.
Section~\ref{ch2:6} verifies the accuracy of the performance prediction model
and presents the results of the proposed algorithm. It also shows the
comparison results between our method and another existing method.
communication times of MPI programs do not change when the frequency is scaled
down. On the other hand, some offline scaling factor selection methods use the
information gathered from previous full or partial executions of the program.
The whole program or a part of it is usually executed over all the available frequency
gears and the execution time and the energy consumed with each frequency
gear are measured. Then a heuristic or an exact method uses the retrieved
information to compute the values of the scaling factor for the processors.
In~\cite{ref57}, Xie et al. use an exact exponential breadth-first search algorithm
to compute the scaling factor values that give the optimal energy reduction
while respecting a deadline for a sequential program. They also present a
linear heuristic that approximates the optimal solution. In~\cite{ref58}, Rountree
et al. use a linear programming algorithm, while in~\cite{ref59,ref60}, Cochran et
al. use a multi-logistic regression algorithm for the same goal. The main
drawback of these methods is that they all require executing the
The online scaling factor selection methods are executed during the runtime of
the program. They are usually integrated into iterative programs where the same
block of instructions is executed many times. During the first few iterations,
a lot of information is measured such as the execution time, the energy consumed
using a multimeter, the slack times, \dots{} Then a method will exploit these
measurements to compute the scaling factor values for each processor. This
operation, measurements and computing new scaling factors, can be repeated as
much as needed if the iterations are not regular. Kimura, Peraza, Yu-Liang et
al.~\cite{ref61,ref55,ref62} used several heuristics to select the appropriate scaling
factor values to eliminate the slack times during runtime. However, as seen
in~\cite{ref63,ref64}, machine learning methods can take a lot of time to converge
when the number of available gears is big. To reduce the impact of slack times,
primary contribution of this chapter is to present a new online scaling factor
selection method which has the following characteristics:
\begin{enumerate}
\item It is based on both Rauber and Rünger's energy model and the newly developed energy model to predict the energy consumption of the application with different frequency gears.
\item It selects the frequency scaling factor for simultaneously optimizing
energy reduction and maintaining performance.
\item It is well adapted to distributed architectures because it takes into
\end{enumerate}
\section{Execution time and energy consumption of parallel tasks running on a homogeneous platform}
\label{ch2:2}
\subsection{Parallel tasks execution on a homogeneous platform}
\label{ch2:2:1}
A homogeneous cluster consists of identical nodes in terms of hardware and
software. Each node has its own memory and at least one processor which can be
a multi-core. The nodes are connected via a high bandwidth network. Tasks
executed on this model can be either synchronous or asynchronous. In this chapter
we consider the execution of synchronous tasks on a distributed homogeneous
platform. These tasks can synchronously exchange data via message passing.
\centering
\includegraphics[scale=0.73]{fig/ch2/commtasks}
\includegraphics[scale=0.73]{fig/ch2/compt}\\ ~ ~ ~ ~ ~ ~(a) ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~(b)
\caption{Parallel tasks execution on a homogeneous platform (a) imbalanced communications and (b) imbalanced
computations}
\label{fig:homo}
\end{figure}
The execution time of a task consists of the computation time and the
communication time. Moreover, the synchronous communications between tasks can
lead to slack times while tasks wait at a synchronization barrier for other
tasks to finish their computations (see figure~\ref{fig:homo}(a)). The imbalanced
communications happen when nodes have to send/receive different amounts of data
or they communicate with different numbers of nodes. Other sources of slack
times are imbalanced computations. This happens when processing different
amounts of data on each processor (see figure~\ref{fig:homo}(b)). In this case the
fastest tasks have to wait at the synchronization barrier for the slowest ones
to continue their computations. In both cases the overall execution time of the program
is the execution time of the slowest task as in EQ~(\ref{eq:T1}).
\begin{equation}
\label{eq:T1}
where $T_i$ is the execution time of task $i$ and all the tasks are executed
concurrently on different processors.
\subsection{Energy consumption model for a homogeneous platform}
\label{ch2:2:2}
The total energy consumption model for a parallel homogeneous
= \frac{F_{max}}{F_{new}} \cdot \frac{T_1}{T_i}
\end{equation}
Rauber and Rünger's scaling factor selection
method uses the same energy model. In their method, the optimal scaling factor is
computed by minimizing EQ~(\ref{eq:energy}), i.e. by setting its derivative with respect to the scaling factor to zero, which produces
EQ~(\ref{eq:sopt}).
\left( 1 + \sum_{i=2}^{N} \frac{T_i^3}{T_1^3} \right) }
\end{equation}
This model computes the frequency scaling factor which minimizes the energy consumption of the parallel program.
\section{Performance evaluation of MPI programs}
\label{ch2:3}
The execution time of a parallel synchronous iterative application is equal
to the execution time of its slowest task as shown in figure~(\ref{fig:homo}).
If there is no communication in the application and it is not data bound, the execution time of this
parallel application is linearly proportional to the operational frequency. Any
DVFS operation for energy reduction increases the execution time of the parallel
program. Therefore, the scaling factor $S$ is linearly proportional to the
execution time of the application. However, in most MPI applications the processes exchange
data. During these synchronous communications, the processors involved remain idle until the communications are finished. For that reason, any change in the frequency has no
impact on the time of communication~\cite{ref53}. The communication time for a
task is the summation of periods of time that begin with an MPI call for sending
or receiving a message until the message is synchronously sent or received. To
be able to predict the execution time of an MPI program, the communication time and
the computation time for the slowest task must be measured before scaling. These
times are used to predict the execution time for any MPI program as a function
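of the frequency scaling factor $S$: only the computation time of the slowest task is scaled by $S$, while its communication time is assumed to remain unchanged, which gives:
\begin{equation}
  \label{eq:tnew}
  T_\textit{New} = T_\textit{Max Comp Old} \cdot S + T_\textit{Min Comm Old}
\end{equation}
where $T_\textit{Max Comp Old}$ and $T_\textit{Min Comm Old}$ are respectively the computation time of the slowest task and the communication time, both measured during the first iteration before scaling the frequency.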
\section{Performance and energy reduction trade-off}
\label{ch2:4}
This section presents our method for choosing the scaling factor that
gives the best trade-off between energy reduction and performance. This method
gives the best tradeoff between energy reduction and performance. This method
takes into account the execution times for both computation and communication to
compute the scaling factor. Since the energy consumption and the performance
T_\textit{Min Comm Old}}
\end{equation}
The relation between the execution time and the consumed energy of a program is nonlinear and complex. As a consequence, the relation between the consumed energy and the scaling factor is also nonlinear, for more details refer to~\cite{ref53}. The resulting normalized energy consumption curve and execution time curve, for different scaling factors, do not have the same direction, see Figure~\ref{fig:rel}(b). To tackle this problem and optimize both terms, we inverse the equation of the normalized execution time, which gives the normalized performance and is computed as follows:
\begin{equation}
\label{eq:pnorm_en}
\centering
\centering
\includegraphics[scale=1]{fig/ch2/file}\\~ ~ ~ ~ ~(a) \\
\includegraphics[scale=1]{fig/ch2/file3}\\~ ~ ~ ~ ~(b)
\caption{The energy and performance relation (a) Converted relation and (b) Real relation}
\label{fig:rel}
\end{figure}
Then, we can model our objective function as finding the maximum distance
between the energy curve EQ~\ref{eq:enorm} and the performance
curve EQ~\ref{eq:pnorm_en} over all available scaling factors. This
represents the minimum energy consumption with minimum execution time (better
performance) at the same time, see Figure~\ref{fig:rel}(a). Then
\section{Optimal scaling factor for performance and energy}
\label{ch2:5}
\begin{algorithm}[!t]
  \caption{Scaling factor selection algorithm for a homogeneous cluster}
\label{EPSA}
\begin{algorithmic}[1]
\State Initialize the variable $Dist=0$
\end{algorithm}
Algorithm~\ref{EPSA} computes the optimal scaling factor according to the
objective function described above.
The proposed algorithm works online during the execution time of the MPI
program. It selects the optimal scaling factor after gathering the computation
and communication times from the program after one iteration. Then the program
changes the new frequencies of the CPUs according to the computed scaling
factors. The experiments conducted over a homogeneous cluster and described in
Section~\ref{ch2:6} showed that this algorithm has a small execution time. It takes on average \np[$\mu$s]{1.52} for 4 nodes and \np[$\mu$s]{6.65} for 32
nodes. The algorithm complexity is $O(F\cdot N)$, where $F$ is the number of
available frequencies and $N$ is the number of computing nodes. The algorithm
is called just once during the execution of the program. The DVFS algorithm
~\ref{dvfs} shows where and when the algorithm~\ref{EPSA} is called in the MPI
program.
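To give a concrete picture of this selection process, the following C fragment sketches its main loop: for every available frequency gear it computes the scaling factor, the predicted normalized performance and normalized energy, and keeps the scaling factor that maximizes their distance. The names and the energy expression (which follows the energy model presented later in section~\ref{ch2:7}) are illustrative assumptions, not the actual implementation.
\begin{verbatim}
/* Normalized energy predicted for a scaling factor s: the dynamic part
   is divided by s^2 and the static part is multiplied by the new
   predicted execution time (illustrative expression). */
static double normalized_energy(double s, double t_comp, double t_comm,
                                double p_dyn, double p_stat)
{
    double e_new = p_dyn * t_comp / (s * s)
                 + p_stat * (t_comp * s + t_comm);
    double e_old = p_dyn * t_comp + p_stat * (t_comp + t_comm);
    return e_new / e_old;
}

/* Returns the scaling factor giving the maximum distance between the
   normalized performance and the normalized energy. */
double select_scaling_factor(const double *freqs, int nb_freqs,
                             double f_max, double t_comp, double t_comm,
                             double p_dyn, double p_stat)
{
    double best_s = 1.0, best_dist = -1.0;
    double t_old = t_comp + t_comm;            /* time before scaling    */

    for (int i = 0; i < nb_freqs; i++) {
        double s = f_max / freqs[i];           /* scaling factor         */
        double t_new = t_comp * s + t_comm;    /* predicted new time     */
        double p_norm = t_old / t_new;         /* normalized performance */
        double e_norm = normalized_energy(s, t_comp, t_comm, p_dyn, p_stat);
        if (p_norm - e_norm > best_dist) {     /* distance to maximize   */
            best_dist = p_norm - e_norm;
            best_s = s;
        }
    }
    return best_s;
}
\end{verbatim}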
After obtaining the optimal scaling factor, the program calculates the new
frequency $F_i$ for each task proportionally to its execution time, $T_i$. By
substitution of EQ~(\ref{eq:s}) in EQ~(\ref{eq:si}), we can calculate the new
frequency $F_i$ as follows:
\begin{equation}
\end{figure*}
%see Figure~\ref{fig:pred}
In our cluster there are 18 available frequency states for each processor. This
leads to 18 run states for each program. Seven MPI programs of the NAS
parallel benchmarks were used: CG, MG, EP, FT, BT, LU and SP.~Figure~(\ref{fig:pred})
presents plots of the real execution times compared to the simulated ones. The maximum
normalized error between these two execution times varies from 0.0073 to
0.031 depending on the executed benchmark. The smallest prediction error
was for CG and the worst one was for LU.
\label{ch2:6:2}
The proposed algorithm was applied to seven MPI programs of the NAS benchmarks
(EP, CG, MG, FT, BT, LU and SP) which were run with three classes (A, B and C).
For each instance, the benchmarks were executed on a number of processors
proportional to the size of the class. Each class represents the problem size in
ascending order from class A to C. The classes A, B and C were executed on 4, 8 or 9 and 16 nodes respectively. The energy consumption of all the NAS MPI programs was measured while assuming that the dynamic power with
the highest frequency is equal to 20 W and the static power is equal to
4 W for all the experiments. These power values were also used by Rauber and
Rünger in~\cite{ref47}. The results showed that the algorithm selected different
scaling factors for each program depending on the communication features of the
program as in the plots~(\ref{fig:nas}). These plots illustrate that there are
the two curves is very small. This leads to small energy savings. The opposite
happens when there are a lot of communications: the distance between the two
curves is big. This leads to more energy savings (e.g. CG and FT), see
table~(\ref{table:factors results}). All the discovered frequency scaling factors
optimize both the energy and the performance simultaneously for all the NAS
benchmarks. In table~(\ref{table:factors results}), the optimal
scaling factors obtained for each benchmark running class C are presented. These scaling
factors give the maximum energy saving percentage and the minimum performance
degradation percentage at the same time from all available scaling factors.
\begin{figure*}[h!]
As shown in table~(\ref{table:factors results}), when the optimal scaling
factor has a big value we can gain more energy savings as in CG and
FT benchmarks. The opposite happens when the optimal scaling factor has a small value as in BT and EP benchmarks. Our algorithm selects a big scaling factor value when the
communication and other slack times are big. In EP there are no communications inside the iterations, which leads our algorithm to select smaller scaling factors (inducing smaller energy savings).
\subsection{Results comparison}
\label{ch2:6:3}
In this section, we compare our scaling factor selection method with Rauber and
Rünger's method~\cite{ref47}. They proposed two scenarios, the first one is to reduce the energy
to the optimal level without considering the performance as in
EQ~(\ref{eq:sopt}). We refer to this scenario as $R_{E}$. The second scenario
is similar to the first except setting the slower task to the maximum frequency
(the scale $S=1$) to keep the performance from degrading as much as
possible. We refer to this scenario as $R_{E-P}$ and to our
algorithm as EPSA (Energy to Performance Scaling Algorithm). The comparison is
made in table~\ref{table:compareC}. This table shows the results of our method and of
Rauber and Rünger's scenarios for all the NAS benchmark programs for class C.
that our algorithm returns the best scaling factor that satisfies our objective
function: the largest distance between the energy reduction and the performance
degradation. Figure~\ref{fig:compare} illustrates even better the distance between the energy reduction and performance degradation. The negative values mean that one of
the two objectives (energy or performance) has been degraded more than the
other. The positive trade-offs with the highest values lead to maximum energy
savings while keeping the performance degradation as low as possible. Our
algorithm always gives the highest positive energy to performance trade-offs
while Rauber and Rünger's method, ($R_{E-P}$), gives sometimes negative
trade-offs such as for BT and EP.
\section{The new energy model for a homogeneous cluster}
\label{ch2:7}
As mentioned in chapter \ref{ch1} section \ref{ch1:3}, the power consumed by a processor is divided
into two power metrics: the static power and the dynamic power. The first power metric is
consumed as long as the computing unit is on, while the other one is only consumed when the processor is doing the computations. Consequently, the energy consumed by an individual processor
to execute a given program can be computed as follows:
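\begin{equation}
  E_{ind} =  P_{dyn} \cdot T_{Comp} + P_{static} \cdot T
\end{equation}
where $P_{dyn}$ and $P_{static}$ are respectively the dynamic and the static power consumed by the processor, $T$ is the total execution time of the program and $T_{Comp}$ is the computation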
time and $T_{Comp} \leq T$. $T_{Comp}$ may be equal to $T$ if there is no
communication, no slack time and no synchronization.
Applying a DVFS operation leads to a new frequency state which is represented by the frequency scaling factor $S$, computed as in equation \ref{eq:s}. According to Rauber and Rünger's energy model \ref{eq:energy}, the dynamic energy is consumed during the overall program's execution time. This assumption is not precise because the CPU only consumes the dynamic power during the computation time. Moreover, the CPU involved remains idle during the communication times and only consumes the static power, see \cite{ref53}. We have also conducted some experiments over a real homogeneous cluster where some MPI programs of the NAS benchmarks were executed while varying the CPUs frequencies at each execution. The results prove that changing the frequency does not affect the communication times of these programs. Therefore, the frequency scaling factor $S$ increases the computation times proportionally to its value and does not affect the communication times. This assumption is consistent with the performance prediction model \ref{eq:tnew}. This model is evaluated and its prediction accuracy is shown in section \ref{ch2:6:1}. Therefore, the new dynamic energy is the dynamic power multiplied by the new computation time and is given by the following equation:
\begin{equation}
\label{eq:Edyn_new}
Es = P_{static} \cdot (T_{comp} \cdot S + T_{comm})
\end{equation}
In particular, in a homogeneous cluster all the computing nodes have the same specification and thus their CPUs have similar frequency gears. The execution time of the MPI application is the execution time of the slowest task as shown in section \ref{ch2:2:1}. Therefore, the frequency scaling factor $S$ of the slowest task can be used to model the energy consumption of the parallel application.
The dynamic energy consumed by $N$ parallel tasks is the summation of all the dynamic energies of all the tasks during the computation time $\Tcomp[i]$ of each task. The static energy of each task is the static power consumed during the execution time of the slowest task because all the tasks are synchronised and have the same execution time.
Therefore, the energy consumption model of $N$ parallel tasks executed synchronously over a homogeneous platform can be represented as in \ref{eq:e-new}.
\begin{equation}
\label{eq:e-new}
( Ps \cdot ( T_\textit{Max Comp Old} \cdot S + T_{\textit{Min Comm Old}} ) ) \cdot N
\end{equation}
According to this model, the frequency scaling factor $S$ reduces the dynamic energy consumption of the homogeneous architecture by a factor of $S^{-2}$ and increases the execution time by a factor of $S$.
This model can be used to predict the energy consumption of message passing synchronous iterative applications after gathering the computation and communication times of the first iteration.
Furthermore, it can be used to measure the energy consumption of the iterative application by multiplying the energy consumed by all the tasks during one iteration by the number of iterations.
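As an illustration, the energy predicted by this model for one iteration can be computed by a small C function like the following sketch (the names are illustrative; the dynamic part follows the $S^{-2}$ behaviour described above and the static part follows the expression of \ref{eq:e-new}):
\begin{verbatim}
/* pd, ps     : dynamic and static power of one node
   t_comp[i]  : computation time of task i during the first iteration
   t_max_comp : computation time of the slowest task (before scaling)
   t_min_comm : communication time measured during the first iteration
   s          : frequency scaling factor
   n          : number of computing nodes                               */
double predicted_energy(double pd, double ps, const double *t_comp, int n,
                        double t_max_comp, double t_min_comm, double s)
{
    double sum_comp = 0.0;
    for (int i = 0; i < n; i++)
        sum_comp += t_comp[i];                 /* sum of computation times */

    double e_dyn  = pd * sum_comp / (s * s);   /* dynamic energy, scaled   */
    double e_stat = ps * (t_max_comp * s + t_min_comm) * n; /* static part */
    return e_dyn + e_stat;
}
\end{verbatim}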
This model is used by algorithm \ref{EPSA} to predict the energy consumption and to select the optimal frequency scaling factor. The new frequency $F_i$ can then be computed as in \ref{eq:fi} using the newly selected frequency scaling factor.
In the next section, algorithm \ref{EPSA} is re-evaluated using this new energy model and the new results are presented.
\section{The experimental results using the new energy model}
\label{ch2:8}
This section presents the results of applying the frequency selection algorithm \ref{EPSA} with the new proposed energy model \ref{eq:e-new} to the NAS parallel benchmarks.
The class C of the benchmarks was executed on a homogeneous architecture composed of 16 nodes and simulated with SimGrid. The same static and dynamic power values were used as in section \ref{ch2:6:2}. Figure \ref{fig:energy_con} presents the energy consumption of the NAS benchmarks class C estimated using the new energy model and Rauber and Rünger's model. The energy consumptions of both models are computed using similar parameters: the frequency scaling factors and the dynamic and static power values. As shown in this figure, the majority of the benchmarks consume less energy using the new model than when using Rauber and Rünger's model.
Two reasons explain these differences in the energy consumptions. The first one is related to the dynamic power consumption: the new energy model ensures that this power metric is only consumed during the computation time, while the other model assumes that the dynamic power is consumed during both computation and communication times, which increases the dynamic energy consumption.
The second reason is related to the execution time. In the new model only the computation times are increased when the frequency of a processor is scaled down, while
Rauber and Rünger's model indicates that both the computation and communication times
are increased according to the scaling factor and hence more static energy is consumed. Therefore, the MPI programs that have big communication times have bigger energy consumption values using Rauber and Rünger's model when compared to the new model, as for the CG, SP, LU and FT benchmarks. On the other hand, MPI programs with very small communication times have very small differences in their computed energy values using both models, such as the MG and BT benchmarks, or identical values such as the EP benchmark where there are no communications and no idle times.
\begin{figure*}[h!]
\centering
\includegraphics[width=.7\textwidth]{fig/ch2/energy_con.eps}
  \caption{Comparing the energy consumptions estimated using Rauber and Rünger's energy model and the new one}
\label{fig:energy_con}
\end{figure*}
\begin{table}[h!]
\label{table:new-res}
\end{table}
Table \ref{table:new-res} shows the energy saving and performance degradation percentages obtained when applying the frequency selection algorithm with the new proposed energy model. It also presents the newly selected frequency scaling factors and compares them to the ones obtained with Rauber and Rünger's model. It shows that the newly selected frequency scaling factors are smaller than those selected using the other model because the energies predicted by the new model are smaller.
Consequently, smaller energy savings and performance degradation percentages are produced with
these smaller frequency scaling factors, as for the CG, MG, LU, SP and FT benchmarks. In the BT and EP benchmarks, where there are very small or no communication times, similar scaling factors are selected because the energies predicted by the two models are approximately equivalent.
Therefore, the new proposed energy model is more accurate than Rauber and Rünger's energy
model, because it takes into consideration both the communication and idle times in addition to
the computation times of message passing programs running over homogeneous clusters.
Note that the scaling factor selection algorithm can work with any energy model: it selects the scaling factor values according to the predicted energy values.
\section{Conclusion}
\label{ch2:9}
frequency. Then, it selects the scaling factor that gives the best trade-off
between energy reduction and performance which is the maximum distance between
the energy and the performance curves. To evaluate this method, we
have applied it to the NAS benchmarks and it was compared to Rauber and Rünger's
method while being executed on the SimGrid simulator. The results showed that
our method outperforms Rauber and Rünger's method in terms of energy-performance
ratio. Finally, this chapter presents a new energy consumption model for parallel
synchronous iterative methods running on homogeneous clusters. This model takes into consideration
both the computation and communication times and their relation with the frequency scaling
factor. The results obtained using the new energy model have shown that different frequency scaling factors
were selected, which gave new experimental results that are more accurate and realistic.