Parallelization of such algorithms generally involves dividing the problem into
several \emph{blocks} that are solved in parallel on multiple processing units.
These units exchange their intermediate results before each new iteration
starts, until the approximate solution is reached. Such parallel computations
can be performed either in \emph{synchronous} mode, where a new iteration
begins only once all the communications between nodes have completed, or in
\emph{asynchronous} mode, where processors proceed independently with no
synchronization points~\cite{bcvc06:ij}. In the latter case, local computations
do not need to wait for the required data: processors simply perform their
iterations with the data present at that time. Even though the number of
iterations required before convergence is generally greater than in the
synchronous case, asynchronous iterative algorithms can significantly reduce
overall execution times by suppressing the idle times due to synchronizations,
especially in a grid computing context (see~\cite{Bahi07} for more details).

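As a rough illustration of the asynchronous mode, the following minimal sketch
implements it for a Jacobi-like relaxation with a 1D block decomposition:
boundary values are received with non-blocking \texttt{MPI\_Irecv} and probed
with \texttt{MPI\_Test}, so each process keeps iterating with whatever data is
present. The update rule, block size and tolerance are illustrative
assumptions, not the solvers considered in this work.
\begin{verbatim}
/* Hedged sketch of the asynchronous iteration model: each process
 * relaxes its local block with whatever neighbor data has already
 * arrived, instead of blocking until the exchange completes.  The
 * Jacobi-like update, block size and tolerance are assumptions. */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

#define N         1000     /* local block size (assumption)   */
#define TOL       1e-6     /* residual threshold (assumption) */
#define MAX_ITERS 10000

int main(int argc, char **argv)
{
    double local[N], halo_left = 0.0, halo_right = 0.0;
    int rank, size, iter, done = 0, flag;
    MPI_Request rreq[2], sreq[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int i = 0; i < N; i++)
        local[i] = (rank == 0 && i == 0) ? 1.0 : 0.0;

    MPI_Irecv(&halo_left,  1, MPI_DOUBLE, left,  0,
              MPI_COMM_WORLD, &rreq[0]);
    MPI_Irecv(&halo_right, 1, MPI_DOUBLE, right, 0,
              MPI_COMM_WORLD, &rreq[1]);

    for (iter = 0; iter < MAX_ITERS && !done; iter++) {
        /* Asynchronous mode: test for fresh boundary values but
         * never wait; if nothing arrived, reuse the old values.  */
        MPI_Test(&rreq[0], &flag, MPI_STATUS_IGNORE);
        if (flag) MPI_Irecv(&halo_left, 1, MPI_DOUBLE, left, 0,
                            MPI_COMM_WORLD, &rreq[0]);
        MPI_Test(&rreq[1], &flag, MPI_STATUS_IGNORE);
        if (flag) MPI_Irecv(&halo_right, 1, MPI_DOUBLE, right, 0,
                            MPI_COMM_WORLD, &rreq[1]);

        /* Local Jacobi-like sweep with the data present now. */
        double residual = 0.0, prev = halo_left;
        for (int i = 0; i < N; i++) {
            double next = (i < N - 1) ? local[i + 1] : halo_right;
            double v = 0.5 * (prev + next);
            if (fabs(v - local[i]) > residual)
                residual = fabs(v - local[i]);
            prev = local[i];
            local[i] = v;
        }

        /* Push new boundary values to the neighbors.  A synchronous
         * scheme would instead end each iteration with blocking
         * exchanges (e.g. MPI_Sendrecv), keeping all processes in
         * lockstep.                                                */
        MPI_Isend(&local[0],     1, MPI_DOUBLE, left,  0,
                  MPI_COMM_WORLD, &sreq[0]);
        MPI_Isend(&local[N - 1], 1, MPI_DOUBLE, right, 0,
                  MPI_COMM_WORLD, &sreq[1]);
        MPI_Waitall(2, sreq, MPI_STATUSES_IGNORE);

        /* Purely local test; a real asynchronous code needs a
         * global convergence-detection mechanism on top of this. */
        done = (residual < TOL);
    }

    MPI_Cancel(&rreq[0]);  MPI_Request_free(&rreq[0]);
    MPI_Cancel(&rreq[1]);  MPI_Request_free(&rreq[1]);
    printf("rank %d stopped after %d iterations\n", rank, iter);
    MPI_Finalize();
    return 0;
}
\end{verbatim}
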
Parallel applications based on a (synchronous or asynchronous) iteration model
may have different configuration and deployment requirements. Quantifying the
behavior of their resource allocation policies and application scheduling
algorithms in grid computing environments, under varying load, CPU power and
network speeds, is very costly and extremely labor- and
time-consuming~\cite{Calheiros:2011:CTM:1951445.1951450}. The case of
asynchronous iterative algorithms is even more problematic, since they are very
sensitive to the execution environment. For instance, variations in the network
bandwidth (intra- and inter-cluster), in the number and power of the nodes, in
the number of clusters\dots{} can lead to very different numbers of iterations
and hence to very different execution times. Simulation tools, which make it
possible to explore various platform scenarios and to run large numbers of
experiments quickly, therefore appear very promising. Executing parallel
iterative algorithms in a simulation environment reduces the high cost of
access to computing resources: (1) during the application development life
cycle and for code debugging, and (2) in production, to obtain results in a
reasonable time on a simulated infrastructure whose physical counterpart is not
accessible. Indeed, running distributed asynchronous iterative algorithms on a
large-scale simulated environment is challenging: one has to find the optimal
configurations, that is, those giving the best results with the lowest residual
error in the shortest execution time.
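
To give an idea of the scale of such an exploration, the sketch below
enumerates a hypothetical grid of platform scenarios and hands each one to a
placeholder \texttt{run\_simulation()} routine; both the parameter values and
the routine are assumptions made for illustration and do not correspond to the
API of any particular simulation toolkit.
\begin{verbatim}
/* Hedged sketch of a simulation campaign: sweep a grid of platform
 * scenarios and record the simulated time of each one.  The grids
 * and run_simulation() are illustrative placeholders only.        */
#include <stdio.h>

/* Placeholder for one simulated execution of the iterative solver
 * on the given platform; a real campaign would call the simulator
 * here instead of evaluating a dummy cost model.                  */
static double run_simulation(int clusters, int nodes, double bw_gbps)
{
    return (double)(clusters * nodes) / bw_gbps;
}

int main(void)
{
    const int    clusters[] = { 2, 4, 8 };
    const int    nodes[]    = { 16, 32, 64 };     /* per cluster   */
    const double bw[]       = { 0.1, 1.0, 10.0 }; /* inter-cluster
                                                     Gb/s          */

    /* 3 x 3 x 3 = 27 simulated runs; a physical testbed would need
     * to be reconfigured for every single combination.            */
    for (int c = 0; c < 3; c++)
        for (int n = 0; n < 3; n++)
            for (int b = 0; b < 3; b++)
                printf("%d clusters x %2d nodes @ %4.1f Gb/s"
                       " -> %8.2f s\n",
                       clusters[c], nodes[n], bw[b],
                       run_simulation(clusters[c], nodes[n], bw[b]));
    return 0;
}
\end{verbatim}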