+\section{Introduction}
+The use of multi-core architectures to solve large scientific problems
+seems to have become imperative in many situations. Whatever their
+scale (distributed clusters, computational grids, embedded
+multi-core,~\ldots), these architectures are generally well suited to
+executing complex parallel applications that operate on large amounts
+of data. Unfortunately, the users (from industry or academia) who need
+such computational resources may not have easy access to such
+efficient architectures: the cost of using the platform and/or of
+testing and deploying an application is often very high. In this
+context it is therefore difficult to optimize a given application for
+a given architecture. In order to reduce the cost of accessing these
+computing resources, it seems very attractive to use a simulation
+environment. The advantages are numerous: a shorter development life
+cycle, easier code debugging, the ability to obtain results
+quickly, etc. In return, the simulation results need to be consistent
+with the real ones.
+
+In this paper we focus on a class of highly efficient parallel
+algorithms called \emph{iterative algorithms}. The parallel scheme of
+iterative methods is quite simple. It generally involves dividing the
+problem into several \emph{blocks} that will be solved in parallel on
+multiple processing units. Each processing unit computes an iteration,
+sends/receives some data dependencies to/from its neighbors, and
+repeats this process until the method converges. Several well-known
+studies demonstrate the convergence of these
+algorithms~\cite{BT89,bahi07}. In this processing mode a task cannot
+begin a new iteration until it has received the data dependencies from
+its neighbors. We say that the iteration computation follows a
+\textit{synchronous} scheme. In the asynchronous scheme, a task can
+compute a new iteration without having to wait for the data
+dependencies coming from its neighbors. Both communications and
+computations are \textit{asynchronous}, so that there is no more idle
+time between two iterations due to synchronizations~\cite{bcvc06:ij}.
+This model presents some advantages and drawbacks that we detail in
+section~\ref{sec:asynchro}, but even if the number of iterations
+required to converge is generally greater than in the synchronous
+case, the asynchronous iterative scheme can significantly reduce
+overall execution times by suppressing the idle times due to
+synchronizations~(see~\cite{bahi07} for more details).
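+
+To fix ideas, the following minimal C/MPI sketch illustrates the
+synchronous scheme on a toy problem (a 1-D Jacobi relaxation); it is
+not the solver used in this paper, and the block size and convergence
+threshold are arbitrary. Each process updates its block, exchanges its
+boundary values (the data dependencies) with its two neighbors, and
+takes part in a global convergence test that acts as a synchronization
+point.
+
+\begin{verbatim}
+/* sync_iter.c -- compile: mpicc sync_iter.c -o sync_iter -lm */
+#include <mpi.h>
+#include <math.h>
+#include <stdio.h>
+#include <string.h>
+
+#define N   32       /* local block size (arbitrary)      */
+#define EPS 1e-6     /* convergence threshold (arbitrary) */
+
+int main(int argc, char **argv)
+{
+    int rank, size;
+    MPI_Init(&argc, &argv);
+    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+    MPI_Comm_size(MPI_COMM_WORLD, &size);
+
+    double u[N + 2], v[N + 2];         /* block + 2 ghost cells */
+    for (int i = 0; i < N + 2; i++) u[i] = v[i] = 0.0;
+    if (rank == 0) u[0] = 1.0;         /* fixed boundary value  */
+
+    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
+    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
+
+    int converged = 0, iter = 0;
+    while (!converged) {
+        /* 1. local computation: one Jacobi sweep on the block */
+        double local_max = 0.0;
+        for (int i = 1; i <= N; i++) {
+            v[i] = 0.5 * (u[i - 1] + u[i + 1]);
+            double d = fabs(v[i] - u[i]);
+            if (d > local_max) local_max = d;
+        }
+        memcpy(&u[1], &v[1], N * sizeof(double));
+
+        /* 2. synchronous exchange of the data dependencies */
+        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left,  0,
+                     &u[N + 1], 1, MPI_DOUBLE, right, 0,
+                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+        MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
+                     &u[0],  1, MPI_DOUBLE, left,  1,
+                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+
+        /* 3. global convergence test: a synchronization point */
+        double global_max;
+        MPI_Allreduce(&local_max, &global_max, 1, MPI_DOUBLE,
+                      MPI_MAX, MPI_COMM_WORLD);
+        converged = (global_max < EPS);
+        iter++;
+    }
+    if (rank == 0) printf("converged after %d iterations\n", iter);
+    MPI_Finalize();
+    return 0;
+}
+\end{verbatim}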
+
+Nevertheless, in both cases (synchronous and asynchronous) it is very
+time consuming to find the optimal configuration and deployment
+requirements for a given application on a given multi-core
+architecture. Finding good resource allocation policies under varying
+CPU powers, network speeds and loads is very challenging and labor
+intensive~\cite{Calheiros:2011:CTM:1951445.1951450}. This problem is
+even more difficult for the asynchronous scheme, where a small
+variation of the parameters of the execution platform or of the
+application data can lead to very different numbers of iterations
+before convergence is reached, and thus to very different execution
+times. In this challenging context we think that the use of a
+simulation tool can greatly facilitate the testing of various platform
+scenarios.
+
+The {\bf main contribution of this paper} is to show that the use of a
+simulation tool (namely the SimGrid toolkit~\cite{SimGrid}) in the
+context of real parallel applications (here, large linear system
+solvers) can help developers to better tune their application for a
+given multi-core architecture. To show the validity of this approach
+we first compare the simulated execution of the Krylov multisplitting
+algorithm with that of the GMRES (Generalized Minimal Residual)
+solver~\cite{saad86} in synchronous mode. The simulation results allow
+us to determine which method to choose for a given multi-core
+architecture. Moreover, the results obtained on different simulated
+multi-core architectures confirm the real results previously obtained
+on non-simulated architectures. More precisely, the simulated results
+are in accordance (i.e. of the same order of magnitude) with the works
+presented in~\cite{couturier15}, which show that the synchronous
+multisplitting method is more efficient than GMRES for large scale
+clusters. The simulated results also confirm the efficiency of the
+asynchronous multisplitting algorithm compared to the synchronous
+GMRES, especially in the case of geographically distant clusters.
+
+In this way, even with a simple computing architecture (a laptop),
+SimGrid allows us to run a test campaign of a real parallel iterative
+application on different simulated multi-core architectures. To our
+knowledge, there is no related work on the large-scale multi-core
+simulation of a real synchronous and asynchronous iterative
+application.
+
+This paper is organized as follows. Section~\ref{sec:asynchro}
+presents the iteration model we use, and more particularly the
+asynchronous scheme. In section~\ref{sec:simgrid} the SimGrid
+simulation toolkit is presented. Section~\ref{sec:04} details the
+different solvers that we use. Finally, our experimental results are
+presented in section~\ref{sec:expe}, followed by some concluding
+remarks and perspectives.
+
+
+\section{The asynchronous iteration model and the motivations of our work}
+\label{sec:asynchro}
+
+Asynchronous iterative methods have been studied for many years, both
+theoretically and practically. Many methods have been considered and
+convergence results have been proved. These methods can be used to
+solve, in parallel, fixed point problems (i.e. problems for which the
+solution is $x^\star = f(x^\star)$). In practice, asynchronous
+iterative methods can be used to solve, for example, linear and
+non-linear systems of equations or optimization problems; interested
+readers are invited to read~\cite{BT89,bahi07}.
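+
+As a classical illustration, a linear system $Ax=b$ with a splitting
+$A=M-N$, where $M$ is nonsingular, can be recast as a fixed point
+problem by taking
+\[
+  f(x) = M^{-1}(Nx+b),
+\]
+so that the solution of the system is the fixed point
+$x^\star=f(x^\star)$; the Jacobi method corresponds to choosing $M$ as
+the diagonal of $A$.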
+
+Before using an asynchronous iterative method, its convergence must be
+studied. Otherwise, the application is not guaranteed to converge. An
+algorithm that supports both the synchronous and the asynchronous
+iteration models requires very few modifications to be executed in
+both variants. In practice, only the communications and the
+convergence detection are different. In the synchronous mode,
+iterations are synchronized, whereas in the asynchronous one they are
+not. It should be noticed that non-blocking communications can be used
+in both modes. Concerning the convergence detection, synchronous
+variants can use a global convergence procedure which acts as a global
+synchronization point. In the asynchronous model, the convergence
+detection is trickier, as it must not synchronize all the processors.
+Interested readers can consult~\cite{myBCCV05c,bahi07,ccl09:ij}.
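+
+As an illustration, the fragment below sketches how the blocking
+exchange of the synchronous example given in the introduction can be
+turned into an asynchronous one with non-blocking MPI primitives: the
+process sends its boundary values without waiting, harvests whatever
+neighbor data has already arrived, and otherwise keeps computing with
+the stale values. This is only a simplified sketch; in particular, a
+robust asynchronous convergence detection protocol is deliberately
+omitted (see~\cite{myBCCV05c,bahi07,ccl09:ij}).
+
+\begin{verbatim}
+#include <mpi.h>
+
+/* One asynchronous exchange step for the 1-D toy problem.
+ * recv_req[0]/recv_req[1] are pending MPI_Irecv requests for the
+ * left/right ghost cells, posted once before the iteration loop
+ * with inbox[0]/inbox[1] as landing buffers. */
+void async_exchange(double *u, int n, int left, int right,
+                    MPI_Request recv_req[2], double inbox[2])
+{
+    MPI_Request s;
+    int done;
+
+    if (left != MPI_PROC_NULL) {
+        /* send the boundary value without waiting */
+        MPI_Isend(&u[1], 1, MPI_DOUBLE, left, 0,
+                  MPI_COMM_WORLD, &s);
+        MPI_Request_free(&s);  /* MPI still delivers the message */
+
+        /* refresh the left ghost cell only if data has arrived */
+        MPI_Test(&recv_req[0], &done, MPI_STATUS_IGNORE);
+        if (done) {
+            u[0] = inbox[0];
+            MPI_Irecv(&inbox[0], 1, MPI_DOUBLE, left, 0,
+                      MPI_COMM_WORLD, &recv_req[0]);
+        }
+    }
+    if (right != MPI_PROC_NULL) {
+        MPI_Isend(&u[n], 1, MPI_DOUBLE, right, 0,
+                  MPI_COMM_WORLD, &s);
+        MPI_Request_free(&s);
+
+        MPI_Test(&recv_req[1], &done, MPI_STATUS_IGNORE);
+        if (done) {
+            u[n + 1] = inbox[1];
+            MPI_Irecv(&inbox[1], 1, MPI_DOUBLE, right, 0,
+                      MPI_COMM_WORLD, &recv_req[1]);
+        }
+    }
+}
+\end{verbatim}
+
+Since no call in this fragment blocks, a process never idles between
+two iterations; the price is that it may compute with outdated
+neighbor values, which explains why more iterations are generally
+needed to converge (a real implementation would also throttle or
+coalesce pending messages).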
+
+The number of iterations required to reach convergence is generally
+greater in the asynchronous scheme (this number depends on the delay
+of the messages). Note that this is not the case in the synchronous
+mode, where the number of iterations is the same as in the sequential
+mode. Thus, the parameters of the platform (number of nodes, power of
+the nodes, inter- and intra-cluster bandwidth and latency,~\ldots) and
+of the application can drastically change the number of iterations
+required to reach convergence. It follows that asynchronous iterative
+algorithms are difficult to optimize, since the financial and
+deployment costs on large scale multi-core architectures are often
+very high. So, prior to deployment and tests, it seems very promising
+to be able to simulate the behavior of asynchronous iterative
+algorithms. The challenge is then to show that the results produced by
+simulation are in accordance with reality, i.e. of the same order of
+magnitude. To our knowledge, there is no study on this question.