\maketitle
\section{Introduction}
+The use of multi-core architectures to solve large scientific problems seems to become imperative in many cases.
+Whatever the scale of these architectures (distributed clusters, computational grids, embedded multi-core, \ldots), they are generally
+well adapted to execute complex parallel applications operating on a large amount of data. Unfortunately, users (from industry or academia)
+who need such computational resources may not have easy access to such efficient architectures. The cost of using the platform and/or of
+testing and deploying an application is often very high. In this context it is therefore difficult to optimize a given application for a given
+architecture. Hence, in order to reduce the access cost to these computing resources, it seems very interesting to use a simulation environment.
+The advantages are numerous: shorter development life cycle, easier code debugging, ability to obtain results quickly, \ldots provided that the simulation results are in accordance with the real ones.
+
+In this paper we focus on a class of highly efficient parallel algorithms called \emph{iterative algorithms}. The
+parallel scheme of iterative methods is quite simple. It generally involves the division of the problem
+into several \emph{blocks} that will be solved in parallel on multiple
+processing units. Each processing unit has to
+compute an iteration, to send/receive some data dependencies to/from
+its neighbors, and to repeat this process until the convergence of
+the method. Several well-known studies demonstrate the convergence of these algorithms~\cite{BT89,Bahi07}.
+In this processing mode a task cannot begin a new iteration until it
+has received the data dependencies from its neighbors. We say that the iteration computation follows a synchronous scheme.
+In the asynchronous scheme a task can compute a new iteration without having to
+wait for the data dependencies coming from its neighbors. Both
+communications and computations are asynchronous, so that there are
+no idle times, due to synchronizations, between two
+iterations~\cite{bcvc06:ij}. This model presents some advantages and drawbacks that we detail in Section~\ref{sec:asynchro}. Even if the number of iterations required to converge is
+generally greater than in the synchronous case, the asynchronous iterative scheme can significantly reduce overall execution
+times by suppressing the idle times due to synchronizations~(see \cite{Bahi07} for more details).
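+
+To make the two schemes concrete, the sketch below illustrates the synchronous scheme with a toy one-dimensional Jacobi-like update. It is a minimal illustration under our own assumptions (block size, update rule and all identifiers are ours), not the code of the solvers studied in this paper.
+\begin{verbatim}
+/* Minimal sketch of the synchronous iteration scheme
+   (toy 1-D Jacobi-like update); illustrative only. */
+#include <mpi.h>
+#include <math.h>
+#define N   100     /* local block size (illustrative) */
+#define TOL 1e-8
+
+int main(int argc, char **argv) {
+  MPI_Init(&argc, &argv);
+  int rank, size;
+  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+  MPI_Comm_size(MPI_COMM_WORLD, &size);
+  int lnbr = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
+  int rnbr = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
+  double x[N] = {0}, xn[N], left = 0.0, right = 0.0;
+  double gres = 1.0;
+  while (gres > TOL) {
+    /* Synchronous scheme: blocking exchange of the block
+       boundaries; no task starts iteration k+1 before
+       receiving the iterate k of its neighbors. */
+    MPI_Sendrecv(&x[0],   1, MPI_DOUBLE, lnbr, 0,
+                 &right,  1, MPI_DOUBLE, rnbr, 0,
+                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+    MPI_Sendrecv(&x[N-1], 1, MPI_DOUBLE, rnbr, 1,
+                 &left,   1, MPI_DOUBLE, lnbr, 1,
+                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+    double lres = 0.0;
+    for (int i = 0; i < N; i++) {
+      double lv = (i == 0)   ? left  : x[i-1];
+      double rv = (i == N-1) ? right : x[i+1];
+      xn[i] = 0.5 * (lv + rv) + 1.0 / N;  /* toy update */
+      lres += fabs(xn[i] - x[i]);
+    }
+    for (int i = 0; i < N; i++) x[i] = xn[i];
+    /* the global convergence test is itself a
+       synchronization point */
+    MPI_Allreduce(&lres, &gres, 1, MPI_DOUBLE, MPI_SUM,
+                  MPI_COMM_WORLD);
+  }
+  MPI_Finalize();
+  return 0;
+}
+\end{verbatim}
+In the asynchronous scheme, the blocking \texttt{MPI\_Sendrecv} calls and the \texttt{MPI\_Allreduce} would typically be replaced by non-blocking operations (\texttt{MPI\_Isend}/\texttt{MPI\_Irecv} probed with \texttt{MPI\_Test}), so that each task keeps computing with the most recently received values; detecting global convergence then becomes a problem of its own.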
+
+Nevertheless, in both cases (synchronous or asynchronous) it is very time consuming to find the optimal configuration and deployment requirements
+for a given application on a given multi-core architecture. Finding good resource allocation policies under varying CPU power, network speed and
+load is very challenging and labor intensive~\cite{Calheiros:2011:CTM:1951445.1951450}. This problem is even more difficult for the asynchronous scheme,
+where variations of the parameters of the execution platform can lead to very different numbers of iterations required to converge, and thus to very different execution times.
+In this challenging context we think that the use of a simulation tool can greatly facilitate the testing of various platform scenarios.
+
+The main contribution of this paper is to show that the use of a simulation tool (i.e., the SimGrid toolkit~\cite{SimGrid}) in the context of real
+parallel applications (i.e., large linear system solvers) can help developers to better tune their applications for a given multi-core architecture.
+To show the validity of this approach we first compare the simulated execution of the multisplitting algorithm with the GMRES (Generalized Minimal Residual) solver
+\cite{ref1} in synchronous mode. The results obtained on different simulated multi-core architectures confirm the real results previously obtained on non-simulated architectures.
+We also confirm the efficiency of the asynchronous multisplitting algorithm compared to the synchronous GMRES. In this way, and with a simple computing architecture (a laptop),
+SimGrid allows us to run a test campaign of a real parallel iterative application on different simulated multi-core architectures.
+To our knowledge, there is no related work on the large-scale multi-core simulation of a real synchronous and asynchronous iterative application.
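+
+To give an idea of the workflow, with SimGrid's SMPI interface an unmodified MPI solver can be compiled and executed on a simulated platform directly from a laptop; the sketch below assumes SMPI and uses illustrative file names:
+\begin{verbatim}
+# compile the unmodified MPI code with the SMPI wrapper
+smpicc -O3 solver.c -o solver
+# run it on the simulated platform described in grid.xml,
+# mapping the 32 MPI processes onto the simulated hosts
+smpirun -np 32 -platform grid.xml -hostfile hosts.txt \
+        ./solver
+\end{verbatim}
+Changing the simulated architecture then amounts to editing the platform file rather than redeploying the application.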
+
+This paper is organized as follows. Section~\ref{sec:asynchro} presents the iteration model we use and more particularly the asynchronous scheme.
+In Section~\ref{sec:simgrid} the SimGrid simulation toolkit is presented. Section~\ref{sec:04} details the different solvers that we use.
+Finally our experimental results are presented in Section~\ref{sec:expe}, followed by some concluding remarks and perspectives.
+
\section{The asynchronous iteration model}
+\label{sec:asynchro}
\section{SimGrid}
-
+ \label{sec:simgrid}
+
%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experimental Results}
+\label{sec:expe}
\subsection{Experimental setup and methodology}
\end{figure}
-According the results in table and figure 5, degradation of the network
+According to the results in Figure 5, degrading the network
latency from 8.10$^{-6}$ to 6.10$^{-5}$ seconds implies an absolute
increase of more than 75\% (resp. 82\%) in the execution time of the
classical GMRES (resp. multisplitting) algorithm. In addition, it appears that the
\begin{tabular}{r c }
\hline
Grid & 2x16\\ %\hline
- Network & N1 : bw=1Gbs - lat=5E-05 \\ %\hline
- Input matrix size & N$_{x}$ =150 x 150 x 150\\ \hline
+ Network & N1 : bw=1Gb/s - lat=5.10$^{-5}$s \\ %\hline
+ Input matrix size & N$_{x} \times$ N$_{y} \times$ N$_{z}$ = 150 $\times$ 150 $\times$ 150\\ \hline
\end{tabular}
-
Table 4 : Network bandwidth impact \\
\end{footnotesize}
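+
+As an illustration, such network parameters map directly onto the platform description given to the simulator. The sketch below assumes the XML platform format of recent SimGrid versions (element and attribute names may differ in older releases), with a cluster sized for the 2x16 grid:
+\begin{verbatim}
+<?xml version='1.0'?>
+<!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
+<platform version="4.1">
+  <zone id="world" routing="Full">
+    <!-- 32 hosts; the bandwidth and latency of the N1
+         network are plain attributes, so scanning a range
+         of values only means rewriting them between runs -->
+    <cluster id="grid" prefix="node-" suffix=""
+             radical="0-31" speed="1Gf"
+             bw="1Gbps" lat="50us"/>
+  </zone>
+</platform>
+\end{verbatim}
+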
Increasing the network bandwidth reduces the execution time, and thus
improves the performance, of both
-algorithms. However, and again in this case, the multisplitting method
+algorithms (Figure 6). However, and again in this case, the multisplitting method
presents better performance in the considered bandwidth interval, with
a gain of 40\%, whereas the gain is only around 24\% for the classical GMRES.
\begin{tabular}{r c }
\hline
Grid & 4x8\\ %\hline
- Network & N2 : bw=1Gbs - lat=5E-05 \\ %\hline
- Input matrix size & N$_{x}$ = From 40 to 200\\ \hline
+ Network & N2 : bw=1Gb/s - lat=5.10$^{-5}$s \\ %\hline
+ Input matrix size & N$_{x}$ = N$_{y}$ = N$_{z}$, from 40 to 200\\ \hline
\end{tabular}
Table 5 : Input matrix size impact\\
\end{figure}
In this experiment, the input matrix size has been varied from
-Nx=Ny=Nz=40 to 200 side elements that is from 40$^{3}$ = 64.000 to
-200$^{3}$ = 8.000.000 points. Obviously, as shown in the figure 5,
-the execution time for the algorithms convergence increases with the
-input matrix size. But the interesting result here direct on (i) the
+N$_{x}$ = N$_{y}$ = N$_{z}$ = 40 to 200 side elements, that is, from 40$^{3}$ = 64,000 to
+200$^{3}$ = 8,000,000 points. Obviously, as shown in Figure 7,
+the time to convergence of the two algorithms increases with the
+input matrix size. But the interesting results here concern (i) the
drastic increase (300 times) in the number of iterations needed to
reach convergence for the classical GMRES algorithm when the matrix size
-go beyond Nx=150; (ii) the classical GMRES execution time also almost
-the double from Nx=140 compared with the convergence time of the
+goes beyond N$_{x}$=150; and (ii) the execution time of the classical GMRES, which,
+from N$_{x}$=140 onwards, is almost twice the convergence time of the
multisplitting method. These findings may greatly help end users to set up
the best targeted environment for the application
deployment when focusing on the problem size scale-up. Note that the