-%\documentclass[conference]{IEEEtran}
\documentclass[times]{cpeauth}
\usepackage{moreverb}
\section{SimGrid}
-\section{Simulation of the multisplitting method}
+%%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\section{Two-stage splitting methods}
+\label{sec:04}
+\subsection{Multisplitting methods for sparse linear systems}
+\label{sec:04.01}
+Let us consider the following sparse linear system of $n$ equations in $\mathbb{R}$:
+\begin{equation}
+Ax=b,
+\label{eq:01}
+\end{equation}
+where $A$ is a sparse, square and nonsingular matrix, $b$ is the right-hand side vector and $x$ is the solution vector of the system. Multisplitting methods solve the linear system~(\ref{eq:01}) iteratively as follows:
+\begin{equation}
+x^{k+1}=\displaystyle\sum^L_{\ell=1} E_\ell M^{-1}_\ell (N_\ell x^k + b),~k=1,2,3,\ldots
+\label{eq:02}
+\end{equation}
+where a collection of $L$ triplets $(M_\ell, N_\ell, E_\ell)$ defines the multisplitting of matrix $A$: each splitting is given by $A=M_\ell-N_\ell$ with $M_\ell$ a nonsingular matrix, and the $E_\ell$ are nonnegative diagonal weighting matrices satisfying $\sum_\ell{E_\ell}=I$, where $I$ is the identity matrix.
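+
+As a simple illustration of the iteration~(\ref{eq:02}), if every splitting uses the diagonal part $D$ of $A$, i.e. $M_\ell=D$ and $N_\ell=D-A$ for all $\ell$, then, since $\sum_\ell{E_\ell}=I$, formula~(\ref{eq:02}) reduces to the classical Jacobi method:
+\[
+x^{k+1}=\sum^L_{\ell=1} E_\ell D^{-1}\big((D-A)x^k+b\big)=D^{-1}\big((D-A)x^k+b\big).
+\]
+Choosing block-diagonal matrices $M_\ell$ instead leads to block Jacobi-type multisplitting methods, where each block can be assigned to a different processor or cluster.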
+
+\subsection{Simulation of two-stage methods using the SimGrid framework}
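+
+The sketch below is not the solver evaluated in this work; it only illustrates, under simplifying assumptions, the kind of program that SimGrid can simulate: a block Jacobi-type two-stage iteration on a 1D Poisson system, with a few inner relaxation sweeps standing in for the inner solver, written with standard MPI primitives. Assuming the application is an MPI program, it can be recompiled and executed on top of SimGrid through its SMPI interface; the compilation line, platform file and hostfile names in the header comment are placeholders.
+\begin{verbatim}
+/* Illustrative sketch only (not the solver used in this paper):
+ * a block Jacobi-type two-stage iteration on the 1D Poisson system
+ * tridiag(-1,2,-1) x = b, one block per MPI process.
+ * Possible compilation/execution under SimGrid/SMPI (file names are
+ * placeholders):
+ *   smpicc -O2 two_stage_sketch.c -lm -o two_stage_sketch
+ *   smpirun -np 4 -platform platform.xml -hostfile hostfile \
+ *           ./two_stage_sketch
+ */
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <mpi.h>
+
+#define LOCAL_N      1000   /* unknowns per process (arbitrary choice) */
+#define INNER_SWEEPS 10     /* inner Jacobi sweeps per outer iteration */
+#define TOL          1e-6   /* outer stopping threshold (arbitrary)    */
+
+int main(int argc, char **argv)
+{
+  int rank, size, k = 0;
+  double gres = 1.0;
+  MPI_Init(&argc, &argv);
+  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+  MPI_Comm_size(MPI_COMM_WORLD, &size);
+
+  /* local unknowns x[1..LOCAL_N] plus ghost cells x[0], x[LOCAL_N+1] */
+  double *x    = calloc(LOCAL_N + 2, sizeof(double));
+  double *xnew = calloc(LOCAL_N + 2, sizeof(double));
+  double b = 1.0;                       /* constant right-hand side */
+  int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
+  int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
+
+  while (gres > TOL && k < 10000) {
+    /* outer stage: exchange interface values with neighbouring blocks */
+    MPI_Sendrecv(&x[1], 1, MPI_DOUBLE, left, 0,
+                 &x[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
+                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+    MPI_Sendrecv(&x[LOCAL_N], 1, MPI_DOUBLE, right, 1,
+                 &x[0], 1, MPI_DOUBLE, left, 1,
+                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+
+    /* inner stage: a few Jacobi sweeps on the local block,
+       with the ghost values kept fixed */
+    for (int s = 0; s < INNER_SWEEPS; s++) {
+      for (int i = 1; i <= LOCAL_N; i++)
+        xnew[i] = 0.5 * (x[i - 1] + x[i + 1] + b);
+      for (int i = 1; i <= LOCAL_N; i++)
+        x[i] = xnew[i];
+    }
+
+    /* global residual norm (using the ghost values received above),
+       obtained with a collective reduction */
+    double lres = 0.0;
+    for (int i = 1; i <= LOCAL_N; i++) {
+      double r = b - (2.0 * x[i] - x[i - 1] - x[i + 1]);
+      lres += r * r;
+    }
+    MPI_Allreduce(&lres, &gres, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
+    gres = sqrt(gres);
+    k++;
+  }
+
+  if (rank == 0)
+    printf("outer iterations: %d, residual: %g\n", k, gres);
+  free(x);
+  free(xnew);
+  MPI_Finalize();
+  return 0;
+}
+\end{verbatim}
+Since SMPI implements the MPI calls on top of the simulator, the same source can then be run on different simulated platforms simply by changing the platform description, without modifying the application code.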
+
+%%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experimental Results and Comments}
iterations of classical GMRES for a given input matrix size; this is not
the case for the multisplitting method.
-%%\begin{wrapfigure}{l}{60mm}
+%\begin{wrapfigure}{l}{60mm}
\begin{figure} [ht!]
\centering
-\includegraphics[width=60mm]{Cluster x Nodes NX=150 and NX=170.jpg}
-\caption{Cluster x Nodes NX=150 and NX=170 \label{overflow}}
+\includegraphics[width=60mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
+\caption{Cluster $\times$ Nodes, NX=150 and NX=170}
\end{figure}
-%%\end{wrapfigure}
-
+%\end{wrapfigure}
Except for the 8x8 cluster, the execution time difference between the two
algorithms is significant when
%\RCE{same for all the data tables}
+%\begin{wrapfigure}{l}{60mm}
\begin{figure} [ht!]
\centering
-\includegraphics[width=60mm]{Cluster x Nodes N1 x N2.jpg}
-\caption{Cluster x Nodes N1 x N2\label{overflow}}
+\includegraphics[width=60mm]{cluster_x_nodes_n1_x_n2.pdf}
+\caption{Cluster $\times$ Nodes, networks N1 and N2}
\end{figure}
+%\end{wrapfigure}
The experiments compare the behavior of the algorithms running first on a
high-speed inter-cluster network (N1) and then on a slower network (N2).
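In SimGrid, these two networks can be described directly in the platform file used for the simulation. The following sketch is only an illustration: the host speeds, the bandwidth and latency values and the file layout are placeholders, and the exact attribute names depend on the SimGrid version.
\begin{verbatim}
<?xml version='1.0'?>
<!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
<platform version="4.1">
  <zone id="world" routing="Full">
    <host id="node-0" speed="1Gf"/>
    <host id="node-1" speed="1Gf"/>
    <!-- N1: fast inter-cluster network -->
    <link id="N1" bandwidth="1.25GBps" latency="50us"/>
    <!-- N2: slower inter-cluster network -->
    <link id="N2" bandwidth="125MBps" latency="500us"/>
    <!-- the route selects which network the nodes use -->
    <route src="node-0" dst="node-1"><link_ctn id="N1"/></route>
  </zone>
</platform>
\end{verbatim}
Switching an experiment from N1 to N2 then only requires changing the platform description passed to the simulator, not the application code.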
\begin{figure} [ht!]
\centering
-\includegraphics[width=60mm]{Network latency impact on execution time.jpg}
-\caption{Network latency impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{network_latency_impact_on_execution_time.pdf}
+\caption{Network latency impact on execution time}
\end{figure}
\begin{figure} [ht!]
\centering
-\includegraphics[width=60mm]{Network bandwith impact on execution time.jpg}
-\caption{Network bandwith impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{network_bandwith_impact_on_execution_time.pdf}
+\caption{Network bandwidth impact on execution time}
\end{figure}
\begin{figure} [ht!]
\centering
-\includegraphics[width=60mm]{Pb size impact on execution time.jpg}
-\caption{Pb size impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{pb_size_impact_on_execution_time.pdf}
+\caption{Problem size impact on execution time}
\end{figure}
In this experiment, the input matrix size has been set from
\begin{figure} [ht!]
\centering
-\includegraphics[width=60mm]{CPU Power impact on execution time.jpg}
-\caption{CPU Power impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{cpu_power_impact_on_execution_time.pdf}
+\caption{CPU power impact on execution time}
\end{figure}
Using the flexibility of the SimGrid simulator, we have tried to determine the