magnitude. To our knowledge, there is no study addressing this problem.
\section{SimGrid}
\label{sec:simgrid}
SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid,casanova+giersch+legrand+al.2014.versatile} is a discrete event simulation framework to study the behavior of large-scale distributed computing platforms such as Grids, Peer-to-Peer systems, Clouds and High Performance Computing systems. It is widely used to simulate and evaluate heuristics, to prototype applications, or even to assess legacy MPI applications. It is still actively developed by the scientific community and distributed as open source software.
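As an illustration, the minimal sketch below (a hypothetical toy program, not the solvers studied in this paper) shows the kind of unmodified MPI code that SMPI, the MPI layer of SimGrid, can execute on a simulated platform. The compile and launch commands given in the comments rely on the standard SMPI tools (\texttt{smpicc} and \texttt{smpirun}); the platform and hostfile names are placeholders.
\begin{verbatim}
/* Hypothetical toy MPI program used only to illustrate SMPI.
 * With SMPI it is recompiled and launched, unmodified, with e.g.:
 *   smpicc bcast.c -o bcast
 *   smpirun -np 4 -platform platform.xml -hostfile hostfile ./bcast
 * (platform.xml describes the simulated hosts and network links;
 *  both file names are placeholders.)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int rank, size, value = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (rank == 0)
    value = 42;                    /* root initializes the data */
  /* broadcast the value to all simulated processes */
  MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
  printf("process %d of %d received %d\n", rank, size, value);

  MPI_Finalize();
  return 0;
}
\end{verbatim}
Roughly speaking, SMPI benchmarks the computation parts of such a program on the host machine while the communication calls are mapped onto the simulated network, which is what allows the experiments below to vary the network parameters without changing the application.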
%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%
\end{figure}
According to the results of Figure~\ref{fig:03}, a degradation of the network
latency from $8.10^{-6}$ to $6.10^{-5}$ implies an increase of more than $75\%$
(resp. $82\%$) of the execution time for the classical GMRES (resp. Krylov
multisplitting) algorithm. In addition, the Krylov multisplitting method appears
to tolerate network latency variations better, with a smaller increase of its
execution time.\RC{The two previous sentences seem contradictory to me....}
Consequently, in the worst case ($lat=6.10^{-5}$), the execution time of GMRES
is almost double that of the Krylov multisplitting, even though both were of the
same order of magnitude with a latency of $8.10^{-6}$.
\subsubsection{Network bandwidth impacts on performance}
\ \\
Network & N1 : bw=1Gb/s - lat=$5.10^{-5}$ \\ %\hline
Input matrix size & $N_{x} \times N_{y} \times N_{z} = 150 \times 150 \times 150$\\ \hline
\end{tabular}
\caption{Test conditions: Network bandwidth impacts\RC{What varies here? There is no variation in the table.}}
\label{tab:04}
\end{table}
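To interpret the influence of these two network parameters, one may keep in mind the usual first-order cost model of a point-to-point transfer (a simplification, not the network model actually implemented in SimGrid):
\[
T_{\mathrm{comm}}(m) \approx lat + \frac{m}{bw},
\]
where $lat$ is the network latency, $bw$ the bandwidth and $m$ the message size: the latency term dominates for the many small exchanges of an iterative solver, whereas the bandwidth term dominates for large data transfers.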
time for both algorithms increases when the input matrix size increases. But the
most interesting results are:
\begin{enumerate}
\item the number of iterations needed to reach convergence for the classical
  GMRES algorithm increases drastically (by a factor of $10$) when the matrix
  size goes beyond $N_{x}=150$;
\item for $N_{x}=140$, the execution time of the classical GMRES is almost
  double that of the Krylov multisplitting method.
\end{enumerate}