From: ziane
Date: Thu, 7 May 2015 13:19:34 +0000 (+0200)
Subject: SimGrid section
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/rce2015.git/commitdiff_plain/1a0318a4e9b5af6c26d37412f90d56932f69286d?ds=sidebyside;hp=-c

SimGrid section

Merge branch 'master' of ssh://bilbo.iut-bm.univ-fcomte.fr/rce2014
---

1a0318a4e9b5af6c26d37412f90d56932f69286d
diff --combined paper.tex
index 835f1e4,af2303e..d65672a
--- a/paper.tex
+++ b/paper.tex
@@@ -246,8 -246,7 +246,8 @@@ by simulation are in accordance with re
  magnitude. To our knowledge, there is no study on this problem.
  
  \section{SimGrid}
- \label{sec:simgrid}
 +\label{sec:simgrid}
 +SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid,casanova+giersch+legrand+al.2014.versatile} is a discrete event simulation framework for studying the behavior of large-scale distributed computing platforms such as grids, peer-to-peer systems, clouds and high performance computing systems. It is widely used to simulate and evaluate heuristics, to prototype applications, or even to assess legacy MPI applications. It is still actively developed by the scientific community and distributed as open source software.
  
  %%%%%%%%%%%%%%%%%%%%%%%%%
  %%%%%%%%%%%%%%%%%%%%%%%%%
@@@ -615,15 -614,16 +615,16 @@@ the network speed drops down (variati
  \end{figure}
  
- According to the results of Figure~\ref{fig:03}, a degradation of the network
- latency from $8.10^{-6}$ to $6.10^{-5}$ implies an absolute time increase of more
- than $75\%$ (resp. $82\%$) of the execution for the classical GMRES (resp. Krylov
- multisplitting) algorithm. In addition, it appears that the Krylov
- multisplitting method tolerates more the network latency variation with a less
- rate increase of the execution time. Consequently, in the worst case
- ($lat=6.10^{-5 }$), the execution time for GMRES is almost the double than the
- time of the Krylov multisplitting, even though, the performance was on the same
- order of magnitude with a latency of $8.10^{-6}$.
+ According to the results of Figure~\ref{fig:03}, a degradation of the network
+ latency from $8\times 10^{-6}$ to $6\times 10^{-5}$ implies an increase of more
+ than $75\%$ (resp. $82\%$) in the execution time of the classical GMRES
+ (resp. Krylov multisplitting) algorithm. In addition, it appears that the
+ Krylov multisplitting method better tolerates the network latency variation,
+ with a lower rate of increase of its execution time.\RC{The two previous
+ sentences seem contradictory to me...} Consequently, in the worst case
+ ($lat=6\times 10^{-5}$), the execution time of GMRES is almost double that of
+ the Krylov multisplitting, even though both were of the same order of
+ magnitude with a latency of $8\times 10^{-6}$.
  
  \subsubsection{Network bandwidth impacts on performance}
  \ \\
@@@ -635,7 -635,7 +636,7 @@@ Network & N1 : bw=1Gbs
  - lat=5.10$^{-5}$ \\ %\hline
  Input matrix size & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ \hline \\
  \end{tabular}
- \caption{Test conditions: Network bandwidth impacts}
+ \caption{Test conditions: Network bandwidth impacts\RC{What varies here? There is no variation in the table}}
  \label{tab:04}
  \end{table}
@@@ -681,9 -681,9 +682,9 @@@ In these experiments, the input matrix
  time for both algorithms increases when the input matrix size also increases.
  But the interesting results are:
  \begin{enumerate}
- \item the drastic increase ($10$ times) \RC{I do not see this in the figure}
- \RCE{Corrected} of the number of iterations needed to reach the convergence for the classical
- GMRES algorithm when the matrix size go beyond $N_{x}=150$;
+ \item the drastic increase ($10$ times) of the number of iterations needed to
+ reach the convergence for the classical GMRES algorithm when the matrix size
+ goes beyond $N_{x}=150$; \RC{It is still not clear... OK, the number of iterations is 10 times larger, but the rest of the sentence does not mean anything}
  \item the classical GMRES execution time is almost double for $N_{x}=140$
  compared with that of the Krylov multisplitting method.
  \end{enumerate}
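
The SimGrid paragraph added in the first hunk states that legacy MPI applications can be assessed through simulation. As a minimal sketch of that workflow (not part of the commit; the file names, the platform description and the toy reduction are illustrative assumptions, not the solvers studied in the paper), an unmodified MPI program of the following shape can typically be compiled with SimGrid's smpicc wrapper and run on a simulated platform with smpirun:

/* Minimal sketch, assuming SimGrid's SMPI toolchain is installed.
 * Hypothetical file names for illustration only:
 *   smpicc  -O2 solver_stub.c -o solver_stub
 *   smpirun -np 16 -platform platform.xml -hostfile hostfile.txt ./solver_stub
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Stand-in for one iteration of a distributed solver: each process
     * contributes a local residual and the global residual is reduced. */
    double local_residual  = 1.0 / (rank + 1);
    double global_residual = 0.0;
    MPI_Allreduce(&local_residual, &global_residual, 1, MPI_DOUBLE,
                  MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global residual over %d processes: %g\n", size, global_residual);

    MPI_Finalize();
    return 0;
}

Under SMPI the computation runs natively on the host while communication times are derived from the platform model, which is what lets network parameters such as the latency and bandwidth variations discussed above be explored at low cost.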
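
For the latency discussion in the second hunk, the quoted figures (more than $75\%$ for GMRES, $82\%$ for the Krylov multisplitting) read as relative increases of the execution time when the latency degrades from $8\times 10^{-6}$ to $6\times 10^{-5}$. With a notation introduced here for illustration only, where $T(lat)$ denotes the measured execution time of a given solver at network latency $lat$, they correspond to

\[
  \Delta T_{\mathrm{rel}}
  = \frac{T\bigl(lat = 6\times 10^{-5}\bigr) - T\bigl(lat = 8\times 10^{-6}\bigr)}
         {T\bigl(lat = 8\times 10^{-6}\bigr)} \times 100\,\% .
\]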