X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/rce2015.git/blobdiff_plain/6cf9ae48517bcca32ee10fc0e2140e3df0386bd7..4a76c4a8fcc10af92df927a0bebb9a1793d7e1aa:/paper.tex?ds=inline

diff --git a/paper.tex b/paper.tex
index 6ac52c3..0b7dc1d 100644
--- a/paper.tex
+++ b/paper.tex
@@ -21,7 +21,6 @@
 \usepackage{algpseudocode}
 %\usepackage{amsthm}
 \usepackage{graphicx}
-\usepackage[american]{babel}
 % Extension pour les liens intra-documents (tagged PDF)
 % et l'affichage correct des URL (commande \url{http://example.com})
 %\usepackage{hyperref}
@@ -75,23 +74,26 @@ analysis of simulated grid-enabled numerical iterative algorithms}
 %\itshape{\journalnamelc}\footnotemark[2]}
 
-\author{ Charles Emile Ramamonjisoa and
-         David Laiymani and
-         Arnaud Giersch and
-         Lilia Ziane Khodja and
-         Raphaël Couturier
+\author{Charles Emile Ramamonjisoa\affil{1},
+        David Laiymani\affil{1},
+        Arnaud Giersch\affil{1},
+        Lilia Ziane Khodja\affil{2} and
+        Raphaël Couturier\affil{1}
 }
 
 \address{
-  \centering
-  Femto-ST Institute - DISC Department\\
-  Université de Franche-Comté\\
-  Belfort\\
-  Email: \email{{raphael.couturier,arnaud.giersch,david.laiymani,charles.ramamonjisoa}@univ-fcomte.fr}
+  \affilnum{1}%
+  Femto-ST Institute, DISC Department,
+  University of Franche-Comté,
+  Belfort, France.
+  Email:~\email{{charles.ramamonjisoa,david.laiymani,arnaud.giersch,raphael.couturier}@univ-fcomte.fr}\break
+  \affilnum{2}
+  Department of Aerospace \& Mechanical Engineering,
+  Non Linear Computational Mechanics,
+  University of Liege, Liege, Belgium.
+  Email:~\email{l.zianekhodja@ulg.ac.be}
 }
 
-%% Lilia Ziane Khodja: Department of Aerospace \& Mechanical Engineering\\ Non Linear Computational Mechanics\\ University of Liege\\ Liege, Belgium. Email: l.zianekhodja@ulg.ac.be
-
 \begin{abstract}
 The behavior of multi-core applications is always a challenge to predict, especially
 with a new architecture for which no experiment has been performed. With some
 applications, it is difficult, if not impossible, to build
@@ -134,7 +136,7 @@ are often very important. So, in this context it is difficult to optimize a
 given application for a given architecture. In this way and in order to reduce
 the access cost to these computing resources it seems very interesting to use a
 simulation environment. The advantages are numerous: development life cycle,
-code debugging, ability to obtain results quickly~\ldots. In counterpart, the simulation results need to be consistent with the real ones.
+code debugging, ability to obtain results quickly\dots{} In counterpart, the simulation results need to be consistent with the real ones.
 
 In this paper we focus on a class of highly efficient parallel algorithms called
 \emph{iterative algorithms}. The parallel scheme of iterative methods is quite
@@ -235,7 +237,7 @@ for the asynchronous scheme (this number depends depends on the delay of the
 messages). Note that, it is not the case in the synchronous mode where the
 number of iterations is the same than in the sequential mode. In this way, the
 set of the parameters of the platform (number of nodes, power of nodes,
-inter and intra clusters bandwidth and latency \ldots) and of the
+inter and intra clusters bandwidth and latency, \ldots) and of the
 application can drastically change the number of iterations required to get the
 convergence. It follows that asynchronous iterative algorithms are difficult to
 optimize since the financial and deployment costs on large scale multi-core
@@ -246,7 +248,8 @@ by simulation are in accordance with reality i.e.
 of the same order of magnitude. To our knowledge, there is no study on this problematic.
 
 \section{SimGrid}
- \label{sec:simgrid}
+\label{sec:simgrid}
+SimGrid~\cite{SimGrid,casanova+legrand+quinson.2008.simgrid,casanova+giersch+legrand+al.2014.versatile} is a discrete event simulation framework to study the behavior of large-scale distributed computing platforms as Grids, Peer-to-Peer systems, Clouds and High Performance Computation systems. It is widely used to simulate and evaluate heuristics, prototype applications or even assess legacy MPI applications. It is still actively developed by the scientific community and distributed as an open source software.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%
 %%%%%%%%%%%%%%%%%%%%%%%%%
@@ -614,15 +617,16 @@ the network speed drops down (variation of 12.5\%), the difference between t
 \end{figure}
 
-According to the results of Figure~\ref{fig:03}, a degradation of the network
-latency from $8.10^{-6}$ to $6.10^{-5}$ implies an absolute time increase of more
-than $75\%$ (resp. $82\%$) of the execution for the classical GMRES (resp. Krylov
-multisplitting) algorithm. In addition, it appears that the Krylov
-multisplitting method tolerates more the network latency variation with a less
-rate increase of the execution time. Consequently, in the worst case
-($lat=6.10^{-5 }$), the execution time for GMRES is almost the double than the
-time of the Krylov multisplitting, even though, the performance was on the same
-order of magnitude with a latency of $8.10^{-6}$.
+According to the results of Figure~\ref{fig:03}, a degradation of the network
+latency from $8.10^{-6}$ to $6.10^{-5}$ implies an absolute time increase of
+more than $75\%$ (resp. $82\%$) of the execution for the classical GMRES
+(resp. Krylov multisplitting) algorithm. In addition, it appears that the
+Krylov multisplitting method tolerates more the network latency variation with a
+less rate increase of the execution time.\RC{Les 2 précédentes phrases me
+  semblent en contradiction....} Consequently, in the worst case ($lat=6.10^{-5
+}$), the execution time for GMRES is almost the double than the time of the
+Krylov multisplitting, even though, the performance was on the same order of
+magnitude with a latency of $8.10^{-6}$.
 
 \subsubsection{Network bandwidth impacts on performance}
 \ \\
 
@@ -634,7 +638,7 @@ order of magnitude with a latency of $8.10^{-6}$.
 Network & N1 : bw=1Gbs - lat=5.10$^{-5}$ \\ %\hline
 Input matrix size & N$_{x}$ x N$_{y}$ x N$_{z}$ =150 x 150 x 150\\ \hline \\
 \end{tabular}
-\caption{Test conditions: Network bandwidth impacts}
+\caption{Test conditions: Network bandwidth impacts\RC{Qu'est ce qui varie ici? Il n'y a pas de variation dans le tableau}}
 \label{tab:04}
 \end{table}
 
@@ -680,9 +684,9 @@ In these experiments, the input matrix size has been set from $N_{x} = N_{y}
 time for both algorithms increases when the input matrix size also increases.
 But the interesting results are:
 \begin{enumerate}
-  \item the drastic increase ($10$ times) \RC{Je ne vois pas cela sur la figure}
-\RCE{Corrige} of the number of iterations needed to reach the convergence for the classical
-GMRES algorithm when the matrix size go beyond $N_{x}=150$;
+  \item the drastic increase ($10$ times) of the number of iterations needed to
+    reach the convergence for the classical GMRES algorithm when the matrix size
+    go beyond $N_{x}=150$; \RC{C'est toujours pas clair... ok le nommbre d'itérations est 10 fois plus long mais la suite de la phrase ne veut rien dire}
 \item the classical GMRES execution time is almost the double for $N_{x}=140$ compared with the Krylov multisplitting method.
 \end{enumerate}
 
@@ -819,11 +823,10 @@ geographically distant clusters through the internet.
 
 CONCLUSION
 
-\section*{Acknowledgment}
-
+%\section*{Acknowledgment}
+\ack
 This work is partially funded by the Labex ACTION program (contract
 ANR-11-LABX-01-01).
-
 \bibliographystyle{wileyj}
 \bibliography{biblio}
 
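Note on the SMPI workflow mentioned in the SimGrid paragraph added above ("assess legacy MPI applications"): the sketch below only illustrates that idea and is not code from the paper. The Jacobi-style relaxation kernel, the ring topology, the convergence threshold and the file names platform.xml and hostfile are all assumptions chosen for the example; the paper's actual solvers are GMRES and the Krylov multisplitting method.

    /* Minimal synchronous iterative MPI kernel (Jacobi-style relaxation on a ring).
     * Illustrative only: unmodified MPI code like this can be compiled with smpicc
     * and run on a simulated platform with smpirun instead of mpicc/mpirun. */
    #include <math.h>
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double x = (double)rank;      /* local unknown                  */
        double residual = 1.0;        /* global convergence indicator   */
        const double eps = 1e-6;      /* assumed convergence threshold  */
        int iter = 0;

        while (residual > eps && iter < 10000) {
            int prev = (rank - 1 + size) % size;
            int next = (rank + 1) % size;
            double left, right;

            /* Synchronous exchange of boundary values with both neighbours. */
            MPI_Sendrecv(&x, 1, MPI_DOUBLE, next, 0, &left, 1, MPI_DOUBLE, prev, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&x, 1, MPI_DOUBLE, prev, 1, &right, 1, MPI_DOUBLE, next, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Local relaxation step followed by a global residual reduction:
             * this Allreduce is the synchronization point of the synchronous
             * iterative scheme discussed in the paper. */
            double x_new = (left + x + right) / 3.0;
            double local_delta = fabs(x_new - x);
            x = x_new;
            MPI_Allreduce(&local_delta, &residual, 1, MPI_DOUBLE, MPI_MAX,
                          MPI_COMM_WORLD);
            iter++;
        }

        if (rank == 0)
            printf("stopped after %d iterations, residual %g\n", iter, residual);

        MPI_Finalize();
        return 0;
    }

Under the same assumptions, the simulated run would look like "smpicc jacobi.c -o jacobi" followed by "smpirun -np 16 -platform platform.xml -hostfile hostfile ./jacobi", where platform.xml and hostfile are placeholder names for the files describing the simulated clusters and hosts.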