diff --git a/paper.tex b/paper.tex
index 0ec97f3..8583e51 100644
--- a/paper.tex
+++ b/paper.tex
@@ -1,4 +1,3 @@
-%\documentclass[conference]{IEEEtran}
 \documentclass[times]{cpeauth}
 
 \usepackage{moreverb}
@@ -226,14 +225,14 @@ The results in Figure 1 show that the number of iterations of
 classical GMRES does not vary for a given input matrix size; this is
 not the case for the multisplitting method.
 
-%%\begin{wrapfigure}{l}{60mm}
+%\begin{wrapfigure}{l}{60mm}
 \begin{figure}[ht!]
 \centering
-\includegraphics[width=60mm]{Cluster_x_Nodes_NX=150_and_NX=170.jpg}
-\caption{Cluster x Nodes NX=150 and NX=170 \label{overflow}}
+\includegraphics[width=60mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
+\caption{Cluster x Nodes NX=150 and NX=170}
+%\label{overflow}
 \end{figure}
-%%\end{wrapfigure}
-
+%\end{wrapfigure}
 
 Except for the 8x8 cluster, the difference in execution time between
 the two algorithms is significant when
@@ -260,11 +259,14 @@ matrix size.
 
 %\RCE{same for all the data tables}
 
 
+%\begin{wrapfigure}{l}{60mm}
 \begin{figure}[ht!]
 \centering
-\includegraphics[width=60mm]{Cluster_x_Nodes_N1_x_N2.jpg}
-\caption{Cluster x Nodes N1 x N2\label{overflow}}
+\includegraphics[width=60mm]{cluster_x_nodes_n1_x_n2.pdf}
+\caption{Cluster x Nodes N1 x N2}
+%\label{overflow}
 \end{figure}
+%\end{wrapfigure}
 
 The experiments compare the behavior of the algorithms running first
 on a fast inter-cluster network (N1) and then on a slower network (N2).
@@ -291,8 +293,9 @@ Table 3: Network latency impact
 
 
 \begin{figure}[ht!]
 \centering
-\includegraphics[width=60mm]{Network_latency_impact_on_execution_time.jpg}
-\caption{Network latency impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{network_latency_impact_on_execution_time.pdf}
+\caption{Network latency impact on execution time}
+%\label{overflow}
 \end{figure}
 
 
@@ -322,8 +325,9 @@ Table 4: Network bandwidth impact
 
 \begin{figure}[ht!]
 \centering
-\includegraphics[width=60mm]{Network_bandwith_impact_on_execution_time.jpg}
-\caption{Network bandwith impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{network_bandwith_impact_on_execution_time.pdf}
+\caption{Network bandwidth impact on execution time}
+%\label{overflow}
 \end{figure}
 
 
@@ -350,8 +354,9 @@ Table 5: Input matrix size impact
 
 \begin{figure}[ht!]
 \centering
-\includegraphics[width=60mm]{Pb_size_impact_on_execution_time.jpg}
-\caption{Pb size impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{pb_size_impact_on_execution_time.pdf}
+\caption{Problem size impact on execution time}
+%\label{overflow}
 \end{figure}
 
 In this experiment, the input matrix size was set from
@@ -384,8 +389,9 @@ Table 6: CPU power impact
 
 \begin{figure}[ht!]
 \centering
-\includegraphics[width=60mm]{CPU_Power_impact_on_execution_time.jpg}
-\caption{CPU Power impact on execution time\label{overflow}}
+\includegraphics[width=60mm]{cpu_power_impact_on_execution_time.pdf}
+\caption{CPU power impact on execution time}
+%\label{overflow}
 \end{figure}
 
 Using the flexibility of the SIMGRID simulator, we have tried to determine the
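For context on the N1/N2 experiment discussed in the diff above: in SimGrid, the two networks differ only in the link definitions of the platform file. The fragment below is a minimal sketch, assuming a SimGrid 3 XML platform description; the host and link names and all numeric values are illustrative placeholders, not the settings used in the paper's experiments.

\begin{verbatim}
<?xml version='1.0'?>
<!DOCTYPE platform SYSTEM
  "http://simgrid.gforge.inria.fr/simgrid.dtd">
<platform version="3">
 <AS id="AS0" routing="Full">
  <!-- one host per cluster (illustrative names) -->
  <host id="c0-node0" power="1E9"/>
  <host id="c1-node0" power="1E9"/>
  <!-- N1: fast inter-cluster network
       (bandwidth in bytes/s, latency in s) -->
  <link id="N1" bandwidth="1.25E8" latency="5E-5"/>
  <!-- N2: slower network for the comparison -->
  <link id="N2" bandwidth="1.25E7" latency="5E-4"/>
  <!-- route inter-cluster traffic over N1; point it
       at N2 to rerun on the slow network -->
  <route src="c0-node0" dst="c1-node0">
   <link_ctn id="N1"/>
  </route>
 </AS>
</platform>
\end{verbatim}

Varying only the bandwidth and latency attributes while keeping everything else fixed is what isolates the network effects reported in Tables 3 and 4.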
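Likewise, the CPU power experiment of Table 6 only requires changing the simulated host speed in the platform file between runs. A minimal sketch under the same assumptions (the attribute is named power in SimGrid 3 platform files; ids and values are again illustrative):

\begin{verbatim}
<!-- baseline run: host delivers 1 Gflop/s -->
<host id="c0-node0" power="1E9"/>

<!-- second run: same platform file with power
     doubled; computation phases complete twice
     as fast, communications are unchanged -->
<host id="c0-node0" power="2E9"/>
\end{verbatim}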