\includegraphics[width=100mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
\end{center}
\caption{Various grid configurations with the two matrix sizes $Mat1$=$150^3$ and $Mat2$=$170^3$}
\label{fig:01}
\end{figure}
the GMRES algorithms. This means that the multisplitting methods are more
efficient on distributed systems with high-latency networks.
\begin{figure}[t]
\centering
\includegraphics[width=100mm]{cluster_x_nodes_n1_x_n2.pdf}
\caption{Various grid configurations with the inter-cluster networks $N1$ and $N2$}
\label{fig:02}
\end{figure}
\subsubsection{Network latency impacts on performance\\}
Figure~\ref{fig:03} shows the impact of the network latency on the performance of both algorithms. The simulation is conducted on a computational grid of 2 clusters of 16 processors each (i.e. the 2$\times$16 configuration) interconnected by a network of bandwidth $bw$=1Gb/s, to solve a 3D Poisson problem of size $150^3$. According to the results, degrading the network latency from $8\times 10^{-6}$s to $6\times 10^{-5}$s increases the absolute execution time of both algorithms, but not at the same rate. The GMRES algorithm is more sensitive to the latency degradation than the Krylov two-stage algorithm.
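For intuition only, and not as the model used for these experiments, this sensitivity can be illustrated with a simple latency-bandwidth cost sketch, where $n$, $m$, $\alpha$ and $\beta$ are generic illustrative quantities: if an algorithm exchanges $n$ messages of $m$ bytes per iteration over a link of latency $\alpha$ and bandwidth $\beta$, its per-iteration communication time is roughly
\[
T_{comm} \approx n\left(\alpha + \frac{m}{\beta}\right),
\]
so a method that synchronizes frequently (large $n$, small $m$), as GMRES does through its global reductions, pays the latency term $n\alpha$ more often than a method that communicates less frequently, which is consistent with the trend observed in Figure~\ref{fig:03}.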
\begin{figure}[t]
\centering
\includegraphics[width=100mm]{network_latency_impact_on_execution_time.pdf}
\caption{Network latency impacts on execution times}
\label{fig:03}
\end{figure}
\subsubsection{Network bandwidth impacts on performance\\}