The difference in execution times between the two algorithms is significant for the different grid architectures. The synchronous Krylov two-stage algorithm performs better than the GMRES algorithm, even for a high number of clusters (about $32\%$ more efficient than GMRES on a grid of 8$\times$8). In addition, the Krylov two-stage algorithm scales better than the GMRES one when the number of processors in the computational grid increases: compared to 32 processors (grid of 2$\times$16), the Krylov two-stage algorithm is about $48\%$ faster on 64 processors (grid of 8$\times$8), whereas the GMRES algorithm is only about $40\%$ faster.
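In the following, a gain of $g\%$ for a given configuration is assumed to denote the relative reduction of the execution time with respect to a reference configuration (the GMRES algorithm when comparing the two algorithms, or the smaller grid when comparing grid sizes):
\[
g = 100 \times \frac{T_{\mathrm{ref}} - T}{T_{\mathrm{ref}}},
\]
where $T_{\mathrm{ref}}$ is the execution time of the reference configuration and $T$ that of the configuration under study.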
\begin{figure}[ht]
\begin{center}
\includegraphics[width=100mm]{cluster_x_nodes_nx_150_and_nx_170.pdf}
\end{center}
\caption{Various grid configurations with two problem sizes: $150^3$ and $170^3$}
\label{fig:01}
\end{figure}
On the high-latency network $N2$, the performance degradation of the Krylov two-stage algorithm is smaller than that of the GMRES algorithm. This means that the multisplitting methods are more
efficient for distributed systems with high-latency networks.
\begin{figure}[ht]
\centering
\includegraphics[width=100mm]{cluster_x_nodes_n1_x_n2.pdf}
\caption{Various grid configurations with networks $N1$ vs. $N2$}
\label{fig:02}
\end{figure}
\subsubsection{Network latency impacts on performance\\}
Figure~\ref{fig:03} shows the impact of the network latency on the performance of both algorithms. The simulation is conducted on a computational grid of 2 clusters of 16 processors each (i.e. configuration 2$\times$16) interconnected by a network of bandwidth $bw$=1Gbs to solve a 3D Poisson problem of size $150^3$. According to the results, increasing the network latency from $8\mu$s to $60\mu$s increases the execution times of both algorithms, but not at the same rate. The GMRES algorithm is more sensitive to the latency degradation than the Krylov two-stage algorithm.
\begin{figure}[ht]
\centering
\includegraphics[width=100mm]{network_latency_impact_on_execution_time.pdf}
\caption{Network latency impacts on execution times}
\label{fig:03}
\end{figure}
\subsubsection{Network bandwidth impacts on performance\\}
Figure~\ref{fig:04} reports the results obtained for the simulation of a grid of 2$\times$16 processors interconnected by a network of latency $lat=50\mu$s to solve a 3D Poisson problem of size $150^3$. The results show that increasing the network bandwidth from 1Gbs to 10Gbs improves the performance of both algorithms by reducing their execution times. However, the Krylov two-stage algorithm benefits more in the considered bandwidth interval, with a gain of $40\%$ compared to only about $24\%$ for the classical GMRES algorithm.
\begin{figure}[ht]
\centering
\includegraphics[width=100mm]{network_bandwith_impact_on_execution_time.pdf}
\caption{Network bandwidth impacts on execution times}
\label{fig:04}
\end{figure}
\subsubsection{Matrix size impacts on performance\\}
In these experiments, the matrix size of the 3D Poisson problem is varied from $50^3$ to $190^3$ elements. The simulated computational grid is composed of 4 clusters of 8 processors each, interconnected by the network $N2$ (see Table~\ref{tab:01}). Obviously, as shown in Figure~\ref{fig:05}, the execution times of both algorithms increase with the matrix size. For all problem sizes, the GMRES algorithm is always slower than the Krylov two-stage algorithm. Moreover, for this benchmark, it seems that the greater the problem size, the bigger the ratio between the execution times of both algorithms. We can also observe that for some problem sizes, the convergence (and thus the execution time) of the Krylov two-stage algorithm varies quite a lot. %This is due to the 3D partitioning of the 3D matrix of the Poisson problem.
These findings may help end users to set up the best target environment for the application deployment when scaling up the problem size.
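For reference, the problem size $n^3$ denotes the number of mesh points, and thus of unknowns, of the discretized Poisson problem. Assuming the classical second-order 7-point finite-difference discretization of the 3D Poisson equation $-\Delta u = f$ on a cube with mesh step $h$, each unknown satisfies
\[
6u_{i,j,k} - u_{i-1,j,k} - u_{i+1,j,k} - u_{i,j-1,k} - u_{i,j+1,k} - u_{i,j,k-1} - u_{i,j,k+1} = h^{2} f_{i,j,k},
\]
so the sparse matrix has dimension $n^3 \times n^3$ with at most seven nonzero entries per row, which explains the growth of the execution times with $n$.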
\begin{figure}[ht]
\centering
\includegraphics[width=100mm]{pb_size_impact_on_execution_time.pdf}
\caption{Problem size impacts on execution times}
\label{fig:05}
\end{figure}
\subsubsection{CPU Power impacts on performance\\}