X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_gpu.git/blobdiff_plain/2373d6731790822c6e738cfa54aec1ccaf802222..177d75ae3d1a1061fb9caa43de9afca760ca0d1a:/BookGPU/Chapters/chapter12/ch12.tex?ds=sidebyside

diff --git a/BookGPU/Chapters/chapter12/ch12.tex b/BookGPU/Chapters/chapter12/ch12.tex
index 254c0cb..8b869b3 100755
--- a/BookGPU/Chapters/chapter12/ch12.tex
+++ b/BookGPU/Chapters/chapter12/ch12.tex
@@ -1,4 +1,4 @@
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 %%                          %%
 %%        CHAPTER 12        %%
 %%                          %%
@@ -729,19 +729,6 @@ initial matrix.
 \label{ch12:tab:04}
 \end{table}
 
-We have used the parallel CG and GMRES algorithms for solving sparse linear systems of $25$
-million unknown values. The sparse matrices associated to these linear systems are generated
-from those presented in Table~\ref{ch12:tab:01}. Their main characteristics are given in Table~\ref{ch12:tab:04}.
-Tables~\ref{ch12:tab:05} and~\ref{ch12:tab:06} shows the performances of the parallel CG and
-GMRES solvers, respectively, obtained on a cluster of $24$ CPU cores and on a cluster of $12$
-GPUs. Obviously, we can notice from these tables that solving large sparse linear systems on
-a GPU cluster is more efficient than on a CPU cluster (see relative gains $\tau$). We can also
-notice that the execution times of the CG method, whether in a CPU cluster or in a GPU cluster,
-are better than those of the GMRES method for solving large symmetric linear systems. In fact, the
-CG method is characterized by a better convergence\index{Convergence} rate and a shorter execution
-time of an iteration than those of the GMRES method. Moreover, an iteration of the parallel GMRES
-method requires more data exchanges between computing nodes compared to the parallel CG method.
-
 \begin{table}
 \begin{center}
 \begin{tabular}{|c|c|c|c|c|c|c|}
@@ -802,6 +789,21 @@ on a cluster of 12 GPUs.}
 \end{center}
 \end{table}
 
+
+We have used the parallel CG and GMRES algorithms for solving sparse linear systems of $25$
+million unknowns. The sparse matrices associated with these linear systems are generated
+from those presented in Table~\ref{ch12:tab:01}. Their main characteristics are given in Table~\ref{ch12:tab:04}.
+Tables~\ref{ch12:tab:05} and~\ref{ch12:tab:06} show the performances of the parallel CG and
+GMRES solvers, respectively, obtained on a cluster of $24$ CPU cores and on a cluster of $12$
+GPUs. We can clearly see from these tables that solving large sparse linear systems on
+a GPU cluster is more efficient than on a CPU cluster (see relative gains $\tau$). We can also
+notice that the execution times of the CG method, whether on a CPU cluster or on a GPU cluster,
+are lower than those of the GMRES method for solving large symmetric linear systems. In fact, the
+CG method is characterized by a better convergence\index{Convergence} rate and a shorter execution
+time per iteration than the GMRES method. Moreover, an iteration of the parallel GMRES
+method requires more data exchanges between computing nodes than an iteration of the parallel CG method.
+
+
 %%--------------------------%%
 %%  SECTION 5               %%
 %%--------------------------%%
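
The relative gain $\tau$ referred to in the added paragraph compares the execution time measured
on the CPU cluster with the one measured on the GPU cluster. The fragment below is only a minimal
LaTeX sketch of that comparison, assuming $\tau$ is defined as the ratio of those two times; the
symbols $T_{cpu}$ and $T_{gpu}$ and the label are illustrative names introduced here, and the
chapter's own definition of $\tau$ takes precedence if it differs.

% Hypothetical sketch of the relative gain reported in Tables~\ref{ch12:tab:05}
% and~\ref{ch12:tab:06}: the ratio of the CPU-cluster execution time to the
% GPU-cluster execution time for the same solver and the same sparse linear system.
\begin{equation}
  \tau = \frac{T_{cpu}}{T_{gpu}}
  \label{ch12:eq:relative-gain-sketch} % hypothetical label, not defined in the chapter
\end{equation}

Under this assumed definition, a solver that needs $120$ seconds on the CPU cluster and $12$
seconds on the GPU cluster would obtain a relative gain of $\tau = 10$.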