From: Kahina
Date: Mon, 28 Dec 2015 07:03:03 +0000 (+0100)
Subject: commenter fig 3
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper2.git/commitdiff_plain/a65ed0e53d03bcf65eeb8c281dea8e57823299a3?hp=67b37118a273b5e87b90da1ccce91d97a55e09e5

commenter fig 3
---

diff --git a/paper.tex b/paper.tex
index 414bd83..7878657 100644
--- a/paper.tex
+++ b/paper.tex
@@ -803,7 +803,7 @@ This figure~\ref{fig:01} shows that (CUDA OpenMP) Multi-GPU approach reduce the
 
 \subsubsection{Execution times in seconds of the Ehrlich-Aberth method for solving full polynomials on GPUs using the shared memory paradigm with OpenMP}
 
-This experiments shows the execution time of the EA algorithm, on single GPU (CUDA) and Multi-GPU (CUDA OpenMP)approach for full polynomials of degrees ranging from 100,000 to 1,400,000
+This experiment shows the execution time of the EA algorithm on a single GPU (CUDA) and with the Multi-GPU (CUDA OpenMP) approach, for full polynomials of degrees ranging from 100,000 to 1,400,000.
 
 \begin{figure}[htbp]
 \centering
 
 The second test, with full polynomials, shows a very significant saving of time: for a polynomial of degree 1.4M, the (CUDA OpenMP) approach with 4 GPUs computes and solves it 4 times as fast as a single GPU. We notice that the curves are positioned one below the other: the more GPUs are used, the lower the execution time.
 
+\subsection{Test with the Multi-GPU (CUDA MPI) approach}
+In this part we perform a set of experiments to compare the Multi-GPU (CUDA MPI) approach with a single GPU, for solving full and sparse polynomials of degrees ranging from 100,000 to 1,400,000.
+
+\subsubsection{Execution times in seconds of the Ehrlich-Aberth method for solving sparse polynomials on GPUs using the distributed memory paradigm with MPI}
+
 \begin{figure}[htbp]
 \centering
 \includegraphics[angle=-90,width=0.5\textwidth]{Sparse_mpi}
 \caption{Execution times in seconds of the Ehrlich-Aberth method for solving sparse polynomials on GPUs using the distributed memory paradigm with MPI}
 \label{fig:02}
 \end{figure}
-
+~\\
+This figure shows 4 execution-time curves for the EA algorithm: one with a single GPU and 3 with the Multi-GPU approach on 2, 3 and 4 GPUs. We see clearly that the single-GPU curve lies above the others, showing its higher execution time compared to the Multi-GPU runs. The Multi-GPU (CUDA MPI) approach brings the execution time down to the order of 100 seconds for polynomials of degree above 1,000,000, whereas the single GPU stays on the order of 1,000 seconds.
+\\
+\subsubsection{Execution times in seconds of the Ehrlich-Aberth method for solving full polynomials on GPUs using the distributed memory paradigm with MPI}
 
 \begin{figure}[htbp]
 \centering
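
For reference, the shared-memory paradigm behind the (CUDA OpenMP) experiments follows the standard pattern of one OpenMP host thread bound to each CUDA device. The block below is only a minimal sketch of that pattern, assuming the usual CUDA runtime calls; it is not the repository's code, and every identifier in it is illustrative.

    /* A minimal sketch of the CUDA OpenMP multi-GPU pattern: one OpenMP
     * host thread is bound to each GPU. NOT the paper's code; all
     * identifiers are illustrative. Build e.g. with:
     *   nvcc -Xcompiler -fopenmp openmp_sketch.c -o openmp_sketch */
    #include <stdio.h>
    #include <omp.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int num_gpus = 0;
        cudaGetDeviceCount(&num_gpus);   /* 1 to 4 GPUs in the experiments */
        if (num_gpus < 1)
            return 1;
        omp_set_num_threads(num_gpus);   /* one host thread per device */

        #pragma omp parallel
        {
            int id = omp_get_thread_num();
            cudaSetDevice(id);           /* bind this thread to GPU id */
            /* Each thread would launch the Ehrlich-Aberth kernel on its
             * slice of the root approximations; because the threads share
             * the host address space, the updated roots can be read by
             * all threads before the next iteration. */
            printf("host thread %d drives GPU %d\n", id, id);
        }
        return 0;
    }

The point of this paradigm is that the host threads share one address space, so the roots updated by the different GPUs can be exchanged between Ehrlich-Aberth iterations without explicit communication.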
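
The distributed-memory paradigm of the (CUDA MPI) experiments replaces that shared address space with explicit messages. Below is a minimal sketch, assuming one MPI rank per GPU and an MPI_Allgather to rebuild the complete vector of root approximations after each iteration; whether the paper uses this collective is an assumption, as are all names and sizes in the code.

    /* A minimal sketch of the CUDA MPI multi-GPU pattern: one MPI rank
     * per GPU, one collective exchange per iteration. NOT the paper's
     * code; identifiers, sizes and the choice of MPI_Allgather are
     * assumptions. Build e.g. with mpicc, linking against cudart. */
    #include <stdlib.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    #define DEGREE 1000000            /* polynomial degree, as in the tests */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        cudaSetDevice(rank);          /* assumes one visible GPU per rank */

        int chunk = DEGREE / size;    /* assumes size divides DEGREE */
        double *local = malloc((size_t)chunk * sizeof *local);
        double *all   = malloc((size_t)DEGREE * sizeof *all);

        /* Real doubles stand in for the complex root approximations of
         * the actual solver. In the real code each rank would update its
         * chunk on its GPU and copy it back with cudaMemcpy. */
        for (int i = 0; i < chunk; i++)
            local[i] = 0.0;

        /* Rebuild the complete root vector on every rank, so the next
         * Ehrlich-Aberth iteration sees all current approximations. */
        MPI_Allgather(local, chunk, MPI_DOUBLE,
                      all,   chunk, MPI_DOUBLE, MPI_COMM_WORLD);

        free(local);
        free(all);
        MPI_Finalize();
        return 0;
    }

Each iteration then pays for one collective exchange; this communication cost is what the MPI approach trades against the extra compute power of the added GPUs.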