X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper2.git/blobdiff_plain/048a430c1ced6539008e4733fa88179f738cb5b0..22d12994404e33bf3854ae4c59c972eeb997da62:/paper.tex?ds=sidebyside
diff --git a/paper.tex b/paper.tex
index 18dd3c2..a236a5a 100644
--- a/paper.tex
+++ b/paper.tex
@@ -178,7 +178,7 @@
% *** SPECIALIZED LIST PACKAGES ***
%
-\usepackage{algorithmic}
+
% algorithmic.sty was written by Peter Williams and Rogerio Brito.
% This package provides an algorithmic environment for describing algorithms.
% You can use the algorithmic environment in-text or within a figure
@@ -359,9 +359,8 @@ Email: \{kahina.ghidouche,ar.sider\}@univ-bejaia.dz}
\and
\IEEEauthorblockN{Lilia Ziane Khodja, Raphaël Couturier}
\IEEEauthorblockA{FEMTO-ST Institute\\
- University of Bourgogne Franche-Comte\\
- France\\
-Email: zianekhodja.lilia@gmail.com, raphael.couturier@univ-fcomte.fr}}
+ University of Bourgogne Franche-Comte, France\\
+Email: zianekhodja.lilia@gmail.com\\ raphael.couturier@univ-fcomte.fr}}

% conference papers do not typically use \thanks and this command
% is locked out in conference mode. If really needed, such as for
@@ -773,10 +772,9 @@ CUDA running threads like threads on a CPU host. In the following,
Algorithm~\ref{alg1-cuda} shows the GPU parallel
implementation of the Ehrlich-Aberth method.

-\begin{enumerate}
\begin{algorithm}[htpb]
\label{alg1-cuda}
-%\LinesNumbered
+\LinesNumbered
\caption{CUDA Algorithm to find roots with the Ehrlich-Aberth method}

\KwIn{$Z^{0}$ (Initial root's vector), $\varepsilon$ (Error tolerance
@@ -786,22 +784,19 @@ implementation of the Ehrlich-Aberth method.

%\BlankLine

-\item Initialization of P\;
-\item Initialization of Pu\;
-\item Initialization of the solution vector $Z^{0}$\;
-\item Allocate and copy initial data to the GPU global memory\;
-\item k=0\;
-\item \While {$\Delta z_{max} > \epsilon$}{
-\item Let $\Delta z_{max}=0$\;
-\item $ kernel\_save(ZPrec,Z)$\;
-\item k=k+1\;
-\item $ kernel\_update(Z,P,Pu)$\;
-\item $kernel\_testConverge(\Delta z_{max},Z,ZPrec)$\;
+Initialization of P\;
+Initialization of Pu\;
+Initialization of the solution vector $Z^{0}$\;
+Allocate and copy initial data to the GPU global memory\;
+\While {$\Delta z_{max} > \varepsilon$}{
+ $kernel\_save(ZPrec,Z)$\;
+ $kernel\_update(Z,P,Pu)$\;
+ $\Delta z_{max}=kernel\_testConverge(Z,ZPrec)$\;
}
-\item Copy results from GPU memory to CPU memory\;
+Copy results from GPU memory to CPU memory\;
\end{algorithm}
-\end{enumerate}
+

~\\
\RC{In the end, we keep this code and explain it, if it is Kahina who
@@ -1219,13 +1214,26 @@ Under 1 million, OpenMP and MPI are almost equivalent.

\section{Conclusion}
\label{sec6}
-In this paper, we have presented a parallel implementation of Ehrlich-Aberth algorithm for solving full and sparse polynomials, on single GPU with CUDA and on multiple GPUs using two parallel paradigms : shared memory with OpenMP and distributed memory with MPI. These architectures were addressed by a CUDA-OpenMP approach and CUDA-MPI approach, respectively. The experiments show that, using parallel programming model like (OpenMP, MPI), we can efficiently manage multiple graphics cards to work together to solve the same problem and accelerate the parallel execution with 4 GPUs and solve a polynomial of degree 1,000,000, four times faster than on single GPU, that is a quasi-linear speedup.
+In this paper, we have presented a parallel implementation of the
+Ehrlich-Aberth algorithm to solve full and sparse polynomials, on a
+single GPU with CUDA and on multiple GPUs using two parallel
+paradigms: shared memory with OpenMP and distributed memory with
+MPI. These architectures were addressed by a CUDA-OpenMP approach and
+a CUDA-MPI approach, respectively. The experiments show that, using a
+parallel programming model such as OpenMP or MPI, we can efficiently
+manage multiple graphics cards to solve the same problem and
+accelerate the parallel execution: with 4 GPUs we can solve a
+polynomial of degree up to 5,000,000 four times faster than on a
+single GPU.
 %In future, we will evaluate our parallel implementation of Ehrlich-Aberth algorithm on other parallel programming model
-Our next objective is to extend the model presented here at clusters of nodes featuring multiple GPUs, with a three-level scheme: inter-node communication via MPI processes (distributed memory), management of multi-GPU node by OpenMP threads (shared memory).
+Our next objective is to extend the model presented here to clusters
+of multi-GPU nodes, with a three-level scheme: inter-node
+communication via MPI processes (distributed memory), intra-node
+management of the GPUs by OpenMP threads (shared memory), and
+computation on each GPU with CUDA.
 %present a communication approach between multiple GPUs. The comparison between MPI and OpenMP as GPUs controllers shows that these
 %solutions can effectively manage multiple graphics cards to work together
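
To make the structure of Algorithm~\ref{alg1-cuda} concrete, here is a minimal CUDA sketch of the three kernels and of the host-side convergence loop. The kernel names (kernel_save, kernel_update, kernel_testConverge) come from the algorithm above; everything else, in particular the signatures, the cuDoubleComplex layout of the root vectors Z and ZPrec, the Jacobi-style read of the neighbouring roots from ZPrec inside kernel_update, and the host-side maximum reduction, is an assumption made for illustration, not the authors' implementation.

/* ea_sketch.cu - illustrative skeleton of Algorithm 1.  A sketch,
   not the paper's code: all signatures and layouts are assumed. */
#include <cuda_runtime.h>
#include <cuComplex.h>
#include <stdlib.h>

#define TPB 256  /* threads per block */

/* kernel_save: save the current approximations before updating. */
__global__ void kernel_save(cuDoubleComplex *zPrec,
                            const cuDoubleComplex *z, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) zPrec[i] = z[i];
}

/* kernel_update: one Ehrlich-Aberth step for root i, reading the
   other roots from the saved vector (Jacobi style).  p holds the
   n+1 coefficients of P, pu the n coefficients of P'. */
__global__ void kernel_update(cuDoubleComplex *z,
                              const cuDoubleComplex *zPrec,
                              const cuDoubleComplex *p,
                              const cuDoubleComplex *pu, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    cuDoubleComplex zi = zPrec[i];

    /* Horner evaluation of P(zi) and P'(zi). */
    cuDoubleComplex pv = p[n], puv = pu[n - 1];
    for (int k = n - 1; k >= 0; --k) pv  = cuCadd(cuCmul(pv,  zi), p[k]);
    for (int k = n - 2; k >= 0; --k) puv = cuCadd(cuCmul(puv, zi), pu[k]);
    cuDoubleComplex newton = cuCdiv(pv, puv);            /* P/P' */

    /* Repulsion term: sum of 1/(zi - zj) over the other roots. */
    cuDoubleComplex one = make_cuDoubleComplex(1.0, 0.0);
    cuDoubleComplex s   = make_cuDoubleComplex(0.0, 0.0);
    for (int j = 0; j < n; ++j)
        if (j != i) s = cuCadd(s, cuCdiv(one, cuCsub(zi, zPrec[j])));

    /* z_i <- z_i - (P/P') / (1 - (P/P') * sum). */
    z[i] = cuCsub(zi, cuCdiv(newton, cuCsub(one, cuCmul(newton, s))));
}

/* kernel_testConverge: per-root displacement |z_i - zPrec_i|; the
   host then reduces these values to Delta z_max. */
__global__ void kernel_testConverge(double *delta,
                                    const cuDoubleComplex *z,
                                    const cuDoubleComplex *zPrec, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) delta[i] = cuCabs(cuCsub(z[i], zPrec[i]));
}

/* Host-side loop mirroring the while loop of Algorithm 1; the device
   arrays are assumed to be already allocated and initialized. */
void ehrlich_aberth_loop(cuDoubleComplex *d_z, cuDoubleComplex *d_zPrec,
                         const cuDoubleComplex *d_p,
                         const cuDoubleComplex *d_pu,
                         int n, double eps)
{
    int blocks = (n + TPB - 1) / TPB;
    double *d_delta, *h_delta = (double *)malloc(n * sizeof(double));
    cudaMalloc(&d_delta, n * sizeof(double));

    double dzMax = 2.0 * eps;              /* force a first iteration */
    while (dzMax > eps) {
        kernel_save<<<blocks, TPB>>>(d_zPrec, d_z, n);
        kernel_update<<<blocks, TPB>>>(d_z, d_zPrec, d_p, d_pu, n);
        kernel_testConverge<<<blocks, TPB>>>(d_delta, d_z, d_zPrec, n);
        cudaMemcpy(h_delta, d_delta, n * sizeof(double),
                   cudaMemcpyDeviceToHost);      /* also synchronizes */
        dzMax = 0.0;
        for (int i = 0; i < n; ++i)
            if (h_delta[i] > dzMax) dzMax = h_delta[i];
    }
    cudaFree(d_delta);
    free(h_delta);
}

Reading the neighbours from the saved vector ZPrec keeps every thread race-free. The convergence test here copies the per-root displacements back and reduces on the CPU, which is simple but costs one device-to-host transfer per iteration; a GPU-side reduction would avoid it.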
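
The conclusion's claim that OpenMP or MPI can manage multiple graphics cards comes down to binding each controlling thread or process to one device. Below is a minimal sketch of the CUDA-OpenMP variant of that pattern, assuming one OpenMP thread per GPU; the function solve_on_all_gpus and the work split are hypothetical, only the CUDA and OpenMP API calls (cudaGetDeviceCount, cudaSetDevice, omp_get_thread_num) are real.

/* Sketch of the CUDA-OpenMP device-management pattern: one OpenMP
   thread per GPU, each thread bound to its own device.  Hypothetical
   function; only the CUDA/OpenMP calls are real API. */
#include <cuda_runtime.h>
#include <omp.h>

void solve_on_all_gpus(void)
{
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);

    /* Shared-memory paradigm: all threads see the same host arrays
       and each one drives one GPU. */
    #pragma omp parallel num_threads(ngpus)
    {
        int tid = omp_get_thread_num();
        cudaSetDevice(tid);   /* bind this thread to GPU tid */
        /* ... launch the Ehrlich-Aberth kernels on device tid for
           the sub-range of roots assigned to this thread ... */
    }
}

The CUDA-MPI variant replaces the parallel region with one MPI process per GPU: each process calls cudaSetDevice with an index derived from its rank, and the partial root vectors are exchanged through MPI messages instead of shared host arrays.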