X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper2.git/blobdiff_plain/f7cf9d24e2bcb8efb27d03b167b8c97b5b9560ec..7620333f88cf490eac21e8a1c1e1d29a6934533f:/paper.tex

diff --git a/paper.tex b/paper.tex
index cca9a54..fbdcac7 100644
--- a/paper.tex
+++ b/paper.tex
@@ -315,7 +315,7 @@
 \bibliographystyle{IEEEtran}
 % argument is your BibTeX string definitions and bibliography database(s)
 %\bibliography{IEEEabrv,../bib/paper}
-\bibliographystyle{elsarticle-num}
+%\bibliographystyle{elsarticle-num}
 \begin{document}
 %
 % paper title
@@ -442,18 +442,18 @@ point $z$.
 %Aberth, Ehrlich and Farmer-Loizou~\cite{Loizou83} have proved that
 %the Ehrlich-Aberth method (EA) has a cubic order of convergence for simple roots whereas the Durand-Kerner has a quadratic order of
 %convergence.
-The main problem of the simultaneous methods is that the necessary time needed for the convergence is increased with the increasing of the degree of the polynomial. Many authors have treated the problem of implementing simultaneous methods in parallel. Freeman [10] implemented and compared DK, EA and another method of the fourth order proposed by Farmer
-and Loizou [9], on a 8-processor linear chain, for polynomials of degree up to 8.
-The third method often diverges, but the first two methods have speed-up equal to 5.5. Later, Freeman and Bane [11] considered asynchronous algorithms, in which each processor continues to update its approximations even though the latest values of other $z^{k}_{i}$ have not been received from the other processors, in contrast with synchronous algorithms where it would wait those values before
-making a new iteration. Couturier and al. [12] proposed two methods of parallelization for a shared memory architecture with \textit{OpenMP} and for distributed memory one with \textit{MPI}. They were able to compute the roots of sparse polynomials of degree 10,000 in 116 seconds with \textit{OpenMP} and 135 seconds with \textit{MPI} only by using 8 personal computers and 2 communications per iteration. Comparing to the sequential implementation where it takes up to 3,300 seconds to obtain the same results, the authors show an interesting speedup.
+The main drawback of the simultaneous methods is that the time needed for convergence increases with the degree of the polynomial. Many authors have addressed the problem of implementing simultaneous methods in parallel. Freeman~\cite{Freeman89} implemented and compared DK, EA and another fourth-order method proposed by Farmer
+and Loizou~\cite{Loizou83} on an 8-processor linear chain, for polynomials of degree up to 8.
+The third method often diverges, but the first two achieve a speed-up of 5.5. Later, Freeman and Bane~\cite{Freemanall90} considered asynchronous algorithms, in which each processor continues to update its approximations even though the latest values of the other $z^{k}_{i}$ have not yet been received from the other processors, in contrast with synchronous algorithms, where a processor waits for those values before
+starting a new iteration. Couturier et al.~\cite{Raphaelall01} proposed two parallelizations: one for shared memory architectures with \textit{OpenMP}, and one for distributed memory with \textit{MPI}. They were able to compute the roots of sparse polynomials of degree 10,000 in 116 seconds with \textit{OpenMP} and in 135 seconds with \textit{MPI}, using only 8 personal computers and 2 communications per iteration.
+Compared to the sequential implementation, which takes up to 3,300 seconds to obtain the same results, the authors obtained an interesting speedup.

-Very few work had been performed since then until the appearing of the Compute Unified Device Architecture (CUDA) [13], a parallel computing platform and a programming model invented by NVIDIA. The computing power of GPUs (Graphics Processing Unit) has exceeded that of CPUs. However, CUDA adopts a totally new computing architecture to use the hardware resources provided by GPU in order to offer a stronger computing ability to the massive data computing. Ghidouche and al [14] proposed an implementation of the Durand-Kerner method on GPU. Their main result showed that a parallel CUDA implementation is about 10 times faster than the sequential implementation on a single CPU for sparse polynomials of degree 48,000.
+Very little work was done thereafter until the appearance of the Compute Unified Device Architecture (CUDA)~\cite{CUDA10}, a parallel computing platform and programming model invented by NVIDIA. The computing power of GPUs (Graphics Processing Units) now exceeds that of CPUs, and CUDA provides a new computing architecture that exploits the hardware resources of the GPU for massively data-parallel computations. Ghidouche et al.~\cite{Kahinall14} proposed an implementation of the Durand-Kerner method on GPU. Their main result showed that a parallel CUDA implementation is about 10 times faster than the sequential implementation on a single CPU for sparse polynomials of degree 48,000.

-Finding polynomial roots rapidly and accurately is the main objective of our work. In this paper we propose the parallelization of Ehrlich-Aberth method using parallel programming paradigms (OpenMP, MPI) on GPUs. We consider two architectures: shared memory with OpenMP API and distributed memory MPI API. The first approach is based on threads from the same system process, with each thread attached to one GPU and after the various memory allocations, each thread launches its part of computations. To do this we must first load on the GPU required data and after the computations are carried, repatriate the result on the host. The second approach i.e distributed memory with MPI relies on the MPI library which is often used for parallel programming [11] in
+Finding polynomial roots rapidly and accurately is the main objective of our work. In this paper we propose the parallelization of the Ehrlich-Aberth method using parallel programming paradigms (OpenMP, MPI) on GPUs. We consider two architectures: shared memory, with the OpenMP API, and distributed memory, with the MPI API. The first approach is based on threads of a single system process, each thread being attached to one GPU; after the various memory allocations, each thread launches its part of the computations. To do this, we must first load the required data onto the GPU and, once the computations are carried out, copy the results back to the host. The second approach, i.e. distributed memory with MPI, relies on the MPI library, which is often used for parallel programming~\cite{Peter96} in
 cluster systems because it implements the message-passing model. Each GPU is attached to one MPI process, and a loop is in charge of the distribution of tasks among the MPI processes.
 This solution can be used on one GPU, or executed on a distributed cluster of GPUs, employing the Message Passing Interface (MPI) to communicate between separate CUDA cards.
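The shared-memory approach described above follows a standard OpenMP-CUDA pattern: one CPU thread per GPU, each thread binding to its own device before allocating memory and launching kernels. The following minimal sketch in CUDA C illustrates that pattern; it is not the authors' code, the function name run_on_all_gpus is ours, and each thread's share of the computation is left abstract.

#include <omp.h>
#include <cuda_runtime.h>

/* One CPU thread per GPU: thread t drives device t. Illustrative only;
   the kernel launches for each thread's share of the roots are elided. */
void run_on_all_gpus(const double *coeffs, int degree, int ngpu)
{
    omp_set_num_threads(ngpu);
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        cudaSetDevice(tid);                      /* attach this thread to one GPU */

        double *d_coeffs;
        size_t bytes = (size_t)(degree + 1) * sizeof(double);
        cudaMalloc((void **)&d_coeffs, bytes);   /* per-device allocation */
        cudaMemcpy(d_coeffs, coeffs, bytes, cudaMemcpyHostToDevice);

        /* ... launch this GPU's kernels, then copy the partial results
           back to the host ... */

        cudaFree(d_coeffs);
    }
}

The MPI-CUDA variant is analogous: each MPI process selects its device from its rank (for instance cudaSetDevice(rank % gpus_per_node), where gpus_per_node is an assumed configuration value), and the processes exchange the updated approximations through explicit messages.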
 This solution permits scaling of the problem size to larger classes than would be possible on a single device and demonstrates the performance that users might expect from future HPC architectures in which accelerators are deployed.

-This paper is organized as follows, in section 2 we recall the Ehrlich-Aberth method. In section 3 we present EA algorithm on single GPU. In section 4 we propose the EA algorithm implementation on MGPU for (OpenMP-CUDA) approach and (MPI-CUDA) approach. In section 5 we present our experiments and discus it. Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions in this topic.
+This paper is organized as follows. In Section 2 we recall the Ehrlich-Aberth method. In Section 3 we present the EA algorithm on a single GPU. In Section 4 we propose the multi-GPU implementation of the EA algorithm with the OpenMP-CUDA approach and the MPI-CUDA approach. In Section 5 we present and discuss our experiments. Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions on this topic.

 \section{Parallel Programming Models}
@@ -477,7 +477,7 @@ to parallelize a loop. In this way, a set of loops can be distributed along the
 The MPI (Message Passing Interface) library makes it possible to create computer programs that run on a distributed memory architecture. The various processes have their own execution environment and execute their code in an asynchronous way, according to the MIMD model (Multiple Instruction streams, Multiple Data streams); they communicate and synchronise by exchanging messages~\cite{Peter96}. MPI messages are sent explicitly, while the exchanges are implicit within the framework of a multi-threaded programming environment like OpenMP or Pthreads.

 \subsection{CUDA}%The English-language article: Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications
-CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{NVIDIA12}. The
+CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{CUDA10}. The
 unit of execution in CUDA is called a thread. Each thread executes a kernel on the streaming processors in parallel. In CUDA, a group of threads that are executed together is called a thread block, and the computational grid consists of a grid of thread blocks. Additionally, a thread block can use the shared memory of a single multiprocessor while the grid executes a single
@@ -556,13 +556,13 @@ v_{i}=\frac{|\frac{a_{n}}{a_{i}}|^{\frac{1}{n-i}}}{2}.
 \subsubsection{Iterative Function}
 The operator used by the Ehrlich-Aberth method corresponds to the
-following equation~\ref{Eq:EA} which will enable the convergence towards
+following equation~\ref{Eq:EA1}, which enables the convergence towards the
 polynomial solutions, provided all the roots are distinct.
 %Here we give a second form of the iterative function used by the Ehrlich-Aberth method:
 \begin{equation}
-\label{Eq:EA}
+\label{Eq:EA1}
 EA: z^{k+1}_{i}=z_{i}^{k}-\frac{\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}}{1-\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}\sum_{j=1,j\neq i}^{n}\frac{1}{z_{i}^{k}-z_{j}^{k}}},\quad i=1,\ldots,n
 \end{equation}
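As an illustration of how the EA iteration (label Eq:EA1) maps onto the CUDA thread/block/grid hierarchy described above, here is a minimal kernel sketch with one thread per root. It is not the authors' kernel: the names ea_update, eval_p and eval_dp are ours, and p and p' are simply evaluated by Horner's scheme.

#include <cuComplex.h>

/* Horner evaluation of p(z) = sum_k a_k z^k, degree n. */
__device__ cuDoubleComplex eval_p(const cuDoubleComplex *a, int n, cuDoubleComplex z)
{
    cuDoubleComplex r = a[n];
    for (int k = n - 1; k >= 0; k--)
        r = cuCadd(cuCmul(r, z), a[k]);
    return r;
}

/* Horner evaluation of p'(z) = sum_k k a_k z^{k-1}. */
__device__ cuDoubleComplex eval_dp(const cuDoubleComplex *a, int n, cuDoubleComplex z)
{
    cuDoubleComplex r = make_cuDoubleComplex(0.0, 0.0);
    for (int k = n; k >= 1; k--)
        r = cuCadd(cuCmul(r, z), cuCmul(make_cuDoubleComplex((double)k, 0.0), a[k]));
    return r;
}

/* One EA iteration: thread i refines root approximation z_i. */
__global__ void ea_update(const cuDoubleComplex *z, cuDoubleComplex *z_new,
                          const cuDoubleComplex *coeffs, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    cuDoubleComplex zi = z[i];
    /* q = p(z_i)/p'(z_i) */
    cuDoubleComplex q = cuCdiv(eval_p(coeffs, n, zi), eval_dp(coeffs, n, zi));

    /* sum over j != i of 1/(z_i - z_j) */
    cuDoubleComplex sum = make_cuDoubleComplex(0.0, 0.0);
    for (int j = 0; j < n; j++)
        if (j != i)
            sum = cuCadd(sum, cuCdiv(make_cuDoubleComplex(1.0, 0.0), cuCsub(zi, z[j])));

    /* z_i^{k+1} = z_i^k - q / (1 - q * sum) */
    cuDoubleComplex denom = cuCsub(make_cuDoubleComplex(1.0, 0.0), cuCmul(q, sum));
    z_new[i] = cuCsub(zi, cuCdiv(q, denom));
}

On the host, one would launch ea_update once per iteration, for example ea_update<<<(n + 255) / 256, 256>>>(d_z, d_znew, d_coeffs, n), swapping the d_z and d_znew buffers between iterations until the stop condition on the roots is satisfied.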