X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper2.git/blobdiff_plain/f9bf26cc810e585f03a82f4a0a5e8c64b96842ca..77a92aa9f3abb73d75905e7108af76f5bba87567:/paper.tex diff --git a/paper.tex b/paper.tex index 962c7f9..4f0138e 100644 --- a/paper.tex +++ b/paper.tex @@ -312,8 +312,10 @@ % correct bad hyphenation here \hyphenation{op-tical net-works semi-conduc-tor} %\usepackage{graphicx} - - +\bibliographystyle{IEEEtran} +% argument is your BibTeX string definitions and bibliography database(s) +%\bibliography{IEEEabrv,../bib/paper} +%\bibliographystyle{elsarticle-num} \begin{document} % % paper title @@ -404,19 +406,19 @@ The abstract goes here. \section{Introduction} -Polynomials are mathematical algebraic structures used in science and engineering to capture physical phenomena and to express any outcome in the form of a function of some unknown variables. Formally speaking, a polynomial $p(x)$ of degree \textit{n} having $n$ coefficients in the complex plane \textit{C} is : +Polynomials are mathematical algebraic structures that play an important role in science and engineering by capturing physical phenomena and by expressing any outcome as a function of some unknown variables. Formally speaking, a polynomial $p(x)$ of degree \textit{n} having $n$ coefficients in the complex plane \textit{C} is : %%\begin{center} \begin{equation} {\Large p(x)=\sum_{i=0}^{n}{a_{i}x^{i}}}. \end{equation} %%\end{center} -The root finding problem consists in finding the values of all the $n$ values of the variable $x$ for which \textit{p(x)} is nullified. Such values are called zeros of $p$. If zeros are $\alpha_{i},\textit{i=1,...,n}$ the $p(x)$ can be written as : +The root finding problem consists in finding the values of all the $n$ different values of the variable $x$ for which \textit{p(x)} is null. Such values are called zeros of $p$. If zeros are $\alpha_{i},\textit{i=1,...,n}$ then $p(x)$ can be written as : \begin{equation} {\Large p(x)=a_{n}\prod_{i=1}^{n}(x-\alpha_{i}), a_{0} a_{n}\neq 0}. \end{equation} -The problem of finding the roots of polynomials is encountered in different applications. Most of the numerical methods that deal with this problem are simultaneous ones. These methods start from the initial approximations of all the roots of the polynomial and give a sequence of approximations that converge to the roots of the polynomial. The first method of this group is Durand-Kerner method: +The problem of finding the roots of polynomials can be encountered in numerous applications. Most of the numerical methods that deal with this problem are simultaneous ones, i.e that find concurrently all of $n$ zeroes. These methods start from the initial approximations of all the roots of the polynomial and give a sequence of approximations that converge to the roots of the polynomial. The first method of this group is Durand-Kerner method: \begin{equation} \label{DK} DK: z_i^{k+1}=z_{i}^{k}-\frac{P(z_i^{k})}{\prod_{i\neq j}(z_i^{k}-z_j^{k})}, i = 1, . . . , n, @@ -440,61 +442,152 @@ point $z$. %Aberth, Ehrlich and Farmer-Loizou~\cite{Loizou83} have proved that %the Ehrlich-Aberth method (EA) has a cubic order of convergence for simple roots whereas the Durand-Kerner has a quadratic order of %convergence. -The main problem of the simultaneous methods is that the necessary time needed for the convergence is increased with the increasing of the degree of the polynomial. Many authors have treated the problem of implementation of simultaneous methods in parallel. 
Freeman [10] implemented and compared DK, EA and another method of the fourth order proposed by Farmer
-and Loizou [9], on a 8-processor linear chain, for polynomials of degree up to 8.
-The third method often diverges, but the first two methods have speed-up equal to 5.5. Later, Freeman and Bane [11] considered asynchronous algorithms, in which each processor continues to update its approximations even though the latest values of other $z^{k}_{i}$ have not been received from the other processors, in contrast with synchronous algorithms where it would wait those values before
-making a new iteration. Couturier and al. [12] proposed two methods of parallelization for a shared memory architecture with \textit{OpenMP} and for distributed memory one with \textit{MPI}. They were able to compute the roots of sparse polynomials of degree 10,000 in 116 seconds with \textit{OpenMP} and 135 seconds with \textit{MPI} only 8 personal computers and 2 communications per iteration. Comparing to the sequential implementation where it takes up to 3,300 seconds to obtain the same results, the authors show an interesting speedup.
+The main problem of the simultaneous methods is that the time needed for convergence increases with the degree of the polynomial. Many authors have treated the problem of implementing simultaneous methods in parallel. Freeman~\cite{Freeman89} implemented and compared DK, EA and another method of the fourth order proposed by Farmer
+and Loizou~\cite{Loizou83}, on an 8-processor linear chain, for polynomials of degree up to 8.
+The third method often diverges, but the first two methods have a speed-up equal to 5.5. Later, Freeman and Bane~\cite{Freemanall90} considered asynchronous algorithms, in which each processor continues to update its approximations even though the latest values of the other $z^{k}_{i}$ have not been received from the other processors, in contrast with synchronous algorithms where it would wait for those values before
+making a new iteration. Couturier et al.~\cite{Raphaelall01} proposed two methods of parallelization for a shared memory architecture with \textit{OpenMP} and for a distributed memory one with \textit{MPI}. They were able to compute the roots of sparse polynomials of degree 10,000 in 116 seconds with \textit{OpenMP} and 135 seconds with \textit{MPI} by using only 8 personal computers and 2 communications per iteration. Compared to the sequential implementation, where it takes up to 3,300 seconds to obtain the same results, the authors show an interesting speedup.

-Very few works had been performed since this last work until the appearing of the Compute Unified Device Architecture (CUDA) [13], a parallel computing platform and a programming model invented by NVIDIA. The computing power of GPUs (Graphics Processing Unit) has exceeded that of CPUs. However, CUDA adopts a totally new computing architecture to use the hardware resources provided by GPU in order to offer a stronger computing ability to the massive data computing. Ghidouche and al [14] proposed an implementation of the Durand-Kerner method on GPU. Their main result showed that a parallel CUDA implementation is about 10 times faster than the sequential implementation on a single CPU for sparse polynomials of degree 48,000.
+Very little work had been done from then until the advent of the Compute Unified Device Architecture (CUDA)~\cite{CUDA10}, a parallel computing platform and a programming model invented by NVIDIA. 
The computing power of GPUs (Graphics Processing Units) has exceeded that of CPUs. However, CUDA adopts a totally new computing architecture to exploit the hardware resources provided by the GPU in order to offer greater computing power for massive data computing. Ghidouche et al.~\cite{Kahinall14} proposed an implementation of the Durand-Kerner method on GPU. Their main result showed that a parallel CUDA implementation is about 10 times faster than the sequential implementation on a single CPU for sparse polynomials of degree 48,000.

-Finding polynomial roots rapidly and accurately is the main objective of our work. In this paper we propose the parallelization of Ehrlich-Aberth method using a parallel programming paradigms (OpenMP, MPI) on GPUs. We consider two architectures: Shared memory with OpenMP API based on threads from the same system process, which each thread is attached to one GPU and after the various memory allocation, each thread throws its part of calculation ( to do this you must first load on the GPU required data and after Suddenly repatriate the result on the host). Distributed memory with MPI: The MPI library is often used for parallel programming [11] in
-cluster systems because it is a message-passing programming language. Each GPU are attached to one process MPI, and a loop is in charge of the distribution of tasks between the MPI processes. this solution can be used on one GPU, or executed on a distributed cluster of GPUs, employing the Message Passing Interface (MPI) to communicate between separate CUDA cards. This solution permits scaling of the problem size to larger classes than would be possible on a single device and demonstrates the performance which users might expect from future
+Finding polynomial roots rapidly and accurately is the main objective of our work. In this paper we propose the parallelization of the Ehrlich-Aberth method using parallel programming paradigms (OpenMP, MPI) on GPUs. We consider two architectures: shared memory with the OpenMP API and distributed memory with the MPI API. The first approach is based on threads from the same system process, with each thread attached to one GPU; after the various memory allocations, each thread launches its part of the computations. To do this, we must first load the required data on the GPU and, once the computations are carried out, bring the results back to the host. The second approach, i.e. distributed memory with MPI, relies on the MPI library which is often used for parallel programming~\cite{Peter96} in
+cluster systems because it is a message-passing programming model. Each GPU is attached to one MPI process, and a loop is in charge of the distribution of tasks between the MPI processes. This solution can be used on one GPU, or executed on a distributed cluster of GPUs, employing the Message Passing Interface (MPI) to communicate between separate CUDA cards. This solution permits scaling of the problem size to larger classes than would be possible on a single device and demonstrates the performance which users might expect from future
 HPC architectures where accelerators are deployed.

-This paper is organized as follows, in section 2 we recall the Ehrlich-Aberth method. In section 3 we present EA algorithm on single GPU. In section 4 we propose the EA algorithm implementation on MGPU for (OpenMP-CUDA) approach and (MPI-CUDA) approach. In section 5 we present our experiments and discus it. Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions in this topic. 
+This paper is organized as follows. In Section 2 we recall the Ehrlich-Aberth method. In Section 3 we present the EA algorithm on a single GPU. In Section 4 we propose the EA algorithm implementation on multi-GPU platforms for the (OpenMP-CUDA) approach and the (MPI-CUDA) approach. In Section 5 we present and discuss our experiments. Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions in this topic.

\section{Parallel Programming Models}

-\subsection{OpenMP}%L'article en anglais Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications
-Open Multi-Processing (OpenMP) is a shared memory architecture API that provides multi thread capacity [22]. OpenMP is
+\subsection{OpenMP}
+Open Multi-Processing (OpenMP) is a shared memory architecture API that provides multithreading capabilities~\cite{openmp13}. OpenMP is
 a portable approach for parallel programming on shared memory systems based on compiler directives, that can be included in order
-to parallelize a loop. In this way, a set of loops can be distributed along the different threads that will access to different data allo-
-cated in local shared memory. One of the advantages of OpenMP is its global view of application memory address space that allows relatively fast development of parallel applications with easier maintenance. However, it is often difficult to get high rates of
-performance in large scale applications. Although, in OpenMP a usage of threads ids and managing data explicitly as done in an MPI
-code can be considered, it defeats the advantages of OpenMP.
-
-\subsection{OpenMP} %L'article en Français Programmation multiGPU – OpenMP versus MPI
-OpenMP is a shared memory programming API based on threads from
-the same system process. Designed for multiprocessor shared memory UMA or
-NUMA [10], it relies on the execution model SPMD ( Single Program, Multiple Data Stream )
-where the thread "master" and threads "slaves" asynchronously execute their codes
-communicate / synchronize via shared memory [7]. It also helps to build
-the loop parallelism and is very suitable for an incremental code parallelization
-Sequential natively. Threads share some or all of the available memory and can
-have private memory areas [6].
-
-\subsection{MPI} %L'article en Français Programmation multiGPU – OpenMP versus MPI
- The library MPI allows to use a distributed memory architecture. The various processes have their own environment of execution and execute their codes in a asynchronous way, according to the model MIMD (Multiple Instruction streams, Multiple Dated streams); they communicate and synchronize by exchanges of messages [17]. MPI messages are explicitly sent, while the exchanges are implicit within the framework of a programming multi-thread (OpenMP/Pthreads).
+to parallelize a loop. In this way, a set of loops can be distributed among the different threads that will access different data allocated in local shared memory. One of the advantages of OpenMP is its global view of the application memory address space, which allows relatively fast development of parallel applications with easier maintenance. However, it is often difficult to get high rates of performance in large scale applications. Although the use of explicit thread identifiers and explicit data management, as done in an MPI code, can be considered in OpenMP, this approach undermines the advantages of OpenMP.
+
+%\subsection{OpenMP}
+%OpenMP is a shared memory programming API based on threads from
+%the same system process. 
Designed for multiprocessor shared memory UMA or +%NUMA [10], it relies on the execution model SPMD ( Single Program, Multiple Data Stream ) +%where the thread "master" and threads "slaves" asynchronously execute their codes +%communicate / synchronize via shared memory [7]. It also helps to build +%the loop parallelism and is very suitable for an incremental code parallelization +%Sequential natively. Threads share some or all of the available memory and can +%have private memory areas [6]. + +\subsection{MPI} +The MPI (Message Passing Interface) library allows to create computer programs that run on a distributed memory architecture. The various processes have their own environment of execution and execute their code in a asynchronous way, according to the MIMD model (Multiple Instruction streams, Multiple Data streams); they communicate and synchronize by exchanging messages~\cite{Peter96}. MPI messages are explicitly sent, while the exchanges are implicit within the framework of a multi-thread programming environment like OpenMP or Pthreads. -\subsection{CUDA}%L'article en anglais Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications - CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA [28]. The -unit of execution in CUDA is called a thread. Each thread executes the kernel by the streaming processors in parallel. In CUDA, -a group of threads that are executed together is called thread blocks, and the computational grid consists of a grid of thread -blocks. Additionally, a thread block can use the shared memory on a single multiprocessor as while as the grid executes a single +\subsection{CUDA} +CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{CUDA10}. The +unit of execution in CUDA is called a thread. Each thread executes a kernel by the streaming processors in parallel. In CUDA, +a group of threads that are executed together is called a thread block, and the computational grid consists of a grid of thread +blocks. Additionally, a thread block can use the shared memory on a single multiprocessor while the grid executes a single CUDA program logically in parallel. Thus in CUDA programming, it is necessary to design carefully the arrangement of the thread blocks in order to ensure low latency and a proper usage of shared memory, since it can be shared only in a thread block scope. The effective bandwidth of each memory space depends on the memory access pattern. Since the global memory has lower bandwidth than the shared memory, the global memory accesses should be minimized. -We introduced three paradigms of parallel programming. Our objective consist to implement an algorithm of root finding polynomial on multiple GPUs. It primordial to know how manage CUDA context of different GPUs. A direct method for controlling the various GPU is to use as many threads or processes that GPU. We can choose the GPU index based on the identifier of OpenMP thread or the rank of the MPI process. Both approaches will be created. +We introduced three paradigms of parallel programming. Our objective consists in implementing a root finding polynomial algorithm on multiple GPUs. To this end, it is primordial to know how to manage CUDA contexts of different GPUs. A direct method for controlling the various GPUs is to use as many threads or processes as GPU devices. 
We can choose the GPU index based on the identifier of OpenMP thread or the rank of the MPI process. Both approaches will be investigated. -\section{The EA algorithm on single GPU} +\section{The EA algorithm on a single GPU} \subsection{the EA method} -the Ehrlich-Aberth method is an iterative method , contain 4 steps, start from the initial approximations of all the -roots of the polynomial,the second step initialize the solution vector $Z$ using the Guggenheimer method to assure the distinction of the initial vector roots, than in step 3 we apply the the iterative function based on the Newton's method and Weiestrass operator[...,...], wich will make it possible to converge to the roots solution, provided that all the root are different. At the end of each application of the iterative function, a stop condition is verified consists in stopping the iterative process when the whole of the modules of the roots -are lower than a fixed value $ε$ + +A cubically convergent iteration method to find zeros of +polynomials was proposed by O. Aberth~\cite{Aberth73}. The +Ehrlich-Aberth method contains 4 main steps, presented in what +follows. + +%The Aberth method is a purely algebraic derivation. +%To illustrate the derivation, we let $w_{i}(z)$ be the product of linear factors + +%\begin{equation} +%w_{i}(z)=\prod_{j=1,j \neq i}^{n} (z-x_{j}) +%\end{equation} + +%And let a rational function $R_{i}(z)$ be the correction term of the +%Weistrass method~\cite{Weierstrass03} + +%\begin{equation} +%R_{i}(z)=\frac{p(z)}{w_{i}(z)} , i=1,2,...,n. +%\end{equation} + +%Differentiating the rational function $R_{i}(z)$ and applying the +%Newton method, we have: + +%\begin{equation} +%\frac{R_{i}(z)}{R_{i}^{'}(z)}= \frac{p(z)}{p^{'}(z)-p(z)\frac{w_{i}(z)}{w_{i}^{'}(z)}}= \frac{p(z)}{p^{'}(z)-p(z) \sum _{j=1,j \neq i}^{n}\frac{1}{z-x_{j}}}, i=1,2,...,n +%\end{equation} +%where R_{i}^{'}(z)is the rational function derivative of F evaluated in the point z +%Substituting $x_{j}$ for $z_{j}$ we obtain the Aberth iteration method.% + + +\subsubsection{Polynomials Initialization} +The initialization of a polynomial $p(z)$ is done by setting each of the $n$ complex coefficients $a_{i}$: + +\begin{equation} +\label{eq:SimplePolynome} + p(z)=\sum{a_{i}z^{n-i}} , a_{n} \neq 0,a_{0}=1, a_{i}\subset C +\end{equation} + + +\subsubsection{Vector $Z^{(0)}$ Initialization} +\label{sec:vec_initialization} +As for any iterative method, we need to choose $n$ initial guess points $z^{0}_{i}, i = 1, . . . , n.$ +The initial guess is very important since the number of steps needed by the iterative method to reach +a given approximation strongly depends on it. +In~\cite{Aberth73} the Ehrlich-Aberth iteration is started by selecting $n$ +equi-distant points on a circle of center 0 and radius r, where r is +an upper bound to the moduli of the zeros. Later, Bini and al.~\cite{Bini96} +performed this choice by selecting complex numbers along different +circles which relies on the result of~\cite{Ostrowski41}. + +\begin{equation} +\label{eq:radiusR} +%%\begin{align} +\sigma_{0}=\frac{u+v}{2};u=\frac{\sum_{i=1}^{n}u_{i}}{n.max_{i=1}^{n}u_{i}}; +v=\frac{\sum_{i=0}^{n-1}v_{i}}{n.min_{i=0}^{n-1}v_{i}};\\ +%%\end{align} +\end{equation} +Where: +\begin{equation} +u_{i}=2.|a_{i}|^{\frac{1}{i}}; +v_{i}=\frac{|\frac{a_{n}}{a_{i}}|^{\frac{1}{n-i}}}{2}. 
+\end{equation} + +\subsubsection{Iterative Function} +The operator used by the Aberth method corresponds to the +equation~\ref{Eq:EA1}, it enables the convergence towards +the polynomials zeros, provided all the roots are distinct. + +%Here we give a second form of the iterative function used by the Ehrlich-Aberth method: + +\begin{equation} +\label{Eq:EA1} +EA: z^{k+1}_{i}=z_{i}^{k}-\frac{\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}} +{1-\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}\sum_{j=1,j\neq i}^{j=n}{\frac{1}{(z_{i}^{k}-z_{j}^{k})}}}, i=1,. . . .,n +\end{equation} + +\subsubsection{Convergence Condition} +The convergence condition determines the termination of the algorithm. It consists in stopping the iterative function when the roots are sufficiently stable. We consider that the method converges sufficiently when: + +\begin{equation} +\label{eq:Aberth-Conv-Cond} +\forall i \in [1,n];\vert\frac{z_{i}^{k}-z_{i}^{k-1}}{z_{i}^{k}}\vert<\xi +\end{equation} + + +%\begin{figure}[htbp] +%\centering + % \includegraphics[angle=-90,width=0.5\textwidth]{EA-Algorithm} +%\caption{The Ehrlich-Aberth algorithm on single GPU} +%\label{fig:03} +%\end{figure} + +%the Ehrlich-Aberth method is an iterative method, contain 4 steps, start from the initial approximations of all the +%roots of the polynomial,the second step initialize the solution vector $Z$ using the Guggenheimer method to assure the distinction of the initial vector roots, than in step 3 we apply the the iterative function based on the Newton's method and Weiestrass operator[...,...], wich will make it possible to converge to the roots solution, provided that all the root are different. At the end of each application of the iterative function, a stop condition is verified consists in stopping the iterative process when the whole of the modules of the roots +%are lower than a fixed value $ε$ + + \subsection{EA parallel implementation on CUDA} Like any parallel code, a GPU parallel implementation first requires to determine the sequential tasks and the @@ -504,14 +597,14 @@ to execute in parallel must be made by the GPU to accelerate the execution of the application, like the step 3 and step 4. On the other hand, all the sequential operations and the operations that have data dependencies between threads or recursive computations must -be executed by only one CUDA or CPU thread (step 1 and step 2). Initially we specifies the organization of threads in parallel, need to specify the dimension of the grid Dimgrid: the number of block per grid and block by DimBlock: the number of threads per block required to process a certain task. - -we create the kernel, for step 3 we have two kernels, the -first named \textit{save} is used to save vector $Z^{K-1}$ and the kernel -\textit{update} is used to update the $Z^{K}$ vector. In step 4 a kernel is -created to test the convergence of the method. In order to -compute function H, we have two possibilities: either to use -the Jacobi method, or the Gauss-Seidel method which uses the +be executed by only one CUDA or CPU thread (step 1 and step 2). Initially, we specify the organization of parallel threads, by specifying the dimension of the grid Dimgrid, the number of blocks per grid DimBlock and the number of threads per block. + +The code is organzed by what is named kernels, portions o code that are run on GPU devices. For step 3, there are two kernels, the +first named \textit{save} is used to save vector $Z^{K-1}$ and the seconde one is named +\textit{update} and is used to update the $Z^{K}$ vector. 
For step 4, a kernel
+tests the convergence of the method. To compute the iterative function, we have two possibilities: either to use
+the Jacobi mode, or the Gauss-Seidel mode of iterating which uses the
 most recently computed roots. It is well known that the Gauss-Seidel
 mode converges more quickly. So, we used the Gauss-Seidel
 mode of iteration. To parallelize the code, we created kernels and many functions to
@@ -525,13 +618,11 @@ implement, as the development of corresponding kernels with CUDA is
 longer than on a CPU host. This comes in particular from the fact
 that it is very difficult to debug CUDA running threads like threads
 on a CPU host. In the following paragraph,
-Algorithm 1 shows the GPU parallel implementation of Ehrlich-Aberth method.
-
-Algorithm~\ref{alg2-cuda} shows a sketch of the Ehrlich-Aberth method using CUDA.
+Algorithm~\ref{alg1-cuda} shows the GPU parallel implementation of the Ehrlich-Aberth method.

\begin{enumerate}
\begin{algorithm}[htpb]
-\label{alg2-cuda}
+\label{alg1-cuda}
%\LinesNumbered

\caption{CUDA Algorithm to find roots with the Ehrlich-Aberth method}
@@ -540,7 +631,7 @@ Algorithm~\ref{alg2-cuda} shows a sketch of the Ehrlich-Aberth method using CUDA

\KwOut {$Z$ (Solution root's vector), $ZPrec$ (Previous solution root's vector)}

-\BlankLine
+%\BlankLine

\item Initialization of P\;
\item Initialization of Pu\;
@@ -564,78 +655,172 @@ Algorithm~\ref{alg2-cuda} shows a sketch of the Ehrlich-Aberth method using CUDA
\section{The EA algorithm on Multi-GPU}

-\subsection{MGPU (OpenMP-CUDA)approach}
-Before beginning the calculation, our implementation parallel with OpenMP and CUDA shares the input data between threads OpenMP, these input data sotn Z: the vector solution, P: the polynomial to solve,
-
-Before starting computations, our parallel implementation shared input data of the root finding polynomial between OpenMP threads. From Algorithm 1, the input data are the solution vector $Z$, the polynomial to solve $P$. Let number of OpenMP threads is equal to the number of GPUs, each threads OpenMP ( T-omp) checks one GPU, and control a part of the shared memory, that is a part of the vector Z like: $(n/Nbr_gpu)$ roots, n: the polynomial's degrees, $Nbr_gpu$ the number of GPUs. Then every GPU will have a grid of computation organized with its performances and the size of data of which it checks. In principle a grid is set by two parameter DimGrid, the number of block per grid, DimBloc: the number of threads per block. The following schema shows the architecture of (CUDA,OpenMP).
-
+\subsection{MGPU: an OpenMP-CUDA approach}
+Our OpenMP-CUDA implementation of the EA algorithm is based on the hybrid OpenMP and CUDA programming model. It works as follows.
+A shared memory region is used to make the data evenly accessible to all OpenMP threads. The shared data are the solution vector $Z$, the polynomial to solve $P$, and the error vector $\Delta z$. Let the number of OpenMP threads (T\_omp) be equal to the number of GPUs: each OpenMP thread binds to one GPU and controls a part of the shared memory, that is a part of the vector $Z$ of $(n/num\_gpu)$ roots, where $n$ is the polynomial's degree and $num\_gpu$ the total number of available GPUs. Each OpenMP thread copies its data from the host memory to its GPU's device memory. Then every GPU will have a grid of computation organized according to the device performance and the size of the data on which it runs the computation kernels. 
%In principle a grid is set by two parameter DimGrid, the number of block per grid, DimBloc: the number of threads per block. The following schema shows the architecture of (CUDA,OpenMP).
+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{OpenMP-CUDA}
+%\caption{The OpenMP-CUDA architecture}
+%\label{fig:03}
+%\end{figure}
+%Each thread OpenMP compute the kernels on GPUs,than after each iteration they copy out the data from GPU memory to CPU shared memory. The kernels are re-runs is up to the roots converge sufficiently. Here are below the corresponding algorithm:

-Each thread OpenMP compute the kernels on GPUs,than after each iteration they copy out the data from GPU memory to CPU shared memory. The kernels are re-runs is up to the roots converge sufficiently. Here are below the corresponding algorithm:

+$num\_gpus$ OpenMP threads are created using the \verb=omp_set_num_threads()= function (step $3$, Algorithm \ref{alg2-cuda-openmp}), and the shared memory is declared using the \verb=#pragma omp parallel shared()= OpenMP directive (line $5$, Algorithm~\ref{alg2-cuda-openmp}). Then each OpenMP thread allocates memory, copies the initial data from the CPU memory to the GPU global memory and executes the kernels on its GPU, but it computes only its own portion of the roots, indicated by the variable \textit{index} initialized in line 5 of Algorithm \ref{alg2-cuda-openmp} and used as input data of $kernel\_update$ (line 10, Algorithm \ref{alg2-cuda-openmp}). After each iteration, all OpenMP threads synchronize using \verb=#pragma omp barrier;= to gather all the correct values of $\Delta z$, thus allowing the computation of the maximum of the stop condition on the vector $\Delta z$ (line 12, Algorithm \ref{alg2-cuda-openmp}). Finally, the threads copy the results from the GPU memories to the CPU memory. The OpenMP threads keep executing the kernels until the roots have sufficiently converged. 
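To make this scheme concrete, we give below a minimal host-side sketch of the OpenMP-CUDA structure. It is an illustration only and not the exact code of our implementation: the kernel implements the EA update of equation~(\ref{Eq:EA1}) in Jacobi mode with a dense Horner evaluation of $p$ and $p'$, the convergence test is done on the host rather than in a dedicated kernel, and the test polynomial, the degree, the grid sizes and all names are illustrative assumptions.

\begin{verbatim}
// ea_openmp_cuda_sketch.cu -- illustrative sketch only
// assumed build line: nvcc -Xcompiler -fopenmp ea_openmp_cuda_sketch.cu
#include <cuComplex.h>
#include <cuda_runtime.h>
#include <omp.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

// EA update in Jacobi mode: reads all of z_old, writes z_new[first..first+chunk-1]
__global__ void kernel_update(const cuDoubleComplex *z_old, cuDoubleComplex *z_new,
                              const cuDoubleComplex *a, int n, int first, int chunk)
{
  int i = first + blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= first + chunk) return;
  cuDoubleComplex p  = a[n];                          // p(z_i) and p'(z_i) by Horner
  cuDoubleComplex pd = make_cuDoubleComplex(0.0, 0.0);
  for (int k = n - 1; k >= 0; --k) {
    pd = cuCadd(cuCmul(pd, z_old[i]), p);
    p  = cuCadd(cuCmul(p,  z_old[i]), a[k]);
  }
  cuDoubleComplex r = cuCdiv(p, pd);                  // Newton correction p/p'
  cuDoubleComplex s = make_cuDoubleComplex(0.0, 0.0); // sum_{j!=i} 1/(z_i - z_j)
  for (int j = 0; j < n; ++j)
    if (j != i) s = cuCadd(s, cuCdiv(make_cuDoubleComplex(1.0, 0.0),
                                     cuCsub(z_old[i], z_old[j])));
  cuDoubleComplex d = cuCsub(make_cuDoubleComplex(1.0, 0.0), cuCmul(r, s));
  z_new[i] = cuCsub(z_old[i], cuCdiv(r, d));          // Ehrlich-Aberth step
}

int main(void)
{
  const int n = 1000;                                 // illustrative degree
  const double eps = 1e-7, PI = 3.141592653589793;
  int num_gpus = 0;
  cudaGetDeviceCount(&num_gpus);
  if (num_gpus < 1) return 1;

  cuDoubleComplex *a  = (cuDoubleComplex *)malloc((n + 1) * sizeof *a);
  cuDoubleComplex *z  = (cuDoubleComplex *)malloc(n * sizeof *z);
  double          *dz = (double *)calloc(num_gpus, sizeof *dz);      // per-GPU max error
  for (int k = 0; k <= n; ++k) a[k] = make_cuDoubleComplex(1.0, 0.0); // full polynomial
  for (int i = 0; i < n; ++i)                         // initial guesses on a circle
    z[i] = make_cuDoubleComplex(cos(2.0 * PI * i / n), sin(2.0 * PI * i / n));

  omp_set_num_threads(num_gpus);                      // one OpenMP thread per GPU
  #pragma omp parallel shared(z, a, dz)
  {
    int tid   = omp_get_thread_num();
    int chunk = (n + num_gpus - 1) / num_gpus;        // this thread's portion of roots
    int first = tid * chunk;
    if (first + chunk > n) chunk = n - first;
    cudaSetDevice(tid);                               // bind the thread to its GPU

    cuDoubleComplex *d_a, *d_zold, *d_znew;
    cudaMalloc(&d_a, (n + 1) * sizeof *d_a);
    cudaMalloc(&d_zold, n * sizeof *d_zold);
    cudaMalloc(&d_znew, n * sizeof *d_znew);
    cudaMemcpy(d_a, a, (n + 1) * sizeof *d_a, cudaMemcpyHostToDevice);
    cuDoubleComplex *znew = (cuDoubleComplex *)malloc(chunk * sizeof *znew);

    double err;
    do {
      cudaMemcpy(d_zold, z, n * sizeof *d_zold, cudaMemcpyHostToDevice);
      #pragma omp barrier                             // every GPU holds the same snapshot of Z
      kernel_update<<<(chunk + 255) / 256, 256>>>(d_zold, d_znew, d_a, n, first, chunk);
      cudaMemcpy(znew, d_znew + first, chunk * sizeof *znew, cudaMemcpyDeviceToHost);
      double local = 0.0;                             // max relative move of this portion
      for (int i = 0; i < chunk; ++i) {
        double e = cuCabs(cuCsub(znew[i], z[first + i])) / cuCabs(znew[i]);
        if (e > local) local = e;
        z[first + i] = znew[i];                       // publish this GPU's new roots
      }
      dz[tid] = local;
      #pragma omp barrier                             // gather all dz before the test
      err = 0.0;
      for (int g = 0; g < num_gpus; ++g) if (dz[g] > err) err = dz[g];
      #pragma omp barrier                             // keep dz stable until everyone read it
    } while (err > eps);

    free(znew);
    cudaFree(d_a); cudaFree(d_zold); cudaFree(d_znew);
  }
  printf("root 0: %f %+f i\n", cuCreal(z[0]), cuCimag(z[0]));
  free(a); free(z); free(dz);
  return 0;
}
\end{verbatim}

The \verb=#pragma omp barrier= directives play the role of the synchronization of Algorithm~\ref{alg2-cuda-openmp}: they guarantee that every GPU works on the same snapshot of $Z$ and that the maximum of the $\Delta z$ values is computed only after all portions have been updated.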
\begin{enumerate} \begin{algorithm}[htpb] -\label{alg2-cuda} +\label{alg2-cuda-openmp} %\LinesNumbered -\caption{CUDA OpenMP Algorithm to find roots with the Ehrlich-Aberth method} +\caption{CUDA-OpenMP Algorithm to find roots with the Ehrlich-Aberth method} \KwIn{$Z^{0}$ (Initial root's vector), $\varepsilon$ (Error tolerance - threshold), P (Polynomial to solve), Pu (Derivative of P), $n$ (Polynomial degrees), $\Delta z_{max}$ (Maximum value of stop condition)} + threshold), P (Polynomial to solve), Pu (Derivative of P), $n$ (Polynomial degree), $\Delta z$ ( Vector of errors for stop condition), $num_gpus$ (number of OpenMP threads/ Number of GPUs), $Size$ (number of roots)} -\KwOut {$Z$ (Solution root's vector), $ZPrec$ (Previous solution root's vector)} +\KwOut {$Z$ ( Root's vector), $ZPrec$ (Previous root's vector)} \BlankLine -// selection du GPU\; -\item cudaSetDevice(i)\; -// allocations memoire\; -\verb= #pragma omp single= -\item hostAlloc(P,Pu,Z)\; -\verb= #pragma omp parallel shared(Z,∆zmax,P)= -\item deviceAlloc(dP,dPu,dZ)\; -\verb= #pragma omp barrier= -// transfers CPU-GPU and compute GPU\; -\item copyH2D(P,dP)\; -\item copyH2D(Pu,dPu)\; -\item copyH2D(Zi,dZi)\; -\While {$\Delta z_{max} > \epsilon$}{ -\item Let $\Delta z_{max}=0$\; + +\item Initialization of P\; +\item Initialization of Pu\; +\item Initialization of the solution vector $Z^{0}$\; +\verb=omp_set_num_threads(num_gpus);= +\verb=#pragma omp parallel shared(Z,$\Delta$ z,P);= +\verb=cudaGetDevice(gpu_id);= +\item Allocate and copy initial data from CPU memory to the GPU global memories\; +\item index= $Size/num\_gpus$\; +\item k=0\; +\While {$error > \epsilon$}{ +\item Let $\Delta z=0$\; \item $ kernel\_save(ZPrec,Z)$\; \item k=k+1\; -//each GPU i compute the new root for his part dZi -\item $ kernel\_update(dZi,P,Pu)$\; -\item $kernel\_testConverge(\Delta z_{max},dZi,ZPrec)$\; +\item $ kernel\_update(Z,P,Pu,index)$\; +\item $kernel\_testConverge(\Delta z[gpu\_id],Z,ZPrec)$\; +%\verb=#pragma omp barrier;= +\item error= Max($\Delta z$)\; } -\item copyD2H(dZ,Zi)\; - // fin omp parallel\; +\item Copy results from GPU memories to CPU memory\; \end{algorithm} \end{enumerate} ~\\ -\subsection{MGPU (MPI-CUDA)approach} + +\subsection{Multi-GPU : an MPI-CUDA approach} +%\begin{figure}[htbp] +%\centering + % \includegraphics[angle=-90,width=0.2\textwidth]{MPI-CUDA} +%\caption{The MPI-CUDA architecture } +%\label{fig:03} +%\end{figure} +Our parallel implementation of the Ehrlich-Aberth method to find root of polynomials using a CUDA-MPI approach, splits input data of the polynomial to solve among MPI processes. In Algorithm \ref{alg2-cuda-mpi}, input data are the polynomial to solve $P$, the solution vector $Z$, the previous solution vector $ZPrev$, and the value of errors of stop condition $\Delta z$. Let $p$ denote the number of MPI processes on and $n$ the degree of the polynomial to be solved. The algorithm performs a simple data partitioning by creating $p$ portions, of at most $⌈n/p⌉$ roots to find per MPI process, for each $Z$ and $ZPrec$. Consequently, each MPI process of rank $k$ will have its own solution vector $Z_{k}$ and $ZPrec$, the error related to the stop condition $\Delta z_{k}$, enabling each MPI process to compute $⌈n/p⌉$ roots. + +Since a GPU works only on data already allocated in its memory, all local input data, $Z_{k}$, $ZPrec$ and $\Delta z_{k}$, must be transferred from CPU memories to the corresponding GPU memories. 
Afterwards, the same EA algorithm (Algorithm \ref{alg1-cuda}) is run by all processes but on different sub-polynomial roots $ p(x)_{k}=\sum_{i=1}^{n} a_{i}x^{i}, k=1,...,p$. Each MPI process executes the loop \verb=(While(...)...do)= containing the CUDA kernels. Then each MPI process computes only its own portion of the roots indicated with variable \textit{index} initialized in (line 5, Algorithm \ref{alg2-cuda-mpi}), used as an input variable in the $kernel\_update$ (line 10, Algorithm \ref{alg2-cuda-mpi}). After each iteration, MPI processes synchronize using \verb=MPI_Allreduce= function, in order to compute the maximum error related to the stop condition; the reduction on $\Delta z_{k}$ by each MPI process on (line, Algorithm\ref{alg2-cuda-mpi}), and copy the values of new computed roots from GPU memories to CPU memories, then communicate their results to other processes,using \verb=MPI_Alltoall=. If the stop condition is not verified ($error > \epsilon$) then processes stay withing the loop \verb= while(...)...do= until all the roots sufficiently converge. + +\begin{enumerate} +\begin{algorithm}[htpb] +\label{alg2-cuda-mpi} +%\LinesNumbered +\caption{CUDA-MPI Algorithm to find roots with the Ehrlich-Aberth method} + +\KwIn{$Z^{0}$ (Initial root's vector), $\varepsilon$ (Error tolerance + threshold), P (Polynomial to solve), Pu (Derivative of P), $n$ (Polynomial degrees), $\Delta z$ ( error of stop condition), $num_gpus$ (number of MPI processes/ number of GPUs), Size (number of roots)} + +\KwOut {$Z$ (Solution root's vector), $ZPrec$ (Previous solution root's vector)} + +\BlankLine +\item Initialization of P\; +\item Initialization of Pu\; +\item Initialization of the solution vector $Z^{0}$\; +\item Allocate and copy initial data from CPU memories to GPU global memories\; +\item $index= Size/num_gpus$\; +\item k=0\; +\While {$error > \epsilon$}{ +\item Let $\Delta z=0$\; +\item $kernel\_save(ZPrec,Z)$\; +\item k=k+1\; +\item $kernel\_update(Z,P,Pu,index)$\; +\item $kernel\_testConverge(\Delta z,Z,ZPrec)$\; +\item ComputeMaxError($\Delta z$,error)\; +\item Copy results from GPU memories to CPU memories\; +\item Send $Z[id]$ to all processes\; +\item Receive $Z[j]$ from every other process j\; +} +\end{algorithm} +\end{enumerate} +~\\ \section{experiments} +We study two categories of polynomials: sparse polynomials and full polynomials.\\ +{\it A sparse polynomial} is a polynomial for which only some coefficients are not null. In this paper, we consider sparse polynomials for which the roots are distributed on 2 distinct circles: +\begin{equation} + \forall \alpha_{1} \alpha_{2} \in C,\forall n_{1},n_{2} \in N^{*}; P(z)= (z^{n_{1}}-\alpha_{1})(z^{n_{2}}-\alpha_{2}) +\end{equation}\noindent +{\it A full polynomial} is, in contrast, a polynomial for which all the coefficients are not null. A full polynomial is defined by: +%%\begin{equation} + %%\forall \alpha_{i} \in C,\forall n_{i}\in N^{*}; P(z)= \sum^{n}_{i=1}(z^{n^{i}}.a_{i}) +%%\end{equation} + +\begin{equation} + {\Large \forall a_{i} \in C, i\in N; p(x)=\sum^{n}_{i=0} a_{i}.x^{i}} +\end{equation} +For our tests, a CPU Intel(R) Xeon(R) CPU E5620@2.40GHz and a GPU K40 (with 6 Go of ram) are used. + +We performed a set of experiments on single GPU and Multi-GPU using (OpenMP/MPI) to find roots polynomials with EA algorithm, for both sparse and full polynomials of different sizes. We took into account the execution times and the polynomial size performed by sum or each experiment. 
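As an illustration of the sparse test polynomials defined above, the coefficient vector of $p(z)=(z^{n_{1}}-\alpha_{1})(z^{n_{2}}-\alpha_{2})$ can be built with a small host-side helper such as the hypothetical one below; the name \verb=sparse_poly= and the storage layout are ours and are not taken from the implementation.

\begin{verbatim}
/* sparse_poly.c -- illustrative helper: coefficients of the sparse test polynomial
   p(z) = (z^n1 - alpha1)(z^n2 - alpha2). Only four of the n1+n2+1 coefficients
   are non-null, whatever the degree. */
#include <complex.h>
#include <stdlib.h>

double complex *sparse_poly(int n1, int n2, double complex alpha1, double complex alpha2)
{
  int n = n1 + n2;                                 /* total degree                 */
  double complex *a = calloc(n + 1, sizeof *a);    /* all coefficients set to zero */
  a[n]   = 1.0;                                    /* leading term z^(n1+n2)       */
  a[n1] -= alpha2;                                 /* -alpha2 * z^n1               */
  a[n2] -= alpha1;                                 /* -alpha1 * z^n2               */
  a[0]  += alpha1 * alpha2;                        /* constant term                */
  return a;                                        /* to be freed by the caller    */
}
\end{verbatim}

Its $n_{1}+n_{2}$ roots are the $n_{1}$-th roots of $\alpha_{1}$ and the $n_{2}$-th roots of $\alpha_{2}$, so they indeed lie on two distinct circles, of radii $|\alpha_{1}|^{1/n_{1}}$ and $|\alpha_{2}|^{1/n_{2}}$.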
+All experimental results obtained from the simulations are made in +double precision data, the convergence threshold of the methods is set +to $10^{-7}$. +%Since we were more interested in the comparison of the +%performance behaviors of Ehrlich-Aberth and Durand-Kerner methods on +%CPUs versus on GPUs. +The initialization values of the vector solution +of the methods are given in %Section~\ref{sec:vec_initialization}. + +\subsection{Test with Multi-GPU (CUDA OpenMP) approach} + +In this part we performed a set of experiments on Multi-GPU (CUDA OpenMP) approach for full and sparse polynomials of different degrees, compare it with Single GPU (CUDA). + \subsubsection{Execution times in seconds of the Ehrlich-Aberth method for solving sparse polynomials on GPUs using shared memory paradigm with OpenMP} + + In this experiments we report the execution time of the EA algorithm, on single GPU and Multi-GPU with (2,3,4) GPUs, for different sparse polynomial degrees ranging from 100,000 to 1,400,000 \begin{figure}[htbp] \centering - \includegraphics[angle=-90,width=0.5\textwidth]{Sparse_openmp} + \includegraphics[angle=-90,width=0.5\textwidth]{Sparse_omp} \caption{Execution times in seconds of the Ehrlich-Aberth method for solving sparse polynomials on GPUs using shared memory paradigm with OpenMP} \label{fig:01} \end{figure} +This figure~\ref{fig:01} shows that (CUDA OpenMP) Multi-GPU approach reduce the execution time up to the scale 100 whereas single GPU is of scale 1000 for polynomial who exceed 1,000,000. It shows the advantage to use OpenMP parallel paradigm to connect the performances of several GPUs and solve a polynomial of high degrees. + +\subsubsection{Execution times in seconds of the Ehrlich-Aberth method for solving full polynomials on GPUs using shared memory paradigm with OpenMP} + +This experiments shows the execution time of the EA algorithm, on single GPU (CUDA) and Multi-GPU (CUDA OpenMP) approach for full polynomials of degrees ranging from 100,000 to 1,400,000 + \begin{figure}[htbp] \centering - \includegraphics[angle=-90,width=0.5\textwidth]{Sparse_mpi} -\caption{Execution times in seconds of the Ehrlich-Aberth method for solving sparse polynomials on GPUs using distributed memory paradigm with MPI} -\label{fig:02} + \includegraphics[angle=-90,width=0.5\textwidth]{Full_omp} +\caption{Execution times in seconds of the Ehrlich-Aberth method for solving full polynomials on GPUs using shared memory paradigm with OpenMP} +\label{fig:03} \end{figure} +The second test with full polynomial shows a very important saving of time, for a polynomial of degrees 1,4M (CUDA OpenMP) approach with 4 GPUs compute and solve it 4 times as fast as single GPU. We notice that curves are positioned one below the other one, more the number of used GPUs increases more the execution time decreases. + +\subsection{Test with Multi-GPU (CUDA MPI) approach} +In this part we perform a set of experiment to compare Multi-GPU (CUDA MPI) approach with single GPU, for solving full and sparse polynomials of degrees ranging from 100,000 to 1,400,000. 
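Before looking at the timings, we recall with a minimal sketch the per-process structure of the CUDA-MPI approach of Algorithm~\ref{alg2-cuda-mpi}. This is an illustration only and not the code used for the measurements: to keep the sketch self-contained the CUDA kernels are replaced by an equivalent host loop, the exchange of the updated portions is written with \verb=MPI_Allgather=, the degree is assumed to be a multiple of the number of processes, and all names and sizes are illustrative assumptions.

\begin{verbatim}
/* ea_mpi_sketch.c -- illustrative sketch only; assumed build: mpicc ea_mpi_sketch.c -lm
   The EA update is done here by a host loop standing in for the CUDA kernels. */
#include <mpi.h>
#include <complex.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const int n = 1200;                 /* illustrative degree, assumed multiple of size */
  const double eps = 1e-7, PI = 3.141592653589793;
  int chunk = n / size;               /* roots computed by this process                */
  int first = rank * chunk;

  /* every process keeps the full coefficient and root vectors */
  double complex *a    = malloc((n + 1) * sizeof *a);
  double complex *z    = malloc(n * sizeof *z);
  double complex *znew = malloc(chunk * sizeof *znew);
  for (int k = 0; k <= n; ++k) a[k] = 1.0;                   /* full polynomial        */
  for (int i = 0; i < n; ++i)                                /* initial guesses        */
    z[i] = cos(2.0 * PI * i / n) + sin(2.0 * PI * i / n) * I;

  double err = 1.0;
  while (err > eps) {
    double local = 0.0;
    for (int i = first; i < first + chunk; ++i) {            /* stands for kernel_update */
      double complex p = a[n], pd = 0.0;
      for (int k = n - 1; k >= 0; --k) { pd = pd * z[i] + p; p = p * z[i] + a[k]; }
      double complex r = p / pd, s = 0.0;
      for (int j = 0; j < n; ++j) if (j != i) s += 1.0 / (z[i] - z[j]);
      znew[i - first] = z[i] - r / (1.0 - r * s);            /* Ehrlich-Aberth step      */
      double e = cabs(znew[i - first] - z[i]) / cabs(znew[i - first]);
      if (e > local) local = e;
    }
    for (int i = 0; i < chunk; ++i) z[first + i] = znew[i];  /* publish own portion      */
    /* gather every portion of Z and compute the global stop condition                   */
    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  z, 2 * chunk, MPI_DOUBLE, MPI_COMM_WORLD);
    MPI_Allreduce(&local, &err, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
  }
  if (rank == 0) printf("root 0: %f %+f i\n", creal(z[0]), cimag(z[0]));
  free(a); free(z); free(znew);
  MPI_Finalize();
  return 0;
}
\end{verbatim}

Each process updates only its $n/p$ roots, then one collective gathers the updated portions on every process and a second collective (\verb=MPI_Allreduce= with \verb=MPI_MAX=) produces the same value of the stop condition on all processes, so that all of them leave the loop at the same iteration.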
+\subsubsection{Execution times in seconds of the Ehrlich-Aberth method for solving sparse polynomials on GPUs using distributed memory paradigm with MPI}
+
 \begin{figure}[htbp]
 \centering
  \includegraphics[angle=-90,width=0.5\textwidth]{Sparse_mpi}
\caption{Execution times in seconds of the Ehrlich-Aberth method for solving sparse polynomials on GPUs using distributed memory paradigm with MPI}
\label{fig:02}
\end{figure}
+~\\
+This figure shows four curves of the execution time of the EA algorithm: one curve for a single GPU and three curves for multi-GPU with (2, 3, 4) GPUs. We see clearly that the single-GPU curve is above the other curves, which shows its higher execution time compared to the multi-GPU runs. We can see that the multi-GPU (CUDA MPI) approach reduces the execution time to the scale of 100 for polynomials of degree greater than 1,000,000, whereas the single GPU remains at the scale of 1000.
+\\
+\subsubsection{Execution times in seconds of the Ehrlich-Aberth method for solving full polynomials on GPUs using distributed memory paradigm with MPI}

\begin{figure}[htbp]
\centering
@@ -644,33 +829,37 @@ Each thread OpenMP compute the kernels on GPUs,than after each iteration they co
 \label{fig:04}
 \end{figure}

-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{Sparse_mpivsomp}
-\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving sparse plynomials on GPUs}
-\label{fig:05}
-\end{figure}
-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{Full_mpivsomp}
-\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving full polynomials on GPUs}
-\label{fig:06}
-\end{figure}
+This figure shows the execution time of the EA algorithm on a single GPU and on multi-GPU with (2, 3, 4) GPUs for full polynomials. With the (CUDA-MPI) approach we notice that the three multi-GPU curves are distinct from each other: the more GPUs we use, the more the execution time decreases, while the single-GPU curve stays well above the other curves.
+This is due to the MPI parallelization paradigm, which divides the polynomial into sub-polynomials assigned to each GPU, unlike the single-GPU case which solves the whole polynomial on one device and consequently needs more execution time. 
-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{MPI_mpivsomp}
-\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with distributed memory paradigm using MPI}
-\label{fig:07}
-\end{figure}

-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{OMP_mpivsomp}
-\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with shared memory paradigm using OpenMP}
-\label{fig:08}
-\end{figure}

+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{Sparse}
+%\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving sparse plynomials on GPUs}
+%\label{fig:05}
+%\end{figure}

+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{Full}
+%\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving full polynomials on GPUs}
+%\label{fig:06}
+%\end{figure}

+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{MPI}
+%\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with distributed memory paradigm using MPI}
+%\label{fig:07}
+%\end{figure}

+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{OMP}
+%\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with shared memory paradigm using OpenMP}
+%\label{fig:08}
+%\end{figure}

% An example of a floating figure using the graphicx package.
% Note that \label must occur AFTER (or within) \caption.
@@ -770,7 +959,20 @@

\section{Conclusion}
-The conclusion goes here.
+In this paper, we have presented a parallel implementation of the Ehrlich-Aberth algorithm for solving full and sparse polynomials, on a single GPU with CUDA and on multiple GPUs using two parallel paradigms: shared memory with OpenMP (the CUDA-OpenMP approach) and distributed memory with MPI (the CUDA-MPI approach).
+We have performed many experiments with the Ehrlich-Aberth method on a single GPU, on multi-GPU with the (CUDA-OpenMP) approach and on multi-GPU with the (CUDA-MPI) approach, for sparse and full polynomials. The experiments show that parallel programming models like OpenMP and MPI can efficiently manage multiple graphics cards working together on the same problem and accelerate parallel applications; for example, the (CUDA-MPI) approach with 4 GPUs solves a polynomial of degree 1,000,000 about 4 times faster than a single GPU.


+%In future, we will evaluate our parallel implementation of Ehrlich-Aberth algorithm on other parallel programming model

+Our next objective is to extend the model presented here to clusters of multi-GPU nodes, with a three-level scheme: inter-node communication via MPI processes (distributed memory), and management of the GPUs of each node by OpenMP threads (shared memory).

+%present a communication approach between multiple GPUs. 
The comparison between MPI and OpenMP as GPUs controllers shows that these +%solutions can effectively manage multiple graphics cards to work together +%to solve the same problem + + + %than we have presented two communication approach between multiple GPUs.(CUDA-OpenMP) approach and (CUDA-MPI) approach, in the objective to manage multiple graphics cards to work together and solve the same problem. in the objective to manage multiple graphics cards to work together and solve the same problem. @@ -806,17 +1008,24 @@ The authors would like to thank... %\bibliographystyle{IEEEtran} % argument is your BibTeX string definitions and bibliography database(s) %\bibliography{IEEEabrv,../bib/paper} +%\bibliographystyle{./IEEEtran} +\bibliography{mybibfile} + % % manually copy in the resultant .bbl file % set second argument of \begin to the number of references % (used to reserve space for the reference number labels box) -\begin{thebibliography}{1} +%\begin{thebibliography}{1} -\bibitem{IEEEhowto:kopka} -H.~Kopka and P.~W. Daly, \emph{A Guide to \LaTeX}, 3rd~ed.\hskip 1em plus - 0.5em minus 0.4em\relax Harlow, England: Addison-Wesley, 1999. +%\bibitem{IEEEhowto:kopka} +%H.~Kopka and P.~W. Daly, \emph{A Guide to \LaTeX}, 3rd~ed.\hskip 1em plus + % 0.5em minus 0.4em\relax Harlow, England: Addison-Wesley, 1999. + +%\bibitem{IEEEhowto:NVIDIA12} + %NVIDIA Corporation, \textit{Whitepaper NVIDA’s Next Generation CUDATM Compute +%Architecture: KeplerTM }, 1st ed., 2012. -\end{thebibliography} +%\end{thebibliography}