Most of the numerical methods that deal with the polynomial root-finding problem are simultaneous methods, \textit{i.e.} iterative methods that find simultaneous approximations of the $n$ polynomial roots. These methods start from initial approximations of all $n$ polynomial roots and produce a sequence of approximations that converge to the roots of the polynomial. Two well-known examples of simultaneous methods for the polynomial root-finding problem are the Durand-Kerner method~\cite{Durand60,Kerner66} and the Ehrlich-Aberth method~\cite{Ehrlich67,Aberth73}.

The convergence time of simultaneous methods increases drastically with the degree of the polynomial. The great challenge with simultaneous methods is therefore to parallelize them and to improve their convergence. Many authors have proposed parallel simultaneous methods~\cite{Freeman89,Loizou83,Freemanall90,cs01:nj,Couturier02}, using several parallelization paradigms (synchronous or asynchronous computations, shared or distributed memory mechanisms, etc.). However, they have only treated polynomials of degrees not exceeding 20,000.

Very little work was performed thereafter until the advent of the Compute Unified Device Architecture (CUDA)~\cite{CUDA15}, a parallel computing platform and programming model invented by NVIDIA. The computing power of GPUs (Graphics Processing Units) has exceeded that of traditional CPUs. CUDA adopts a totally new computing architecture to exploit the hardware resources provided by the GPU and to offer more computing power for massive data computing. Ghidouche et al.~\cite{Kahinall14} proposed an implementation of the Durand-Kerner method on a single GPU. Their main results showed that a parallel CUDA implementation is about 10 times faster than the sequential implementation on a single CPU for sparse polynomials of degree 48,000.

In this paper we propose the parallelization of the Ehrlich-Aberth method, which has good convergence and is well suited to implementation on parallel computers. We use two parallel programming paradigms, OpenMP and MPI, on CUDA multi-GPU platforms. Our CUDA-MPI and CUDA-OpenMP codes are the first implementations of the Ehrlich-Aberth method with multiple GPUs for finding the roots of polynomials. Our major contributions include:
\begin{itemize}
\item The parallel implementation of the Ehrlich-Aberth method on multiple GPUs, with both CUDA-OpenMP and CUDA-MPI approaches, which is able to find the roots of polynomials of degree up to 5 million.
\end{itemize}
This latter approach is more commonly used on clusters to solve very complex problems that are too large for traditional supercomputers, which are very expensive to build and run.

The paper is organized as follows. In Section~\ref{sec2} we present three different parallel programming models: OpenMP, MPI and CUDA. In Section~\ref{sec3} we present the implementation of the Ehrlich-Aberth algorithm on a single GPU. In Section~\ref{sec4} we present the parallel implementations of the Ehrlich-Aberth algorithm on multiple GPUs using the OpenMP and MPI approaches. In Section~\ref{sec5} we present our experiments and discuss them. Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions on this topic.
\section{Parallel programming models}
\label{sec2}
Our objective is to implement a polynomial root-finding algorithm on multiple GPUs. To this end, it is essential to know how to manage the CUDA contexts of the different GPUs. A direct method for controlling the various GPUs is to use as many threads or processes as there are GPU devices. We investigate two parallel paradigms: OpenMP and MPI. In both cases, the GPU indices are defined according to the identifiers of the OpenMP threads or the ranks of the MPI processes. In this section we present the three parallel programming models: OpenMP, MPI and CUDA.
\subsection{OpenMP}
OpenMP (Open Multi-Processing) is an application programming interface for parallel programming~\cite{openmp13}. It is a portable approach based on multithreading, designed for shared memory computers, where a master thread forks a number of slave threads which execute blocks of code in parallel. An OpenMP program alternates sequential regions and parallel regions of code, where the sequential regions are executed by the master thread and the parallel ones may be executed by multiple threads. During the execution of an OpenMP program the threads communicate through the data (read and modified) in the shared memory. One advantage of OpenMP is the global view of the memory address space of an application. This allows a relatively fast development of parallel applications with easier maintenance. However, it is often difficult to obtain high performance in large-scale applications.
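As an illustration of this thread-based management of multiple GPUs, the following minimal sketch (assuming at least one GPU; the kernel launches and data transfers are elided) binds each OpenMP thread to one CUDA device through its thread identifier:
\begin{verbatim}
#include <omp.h>
#include <cuda_runtime.h>

int main(void)
{
    int num_gpus = 0;
    cudaGetDeviceCount(&num_gpus);  /* assumed >= 1 here */

    /* One OpenMP thread per GPU: the thread identifier
       selects the CUDA device managed by that thread. */
    #pragma omp parallel num_threads(num_gpus)
    {
        int tid = omp_get_thread_num();
        cudaSetDevice(tid);
        /* ... allocate device memory and launch
           kernels on GPU tid ... */
    }
    return 0;
}
\end{verbatim}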
\subsection{MPI}
MPI (Message Passing Interface) is a portable message-passing style of parallel programming, designed especially for distributed memory architectures~\cite{Peter96}. In most MPI implementations, a computation contains a fixed set of processes created at the initialization of the program, in such a way that one process is created per processor. The processes synchronize their computations and communicate by sending/receiving messages to/from other processes. In this case, the data are explicitly exchanged by message passing, while the data exchanges are implicit in a multithreaded programming model like OpenMP and Pthreads. However, in the MPI programming model the processes may either execute different programs, referred to as multiple program multiple data (MPMD), or every process executes the same program (SPMD). The MPI approach is one of the most used HPC programming models for solving large-scale and complex applications.
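In the same SPMD style, a minimal sketch (assuming at least one GPU per node; the computation and the communication calls are elided) in which each MPI process selects its GPU according to its rank could be written as:
\begin{verbatim}
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank = 0, num_gpus = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaGetDeviceCount(&num_gpus);  /* assumed >= 1 here */

    /* Each process manages the GPU whose
       index matches its rank. */
    cudaSetDevice(rank % num_gpus);

    /* ... local GPU computation, then explicit
       exchanges of the results by message passing ... */

    MPI_Finalize();
    return 0;
}
\end{verbatim}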
\subsection{CUDA}
CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{CUDA15} for GPUs. It provides a high-level GPGPU-based programming model to program GPUs for general purpose computations. The GPU is viewed as an accelerator, such that the data-parallel operations of a CUDA program running on a CPU are off-loaded onto the GPU and executed by the latter. The data-parallel operations executed by GPUs are called kernels. The same kernel is executed in parallel by a large number of threads organized in grids of thread blocks, such that each GPU multiprocessor executes one or more thread blocks in SIMD fashion (Single Instruction, Multiple Data) and, in turn, each core of the multiprocessor executes one or more threads within a block. Threads within a block can cooperate by sharing data through a fast shared memory and can coordinate their execution through synchronization points. In contrast, within a grid of thread blocks there is no synchronization between blocks. The GPU only works on data stored in its global memory, and the final results of the kernel executions must be transferred out of the GPU. In the GPU, the global memory has a lower bandwidth than the shared memory associated with each multiprocessor. Thus, in CUDA programming, it is necessary to carefully design the arrangement of the thread blocks in order to ensure low latency and proper usage of the shared memory, and the global memory accesses should be minimized.
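To illustrate this execution model, the following self-contained sketch (the kernel is a placeholder, unrelated to root-finding) launches a data-parallel kernel over a grid of thread blocks, with the data residing in the GPU global memory:
\begin{verbatim}
#include <cuda_runtime.h>

/* Each thread of the grid updates one element. */
__global__ void scale(double *data, double alpha, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= alpha;
}

int main(void)
{
    const int n = 1 << 20;
    double *d_data;
    cudaMalloc(&d_data, n * sizeof(double));
    /* ... fill d_data with cudaMemcpy from the host ... */

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0, n);
    cudaDeviceSynchronize();

    /* ... copy the results back out of the
       GPU global memory ... */
    cudaFree(d_data);
    return 0;
}
\end{verbatim}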
\subsection{Improving the Ehrlich-Aberth method}
With high degree polynomials, the Ehrlich-Aberth method suffers from floating-point overflows due to the limited range of floating-point representations. This induces errors in the computation of $p(z)$ when $z$ is large.
In order to solve this problem, we propose to modify the iterative function by using the logarithm and the exponential of a complex number, and we obtain a new version of the Ehrlich-Aberth method. This method is based on the following iterative function:
\begin{equation}
Q(z^k_i) = \exp\left(\ln(p(z^k_i)) - \ln(p'(z^k_i)) + \ln\left(\sum_{j\neq i}^n\frac{1}{z^k_i-z^k_j}\right)\right).
\end{equation}
Using the logarithm and the exponential operators, we can replace any multiplications and divisions by additions and subtractions. Consequently, the computations manipulate lower absolute values~\cite{Karimall98}. In practice, the exponential and logarithm mode is used for the roots that lie outside a circle whose radius $R$ is evaluated in C language as:
\begin{equation}
\label{R.EL}
R = exp(log(DBL\_MAX)/(2*n));
\end{equation}
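As a concrete illustration, here is a minimal C99 sketch (the function names are ours, assuming complex arithmetic from complex.h) of this threshold radius and of the function $Q$ defined above:
\begin{verbatim}
#include <complex.h>
#include <float.h>
#include <math.h>

/* Radius beyond which the log-exp formulation is
   used (n is the polynomial degree). */
double threshold_radius(int n)
{
    return exp(log(DBL_MAX) / (2.0 * n));
}

/* Q(z_i) = exp(ln p(z_i) - ln p'(z_i)
              + ln(sum_{j != i} 1/(z_i - z_j))):
   multiplications and divisions of large magnitudes
   become additions and subtractions of logarithms.
   pz and dpz hold p(z_i) and p'(z_i). */
double complex ea_Q(int i, int n, const double complex *z,
                    double complex pz, double complex dpz)
{
    double complex sum = 0.0;
    for (int j = 0; j < n; j++)
        if (j != i)
            sum += 1.0 / (z[i] - z[j]);
    return cexp(clog(pz) - clog(dpz) + clog(sum));
}
\end{verbatim}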
\subsection{The Ehrlich-Aberth parallel implementation on CUDA}
The code is organized into kernels, which are parts of code that run on GPU devices. Algorithm~\ref{alg1-cuda} describes the CUDA implementation of the Ehrlich-Aberth method on a GPU. This algorithm starts