methods~\cite{Freeman89,Loizou83,Freemanall90,bini96,cs01:nj,Couturier02},
using several parallelization paradigms (synchronous or
asynchronous computations, shared or distributed memory mechanisms,
etc.). However, so far, only polynomials of degree up to 100,000 have
been solved.
this problem. Indeed, the computing power of GPUs (Graphics Processing
Units) now exceeds that of traditional CPUs, which makes it very appealing for the research community to investigate new parallel implementations for a whole set of scientific problems, in the reasonable hope of solving bigger instances of well-known, computationally demanding problems such as the one at hand. However, CUDA adopts a completely new computing architecture to exploit the hardware resources provided by the GPU and thus offer greater computing power for massively data-parallel workloads. Ghidouche et al.~\cite{Kahinall14} proposed an implementation of the Durand-Kerner method on a single GPU. Their main results showed that a parallel CUDA implementation is about 10 times faster than the sequential implementation on a single CPU for sparse polynomials of degree 48,000.
In this paper we propose a parallelization of the Ehrlich-Aberth (EA) method, whose cubic convergence rate is much better than the quadratic rate of the Durand-Kerner method already investigated in~\cite{Kahinall14}. On the other hand, EA is well suited to parallel computers following the data-parallel paradigm. In this model, computing elements carry out computations on the data they are assigned and communicate with other computing elements in order to get fresh data or to synchronize. Classically, two parallel programming paradigms, OpenMP and MPI, are used to code such solutions. But in our case, the computing elements are CUDA multi-GPU platforms. This architectural setting poses new programming challenges but also offers new opportunities to efficiently solve huge problems, otherwise considered intractable until recently. To the best of our knowledge, our CUDA-MPI and CUDA-OpenMP codes are the first implementations of the EA method on multiple GPUs for finding roots of polynomials. Our major contributions include:
\begin{itemize}
\item The parallel implementation of the EA algorithm on a multi-GPU platform with shared memory using the OpenMP API. It is based on threads created from the same system process, each thread being attached to one GPU. In this case the communications between GPUs are carried out by the OpenMP threads through the shared memory.
iterative function based on Newton's method~\cite{newt70} and the
Weierstrass operator~\cite{Weierstrass03} is applied. In our case, the
Ehrlich-Aberth iteration is applied as in~(\ref{Eq:EA1}). Iterations of the
EA method will converge to the roots of the considered
polynomial. In order to stop the iterative function, a stop condition
is applied; this is the 4th step. This condition checks that the moduli
of the corrections applied to all the roots are lower than a fixed
threshold $\epsilon$.
\end{equation}
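To make the discussion concrete, the following sequential C sketch shows one EA sweep and the stop test described above. The helper names (\verb=horner=, \verb=ea_iterate=) are ours, and this is only an illustrative host-side version, not the CUDA kernel presented later.
\begin{verbatim}
#include <complex.h>

/* Horner evaluation of p and its derivative p' at z; a[0..n] holds the
   coefficients of p, a[k] being the coefficient of z^k. */
static void horner(const double complex *a, int n, double complex z,
                   double complex *p, double complex *dp)
{
    double complex v = a[n], d = 0;
    for (int k = n - 1; k >= 0; k--) {
        d = d * z + v;     /* derivative accumulates the previous value */
        v = v * z + a[k];  /* standard Horner step for the polynomial   */
    }
    *p = v;
    *dp = d;
}

/* One Ehrlich-Aberth sweep over the n current approximations z[0..n-1];
   returns 1 when every correction modulus is below eps (stop condition). */
static int ea_iterate(const double complex *a, int n,
                      double complex *z, double eps)
{
    int converged = 1;
    for (int i = 0; i < n; i++) {
        double complex p, dp, sum = 0;
        horner(a, n, z[i], &p, &dp);
        double complex ratio = p / dp;                /* Newton ratio p/p' */
        for (int j = 0; j < n; j++)
            if (j != i)
                sum += 1.0 / (z[i] - z[j]);           /* coupling term     */
        double complex delta = ratio / (1.0 - ratio * sum); /* EA correction */
        z[i] -= delta;
        if (cabs(delta) > eps)
            converged = 0;
    }
    return converged;
}
\end{verbatim}
A driver simply calls \verb=ea_iterate= repeatedly until it returns 1 or a maximum number of iterations is reached.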
\subsection{Improving the Ehrlich-Aberth method}
With high degree polynomials, the EA method suffers from floating point overflows due to the limited range of floating point representations. This induces errors in the computation of $p(z)$ when $z$ is large.
In order to solve this problem, we propose to modify the iterative
function by using the logarithm and the exponential of complex numbers,
yielding a new version of the EA method. This method
allows us to go beyond polynomials of degree
100,000 and to reach degrees above 1,000,000. The reformulation of the iteration~(\ref{Eq:EA1}) of the EA method with exponential and logarithm operators is defined as follows, for $i=1,\dots,n$:
\begin{equation}
\label{Log_H2}
multiplications and divisions with additions and
subtractions. Consequently, the computations manipulate smaller
absolute values~\cite{Karimall98}. In practice, the exponential and
logarithm mode is used when a root lies outside the circle of radius $R$, evaluated in the C language as:
\begin{equation}
\label{R.EL}
R = exp(log(DBL\_MAX)/(2*n));
\end{equation}
where \verb=DBL_MAX= stands for the maximum representable
\verb=double= value and $n$ is the degree of the polynomial.
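As an illustration, the C sketch below shows the switch radius $R$ of~(\ref{R.EL}) together with one possible overflow-safe logarithmic evaluation of $p(z)$ for $|z|>1$, obtained by factoring out $z^n$. The helper names (\verb=switch_radius=, \verb=clog_poly=, \verb=newton_ratio_log=) are ours, and this factoring is only one way to obtain $\log p(z)$ without overflow; it is not necessarily the exact scheme used in our implementation.
\begin{verbatim}
#include <complex.h>
#include <float.h>
#include <math.h>

/* Radius beyond which the log/exp mode is switched on, as in Eq. (R.EL). */
static double switch_radius(int n)
{
    return exp(log(DBL_MAX) / (2.0 * n));
}

/* Overflow-safe log(p(z)) for |z| > 1: write p(z) = z^n * q(1/z) with
   q(w) = a[n] + a[n-1] w + ... + a[0] w^n, so that no intermediate value
   of the Horner scheme on q can overflow (|1/z| < 1). */
static double complex clog_poly(const double complex *a, int n,
                                double complex z)
{
    double complex w = 1.0 / z;
    double complex q = a[0];              /* coefficient of w^n in q      */
    for (int k = n - 1; k >= 0; k--)
        q = q * w + a[n - k];             /* coefficient of w^k is a[n-k] */
    return (double)n * clog(z) + clog(q); /* log p(z) = n log z + log q   */
}

/* With logarithms, the Newton ratio p/p' becomes a subtraction followed
   by a single exponential; b[] holds the coefficients of p'
   (b[k] = (k+1)*a[k+1]). */
static double complex newton_ratio_log(const double complex *a,
                                       const double complex *b,
                                       int n, double complex z)
{
    return cexp(clog_poly(a, n, z) - clog_poly(b, n - 1, z));
}
\end{verbatim}
A root update then calls \verb=newton_ratio_log= instead of the direct ratio whenever \verb=cabs(z) > switch_radius(n)=.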
\subsection{The Ehrlich-Aberth parallel implementation on CUDA}
the local roots are transferred from the GPU memory to the CPU memory
(line 12) before being exchanged between all processors (line 13) in
order to give all processors the last version of the roots (with
the \verb=MPI_Alltoall= routine). If convergence is not reached, a
new iteration is executed.
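The per-iteration exchange can be sketched as follows in C; the buffer and function names are ours, and for simplicity the sketch uses \verb=MPI_Allgather= (each process contributes its block of roots and receives the whole vector), whereas the algorithm relies on \verb=MPI_Alltoall=.
\begin{verbatim}
#include <complex.h>
#include <cuda_runtime.h>
#include <mpi.h>

/* Sketch of lines 12-13: each MPI process owns local_n roots updated on
   its own GPU; they are copied back to the host and then shared so that
   every process holds the last version of all the roots. */
void exchange_roots(const double complex *d_local, /* device: local roots  */
                    double complex *h_local,       /* host:   local roots  */
                    double complex *h_all,         /* host:   all the roots*/
                    int local_n, MPI_Comm comm)
{
    /* line 12: GPU -> CPU transfer of the locally updated roots */
    cudaMemcpy(h_local, d_local, local_n * sizeof(double complex),
               cudaMemcpyDeviceToHost);

    /* line 13: every process receives the roots of every other process;
       a double complex value is sent as two MPI_DOUBLE values */
    MPI_Allgather(h_local, 2 * local_n, MPI_DOUBLE,
                  h_all,   2 * local_n, MPI_DOUBLE, comm);
}
\end{verbatim}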
\begin{algorithm}[htpb]
evaluate both the GPU and multi-GPU approaches, we performed a set of
experiments on a single GPU and on multiple GPUs using OpenMP or MPI with
the EA algorithm, for both sparse and full polynomials of different
degrees. All experiments are carried out with double precision
floating point data and the convergence threshold of the EA method is
set to $10^{-7}$. The initial values of the solution vector are given
by Guggenheimer's method~\cite{Gugg86}.
\label{fig:06}
\end{figure}
In Figure~\ref{fig:05} there is one curve for CUDA-OpenMP and one for CUDA-MPI for each polynomial investigated. We can see that the results are quite similar for OpenMP and MPI for the polynomial of degree 200K. For the degree 800K, the MPI version is slightly slower than the OpenMP version, but for the degree of 1.4 million there is a slight advantage for the MPI version. In Figure~\ref{fig:06}, we can see that when it comes to full polynomials, both approaches are almost equivalent.
\subsection{Solving sparse and full polynomials of the same degree on multiple GPUs}
\section{Conclusion}
\label{sec6}
In this paper, we have presented parallel implementations of the Ehrlich-Aberth algorithm to solve full and sparse polynomials, on a single GPU with CUDA and on multiple GPUs using two parallel paradigms: shared memory with OpenMP and distributed memory with MPI. These architectures were addressed by a CUDA-OpenMP approach and a CUDA-MPI approach, respectively. Experiments show that, using a parallel programming model such as OpenMP or MPI, we can efficiently manage multiple graphics cards to solve the same problem: with 4 GPUs we can solve a polynomial of degree up to 5,000,000 four times faster than on a single GPU.
Our next objective is to extend the model presented here to clusters of GPU nodes, with a three-level scheme: inter-node communications via MPI processes (distributed memory), management of the multi-GPU nodes by OpenMP threads (shared memory), and CUDA within each GPU. Current platforms may also contain purely multi-core nodes without any GPU. This heterogeneous setting may lead to the integration of load balancing algorithms so as to allow an optimal use of hardware resources.
\section*{Acknowledgment}