methods~\cite{Freeman89,Loizou83,Freemanall90,bini96,cs01:nj,Couturier02},
using several paradigms of parallelization (synchronous or
asynchronous computations, mechanisms of shared or distributed memory,
-etc). However, so fat until now, only polynomials not exceeding
+etc). However, until now, only polynomials not exceeding
a degree of 100,000 have been solved.
%The main problem of the simultaneous methods is that the necessary
iterative function based on Newton's method~\cite{newt70} and the
Weierstrass operator~\cite{Weierstrass03} is applied. In our case, the
Ehrlich-Aberth method is applied as in~(\ref{Eq:EA1}). Iterations of the
-Ehrlich-Aberth method will converge to the roots of the considered
+EA method will converge to the roots of the considered
polynomial. In order to stop the iterative function, a stop condition
is applied in the 4th step (a small sequential sketch of this test is
given after the equation below). This condition checks that all the
root moduli are lower than a fixed value $\epsilon$.
\end{equation}
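To make these steps concrete, the following sequential sketch (an
illustration only, not the GPU kernel of this paper) shows one
Ehrlich-Aberth correction for a single root, written directly from the
standard form of the iteration, together with a stop test that is here
assumed to compare the modulus of each root's last correction against
$\epsilon$. The routines \texttt{p\_eval} and \texttt{dp\_eval} are
hypothetical helpers returning $p(z)$ and $p'(z)$.
\begin{verbatim}
#include <complex.h>

/* One Ehrlich-Aberth update for root i (sequential sketch).
   p_eval and dp_eval are hypothetical helpers for p(z) and p'(z). */
double complex ea_update(const double complex *z, int n, int i,
                         double complex (*p_eval)(double complex),
                         double complex (*dp_eval)(double complex))
{
    double complex newton = p_eval(z[i]) / dp_eval(z[i]); /* p(z_i)/p'(z_i) */
    double complex sum = 0.0;
    for (int j = 0; j < n; j++)
        if (j != i)
            sum += 1.0 / (z[i] - z[j]);
    return z[i] - newton / (1.0 - newton * sum);           /* EA correction  */
}

/* Stop test (4th step), assumed here to check the modulus of the last
   correction applied to every root against the threshold epsilon. */
int converged(const double complex *delta, int n, double eps)
{
    for (int i = 0; i < n; i++)
        if (cabs(delta[i]) >= eps)
            return 0;
    return 1;
}
\end{verbatim}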
\subsection{Improving the Ehrlich-Aberth method}
-With high degree polynomials, the Ehrlich-Aberth method suffers from floating point overflows due to the mantissa of floating points representations. This induces errors in the computation of $p(z)$ when $z$ is large.
+With high degree polynomials, the EA method suffers from floating point overflows due to the limited range of floating point representations. This induces errors in the computation of $p(z)$ when $z$ is large.
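The reformulation introduced below relies on working with the logarithm
and the exponential of complex values. As a rough illustration of the
idea (a sketch under the assumption of a sparse polynomial stored as
coefficient/exponent pairs, not the evaluation kernel of this paper),
$\ln(p(z))$ can be accumulated entirely in log space using
$\ln(A+B)=\ln A+\ln\!\left(1+e^{\ln B-\ln A}\right)$ with the larger
term kept as $A$, so that neither $z^n$ nor $p(z)$ is ever formed
directly; the Newton ratio can then be obtained as
$\exp(\ln p(z)-\ln p'(z))$.
\begin{verbatim}
#include <complex.h>

/* Illustrative sketch: evaluate ln(p(z)) for a sparse polynomial
   p(z) = sum_j coef[j] * z^expo[j], entirely in log space. */
double complex log_poly_eval(const double complex *coef, const int *expo,
                             int nterms, double complex z)
{
    double complex lz  = clog(z);
    double complex acc = clog(coef[0]) + expo[0] * lz;   /* ln(a_0 z^e_0) */
    for (int j = 1; j < nterms; j++) {
        double complex t = clog(coef[j]) + expo[j] * lz; /* ln(a_j z^e_j) */
        if (creal(t) > creal(acc)) {     /* keep the dominant term as acc */
            double complex tmp = acc; acc = t; t = tmp;
        }
        acc = acc + clog(1.0 + cexp(t - acc));           /* ln(A + B)     */
    }
    return acc;   /* ln(p(z)); kept in log form to avoid overflow */
}
\end{verbatim}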
In order to solve this problem, we propose to modify the iterative
function by using the logarithm and the exponential of a complex number, and
-we propose a new version of the Ehrlich-Aberth method. This method
+we propose a new version of the EA method. This method
allows us to go beyond polynomials of degree
-100,000 and to reach a degree up to more than 1,000,000. The reformulation of the iteration~(\ref{Eq:EA1}) of the Ehrlich-Aberth method with exponential and logarithm operators is defined as follows, for $i=1,\dots,n$:
+100,000 and to reach degrees of more than 1,000,000. The reformulation of the iteration~(\ref{Eq:EA1}) of the EA method with exponential and logarithm operators is defined as follows, for $i=1,\dots,n$:
\begin{equation}
\label{Log_H2}
the local roots are transferred from the GPU memory to the CPU memory
(line 12) before being exchanged between all processors (line 13) in
order to give all processors the latest version of the roots (with
-the MPI\_AlltoAll routine). If the convergence is not satisfied, a
+the \texttt{MPI\_AlltoAll} routine). If the convergence is not satisfied, a
new iteration is executed.
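As an illustration of this exchange step, the following is a hedged
sketch with illustrative names, not the code of this paper; it uses
\texttt{MPI\_Allgather} to leave every process with the complete,
up-to-date root vector, which is the role played by the
\texttt{MPI\_AlltoAll} call mentioned above.
\begin{verbatim}
#include <mpi.h>
#include <cuda_runtime.h>
#include <cuComplex.h>

/* Illustrative sketch of the root exchange between iterations.
   Assumes n is divisible by the number of MPI processes. */
void exchange_roots(cuDoubleComplex *d_roots, /* device copy, n roots */
                    cuDoubleComplex *h_roots, /* host copy,   n roots */
                    int n, int rank, int nprocs, MPI_Comm comm)
{
    int chunk  = n / nprocs;
    int offset = rank * chunk;

    /* local block of roots: GPU memory -> CPU memory (the text's line 12) */
    cudaMemcpy(h_roots + offset, d_roots + offset,
               chunk * sizeof(cuDoubleComplex), cudaMemcpyDeviceToHost);

    /* every process gathers all blocks (the text's line 13) */
    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  h_roots, 2 * chunk, MPI_DOUBLE, comm);

    /* full, updated vector back to the GPU for the next iteration */
    cudaMemcpy(d_roots, h_roots, n * sizeof(cuDoubleComplex),
               cudaMemcpyHostToDevice);
}
\end{verbatim}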
\begin{algorithm}[htpb]
evaluate both the GPU and Multi-GPU approaches, we performed a set of
experiments on a single GPU and multiple GPUs using OpenMP or MPI with
the EA algorithm, for both sparse and full polynomials of different
-sizes. All experimental results obtained are performed with double
+degrees. All experiments are performed with double
precision floating point data, and the convergence threshold of the EA
method is set to $10^{-7}$. The initial values of the solution vector
are given by the Guggenheimer method~\cite{Gugg86}.
\label{fig:06}
\end{figure}
-In Figure~\ref{fig:05} there is one curve for CUDA-OpenMP and another one for CUDA-MPI. We can see that the results are quite similar between OpenMP and MPI for the polynomial degree of 200K. For the degree of 800K, the MPI version is a little bit slower than the OpenMP version but for the degree of 1,4 million, there is a slight advantage for the MPI version. In Figure~\ref{fig:06}, we can see that when it comes to full polynomials, both approaches are almost equivalent.
+In Figure~\ref{fig:05} there is one curve for CUDA-OpenMP and another one for CUDA-MPI for each polynomial degree investigated. We can see that the results of OpenMP and MPI are quite similar for the polynomial of degree 200K. For the degree of 800K, the MPI version is slightly slower than the OpenMP version, but for the degree of 1.4 million, there is a slight advantage for the MPI version. In Figure~\ref{fig:06}, we can see that for full polynomials both approaches are almost equivalent.
\subsection{Solving sparse and full polynomials of the same degree on multiple GPUs}
\section{Conclusion}
\label{sec6}
-In this paper, we have presented parallel implementations of the Ehrlich-Aberth algorithm to solve full and sparse polynomials, on a single GPU with CUDA and on multiple GPUs using two parallel paradigms: shared memory with OpenMP and distributed memory with MPI. These architectures were addressed by a CUDA-OpenMP approach and CUDA-MPI approach, respectively. Experiments show that, using parallel programming model like (OpenMP or MPI), we can efficiently manage multiple graphics cards to solve the same problem and accelerate the parallel execution with 4 GPUs and solve a polynomial of degree up-to 5,000,000 four times faster than on a single GPU.
+In this paper, we have presented parallel implementations of the Ehrlich-Aberth algorithm to solve full and sparse polynomials, on a single GPU with CUDA and on multiple GPUs using two parallel paradigms: shared memory with OpenMP and distributed memory with MPI. These architectures were addressed by a CUDA-OpenMP approach and a CUDA-MPI approach, respectively. Experiments show that, using parallel programming models like OpenMP or MPI, we can efficiently manage multiple graphics cards to solve the same problem and accelerate the parallel execution: with 4 GPUs, a polynomial of degree up to 5,000,000 is solved four times faster than on a single GPU.
Our next objective is to extend the model presented here to clusters of GPU nodes, with a three-level scheme: inter-node communications via MPI processes (distributed memory), management of the GPUs inside each node by OpenMP threads (shared memory), and computations on each GPU with CUDA. Real platforms will probably also contain purely multi-core nodes without any GPU. This heterogeneous setting may require the integration of load balancing algorithms so as to allow an optimal use of the hardware resources.