DK: z_i^{k+1}=z_{i}^{k}-\frac{p(z_i^{k})}{\prod_{j\neq i}(z_i^{k}-z_j^{k})}, \quad i = 1,\ldots,n,
\end{equation}
where $z_i^k$ is the $i^{th}$ root of the polynomial $p$ at the
iteration $k$.
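
For illustration, one sweep of this iteration can be written as a short sequential loop. The following C sketch is not the parallel implementation discussed later; the coefficient layout ($a[i]$ holding the coefficient of $z^{n-i}$) and the helper \texttt{horner} are choices made only for this example.
\begin{verbatim}
#include <complex.h>

/* Evaluate p at x by Horner's rule; a[i] holds the coefficient of
   z^(n-i), a layout assumed only for this sketch. */
static double complex horner(const double complex *a, int n, double complex x)
{
    double complex v = a[0];
    for (int i = 1; i <= n; i++)
        v = v * x + a[i];
    return v;
}

/* One Durand-Kerner sweep: z[] holds z^k, zk1[] receives z^(k+1). */
static void dk_sweep(const double complex *a, int n,
                     const double complex *z, double complex *zk1)
{
    for (int i = 0; i < n; i++) {
        double complex denom = 1.0;           /* prod_{j != i} (z_i - z_j) */
        for (int j = 0; j < n; j++)
            if (j != i)
                denom *= z[i] - z[j];
        zk1[i] = z[i] - horner(a, n, z[i]) / denom;
    }
}
\end{verbatim}
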
The Ehrlich-Aberth method (EA) uses a different iteration formula, given by
\begin{equation}
EA: z_i^{k+1}=z_i^{k}-\frac{1}{\frac{p'(z_i^{k})}{p(z_i^{k})}-\sum_{j\neq i}\frac{1}{z_i^{k}-z_j^{k}}}, \quad i = 1,\ldots,n,
\end{equation}
where $p'(z)$ is the derivative of the polynomial $p$ evaluated at the
point $z$.
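
The Ehrlich-Aberth update can be sketched in the same sequential form, evaluating $p$ and $p'$ together by Horner's rule. Again, this is only an illustration of the formula, using the same assumed conventions as the Durand-Kerner sketch above, not the GPU implementation presented later.
\begin{verbatim}
/* One Ehrlich-Aberth sweep, same conventions as the sketch above:
   a[0..n] holds the coefficients of p, z[] the iterate z^k and
   zk1[] receives z^(k+1). */
static void ea_sweep(const double complex *a, int n,
                     const double complex *z, double complex *zk1)
{
    for (int i = 0; i < n; i++) {
        double complex p = a[0], dp = 0.0;    /* p(z_i) and p'(z_i)       */
        for (int k = 1; k <= n; k++) {
            dp = dp * z[i] + p;
            p  = p  * z[i] + a[k];
        }
        double complex s = 0.0;               /* sum_{j != i} 1/(z_i-z_j) */
        for (int j = 0; j < n; j++)
            if (j != i)
                s += 1.0 / (z[i] - z[j]);
        zk1[i] = z[i] - 1.0 / (dp / p - s);
    }
}
\end{verbatim}
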
Aberth, Ehrlich and Farmer-Loizou~\cite{Loizou83} have proved that the Ehrlich-Aberth method has a cubic order of convergence for simple roots, whereas the Durand-Kerner method has a quadratic order of convergence. In early parallel implementations of these simultaneous methods, a higher-order variant often
diverges, but the first two methods have speed-up equal to 5.5. Later,
Freeman and Bane~\cite{Freemanall90} considered asynchronous
algorithms, in which each processor continues to update its
approximations even though the latest values of other $z_i^{k}$
have not yet been received from the other processors, in contrast with synchronous algorithms where a processor would wait for those values before starting a new iteration.
Couturier et al.~\cite{Raphaelall01} proposed two methods of parallelization, one for
a shared memory architecture and one for a distributed memory architecture. They were able to
compute the roots of sparse polynomials of degree 10,000 in 430 seconds with only 8
personal computers and 2 communications per iteration. Compared to the sequential implementation,
which takes up to 3,300 seconds to obtain the same results, the authors obtained an interesting speed-up.
Very few works were performed after this last work until the appearance of
the Compute Unified Device Architecture (CUDA)~\cite{CUDA10}, a parallel
computing platform and programming model created by NVIDIA. A more recent
work implemented the Durand-Kerner method on GPU. Its main
result showed that a parallel CUDA implementation is about 10 times faster than
the sequential implementation on a single CPU for sparse
polynomials of degree 48,000.
In this paper, we focus on the implementation of the Ehrlich-Aberth
method for high degree polynomials on GPU. The rest of the paper is organized as follows.
Section~\ref{sec1} recalls the Ehrlich-Aberth method, and related work on parallel
root-finding methods is presented in Section~\ref{secStateofArt}. In Section~\ref{sec5} we propose a parallel
implementation of the Ehrlich-Aberth method on GPU and discuss
it. Section~\ref{sec6} presents and investigates our implementation
and experimental study results. Finally, Section~\ref{sec7} concludes
this paper and gives some hints for future research directions in this
topic.
\section{Ehrlich-Aberth method}
\label{sec1}
A cubically convergent iteration method for finding zeros of
polynomials was proposed by O. Aberth~\cite{Aberth73}. The Ehrlich-Aberth method consists of four main steps, presented in the following.
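
As a roadmap for the steps detailed in the following subsections, a purely sequential driver might look like the sketch below. The stopping criterion (maximum change between two iterates) and the iteration limit are assumptions made only for this sketch, and \texttt{ea\_sweep} refers to the illustrative sequential sweep given earlier, not to a GPU kernel.
\begin{verbatim}
#include <complex.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>

/* Sequential driver sketching the four steps: the coefficients a[0..n]
   and the initial vector z[0..n-1] (steps 1 and 2) are prepared by the
   caller; steps 3 and 4 are the EA sweep and a convergence test. */
void ehrlich_aberth(const double complex *a, int n,
                    double complex *z, double tol, int max_it)
{
    double complex *zk1 = malloc(n * sizeof *zk1);
    for (int it = 0; it < max_it; it++) {
        ea_sweep(a, n, z, zk1);              /* step 3: EA update      */
        double delta = 0.0;                  /* step 4: stopping test  */
        for (int i = 0; i < n; i++)
            delta = fmax(delta, cabs(zk1[i] - z[i]));
        memcpy(z, zk1, n * sizeof *z);
        if (delta < tol)
            break;
    }
    free(zk1);
}
\end{verbatim}
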
%The Aberth method is a purely algebraic derivation.
%To illustrate the derivation, we let $w_{i}(z)$ be the product of linear factors
\subsection{Polynomials Initialization}
The initialization of a polynomial $p(z)$ is done by setting each of the $n$ complex coefficients $a_{i}$:
\begin{equation}
\label{eq:SimplePolynome}
p(z)=\sum_{i=0}^{n}a_{i}z^{n-i}, \quad a_{0}=1,\; a_{n}\neq 0,\; a_{i}\in\mathbb{C},
\end{equation}
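
As a purely illustrative example of such an initialization, the sparse polynomial $p(z)=z^{n}-1$ can be set up as follows, using the coefficient layout assumed in the earlier sketches; the test polynomials used in the experiments may differ.
\begin{verbatim}
#include <complex.h>
#include <string.h>

/* Fill a[0..n] with the coefficients of p(z) = z^n - 1 in the layout
   a[i] = coefficient of z^(n-i), so that a_0 = 1 and a_n != 0 as
   required above.  This particular p is only an example. */
void init_simple_poly(double complex *a, int n)
{
    memset(a, 0, (n + 1) * sizeof *a);
    a[0] =  1.0;    /* leading coefficient */
    a[n] = -1.0;    /* constant term       */
}
\end{verbatim}
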
\subsection{Vector $Z^{(0)}$ Initialization}
\label{sec:vec_initialization}
As for any iterative method, we need to choose $n$ initial guess points $z^{(0)}_{i},\; i = 1,\ldots,n$.
The initial guess is very important since the number of steps needed by the iterative method to reach