DK: z_i^{k+1}=z_{i}^{k}-\frac{p(z_i^{k})}{\prod_{i\neq j}(z_i^{k}-z_j^{k})}, \quad i=1,\ldots,n,
\end{equation}
where $z_i^k$ is the $i^{th}$ root of the polynomial $p$ at the
iteration $k$.
EA: z_i^{k+1}=z_i^{k}-\frac{1}{\frac{p'(z_i^{k})}{p(z_i^{k})}-\sum_{i\neq j}\frac{1}{z_i^{k}-z_j^{k}}}, \quad i=1,\ldots,n,
\end{equation}
where $p'(z)$ is the derivative of the polynomial $p$ evaluated at the
point $z$.
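For illustration only, one synchronous step of each method can be sketched in C with the standard \texttt{complex} type. The coefficient layout (\texttt{a[i]} holds the coefficient of $z^i$) and the Horner helpers are assumptions of ours, not details of the implementations discussed below.
\begin{verbatim}
#include <complex.h>

/* Illustrative helpers: evaluate p and p' at z by Horner's rule,
   where a[0..n] holds the coefficients of p (a[i] goes with z^i). */
static double complex eval_p(const double complex *a, int n,
                             double complex z)
{
    double complex r = a[n];
    for (int i = n - 1; i >= 0; i--) r = r * z + a[i];
    return r;
}

static double complex eval_dp(const double complex *a, int n,
                              double complex z)
{
    double complex r = n * a[n];
    for (int i = n - 1; i >= 1; i--) r = r * z + i * a[i];
    return r;
}

/* One synchronous Durand-Kerner step (DK): znew is computed
   entirely from the previous approximations z. */
void dk_step(const double complex *a, int n,
             const double complex *z, double complex *znew)
{
    for (int i = 0; i < n; i++) {
        double complex denom = 1.0;
        for (int j = 0; j < n; j++)
            if (j != i) denom *= z[i] - z[j];
        znew[i] = z[i] - eval_p(a, n, z[i]) / denom;
    }
}

/* One synchronous Ehrlich-Aberth step (EA). */
void ea_step(const double complex *a, int n,
             const double complex *z, double complex *znew)
{
    for (int i = 0; i < n; i++) {
        double complex s = 0.0;
        for (int j = 0; j < n; j++)
            if (j != i) s += 1.0 / (z[i] - z[j]);
        double complex ratio = eval_dp(a, n, z[i]) / eval_p(a, n, z[i]);
        znew[i] = z[i] - 1.0 / (ratio - s);
    }
}
\end{verbatim}
Both steps cost $O(n)$ per root, so one full sweep over the $n$ approximations is $O(n^{2})$, which is what makes these methods good candidates for parallelization.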
Aberth, Ehrlich and Farmer-Loizou~\cite{Loizou83} have proved that
the Ehrlich-Aberth method has a cubic order of convergence for simple
roots, whereas the Durand-Kerner method has a quadratic order of
convergence. In early parallel experiments comparing these two methods
with a fourth-order method, the latter often
diverges, but the first two methods have speed-up equal to 5.5. Later,
Freeman and Bane~\cite{Freemanall90} considered asynchronous
algorithms, in which each processor continues to update its
approximations even though the latest values of other $z_i^{k}$
have not been received from the other processors, in contrast with synchronous algorithms, where each processor would wait for those values before performing a new iteration.
Couturier et al.~\cite{Raphaelall01} proposed two parallelization methods, one for
a shared memory architecture and one for a distributed memory architecture. They were able to
compute the roots of sparse polynomials of degree 10,000 in 430 seconds with only 8
personal computers and 2 communications per iteration. Compared to the sequential implementation,
which takes up to 3,300 seconds to obtain the same results, this is an interesting speedup of about 7.7.
Very few works were carried out on this problem until the appearance of
the Compute Unified Device Architecture (CUDA)~\cite{CUDA10}, a
parallel computing platform and programming model invented by NVIDIA,
which made it possible to implement the
Durand-Kerner method on GPU. The main
result of such an implementation showed that a parallel CUDA version is about 10 times faster than
the sequential implementation on a single CPU for sparse
polynomials of degree 48,000.
In this paper, we focus on the implementation of the Ehrlich-Aberth
method for high degree polynomials on GPU. The paper is organized as
follows: we first recall the Ehrlich-Aberth method in
Section~\ref{sec1}, and related work on parallel root-finding is
discussed in Section~\ref{secStateofArt}. In Section~\ref{sec5} we propose a parallel
implementation of the Ehrlich-Aberth method on GPU and discuss
it. Section~\ref{sec6} presents and investigates our implementation
and experimental study results. Finally, Section~\ref{sec7} concludes
this paper and gives some hints for future research directions in this
topic.
\section{Ehrlich-Aberth method}
\label{sec1}
A cubically convergent iteration method for finding zeros of
polynomials was proposed by O. Aberth~\cite{Aberth73}. The Ehrlich-Aberth method consists of four main steps, presented in the following.
%The Aberth method is a purely algebraic derivation.
%To illustrate the derivation, we let $w_{i}(z)$ be the product of linear factors
\subsection{Polynomials Initialization}
The initialization of a polynomial $p(z)$ is done by setting each of the $n$ complex coefficients $a_{i}$:
\begin{equation}
\label{eq:SimplePolynome}
p(z) = \sum_{i=0}^{n} a_{i} z^{i}, \quad a_{n} \neq 0,
\end{equation}
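For illustration, a sparse polynomial can be initialized by setting all coefficients to zero except a few nonzero terms; the particular nonzero pattern below is an assumption of ours, not the one used in the experiments.
\begin{verbatim}
#include <complex.h>

/* Illustrative initialization of a sparse polynomial of degree n:
   a[0..n] are the complex coefficients, almost all set to zero. */
void init_sparse_poly(double complex *a, int n)
{
    for (int i = 0; i <= n; i++) a[i] = 0.0;
    a[0] = 1.0;   /* constant term           */
    a[n] = 1.0;   /* leading coefficient a_n */
}
\end{verbatim}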
\subsection{Vector $Z^{(0)}$ Initialization}
\label{sec:vec_initialization}
As for any iterative method, we need to choose $n$ initial guess points $z^{0}_{i},\ i = 1, \ldots, n.$
The initial guess is very important since the number of steps needed by the iterative method to reach
a given approximation strongly depends on it.
In~\cite{Aberth73} the Ehrlich-Aberth iteration is started by selecting $n$ points on a circle centered at the origin, whose radius is an upper bound on the moduli of the zeros.
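A common way to realize this choice, sketched here under the assumption that the points are spread uniformly on a circle of radius $r$ bounding the moduli of the zeros, is the following:
\begin{verbatim}
#include <complex.h>
#include <math.h>

/* Spread the n initial guesses evenly on the circle of radius r
   centered at the origin (the exact angles may differ from the
   authors' choice). */
void init_z0(double complex *z, int n, double r)
{
    for (int i = 0; i < n; i++) {
        double theta = 2.0 * M_PI * (double)i / (double)n;
        z[i] = r * cos(theta) + r * sin(theta) * I;
    }
}
\end{verbatim}
Starting from $Z^{(0)}$, each iteration then updates all the approximations simultaneously, using the following form of the Ehrlich-Aberth iteration: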
\begin{equation}
\label{Eq:Hi}
EA2: z^{k+1}_{i}=z_{i}^{k}-\frac{\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}}
{1-\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}\sum_{j=1,j\neq i}^{j=n}{\frac{1}{(z_{i}^{k}-z_{j}^{k})}}}, \quad i=1,\ldots,n
\end{equation}
It can be noticed that this equation is equivalent to Eq.~\ref{Eq:EA}.
Its implementation, as well as the Durand-Kerner one, suffers from overflow problems. This
situation occurs, for instance, in the case where a polynomial
having positive coefficients and a large degree is computed at a
point $\xi$ where $|\xi| > 1$, where $|z|$ stands for the modulus of a complex number $z$. Indeed, the limited number of digits in the
mantissa of floating-point representations makes the computation of $p(z)$ inaccurate when $z$
is large. For example, $(10^{50}) + 1 + (-10^{50})$ gives the wrong result
of $0$ instead of $1$. Consequently, we cannot directly compute the roots
of polynomials of large degrees.
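The following tiny C program, which only demonstrates the floating-point effect and is not part of any root-finding code, reproduces this cancellation:
\begin{verbatim}
#include <stdio.h>

int main(void)
{
    /* 10^50 + 1 rounds back to 10^50: a double mantissa has only
       53 bits, far too few to keep the +1 at this magnitude. */
    double a = 1e50;
    printf("%g\n", (a + 1.0) - a);   /* prints 0, not 1 */
    return 0;
}
\end{verbatim}
This problem was discussed early in the literature; the workaround is to carry out the computation through the exponential and the logarithm of complex numbers, which leads to the following form of the Ehrlich-Aberth iteration: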
%%$$ \exp \bigl( \ln(p(z)_{k})-ln(\ln(p(z)_{k}^{'}))- \ln(1- \exp(\ln(p(z)_{k})-ln(\ln(p(z)_{k}^{'})+\ln\sum_{i\neq j}^{n}\frac{1}{z_{k}-z_{j}})$$
\begin{equation}
\label{Log_H2}
EA.EL: z^{k+1}_{i}=z_{i}^{k}-\exp \left(\ln \left(
p(z_{i}^{k})\right)-\ln\left(p'(z^{k}_{i})\right)- \ln
\left(1-Q(z^{k}_{i})\right)\right),
\end{equation}
where
\begin{equation}
\label{Log_H1}
Q(z^{k}_{i})=\exp\left( \ln (p(z^{k}_{i}))-\ln(p'(z^{k}_{i}))+\ln \left(
\sum_{i\neq j}^{n}\frac{1}{z^{k}_{i}-z^{k}_{j}}\right)\right), \quad i=1,\ldots,n.
\end{equation}
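Assuming helpers (not shown) that return $\ln(p(z^{k}_{i}))$ and $\ln(p'(z^{k}_{i}))$ directly in log scale, one update of Eqs.~\ref{Log_H2} and~\ref{Log_H1} can be sketched in C with the standard \texttt{cexp} and \texttt{clog} functions:
\begin{verbatim}
#include <complex.h>

/* One exp-log update of root i: lp and ldp are ln p(z_i) and
   ln p'(z_i), s is the sum over j != i of 1/(z_i - z_j). All
   quotients go through logarithms, so p(z_i) is never formed. */
double complex ea_explog_update(double complex zi, double complex lp,
                                double complex ldp, double complex s)
{
    double complex q = cexp(lp - ldp + clog(s));  /* Q(z_i) */
    return zi - cexp(lp - ldp - clog(1.0 - q));
}
\end{verbatim}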
This solution is applied when the modulus of a root lies outside the unit circle, i.e., when it exceeds a radius $R$ that can be evaluated in C language as sketched below.
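A plausible definition of this radius, assuming it is tied to the largest representable \texttt{double} (the exact constant may differ), is:
\begin{verbatim}
#include <float.h>
#include <math.h>

/* Radius beyond which the exp-log form is used: chosen so that
   magnitudes up to R^(2n) remain representable as doubles. */
double explog_radius(int n)
{
    return exp(log(DBL_MAX) / (2.0 * n));
}
\end{verbatim}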
\subsection{Influence of the number of threads on the execution times of different polynomials (sparse and full)}
To optimize the performance of an algorithm on a GPU, it is necessary to maximize the use of the GPU cores (i.e., to maximize the number of threads executed in parallel) and to optimize the use of the various GPU memories. It is therefore interesting to study the influence of the number of threads per block on the execution time of the Ehrlich-Aberth algorithm.
For that, we notice that the maximum number of threads per block for the Nvidia Tesla K40 GPU is 1,024, so we varied the number of threads per block from 8 to 1,024. We measured the execution times for both sparse and full polynomials, using 10 different polynomials of degree 50,000 and 10 different polynomials of degree 500,000.
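The corresponding launch configuration can be sketched as follows, where the names are illustrative and \texttt{n} is the polynomial degree, with one thread per root:
\begin{verbatim}
/* Number of blocks needed to launch n threads (one per root) for a
   given block size; block sizes from 8 to 1,024 were tested, 1,024
   being the maximum on the Tesla K40. */
int grid_size(int n, int threads_per_block)
{
    return (n + threads_per_block - 1) / threads_per_block;
}
\end{verbatim}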
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{figures/sparse_full_explog}
\caption{The impact of the exp-log solution for computing polynomials of very high degree.}
\label{fig:03}
\end{figure}
Figure~\ref{fig:03} compares the two versions of the Ehrlich-Aberth
algorithm, with and without the exp-log solution, for full and sparse
polynomial degrees. We can see that the execution times of both
versions are the same for full polynomials of degree less than 4,000
and for sparse polynomials of degree less than 150,000. We also
clearly see that the classical version (without exp-log) of the
Ehrlich-Aberth algorithm does not converge beyond these degrees, for
either sparse or full polynomials. In contrast, the new version of the
Ehrlich-Aberth algorithm with the exp-log solution can solve
polynomials of very high degree.
%in fact, when the modulus of the roots are up than \textit{R} given in ~\ref{R},this exceed the limited number in the mantissa of floating points representations and can not compute the iterative function given in ~\ref{eq:Aberth-H-GS} to obtain the root solution, who justify the divergence of the classical Ehrlich-Aberth algorithm. However, applying exp-log solution given in ~\ref{sec2} took into account the limit of floating using the iterative function in(Eq.~\ref{Log_H1},Eq.~\ref{Log_H2} and allows to solve a very large polynomials degrees .
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{figures/EA_DK1}
\caption{Execution times of the Durand-Kerner and the Ehrlich-Aberth methods on GPU}
\label{fig:04}
\end{figure}

Figure~\ref{fig:04} shows the execution times of both methods with
sparse polynomial degrees ranging from 1,000 to 1,000,000. We can see
that the Ehrlich-Aberth algorithm is faster than the Durand-Kerner
one: although the latter has a lower execution time per iteration, it
needs more iterations to converge.
\section{Conclusion and perspectives}
\label{sec7}
In this paper we have presented a parallel implementation of the
Ehrlich-Aberth method on GPU for the problem of finding roots