X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper1.git/blobdiff_plain/bbbeb47ad22f51afe096a43c96a0130e92e19c30..1df92e2629fef95f9b236c8d952d94c08f5f34a0:/paper.tex

diff --git a/paper.tex b/paper.tex
index abc8b8d..1d70450 100644
--- a/paper.tex
+++ b/paper.tex
@@ -151,7 +151,7 @@ method:
 DK: z_i^{k+1}=z_{i}^{k}-\frac{P(z_i^{k})}{\prod_{i\neq j}(z_i^{k}-z_j^{k})}, i = 1, . . . , n,
 \end{equation}
 %%\end{center}
-where $z_i^k$ is the $i^{th}$ root of the polynomial $P$ at the
+where $z_i^k$ is the $i^{th}$ root of the polynomial $p$ at the
 iteration $k$.
 
@@ -169,7 +169,7 @@ Aberth~\cite{Aberth73} uses a different iteration formula given as:
 EA: z_i^{k+1}=z_i^{k}-\frac{1}{{\frac {P'(z_i^{k})} {P(z_i^{k})}}-{\sum_{i\neq j}\frac{1}{(z_i^{k}-z_j^{k})}}}, i = 1, . . . , n,
 \end{equation}
 %%\end{center}
-where $P'(z)$ is the polynomial derivative of $P$ evaluated in the
+where $p'(z)$ is the polynomial derivative of $p$ evaluated in the
 point $z$.
 
 Aberth, Ehrlich and Farmer-Loizou~\cite{Loizou83} have proved that
@@ -191,13 +191,13 @@ chain, for polynomials of degree up to 8. The third method often
 diverges, but the first two methods have speed-up equal to 5.5. Later,
 Freeman and Bane~\cite{Freemanall90} considered asynchronous
 algorithms, in which each processor continues to update its
-approximations even though the latest values of other $z_i((k))$
+approximations even though the latest values of other $z_i^{k}$
 have not been received from the other processors, in contrast with synchronous algorithms where it would wait those values before making a new iteration.
 Couturier and al.~\cite{Raphaelall01} proposed two methods of parallelization for
 a shared memory architecture and for distributed memory one. They were able to
-compute the roots of sparse polynomials of degree 10000 in 430 seconds with only 8
+compute the roots of sparse polynomials of degree 10,000 in 430 seconds with only 8
 personal computers and 2 communications per iteration. Comparing to the sequential implementation
-where it takes up to 3300 seconds to obtain the same results, the authors show an interesting speedup.
+where it takes up to 3,300 seconds to obtain the same results, the authors show an interesting speedup.
 
 Very few works had been performed since this last work until the appearing of
 the Compute Unified Device Architecture (CUDA)~\cite{CUDA10}, a
@@ -211,7 +211,7 @@ Ghidouche and al~\cite{Kahinall14} proposed an implementation of the
 Durand-Kerner method on GPU. Their main result showed that a parallel
 CUDA implementation is about 10 times faster than the sequential
 implementation on a single CPU for sparse
-polynomials of degree 48000.
+polynomials of degree 48,000.
 
 
 In this paper, we focus on the implementation of the Ehrlich-Aberth
@@ -225,15 +225,14 @@ simultaneous methods using a parallel approach is presented in Section
 \ref{secStateofArt}. In Section~\ref{sec5} we propose a parallel
 implementation of the Ehrlich-Aberth method on GPU and discuss
 it. Section~\ref{sec6} presents and investigates our implementation
-and experimental study results. Finally, Section~\ref{sec7} 6 concludes
+and experimental study results. Finally, Section~\ref{sec7} concludes
 this paper and gives some hints for future research directions in this
 topic.
 
 \section{Ehrlich-Aberth method}
 \label{sec1}
 A cubically convergent iteration method for finding zeros of
-polynomials was proposed by O. Aberth~\cite{Aberth73}. In the
-following we present the main stages of our implementation the Ehrlich-Aberth method.
+polynomials was proposed by O. Aberth~\cite{Aberth73}. The Ehrlich-Aberth method contains four main steps, presented in the following.
 %The Aberth method is a purely algebraic derivation.
 %To illustrate the derivation, we let $w_{i}(z)$ be the product of linear factors
 
@@ -259,7 +258,7 @@ following we present the main stages of our implementation the Ehrlich-Aberth me
 
 \subsection{Polynomials Initialization}
-The initialization of a polynomial p(z) is done by setting each of the $n$ complex coefficients $a_{i}$:
+The initialization of a polynomial $p(z)$ is done by setting each of the $n$ complex coefficients $a_{i}$:
 
 \begin{equation}
 \label{eq:SimplePolynome}
@@ -267,9 +266,9 @@ The initialization of a polynomial p(z) is done by setting each of the $n$ compl
 \end{equation}
 
 
-\subsection{Vector $z^{(0)}$ Initialization}
+\subsection{Vector $Z^{(0)}$ Initialization}
 \label{sec:vec_initialization}
-As for any iterative method, we need to choose $n$ initial guess points $z^{(0)}_{i}, i = 1, . . . , n.$
+As for any iterative method, we need to choose $n$ initial guess points $z^{0}_{i}, i = 1, . . . , n.$
 The initial guess is very important since the number of steps needed by the iterative method to reach
 a given approximation strongly depends on it. In~\cite{Aberth73} the Ehrlich-Aberth iteration is started by selecting $n$
@@ -300,7 +299,7 @@ Here we give a second form of the iterative function used by Ehrlich-Aberth meth
 
 \begin{equation}
 \label{Eq:Hi}
-EA2: z^{k+1}=z_{i}^{k}-\frac{\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}}
+EA2: z^{k+1}_{i}=z_{i}^{k}-\frac{\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}}
 {1-\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}\sum_{j=1,j\neq i}^{j=n}{\frac{1}{(z_{i}^{k}-z_{j}^{k})}}}, i=0,. . . .,n
 \end{equation}
 It can be noticed that this equation is equivalent to Eq.~\ref{Eq:EA},
@@ -322,8 +321,8 @@ With high degree polynomial, the Ehrlich-Aberth method implementation,
 as well as the Durand-Kerner implement, suffers from overflow problems. This
 situation occurs, for instance, in the case where a polynomial
 having positive coefficients and a large degree is computed at a
-point $\xi$ where $|\xi| > 1$, where $|x|$ stands for the modolus of a complex $x$. Indeed, the limited number in the
-mantissa of floating points representations makes the computation of p(z) wrong when z
+point $\xi$ where $|\xi| > 1$, where $|z|$ stands for the modulus of a complex $z$. Indeed, the limited number of bits in the
+mantissa of floating-point representations makes the computation of $p(z)$ wrong when $z$
 is large. For example $(10^{50}) +1+ (- 10^{50})$ will give the wrong
 result of $0$ instead of $1$. Consequently, we can not compute the roots
 for large degrees. This problem was early discussed in
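
For illustration only, the second Ehrlich-Aberth form (EA2) shown above maps onto code roughly as follows. This is a minimal sequential C sketch, not the paper's CUDA implementation: the names (horner, ea2_step), the coefficient layout a[0..n] for a degree-n polynomial, and the plain Horner evaluation of p and p' are assumptions made here; as the overflow discussion in the last hunk notes, a naive evaluation like this breaks down for the large degrees the paper targets.

/* Sketch: one Ehrlich-Aberth iteration following Eq. EA2.
 * Assumed layout: a[0..n] are the complex coefficients of a degree-n
 * polynomial, z[0..n-1] the current root approximations.            */
#include <complex.h>
#include <stddef.h>

/* Evaluate p(z) and p'(z) together with Horner's rule. */
void horner(const double complex *a, size_t n, double complex z,
            double complex *p, double complex *dp)
{
    double complex v = a[n], d = 0.0;
    for (size_t k = n; k-- > 0; ) {
        d = d * z + v;      /* derivative: Horner on the partial sums */
        v = v * z + a[k];   /* value: standard Horner step            */
    }
    *p = v;
    *dp = d;
}

/* z_new[i] = z[i] - (p/p') / (1 - (p/p') * sum_{j != i} 1/(z[i]-z[j])) */
void ea2_step(const double complex *a, size_t n,
              const double complex *z, double complex *z_new)
{
    for (size_t i = 0; i < n; i++) {
        double complex p, dp;
        horner(a, n, z[i], &p, &dp);
        double complex q = p / dp;               /* Newton correction */
        double complex s = 0.0;
        for (size_t j = 0; j < n; j++)
            if (j != i)
                s += 1.0 / (z[i] - z[j]);        /* repulsion term    */
        z_new[i] = z[i] - q / (1.0 - q * s);
    }
}

In a GPU version, the n independent updates of the outer loop are the natural unit of parallel work; the sketch above only fixes the arithmetic of a single update.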
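
The cancellation example quoted in the last hunk, $(10^{50}) + 1 + (-10^{50})$ evaluating to $0$ instead of $1$, can be checked directly in double precision; a minimal standalone demonstration, independent of the paper's code:

#include <stdio.h>

int main(void)
{
    /* 1.0 lies far below the spacing between representable doubles
     * around 1e50, so it is lost in the addition and the final
     * difference collapses to 0 rather than 1.                      */
    double big = 1e50;
    printf("%g\n", big + 1.0 - big);   /* prints 0 */
    return 0;
}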