1 \documentclass[review]{elsarticle}
3 \usepackage{lineno,hyperref}
4 %%\usepackage[utf8]{inputenc}
5 %%\usepackage[T1]{fontenc}
6 %%\usepackage[french]{babel}
7 \usepackage{amsmath,amsfonts,amssymb}
8 \usepackage[ruled,vlined]{algorithm2e}
9 \usepackage{array,multirow,makecell}
12 \newcolumntype{R}[1]{>{\raggedleft\arraybackslash }b{#1}}
13 \newcolumntype{L}[1]{>{\raggedright\arraybackslash }b{#1}}
14 \newcolumntype{C}[1]{>{\centering\arraybackslash }b{#1}}
17 \journal{Journal of \LaTeX\ Templates}
19 %%%%%%%%%%%%%%%%%%%%%%%
20 %% Elsevier bibliography styles
21 %%%%%%%%%%%%%%%%%%%%%%%
22 %% To change the style, put a % in front of the second line of the current style and
23 %% remove the % from the second line of the style you would like to use.
24 %%%%%%%%%%%%%%%%%%%%%%%
27 %\bibliographystyle{model1-num-names}
29 %% Numbered without titles
30 %\bibliographystyle{model1a-num-names}
33 %\bibliographystyle{model2-names.bst}\biboptions{authoryear}
36 %\usepackage{numcompress}\bibliographystyle{model3-num-names}
38 %% Vancouver name/year
39 %\usepackage{numcompress}\bibliographystyle{model4-names}\biboptions{authoryear}
42 %\bibliographystyle{model5-names}\biboptions{authoryear}
45 %\usepackage{numcompress}\bibliographystyle{model6-num-names}
47 %% `Elsevier LaTeX' style
48 \bibliographystyle{elsarticle-num}
49 %%%%%%%%%%%%%%%%%%%%%%%
55 \title{Parallel polynomial root finding using GPU}
62 %% or include affiliations in footnotes:
63 \author[mymainaddress]{Ghidouche Kahina\corref{mycorrespondingauthor}}
64 %%\ead[url]{kahina.ghidouche@gmail.com}
65 \cortext[mycorrespondingauthor]{Corresponding author}
66 \ead{kahina.ghidouche@gmail.com}
68 \author[mysecondaryaddress]{Couturier Raphael\corref{mycorrespondingauthor}}
69 %%\cortext[mycorrespondingauthor]{Corresponding author}
70 \ead{raphael.couturier@univ-fcomte.fr}
72 \author[mymainaddress]{Abderrahmane Sider\corref{mycorrespondingauthor}}
73 %%\cortext[mycorrespondingauthor]{Corresponding author}
74 \ead{ar.sider@univ-bejaia.dz}
\address[mymainaddress]{Department of Informatics, University of Bejaia, Algeria}
\address[mysecondaryaddress]{FEMTO-ST Institute, University of Franche-Comté}
In this article we present a parallel implementation
of the Aberth algorithm for the root finding problem of
high degree polynomials on GPU architecture (Graphics
polynomial root finding, high degree, iterative methods, Durand-Kerner, GPU, CUDA, CPU, parallelization
94 \section{The problem of finding roots of a polynomial}
Polynomials are algebraic structures used in mathematics that capture physical phenomena and express the outcome in the form of a function of some unknown variable. Formally speaking, a polynomial $p(x)$ of degree \textit{n} having $n$ coefficients in the complex plane \textit{C} has zeros $\alpha_{i},\textit{i=1,...,n}$:
p(x)=\sum_{i=0}^{n}{a_{i}x^{i}}=a_{n}\prod_{i=1}^{n}(x-\alpha_{i}), \quad a_{0}, a_{n}\neq 0.
The root finding problem consists in finding the $n$ values of the variable $x$ for which \textit{p(x)} vanishes. Such values are called zeros of $p$. The problem of finding a root is equivalent to that of solving a fixed-point problem. To see this, consider the fixed-point problem of finding the $n$-dimensional
107 where $g : C^{n}\longrightarrow C^{n}$. Usually, we can easily
108 rewrite this fixed-point problem as a root-finding problem by
109 setting $f(x) = x-g(x)$ and likewise we can recast the
110 root-finding problem into a fixed-point problem by setting
Often it is not possible to solve such nonlinear equation
root-finding problems analytically. When this occurs we turn to
numerical methods to approximate the solution.
118 Generally speaking, algorithms for solving problems can be divided into
119 two main groups: direct methods and iterative methods.
Direct methods exist only for $n \leq 4$, solved in closed form by G. Cardano
in the mid-16th century. However, N.H. Abel showed in the early 19th
century that polynomials of degree five or more cannot
be solved by direct methods. Since then, mathematicians have
focused on numerical (iterative) methods such as the famous
Newton's method, Bernoulli's method of the 18th century, and Graeffe's method.
Later on, with the advent of electronic computers, other methods have
been developed such as the Jenkins-Traub method, Larkin's method,
Muller's method, and several methods for the simultaneous
approximation of all the roots, starting with the Durand-Kerner (DK)
method:
Z_{i}=Z_{i}-\frac{P(Z_{i})}{\prod_{j\neq i}(Z_{i}-Z_{j})}
This formula was mentioned for the first time by
Weierstrass~\cite{Weierstrass03} as part of the fundamental theorem
of algebra and was rediscovered by Ilieff~\cite{Ilie50},
Docev~\cite{Docev62}, Durand~\cite{Durand60} and
Kerner~\cite{Kerner66}. Another method, discovered by
Borsch-Supan~\cite{Borch-Supan63} and also described and brought
to the following form by Ehrlich~\cite{Ehrlich67} and
Aberth~\cite{Aberth73}, uses a different iteration formula given as follows:
Z_{i}=Z_{i}-\frac{1}{\frac{P'(Z_{i})}{P(Z_{i})}-\sum_{j\neq i}\frac{1}{Z_{i}-Z_{j}}}.
Aberth, Ehrlich and Farmer-Loizou~\cite{Loizon83} have proved that
the Ehrlich-Aberth method (EA) has a cubic order of convergence for simple roots whereas the Durand-Kerner method has a quadratic order of convergence.
Iterative methods raise several problems when implemented, e.g.
specific sizes of numbers must be used to deal with overflow
difficulties. Moreover, the convergence time of iterative methods
drastically increases with the degree of the polynomial. It is expected that the
parallelization of these algorithms will improve the convergence
Many authors have dealt with the parallelization of
simultaneous methods, i.e. methods that find all the zeros simultaneously.
Freeman~\cite{Freeman89} implemented and compared DK, EA and another fourth order method proposed
by Farmer and Loizou~\cite{Loizon83}, on an 8-processor linear
chain, for polynomials of degree up to 8. The third method often
diverges, but the first two methods have a speed-up of 5.5
(speed-up = (time on one processor)/(time on p processors)). Later,
Freeman and Bane~\cite{Freemanall90} considered asynchronous
algorithms, in which each processor continues to update its
approximations even though the latest values of the other $z_i^{(k)}$
have not been received from the other processors, in contrast with synchronous algorithms where it would wait for those values before making a new iteration.
Couturier et al.~\cite{Raphaelall01} proposed two methods of parallelization, for
a shared memory architecture and for a distributed memory one. They were able to
compute the roots of polynomials of degree 10,000 in 430 seconds with only 8
personal computers and 2 communications per iteration. Compared to the sequential implementation,
which takes up to 3,300 seconds to obtain the same results, this is indeed an interesting speedup.
Very few works have been reported since then until the advent of
the Compute Unified Device Architecture (CUDA)~\cite{CUDA10}, a
parallel computing platform and a programming model invented by
NVIDIA. The computing power of GPUs (Graphics Processing Units) has exceeded that
of CPUs. However, CUDA adopts a totally new computing architecture to use the
hardware resources provided by the GPU in order to offer stronger
computing ability for massive data computing.
Ghidouche et al.~\cite{Kahinall14} proposed an implementation of the
Durand-Kerner method on GPU. Their main
result showed that a parallel CUDA implementation is 10 times as fast as
the sequential implementation on a single CPU for high degree
polynomials of degree about 48,000. To our knowledge, it was the first time such high degree polynomials were numerically solved.
In this paper, we focus on the implementation of the Aberth method for
high degree polynomials on GPU. The paper is organized as follows. We first recall the Aberth method in Section~\ref{sec1}. Improvements to the Aberth method are proposed in Section~\ref{sec2}. Work related to the implementation of simultaneous methods using a parallel approach is presented in Section~\ref{secStateofArt}.
In Section~4 we propose a parallel implementation of the Aberth method on GPU and discuss it. Section~5 presents and investigates the results of our implementation and experimental study. Finally, Section~6 concludes this paper and gives some hints for future research directions on this topic.
201 \section{The Sequential Aberth method}
A cubically convergent iteration method for finding zeros of
polynomials was proposed by O. Aberth~\cite{Aberth73}. The Aberth
method has a purely algebraic derivation. To illustrate the
derivation, we let $w_{i}(z)$ be the product of linear factors
209 w_{i}(z)=\prod_{j=1,j \neq i}^{n} (z-x_{j})
And let the rational function $R_{i}(z)$ be the correction term of the
Weierstrass method~\cite{Weierstrass03}
216 R_{i}(z)=\frac{p(z)}{w_{i}(z)} , i=1,2,...,n.
219 Differentiating the rational function $R_{i}(z)$ and applying the
220 Newton method, we have:
\frac{R_{i}(z)}{R_{i}^{'}(z)}= \frac{p(z)}{p^{'}(z)-p(z)\frac{w_{i}^{'}(z)}{w_{i}(z)}}= \frac{p(z)}{p^{'}(z)-p(z) \sum _{j=1,j \neq i}^{n}\frac{1}{z-x_{j}}}, \quad i=1,2,\ldots,n
Substituting $x_{i}$ for $z$ we obtain the Aberth iteration method.
In the following we present the main stages of the Aberth method.
\subsection{Polynomial Initialization}
The initialization of a polynomial $p(z)$ is done by setting each of its complex coefficients $a_{i}$:
235 \label{eq:SimplePolynome}
p(z)=\sum_{i=0}^{n}{a_{i}z^{n-i}}, \quad a_{n} \neq 0,\ a_{0}=1,\ a_{i}\in C
240 \subsection{Vector $z^{(0)}$ Initialization}
Like for any iterative method, we need to choose $n$ initial guess points $z^{(0)}_{i}, i = 1, \ldots, n.$
The initial guess is very important since the number of steps needed by the iterative method to reach
a given approximation strongly depends on it.
In~\cite{Aberth73} the Aberth iteration is started by selecting $n$
equispaced points on a circle of center 0 and radius $r$, where $r$ is
an upper bound of the moduli of the zeros. Later, Bini et al.~\cite{Bini96}
performed this choice by selecting complex numbers along different
circles, relying on the result of~\cite{Ostrowski41}.
\sigma_{0}=\frac{u+v}{2};\quad u=\frac{\sum_{i=1}^{n}u_{i}}{n\cdot\max_{i=1}^{n}u_{i}};
\quad v=\frac{\sum_{i=0}^{n-1}v_{i}}{n\cdot\min_{i=0}^{n-1}v_{i}};\\
u_{i}=2\cdot|a_{i}|^{\frac{1}{i}};\quad
v_{i}=\frac{\left|\frac{a_{n}}{a_{i}}\right|^{\frac{1}{n-i}}}{2}.
264 \subsection{Iterative Function $H_{i}$}
The operator used by the Aberth method corresponds to the
following equation, which enables convergence towards the
polynomial's solutions, provided all the roots are distinct.
270 H_{i}(z)=z_{i}-\frac{1}{\frac{p^{'}(z_{i})}{p(z_{i})}-\sum_{j\neq
271 i}{\frac{1}{z_{i}-z_{j}}}}
274 \subsection{Convergence Condition}
The convergence condition determines the termination of the algorithm. It consists in stopping
the iterative function $H_{i}(z)$ when the roots are sufficiently stable. We consider that the method
converges sufficiently when:
280 \label{eq:Aberth-Conv-Cond}
[1,n];\left|\frac{z_{i}^{(k)}-z_{i}^{(k-1)}}{z_{i}^{(k)}}\right|<\xi
\section{Improving the Ehrlich-Aberth Method}
The Ehrlich-Aberth method implementation suffers from overflow problems. This
situation occurs, for instance, when a polynomial
having positive coefficients and a large degree is evaluated at a
point $\xi$ where $|\xi| > 1$, where $|x|$ stands for the modulus of a complex number $x$. Indeed, the limited number of digits in the
mantissa of floating point representations makes the computation of $p(z)$ wrong when $z$
is large. For example, $(10^{50}) + 1 + (-10^{50})$ will give the wrong result
of $0$ instead of $1$. Consequently, we cannot compute the roots
for large degrees. This problem was discussed earlier
in~\cite{Karimall98} for the Durand-Kerner method; the authors
propose to use the logarithm and the exponential of a complex number in order to compute powers with high exponents.
\forall(x,y)\in R^{*2};\quad \ln(x+i\cdot y)=\frac{\ln(x^{2}+y^{2})}{2}
+i\cdot\arcsin\left(\frac{y}{\sqrt{x^{2}+y^{2}}}\right)_{\left] -\pi, \pi\right[ }
\label{defexpcomplex}
\forall(x,y)\in R^{*2};\quad \exp(x+i\cdot y) & = \exp(x)\cdot\exp(i\cdot y)\\
& = \exp(x)\cdot\cos(y)+i\cdot\exp(x)\cdot\sin(y)
Using the logarithm (Eq.~\ref{deflncomplex}) and the exponential (Eq.~\ref{defexpcomplex}) operators, we can replace any multiplications and divisions with additions and subtractions. Consequently, the computations
manipulate lower absolute values and the roots of high degree polynomials can be found successfully~\cite{Karimall98}.
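For instance, for $z=10^{5}$ and $n=100$, the direct computation of $z^{n}=10^{500}$ overflows the double precision range (whose largest representable value is about $1.8\times 10^{308}$), whereas its logarithm $n\cdot\ln(z)\approx 1151.3$ is perfectly representable.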
Applying this solution to the Aberth method, we obtain the
iteration function with logarithm:
317 %%$$ \exp \bigl( \ln(p(z)_{k})-ln(\ln(p(z)_{k}^{'}))- \ln(1- \exp(\ln(p(z)_{k})-ln(\ln(p(z)_{k}^{'})+\ln\sum_{i\neq j}^{n}\frac{1}{z_{k}-z_{j}})$$
H_{i}(z)=z_{i}^{k}-\exp \left(\ln \left(
p(z_{i}^{k})\right)-\ln\left(p'(z_{i}^{k})\right)- \ln
\left(1-Q(z_{i}^{k})\right)\right),
Q(z_{i}^{k})=\exp\left( \ln (p(z_{i}^{k}))-\ln(p'(z_{i}^{k}))+\ln \left(
\sum_{j\neq i}^{n}\frac{1}{z_{i}^{k}-z_{j}^{k}}\right)\right).
This solution is applied only when an overflow risk is detected, i.e. when the modulus of the current approximation exceeds a threshold radius $R$, as detailed in the implementation section below.
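To make this concrete, here is a minimal CUDA sketch of the two building blocks (the function names are ours; \texttt{atan2} is used as the numerically robust equivalent of the $\arcsin$ form of Eq.~\ref{deflncomplex}):
\begin{verbatim}
// Minimal sketch of the log/exp building blocks (hypothetical names).
// Complex logarithm: ln(x+iy) = ln(x^2+y^2)/2 + i*arg(x+iy)
__device__ void complex_log(double x, double y, double *re, double *im)
{
    *re = 0.5 * log(x * x + y * y); // ln of the modulus
    *im = atan2(y, x);              // argument, taken in ]-pi, pi]
}

// Complex exponential: exp(x+iy) = e^x (cos y + i sin y)
__device__ void complex_exp(double x, double y, double *re, double *im)
{
    double m = exp(x);
    *re = m * cos(y);
    *im = m * sin(y);
}
\end{verbatim}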
333 \section{The implementation of simultaneous methods in a parallel computer}
334 \label{secStateofArt}
The main problem of simultaneous methods is that the
time needed for convergence increases with
the degree of the polynomial. The parallelization of these
algorithms is expected to improve the convergence time.
Authors usually adopt one of the two following approaches to parallelize root
finding algorithms. The first approach aims at reducing the total number of
iterations, as in the works of Miranker~\cite{Mirankar68,Mirankar71}, Schedler~\cite{Schedler72} and
Winograd~\cite{Winogard72}. The second approach aims at reducing the
computation time per iteration, as reported
in~\cite{Benall68,Jana06,Janall99,Riceall06}.
There are many schemes for the simultaneous approximation of all roots of a given
polynomial. Several works on different methods and issues of root
finding have been reported in~\cite{Azad07, Gemignani07, Kalantari08, Skachek08, Zhancall08, Zhuall08}. However, the Durand-Kerner and Ehrlich-Aberth methods are the most practical choices among
them~\cite{Bini04}. These two methods have been extensively
studied for parallelization due to their intrinsic properties, i.e. the
computations involved in both methods have some inherent
parallelism that can be suitably exploited by SIMD machines.
Moreover, they have a fast rate of convergence (quadratic for the
Durand-Kerner and cubic for the Ehrlich-Aberth). Various parallel
algorithms reported for these methods can be found
in~\cite{Cosnard90,Freeman89,Freemanall90,Jana99,Janall99}.
Freeman and Bane~\cite{Freemanall90} presented two parallel
algorithms on a local memory MIMD computer with a compute-to-communication
time ratio of $O(n)$. However, their algorithms require
each processor to communicate its current approximation to all
other processors at the end of each iteration (synchronously). Therefore they
cause a high degree of memory conflict. Recently, the authors
in~\cite{Mirankar71} proposed two versions of parallel algorithms
for the Durand-Kerner and Ehrlich-Aberth methods on a model of an
Optoelectronic Transpose Interconnection System (OTIS). The
algorithms are mapped on an OTIS-2D torus using $N$ processors. This
solution needs $N$ processors to compute $N$ roots, which is not
practical for solving polynomials with large degrees.
Until very recently, the literature did not mention implementations able to compute the roots of
large degree polynomials (higher than 1000) within small, or at least tractable, times. Finding polynomial roots rapidly and accurately is the main objective of our work.
With the advent of CUDA (Compute Unified Device
Architecture), finding the roots of polynomials has received new attention because of the new possibilities to solve higher degree polynomials in less time.
In~\cite{Kahinall14} we already proposed the first implementation
of a root finding method on GPUs, that of the Durand-Kerner method. The main result showed
that a parallel CUDA implementation is 10 times as fast as the
sequential implementation on a single CPU for high degree
polynomials of degree up to 48,000. In this paper we present a parallel implementation of the Ehrlich-Aberth method on
GPUs, whose details are discussed in the sequel.
\section{A CUDA parallel Ehrlich-Aberth method}
384 \subsection{Background on the GPU architecture}
A GPU is viewed as an accelerator for data-parallel and
arithmetic-intensive computations. It draws its computing power
from the parallel nature of its hardware and software
388 architectures. A GPU is composed of hundreds of Streaming
389 Processors (SPs) organized in several blocks called Streaming
390 Multiprocessors (SMs). It also has a memory hierarchy. It has a
391 private read-write local memory per SP, fast shared memory and
392 read-only constant and texture caches per SM and a read-write
393 global memory shared by all its SPs~\cite{NVIDIA10}.
395 On a CPU equipped with a GPU, all the data-parallel and intensive
396 functions of an application running on the CPU are off-loaded onto
397 the GPU in order to accelerate their computations. A similar
398 data-parallel function is executed on a GPU as a kernel by
399 thousands or even millions of parallel threads, grouped together
400 as a grid of thread blocks. Therefore, each SM of the GPU executes
401 one or more thread blocks in SIMD fashion (Single Instruction,
402 Multiple Data) and in turn each SP of a GPU SM runs one or more
403 threads within a block in SIMT fashion (Single Instruction,
404 Multiple threads). Indeed at any given clock cycle, the threads
405 execute the same instruction of a kernel, but each of them
406 operates on different data.
GPUs only work on data present in their
global memories and the final results of their kernel executions
must be communicated to their CPUs. Hence, the data must be
transferred in and out of the GPU. However, the speed of memory
copy between the GPU and the CPU is slower than the memory
bandwidths of the GPU memories and, thus, it dramatically affects
the performance of GPU computations. Accordingly, it is necessary
to limit, as much as possible, data transfers between the GPU and its CPU during the
416 \subsection{Background on the CUDA Programming Model}
418 The CUDA programming model is similar in style to a single program
419 multiple-data (SPMD) software model. The GPU is viewed as a
420 coprocessor that executes data-parallel kernel functions. CUDA
421 provides three key abstractions, a hierarchy of thread groups,
422 shared memories, and barrier synchronization. Threads have a three
423 level hierarchy. A grid is a set of thread blocks that execute a
424 kernel function. Each grid consists of blocks of threads. Each
425 block is composed of hundreds of threads. Threads within one block
426 can share data using shared memory and can be synchronized at a
427 barrier. All threads within a block are executed concurrently on a
multithreaded architecture. The programmer specifies the number of
429 threads per block, and the number of blocks per grid. A thread in
430 the CUDA programming language is much lighter weight than a thread
431 in traditional operating systems. A thread in CUDA typically
432 processes one data element at a time. The CUDA programming model
433 has two shared read-write memory spaces, the shared memory space
434 and the global memory space. The shared memory is local to a block
435 and the global memory space is accessible by all blocks. CUDA also
436 provides two read-only memory spaces, the constant space and the
437 texture space, which reside in external DRAM, and are accessed via
\subsection{The implementation of the Aberth method on GPU}
441 %%\subsection{A CUDA implementation of the Aberth's method }
442 %%\subsection{A GPU implementation of the Aberth's method }
446 \subsubsection{A sequential Aberth algorithm}
The main steps of the Aberth method are shown in Algorithm~\ref{alg1-seq}:
452 \caption{A sequential algorithm to find roots with the Aberth method}
\KwIn{$Z^{0}$ (initial vector of roots), $\varepsilon$ (error tolerance threshold), $P$ (polynomial to solve)}
\KwOut{$Z$ (the vector of solution roots)}
461 Initialization of the coefficients of the polynomial to solve\;
462 Initialization of the solution vector $Z^{0}$\;
\While {$\Delta z_{max} > \varepsilon$}{
465 Let $\Delta z_{max}=0$\;
\For{$j \gets 0 $ \KwTo $n-1$}{
467 $ZPrec\left[j\right]=Z\left[j\right]$\;
468 $Z\left[j\right]=H\left(j,Z\right)$\;
471 \For{$i \gets 0 $ \KwTo $n-1$}{
$c=\frac{\left|Z\left[i\right]-ZPrec\left[i\right]\right|}{\left|Z\left[i\right]\right|}$\;
\If{$c > \Delta z_{max}$ }{
474 $\Delta z_{max}$=c\;}
In this sequential algorithm, one CPU thread executes all the steps. Let us look at the $3^{rd}$ step, i.e. the execution of the iterative function; two sub-steps are needed. The first sub-step \textit{saves} the solution vector of the previous iteration, the second sub-step \textit{updates} or computes the new values of the vector of roots.
There exist two ways to execute the iterative function, which we call the Jacobi one and the Gauss-Seidel one. With the Jacobi iteration, at iteration $k+1$ we need all the previous values $z^{(k)}_{i}$ to compute the new values $z^{(k+1)}_{i}$, that is:
H(i,z^{(k+1)})=z^{(k)}_{i}-\frac{p(z^{(k)}_{i})}{p'(z^{(k)}_{i})-p(z^{(k)}_{i})\sum^{n}_{j=1,j\neq i}\frac{1}{z^{(k)}_{i}-z^{(k)}_{j}}}, \quad i=1,\ldots,n.
With the Gauss-Seidel iteration, we have:
489 \label{eq:Aberth-H-GS}
H(i,z^{(k+1)})=z^{(k)}_{i}-\frac{p(z^{(k)}_{i})}{p'(z^{(k)}_{i})-p(z^{(k)}_{i})\left(\sum^{i-1}_{j=1}\frac{1}{z^{(k)}_{i}-z^{(k+1)}_{j}}+\sum^{n}_{j=i+1}\frac{1}{z^{(k)}_{i}-z^{(k)}_{j}}\right)}, \quad i=1,\ldots,n.
Using Equation~\ref{eq:Aberth-H-GS} for the update sub-step of $H(i,z^{(k+1)})$, we expect the Gauss-Seidel iteration to converge more quickly because, just as its ancestor (for solving linear systems of equations), it uses the most freshly computed roots $z^{(k+1)}_{j}$, as sketched below.
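The difference between the two orderings can be sketched as follows in C (the function \texttt{H} and the array names are hypothetical):
\begin{verbatim}
/* Jacobi: every update reads only the previous vector zPrec,
   i.e. the values z^{(k)}. */
for (int j = 0; j < n; j++)
    z[j] = H(j, zPrec, n);

/* Gauss-Seidel: updates are made in place, so H already sees the
   fresh values z[0..j-1] computed during the current iteration. */
for (int j = 0; j < n; j++)
    z[j] = H(j, z, n);
\end{verbatim}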
The $4^{th}$ step of the algorithm checks the convergence condition using Equation~\ref{eq:Aberth-Conv-Cond}.
Both steps 3 and 4 use a single thread to compute all the $n$ roots on the CPU, which is very harmful for performance in the case of large degree polynomials.
498 \paragraph{The execution time}
Let $T_{i}(n)$ be the time needed to compute one new root value at step 3; $T_{i}$ depends on the polynomial's degree $n$. When $n$ increases, $T_{i}(n)$ increases too. We need $n\cdot T_{i}(n)$ to compute all the new values in one iteration at step 3.
Let $T_{j}$ be the time needed to check the convergence of one root value at step 4; we thus need $n\cdot T_{j}$ to compute the global convergence condition in each iteration at step 4.
503 Thus, the execution time for both steps 3 and 4 is:
505 T_{iter}=n(T_{i}(n)+T_{j})+O(n).
507 Let $K$ be the number of iterations necessary to compute all the roots, so the total execution time $T$ can be given as:
T=\left[n\left(T_{i}(n)+T_{j}\right)+O(n)\right]\cdot K
The execution time increases with the polynomial degree, which justifies parallelizing these steps in order to reduce the global execution time. In the following, we explain how we parallelized these steps on a GPU architecture using the CUDA platform.
\subsubsection{A parallel implementation with CUDA}
On the CPU, both steps 3 and 4 contain the \verb=for= loop and a single thread executes all the instructions in the loop $n$ times. In this subsection, we explain how the GPU architecture can compute this loop and reduce the execution time.
On the GPU, the scheduler assigns the execution of this loop to a group of threads organized as a grid of blocks, with each block containing a number of threads. All threads within a block are executed concurrently in parallel. The instructions run on the GPU are grouped in special functions called kernels. It is up to the programmer to describe the execution context, that is, the size of the grid, the number of blocks and the number of threads per block, upon the call of a given kernel, according to a special syntax defined by CUDA.
Let $N$ be the number of threads executed in parallel; Equation~\ref{eq:T-global} then becomes:
T=\left[\frac{n}{N}\left(T_{i}(n)+T_{j}\right)+O(n)\right]\cdot K.
In theory, the total execution time $T$ on the GPU is sped up by a factor of $N$ compared to the CPU time. We will see to what extent this is true in the experimental study hereafter.
In CUDA programming, all the instructions of the \verb=for= loop are executed by the GPU as a kernel. A kernel is a function written in CUDA and defined by the \verb=__global__= qualifier added before a usual C function, which instructs the compiler to generate appropriate code to pass it to the CUDA runtime in order to be executed on the GPU.
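As a minimal sketch (the kernel name, the \texttt{H} device function and the array layout are our own assumptions, not the exact code of our implementation), a kernel updating one root per thread and its invocation look as follows:
\begin{verbatim}
// Sketch: one thread per root (needs <cuComplex.h>).
__global__ void kernel_update(cuDoubleComplex *z,
                              const cuDoubleComplex *zPrec, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // global thread index
    if (i < n)                                     // guard the last block
        z[i] = H(i, zPrec, n);  // H: device function of the iteration
}

// Host side: the execution context is given between <<< >>>.
int T = 256;              // threads per block
int B = (n + T - 1) / T;  // number of blocks covering the n roots
kernel_update<<<B, T>>>(d_z, d_zPrec, n);
\end{verbatim}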
Algorithm~\ref{alg2-cuda} shows a sketch of the Aberth algorithm using CUDA.
535 \caption{CUDA Algorithm to find roots with the Aberth method}
\KwIn{$Z^{0}$ (initial vector of roots), $\varepsilon$ (error tolerance threshold), $P$ (polynomial to solve)}
\KwOut{$Z$ (the vector of solution roots)}
Initialization of the coefficients of the polynomial to solve\;
545 Initialization of the solution vector $Z^{0}$\;
546 Allocate and copy initial data to the GPU global memory\;
\While {$\Delta z_{max} > \varepsilon$}{
549 Let $\Delta z_{max}=0$\;
550 $ kernel\_save(d\_z^{k-1})$\;
551 $ kernel\_update(d\_z^{k})$\;
$kernel\_testConverge(\Delta z_{max},d\_z^{k},d\_z^{k-1})$\;
After the initialization step, all data of the root finding problem to be solved must be copied from the CPU memory to the GPU global memory, because the GPU only accesses data already present in its memory. Next, all the data-parallel arithmetic operations inside the main loop \verb=(do ... while(...))= are executed as kernels by the GPU. The first kernel named \textit{save} in line 6 of Algorithm~\ref{alg2-cuda} consists in saving the vector of the polynomial's roots found at the previous time-step in GPU memory, in order to check the convergence of the roots after each iteration (line 8, Algorithm~\ref{alg2-cuda}).
The second kernel executes the iterative function $H$ and updates $z^{k}$, according to Algorithm~\ref{alg3-update}. We notice that the update kernel is called in two forms, depending on the value of \emph{R} which determines the radius beyond which we apply the logarithm-based computation of the power of a complex number.
564 \caption{A global Algorithm for the iterative function}
\eIf{$(\left|Z^{(k)}\right| \leq R)$}{
567 $kernel\_update(d\_z^{k})$\;}
569 $kernel\_update\_Log(d\_z^{k})$\;
The first form executes formula~\ref{eq:SimplePolynome} if the modulus of the current complex number is less than a certain value called the radius, i.e. $|z^{k}_{i}| \leq R$; otherwise the kernel executes the logarithm-based formulas (Eq.~\ref{deflncomplex}, Eq.~\ref{defexpcomplex}). The radius $R$ is evaluated as:
$$R = \exp\left(\frac{\log(DBL\_MAX)}{2n}\right)$$ where $DBL\_MAX$ stands for the maximum representable double precision value.
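In C this threshold is a direct one-line transcription (assuming double precision):
\begin{verbatim}
#include <float.h> /* DBL_MAX */
#include <math.h>
/* Radius beyond which the log/exp form of the update is used. */
double R = exp(log(DBL_MAX) / (2.0 * n));
\end{verbatim}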
The last kernel verifies the convergence of the roots after each update of $Z^{(k)}$, according to the convergence condition (Eq.~\ref{eq:Aberth-Conv-Cond}). We used functions of the CUBLAS library (CUDA Basic Linear Algebra Subroutines) to implement this kernel.
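One plausible realization of this check (a sketch only; the array \texttt{d\_err} and the kernel filling it are hypothetical) computes the $n$ relative differences into a device array and locates the largest one with cuBLAS:
\begin{verbatim}
#include <cublas_v2.h>
/* d_err[i] = |z_i^k - z_i^{k-1}| / |z_i^k|, filled by a small kernel. */
int idx;
double dzmax;
cublasHandle_t handle;
cublasCreate(&handle);
cublasIdamax(handle, n, d_err, 1, &idx);    // 1-based index of the max
cudaMemcpy(&dzmax, d_err + idx - 1, sizeof(double),
           cudaMemcpyDeviceToHost);         // fetch its value
cublasDestroy(handle);
\end{verbatim}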
The kernels terminate their computations when all the roots have converged. Finally, the solution of the root finding problem is copied back from the GPU global memory to the CPU memory. We use the communication functions of CUDA for the memory allocation in the GPU \verb=(cudaMalloc())= and for data transfers from the CPU memory to the GPU memory \verb=(cudaMemcpyHostToDevice)=
or from the GPU memory to the CPU memory \verb=(cudaMemcpyDeviceToHost)=.
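Putting the pieces together, the host side of Algorithm~\ref{alg2-cuda} can be sketched as follows (variable and kernel names are hypothetical; \texttt{B} and \texttt{T} are the grid and block sizes introduced above):
\begin{verbatim}
cuDoubleComplex *d_z, *d_zPrec;
cudaMalloc((void **)&d_z,     n * sizeof(cuDoubleComplex));
cudaMalloc((void **)&d_zPrec, n * sizeof(cuDoubleComplex));
cudaMemcpy(d_z, z0, n * sizeof(cuDoubleComplex),
           cudaMemcpyHostToDevice);            // copy Z^0 once
double dzmax = DBL_MAX;
while (dzmax > eps) {
    kernel_save<<<B, T>>>(d_zPrec, d_z, n);    // z^{k-1} <- z^k
    kernel_update<<<B, T>>>(d_z, d_zPrec, n);  // z^k <- H(z^{k-1})
    dzmax = test_converge(d_z, d_zPrec, n);    // reduction on the GPU
}
cudaMemcpy(z, d_z, n * sizeof(cuDoubleComplex),
           cudaMemcpyDeviceToHost);            // copy the roots back
cudaFree(d_z); cudaFree(d_zPrec);
\end{verbatim}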
582 \subsection{Experimental study}
584 \subsubsection{Definition of the polynomial used}
585 We use a polynomial of the following form for which the
586 roots are distributed on 2 distinct circles:
\forall \alpha_{1},\alpha_{2} \in C,\ \forall n_{1},n_{2} \in N^{*};\ P(z)=(z^{n_{1}}-\alpha_{1})(z^{n_{2}}-\alpha_{2})
This form makes it possible to associate roots having two
different moduli and thus to work on a polynomial consisting
of four non-zero terms.
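For illustration, a minimal host-side sketch (the array layout and names are our assumptions) that fills the four non-zero coefficients of this form:
\begin{verbatim}
#include <complex.h>
#include <string.h>
/* P(z) = (z^{n1}-a1)(z^{n2}-a2)
        = z^{n1+n2} - a2*z^{n1} - a1*z^{n2} + a1*a2.
   p[i] holds the coefficient of z^i; other entries stay zero. */
void build_sparse(double complex *p, int n1, int n2,
                  double complex a1, double complex a2)
{
    memset(p, 0, (n1 + n2 + 1) * sizeof *p);
    p[n1 + n2]  = 1.0;
    p[n1]      -= a2;      /* -= also covers the case n1 == n2 */
    p[n2]      -= a1;
    p[0]       += a1 * a2;
}
\end{verbatim}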
Another form of the polynomial, used to obtain a full polynomial, is:
597 %%\forall \alpha_{i} \in C,\forall n_{i}\in N^{*}; P(z)= \sum^{n}_{i=1}(z^{n^{i}}.a_{i})
\forall a_{i} \in C;\ p(x)=\sum^{n-1}_{i=1} a_{i}\cdot x^{i}
With this form, we can have up to \textit{n} non-zero terms.
\subsubsection{The study conditions}
In order to have representative average values, for each
point of our curves we measured the root finding of 10
different polynomials.
The results of our experiments concern two parameters: the
polynomial degree and the execution time needed by our program
to converge to the solution. The polynomial degree allows us
to validate that our algorithm copes with high degree
polynomials. The execution time remains the
key element which justifies our parallelization work.
For our tests we used an Intel(R) Xeon(R) E5620 CPU
running at 2.40 GHz and a Tesla C2070 GPU (with 6 GB of RAM).
619 \subsubsection{Comparative study}
We first studied the convergence of the Aberth algorithm for various polynomial sizes; second, we evaluated the influence of the number of threads per block.
622 \paragraph{Aberth algorithm on CPU and GPU}
626 % \begin{tabular} {|R{2cm}|L{2.5cm}|L{2.5cm}|L{1.5cm}|L{1.5cm}|}
627 % \hline Polynomial's degrees & $T_{exe}$ on CPU & $T_{exe}$ on GPU & CPU iteration & GPU iteration\\
628 % \hline 5000 & 1.90 & 0.40 & 18 & 17\\
629 % \hline 10000 & 172.723 & 0.59 & 21 & 24\\
630 % \hline 20000 & 172.723 & 1.52 & 21 & 25\\
631 % \hline 30000 & 172.723 & 2.77 & 21 & 33\\
632 % \hline 50000 & 172.723 & 3.92 & 21 & 18\\
633 % \hline 500000 & $>$1h & 497.109 & & 24\\
634 % \hline 1000000 & $>$1h & 1,524.51& & 24\\
637 % \caption{the convergence of Aberth algorithm}
638 % \label{tab:theConvergenceOfAberthAlgorithm}
643 \includegraphics[width=0.8\textwidth]{figures/Compar_EA_algorithm_CPU_GPU}
644 \caption{Aberth algorithm on CPU and GPU}
\paragraph{The impact of the number of threads on the convergence of the Aberth algorithm}
653 % \begin{tabular} {|R{2.5cm}|L{2.5cm}|L{2.5cm}|}
654 % \hline Thread's numbers & Execution time &Number of iteration\\
655 % \hline 1024 & 523 & 27\\
656 % \hline 512 & 449.426 & 24\\
657 % \hline 256 & 440.805 & 24\\
658 % \hline 128 & 456.175 & 22\\
659 % \hline 64 & 472.862 & 23\\
660 % \hline 32 & 830.152 & 24\\
661 % \hline 8 & 2632.78 & 23 \\
664 % \caption{The impact of the thread's number into the convergence of Aberth algorithm}
665 % \label{tab:Theimpactofthethread'snumberintotheconvergenceofAberthalgorithm}
672 \includegraphics[width=0.8\textwidth]{figures/influence_nb_threads}
673 \caption{Influence of the number of threads on the execution times of different polynomials (sparse and full)}
\paragraph{A comparative study between the Aberth and Durand-Kerner algorithms}
682 \begin{tabular} {|R{2cm}|L{2.5cm}|L{2.5cm}|L{1.5cm}|L{1.5cm}|}
\hline Polynomial degree & Aberth $T_{exe}$ & D-Kerner $T_{exe}$ & Aberth iterations & D-Kerner iterations\\
684 \hline 5000 & 0.40 & 3.42 & 17 & 138 \\
685 \hline 50000 & 3.92 & 385.266 & 17 & 823\\
686 \hline 500000 & 497.109 & 4677.36 & 24 & 214\\
\caption{The Aberth algorithm compared to the Durand-Kerner algorithm}
690 \label{tab:AberthAlgorithCompareToDurandKernerAlgorithm}
694 \bibliography{mybibfile}