1 \documentclass[review]{elsarticle}
3 \usepackage{lineno,hyperref}
4 %%\usepackage[utf8]{inputenc}
5 %%\usepackage[T1]{fontenc}
6 %%\usepackage[french]{babel}
7 \usepackage{amsmath,amsfonts,amssymb}
8 \usepackage[ruled,vlined]{algorithm2e}
9 %\usepackage[french,boxed,linesnumbered]{algorithm2e}
10 \usepackage{array,multirow,makecell}
13 \newcolumntype{R}[1]{>{\raggedleft\arraybackslash }b{#1}}
14 \newcolumntype{L}[1]{>{\raggedright\arraybackslash }b{#1}}
15 \newcolumntype{C}[1]{>{\centering\arraybackslash }b{#1}}
18 \journal{Journal of \LaTeX\ Templates}
20 %%%%%%%%%%%%%%%%%%%%%%%
21 %% Elsevier bibliography styles
22 %%%%%%%%%%%%%%%%%%%%%%%
23 %% To change the style, put a % in front of the second line of the current style and
24 %% remove the % from the second line of the style you would like to use.
25 %%%%%%%%%%%%%%%%%%%%%%%
28 %\bibliographystyle{model1-num-names}
30 %% Numbered without titles
31 %\bibliographystyle{model1a-num-names}
34 %\bibliographystyle{model2-names.bst}\biboptions{authoryear}
37 %\usepackage{numcompress}\bibliographystyle{model3-num-names}
39 %% Vancouver name/year
40 %\usepackage{numcompress}\bibliographystyle{model4-names}\biboptions{authoryear}
43 %\bibliographystyle{model5-names}\biboptions{authoryear}
46 %\usepackage{numcompress}\bibliographystyle{model6-num-names}
48 %% `Elsevier LaTeX' style
49 \bibliographystyle{elsarticle-num}
50 %%%%%%%%%%%%%%%%%%%%%%%
56 \title{Parallel polynomial root finding using GPU}
64 \author[mymainaddress]{Ghidouche Kahina\corref{mycorrespondingauthor}}
65 %%\ead[url]{kahina.ghidouche@gmail.com}
66 \cortext[mycorrespondingauthor]{Corresponding author}
67 \ead{kahina.ghidouche@gmail.com}
\author[mysecondaryaddress]{Couturier Raphael}
70 %%\cortext[mycorrespondingauthor]{Corresponding author}
71 \ead{raphael.couturier@univ-fcomte.fr}
\author[mymainaddress]{Abderrahmane Sider}
74 %%\cortext[mycorrespondingauthor]{Corresponding author}
75 \ead{ar.sider@univ-bejaia.dz}
\address[mymainaddress]{Department of Informatics, University of Bejaia, Algeria}
\address[mysecondaryaddress]{FEMTO-ST Institute, University of Franche-Comté}
In this article we present a parallel implementation
of the Aberth algorithm for the problem of finding the roots of
high degree polynomials on GPU architecture (Graphics
root finding of polynomials, high degree, iterative methods, Durand-Kerner, GPU, CUDA, CPU, parallelization
95 \section{The problem of finding roots of a polynomial}
Polynomials are algebraic structures used in mathematics to capture physical phenomena and to express the outcome in the form of a function of some unknown variable. Formally speaking, a polynomial $p(x)$ of degree \textit{n} having $n$ coefficients in the complex plane \textit{C} and zeros $\alpha_{i}, i=1,\ldots,n$, is written as
{\Large p(x)=\sum_{i=0}^{n}{a_{i}x^{i}}=a_{n}\prod_{i=1}^{n}(x-\alpha_{i}),\quad a_{0}a_{n}\neq 0}.
The root finding problem consists in finding all the $n$ values of the variable $x$ for which \textit{p(x)} vanishes. Such values are called zeros of $p$. The problem of finding a root is equivalent to that of solving a fixed-point problem. To see this, consider the fixed-point problem of finding the $n$-dimensional
108 where $g : C^{n}\longrightarrow C^{n}$. Usually, we can easily
109 rewrite this fixed-point problem as a root-finding problem by
110 setting $f(x) = x-g(x)$ and likewise we can recast the
111 root-finding problem into a fixed-point problem by setting
Often it is not possible to solve such nonlinear
root-finding problems analytically. When this occurs we turn to
118 numerical methods to approximate the solution.
119 Generally speaking, algorithms for solving problems can be divided into
120 two main groups: direct methods and iterative methods.
Direct methods exist only for $n \leq 4$: such polynomials were solved in closed form by G. Cardano
in the mid-16th century. However, N.H. Abel showed in the early 19th
century that polynomials of degree five or more cannot
be solved by direct methods. Since then, mathematicians have
focused on numerical (iterative) methods such as the famous
Newton method, Bernoulli's method of the 18th century, and Graeffe's method.
Later on, with the advent of electronic computers, other methods have
been developed such as the Jenkins-Traub method, Larkin's method,
Muller's method, and several methods for the simultaneous
approximation of all the roots, starting with the Durand-Kerner (DK)
Z_{i}=Z_{i}-\frac{P(Z_{i})}{\prod_{j\neq i}(Z_{i}-Z_{j})}
This formula was mentioned for the first time by
Weierstrass~\cite{Weierstrass03} as part of the fundamental theorem
of Algebra and was rediscovered by Ilieff~\cite{Ilie50},
Docev~\cite{Docev62}, Durand~\cite{Durand60} and
Kerner~\cite{Kerner66}. Another method discovered by
Borsch-Supan~\cite{Borch-Supan63} and also described and brought
to the following form by Ehrlich~\cite{Ehrlich67} and
Aberth~\cite{Aberth73} uses a different iteration formula given as follows:
Z_{i}=Z_{i}-\frac{1}{{\frac {P'(Z_{i})} {P(Z_{i})}}-{\sum_{j\neq i}\frac{1}{Z_{i}-Z_{j}}}}.
154 Aberth, Ehrlich and Farmer-Loizou~\cite{Loizon83} have proved that
the Ehrlich-Aberth method (EA) has a cubic order of convergence for simple roots whereas the Durand-Kerner has a quadratic order of convergence.
Iterative methods raise several problems when implemented, e.g.
specific data representations must be used to deal with very
large or very small values. Moreover, the convergence time of iterative methods
increases drastically with the degree of the polynomial. It is expected that the
parallelization of these algorithms will improve the convergence
Many authors have dealt with the parallelization of
simultaneous methods, i.e. methods that find all the zeros simultaneously.
Freeman~\cite{Freeman89} implemented and compared DK, EA and another method of the fourth order proposed
by Farmer and Loizou~\cite{Loizon83}, on an 8-processor linear
chain, for polynomials of degree up to 8. The third method often
diverges, but the first two methods have a speed-up of 5.5
(speed-up = (time on one processor)/(time on $p$ processors)). Later,
Freeman and Bane~\cite{Freemanall90} considered asynchronous
algorithms, in which each processor continues to update its
approximations even though the latest values of other $z^{(k)}_{i}$
have not been received from the other processors, in contrast with synchronous algorithms where it would wait for those values before making a new iteration.
Couturier et al.~\cite{Raphaelall01} proposed two methods of parallelization, for
a shared memory architecture and for a distributed memory one. They were able to
compute the roots of polynomials of degree 10,000 in 430 seconds with only 8
personal computers and 2 communications per iteration. Compared to the sequential implementation,
where it takes up to 3300 seconds to obtain the same results, the authors show an interesting speedup.
Very few works have been reported since then, until the advent of
the Compute Unified Device Architecture (CUDA)~\cite{CUDA10}, a
parallel computing platform and programming model invented by
NVIDIA. The computing power of GPUs (Graphics Processing Units) has exceeded
that of CPUs. However, CUDA adopts a totally new computing architecture to use the
hardware resources provided by GPUs in order to offer stronger
computing ability for massive data computing.
Ghidouche et al.~\cite{Kahinall14} proposed an implementation of the
Durand-Kerner method on GPU. Their main
result showed that a parallel CUDA implementation is 10 times as fast as
the sequential implementation on a single CPU for high degree
polynomials of degree about 48,000. To our knowledge, it was the first time such high degree polynomials were numerically solved.
In this paper, we focus on the implementation of the Aberth method for
high degree polynomials on GPU. The paper is organized as follows. First, we recall the Aberth method in Section~\ref{sec1}. Improvements of the Aberth method are proposed in Section~\ref{sec2}. Work related to the implementation of simultaneous methods using a parallel approach is presented in Section~\ref{secStateofArt}.
In Section~4 we propose a parallel implementation of the Aberth method on GPU and discuss it. Section~5 presents our experimental study and investigates the results. Finally, Section~6 concludes this paper and gives some hints for future research directions on this topic.
202 \section{The Sequential Aberth method}
A cubically convergent iteration method for finding zeros of
polynomials was proposed by O. Aberth~\cite{Aberth73}. The Aberth
method follows a purely algebraic derivation. To illustrate the
derivation, we let $w_{i}(z)$ be the product of linear factors
210 w_{i}(z)=\prod_{j=1,j \neq i}^{n} (z-x_{j})
and let the rational function $R_{i}(z)$ be the correction term of the
Weierstrass method~\cite{Weierstrass03}
217 R_{i}(z)=\frac{p(z)}{w_{i}(z)} , i=1,2,...,n.
220 Differentiating the rational function $R_{i}(z)$ and applying the
221 Newton method, we have:
\frac{R_{i}(z)}{R_{i}^{'}(z)}= \frac{p(z)}{p^{'}(z)-p(z)\frac{w_{i}^{'}(z)}{w_{i}(z)}}= \frac{p(z)}{p^{'}(z)-p(z) \sum _{j=1,j \neq i}^{n}\frac{1}{z-x_{j}}}, i=1,2,...,n
Substituting $x_{i}$ for $z$ we obtain the Aberth iteration method.

In the following we present the main stages of the Aberth method.
231 \subsection{Polynomials Initialization}
The initialization of the polynomial $p(z)$ is done by setting each of the $n$ complex coefficients $a_{i}$
236 \label{eq:SimplePolynome}
p(z)=\sum_{i=0}^{n}{a_{i}z^{n-i}},\quad a_{n} \neq 0,\ a_{0}=1,\ a_{i}\in C
241 \subsection{Vector $z^{(0)}$ Initialization}
Like for any iterative method, we need to choose $n$ initial guess points $z^{(0)}_{i},\ i = 1, \ldots, n$.
244 The initial guess is very important since the number of steps needed by the iterative method to reach
245 a given approximation strongly depends on it.
246 In~\cite{Aberth73} the Aberth iteration is started by selecting $n$
247 equi-spaced points on a circle of center 0 and radius r, where r is
248 an upper bound to the moduli of the zeros. Later, Bini et al.~\cite{Bini96}
performed this choice by selecting complex numbers along different
circles, relying on the result of~\cite{Ostrowski41}.
\sigma_{0}=\frac{u+v}{2};\quad u=\frac{\sum_{i=1}^{n}u_{i}}{n\cdot\max_{i=1}^{n}u_{i}};\quad
v=\frac{\sum_{i=0}^{n-1}v_{i}}{n\cdot\min_{i=0}^{n-1}v_{i}};\\
u_{i}=2\,|a_{i}|^{\frac{1}{i}};\quad
v_{i}=\frac{1}{2}\left|\frac{a_{n}}{a_{i}}\right|^{\frac{1}{n-i}}.
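For illustration, the $n$ starting approximations can be placed equispaced on the circle of radius $\sigma_{0}$ computed above. The following minimal sketch uses our own naming and is only an assumption about the implementation, not the authors' actual code:

\begin{verbatim}
#include <math.h>
#include <cuComplex.h>

/* Sketch (hypothetical helper): n equispaced starting points on the
   circle of center 0 and radius sigma0. */
void initRoots(cuDoubleComplex *z, int n, double sigma0)
{
    double twoPi = 2.0 * acos(-1.0);             /* 2*pi, portably */
    for (int i = 0; i < n; i++) {
        double theta = twoPi * (double)i / (double)n;
        z[i] = make_cuDoubleComplex(sigma0 * cos(theta),
                                    sigma0 * sin(theta));
    }
}
\end{verbatim}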
265 \subsection{Iterative Function $H_{i}$}
The operator used by the Aberth method corresponds to the
following equation, which enables the convergence towards the
polynomial's solutions, provided all the roots are distinct.
271 H_{i}(z)=z_{i}-\frac{1}{\frac{p^{'}(z_{i})}{p(z_{i})}-\sum_{j\neq
272 i}{\frac{1}{z_{i}-z_{j}}}}
275 \subsection{Convergence Condition}
The convergence condition determines the termination of the algorithm. It consists in stopping
the iterative function $H_{i}(z)$ when the roots are sufficiently stable. We consider that the method
converges sufficiently when:
281 \label{eq:Aberth-Conv-Cond}
[1,n];\ \left|\frac{z_{i}^{(k)}-z_{i}^{(k-1)}}{z_{i}^{(k)}}\right|<\xi
\section{Improving the Ehrlich-Aberth Method}
The Ehrlich-Aberth method implementation suffers from overflow problems. This
situation occurs, for instance, when a polynomial
having positive coefficients and a large degree is computed at a
point $\xi$ where $|\xi| > 1$, where $|x|$ stands for the modulus of a complex number $x$. Indeed, the limited number of digits in the
mantissa of floating point representations makes the computation of $p(z)$ wrong when $z$
is large. For example $(10^{50}) + 1 + (-10^{50})$ will give the wrong result
of $0$ instead of $1$. Consequently, we cannot compute the roots
for large degrees. This problem was discussed early
in~\cite{Karimall98} for the Durand-Kerner method; the authors
propose to use the logarithm and the exponential of a complex number in order to compute the power at a high exponent.
\forall(x,y)\in R^{*2};\ \ln (x+i.y)=\frac{\ln(x^{2}+y^{2})}{2}+i.\arcsin\left(\frac{y}{\sqrt{x^{2}+y^{2}}}\right)_{\left] -\pi, \pi\right[ }
\label{defexpcomplex}
\forall(x,y)\in R^{*2};\ \exp(x+i.y) & = \exp(x).\exp(i.y)\\
& =\exp(x).\cos(y)+i.\exp(x).\sin(y)
313 Using the logarithm (eq.~\ref{deflncomplex}) and the exponential (eq.~\ref{defexpcomplex}) operators, we can replace any multiplications and divisions with additions and subtractions. Consequently, computations
manipulate lower absolute values and the roots of large degree polynomials can be looked for successfully~\cite{Karimall98}.
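As a concrete illustration, the two operators can be implemented on the GPU as follows. This is a minimal sketch with our own helper names, not the authors' exact code; \texttt{atan2} is used as a numerically convenient equivalent of the $\arcsin$ form above:

\begin{verbatim}
#include <math.h>
#include <cuComplex.h>

/* Sketch: complex logarithm, as in the definition above. */
__device__ cuDoubleComplex c_log(cuDoubleComplex z)
{
    double x = cuCreal(z), y = cuCimag(z);
    /* atan2(y, x) gives the argument in ]-pi, pi], equivalent to the
       arcsin(y / sqrt(x^2 + y^2)) form for the right half-plane. */
    return make_cuDoubleComplex(0.5 * log(x * x + y * y), atan2(y, x));
}

/* Sketch: complex exponential, as in the definition above. */
__device__ cuDoubleComplex c_exp(cuDoubleComplex z)
{
    double m = exp(cuCreal(z));
    return make_cuDoubleComplex(m * cos(cuCimag(z)),
                                m * sin(cuCimag(z)));
}
\end{verbatim}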
Applying this solution to the Aberth method, we obtain the
iteration function with logarithms:
318 %%$$ \exp \bigl( \ln(p(z)_{k})-ln(\ln(p(z)_{k}^{'}))- \ln(1- \exp(\ln(p(z)_{k})-ln(\ln(p(z)_{k}^{'})+\ln\sum_{i\neq j}^{n}\frac{1}{z_{k}-z_{j}})$$
H_{i}(z)=z_{i}^{k}-\exp \left(\ln \left(
p(z_{k})\right)-\ln\left(p^{'}(z_{k})\right)- \ln
\left(1-Q(z_{k})\right)\right),
Q(z_{k})=\exp\left( \ln (p(z_{k}))-\ln(p^{'}(z_{k}))+\ln \left(
\sum_{j\neq k}^{n}\frac{1}{z_{k}-z_{j}}\right)\right).
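Continuing the helper sketch above (hypothetical code, assuming $p(z_{k})$, $p^{'}(z_{k})$ and the off-diagonal sum are already available as complex values), the logarithm-based update could read:

\begin{verbatim}
/* Sketch: logarithm-based Ehrlich-Aberth update, using the c_log and
   c_exp helpers above. p, dp and sum hold p(z), p'(z) and the
   off-diagonal sum, respectively. */
__device__ cuDoubleComplex H_log(cuDoubleComplex zi, cuDoubleComplex p,
                                 cuDoubleComplex dp, cuDoubleComplex sum)
{
    cuDoubleComplex one = make_cuDoubleComplex(1.0, 0.0);
    /* Q = exp(ln p - ln p' + ln sum) */
    cuDoubleComplex Q = c_exp(cuCadd(cuCsub(c_log(p), c_log(dp)),
                                     c_log(sum)));
    /* z_i - exp(ln p - ln p' - ln(1 - Q)) */
    return cuCsub(zi, c_exp(cuCsub(cuCsub(c_log(p), c_log(dp)),
                                   c_log(cuCsub(one, Q)))));
}
\end{verbatim}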
This solution is applied only when an overflow may occur, i.e. when the modulus of the evaluated point exceeds a given radius $R$; the choice of $R$ is discussed in Section~4.
334 \section{The implementation of simultaneous methods in a parallel computer}
335 \label{secStateofArt}
The main problem of simultaneous methods is that the
time needed for convergence increases when we increase
the degree of the polynomial. The parallelization of these
algorithms is expected to improve the convergence time.
Authors usually adopt one of the two following approaches to parallelize root
finding algorithms. The first approach aims at reducing the total number of
iterations, as done by Miranker~\cite{Mirankar68,Mirankar71}, Schedler~\cite{Schedler72} and
Winograd~\cite{Winogard72}. The second approach aims at reducing the
computation time per iteration, as reported
in~\cite{Benall68,Jana06,Janall99,Riceall06}.
348 There are many schemes for the simultaneous approximation of all roots of a given
349 polynomial. Several works on different methods and issues of root
finding have been reported in~\cite{Azad07, Gemignani07, Kalantari08, Skachek08, Zhancall08, Zhuall08}. However, the Durand-Kerner and Ehrlich-Aberth methods are the most practical choices among
them~\cite{Bini04}. These two methods have been extensively
studied for parallelization due to their intrinsic structure, i.e. the
computations involved in both methods have some inherent
parallelism that can be suitably exploited by SIMD machines.
Moreover, they have a fast rate of convergence (quadratic for the
Durand-Kerner and cubic for the Ehrlich-Aberth). Various parallel
algorithms reported for these methods can be found
in~\cite{Cosnard90,Freeman89,Freemanall90,Jana99,Janall99}.
Freeman and Bane~\cite{Freemanall90} presented two parallel
algorithms on a local memory MIMD computer with a compute-to-communication
time ratio of O(n). However, their algorithms require
each processor to communicate its current approximation to all
other processors at the end of each iteration (synchronously). Therefore they
cause a high degree of memory conflict. Recently the author
in~\cite{Mirankar71} proposed two versions of parallel algorithms
for the Durand-Kerner method and the Ehrlich-Aberth method on a model of an
Optoelectronic Transpose Interconnection System (OTIS). The
algorithms are mapped on an OTIS-2D torus using N processors. This
solution needs N processors to compute N roots, which is not
practical for solving polynomials with large degrees.
Until very recently, the literature did not mention implementations able to compute the roots of
large degree polynomials (higher than 1000) within small, or at least tractable, times. Finding polynomial roots rapidly and accurately is the main objective of our work.
With the advent of CUDA (Compute Unified Device
Architecture), finding the roots of polynomials has received new attention because of the new possibilities to solve higher degree polynomials in less time.
In~\cite{Kahinall14} we already proposed the first implementation
of a root finding method on GPUs, that of the Durand-Kerner method. The main result showed
that a parallel CUDA implementation is 10 times as fast as the
sequential implementation on a single CPU for high degree
polynomials of degree 48,000. In this paper we present a parallel implementation of the Ehrlich-Aberth method on
GPUs, the details of which are discussed in the sequel.
\section{A CUDA parallel Ehrlich-Aberth method}
385 \subsection{Background on the GPU architecture}
A GPU is viewed as an accelerator for data-parallel and
arithmetic-intensive computations. It draws its computing power
from the parallel nature of its hardware and software
architectures. A GPU is composed of hundreds of Streaming
Processors (SPs) organized in several blocks called Streaming
Multiprocessors (SMs). It also has a memory hierarchy: a
private read-write local memory per SP, fast shared memory and
read-only constant and texture caches per SM, and a read-write
global memory shared by all its SPs~\cite{NVIDIA10}.
396 On a CPU equipped with a GPU, all the data-parallel and intensive
397 functions of an application running on the CPU are off-loaded onto
the GPU in order to accelerate their computations. Each such
data-parallel function is executed on the GPU as a kernel by
thousands or even millions of parallel threads, grouped together
as a grid of thread blocks. Therefore, each SM of the GPU executes
one or more thread blocks in SIMD fashion (Single Instruction,
Multiple Data) and in turn each SP of a GPU SM runs one or more
threads within a block in SIMT fashion (Single Instruction,
Multiple Threads). Indeed, at any given clock cycle, the threads
execute the same instruction of a kernel, but each of them
operates on different data.
GPUs only work on data stored in their
global memories and the final results of their kernel executions
must be communicated to their CPUs. Hence, the data must be
transferred in and out of the GPU. However, the speed of memory
copy between the GPU and the CPU is slower than the memory
bandwidths of the GPU memories and, thus, it dramatically affects
the performance of GPU computations. Accordingly, it is necessary
to limit as much as possible data transfers between the GPU and its CPU during the
417 \subsection{Background on the CUDA Programming Model}
The CUDA programming model is similar in style to a single-program
multiple-data (SPMD) software model. The GPU is viewed as a
coprocessor that executes data-parallel kernel functions. CUDA
422 provides three key abstractions, a hierarchy of thread groups,
423 shared memories, and barrier synchronization. Threads have a three
424 level hierarchy. A grid is a set of thread blocks that execute a
425 kernel function. Each grid consists of blocks of threads. Each
426 block is composed of hundreds of threads. Threads within one block
427 can share data using shared memory and can be synchronized at a
barrier. All threads within a block are executed concurrently on a
multithreaded architecture. The programmer specifies the number of
threads per block and the number of blocks per grid. A thread in
the CUDA programming language is much lighter-weight than a thread
in traditional operating systems. A thread in CUDA typically
processes one data element at a time. The CUDA programming model
434 has two shared read-write memory spaces, the shared memory space
435 and the global memory space. The shared memory is local to a block
436 and the global memory space is accessible by all blocks. CUDA also
437 provides two read-only memory spaces, the constant space and the
438 texture space, which reside in external DRAM, and are accessed via
\subsection{The implementation of the Aberth method on GPU}
442 %%\subsection{A CUDA implementation of the Aberth's method }
443 %%\subsection{A GPU implementation of the Aberth's method }
447 \subsubsection{A sequential Aberth algorithm}
The main steps of the Aberth method are shown in Algorithm~\ref{alg1-seq}:
453 \caption{A sequential algorithm to find roots with the Aberth method}
\KwIn{$Z^{0}$ (initial vector of roots), $\varepsilon$ (error
tolerance threshold), $P$ (polynomial to solve)}

\KwOut{$Z$ (solution vector of roots)}
462 Initialization of the coefficients of the polynomial to solve\;
463 Initialization of the solution vector $Z^{0}$\;
\While {$\Delta z_{max} > \epsilon$}{
466 Let $\Delta z_{max}=0$\;
\For{$j \gets 0$ \KwTo $n-1$}{
468 $ZPrec\left[j\right]=Z\left[j\right]$\;
469 $Z\left[j\right]=H\left(j,Z\right)$\;
472 \For{$i \gets 0 $ \KwTo $n-1$}{
$c=\frac{\left|Z\left[i\right]-ZPrec\left[i\right]\right|}{\left|Z\left[i\right]\right|}$\;
\If{$c > \Delta z_{max}$ }{
475 $\Delta z_{max}$=c\;}
In this sequential algorithm, one CPU thread executes all the steps. Let us look at the $3^{rd}$ step, i.e. the execution of the iterative function; two sub-steps are needed. The first sub-step \textit{saves} the solution vector of the previous iteration, the second sub-step \textit{updates} or computes the new values of the vector of roots.
There exist two ways to execute the iterative function, which we call the Jacobi one and the Gauss-Seidel one. With the Jacobi iteration, at iteration $k+1$ we need all the previous values $z^{(k)}_{i}$ to compute the new values $z^{(k+1)}_{i}$, that is:
H(i,z^{k+1})=z^{(k)}_{i}-\frac{p(z^{(k)}_{i})}{p'(z^{(k)}_{i})-p(z^{(k)}_{i})\sum^{n}_{j=1,j\neq i}\frac{1}{z^{(k)}_{i}-z^{(k)}_{j}}},\quad i=1,\ldots,n.
With the Gauss-Seidel iteration, we have:
490 \label{eq:Aberth-H-GS}
H(i,z^{k+1})=z^{(k)}_{i}-\frac{p(z^{(k)}_{i})}{p'(z^{(k)}_{i})-p(z^{(k)}_{i})\left(\sum^{i-1}_{j=1}\frac{1}{z^{(k)}_{i}-z^{(k+1)}_{j}}+\sum^{n}_{j=i+1}\frac{1}{z^{(k)}_{i}-z^{(k)}_{j}}\right)},\quad i=1,\ldots,n.
Using Equation~\ref{eq:Aberth-H-GS} for the update sub-step of $H(i,z^{k+1})$, we expect the Gauss-Seidel iteration to converge more quickly because, just as its ancestor (for solving linear systems of equations), it uses the most freshly computed roots $z^{k+1}_{j}$.
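The difference between the two orders can be sketched in host code as follows. Here \texttt{H} is a hypothetical helper (our naming) implementing the correction above; its body would be analogous to the GPU kernel sketched in the next subsection:

\begin{verbatim}
#include <cuComplex.h>

/* Hypothetical helper implementing the iterative correction H(i, z)
   from the current approximations z and the coefficients a. */
cuDoubleComplex H(int i, int n, const cuDoubleComplex *z,
                  const cuDoubleComplex *a);

/* Sketch: one sweep over the roots, contrasting the two orders. */
void sweep(cuDoubleComplex *z, cuDoubleComplex *zPrec,
           const cuDoubleComplex *a, int n)
{
    for (int i = 0; i < n; i++)
        zPrec[i] = z[i];          /* save sub-step: keep z^{(k)} */
    for (int i = 0; i < n; i++)
        z[i] = H(i, n, z, a);     /* Gauss-Seidel: z[j], j < i, is fresh */
        /* Jacobi variant: z[i] = H(i, n, zPrec, a); reads only z^{(k)} */
}
\end{verbatim}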
The $4^{th}$ step of the algorithm checks the convergence condition using Equation~\ref{eq:Aberth-Conv-Cond}.
Both steps 3 and 4 use one thread to compute all the $n$ roots on the CPU, which is very harmful for performance in the case of large degree polynomials.
499 \paragraph{The execution time}
Let $T_{i}(n)$ be the time needed to compute one new root value at step 3; $T_{i}$ depends on the polynomial degree $n$. When $n$ increases, $T_{i}(n)$ increases too. We need $n\cdot T_{i}(n)$ to compute all the new values in one iteration at step 3.

Let $T_{j}$ be the time needed to check the convergence of one root value at step 4, so we need $n\cdot T_{j}$ to compute the global convergence condition in each iteration at step 4.
504 Thus, the execution time for both steps 3 and 4 is:
506 T_{iter}=n(T_{i}(n)+T_{j})+O(n).
508 Let $K$ be the number of iterations necessary to compute all the roots, so the total execution time $T$ can be given as:
T=\left[n\left(T_{i}(n)+T_{j}\right)+O(n)\right]\cdot K
The execution time increases with the polynomial degree, which justifies parallelizing these steps in order to reduce the global execution time. In the following, we explain how we parallelized these steps on a GPU architecture using the CUDA platform.
516 \subsubsection{A Parallel implementation with CUDA }
On the CPU, both steps 3 and 4 contain a \verb=for= loop and a single thread executes all the instructions in the loop $n$ times. In this subsection, we explain how the GPU architecture can compute this loop and reduce the execution time.
On the GPU, the scheduler assigns the execution of this loop to a group of threads organized as a grid of blocks, each block containing a number of threads. All threads within a block are executed concurrently in parallel. The instructions run on the GPU are grouped in a special function called a kernel. It is up to the programmer to describe the execution context, that is the size of the grid, the number of blocks and the number of threads per block, upon the call of a given kernel, according to a special syntax defined by CUDA. A possible sizing is sketched below.
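For instance, a common way to choose the execution context for a loop of $n$ iterations is the following fragment. The block size of 256 and the names \texttt{kernel\_update}, \texttt{d\_zNew}, \texttt{d\_z} and \texttt{d\_a} are assumptions for illustration, not necessarily the configuration used in our experiments:

\begin{verbatim}
/* Sketch: one thread per root; 256 threads per block is an assumed,
   typical choice. Device pointers are assumed already allocated. */
int threadsPerBlock = 256;
int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;
kernel_update<<<blocksPerGrid, threadsPerBlock>>>(d_zNew, d_z, d_a, n);
\end{verbatim}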
Let $N$ be the number of threads executed in parallel; Equation~\ref{eq:T-global} then becomes:
T=\left[\frac{n}{N}\left(T_{i}(n)+T_{j}\right)+O(n)\right]\cdot K.
In theory, the total execution time $T$ on the GPU is sped up $N$ times compared to $T$ on the CPU. We will see to what extent this holds in the experimental study hereafter.
In CUDA programming, all the instructions of the \verb=for= loop are executed by the GPU as a kernel. A kernel is a function written in CUDA and defined by the \verb=__global__= qualifier added before a usual ``C'' function, which instructs the compiler to generate appropriate code to pass it to the CUDA runtime in order to be executed on the GPU.
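As an illustration, a minimal sketch of such an update kernel (hypothetical code with our own naming, assuming the coefficients $a_{0},\ldots,a_{n}$ reside in the device array \texttt{a}) could be:

\begin{verbatim}
#include <cuComplex.h>

/* Sketch: each thread updates one root with the Ehrlich-Aberth
   correction, reading only the previous iterate z (Jacobi style). */
__global__ void kernel_update(cuDoubleComplex *zNew,
                              const cuDoubleComplex *z,
                              const cuDoubleComplex *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    cuDoubleComplex zi = z[i];
    /* Horner evaluation of p(zi) and p'(zi) from a[0..n]. */
    cuDoubleComplex p = a[n], dp = make_cuDoubleComplex(0.0, 0.0);
    for (int k = n - 1; k >= 0; k--) {
        dp = cuCadd(cuCmul(dp, zi), p);
        p  = cuCadd(cuCmul(p, zi), a[k]);
    }
    /* Off-diagonal sum over the other current roots. */
    cuDoubleComplex one = make_cuDoubleComplex(1.0, 0.0);
    cuDoubleComplex s = make_cuDoubleComplex(0.0, 0.0);
    for (int j = 0; j < n; j++)
        if (j != i) s = cuCadd(s, cuCdiv(one, cuCsub(zi, z[j])));
    /* zNew_i = zi - 1 / (p'(zi)/p(zi) - s). */
    zNew[i] = cuCsub(zi, cuCdiv(one, cuCsub(cuCdiv(dp, p), s)));
}
\end{verbatim}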
Algorithm~\ref{alg2-cuda} shows a sketch of the Aberth algorithm using CUDA.
536 \caption{CUDA Algorithm to find roots with the Aberth method}
\KwIn{$Z^{0}$ (initial vector of roots), $\varepsilon$ (error
tolerance threshold), $P$ (polynomial to solve)}

\KwOut{$Z$ (solution vector of roots)}
Initialization of the coefficients of the polynomial to solve\;
546 Initialization of the solution vector $Z^{0}$\;
547 Allocate and copy initial data to the GPU global memory\;
\While {$\Delta z_{max} > \epsilon$}{
550 Let $\Delta z_{max}=0$\;
551 $ kernel\_save(d\_z^{k-1})$\;
552 $ kernel\_update(d\_z^{k})$\;
$kernel\_testConverge(\Delta z_{max},d\_z^{k},d\_z^{k-1})$\;
After the initialization step, all data of the root finding problem to be solved must be copied from the CPU memory to the GPU global memory, because the GPU only accesses data already present in its memory. Next, all the data-parallel arithmetic operations inside the main loop \verb=(do ... while(...))= are executed as kernels by the GPU. The first kernel, named \textit{save} in line 6 of Algorithm~\ref{alg2-cuda}, consists in saving the vector of the polynomial roots found at the previous time-step in GPU memory, in order to check the convergence of the roots after each iteration (line 8, Algorithm~\ref{alg2-cuda}).
The second kernel executes the iterative function $H$ and updates $z^{k}$, according to Algorithm~\ref{alg3-update}. We notice that the update kernel is called in two forms, depending on the value of \emph{R} which determines the radius beyond which we apply the logarithm-based computation of the power of a complex.
565 \caption{A global Algorithm for the iterative function}
\eIf{$(\left|Z^{(k)}\right| \leq R)$}{
568 $kernel\_update(d\_z^{k})$\;}
570 $kernel\_update\_Log(d\_z^{k})$\;
The first form executes formula~\ref{eq:SimplePolynome} if the modulus of the current complex is less than or equal to a certain value called the radius, i.e. $|z^{k}_{i}| \leq R$; otherwise the kernel executes the logarithm-based formulas (Eq.~\ref{deflncomplex}, Eq.~\ref{defexpcomplex}). The radius $R$ is evaluated as:

$$R = \exp( \log(DBL\_MAX) / (2 \cdot n) )$$ where $DBL\_MAX$ stands for the maximum representable double value.
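In code, this threshold can be computed once on the host, as in the following minimal sketch:

\begin{verbatim}
#include <float.h>   /* DBL_MAX */
#include <math.h>

/* Sketch: radius beyond which the log/exp evaluation is applied. */
double switchingRadius(int n)
{
    return exp(log(DBL_MAX) / (2.0 * n));
}
\end{verbatim}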
The last kernel verifies the convergence of the roots after each update of $Z^{(k)}$, according to formula~\ref{eq:Aberth-Conv-Cond}. We used the functions of the CUBLAS library (CUDA Basic Linear Algebra Subroutines) to implement this kernel.
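A possible workflow for this test is the following sketch, under our assumptions and not the exact code: a small kernel first fills the device array \texttt{d\_delta} with the relative changes $|z_{i}^{(k)}-z_{i}^{(k-1)}|/|z_{i}^{(k)}|$, then CUBLAS locates the largest entry:

\begin{verbatim}
#include <cublas_v2.h>
#include <cuda_runtime.h>

/* Sketch: locate the largest relative change with CUBLAS and compare
   it to the tolerance eps on the host. */
int allRootsConverged(cublasHandle_t handle, const double *d_delta,
                      int n, double eps)
{
    int imax = 0;        /* 1-based index returned by cublasIdamax */
    cublasIdamax(handle, n, d_delta, 1, &imax);
    double deltaMax = 0.0;
    cudaMemcpy(&deltaMax, d_delta + imax - 1, sizeof(double),
               cudaMemcpyDeviceToHost);
    return deltaMax < eps;
}
\end{verbatim}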
The kernels terminate their computations when all the roots have converged. Finally, the solution of the root finding problem is copied back from the GPU global memory to the CPU memory. We use the communication functions of CUDA for the memory allocation in the GPU (\verb=cudaMalloc()=) and for data transfers from the CPU memory to the GPU memory (\verb=cudaMemcpyHostToDevice=)
or from GPU memory to CPU memory (\verb=cudaMemcpyDeviceToHost=).
583 \section{Experimental study}
\subsection{Definition of the polynomials used}
586 We use two forms of polynomials:
\paragraph{Sparse polynomial}:
in this form, the roots are distributed on 2 distinct circles:
\forall \alpha_{1},\alpha_{2} \in C,\ \forall n_{1},n_{2} \in N^{*};\ P(z)= (z^{n_{1}}-\alpha_{1})(z^{n_{2}}-\alpha_{2})
This form makes it possible to associate roots having two
different moduli and thus to work on a polynomial made
of four non-zero terms.
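For illustration, the coefficients of this sparse form can be built with a hypothetical helper such as the one below; expanding the product gives $z^{n_{1}+n_{2}}-\alpha_{2}z^{n_{1}}-\alpha_{1}z^{n_{2}}+\alpha_{1}\alpha_{2}$:

\begin{verbatim}
#include <string.h>
#include <cuComplex.h>

/* Sketch: coefficients a[0..n1+n2] of (z^n1 - alpha1)(z^n2 - alpha2). */
void buildSparse(cuDoubleComplex *a, int n1, int n2,
                 cuDoubleComplex alpha1, cuDoubleComplex alpha2)
{
    memset(a, 0, (size_t)(n1 + n2 + 1) * sizeof(cuDoubleComplex));
    a[n1 + n2] = make_cuDoubleComplex(1.0, 0.0);  /* z^{n1+n2}        */
    a[n1] = cuCsub(a[n1], alpha2);                /* - alpha2 z^{n1}  */
    a[n2] = cuCsub(a[n2], alpha1);                /* - alpha1 z^{n2}  */
    a[0]  = cuCmul(alpha1, alpha2);               /* + alpha1 alpha2  */
}
\end{verbatim}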
597 \paragraph{Full polynomial}:
the second form, used to obtain a full polynomial, is:
600 %%\forall \alpha_{i} \in C,\forall n_{i}\in N^{*}; P(z)= \sum^{n}_{i=1}(z^{n^{i}}.a_{i})
{\Large \forall a_{i} \in C,\ i\in N;\ p(x)=\sum^{n-1}_{i=0} a_{i}.x^{i}}
With this form, we can have up to $n$ non-zero terms.
\subsection{The study conditions}
In order to have representative average values, for each
point of our curves we measured the root finding of 10
different polynomials.

Our experimental results concern two parameters:
the polynomial degree and the execution time needed by our program
to converge to the solution. The polynomial degree allows us
to validate that our algorithm is efficient with high degree
polynomials. The execution time remains the
key element which justifies our parallelization work.
For our tests we used a CPU Intel(R) Xeon(R)
E5620@2.40GHz and a GPU K40 (with 6 GB of RAM).
623 \subsection{Comparative study}
We first carried out the convergence study of the Aberth algorithm with various polynomial sizes; second, we evaluated the influence of the number of threads per block on the execution time.
626 \subsubsection{Aberth algorithm on CPU and GPU}
630 % \begin{tabular} {|R{2cm}|L{2.5cm}|L{2.5cm}|L{1.5cm}|L{1.5cm}|}
631 % \hline Polynomial's degrees & $T_{exe}$ on CPU & $T_{exe}$ on GPU & CPU iteration & GPU iteration\\
632 % \hline 5000 & 1.90 & 0.40 & 18 & 17\\
633 % \hline 10000 & 172.723 & 0.59 & 21 & 24\\
634 % \hline 20000 & 172.723 & 1.52 & 21 & 25\\
635 % \hline 30000 & 172.723 & 2.77 & 21 & 33\\
636 % \hline 50000 & 172.723 & 3.92 & 21 & 18\\
637 % \hline 500000 & $>$1h & 497.109 & & 24\\
638 % \hline 1000000 & $>$1h & 1,524.51& & 24\\
641 % \caption{the convergence of Aberth algorithm}
642 % \label{tab:theConvergenceOfAberthAlgorithm}
647 \includegraphics[width=0.8\textwidth]{figures/Compar_EA_algorithm_CPU_GPU}
648 \caption{Aberth algorithm on CPU and GPU}
\subsubsection{The impact of the number of threads on the convergence of the Aberth algorithm}
657 % \begin{tabular} {|R{2.5cm}|L{2.5cm}|L{2.5cm}|}
658 % \hline Thread's numbers & Execution time &Number of iteration\\
659 % \hline 1024 & 523 & 27\\
660 % \hline 512 & 449.426 & 24\\
661 % \hline 256 & 440.805 & 24\\
662 % \hline 128 & 456.175 & 22\\
663 % \hline 64 & 472.862 & 23\\
664 % \hline 32 & 830.152 & 24\\
665 % \hline 8 & 2632.78 & 23 \\
668 % \caption{The impact of the thread's number into the convergence of Aberth algorithm}
669 % \label{tab:Theimpactofthethread'snumberintotheconvergenceofAberthalgorithm}
676 \includegraphics[width=0.8\textwidth]{figures/influence_nb_threads}
677 \caption{Influence of the number of threads on the execution times of different polynomials (sparse and full)}
\subsubsection{A comparative study between the Aberth and Durand-Kerner algorithms}
686 \begin{tabular} {|R{2cm}|L{2.5cm}|L{2.5cm}|L{1.5cm}|L{1.5cm}|}
\hline Polynomial degree & Aberth $T_{exe}$ (s) & D-Kerner $T_{exe}$ (s) & Aberth iterations & D-Kerner iterations\\
688 \hline 5000 & 0.40 & 3.42 & 17 & 138 \\
689 \hline 50000 & 3.92 & 385.266 & 17 & 823\\
690 \hline 500000 & 497.109 & 4677.36 & 24 & 214\\
\caption{The Aberth algorithm compared to the Durand-Kerner algorithm}
694 \label{tab:AberthAlgorithCompareToDurandKernerAlgorithm}
698 \bibliography{mybibfile}