--- /dev/null
+\documentclass[11pt,a4paper]{article}\r
+\usepackage[utf8]{inputenc}
+\usepackage{amsmath}\r
+\usepackage{amsfonts}\r
+\usepackage{amssymb}\r
+\author{ghidouche}\r
+\title{Paper1\_kahina}
+\begin{document}\r
+\section{Root finding problem}\r
+We consider a polynomial of degree $n$ with coefficients in the
+complex plane $\mathbb{C}$ and zeros $\alpha_{i}$, $i=1,\dots,n$:
+\begin{center}\r
+ {\Large$p(x)=\sum_{i=0}^{n}a_{i}x^{i}=a_{n}\prod_{i=1}^{n}(x-\alpha_{i}),
+\qquad a_{0}a_{n}\neq0.$}
+\end{center}\r
+\r
+ The root-finding problem consists in finding
+all $n$ roots of $p(x)$. The problem of finding a root is
+equivalent to the problem of finding a fixed point. To see this,
+consider the fixed-point problem of finding the $n$-dimensional
+vector $x$ such that
+\begin{center}\r
+$x=g(x),$
+\end{center}\r
+where $g : \mathbb{C}^{n}\longrightarrow \mathbb{C}^{n}$. Note that we can easily
+rewrite this fixed-point problem as a root-finding problem by
+setting $f(x) = x-g(x)$ and, likewise, we can recast the
+root-finding problem into a fixed-point problem by setting
+\begin{center}\r
+$g(x)= x-f(x).$
+\end{center}\r
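+
+As a purely illustrative aside, the following small Python sketch checks
+this equivalence numerically on an example of our own choosing,
+$f(x)=x^{2}-2$: a root of $f$ is a fixed point of $g(x)=x-f(x)$.
+\begin{verbatim}
+# Minimal sketch: recasting root finding as a fixed-point problem.
+# Example chosen for illustration: f(x) = x^2 - 2, with root sqrt(2).
+def f(x):
+    return x**2 - 2.0
+
+def g(x):
+    # Fixed-point form: f(x) = 0  <=>  x = g(x) = x - f(x)
+    return x - f(x)
+
+root = 2.0 ** 0.5
+print(f(root))         # ~0.0: root of f
+print(g(root) - root)  # ~0.0: the same point is a fixed point of g
+\end{verbatim}
+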
+Often it will not be possible to solve such nonlinear
+root-finding problems analytically. When this occurs we turn to
+numerical methods to approximate the solution. Generally speaking,\r
+algorithms for solving problems numerically can be divided into\r
+two main groups: direct methods and iterative methods.\r
+\\\r
+ Direct methods exist only for $n \leqslant 4$; these cases were solved in
+closed form by G. Cardano in the mid-16th century. However, N.H. Abel
+showed in the early 19th century that polynomials of degree five or
+higher cannot be solved by direct methods. Since then, researchers have
+concentrated on numerical (iterative) methods such as the famous
+Newton's method, Bernoulli's method of the 18th century, and Graeffe's
+method. With the advent of electronic computers, different methods have
+been developed, such as the Jenkins-Traub method, Larkin's method,
+Muller's method, and several methods for the simultaneous
+approximation of all the roots, starting with the Durand-Kerner
+method:
+\begin{center}\r
+\r
+$ z_{i}^{(k+1)}=z_{i}^{(k)}-\frac{p(z_{i}^{(k)})}{\prod_{j\neq i}\left(z_{i}^{(k)}-z_{j}^{(k)}\right)},
+\qquad i=1,\dots,n. $
+\end{center}\r
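+
+For illustration only (this is not the GPU implementation discussed
+later; the starting points and the stopping test below are arbitrary
+choices of ours), a compact Python/NumPy sketch of the Durand-Kerner
+iteration could look as follows.
+\begin{verbatim}
+import numpy as np
+
+def durand_kerner(coeffs, max_iter=200, tol=1e-12):
+    # coeffs: complex coefficients of p, highest degree first.
+    n = len(coeffs) - 1
+    # Common heuristic start: distinct points in the complex plane.
+    z = np.array([(0.4 + 0.9j) ** k for k in range(1, n + 1)])
+    for _ in range(max_iter):
+        p = np.polyval(coeffs, z)
+        # Denominator: prod over j != i of (z_i - z_j), for every i.
+        diff = z[:, None] - z[None, :]
+        np.fill_diagonal(diff, 1.0)   # neutral element for the product
+        delta = p / np.prod(diff, axis=1)
+        z = z - delta
+        if np.max(np.abs(delta)) < tol:
+            break
+    return z
+
+# Example: the three cube roots of unity, p(x) = x^3 - 1.
+print(durand_kerner([1, 0, 0, -1]))
+\end{verbatim}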
+\r
+This formula for the Durand-Kerner method was first mentioned by
+Weierstrass [12] as part of his proof of the fundamental theorem of
+algebra, and was rediscovered by Ilieff [2], Docev [3], Durand [4] and
+Kerner [5]. Another method, discovered by B\"{o}rsch-Supan [6], was later
+described and brought into the following form by Ehrlich [7] and Aberth [8]:
+\begin{center}\r
+\r
+$ z_{i}^{(k+1)}=z_{i}^{(k)}-\frac{1}{\frac{p'(z_{i}^{(k)})}{p(z_{i}^{(k)})}
+-\sum_{j\neq i}\frac{1}{z_{i}^{(k)}-z_{j}^{(k)}}},\qquad i=1,\dots,n. $
+\end{center}\r
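+
+Again purely as an illustrative sketch of ours (with the same arbitrary
+starting points and stopping test as above, not the implementation
+studied in this paper), the Ehrlich-Aberth update can be written in
+Python/NumPy as follows.
+\begin{verbatim}
+import numpy as np
+
+def ehrlich_aberth(coeffs, max_iter=200, tol=1e-12):
+    # coeffs: complex coefficients of p, highest degree first.
+    n = len(coeffs) - 1
+    dcoeffs = np.polyder(coeffs)          # coefficients of p'
+    z = np.array([(0.4 + 0.9j) ** k for k in range(1, n + 1)])
+    for _ in range(max_iter):
+        newton = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
+        diff = z[:, None] - z[None, :]
+        np.fill_diagonal(diff, 1.0)
+        recip = 1.0 / diff
+        np.fill_diagonal(recip, 0.0)      # drop the j == i term
+        s = recip.sum(axis=1)             # sum over j != i of 1/(z_i - z_j)
+        delta = newton / (1.0 - newton * s)
+        z = z - delta
+        if np.max(np.abs(delta)) < tol:
+            break
+    return z
+
+# Example: the three cube roots of unity, p(x) = x^3 - 1.
+print(ehrlich_aberth([1, 0, 0, -1]))
+\end{verbatim}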
+\r
+Aberth, Ehrlich and Farmer-Loizou [10] have proved that the
+Ehrlich-Aberth method has cubic order of convergence for simple roots.
+\r
+\r
+Iterative methods raise several problems when implemented, e.g.
+numbers of specific sizes must be used to cope with numerical
+difficulties. Moreover, the convergence time of iterative methods
+increases drastically with the degree of the polynomial. The
+parallelization of these algorithms can improve the convergence
+time.
+\r
+Many authors have treated the problem of parallelizing
+simultaneous methods. Freeman [13] tested the Durand-Kerner method, the
+Ehrlich-Aberth method and another fourth-order method proposed by Farmer
+and Loizou [10] on an 8-processor linear chain, for polynomials of
+degree up to 8. The third method often diverges, but the first two
+methods achieve a speed-up of 5.5 (speed-up = (time on one processor)/(time
+on p processors)). Later, Freeman and Brankin [14] considered
+asynchronous algorithms, in which each processor continues to
+update its approximations even though the latest values of the other
+$z_{i}^{(k)}$ have not yet been received from the other processors, in
+contrast with the synchronous version, where it would wait. In [15],
+two methods of parallelization are proposed, for shared-memory and for
+distributed-memory architectures; they make it possible to compute the
+roots of a polynomial of degree 10,000 in 430 s with only 8 PCs and 2
+communications per iteration, whereas the sequential version takes
+3,300 s to obtain the same results.
+\r
+After that, few works addressed this problem until the emergence of
+the Compute Unified Device Architecture (CUDA) [19], a parallel
+computing platform and programming model invented by NVIDIA. The
+computing power of the GPU has exceeded that of the CPU, and it would
+be a waste of resources to use the GPU only as a graphics card. CUDA
+adopts a new computing architecture that exploits the hardware
+resources provided by the GPU in order to offer more computing
+power for massive data computations.
+\r
+\r
+Indeed, [16] proposed an implementation of the Durand-Kerner method
+on a GPU (Graphics Processing Unit). The main result shows that the
+parallel implementation is 10 times as fast as the sequential
+implementation on a single CPU for high-degree polynomials, that is,
+degrees greater than about 48,000.
+\paragraph{}\r
+The main part of our work is to implement the Aberth method on the GPU
+and to compare it with the Durand-Kerner
+implementation. \dots To be continued \dots
+\r
+\r
+\r
+\bibliographystyle{alpha}\r
+\begin{thebibliography}{19}
+\r
+\bibitem [1] {1} Aberth, O. (1973), Iteration methods for finding
+all zeros of a polynomial simultaneously, Math. Comput. 27 (122),
+339--344.
+\r
+\bibitem [2] {2} Ilieff, L. (1948-50), On the approximations of Newton, Annual\r
+Sofia Univ. 46, 167-171.\r
+\r
+\bibitem [3] {3} Docev, K. (1962), An alternative method of Newton for\r
+simultaneous calculation of all the roots of a given algebraic\r
+equation, Phys. Math. J., Bulg. Acad. Sci. 5, 136-139.\r
+\r
+\bibitem [4]{4} Durand, E. (1960), Solutions num\'{e}riques des \'{e}quations
+alg\'{e}briques, Tome 1: \'{E}quations du type F(x)=0, Racines d'un
+polyn\^{o}me. Masson, Paris.
+\r
+\bibitem [8] {8} Aberth, O. (1973), Iteration methods for finding all zeros of
+a polynomial simultaneously, Math. Comp. 27, 339--344.
+\r
+\bibitem [5] {5} Kerner, I.O. (1966), Ein Gesamtschrittverfahren zur
+Berechnung der Nullstellen von Polynomen, Numer. Math. 8, 290-294.\r
+\r
+\bibitem [6]{6} B\"{o}rsch-Supan, W. (1963), A posteriori error bounds for the zeros of
+polynomials, Numer. Math. 5, 380-398.\r
+\r
+\bibitem [7] {7} Ehrlich, L. W. (1967), A modified Newton method for\r
+polynomials, Comm. Ass. Comput. Mach. 10, 107-108.\r
+\r
+\r
+\r
+\bibitem [10] {10} Loizou, G. (1983), Higher-order iteration functions for
+simultaneously approximating polynomial zeros, Intern. J. Computer\r
+Math. 14, 45-58.\r
+\r
+\bibitem [11]{11} Durand, E. (1960), Solutions num\'{e}riques des \'{e}quations
+alg\'{e}briques, Tome 1: \'{E}quations du type F(X) = 0; Racines d'un polyn\^{o}me,
+Masson, Paris.
+\r
+\bibitem [12] {12} Weierstrass, K. (1903), Neuer Beweis des Satzes, dass
+jede ganze rationale Function einer Ver\"{a}nderlichen dargestellt
+werden kann als ein Product aus linearen Functionen derselben
+Ver\"{a}nderlichen, Ges. Werke 3, 251--269.
+\bibitem [13] {13} Freeman, T. L. (1989), Calculating polynomial zeros on a\r
+local memory parallel computer, Parallel Computing 12, 351-358.\r
+\r
+\bibitem [14] {14} Freeman, T. L., Brankin, R. K. (1990), Asynchronous\r
+polynomial zero-finding algorithms, Parallel Computing 17,\r
+673-681.\r
+\r
+\bibitem [15] {15} Raphaël, C., François, S. (2001), Extraction de racines dans des
+polynômes creux de degré élevé. RSRCP (Réseaux et Systèmes\r
+Répartis, Calculateurs Parallèles), Numéro thématique :\r
+Algorithmes itératifs parallèles et distribués, 13(1):67--81.\r
+\r
+\bibitem [16]{16} Kahina, G., Raphaël, C., Abderrahmane, S., A
+parallel implementation of the Durand-Kerner algorithm for
+polynomial root-finding on GPU. In INDS 2014, Int. Conf. on
+Advanced Networking, Distributed Systems and Applications, Bejaia,
+Algeria, pages 53--57, June 2014. IEEE.
+\r
+\end{thebibliography}\r
+\end{document}\r