The main problem with the simultaneous methods is that the time needed
for convergence increases with the degree of the polynomial. Many
authors have treated the problem of implementing simultaneous methods
in parallel. Freeman~\cite{Freeman89} implemented and compared the
Durand-Kerner method, the Ehrlich-Aberth method and a fourth-order
method proposed by Farmer and Loizou~\cite{Loizou83} on an 8-processor
linear chain, for polynomials of degree up to 8. The method of Farmer
and Loizou~\cite{Loizou83} often diverges, but the first two methods
(Durand-Kerner and Ehrlich-Aberth) achieve a speed-up of 5.5. Later,
Freeman and Bane~\cite{Freemanall90} considered asynchronous
algorithms, in which each processor continues to update its
approximations even though the latest values of the other
approximations $z^{k}_{i}$ have not yet been received from the other
processors, in contrast with synchronous algorithms, where each
processor would wait for those values before starting a new
iteration. Couturier et al.~\cite{Raphaelall01} proposed two
parallelizations: one for shared memory architectures with OpenMP and
one for distributed memory architectures with MPI. They were able to
compute the roots of sparse polynomials of degree 10,000 in 116
seconds with OpenMP and in 135 seconds with MPI, the MPI version using
only 8 personal computers and 2 communications per iteration. The
authors showed an interesting speedup compared to the sequential
implementation, which takes up to 3,300 seconds to obtain the same
results.

Very little work was done thereafter until the advent of the Compute
Unified Device Architecture (CUDA)~\cite{CUDA15}, a parallel computing
platform and programming model invented by NVIDIA. The computing power
of GPUs (Graphics Processing Units) has exceeded that of traditional
CPUs. However, CUDA adopts a totally new computing architecture to
exploit the hardware resources provided by the GPU and to offer higher
computing power for massive data computations. Ghidouche et
al.~\cite{Kahinall14} proposed an implementation of the Durand-Kerner
method on a single GPU. Their main results showed that a parallel CUDA
implementation is about 10 times faster than the sequential
implementation on a single CPU for sparse polynomials of degree
48,000.

In this paper we propose the parallelization of the Ehrlich-Aberth
method using two parallel programming paradigms, OpenMP and MPI, on
CUDA multi-GPU platforms. We focus on the Ehrlich-Aberth method
because it exhibits very good convergence and is well suited to
implementation on parallel computers. Our CUDA/MPI and CUDA/OpenMP
codes are the first implementations of the Ehrlich-Aberth method on
multiple GPUs for finding the roots of polynomials. Our major
contributions include:
\begin{itemize}
\item An improvement of the Ehrlich-Aberth method, based on an
  exponential-logarithm formulation, in order to be able to solve
  sparse and full polynomials of degree up to 1,000,000 (the
  underlying idea is sketched right after this list).
\item A parallel implementation of the Ehrlich-Aberth method on a
  single GPU with CUDA.
\item A parallel implementation of the Ehrlich-Aberth algorithm on a
  multi-GPU platform with shared memory, using the OpenMP API. It is
  based on threads created from the same system process, such that
  each thread is attached to one GPU. In this case the communications
  between GPUs are done by the OpenMP threads through the shared
  memory (see the OpenMP sketch after this list).
\item A parallel implementation of the Ehrlich-Aberth algorithm on a
  multi-GPU platform with distributed memory, using the MPI API, such
  that each GPU is attached to and managed by an MPI process. The GPUs
  exchange their data by message-passing communications (see the MPI
  sketch after this list). This latter approach is more often used on
  clusters to solve very complex problems that are too large for
  traditional supercomputers, which are very expensive to build and
  run.
\end{itemize}
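To make the exponential-logarithm idea concrete, the following minimal
C sketch shows one standard way to realize it: $\log p(z)$ is
evaluated with Horner's rule and periodic rescaling, so that $p(z)$
itself is never formed, and the ratio $p(z_i)/p'(z_i)$ appearing in
the Ehrlich-Aberth correction can be computed as
$\exp\big(\log p(z_i)-\log p'(z_i)\big)$ even for degrees around
$10^{6}$. The helper \texttt{log\_poly} and its rescaling threshold
are illustrative and do not reproduce the exact formulation of our
implementation.

\begin{verbatim}
#include <complex.h>
#include <math.h>

/* Illustrative helper, not the exact code of our implementation:
   evaluate log(p(z)) for p(z) = a[n]*z^n + ... + a[0] with Horner's
   rule, rescaling the accumulator before it can overflow, so that
   log p(z) is obtained without ever forming p(z) itself. */
static double complex log_poly(const double complex *a, int n,
                               double complex z)
{
    double complex acc = a[n];
    double log_scale = 0.0;  /* log of the factor taken out of acc */
    for (int i = n - 1; i >= 0; i--) {
        /* invariant: current partial value = acc * exp(log_scale);
           once log_scale is large, exp(-log_scale) underflows to 0
           and the coefficient is (correctly) negligible */
        acc = acc * z + a[i] * exp(-log_scale);
        double m = cabs(acc);
        if (m > 1e150) {     /* rescale well before overflow */
            acc /= m;
            log_scale += log(m);
        }
    }
    return clog(acc) + log_scale;  /* log|p(z)| + i*arg p(z) */
}
\end{verbatim}

With this helper, the ratio $p(z)/p'(z)$ is obtained as
\texttt{cexp(log\_poly(a, n, z) - log\_poly(ap, n - 1, z))}, where
\texttt{ap} contains the coefficients of $p'$.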
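For the shared memory version, the mapping of OpenMP threads to GPUs
can be sketched as follows. This is a minimal outline, assuming one
OpenMP thread per device; the kernel launches and data transfers are
elided:

\begin{verbatim}
#include <omp.h>
#include <cuda_runtime.h>

int main(void)
{
    int ngpu;
    cudaGetDeviceCount(&ngpu);      /* one OpenMP thread per GPU */
    #pragma omp parallel num_threads(ngpu)
    {
        cudaSetDevice(omp_get_thread_num()); /* attach thread to GPU */
        /* ... copy the local slice of the roots to this GPU, launch
           the Ehrlich-Aberth kernels on it, copy the updates back
           into the shared array ... */
        #pragma omp barrier          /* the updated roots are now
                                        visible to all threads through
                                        the shared memory */
    }
    return 0;
}
\end{verbatim}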
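For the distributed memory version, each MPI process manages one GPU.
A minimal sketch, assuming one process per GPU on each node; the
\texttt{MPI\_Allgather} mentioned in the comment is a natural choice
for the exchange, not necessarily the exact communication scheme of
our implementation:

\begin{verbatim}
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
    int rank, ngpu;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaGetDeviceCount(&ngpu);
    cudaSetDevice(rank % ngpu);  /* one MPI process per GPU */
    /* ... iterate the local slice of the roots on the attached GPU,
       then exchange the updated slices by message passing, e.g. with
       MPI_Allgather(local, nloc, MPI_C_DOUBLE_COMPLEX,
                     all, nloc, MPI_C_DOUBLE_COMPLEX,
                     MPI_COMM_WORLD); ... */
    MPI_Finalize();
    return 0;
}
\end{verbatim}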