-The main problem of the simultaneous methods is that the necessary time needed for the convergence increases with the increasing of the polynomial's degree. Many authors have treated the problem of implementing simultaneous methods in parallel. Freeman~\cite{Freeman89} implemented and compared Durand-Kerner method, Ehrlich-Aberth method and another method of the fourth order of convergence proposed by Farmer and Loizou~\cite{Loizou83} on a 8-processor linear chain, for polynomials of degree up-to 8. The method of Farmer and Loizou~\cite{Loizou83} often diverges, but the first two methods (Durand-Kerner and Ehrlich-Aberth methods) have a speed-up equals to 5.5. Later, Freeman and Bane~\cite{Freemanall90} considered asynchronous algorithms in which each processor continues to update its approximations even though the latest values of other approximations $z^{k}_{i}$ have not been received from the other processors, in contrast with synchronous algorithms where it would wait those values before making a new iteration. Couturier et al.~\cite{Raphaelall01} proposed two methods of parallelization for a shared memory architecture with OpenMP and for a distributed memory one with MPI. They are able to compute the roots of sparse polynomials of degree 10,000 in 116 seconds with OpenMP and 135 seconds with MPI only by using 8 personal computers and 2 communications per iteration. The authors showed an interesting speedup comparing to the sequential implementation which takes up-to 3,300 seconds to obtain same results.
-\LZK{``only by using 8 personal computers and 2 communications per iteration''. Pour MPI? et Pour OpenMP?}
+The main problem of simultaneous methods is that the time needed to
+converge increases with the degree of the polynomial. Many authors
+have addressed the problem of implementing simultaneous methods in
+parallel. Freeman~\cite{Freeman89} implemented and compared the
+Durand-Kerner method, the Ehrlich-Aberth method and a fourth-order
+method proposed by Farmer and Loizou~\cite{Loizou83} on an
+8-processor linear chain, for polynomials of degree up to 8. The
+method of Farmer and Loizou~\cite{Loizou83} often diverges, whereas
+the first two methods (Durand-Kerner and Ehrlich-Aberth) achieve a
+speed-up of 5.5. Later, Freeman and Bane~\cite{Freemanall90}
+considered asynchronous algorithms, in which each processor continues
+to update its approximations even if the latest values of the other
+approximations $z^{k}_{i}$ have not yet been received from the other
+processors, in contrast with synchronous algorithms, which wait for
+those values before starting a new iteration. Couturier et
+al.~\cite{Raphaelall01} proposed two parallelizations: one for a
+shared-memory architecture with OpenMP and one for a
+distributed-memory architecture with MPI. They computed the roots of
+sparse polynomials of degree 10,000 in 116 seconds with OpenMP and in
+135 seconds with MPI, using only 8 personal computers and 2
+communications per iteration. The authors showed an interesting
+speedup compared to the sequential implementation, which takes up to
+3,300 seconds to obtain the same results.
+\RC{If we give timings, we should also give the processor used; since
+these results are old, in my opinion we should remove them. What do
+you think?}
+\LZK{Let us remove these details and put a reference if there is one.}
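+For reference, the classical Durand-Kerner and Ehrlich-Aberth
+iterations (recalled here in their standard textbook form, with $p$
+the polynomial of degree $n$ and $z^{k}_{i}$ the $i$-th approximation
+at iteration $k$) update all $n$ approximations simultaneously, each
+update involving the $n-1$ other current approximations:
+\[
+z^{k+1}_{i} = z^{k}_{i} - \frac{p(z^{k}_{i})}{\prod_{j\neq i}\left(z^{k}_{i}-z^{k}_{j}\right)}
+\qquad \text{(Durand-Kerner)},
+\]
+\[
+z^{k+1}_{i} = z^{k}_{i} - \frac{1}{\frac{p'(z^{k}_{i})}{p(z^{k}_{i})} - \sum_{j\neq i}\frac{1}{z^{k}_{i}-z^{k}_{j}}}
+\qquad \text{(Ehrlich-Aberth)},
+\]
+for $i=1,\ldots,n$. One sweep over all the roots therefore costs
+$O(n^{2})$ arithmetic operations, which explains why the sequential
+time grows quickly with the degree.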