+\begin{algorithm}[htpb]
+\LinesNumbered
+\SetAlgoNoLine
+\caption{Finding roots of polynomials with the Ehrlich-Aberth method on a GPU}
+\KwIn{$n$ (polynomial's degree), $\epsilon$ (tolerance threshold)}
+\KwOut{$Z$ (solution vector of roots)}
+Initialize the polynomial $P$ and its derivative $P'$\;
+Set the initial values of vector $Z$\;
+Copy $P$, $P'$ and $Z$ from the CPU memory to the GPU memory\;
+\While{\emph{not convergence}}{
+ $Z_{prev}$ = Kernel\_Save($Z,n$)\;
+ $Z$ = Kernel\_Update($P,P',Z,n$)\;
+ Kernel\_Test\_Convergence($Z,Z_{prev},n,\epsilon$)\;
+}
+Copy $Z$ from the GPU memory to the CPU memory\;
+\label{alg1-cuda}
+\end{algorithm}
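+
+To make this structure concrete, a possible host-side driver for
+Algorithm~\ref{alg1-cuda} is sketched below. It is an illustration
+only: the kernel signatures, the block size and the helper
+\verb=reduce_max= are our assumptions, not the exact code of the
+implementation.
+
+\begin{verbatim}
+// Hypothetical host-side driver for Algorithm 1 (single GPU).
+// Pu holds the coefficients of P'; kernel names mirror the algorithm.
+#include <cuda_runtime.h>
+#include <cuComplex.h>
+
+__global__ void kernel_save(cuDoubleComplex *Zprev,
+                            const cuDoubleComplex *Z, int n)
+{
+  int i = blockIdx.x * blockDim.x + threadIdx.x;
+  if (i < n) Zprev[i] = Z[i];        // keep a copy of the current roots
+}
+
+// Possible forms of these two kernels are sketched later in the paper.
+__global__ void kernel_update(cuDoubleComplex *Z, const cuDoubleComplex *P,
+                              const cuDoubleComplex *Pu, int n);
+__global__ void kernel_testConv(double *delta, const cuDoubleComplex *Z,
+                                const cuDoubleComplex *Zprev, int n);
+double reduce_max(const double *d_delta, int n);  // hypothetical helper
+
+void ehrlich_aberth_gpu(const cuDoubleComplex *d_P,
+                        const cuDoubleComplex *d_Pu,
+                        cuDoubleComplex *d_Z, cuDoubleComplex *d_Zprev,
+                        double *d_delta, int n, double eps)
+{
+  int threads = 256, blocks = (n + threads - 1) / threads;
+  double delta_max = eps + 1.0;          // enter the loop at least once
+  while (delta_max > eps) {
+    kernel_save<<<blocks, threads>>>(d_Zprev, d_Z, n);
+    kernel_update<<<blocks, threads>>>(d_Z, d_P, d_Pu, n);
+    kernel_testConv<<<blocks, threads>>>(d_delta, d_Z, d_Zprev, n);
+    delta_max = reduce_max(d_delta, n);  // max |Z - Zprev|
+  }
+}
+\end{verbatim}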
+
+
+\section{The EA algorithm on Multiple GPUs}
+\label{sec4}
+\subsection{An OpenMP-CUDA approach}
+Our OpenMP-CUDA implementation of the EA algorithm is based on the
+hybrid OpenMP and CUDA programming model. All the data are shared
+with OpenMP among all the OpenMP threads. The shared data are the
+solution vector $Z$, the polynomial to solve $P$, and the error
+vector $\Delta z$. The number of OpenMP threads is equal to the
+number of GPUs: each OpenMP thread binds to one GPU and controls a
+part of the shared memory. More precisely, each OpenMP thread is
+responsible for updating its own part of the vector $Z$, called
+$Z_{loc}$ in the following. Each GPU then organizes its computation
+grid according to the performance of the device and the size of the
+data on which it runs the computation kernels.
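+
+As an illustration, a minimal sketch of this thread-to-GPU binding is
+given below; the function name and omitted parts are ours.
+
+\begin{verbatim}
+// Minimal sketch of the OpenMP thread / GPU binding (names are ours).
+#include <omp.h>
+#include <cuda_runtime.h>
+
+void bind_threads_to_gpus(void)
+{
+  int num_gpus = 0;
+  cudaGetDeviceCount(&num_gpus);
+  omp_set_num_threads(num_gpus);   // one OpenMP thread per GPU
+  #pragma omp parallel
+  {
+    int gpu_id = omp_get_thread_num();
+    cudaSetDevice(gpu_id);         // bind this thread to its own GPU
+    // ... allocate on this GPU, then iterate on the part Z_loc of
+    // the shared vector Z owned by this thread ...
+  }
+}
+\end{verbatim}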
+
+To compute one iteration of the EA method, each GPU performs the
+following steps. First, the roots are shared with OpenMP and the
+local size for each GPU is computed (lines 4-7 in
+Algorithm~\ref{alg2-cuda-openmp}). Each thread starts by copying all
+the roots to its GPU (line 9). Then each GPU saves the current roots
+as the previous ones (line 10) and computes an iteration of the EA
+method on its own roots (line 11), using all the other roots in the
+process. The convergence is checked on the new roots (line 12). At
+the end of an iteration, each thread copies its updated roots from
+the GPU back to the CPU (line 14), directly updating its own part of
+the shared vector $Z$ containing all the roots.
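+
+The convergence test of line 12 can, for instance, be implemented by
+a kernel in which each thread stores the distance between the new and
+previous value of its root; the maximum over the vector is then
+reduced in a second step. The sketch below uses our own naming
+assumptions.
+
+\begin{verbatim}
+// A possible kernel_testConv: each thread stores |z_i - zprev_i|.
+#include <cuComplex.h>
+
+__global__ void kernel_testConv(double *delta, const cuDoubleComplex *Z,
+                                const cuDoubleComplex *Zprev, int n)
+{
+  int i = blockIdx.x * blockDim.x + threadIdx.x;
+  if (i < n)
+    delta[i] = cuCabs(cuCsub(Z[i], Zprev[i]));
+}
+\end{verbatim}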
+
+\begin{algorithm}[h]
+\label{alg2-cuda-openmp}
+\LinesNumbered
+\SetAlgoNoLine
+\caption{CUDA-OpenMP algorithm to find roots with the Ehrlich-Aberth method}
+
+\KwIn{$Z^{0}$ (initial vector of roots), $\varepsilon$ (error tolerance threshold), $P$ (polynomial to solve), $P'$ (derivative of $P$), $n$ (polynomial degree), $\Delta z$ (vector of errors for the stop condition), $num\_gpus$ (number of OpenMP threads, i.e. number of GPUs)}
+
+\KwOut{$Z$ (solution vector of roots)}
+
+\BlankLine
+
+Initialization of $P$\;
+Initialization of $P'$\;
+Initialization of the solution vector $Z^{0}$\;
+Start of a parallel part with OpenMP ($Z$, $\Delta z$ and $P$ are shared variables)\;
+$gpu\_id$=cudaGetDevice()\;
+Allocate memory on the GPU\;
+Compute the local size and offset according to $gpu\_id$\;
+\While{$error > \varepsilon$}{
+  copy $Z$ from the CPU to the GPU\;
+  $ZPrec_{loc}=kernel\_save(Z_{loc})$\;
+  $Z_{loc}=kernel\_update(Z,P,P')$\;
+  $\Delta z[gpu\_id] = kernel\_testConv(Z_{loc},ZPrec_{loc})$\;
+  $error = Max(\Delta z)$\;
+  copy $Z_{loc}$ from the GPU to $Z$ in the CPU\;
+}
+\end{algorithm}
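+
+As an illustration of the computational core shared by the two
+algorithms, one possible form of $kernel\_update$ is sketched below.
+It assumes a dense coefficient representation of $P$ and $P'$
+evaluated by Horner's rule; an actual implementation may differ, for
+instance in its memory layout or in how it avoids overflow for very
+high degrees.
+
+\begin{verbatim}
+// One possible kernel_update, assuming P and Pu (= P') are stored as
+// dense coefficient arrays (a[i] multiplies z^i) in global memory.
+#include <cuComplex.h>
+
+__device__ cuDoubleComplex horner(const cuDoubleComplex *a, int deg,
+                                  cuDoubleComplex z)
+{
+  // Evaluate a[deg]*z^deg + ... + a[0] by Horner's rule.
+  cuDoubleComplex r = a[deg];
+  for (int i = deg - 1; i >= 0; i--)
+    r = cuCadd(cuCmul(r, z), a[i]);
+  return r;
+}
+
+__global__ void kernel_update(cuDoubleComplex *Z,
+                              const cuDoubleComplex *P,
+                              const cuDoubleComplex *Pu, int n)
+{
+  int i = blockIdx.x * blockDim.x + threadIdx.x;
+  if (i >= n) return;
+  cuDoubleComplex zi = Z[i];
+  // Newton correction w = P(z_i) / P'(z_i).
+  cuDoubleComplex w = cuCdiv(horner(P, n, zi), horner(Pu, n - 1, zi));
+  // Aberth correction: sum of 1/(z_i - z_j) over the other roots.
+  // Reading Z in place gives a Gauss-Seidel-like sweep; a pure
+  // Jacobi variant would read the saved copy ZPrec instead.
+  cuDoubleComplex s = make_cuDoubleComplex(0.0, 0.0);
+  for (int j = 0; j < n; j++)
+    if (j != i)
+      s = cuCadd(s, cuCdiv(make_cuDoubleComplex(1.0, 0.0),
+                           cuCsub(zi, Z[j])));
+  // Ehrlich-Aberth update: z_i <- z_i - w / (1 - w * s).
+  cuDoubleComplex denom = cuCsub(make_cuDoubleComplex(1.0, 0.0),
+                                 cuCmul(w, s));
+  Z[i] = cuCsub(zi, cuCdiv(w, denom));
+}
+\end{verbatim}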
+
+
+
+\subsection{An MPI-CUDA approach}
+Our parallel implementation of EA to find the roots of polynomials using a CUDA-MPI approach follows a data-parallel scheme: it splits the input data of the polynomial to solve among the MPI processes. In Algorithm~\ref{alg2-cuda-mpi}, the input data are the polynomial to solve $P$, the solution vector $Z$, the previous solution vector $ZPrec$, and the error vector of the stop condition $\Delta z$. Let $p$ denote the number of MPI processes and $n$ the degree of the polynomial to be solved. The algorithm performs a simple data partitioning by creating $p$ portions of at most $\lceil n/p \rceil$ roots per MPI process, for both $Z$ and $ZPrec$. Consequently, each MPI process of rank $k$ has its own solution vector $Z_{k}$, previous vector $ZPrec_{k}$ and error of the stop condition $\Delta z_{k}$, and computes at most $\lceil n/p \rceil$ roots.
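+
+A sketch of this partitioning, with at most $\lceil n/p \rceil$ roots
+per process, is given below; the variable names are ours.
+
+\begin{verbatim}
+// Sketch of the data partitioning: rank k owns at most ceil(n/p) roots.
+#include <mpi.h>
+
+void local_range(int n, int *offset, int *local_size)
+{
+  int p, k;
+  MPI_Comm_size(MPI_COMM_WORLD, &p);
+  MPI_Comm_rank(MPI_COMM_WORLD, &k);
+  int chunk = (n + p - 1) / p;        // ceil(n/p)
+  *offset = k * chunk;
+  *local_size = (*offset >= n) ? 0
+              : (*offset + chunk <= n) ? chunk : n - *offset;
+}
+\end{verbatim}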
+
+Since a GPU works only on data already allocated in its memory, all local input data, $Z_{k}$, $ZPrec_{k}$ and $\Delta z_{k}$, must be transferred from the CPU memories to the corresponding GPU memories. Afterwards, all processes run the same EA algorithm (Algorithm~\ref{alg1-cuda}) on the same polynomial $p(x)=\sum_{i=0}^{n} a_{i}x^{i}$, but each one works on a different subset of the roots, according to the ``owner computes'' rule. Each MPI process executes the loop \verb=while(...)...do= containing the CUDA kernels, but computes only its own portion of the roots; this local range is determined by the distribution of $Z$ (line 4, Algorithm~\ref{alg2-cuda-mpi}) and is used as an input of $kernel\_update$ (line 9, Algorithm~\ref{alg2-cuda-mpi}). After each iteration, the MPI processes synchronize with a reduction (\verb=MPI_Allreduce=) on the values $\Delta z_{k}$ in order to compute the maximum error related to the stop condition. Finally, the processes copy the newly computed roots from the GPU memories to the CPU memories and exchange their results with the other processes (\verb=MPI_Alltoall=). If the stop condition is not satisfied ($error > \varepsilon$), the processes stay within the loop until all the roots have sufficiently converged.
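+
+The per-iteration synchronization can be sketched as follows. The
+roots are sent as pairs of doubles, and the all-to-all exchange of
+equal-sized portions is expressed with \verb=MPI_Allgather=; this
+illustrates the communication pattern and is not the exact code of
+the implementation.
+
+\begin{verbatim}
+// Sketch of the synchronization after one EA iteration (names ours).
+#include <mpi.h>
+#include <cuComplex.h>
+
+double sync_iteration(cuDoubleComplex *Z, const cuDoubleComplex *Z_loc,
+                      int chunk, double local_err)
+{
+  double error;
+  // Maximum of the local errors, available on every process.
+  MPI_Allreduce(&local_err, &error, 1, MPI_DOUBLE, MPI_MAX,
+                MPI_COMM_WORLD);
+  // Every process gathers every portion of the updated roots
+  // (a cuDoubleComplex is sent as two doubles).
+  MPI_Allgather(Z_loc, 2 * chunk, MPI_DOUBLE,
+                Z, 2 * chunk, MPI_DOUBLE, MPI_COMM_WORLD);
+  return error;
+}
+\end{verbatim}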
+
+\begin{algorithm}[htpb]
+\label{alg2-cuda-mpi}
+\LinesNumbered
+\SetAlgoNoLine
+\caption{CUDA-MPI algorithm to find roots with the Ehrlich-Aberth method}
+
+\KwIn{$Z^{0}$ (initial vector of roots), $\varepsilon$ (error tolerance threshold), $P$ (polynomial to solve), $P'$ (derivative of $P$), $n$ (polynomial degree), $\Delta z$ (error of the stop condition), $p$ (number of MPI processes, one per GPU)}
+
+\KwOut{$Z$ (solution vector of roots)}
+
+\BlankLine
+Initialization of $P$\;
+Initialization of $P'$\;
+Initialization of the solution vector $Z^{0}$\;
+Distribution of $Z$\;
+Allocate memory on the GPU\;
+\While{$error > \varepsilon$}{
+  copy $Z$ from the CPU to the GPU\;
+  $ZPrec_{loc}=kernel\_save(Z_{loc})$\;
+  $Z_{loc}=kernel\_update(Z,P,P')$\;
+  $\Delta z=kernel\_testConv(Z_{loc},ZPrec_{loc})$\;
+  $error=MPI\_Allreduce(\Delta z,Max)$\;
+  Copy $Z_{loc}$ from the GPU to the CPU\;
+  $Z=MPI\_Alltoall(Z_{loc})$\;
+}
+\end{algorithm}
+
+
+\section{Experiments}
+\label{sec5}
+We study two categories of polynomials: sparse polynomials and full polynomials.\\
+{\it A sparse polynomial} is a polynomial for which only some coefficients are non-zero. In this paper, we consider sparse polynomials whose roots are distributed on two distinct circles:
+\begin{equation}
+  \forall \alpha_{1}, \alpha_{2} \in C, \forall n_{1},n_{2} \in N^{*};\quad P(z)= (z^{n_{1}}-\alpha_{1})(z^{n_{2}}-\alpha_{2})
+\end{equation}\noindent
+{\it A full polynomial} is, in contrast, a polynomial for which all the coefficients are non-zero. A full polynomial is defined by:
+\begin{equation}
+  \forall a_{i} \in C, i \in N;\quad p(x)=\sum^{n}_{i=0} a_{i}x^{i}
+\end{equation}
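+
+As an illustration of the sparse case, the coefficients of
+$(z^{n_{1}}-\alpha_{1})(z^{n_{2}}-\alpha_{2})$ can be built directly,
+since only four of them are non-zero (three when $n_{1}=n_{2}$); the
+function below is a sketch with our own naming.
+
+\begin{verbatim}
+// Build the dense coefficients of (z^n1 - a1)(z^n2 - a2); only the
+// coefficients of degrees 0, n1, n2 and n1+n2 are non-zero.
+#include <string.h>
+#include <cuComplex.h>
+
+void build_sparse_poly(cuDoubleComplex *a, int n1, int n2,
+                       cuDoubleComplex a1, cuDoubleComplex a2)
+{
+  int n = n1 + n2;                          // degree of the product
+  memset(a, 0, (size_t)(n + 1) * sizeof(*a));
+  a[n]  = make_cuDoubleComplex(1.0, 0.0);   // z^{n1+n2}
+  a[n1] = cuCsub(a[n1], a2);                // -a2 * z^{n1}
+  a[n2] = cuCsub(a[n2], a1);                // -a1 * z^{n2}
+  a[0]  = cuCadd(a[0], cuCmul(a1, a2));     // +a1*a2 (constant term)
+}
+\end{verbatim}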
+
+For our tests, four Tesla K40 GPU cards (Kepler architecture) are
+used. In order to evaluate both the single-GPU and multi-GPU
+approaches, we performed a set of experiments on one GPU and on
+multiple GPUs using OpenMP or MPI with the EA algorithm, for both
+sparse and full polynomials of different sizes. All experiments are
+performed with double-precision floating-point data, and the
+convergence threshold of the EA method is set to $10^{-7}$. The
+initial values of the solution vector are given by the Guggenheimer
+method~\cite{Gugg86}.
+
+
+\subsection{Evaluation of the CUDA-OpenMP approach}
+
+Here we report some experiments with full and sparse polynomials of
+different degrees on multiple GPUs.
+\subsubsection{Execution times of the EA method to solve sparse polynomials on multiple GPUs}
+
+In these experiments we report the execution times of the EA algorithm on a single GPU and on multiple GPUs (2, 3 and 4 GPUs), for sparse polynomials of degrees ranging from 100,000 to 1,400,000.