-
-Our objective is to implement a polynomial root-finding algorithm on
-multiple GPUs. To this end, it is essential to know how to manage the
-CUDA contexts of the different GPUs. A direct way to control the
-various GPUs is to use as many threads or processes as there are GPU
-devices, choosing the GPU index from the OpenMP thread identifier or
-from the rank of the MPI process. Both approaches will be
-investigated. \LZK{Repetition! The same text already appears as the
- introduction of Section II. Moreover, here we only discuss the CUDA
- implementation, without MPI and OpenMP! \RC{I agree, to be revisited
- later, once the two following parts are more stable}}
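-As an illustration, the following minimal sketch shows this device
-selection with OpenMP (the use of one OpenMP thread per GPU and the
-names used here are assumptions of the sketch, not the actual code):
-\begin{verbatim}
-#include <omp.h>
-#include <cuda_runtime.h>
-
-/* Bind one OpenMP thread to each available GPU. */
-void bind_threads_to_gpus(void)
-{
-  int ngpus;
-  cudaGetDeviceCount(&ngpus);
-  #pragma omp parallel num_threads(ngpus)
-  {
-    /* The OpenMP thread identifier selects the GPU index. */
-    cudaSetDevice(omp_get_thread_num());
-    /* ... each thread now manages its own CUDA context ... */
-  }
-}
-/* With MPI, the process rank plays the same role:
-   cudaSetDevice(rank % ngpus);  */
-\end{verbatim}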
-
-
-
-
-Like any parallel code, a GPU implementation first requires
-identifying the sequential parts and the data-parallel operations of
-an algorithm. In fact, all the operations that are easy to execute in
-parallel should be performed by the GPU to accelerate the execution,
-such as steps 3 and 4. Conversely, the sequential operations and the
-operations that have data dependencies between CUDA threads or that
-involve recursive computations must be executed by a single CUDA
-thread or by a CPU thread (steps 1 and 2).\LZK{The method is already
- poorly presented; this makes it even harder to understand what these
- different steps represent!} Initially, we specify the organization
-of the parallel threads by defining the grid dimension
-\verb+Dimgrid+, i.e. the number of blocks per grid, and the block
-dimension \verb+DimBlock+, i.e. the number of threads per block.
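-For instance, with one thread per root, such a configuration could be
-written as follows (the kernel name, the block size of 256 threads
-and the example degree are assumptions of this sketch):
-\begin{verbatim}
-#include <cuda_runtime.h>
-
-__global__ void update(double *z, int n)
-{
-  int i = blockIdx.x * blockDim.x + threadIdx.x;
-  if (i < n) { /* ... update the i-th root ... */ }
-}
-
-int main(void)
-{
-  int n = 10000;       /* polynomial degree (example value) */
-  double *d_z;
-  cudaMalloc(&d_z, n * sizeof(double));
-  dim3 DimBlock(256);                /* threads per block */
-  dim3 Dimgrid((n + DimBlock.x - 1) / DimBlock.x); /* blocks */
-  update<<<Dimgrid, DimBlock>>>(d_z, n);
-  cudaDeviceSynchronize();
-  cudaFree(d_z);
-  return 0;
-}
-\end{verbatim}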
-
-The code is organized as kernels, which are parts of code executed on
-GPU devices. For step 3, there are two kernels: the first, named
-\textit{save}, is used to save the vector $Z^{K-1}$, and the second,
-named \textit{update}, is used to compute the new vector $Z^{K}$. For
-step 4, a kernel tests the convergence of the method. In order to
-compute the function $H$, we have two possibilities: either the
-Jacobi mode, or the Gauss-Seidel mode, which uses the most recently
-computed roots. It is well known that the Gauss-Seidel mode converges
-more quickly, so we use Gauss-Seidel iterations. To parallelize the
-code, we create kernels and many functions to be executed on the GPU
-for all the operations dealing with the computation on complex
-numbers and the evaluation of the polynomials. As said previously, we
-provide both evaluation functions: the normal method, based on
-Horner's scheme, and the method based on the logarithm of the
-polynomial. All these methods were rather long to implement, as the
-development of the corresponding kernels with CUDA takes longer than
-on a CPU host. This comes in particular from the fact that it is very
-difficult to debug running CUDA threads in the way one debugs threads
-on a CPU host. Algorithm~\ref{alg1-cuda} below shows the GPU parallel
-implementation of the Ehrlich-Aberth method.
-\LZK{It would be better to explain the implementation with reference
- to the sequential algorithm than to talk about the different steps.}
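-As an illustration of the first evaluation mode, a possible device
-function implementing Horner's scheme is sketched below (the function
-name is an assumption; the actual kernels also handle the
-logarithm-based variant):
-\begin{verbatim}
-#include <cuComplex.h>
-
-/* Evaluate p(z) = a[0] + z*(a[1] + z*(a[2] + ...)) by
-   Horner's scheme; a[n] is the leading coefficient.    */
-__device__ cuDoubleComplex evalHorner(const cuDoubleComplex *a,
-                                      int n, cuDoubleComplex z)
-{
-  cuDoubleComplex p = a[n];
-  for (int i = n - 1; i >= 0; i--)
-    p = cuCadd(cuCmul(p, z), a[i]);   /* p = p*z + a[i] */
-  return p;
-}
-\end{verbatim}
-The logarithm-based evaluation has the same structure but works on
-the logarithms of the moduli, replacing products by sums, so that
-polynomials of very high degree can be evaluated without overflow.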
-
-
+The code is organized as kernels, which are parts of code executed on
+GPU devices. Algorithm~\ref{alg1-cuda} describes the CUDA
+implementation of the Ehrlich-Aberth method on a GPU. This algorithm
+starts by initializing the polynomial and its derivative (line 1).
+The roots are then initialized (line 2). Data are transferred from
+the CPU to the GPU, after allocation of the required memory on the
+GPU (line 3). Then, as long as the error is greater than a given
+threshold, the following operations are performed at each iteration.
+The previous roots are saved with a first kernel (line 5). Then the
+new roots are computed with the EA method, using a Gauss-Seidel
+iteration mode in order to exploit the latest updated roots (line 6),
+which improves the convergence. This kernel is, in practice, very
+long, since it performs all the operations on complex numbers, both
+in the normal mode of the EA method and in the logarithm-exponential
+one. Then the error is computed with a final kernel (line 7).
+Finally, when the EA method has converged, the roots are transferred
+back from the GPU to the CPU.
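+To make the structure of Algorithm~\ref{alg1-cuda} more concrete, a
+simplified sketch of the host-side loop is given below. The kernel
+and variable names are assumptions of the sketch, the kernel bodies
+are only placeholders, and the error reduction is shown as a single
+kernel call for brevity:
+\begin{verbatim}
+#include <cuda_runtime.h>
+#include <cuComplex.h>
+
+/* Placeholder kernels for lines 5-7 of the algorithm. */
+__global__ void save(cuDoubleComplex *zp,
+                     const cuDoubleComplex *z, int n)
+{
+  int i = blockIdx.x * blockDim.x + threadIdx.x;
+  if (i < n) zp[i] = z[i];              /* save Z^{K-1} */
+}
+__global__ void update(cuDoubleComplex *z,
+                       const cuDoubleComplex *zp, int n)
+{
+  /* EA update of root i (Gauss-Seidel mode, normal or
+     logarithm-exponential formulation) omitted here.   */
+}
+__global__ void computeError(double *err,
+                             const cuDoubleComplex *z,
+                             const cuDoubleComplex *zp, int n)
+{
+  /* The real kernel reduces max |z[i]-zp[i]| into *err;
+     we only write 0 so that this sketch terminates.     */
+  if (blockIdx.x == 0 && threadIdx.x == 0) *err = 0.0;
+}
+
+void ehrlichAberthGPU(cuDoubleComplex *hZ, int n, double eps)
+{
+  size_t size = n * sizeof(cuDoubleComplex);
+  cuDoubleComplex *dZ, *dZp;
+  double *dErr, error = 1.0;
+  cudaMalloc(&dZ, size);                  /* line 3: allocate */
+  cudaMalloc(&dZp, size);
+  cudaMalloc(&dErr, sizeof(double));
+  cudaMemcpy(dZ, hZ, size, cudaMemcpyHostToDevice); /* and copy */
+  dim3 block(256), grid((n + 255) / 256);
+  while (error > eps) {                              /* line 4 */
+    save<<<grid, block>>>(dZp, dZ, n);               /* line 5 */
+    update<<<grid, block>>>(dZ, dZp, n);             /* line 6 */
+    computeError<<<grid, block>>>(dErr, dZ, dZp, n); /* line 7 */
+    cudaMemcpy(&error, dErr, sizeof(double),
+               cudaMemcpyDeviceToHost);
+  }
+  cudaMemcpy(hZ, dZ, size, cudaMemcpyDeviceToHost); /* final copy */
+  cudaFree(dZ); cudaFree(dZp); cudaFree(dErr);
+}
+\end{verbatim}
+Note that, inside the loop, the only host-device traffic is the
+single scalar holding the error; all the vectors stay in the GPU
+memory until convergence.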