X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper2.git/blobdiff_plain/ae6f215f966ccb138e928e0ac09ed44e4139ef3f..254d68992f882593d5924b2cd3e97de2fa251051:/paper.tex?ds=inline

diff --git a/paper.tex b/paper.tex
index a6cb8a4..00671de 100644
--- a/paper.tex
+++ b/paper.tex
@@ -720,7 +720,7 @@ Using the logarithm and the exponential operators, we can replace any multiplic
 %This problem was discussed earlier in~\cite{Karimall98} for the Durand-Kerner method. The authors
 %propose to use the logarithm and the exponential of a complex in order to compute the power at a high exponent.
 Using the logarithm and the exponential operators, we can replace any multiplications and divisions with additions and subtractions. Consequently, computations manipulate lower absolute values and the roots for large polynomial degrees can be looked for successfully~\cite{Karimall98}.
-\subsection{Ehrlich-Aberth parallel implementation on CUDA}
+\subsection{The Ehrlich-Aberth parallel implementation on CUDA}
 %We introduced three paradigms of parallel programming.
 Our objective consists in implementing a root finding polynomial
@@ -729,26 +729,14 @@ to manage CUDA contexts of different GPUs. A direct method for controlling the
 various GPUs is to use as many threads or processes as GPU devices. We
 can choose the GPU index based on the identifier of OpenMP thread or
 the rank of the MPI process. Both approaches will be
-investigated.
+investigated. \LZK{Repetition! The same text is already written as the introduction of Section II. Besides, here we only discuss the CUDA implementation, without MPI and OpenMP!}

-Like any parallel code, a GPU parallel implementation first requires
-to determine the sequential tasks and the parallelizable parts of the
-sequential version of the program/algorithm. In our case, all the
-operations that are easy to execute in parallel must be made by the
-GPU to accelerate the execution of the application, like the step 3
-and step 4.
On the other hand, all the sequential operations and the
-operations that have data dependencies between threads or recursive
-computations must be executed by only one CUDA or CPU thread (step 1
-and step 2). Initially, we specify the organization of parallel
-threads, by specifying the dimension of the grid Dimgrid, the number
-of blocks per grid DimBlock and the number of threads per block.
+Like any parallel code, a GPU parallel implementation first requires determining the sequential code and the data-parallel operations of an algorithm. In fact, all the operations that are easy to execute in parallel must be performed by the GPU to accelerate the execution, such as steps 3 and 4. On the other hand, all the sequential operations and the operations that have data dependencies between CUDA threads or recursive computations must be executed by only one CUDA thread or a CPU thread (steps 1 and 2).\LZK{The method is already poorly presented; that makes it even more difficult to understand what these different steps represent!} Initially, we specify the organization of parallel threads by defining the dimension of the grid, i.e. the number of blocks per grid \verb+DimGrid+, and the number of threads per block \verb+DimBlock+.

-The code is organized kernels which are part of code that are run on
-GPU devices. For step 3, there are two kernels, the first named
-\textit{save} is used to save vector $Z^{K-1}$ and the second one is
+The code is organized in kernels, which are parts of code that are run on GPU devices. For step 3, there are two kernels: the first, named \textit{save}, is used to save the vector $Z^{K-1}$, and the second one is
 named \textit{update} and is used to update the $Z^{K}$ vector. For
 step 4, a kernel tests the convergence of the method.
 In order to compute the function H, we have two possibilities: either to use the
@@ -767,6 +755,7 @@ comes in particular from the fact that it is very difficult to debug
 CUDA running threads like threads on a CPU host. In the following
 paragraph Algorithm~\ref{alg1-cuda} shows the GPU parallel
 implementation of Ehrlich-Aberth method.
+\LZK{It is better to explain the implementation with reference to the sequential algorithm than to talk about the different steps.}

 \begin{algorithm}[htpb]
 \label{alg1-cuda}