From 0036086ac3b16dd8e27667e166acbaae2b5703a8 Mon Sep 17 00:00:00 2001
From: couturie
Date: Sun, 17 Jan 2016 21:33:41 +0100
Subject: [PATCH] Update of the explanations of the algorithms (it seems fine
 to me, there may still be some typos left, please proofread quickly)
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding: 8bit

---
 paper.tex | 73 +++++++++++++++++++++++++++++--------------------
 1 file changed, 39 insertions(+), 34 deletions(-)

diff --git a/paper.tex b/paper.tex
index 5df0ef9..2fd0993 100644
--- a/paper.tex
+++ b/paper.tex
@@ -253,28 +253,31 @@ OpenMP and MPI is presented.
 \label{sec4}
 \subsection{an OpenMP-CUDA approach}
 Our OpenMP-CUDA implementation of EA algorithm is based on the hybrid
-OpenMP and CUDA programming model. All the data are shared with
-OpenMP amoung all the OpenMP threads. The shared data are the solution
-vector $Z$, the polynomial to solve $P$, and the error vector $\Delta
-z$. The number of OpenMP threads is equal to the number of GPUs, each
-OpenMP thread binds to one GPU, and it controls a part of the shared
-memory. More precisely each OpenMP thread will be responsible to
-update its owns part of the vector Z. This part is call $Z_{loc}$ in
-the following. Then all GPUs will have a grid of computation organized
-according to the device performance and the size of data on which it
-runs the computation kernels.
+OpenMP and CUDA programming model. This algorithm is presented in
+Algorithm~\ref{alg2-cuda-openmp}. All the data are shared among the
+OpenMP threads. The shared data are the solution vector $Z$, the
+polynomial to solve $P$, its derivative $P'$, and the error vector
+$error$. The number of OpenMP threads is equal to the number of GPUs:
+each OpenMP thread binds to one GPU and controls a part of the shared
+memory. More precisely, each OpenMP thread is responsible for updating
+its own part of the vector $Z$, called $Z_{loc}$ in the following.
+Then each GPU organizes its computation grid according to the device
+performance and the size of the data on which it runs the computation
+kernels.
 
 To compute one iteration of the EA method each GPU performs the
 followings steps. First roots are shared with OpenMP and the
-computation of the local size for each GPU is performed (lines 5-7 in
-Algorithm \ref{alg2-cuda-openmp}). Each thread starts by copying all the
-previous roots inside its GPU (line 9). Then each GPU will copy the
-previous roots (line 10) and it will compute an iteration of the EA
-method on its own roots (line 11). For that all the other roots are
-used. The convergence is checked on the new roots (line 12). At the end
-of an iteration, the updated roots are copied from the GPU to the
-CPU (line 14) by direcly updating its own roots in the shared memory
-arrays containing all the roots.
+computation of the local size for each GPU is performed (line 4). Each
+thread starts by copying all the previous roots inside its GPU (line
+5). At each iteration, the following operations are performed. First,
+the vector $Z$ is transferred from the CPU to the GPU (line 7). Each
+GPU copies the previous roots (line 8) and computes an iteration of the
+EA method on its own roots (line 9); for that, all the other roots are
+used. The local error is computed on the new roots (line 10) and the
+maximum of the local errors is computed over all the OpenMP threads
+(line 11).
+At the end of an iteration, the updated roots are copied from the GPU
+to the CPU (line 12): each thread directly updates its own roots in the
+shared memory arrays containing all the roots.
@@ -306,27 +309,29 @@ Copy $P$, $P'$ from CPU to GPU\;
 \subsection{a MPI-CUDA approach}
-Our parallel implementation of EA to find root of polynomials using a
-CUDA-MPI approach follows a similar computing approach to the one used
-in CUDA-OpenMP. Each process is responsible to compute its own part of
+Our parallel implementation of EA to find roots of polynomials using a
+CUDA-MPI approach follows a similar approach to the one used in
+CUDA-OpenMP. Each process is responsible for computing its own part of
 roots using all the roots computed by other processors at the previous
 iteration. The difference between both approaches lies in the way
-processes communicate and exchange data. With MPI processors need to
+processes communicate and exchange data. With MPI, processors need to
 send and receive data explicitely. So in
 Algorithm~\ref{alg2-cuda-mpi}, after the initialization all the
 processors have the same $Z$ vector. Then they need to compute the
 parameters used by the $MPI\_AlltoAll$ routines (line 4). In practise,
-each processor needs to compute its offset and its local size. Then
-processors need to allocate memory on their GPU (line 5). At the
-beginning of each iteration, a processor starts by transfering the
-whole vector Z from the CPU to the GPU (line 7). Then only the local
-part of $Z^{prev}$ is saved (line 8). After that, a processor is able
-to compute its own roots (line 9). Next, the local error can be
-computed (ligne 10) and the global error (line 11). Then the local
-roots are transfered from the GPU memory to the CPU memory (line 12)
-before being exchanged between all processors (linge 13) in order to
-give to all processors the last version of the roots. If the
-convergence is not statisfied, an new iteration is executed.
+each processor needs to compute its offset and its local
+size. Processors need to allocate memory on their GPU and to copy
+their data to the GPU (line 5). At the beginning of each iteration, a
+processor starts by transferring the whole vector $Z$ from the CPU to
+the GPU (line 7). Only the local part of $Z^{prev}$ is saved (line
+8). After that, a processor is able to compute an updated version of
+its own roots (line 9) with the EA method. The local error is computed
+(line 10) and the global error is obtained with $MPI\_Reduce$ (line
+11). Then the local roots are transferred from the GPU memory to the
+CPU memory (line 12) before being exchanged between all processors
+with the $MPI\_AlltoAll$ routine (line 13), in order to give all
+processors the latest version of the roots. If the convergence is not
+reached, a new iteration is executed.
-- 
2.39.5
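
To make the OpenMP-CUDA scheme described in the first hunk more concrete,
here is a minimal stand-alone sketch of that structure. It is not the code
used in the paper: the kernel name ea_step, the example polynomial
P(x) = x^n - 1, the threshold eps and all buffer names are assumptions made
for illustration. Only the overall organization follows the text above: one
OpenMP thread per GPU, P, P' and the roots Z shared on the host, the whole Z
sent to each GPU at every iteration, a local EA update, and a max-reduction
of the local errors across the threads.

#include <cuComplex.h>
#include <cuda_runtime.h>
#include <math.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

/* One Ehrlich-Aberth step for the local roots [off, off+size):
   h = P(z_i)/P'(z_i);  z_i <- z_i - h / (1 - h * sum_{j!=i} 1/(z_i - z_j)).
   The magnitude of the correction is stored so the host can test convergence. */
__global__ void ea_step(cuDoubleComplex *z, const cuDoubleComplex *zp,
                        const cuDoubleComplex *p, const cuDoubleComplex *dp,
                        double *delta, int n, int off, int size) {
  int t = blockIdx.x * blockDim.x + threadIdx.x;
  if (t >= size) return;
  int i = off + t;
  cuDoubleComplex zi = zp[i], pv = p[n], dv = dp[n - 1];
  for (int k = n - 1; k >= 0; --k) pv = cuCadd(cuCmul(pv, zi), p[k]);  /* P(zi)  */
  for (int k = n - 2; k >= 0; --k) dv = cuCadd(cuCmul(dv, zi), dp[k]); /* P'(zi) */
  cuDoubleComplex h = cuCdiv(pv, dv), s = make_cuDoubleComplex(0.0, 0.0);
  for (int j = 0; j < n; ++j)
    if (j != i) s = cuCadd(s, cuCdiv(make_cuDoubleComplex(1.0, 0.0), cuCsub(zi, zp[j])));
  cuDoubleComplex d = cuCdiv(h, cuCsub(make_cuDoubleComplex(1.0, 0.0), cuCmul(h, s)));
  z[i] = cuCsub(zi, d);
  delta[t] = cuCabs(d);
}

int main(void) {
  const int n = 1000;        /* polynomial degree (assumed for the example) */
  const double eps = 1e-10;  /* convergence threshold (assumed)             */
  int ngpu = 0;
  cudaGetDeviceCount(&ngpu);
  if (ngpu < 1) return 1;
  /* Data shared among the OpenMP threads: P, P' and the roots Z. */
  cuDoubleComplex *P  = (cuDoubleComplex *)malloc((n + 1) * sizeof *P);
  cuDoubleComplex *dP = (cuDoubleComplex *)malloc(n * sizeof *dP);
  cuDoubleComplex *Z  = (cuDoubleComplex *)malloc(n * sizeof *Z);
  /* Example data (assumed): P(x) = x^n - 1, initial guesses on a circle. */
  for (int i = 0; i <= n; ++i)
    P[i] = make_cuDoubleComplex(i == n ? 1.0 : (i == 0 ? -1.0 : 0.0), 0.0);
  for (int i = 0; i < n; ++i)
    dP[i] = make_cuDoubleComplex(i == n - 1 ? (double)n : 0.0, 0.0);
  for (int i = 0; i < n; ++i)
    Z[i] = make_cuDoubleComplex(1.5 * cos(6.2831853 * i / n),
                                1.5 * sin(6.2831853 * i / n));

  double gerr = 1.0;           /* shared maximum of the local errors */
  omp_set_num_threads(ngpu);   /* one OpenMP thread per GPU          */
#pragma omp parallel
  {
    int tid = omp_get_thread_num();
    cudaSetDevice(tid);                       /* bind this thread to one GPU  */
    int size = n / ngpu, off = tid * size;    /* local part Z_loc of Z        */
    if (tid == ngpu - 1) size = n - off;      /* last GPU takes the remainder */

    cuDoubleComplex *dZ, *dZp, *dPc, *dPd;
    double *dDelta, *delta = (double *)malloc(size * sizeof *delta);
    cudaMalloc(&dZ, n * sizeof *dZ);         cudaMalloc(&dZp, n * sizeof *dZp);
    cudaMalloc(&dPc, (n + 1) * sizeof *dPc); cudaMalloc(&dPd, n * sizeof *dPd);
    cudaMalloc(&dDelta, size * sizeof *dDelta);
    cudaMemcpy(dPc, P, (n + 1) * sizeof *P, cudaMemcpyHostToDevice);
    cudaMemcpy(dPd, dP, n * sizeof *dP, cudaMemcpyHostToDevice);

    do {
      /* whole Z to the GPU, then keep a copy of the previous roots on it */
      cudaMemcpy(dZ, Z, n * sizeof *Z, cudaMemcpyHostToDevice);
      cudaMemcpy(dZp, dZ, n * sizeof *dZ, cudaMemcpyDeviceToDevice);
      int tpb = 128, blocks = (size + tpb - 1) / tpb;
      ea_step<<<blocks, tpb>>>(dZ, dZp, dPc, dPd, dDelta, n, off, size);
      /* local error = max |correction| over the local roots */
      cudaMemcpy(delta, dDelta, size * sizeof *delta, cudaMemcpyDeviceToHost);
      double lerr = 0.0;
      for (int i = 0; i < size; ++i) lerr = fmax(lerr, delta[i]);
#pragma omp barrier               /* the previous gerr is no longer needed */
#pragma omp single
      gerr = 0.0;                 /* implied barrier at the end of single  */
#pragma omp critical
      gerr = fmax(gerr, lerr);    /* max of the local errors               */
      /* publish the updated local roots in the shared array */
      cudaMemcpy(Z + off, dZ + off, size * sizeof *Z, cudaMemcpyDeviceToHost);
#pragma omp barrier               /* everyone sees the new Z and gerr      */
    } while (gerr > eps);

    free(delta);
    cudaFree(dZ); cudaFree(dZp); cudaFree(dPc); cudaFree(dPd); cudaFree(dDelta);
  }
  printf("converged: max |correction| = %g\n", gerr);
  free(P); free(dP); free(Z);
  return 0;
}

With the CUDA toolkit and an OpenMP-capable host compiler, a build along the
lines of nvcc -Xcompiler -fopenmp usually suffices; the exact flags depend on
the toolchain.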
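
The MPI-CUDA description in the second hunk can be sketched in the same way.
Again, this is an illustration under stated assumptions, not the authors'
code: the exchange of the local roots is written here with MPI_Allgatherv and
the global error with MPI_Allreduce, so that the fragment is complete and
every rank can test convergence, whereas the text above mentions
$MPI\_AlltoAll$ and $MPI\_Reduce$. The ea_step kernel is the one from the
previous sketch, repeated so that this listing stands alone; the example
polynomial, threshold and variable names are assumptions.

#include <cuComplex.h>
#include <cuda_runtime.h>
#include <math.h>
#include <mpi.h>
#include <stdlib.h>

/* Same Ehrlich-Aberth kernel as in the OpenMP-CUDA sketch above. */
__global__ void ea_step(cuDoubleComplex *z, const cuDoubleComplex *zp,
                        const cuDoubleComplex *p, const cuDoubleComplex *dp,
                        double *delta, int n, int off, int size) {
  int t = blockIdx.x * blockDim.x + threadIdx.x;
  if (t >= size) return;
  int i = off + t;
  cuDoubleComplex zi = zp[i], pv = p[n], dv = dp[n - 1];
  for (int k = n - 1; k >= 0; --k) pv = cuCadd(cuCmul(pv, zi), p[k]);  /* P(zi)  */
  for (int k = n - 2; k >= 0; --k) dv = cuCadd(cuCmul(dv, zi), dp[k]); /* P'(zi) */
  cuDoubleComplex h = cuCdiv(pv, dv), s = make_cuDoubleComplex(0.0, 0.0);
  for (int j = 0; j < n; ++j)
    if (j != i) s = cuCadd(s, cuCdiv(make_cuDoubleComplex(1.0, 0.0), cuCsub(zi, zp[j])));
  cuDoubleComplex d = cuCdiv(h, cuCsub(make_cuDoubleComplex(1.0, 0.0), cuCmul(h, s)));
  z[i] = cuCsub(zi, d);
  delta[t] = cuCabs(d);
}

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

  const int n = 1000;        /* polynomial degree (assumed)     */
  const double eps = 1e-10;  /* convergence threshold (assumed) */
  /* After initialization every process holds the same P, P' and Z. */
  cuDoubleComplex *P  = (cuDoubleComplex *)malloc((n + 1) * sizeof *P);
  cuDoubleComplex *dP = (cuDoubleComplex *)malloc(n * sizeof *dP);
  cuDoubleComplex *Z  = (cuDoubleComplex *)malloc(n * sizeof *Z);
  for (int i = 0; i <= n; ++i)   /* example (assumed): P(x) = x^n - 1 */
    P[i] = make_cuDoubleComplex(i == n ? 1.0 : (i == 0 ? -1.0 : 0.0), 0.0);
  for (int i = 0; i < n; ++i)
    dP[i] = make_cuDoubleComplex(i == n - 1 ? (double)n : 0.0, 0.0);
  for (int i = 0; i < n; ++i)    /* identical initial guesses on a circle */
    Z[i] = make_cuDoubleComplex(1.5 * cos(6.2831853 * i / n),
                                1.5 * sin(6.2831853 * i / n));

  /* Offsets and local sizes used by the collective exchange (line 4). */
  int *counts = (int *)malloc(nprocs * sizeof *counts);
  int *displs = (int *)malloc(nprocs * sizeof *displs);
  for (int r = 0; r < nprocs; ++r) {
    counts[r] = (r == nprocs - 1) ? n - r * (n / nprocs) : n / nprocs;
    displs[r] = r * (n / nprocs);
  }
  int off = displs[rank], size = counts[rank];

  /* GPU selection (assumed round-robin) and device allocations (line 5). */
  int ndev = 1;
  cudaGetDeviceCount(&ndev);
  cudaSetDevice(rank % ndev);
  cuDoubleComplex *dZ, *dZp, *dPc, *dPd;
  double *dDelta, *delta = (double *)malloc(size * sizeof *delta);
  cudaMalloc(&dZ, n * sizeof *dZ);         cudaMalloc(&dZp, n * sizeof *dZp);
  cudaMalloc(&dPc, (n + 1) * sizeof *dPc); cudaMalloc(&dPd, n * sizeof *dPd);
  cudaMalloc(&dDelta, size * sizeof *dDelta);
  cudaMemcpy(dPc, P, (n + 1) * sizeof *P, cudaMemcpyHostToDevice);
  cudaMemcpy(dPd, dP, n * sizeof *dP, cudaMemcpyHostToDevice);

  double gerr = 1.0;
  do {
    cudaMemcpy(dZ, Z, n * sizeof *Z, cudaMemcpyHostToDevice);          /* line 7 */
    cudaMemcpy(dZp, dZ, n * sizeof *dZ, cudaMemcpyDeviceToDevice);     /* line 8 */
    int tpb = 128, blocks = (size + tpb - 1) / tpb;
    ea_step<<<blocks, tpb>>>(dZ, dZp, dPc, dPd, dDelta, n, off, size); /* line 9 */
    cudaMemcpy(delta, dDelta, size * sizeof *delta, cudaMemcpyDeviceToHost);
    double lerr = 0.0;                                                 /* line 10 */
    for (int i = 0; i < size; ++i) lerr = fmax(lerr, delta[i]);
    /* global error: Allreduce so every rank can test convergence (line 11) */
    MPI_Allreduce(&lerr, &gerr, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
    /* local roots back to the CPU (line 12), then exchanged so every rank
       gets the latest full Z (line 13); a cuDoubleComplex is two contiguous
       doubles, matching MPI_C_DOUBLE_COMPLEX */
    cudaMemcpy(Z + off, dZ + off, size * sizeof *Z, cudaMemcpyDeviceToHost);
    MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                   Z, counts, displs, MPI_C_DOUBLE_COMPLEX, MPI_COMM_WORLD);
  } while (gerr > eps);

  free(delta); free(counts); free(displs); free(P); free(dP); free(Z);
  cudaFree(dZ); cudaFree(dZp); cudaFree(dPc); cudaFree(dPd); cudaFree(dDelta);
  MPI_Finalize();
  return 0;
}

A typical build uses nvcc with the MPI compiler wrapper as host compiler, for
example nvcc -ccbin mpicxx, and the program is then launched with mpirun; the
exact commands depend on the installation.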