-Our OpenMP-CUDA implementation of EA algorithm is based on the hybrid OpenMP and CUDA programming model. It works as follows.
-Based on the metadata, a shared memory is used to make data evenly shared among OpenMP threads. The shared data are the solution vector $Z$, the polynomial to solve $P$, and the error vector $\Delta z$. Let (T\_omp) the number of OpenMP threads be equal to the number of GPUs, each OpenMP thread binds to one GPU, and controls a part of the shared memory, that is a part of the vector Z , that is $(n/num\_gpu)$ roots where $n$ is the polynomial's degree and $num\_gpu$ the total number of available GPUs. Each OpenMP thread copies its data from host memory to GPU’s device memory. Then every GPU will have a grid of computation organized according to the device performance and the size of data on which it runs the computation kernels. %In principle a grid is set by two parameter DimGrid, the number of block per grid, DimBloc: the number of threads per block. The following schema shows the architecture of (CUDA,OpenMP).
+Our OpenMP-CUDA implementation of the EA algorithm is based on the
+hybrid OpenMP and CUDA programming model. All the data are shared
+through OpenMP among all the OpenMP threads. The shared data are the
+solution vector $Z$, the polynomial to solve $P$, and the error
+vector $\Delta z$. The number of OpenMP threads is equal to the
+number of GPUs; each OpenMP thread binds to one GPU and controls a
+part of the shared memory. More precisely, each OpenMP thread owns a
+part of the vector $Z$, that is, $(n/num\_gpu)$ roots, where $n$ is
+the polynomial's degree and $num\_gpu$ the total number of available
+GPUs. Each GPU then has a computation grid, organized according to
+the device performance and the size of its data, on which it runs
+the computation kernels.
+
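+A minimal sketch of this setup could look as follows; the function
+name \texttt{setup\_multi\_gpu}, the array names and the use of
+\texttt{cuDoubleComplex} are only illustrative assumptions, not the
+actual code of the implementation.
+
+\begin{verbatim}
+#include <omp.h>
+#include <cuda_runtime.h>
+#include <cuComplex.h>
+
+/* Hypothetical setup: one OpenMP thread per GPU, each thread owning
+   n/num_gpu consecutive roots of the shared solution vector Z.      */
+void setup_multi_gpu(cuDoubleComplex *Z, cuDoubleComplex *P, int n)
+{
+    int num_gpu = 0;
+    cudaGetDeviceCount(&num_gpu);
+    omp_set_num_threads(num_gpu);     /* one OpenMP thread per GPU   */
+
+    #pragma omp parallel
+    {
+        int tid   = omp_get_thread_num();
+        int chunk = n / num_gpu;      /* roots owned by this thread  */
+
+        cudaSetDevice(tid);           /* bind this thread to one GPU */
+
+        /* every GPU holds all the roots and the polynomial, plus an
+           error vector for its own chunk of roots; these device
+           pointers would be kept per thread for the iterations      */
+        cuDoubleComplex *d_Z, *d_P;
+        double *d_deltaZ;
+        cudaMalloc(&d_Z, n * sizeof(cuDoubleComplex));
+        cudaMalloc(&d_P, (n + 1) * sizeof(cuDoubleComplex));
+        cudaMalloc(&d_deltaZ, chunk * sizeof(double));
+        cudaMemcpy(d_Z, Z, n * sizeof(cuDoubleComplex),
+                   cudaMemcpyHostToDevice);
+        cudaMemcpy(d_P, P, (n + 1) * sizeof(cuDoubleComplex),
+                   cudaMemcpyHostToDevice);
+
+        /* the computation grid is then sized from `chunk` and the
+           device properties before launching the kernels            */
+    }
+}
+\end{verbatim}
+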
+To compute one iteration of the EA method, each GPU performs the
+following steps. First, the roots are shared through OpenMP: each
+thread starts by copying all the previous roots into its GPU. Then
+each GPU computes an iteration of the EA method on its own roots,
+using all the other roots in the computation. At the end of the
+iteration, the updated roots are copied from the GPU back to the
+CPU, and the convergence is checked on these new roots. Finally,
+each OpenMP thread updates its own roots in the shared memory array
+containing all the roots.
+
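+The per-iteration steps could then be sketched as follows; the
+kernel \texttt{EA\_update}, the tolerance \texttt{epsilon} and the
+helper function below are hypothetical stand-ins for the actual
+computation kernels, and the device buffers are those allocated in
+the setup sketch above.
+
+\begin{verbatim}
+/* Hypothetical EA kernel: updates the `chunk` roots starting at
+   `offset` in d_Z, using all n roots, and stores their errors.     */
+__global__ void EA_update(cuDoubleComplex *d_Z, cuDoubleComplex *d_P,
+                          double *d_deltaZ, int n, int offset,
+                          int chunk);
+
+/* One EA iteration for the roots owned by this GPU/OpenMP thread.
+   Z and deltaZ are the shared host arrays; the return value tells
+   whether this chunk of roots has converged.                       */
+int ea_iteration_on_gpu(cuDoubleComplex *Z, double *deltaZ,
+                        cuDoubleComplex *d_Z, cuDoubleComplex *d_P,
+                        double *d_deltaZ, int n, int offset, int chunk,
+                        dim3 dimGrid, dim3 dimBlock, double epsilon)
+{
+    /* 1) copy ALL the current roots from the shared host vector     */
+    cudaMemcpy(d_Z, Z, n * sizeof(cuDoubleComplex),
+               cudaMemcpyHostToDevice);
+
+    /* 2) update only this GPU's roots, using all the other roots    */
+    EA_update<<<dimGrid, dimBlock>>>(d_Z, d_P, d_deltaZ,
+                                     n, offset, chunk);
+
+    /* 3) copy the updated roots and their errors back to the CPU;
+          writing into Z + offset also publishes them in the shared
+          array for the other OpenMP threads                         */
+    cudaMemcpy(Z + offset, d_Z + offset,
+               chunk * sizeof(cuDoubleComplex), cudaMemcpyDeviceToHost);
+    cudaMemcpy(deltaZ + offset, d_deltaZ,
+               chunk * sizeof(double), cudaMemcpyDeviceToHost);
+
+    /* 4) check the convergence on the new roots of this chunk       */
+    double max_err = 0.0;
+    for (int i = 0; i < chunk; i++)
+        if (deltaZ[offset + i] > max_err)
+            max_err = deltaZ[offset + i];
+    return max_err < epsilon;
+}
+\end{verbatim}
+
+In the main loop, an OpenMP barrier between iterations would ensure
+that every thread sees the roots updated by the others, and the
+per-GPU convergence flags would be combined to decide when all the
+roots have converged.
+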
+%In principle a grid is set by two parameters: DimGrid, the number of blocks per grid, and DimBlock, the number of threads per block. The following schema shows the architecture of (CUDA, OpenMP).