\section{Parallel programming models}
\label{sec2}
Our objective is to implement a root-finding polynomial algorithm on multiple GPUs. To this end, it is essential to know how to manage the CUDA contexts of the different GPUs. A direct way to control several GPUs is to use as many threads or processes as there are GPU devices. We investigate two parallel paradigms for this purpose: OpenMP and MPI. In both cases, the GPU indices are defined according to the identifiers of the OpenMP threads or the ranks of the MPI processes (a short sketch is given at the end of this section). In this section we present the three parallel programming models: OpenMP, MPI and CUDA.
\subsection{OpenMP}
Open Multi-Processing (OpenMP) is a shared-memory programming API that provides multithreading capabilities~\cite{openmp13}. OpenMP is a portable approach for parallel programming on shared-memory systems, based on compiler directives that can be inserted in order to parallelize loops. In this way, the iterations of a loop can be distributed among the different threads, which access different data allocated in the shared memory. One of the advantages of OpenMP is its global view of the application memory address space, which allows relatively fast development of parallel applications and easier maintenance. However, it is often difficult to reach high performance on large-scale applications. Although the use of OpenMP threads combined with data explicitly managed with MPI can be considered, this approach undermines the advantages of OpenMP.
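As an illustration, the following minimal sketch (a hypothetical example, not taken from our implementation) shows how the iterations of a loop can be distributed among the OpenMP threads with a single compiler directive:

\begin{verbatim}
#include <stdio.h>

int main(void)
{
  static double v[1000000];
  int i, n = 1000000;
  /* The directive distributes the loop iterations
     among the OpenMP threads; all threads access
     the vector v in the shared memory. */
  #pragma omp parallel for
  for (i = 0; i < n; i++)
    v[i] = 2.0 * v[i];
  printf("done\n");
  return 0;
}
\end{verbatim}

Compiled with OpenMP support (e.g. \texttt{gcc -fopenmp}), the loop runs in parallel; without it, the directive is simply ignored and the code remains sequential.

\subsection{MPI}
The Message Passing Interface (MPI) is a standardized library specification for parallel programming on distributed-memory systems. An MPI application consists of several processes with separate address spaces, each identified by a rank within a communicator. The processes exchange data explicitly, through point-to-point operations such as \texttt{MPI\_Send} and \texttt{MPI\_Recv} or through collective operations such as \texttt{MPI\_Bcast} and \texttt{MPI\_Reduce}. Unlike OpenMP, MPI programs can run on clusters of interconnected nodes, at the cost of managing the data distribution and the communications explicitly.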
\subsection{CUDA}
CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{CUDA15} for GPUs. It provides a high-level GPGPU-based programming model for programming GPUs for general-purpose computations and non-graphics applications. The GPU is viewed as an accelerator, such that the data-parallel operations of a CUDA program running on a CPU are off-loaded onto the GPU and executed by the latter. The data-parallel operations executed by GPUs are called kernels. The same kernel is executed in parallel by a large number of threads organized in grids of thread blocks, such that each GPU multiprocessor executes one or more thread blocks in SIMD fashion (Single Instruction, Multiple Data) and, in turn, each core of a multiprocessor executes one or more threads within a block. Threads within a block can cooperate by sharing data through a fast shared memory and can coordinate their execution through synchronization points. In contrast, within a grid of thread blocks, there is no synchronization between blocks. The GPU only works on data located in the global memory, and the final results of the kernel executions must be transferred out of the GPU. In the GPU, the global memory has a lower bandwidth than the shared memory associated with each multiprocessor. Thus, in CUDA programming, it is necessary to design carefully the arrangement of the thread blocks in order to ensure low latency and a proper use of the shared memory, and the global memory accesses should be minimized.
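To fix ideas, here is a minimal hypothetical sketch (not the kernel of our implementation) showing how a kernel is defined and launched on a grid of thread blocks, each thread updating one element of a vector stored in the global memory:

\begin{verbatim}
#include <cuda_runtime.h>

/* Kernel executed in parallel by all the threads of the
   grid: each thread computes its global index and updates
   one element of the vector in the global memory. */
__global__ void scale(double *v, int n, double a)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)            /* guard against extra threads */
    v[i] = a * v[i];
}

int main(void)
{
  int n = 1 << 20;
  double *d_v;
  cudaMalloc((void **)&d_v, n * sizeof(double));
  /* ... transfer the input data to d_v ... */
  int threads = 256;                         /* block size */
  int blocks = (n + threads - 1) / threads;  /* grid size  */
  scale<<<blocks, threads>>>(d_v, n, 2.0);
  cudaDeviceSynchronize();
  /* ... transfer the results out of the GPU ... */
  cudaFree(d_v);
  return 0;
}
\end{verbatim}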
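To combine these models in a multi-GPU setting, each OpenMP thread or each MPI process selects its own GPU before launching any kernel. The following sketch only illustrates this idea, assuming at most one thread or process per GPU; the function names are ours and purely illustrative:

\begin{verbatim}
#include <omp.h>
#include <mpi.h>
#include <cuda_runtime.h>

/* OpenMP version: one thread per GPU; the GPU index
   is the identifier of the thread. */
void multi_gpu_openmp(void)
{
  int ngpus;
  cudaGetDeviceCount(&ngpus);
  #pragma omp parallel num_threads(ngpus)
  {
    cudaSetDevice(omp_get_thread_num());
    /* ... each thread launches kernels on its GPU ... */
  }
}

/* MPI version: one process per GPU; the GPU index is
   derived from the rank of the process (MPI_Init is
   assumed to have been called). */
void multi_gpu_mpi(void)
{
  int rank, ngpus;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  cudaGetDeviceCount(&ngpus);
  cudaSetDevice(rank % ngpus);
  /* ... each process launches kernels on its GPU ... */
}
\end{verbatim}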
\section{The EA algorithm on a single GPU}
\label{sec3}
\subsection{EA parallel implementation on CUDA}
Like any parallel code, a GPU parallel implementation first requires identifying the sequential tasks and the parallelizable parts of the sequential version of the