From 610b7d4815a2fd9055e42d8615c88f262b2c7fba Mon Sep 17 00:00:00 2001
From: asider
Date: Mon, 28 Dec 2015 22:26:26 +0100
Subject: [PATCH] revu s2

---
 paper.tex | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/paper.tex b/paper.tex
index 9810500..83d482a 100644
--- a/paper.tex
+++ b/paper.tex
@@ -461,10 +461,7 @@ This paper is organized as follows, in section 2 we recall the Ehrlich-Aberth me
 \subsection{OpenMP}
 Open Multi-Processing (OpenMP) is a shared memory architecture API that provides multi thread capacity~\cite{openmp13}. OpenMP is a portable approach for parallel programming on shared memory systems based on compiler directives, that can be included in order
-to parallelize a loop. In this way, a set of loops can be distributed along the different threads that will access to different data allo-
-cated in local shared memory. One of the advantages of OpenMP is its global view of application memory address space that allows relatively fast development of parallel applications with easier maintenance. However, it is often difficult to get high rates of
-performance in large scale applications. Although, in OpenMP a usage of threads ids and managing data explicitly as done in an MPI
-code can be considered, it defeats the advantages of OpenMP.
+to parallelize a loop. In this way, a set of loops can be distributed among the different threads that will access different data allocated in the local shared memory. One of the advantages of OpenMP is its global view of the application memory address space, which allows relatively fast development of parallel applications with easier maintenance. However, it is often difficult to obtain high performance in large-scale applications. Although the explicit use of thread identifiers and explicit data management, as done in MPI code, can be considered in OpenMP, this approach undermines the advantages of OpenMP.
 %\subsection{OpenMP}
 %L'article en Français Programmation multiGPU – OpenMP versus MPI
 %OpenMP is a shared memory programming API based on threads from
@@ -477,20 +474,20 @@ code can be considered, it defeats the advantages of OpenMP.
 %have private memory areas [6].

 \subsection{MPI}
- The library MPI allows to use a distributed memory architecture. The various processes have their own environment of execution and execute their codes in a asynchronous way, according to the model MIMD (Multiple Instruction streams, Multiple Dated streams); they communicate and synchronize by exchanges of messages~\cite{Peter96}. MPI messages are explicitly sent, while the exchanges are implicit within the framework of a programming multi-thread (OpenMP/Pthreads).
+The MPI (Message Passing Interface) library makes it possible to write programs that run on a distributed memory architecture. The various processes have their own execution environment and execute their code in an asynchronous way, according to the MIMD model (Multiple Instruction streams, Multiple Data streams); they communicate and synchronize by exchanging messages~\cite{Peter96}. MPI messages are sent explicitly, while the exchanges are implicit within the framework of a multi-thread programming environment such as OpenMP or Pthreads.

 \subsection{CUDA}%L'article en anglais Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications
- CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{NVIDIA12}. The
-unit of execution in CUDA is called a thread. Each thread executes the kernel by the streaming processors in parallel. In CUDA,
-a group of threads that are executed together is called thread blocks, and the computational grid consists of a grid of thread
-blocks. Additionally, a thread block can use the shared memory on a single multiprocessor as while as the grid executes a single
+CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{NVIDIA12}. The
+unit of execution in CUDA is called a thread. Each thread executes a kernel on the streaming processors in parallel. In CUDA,
+a group of threads that are executed together is called a thread block, and the computation is organized as a grid of thread
+blocks. Additionally, a thread block can use the shared memory of a single multiprocessor while the grid executes a single
 CUDA program logically in parallel. Thus in CUDA programming, it is necessary to design carefully the arrangement of the thread
 blocks in order to ensure low latency and a proper usage of shared memory, since it can be shared only in a thread block scope.
 The effective bandwidth of each memory space depends on the memory access pattern. Since the global memory has lower bandwidth
 than the shared memory, the global memory accesses should be minimized.

-We introduced three paradigms of parallel programming. Our objective consist to implement an algorithm of root finding polynomial on multiple GPUs. It primordial to know how manage CUDA context of different GPUs. A direct method for controlling the various GPU is to use as many threads or processes that GPU. We can choose the GPU index based on the identifier of OpenMP thread or the rank of the MPI process. Both approaches will be created.
+We have introduced three paradigms of parallel programming. Our objective is to implement a polynomial root-finding algorithm on multiple GPUs, so it is essential to know how to manage the CUDA contexts of the different GPUs. A direct method for controlling the various GPUs is to use as many threads or processes as there are GPU devices. We can choose the GPU index based on the OpenMP thread identifier or on the rank of the MPI process. Both approaches will be investigated; minimal sketches of the three paradigms and of this device selection are given below.
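+
+To make the three paradigms concrete before moving on, we give minimal, hypothetical sketches; none of this code is taken from our implementation. First, OpenMP: a single compiler directive distributes the iterations of a loop among the threads, which all access data in the shared address space:
+\begin{verbatim}
+/* Illustrative sketch only, not the paper's implementation. */
+#include <omp.h>
+
+void axpy(int n, double a, const double *x, double *y)
+{
+    /* Iterations are split among the team of threads; x and y
+       live in shared memory and need no explicit exchange.    */
+    #pragma omp parallel for
+    for (int i = 0; i < n; i++)
+        y[i] = a * x[i] + y[i];
+}
+\end{verbatim}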
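+
+An equally minimal MPI sketch (hypothetical as well): each process runs its own copy of the program, and the processes cooperate only through explicit messages, here a single integer sent from process 0 to process 1:
+\begin{verbatim}
+/* Illustrative sketch only, not the paper's implementation. */
+#include <mpi.h>
+
+int main(int argc, char **argv)
+{
+    int rank, value = 0;
+    MPI_Init(&argc, &argv);
+    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+    if (rank == 0) {            /* process 0 sends explicitly...    */
+        value = 42;
+        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
+    } else if (rank == 1) {     /* ...process 1 receives explicitly */
+        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
+                 MPI_STATUS_IGNORE);
+    }
+    MPI_Finalize();
+    return 0;
+}
+\end{verbatim}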
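+
+For CUDA, the following hypothetical kernel illustrates the execution model described above: each thread handles one element, and each thread block stages its elements in shared memory, which is visible only within that block:
+\begin{verbatim}
+/* Illustrative sketch only, not the paper's implementation.       */
+__global__ void scale(const double *in, double *out, double a, int n)
+{
+    __shared__ double tile[256];   /* shared memory, block scope    */
+    int i = blockIdx.x * blockDim.x + threadIdx.x;
+    tile[threadIdx.x] = (i < n) ? in[i] : 0.0;  /* global -> shared */
+    __syncthreads();          /* tile now visible to the whole block */
+    if (i < n)
+        out[i] = a * tile[threadIdx.x];         /* shared -> global */
+}
+/* Launched over a grid of blocks of 256 threads each, e.g.:
+   scale<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0, n);            */
+\end{verbatim}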
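+
+Finally, a sketch of the device selection itself, under the assumption of one OpenMP thread per GPU (the helper name is illustrative): each thread binds its CUDA context to one device before any allocation or kernel launch:
+\begin{verbatim}
+/* Illustrative sketch only, not the paper's implementation.    */
+#include <omp.h>
+#include <cuda_runtime.h>
+
+void bind_threads_to_gpus(void)   /* hypothetical helper */
+{
+    int ngpus = 0;
+    cudaGetDeviceCount(&ngpus);   /* number of visible CUDA devices */
+    #pragma omp parallel num_threads(ngpus)
+    {
+        /* One OpenMP thread per GPU: thread id selects the device. */
+        cudaSetDevice(omp_get_thread_num());
+        /* ...allocate device memory and launch kernels here...     */
+    }
+}
+\end{verbatim}
+With MPI the same scheme uses the process rank instead, e.g. calling cudaSetDevice(rank \% ngpus) after MPI\_Comm\_rank.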

 \section{The EA algorithm on single GPU}
 \subsection{the EA method}
-- 
2.39.5