From 7620333f88cf490eac21e8a1c1e1d29a6934533f Mon Sep 17 00:00:00 2001
From: Kahina
Date: Wed, 30 Dec 2015 06:58:45 +0100
Subject: [PATCH] Update

---
 paper.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/paper.tex b/paper.tex
index 2f43639..fbdcac7 100644
--- a/paper.tex
+++ b/paper.tex
@@ -477,7 +477,7 @@ to parallelize a loop. In this way, a set of loops can be distributed along the
 The MPI (Message Passing Interface) library makes it possible to create programs that run on distributed-memory architectures. The processes have
 their own execution environments and execute their code asynchronously, following the MIMD model (Multiple Instruction streams, Multiple Data streams);
 they communicate and synchronise by exchanging messages~\cite{Peter96}. MPI messages are sent explicitly, whereas the exchanges are implicit in a multi-threaded programming environment such as OpenMP or Pthreads.
 \subsection{CUDA}%The English-language article: Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications
-CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{NVIDIA12}. The
+CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA~\cite{CUDA10}. The
 unit of execution in CUDA is called a thread; each thread executes a kernel in parallel on the streaming processors. A group of threads that execute together is called a thread block,
 and thread blocks are organised into a computational grid. Additionally, a thread block can use the shared memory of a single multiprocessor while the grid executes a single
-- 
2.39.5
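
As a companion to the MPI paragraph in the hunk above, here is a minimal sketch of the explicit message passing it describes: each process runs asynchronously in its own address space and synchronises only through explicit send/receive calls. This is illustrative only; the file name "ring.c", the payload value, and the two-rank setup are assumptions, not taken from the paper.

/* Minimal sketch of explicit message passing, as described in the MPI
 * paragraph above: each process runs asynchronously in its own address
 * space and synchronises only through explicit messages. Build with a
 * command such as "mpicc ring.c -o ring" and run with at least two
 * ranks; the file name and payload value are illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identity   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank == 0 && size > 1) {
        token = 42;                         /* arbitrary payload */
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", token);
    }

    MPI_Finalize();
    return 0;
}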
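
Likewise, the CUDA paragraph in the hunk describes threads executing a kernel, threads grouped into blocks, blocks forming the grid, and per-block shared memory. The sketch below illustrates that hierarchy under stated assumptions: the kernel name "scale", the block size of 256, and the scaling factor are made up for the example and do not come from the paper.

/* Minimal sketch of the CUDA execution hierarchy described above:
 * many threads execute the same kernel, threads are grouped into
 * blocks, blocks form the grid, and each block can stage data in
 * on-chip shared memory. Kernel name, sizes, and the scale factor
 * are illustrative. */
#include <cstdio>

#define BLOCK_SIZE 256

__global__ void scale(const float *in, float *out, float alpha, int n)
{
    __shared__ float tile[BLOCK_SIZE];          /* per-block shared memory */
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f; /* stage one element       */
    __syncthreads();                            /* synchronise the block   */
    if (i < n)
        out[i] = alpha * tile[threadIdx.x];
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float)); /* visible to host and device */
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    int blocks = (n + BLOCK_SIZE - 1) / BLOCK_SIZE; /* grid of thread blocks */
    scale<<<blocks, BLOCK_SIZE>>>(in, out, 2.0f, n);
    cudaDeviceSynchronize();                    /* wait for the kernel */

    printf("out[0] = %f\n", out[0]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}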