Using CUDA\index{CUDA}, GPU kernel executions are nonblocking, and GPU/CPU data
transfers\index{CUDA!data transfer}
can be blocking or nonblocking operations. All GPU kernel executions and CPU/GPU
data transfers are associated with ``streams'',\index{CUDA!stream} and all operations on the same stream
are serialized. When transferring data from the CPU to the GPU, then running GPU
computations, and finally transferring results from the GPU to the CPU, there is
a natural synchronization and serialization if these operations are achieved on
the same stream.
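As an illustrative sketch of this in-order behavior (all names here are hypothetical, and the host buffers should be page-locked with \texttt{cudaMallocHost} for the transfers to be truly asynchronous), the three operations issued on one stream execute strictly in issue order:

```cuda
// Sketch (hypothetical names): transfer in, compute, transfer out,
// all issued on the same stream, so CUDA serializes them in issue order.
cudaStream_t stream;
cudaStreamCreate(&stream);

cudaMemcpyAsync(d_in, h_in, size, cudaMemcpyHostToDevice, stream);
myKernel<<<grid, block, 0, stream>>>(d_in, d_out); // starts after the copy completes
cudaMemcpyAsync(h_out, d_out, size, cudaMemcpyDeviceToHost, stream);

cudaStreamSynchronize(stream); // CPU blocks here until the whole sequence is done
cudaStreamDestroy(stream);
```

All three calls return immediately on the CPU; only the final synchronization blocks, which is what leaves the CPU free to do other work in the meantime.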
When CPU/GPU data transfers are not negligible compared to GPU computations, it
can be worthwhile to overlap internode CPU computations with a \emph{GPU
  sequence}\index{GPU!sequence} including CPU/GPU data transfers and GPU computations (see
\Fig{fig:ch6p1overlapseqsequence}). The algorithmic issues of this approach are
straightforward, but their implementation requires explicit CPU multithreading,
synchronization, and CPU data buffer duplication. We need to implement two
\Lst{algo:ch6p1overlapstreamsequence} introduces the generic MPI+OpenMP+CUDA
code, explicitly overlapping MPI communications with
streamed GPU sequences\index{GPU!streamed sequence}.
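A minimal skeleton of this idea (not the listing itself; all names, buffers, and the neighbor ranks are hypothetical) can dedicate one OpenMP thread to MPI communications while a second thread drives the streamed GPU sequence on duplicated CPU buffers:

```cuda
// Skeleton sketch (hypothetical names): thread 0 performs internode MPI
// exchanges on dedicated CPU buffers while thread 1 runs the GPU sequence.
#pragma omp parallel num_threads(2)
{
  if (omp_get_thread_num() == 0) {
    // Communication thread: halo exchange with left/right neighbors.
    MPI_Sendrecv(sendBuf, n, MPI_DOUBLE, right, 0,
                 recvBuf, n, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  } else {
    // GPU thread: CPU->GPU transfer, kernel, GPU->CPU transfer on one stream.
    cudaMemcpyAsync(d_in, h_in, size, cudaMemcpyHostToDevice, stream);
    computeKernel<<<grid, block, 0, stream>>>(d_in, d_out);
    cudaMemcpyAsync(h_out, d_out, size, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);
  }
}
// Implicit OpenMP barrier: both the MPI exchange and the GPU sequence are done.
```

The implicit barrier at the end of the \texttt{parallel} region provides the CPU-side synchronization mentioned above, and the communication buffers must be distinct from the GPU-sequence buffers so the two threads never race on the same data.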
%\begin{algorithm}
% \caption{Generic scheme explicitly overlapping MPI communications with streamed sequences of CUDA
This solution is not as generic as \Lst{algo:ch6p1overlapseqsequence}.
\subsection{Interleaved communications-transfers-computations overlapping}
Many algorithms do not support splitting data transfers and kernel calls, and
cannot exploit CUDA streams, for example, when each GPU thread requires access to