From 1874c46934f4ba7e8c2013d3829f65309456d292 Mon Sep 17 00:00:00 2001 From: couturie Date: Tue, 6 Aug 2013 22:01:50 +0200 Subject: [PATCH] new --- BookGPU/Chapters/chapter1/ch1.tex | 12 +-- BookGPU/Chapters/chapter12/ch12.tex | 88 +++++++++++----------- BookGPU/Chapters/chapter13/ch13.tex | 82 ++++++++++---------- BookGPU/Chapters/chapter17/ch17.tex | 4 +- BookGPU/Chapters/chapter18/ch18.tex | 4 +- BookGPU/Chapters/chapter19/ch19.tex | 18 ++--- BookGPU/Chapters/chapter2/ch2.tex | 20 ++--- BookGPU/Chapters/chapter3/ch3.tex | 6 +- BookGPU/Chapters/chapter4/ch4.tex | 8 +- BookGPU/Chapters/chapter5/ch5.tex | 2 +- BookGPU/Chapters/chapter6/PartieAsync.tex | 4 +- BookGPU/Chapters/chapter8/biblio8.bib | 45 +++++------ BookGPU/Chapters/chapter8/ch8.tex | 92 ++++++++++++----------- 13 files changed, 195 insertions(+), 190 deletions(-) diff --git a/BookGPU/Chapters/chapter1/ch1.tex b/BookGPU/Chapters/chapter1/ch1.tex index fc891b5..9c3d8af 100755 --- a/BookGPU/Chapters/chapter1/ch1.tex +++ b/BookGPU/Chapters/chapter1/ch1.tex @@ -74,7 +74,7 @@ comparison with OpenCL, interested readers may refer to~\cite{ch1:Dongarra}. \section{Architecture of current GPUs} -The architecture \index{architecture of a GPU} of current GPUs is constantly +The architecture \index{GPU!architecture of a} of current GPUs is constantly evolving. Nevertheless some trends remain constant throughout this evolution. Processing units composing a GPU are far simpler than a traditional CPU and it is much easier to integrate many computing units inside a GPU card than to do @@ -232,11 +232,11 @@ will explain that. \section{Memory hierarchy} -The memory hierarchy of GPUs\index{memory~hierarchy} is different from that of CPUs. In practice, there are registers\index{memory~hierarchy!registers}, local -memory\index{memory~hierarchy!local~memory}, shared -memory\index{memory~hierarchy!shared~memory}, cache -memory\index{memory~hierarchy!cache~memory}, and global -memory\index{memory~hierarchy!global~memory}. +The memory hierarchy of GPUs\index{memory hierarchy} is different from that of CPUs. In practice, there are registers\index{memory hierarchy!registers}, local +memory\index{memory hierarchy!local memory}, shared +memory\index{memory hierarchy!shared memory}, cache +memory\index{memory hierarchy!cache memory}, and global +memory\index{memory hierarchy!global memory}. As previously mentioned each thread can access its own registers. It is diff --git a/BookGPU/Chapters/chapter12/ch12.tex b/BookGPU/Chapters/chapter12/ch12.tex index 3843597..a514438 100755 --- a/BookGPU/Chapters/chapter12/ch12.tex +++ b/BookGPU/Chapters/chapter12/ch12.tex @@ -47,7 +47,7 @@ CPU cluster and on a GPU cluster of solving large sparse linear systems. %%--------------------------%% \section{Krylov iterative methods} \label{ch12:sec:02} -Let us consider the following system of $n$ linear equations\index{Sparse~linear~system} +Let us consider the following system of $n$ linear equations\index{sparse linear system} in $\mathbb{R}$: \begin{equation} Ax=b, @@ -57,7 +57,7 @@ where $A\in\mathbb{R}^{n\times n}$ is a sparse nonsingular square matrix, $x\in\ is the solution vector, $b\in\mathbb{R}^{n}$ is the right-hand side, and $n\in\mathbb{N}$ is a large integer number. 
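To make the notion of a sparse linear system concrete before the solvers are introduced, the short C sketch below shows a matrix stored in the common CSR (compressed sparse row) layout together with the residual $r=b-Ax$ on which the convergence tests of this chapter are based. It is an illustration only: the structure and function names are not taken from the chapter's code, and a GPU-friendlier layout (the HYB format) is discussed later in this chapter.

\begin{lstlisting}[caption={}]
/* Illustrative CSR container: row_start has n+1 entries; col_index and
   value hold the nnz nonzero entries of the matrix, row by row. */
typedef struct {
    int n, nnz;
    int *row_start;   /* size n+1 */
    int *col_index;   /* size nnz */
    double *value;    /* size nnz */
} csr_matrix;

/* r = b - A*x: the vector whose norm appears in the convergence test. */
void residual(const csr_matrix *A, const double *x,
              const double *b, double *r)
{
    for (int i = 0; i < A->n; i++) {
        double ax = 0.0;
        for (int j = A->row_start[i]; j < A->row_start[i + 1]; j++)
            ax += A->value[j] * x[A->col_index[j]];
        r[i] = b[i] - ax;
    }
}
\end{lstlisting}

Only the nonzero entries and their positions are stored, which is what makes very large systems tractable in memory at all.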
-The iterative methods\index{Iterative~method} for solving the large sparse linear system~(\ref{ch12:eq:01}) +The iterative methods\index{iterative method} for solving the large sparse linear system~(\ref{ch12:eq:01}) proceed by successive iterations of a same block of elementary operations, during which an infinite number of approximate solutions $\{x_k\}_{k\geq 0}$ is computed. Indeed, from an initial guess $x_0$, an iterative method determines at each iteration $k>0$ an approximate @@ -68,20 +68,20 @@ x^{*}=\lim\limits_{k\to\infty}x_{k}=A^{-1}b. \end{equation} The number of iterations necessary to reach the exact solution $x^{*}$ is not known beforehand and can be infinite. In practice, an iterative method often finds an approximate solution $\tilde{x}$ -after a fixed number of iterations and/or when a given convergence criterion\index{Convergence} +after a fixed number of iterations and/or when a given convergence criterion\index{convergence} is satisfied as follows: \begin{equation} \|b-A\tilde{x}\| < \varepsilon, \label{ch12:eq:03} \end{equation} -where $\varepsilon<1$ is the required convergence tolerance threshold\index{Convergence!Tolerance~threshold}. +where $\varepsilon<1$ is the required convergence tolerance threshold\index{convergence!tolerance threshold}. Some of the most iterative methods that have proven their efficiency for solving large sparse -linear systems are those called \textit{Krylov subspace methods}~\cite{ch12:ref1}\index{Iterative~method!Krylov~subspace}. +linear systems are those called \textit{Krylov subspace methods}~\cite{ch12:ref1}\index{iterative method!Krylov subspace}. In the present chapter, we describe two Krylov methods which are widely used: the CG method (conjugate gradient method) and the GMRES method (generalized minimal residual method). In practice, the Krylov subspace methods are usually used with preconditioners that allow the improvement of their -convergence. So, in what follows, the CG and GMRES methods are used to solve the left-preconditioned\index{Sparse~linear~system!Preconditioned} +convergence. So, in what follows, the CG and GMRES methods are used to solve the left-preconditioned\index{sparse linear system!preconditioned} sparse linear system: \begin{equation} M^{-1}Ax=M^{-1}b, @@ -99,14 +99,14 @@ It is one of the well-known iterative methods to solve large sparse linear syste can be adapted to solve nonlinear equations and optimization problems. However, it can only be applied to problems with positive definite symmetric matrices. -The main idea of the CG method\index{Iterative~method!CG} is the computation of a sequence of approximate -solutions $\{x_k\}_{k\geq 0}$ in a Krylov subspace\index{Iterative~method!Krylov~subspace} of order $k$ as +The main idea of the CG method\index{iterative method!CG} is the computation of a sequence of approximate +solutions $\{x_k\}_{k\geq 0}$ in a Krylov subspace\index{iterative method!Krylov~subspace} of order $k$ as follows: \begin{equation} x_k \in x_0 + \mathcal{K}_k(A,r_0), \label{ch12:eq:04} \end{equation} -such that the Galerkin condition\index{Galerkin~condition} must be satisfied: +such that the Galerkin condition\index{Galerkin condition} must be satisfied: \begin{equation} r_k \bot \mathcal{K}_k(A,r_0), \label{ch12:eq:05} @@ -181,15 +181,15 @@ the recurrences~(\ref{ch12:eq:08}) and~(\ref{ch12:eq:09}) allow the deduction th \end{algorithm} Algorithm~\ref{ch12:alg:01} shows the main key points of the preconditioned CG method. 
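For readers who prefer plain code to pseudocode, the host-side C sketch below follows the same recurrences. It is only an illustration: the matrix-vector product is dense, a diagonal (Jacobi) preconditioner stands in for $M^{-1}$, the stopping test uses the unpreconditioned residual, and all names are made up; it is not the GPU implementation developed in Section~\ref{ch12:sec:03}.

\begin{lstlisting}[caption={}]
#include <math.h>

/* Dense stand-ins for the sparse kernels used in this chapter. */
static void matvec(int n, const double *A, const double *x, double *y) {
    for (int i = 0; i < n; i++) {
        y[i] = 0.0;
        for (int j = 0; j < n; j++) y[i] += A[i * n + j] * x[j];
    }
}
static double dot(int n, const double *u, const double *v) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += u[i] * v[i];
    return s;
}

/* Preconditioned CG with M = diag(A); returns the iteration count.
   For brevity the work vectors assume n <= 64. */
int pcg(int n, const double *A, const double *b, double *x,
        double eps, int maxiter)
{
    double r[64], z[64], p[64], q[64];
    matvec(n, A, x, q);
    for (int i = 0; i < n; i++) r[i] = b[i] - q[i];          /* r0 = b - A.x0 */
    for (int i = 0; i < n; i++) z[i] = r[i] / A[i * n + i];  /* z0 = M^-1.r0  */
    for (int i = 0; i < n; i++) p[i] = z[i];                 /* p0 = z0       */
    double rz = dot(n, r, z);
    int k = 0;
    while (k < maxiter && sqrt(dot(n, r, r)) >= eps) {
        matvec(n, A, p, q);                                  /* q = A.p_k     */
        double alpha = rz / dot(n, q, p);                    /* step length   */
        for (int i = 0; i < n; i++) x[i] += alpha * p[i];    /* x_{k+1}       */
        for (int i = 0; i < n; i++) r[i] -= alpha * q[i];    /* r_{k+1}       */
        for (int i = 0; i < n; i++) z[i] = r[i] / A[i * n + i];
        double rz_new = dot(n, r, z);
        double beta = rz_new / rz;                           /* conjugation   */
        for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
        rz = rz_new;
        k++;
    }
    return k;
}
\end{lstlisting}

The elementary operations appearing here (dot products, AXPY-like vector updates, the matrix-vector product, and the application of $M^{-1}$) are exactly the ones that Section~\ref{ch12:sec:03} maps onto a GPU cluster; Algorithm~\ref{ch12:alg:01} remains the reference formulation.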
It allows -the solving the left-preconditioned\index{Sparse~linear~system!Preconditioned} sparse linear system~(\ref{ch12:eq:11}). +the solving the left-preconditioned\index{sparse linear system!preconditioned} sparse linear system~(\ref{ch12:eq:11}). In this algorithm, $\varepsilon$ is the convergence tolerance threshold, $maxiter$ is the maximum number of iterations, and $(\cdot,\cdot)$ defines the dot product between two vectors in $\mathbb{R}^{n}$. At every iteration, a direction vector $p_k$ is determined, so that it is orthogonal to the preconditioned residual $z_k$ and to the direction vectors $\{p_i\}_{i0}$ in -a Krylov subspace\index{Iterative~method!Krylov~subspace} $\mathcal{K}_k$ as follows: +a Krylov subspace\index{iterative method!Krylov subspace} $\mathcal{K}_k$ as follows: \begin{equation} \begin{array}{ll} x_k \in x_0 + \mathcal{K}_k(A, v_1),& v_1=\frac{r_0}{\|r_0\|_2}, \end{array} \label{ch12:eq:12} \end{equation} -so that the Petrov-Galerkin condition\index{Petrov-Galerkin~condition} is satisfied: +so that the Petrov-Galerkin condition\index{Petrov-Galerkin condition} is satisfied: \begin{equation} \begin{array}{ll} r_k \bot A \mathcal{K}_k(A, v_1). \end{array} \label{ch12:eq:13} \end{equation} -GMRES uses the Arnoldi process~\cite{ch12:ref5}\index{Iterative~method!Arnoldi~process} to construct an -orthonormal basis $V_k$ for the Krylov subspace $\mathcal{K}_k$ and an upper Hessenberg matrix\index{Hessenberg~matrix} +GMRES uses the Arnoldi process~\cite{ch12:ref5}\index{iterative method!Arnoldi process} to construct an +orthonormal basis $V_k$ for the Krylov subspace $\mathcal{K}_k$ and an upper Hessenberg matrix\index{Hessenberg matrix} $\bar{H}_k$ of order $(k+1)\times k$: \begin{equation} \begin{array}{ll} @@ -311,16 +311,16 @@ $V$ to $m$ orthogonal vectors. \end{algorithm} Algorithm~\ref{ch12:alg:02} shows the key points of the GMRES method with restarts. -It solves the left-preconditioned\index{Sparse~linear~system!Preconditioned} sparse linear +It solves the left-preconditioned\index{sparse linear system!preconditioned} sparse linear system~(\ref{ch12:eq:11}), such that $M$ is the preconditioning matrix. At each iteration -$k$, GMRES uses the Arnoldi process\index{Iterative~method!Arnoldi~process} (defined from +$k$, GMRES uses the Arnoldi process\index{iterative method!Arnoldi process} (defined from line~$7$ to line~$17$) to construct a basis $V_m$ of $m$ orthogonal vectors and an upper -Hessenberg matrix\index{Hessenberg~matrix} $\bar{H}_m$ of size $(m+1)\times m$. Then, it +Hessenberg matrix\index{Hessenberg matrix} $\bar{H}_m$ of size $(m+1)\times m$. Then, it solves the linear least-squares problem of size $m$ to find the vector $y\in\mathbb{R}^{m}$ which minimizes at best the residual norm (line~$18$). Finally, it computes an approximate solution $x_m$ in the Krylov subspace spanned by $V_m$ (line~$19$). The GMRES algorithm is stopped when the residual norm is sufficiently small ($\|r_m\|_2<\varepsilon$) and/or the -maximum number of iterations\index{Convergence!Maximum~number~of~iterations} ($maxiter$) +maximum number of iterations\index{convergence!maximum number of iterations} ($maxiter$) is reached. @@ -329,11 +329,11 @@ is reached. %%--------------------------%% \section{Parallel implementation on a GPU cluster} \label{ch12:sec:03} -In this section, we present the parallel algorithms of both iterative CG\index{Iterative~method!CG} -and GMRES\index{Iterative~method!GMRES} methods for GPU clusters. 
The implementation is performed on +In this section, we present the parallel algorithms of both iterative CG\index{iterative method!CG} +and GMRES\index{iterative method!GMRES} methods for GPU clusters. The implementation is performed on a GPU cluster composed of different computing nodes, such that each node is a CPU core managed by one MPI (message passing interface) process and equipped with a GPU card. The parallelization of these algorithms is carried out by -using the MPI communication routines between the GPU computing nodes\index{Computing~node} and the +using the MPI communication routines between the GPU computing nodes\index{computing node} and the CUDA (compute unified device architecture) programming environment inside each node. In what follows, the algorithms of the iterative methods are called iterative solvers. @@ -400,15 +400,15 @@ solve the least-squares problem, and a kernel to update the elements of the solu vector $x$. The least-squares problem in the GMRES method is solved by performing a QR factorization -on the Hessenberg matrix\index{Hessenberg~matrix} $\bar{H}_m$ with plane rotations and, +on the Hessenberg matrix\index{Hessenberg matrix} $\bar{H}_m$ with plane rotations and, then, solving the triangular system by backward substitutions to compute $y$. Consequently, solving the least-squares problem on the GPU is not efficient. Indeed, the triangular solves are not easy to parallelize and inefficient on GPUs. However, the least-squares problem to solve in the GMRES method with restarts has, generally, a very small size $m$. Therefore, we develop an inexpensive kernel which must be executed by a single CUDA thread. -The most important operation in CG\index{Iterative~method!CG} and GMRES\index{Iterative~method!GMRES} -methods is the SpMV multiplication (sparse matrix-vector multiplication)\index{SpMV~multiplication}, +The most important operation in CG\index{iterative method!CG} and GMRES\index{iterative method!GMRES} +methods is the SpMV multiplication (sparse matrix-vector multiplication)\index{SpMV multiplication}, because it is often an expensive operation in terms of execution time and memory space. Moreover, it requires taking care of the storage format of the sparse matrix in the memory. Indeed, the naive storage, row-by-row or column-by-column, of a sparse matrix @@ -416,12 +416,12 @@ can cause a significant waste of memory space and execution time. In addition, t nature of the matrix often leads to irregular memory accesses to read the matrix nonzero values. So, the computation of the SpMV multiplication on GPUs can involve noncoalesced accesses to the global memory, which slows down its performances even more. One of the -most efficient compressed storage formats\index{Compressed~storage~format} of sparse -matrices on GPUs is the HYB (hybrid)\index{Compressed~storage~format!HYB} format~\cite{ch12:ref7}. +most efficient compressed storage formats\index{compressed storage format} of sparse +matrices on GPUs is the HYB (hybrid)\index{compressed storage format!HYB} format~\cite{ch12:ref7}. It is a combination of ELLpack (ELL) and Coordinate (COO) formats. Indeed, it stores -a typical number of nonzero values per row in ELL\index{Compressed~storage~format!ELL} +a typical number of nonzero values per row in ELL\index{compressed storage format!ELL} format and the remaining entries of exceptional rows in COO format. 
It combines the efficiency -of ELL due to the regularity of its memory accesses and the flexibility of COO\index{Compressed~storage~format!COO} +of ELL due to the regularity of its memory accesses and the flexibility of COO\index{compressed storage format!COO} which is insensitive to the matrix structure. Consequently, we use the HYB kernel~\cite{ch12:ref8} developed by NVIDIA to implement the SpMV multiplication of CG and GMRES methods on GPUs. Moreover, to avoid the noncoalesced accesses to the high-latency global memory, we fill @@ -434,10 +434,10 @@ the elements of the iterate vector $x$ in the cached texture memory. \label{ch12:sec:03.03} All the computing nodes of the GPU cluster execute in parallel the same iterative solver (Algorithm~\ref{ch12:alg:01} or Algorithm~\ref{ch12:alg:02}) adapted to GPUs, but on their -own portions of the sparse linear system\index{Sparse~linear~system}: $M^{-1}_iA_ix_i=M^{-1}_ib_i$, +own portions of the sparse linear system\index{sparse linear system}: $M^{-1}_iA_ix_i=M^{-1}_ib_i$, $0\leq i0$, the fixed point mapping $F_{\gamma}$ of the projected Richardson method\index{Iterative~method!Projected~Richardson} +$\gamma>0$, the fixed point mapping $F_{\gamma}$ of the projected Richardson method\index{iterative method!projected Richardson} is defined as follows: \begin{equation} U^{*} = F_{\gamma}(U^{*}) = P_K(U^{*} - \gamma(\mathcal{A}.U^{*} - G)). @@ -237,7 +237,7 @@ of the projected Richardson method~\cite{ch13:ref6}. Let $\alpha\in\mathbb{N}$ be a positive integer. We consider that the space $E=\displaystyle\prod_{i=1}^{\alpha} E_i$ is a product of $\alpha$ subspaces $E_i$ where $i\in\{1,\ldots,\alpha\}$. Note that $E_i=\mathbb{R}^{m_i}$, -where $\displaystyle\sum_{i=1}^{\alpha} m_{i}=M$, is also a Hilbert space\index{Hilbert~space} +where $\displaystyle\sum_{i=1}^{\alpha} m_{i}=M$, is also a Hilbert space\index{Hilbert space} in which $\scalprod{.}{.}_i$ denotes the scalar product and $|.|_i$ the associated norm, for all $i\in\{1,\ldots,\alpha\}$. Then, for all $u,v\in E$, $\scalprod{u}{v}=\displaystyle\sum_{i=1}^{\alpha}\scalprod{u_i}{v_i}_i$ is the scalar product on $E$. @@ -255,7 +255,7 @@ Assume that the convex set $K=\displaystyle\prod_{i=1}^{\alpha}K_{i}$, such that and $K_i$ is a closed convex set. Let also $G=(G_1,\ldots,G_{\alpha})\in E$; for any $U\in E$, $P_K(U)=(P_{K_1}(U_1),\ldots,P_{K_{\alpha}}(U_{\alpha}))$ is the projection of $U$ on $K$ where $\forall i\in\{1,\ldots,\alpha\},P_{K_i}$ is the projector from $E_i$ onto -$K_i$. So, the fixed point mapping of the projected Richardson method~(\ref{ch13:eq:10})\index{Iterative~method!Projected~Richardson} +$K_i$. So, the fixed point mapping of the projected Richardson method~(\ref{ch13:eq:10})\index{iterative method!projected Richardson} can be written in the following way: \begin{equation} \forall U\in E\mbox{,~}\forall i\in\{1,\ldots,\alpha\}\mbox{,~}F_{i,\gamma}(U) = P_{K_i}(U_i - \gamma(\mathcal{A}_i.U - G_i)). @@ -299,12 +299,12 @@ and $\forall j\in\{1,\ldots,\alpha\}$, \label{ch13:eq:15} \end{equation} -The previous asynchronous scheme\index{Asynchronous} of the projected Richardson +The previous asynchronous scheme\index{asynchronous} of the projected Richardson method models computations that are carried out in parallel without order or synchronization (according to the behavior of the parallel iterative method) and describes a subdomain method without overlapping. 
It is a general model that takes into account all possible situations of parallel computations and nonblocking message -passing. So, the synchronous iterative scheme\index{Synchronous} is defined by +passing. So, the synchronous iterative scheme\index{synchronous} is defined by \begin{equation} \forall j\in\{1,\ldots,\alpha\} \mbox{,~} \forall p\in\mathbb{N} \mbox{,~} \rho_j(p)=p. \label{ch13:eq:16} @@ -322,12 +322,12 @@ each component of the vector must be relaxed an infinite number of times. The ch relaxed components to be used in the computational process may be guided by any criterion, and in particular, a natural criterion is to pickup the most recently available values of the components computed by the processors. Furthermore, the asynchronous -iterations are implemented by means of nonblocking MPI communication subroutines\index{MPI~subroutines!Nonblocking} +iterations are implemented by means of nonblocking MPI communication subroutines\index{MPI subroutines!nonblocking} (asynchronous communications). The important property ensuring the convergence of the parallel projected Richardson method, both synchronous and asynchronous algorithms, is the fact that $\mathcal{A}$ -is an M-matrix. Moreover, the convergence\index{Convergence} proceeds from a result +is an M-matrix. Moreover, the convergence\index{convergence} proceeds from a result of~\cite{ch13:ref6}. Indeed, there exists a value $\gamma_0>0$, such that $\forall\gamma\in ]0,\gamma_0[$, the parallel iterations~(\ref{ch13:eq:13}), (\ref{ch13:eq:14}), and~(\ref{ch13:eq:15}), associated to the fixed point mapping $F_\gamma$~(\ref{ch13:eq:12}), converge to the @@ -348,7 +348,7 @@ of data, at each iteration between the GPU computing nodes, can be either synchr or asynchronous using the MPI communication subroutines, whereas inside each GPU node, a CUDA parallelization is performed. -Let $S$ denote the number of computing nodes\index{Computing~node} on the GPU cluster, +Let $S$ denote the number of computing nodes\index{computing node} on the GPU cluster, where a computing node is composed of CPU core holding one MPI process and a GPU card. So, before starting computations, the obstacle problem of size $(NX\times NY\times NZ)$ is split into $S$ parallelepipedic subproblems, each for a node (MPI process, GPU), as @@ -366,14 +366,14 @@ exchanges at subdomain boundaries compared to a naive $z$-axis-wise partitioning All the computing nodes of the GPU cluster execute in parallel the Algorithm~\ref{ch13:alg:01} on a three-dimensional subproblems of size $(NX\times ny\times nz)$. -This algorithm gives the main key points for solving an obstacle problem\index{Obstacle~problem} +This algorithm gives the main key points for solving an obstacle problem\index{obstacle problem} defined in a three-dimensional domain, where $A$ is the discretization matrix, $G$ is the right-hand side, and $U$ is the solution vector. After the initialization step, all the data generated from the partitioning operation are copied from the CPU memories to the GPU global memories to be processed on the GPUs. Next, the algorithm uses $NbSteps$ time steps to solve the global obstacle problem. In fact, it uses a parallel algorithm adapted to GPUs from the projected Richardson iterative method for solving the nonlinear -systems\index{Nonlinear} of the obstacle problem. This function is defined by {\it Solve()} +systems\index{nonlinear} of the obstacle problem. This function is defined by {\it Solve()} in Algorithm~\ref{ch13:alg:01}. 
At every time step, the initial guess $U^0$ for the iterative algorithm is set to the solution found at the previous time step. Moreover, the right-hand side $G$ is computed as follows: \[G = \frac{1}{k}.U^{prev} + F\] where $k$ is the time step, @@ -419,12 +419,12 @@ Copy the solution $U$ back from GPU memory\; \end{algorithm} As are many other iterative methods, the algorithm of the projected Richardson -method\index{Iterative~method!Projected~Richardson} is based on algebraic +method\index{iterative method!projected Richardson} is based on algebraic functions operating on vectors and/or matrices, which are more efficient on parallel computers when they work on large vectors. Its parallel implementation on the GPU cluster is carried out so that the GPUs execute the vector operations as kernels and the CPUs execute the serial codes, supervise the kernel executions -and the data exchanges with the neighboring nodes\index{Neighboring~node}, and +and the data exchanges with the neighboring nodes\index{neighboring node}, and supply the GPUs with data. Algorithm~\ref{ch13:alg:02} shows the main key points of the parallel iterative algorithm (function $Solve()$ in Algorithm~\ref{ch13:alg:01}). All the vector operations inside the main loop ({\bf repeat} ... {\bf until}) @@ -477,7 +477,7 @@ The first operation of this function is implemented as kernels to be performed b \end{itemize} As mentioned previously, we develop the \emph{synchronous} and \emph{asynchronous} algorithms of the projected Richardson method. Obviously, in this scope, the -synchronous\index{Synchronous} or asynchronous\index{Asynchronous} communications +synchronous\index{synchronous} or asynchronous\index{asynchronous} communications refer to the communications between the CPU cores (MPI processes) on the GPU cluster, in order to exchange the vector elements associated to subdomain boundaries. For the memory copies between a CPU core and its GPU, we use the synchronous communication @@ -486,19 +486,19 @@ in the synchronous algorithm and the asynchronous ones: \verb+cublasSetVectorAsy and \verb+cublasGetVectorAsync()+ in the asynchronous algorithm. Moreover, we use the communication routines of the MPI library to carry out the data exchanges between the neighboring nodes. We use the following communication routines: \verb+MPI_Isend()+ -and \verb+MPI_Irecv()+ to perform nonblocking\index{MPI~subroutines!Nonblocking} +and \verb+MPI_Irecv()+ to perform nonblocking\index{MPI subroutines!nonblocking} sends and receives, respectively. For the synchronous algorithm, we use the MPI routine \verb+MPI_Waitall()+ which puts the MPI process of a computing node in blocking status until all data exchanges with neighboring nodes (sends and receives) are completed. In contrast, for the asynchronous algorithms, we use the MPI routine \verb+MPI_Test()+ which tests the completion of a data exchange (send or receives) -without putting the MPI process in blocking status\index{MPI~subroutines!Blocking}. +without putting the MPI process in blocking status\index{MPI subroutines!blocking}. The function $Compute\_New\_Vector\_Elements()$ (line~$6$ in Algorithm~\ref{ch13:alg:02}) computes, at each iteration, the new elements of the iterate vector $U$. Its general code is presented in Listing~\ref{ch13:list:01} (CPU function). 
The iterations of the projected -Richardson method\index{Iterative~method!Projected~Richardson}, based on those of the Jacobi -method\index{Iterative~method!Jacobi}, are defined as follows: +Richardson method\index{iterative method!projected Richardson}, based on those of the Jacobi +method\index{iterative method!Jacobi}, are defined as follows: \begin{equation} \begin{array}{ll} u^{p+1}(x,y,z) =& \frac{1}{Center}(g(x,y,z) - (Center\cdot u^{p}(x,y,z) + \\ @@ -578,8 +578,8 @@ efficient to fill them on the low-latency registers of each thread. The function $Convergence()$ (line~$11$ in Algorithm~\ref{ch13:alg:02}) allows us to detect the convergence of the parallel iterative algorithm and is based on -the tolerance threshold\index{Convergence!Tolerance~threshold} $\varepsilon$ -and the maximum number of relaxations\index{Convergence!Maximum~number~of~relaxations} +the tolerance threshold\index{convergence!tolerance threshold} $\varepsilon$ +and the maximum number of relaxations\index{convergence!maximum number of relaxations} $MaxRelax$. We take into account the number of relaxations since that of iterations cannot be computed in the asynchronous case. Indeed, a relaxation is the update~(\ref{ch13:eq:13}) of a local iterate vector $U_i$ according to $F_i$. Then, counting the number @@ -587,7 +587,7 @@ of relaxations is possible in both synchronous and asynchronous cases. On the other hand, an iteration is the update of at least all vector components with $F_i$. -In the synchronous\index{Synchronous} algorithm, the global convergence is detected +In the synchronous\index{synchronous} algorithm, the global convergence is detected when the maximal value of the absolute error, $error$, is sufficiently small and/or the maximum number of relaxations, $MaxRelax$, is reached, as follows: $$ @@ -598,11 +598,11 @@ AllReduce(error,\hspace{0.1cm}maxerror,\hspace{0.1cm}MAX); \\ conv \leftarrow true; \end{array} $$ -where the function $AllReduce()$ uses the MPI global reduction subroutine\index{MPI~subroutines!Global} +where the function $AllReduce()$ uses the MPI global reduction subroutine\index{MPI subroutines!global} \verb+MPI_Allreduce()+ to compute the maximal value, $maxerror$, among the local absolute errors, $error$, of all computing nodes, and $p$ (in Algorithm~\ref{ch13:alg:02}) is used as a counter of the local relaxations carried out by a computing node. In -the asynchronous\index{Asynchronous} algorithms, the global convergence is detected +the asynchronous\index{asynchronous} algorithms, the global convergence is detected when all computing nodes locally converge. For this, we use a token ring architecture around which a boolean token travels, in one direction, from a computing node to another. Starting from node $0$, the boolean token is set to $true$ by node $i$ if the local @@ -617,7 +617,7 @@ sends a stop message (end of parallel solving) to all computing nodes in the clu %%--------------------------%% \section{Experimental tests on a GPU cluster} \label{ch13:sec:05} -The GPU cluster\index{GPU~cluster} of tests that we used in this chapter is an $20GB/s$ +The GPU cluster\index{GPU!cluster} of tests that we used in this chapter is an $20GB/s$ Infiniband network of six machines. Each machine is a Quad-Core Xeon E5530 CPU running at $2.4$GHz. It provides a RAM memory of $12$GB with a memory bandwidth of $25.6$GB/s and it is equipped with two NVIDIA Tesla C1060 GPUs. 
A Tesla GPU contains in total $240$ cores @@ -719,7 +719,7 @@ $800^{3}$ & $3,950.87$ & $899,088$ & $56.22$ The fourth and seventh columns of Table~\ref{ch13:tab:02} show the relative gains obtained by executing the parallel algorithms on the cluster of $12$ GPUs instead -on the cluster of $24$ CPU cores. We compute the relative gain\index{Relative~gain} +on the cluster of $24$ CPU cores. We compute the relative gain\index{relative gain} $\tau$ as a ratio between the execution time $T_{cpu}$ spent on the CPU cluster over that $T_{gpu}$ spent on the GPU cluster: \[\tau=\frac{T_{cpu}}{T_{gpu}}.\] We can see from these ratios that solving large obstacle problems is faster on the GPU cluster @@ -745,13 +745,13 @@ consequently it also depends on the number of computing nodes. %%--------------------------%% \section{Red-black ordering technique} \label{ch13:sec:06} -As is wellknown, the Jacobi method\index{Iterative~method!Jacobi} is characterized -by a slow convergence\index{Convergence} rate compared to some iterative methods\index{Iterative~method} -(for example, Gauss-Seidel method\index{Iterative~method!Gauss-Seidel}). So, in this +As is wellknown, the Jacobi method\index{iterative method!Jacobi} is characterized +by a slow convergence\index{convergence} rate compared to some iterative methods\index{iterative method} +(for example, Gauss-Seidel method\index{iterative method!Gauss-Seidel}). So, in this section, we present some solutions to reduce the execution time and the number of relaxations and, more specifically, to speed up the convergence of the parallel projected Richardson method on the GPU cluster. We propose to use the point red-black -ordering technique\index{Iterative~method!Red-Black~ordering} to accelerate the +ordering technique\index{iterative method!red-black ordering} to accelerate the convergence. This technique is often used to increase the parallelism of iterative methods for solving linear systems~\cite{ch13:ref13,ch13:ref14,ch13:ref15}. We apply it to the projected Richardson method as a compromise between the Jacobi @@ -773,7 +773,7 @@ This technique can be implemented on the GPU in two different manners: However, in both solutions, for each memory transaction, only half of the memory segment addressed by a half-warp is used. So, the computation of the red and black vector elements leads to using twice the initial number of memory transactions. Then, -we apply the point red-black ordering\index{Iterative~method!Red-Black~ordering} +we apply the point red-black ordering\index{iterative method!red-black ordering} accordingly to the $y$-coordinate, as is shown in Figure~\ref{ch13:fig:06.02}. In this case, the vector elements having even $y$-coordinate are computed in parallel using the values of those having odd $y$-coordinate and then viceversa. Moreover, @@ -860,9 +860,9 @@ number of computing nodes on the cluster. This is due to the fact that the ratio between the time of the computation over that of the communication is reduced when the computations are performed on GPUs. Indeed, GPUs compute faster than CPUs and communications are more time-consuming. In this context, asynchronous algorithms -are more scalable than synchronous ones. So, with large scale GPU clusters, synchronous\index{Synchronous} +are more scalable than synchronous ones. So, with large scale GPU clusters, synchronous\index{synchronous} algorithms might be more penalized by communications, as can be deduced from Figure~\ref{ch13:fig:07}. 
-That is why we think that asynchronous\index{Asynchronous} iterative algorithms +That is why we think that asynchronous\index{asynchronous} iterative algorithms are all the more interesting in this case. diff --git a/BookGPU/Chapters/chapter17/ch17.tex b/BookGPU/Chapters/chapter17/ch17.tex index f69219e..93204ce 100755 --- a/BookGPU/Chapters/chapter17/ch17.tex +++ b/BookGPU/Chapters/chapter17/ch17.tex @@ -48,7 +48,7 @@ and grids are often identified as the main solution to increase simulation performance but GPUs are also a promising technology with an attractive performance/cost ratio. -Conceptually a MAS\index{Multi-Agent System} is a distributed system +Conceptually a MAS\index{multi-agent system} is a distributed system as it favors the definition and description of large sets of individuals, the agents, that can be run in parallel. As a large set of agents could have the same behavior, a Single Instruction Multiple @@ -298,7 +298,7 @@ collembolas in fields and forests. It is based on a diffusion algorithm which illustrates the case of agents with a simple behavior and few synchronization problems. -\subsection{The Collembola model\index{Collembola model}} +\subsection{The Collembola model\index{collembola model}} \label{ch17:subsec:collembolamodel} The Collembola model is an example of multi-agent system using GIS diff --git a/BookGPU/Chapters/chapter18/ch18.tex b/BookGPU/Chapters/chapter18/ch18.tex index d161c76..7c2a393 100755 --- a/BookGPU/Chapters/chapter18/ch18.tex +++ b/BookGPU/Chapters/chapter18/ch18.tex @@ -93,7 +93,7 @@ with basic notions on topology (see for instance~\cite{Devaney}). Chaos theory studies the behavior of dynamical systems that are perfectly predictable, yet appear to be wildly amorphous and meaningless. -Chaotic systems\index{chaotic systems} are highly sensitive to initial conditions, +Chaotic systems\index{chaotic!systems} are highly sensitive to initial conditions, which is popularly referred to as the butterfly effect. In other words, small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes, in general rendering long-term prediction impossible \cite{kellert1994wake}. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved \cite{kellert1994wake}. That is, the deterministic nature of these systems does not make them predictable \cite{kellert1994wake,Werndl01032009}. This behavior is known as deterministic chaos, or simply chaos. It has been well-studied in mathematics and @@ -149,7 +149,7 @@ When $f$ is chaotic, then the system $(\mathcal{X}, f)$ is chaotic and quoting D -\subsection{Chaotic iterations}\index{chaotic iterations} +\subsection{Chaotic iterations}\index{chaotic!iterations} \label{subsection:Chaotic iterations} Let us now introduce an example of a dynamical systems family that has diff --git a/BookGPU/Chapters/chapter19/ch19.tex b/BookGPU/Chapters/chapter19/ch19.tex index 90dee23..40912bd 100755 --- a/BookGPU/Chapters/chapter19/ch19.tex +++ b/BookGPU/Chapters/chapter19/ch19.tex @@ -7,7 +7,7 @@ \section{Introduction} \label{ch19:intro} -The Number Field Sieve (NFS)\index{iterative methods!Number Field Sieve} is the current state-of-the-art integer factorization method. It requires the solution of a large sparse linear system over Galois Field GF(2) (called the linear algebra step). 
The Block Wiedemann\index{Number Field Sieve!Block Wiedemann} (BW)\cite{ch19:bw} algorithm can be used to solve such a large sparse linear system efficiently using iterative sparse matrix vector multiplication (SpMV). +The Number Field Sieve (NFS)\index{iterative method!number field sieve} is the current state-of-the-art integer factorization method. It requires the solution of a large sparse linear system over Galois Field GF(2) (called the linear algebra step). The Block Wiedemann\index{number field sieve!block Wiedemann} (BW)\cite{ch19:bw} algorithm can be used to solve such a large sparse linear system efficiently using iterative sparse matrix vector multiplication (SpMV). Recent integer factorization efforts have been using CPU clusters to solve the large sparse linear system \cite{ch19:kilobit,ch19:rsa768}. The RSA-768 factorization \cite{ch19:rsa768}, for example, reported a runtime of 3 months for the linear algebra step on a cluster with 48 AMD dual hex-core CPUs. Previous work on parallelizing the linear algebra step focused on using CPU clusters and grids \cite{ch19:aoki,ch19:hwang,ch19:grid,ch19:hetero768}. In this chapter, we present a CUDA approach that can be used to accelerate the costly iterative SpMV operation for matrices derived from NFS. @@ -18,7 +18,7 @@ SpMV on the GPU has been explored previously in several papers \cite{ch19:nvidia \section{Block Wiedemann algorithm} \label{ch19:block-wiedemann} -The BW algorithm heuristically finds $n$ vectors in the kernel space \index{Number Field Sieve!kernel space}of a $d \times d$ binary matrix $B$; $n$ is one of two parameters $m, n$, called blocking factors\index{Number Field Sieve!blocking factors}. BW consists of the following steps: +The BW algorithm heuristically finds $n$ vectors in the kernel space \index{number field sieve!kernel space}of a $d \times d$ binary matrix $B$; $n$ is one of two parameters $m, n$, called blocking factors\index{number field sieve!blocking factors}. BW consists of the following steps: \begin{itemize} \item \textbf{Step 1 (BW1):} Compute the matrix sequence @@ -28,7 +28,7 @@ A_i = x \cdot B^i \cdot y, \forall i=1,...,{\frac{d}{m}}+{ \frac{d}{n} }+O(1), \end{equation} where $x,y$ are randomly chosen binary matrices of size $m \times d$ and $d \times n$, respectively. -\item \textbf{Step 2 (BW2):} The Berlekamp-Massey\index{Number Field Sieve!Berlekamp-Massey} algorithm \cite{ch19:Thome:subqad} is used to compute a generating polynomial of the matrix sequence $A$ from BW1 in the form +\item \textbf{Step 2 (BW2):} The Berlekamp-Massey\index{number field sieve!Berlekamp-Massey} algorithm \cite{ch19:Thome:subqad} is used to compute a generating polynomial of the matrix sequence $A$ from BW1 in the form \begin{equation} F(X)= \sum_{i=1}^{{ \frac{d}{n} }+O(1)} {C_i \cdot X^i}, \end{equation} @@ -100,7 +100,7 @@ We can treat $x$, $y$, and $S_i$ as vectors of block width $m$ or $n$. Assuming \label{fig:ex_matrix} \end{figure} - \subsubsection*{Coordinate list (COO)\index{Compressed storage format!COO}} + \subsubsection*{Coordinate list (COO)\index{compressed storage format!COO}} For each nonzero, both its column and row indices are explicitly stored. The Cusp implementation \cite{ch19:cusp} stores elements in sorted order of row indices ensuring that entries with the same row index are stored contiguously. \begin{lstlisting}[caption={}] @@ -109,7 +109,7 @@ For each nonzero, both its column and row indices are explicitly stored. 
The Cus coo.value = {3, 1, 5, 2, 4, 6, 8, 10, 9, 7, 11} \end{lstlisting} - \subsubsection*{Compressed sparse row (CSR)\index{Compressed storage format!CSR}} Nonzeros are sorted by the row index, and only their column indices are explicitly stored in a column array. Additionally, the vector $row\_start$ stores indices of the first nonzero element of each row in the column array. + \subsubsection*{Compressed sparse row (CSR)\index{compressed storage format!CSR}} Nonzeros are sorted by the row index, and only their column indices are explicitly stored in a column array. Additionally, the vector $row\_start$ stores indices of the first nonzero element of each row in the column array. \begin{lstlisting}[caption={}] csr.row_start = {0, 1, 3, 5, 8, 9, 12} @@ -117,7 +117,7 @@ For each nonzero, both its column and row indices are explicitly stored. The Cus csr.value = {3, 1, 5, 2, 4, 6, 8, 10, 9, 7, 11} \end{lstlisting} - \subsubsection*{Ellpack (ELL)\index{Compressed storage format!ELL}} Let $K$ be the maximum number of nonzero elements in any row of the matrix. Then, for each row, ELL stores exactly $K$ elements (extra padding is required for rows that contain fewer than $K$ nonzero elements). Only column indices are required to store in an array, the row index can be implied since exactly $K$ elements are stored per row. The Cusp implementation stores the column indices in a transposed manner so that consecutive threads can access consecutive memory addresses. + \subsubsection*{Ellpack (ELL)\index{compressed storage format!ELL}} Let $K$ be the maximum number of nonzero elements in any row of the matrix. Then, for each row, ELL stores exactly $K$ elements (extra padding is required for rows that contain fewer than $K$ nonzero elements). Only column indices are required to store in an array, the row index can be implied since exactly $K$ elements are stored per row. The Cusp implementation stores the column indices in a transposed manner so that consecutive threads can access consecutive memory addresses. \begin{lstlisting}[caption={}] ell.col_index = { @@ -130,7 +130,7 @@ For each nonzero, both its column and row indices are explicitly stored. The Cus *, *, *, 10, *, *} \end{lstlisting} - \subsubsection*{Hybrid (HYB)\index{Compressed storage format!HYB}} The HYB format heuristically computes a value $K$ and stores $K$ nonzeros per rows in the ELL format. When a row has more than $K$ non-zeros, the trailing nonzeros are stored in COO. This design decreases the storage overhead due to ELL padding elements and thus improves the overall performance. + \subsubsection*{Hybrid (HYB)\index{compressed storage format!HYB}} The HYB format heuristically computes a value $K$ and stores $K$ nonzeros per rows in the ELL format. When a row has more than $K$ non-zeros, the trailing nonzeros are stored in COO. This design decreases the storage overhead due to ELL padding elements and thus improves the overall performance. \begin{lstlisting}[caption={}] hyb.nnz_per_row = 2 hyb.ell.col_index = {2, 1, 1, 0, 2, 0, *, 4, 3, 2, *, 5} @@ -140,7 +140,7 @@ For each nonzero, both its column and row indices are explicitly stored. The Cus hyb.coo.value = {10} \end{lstlisting} - \subsubsection*{Sliced Ellpack (SLE)\index{Compressed storage format!SLE}} This format partitions the matrix into horizontal slices of $S$ adjacent rows \cite{ch19:sle}. Each slice is stored in ELLPACK format. The maximum number of nonzeros may be different for each slice. 
An additional array $slice\_start$ is used to index the first element in each slice. The matrix rows are usually sorted by the number of nonzeros per row in order to move rows with similar number of nonzeros together. + \subsubsection*{Sliced Ellpack (SLE)\index{compressed storage format!SLE}} This format partitions the matrix into horizontal slices of $S$ adjacent rows \cite{ch19:sle}. Each slice is stored in ELLPACK format. The maximum number of nonzeros may be different for each slice. An additional array $slice\_start$ is used to index the first element in each slice. The matrix rows are usually sorted by the number of nonzeros per row in order to move rows with similar number of nonzeros together. \begin{lstlisting}[caption={}] sle.slice_size = 2 sle.col_index = { @@ -177,7 +177,7 @@ The existing formats do not achieve good performance due to the special structur \section{A hybrid format for SpMV on GPUs} \label{Implementation} -As a preprocessing step, we reorder the rows of the matrix by their \emph{row weight}, in nonincreasing order. The row weight of row $j$ of $B$ is defined as the total number of nonzero elements in row $j$. We then partition the sorted matrix rows into at most four consecutive parts. Each part uses a different format. The different formats are optimized for the sparseness properties of each partition as shown in Figure \ref{fig:partitioning}. For the densest part, we use a dense format. When the matrix gets less dense, we switch to another format which we call \index{Compressed storage format!Sliced COO} Sliced COO (SCOO). SCOO has three variants, small, medium, and large. Our formats are now described in more detail. +As a preprocessing step, we reorder the rows of the matrix by their \emph{row weight}, in nonincreasing order. The row weight of row $j$ of $B$ is defined as the total number of nonzero elements in row $j$. We then partition the sorted matrix rows into at most four consecutive parts. Each part uses a different format. The different formats are optimized for the sparseness properties of each partition as shown in Figure \ref{fig:partitioning}. For the densest part, we use a dense format. When the matrix gets less dense, we switch to another format which we call \index{compressed storage format!sliced COO} Sliced COO (SCOO). SCOO has three variants, small, medium, and large. Our formats are now described in more detail. \begin{figure}[t] \centering diff --git a/BookGPU/Chapters/chapter2/ch2.tex b/BookGPU/Chapters/chapter2/ch2.tex index b330d6b..bd48e2a 100755 --- a/BookGPU/Chapters/chapter2/ch2.tex +++ b/BookGPU/Chapters/chapter2/ch2.tex @@ -23,7 +23,7 @@ are executed on a GPU. This code is in Listing~\ref{ch2:lst:ex1}. As GPUs have their own memory, the first step consists of allocating memory on -the GPU. A call to \texttt{cudaMalloc}\index{CUDA~functions!cudaMalloc} +the GPU. A call to \texttt{cudaMalloc}\index{CUDA functions!cudaMalloc} allocates memory on the GPU. The second parameter represents the size of the allocated variables, this size is expressed in bits. @@ -33,14 +33,14 @@ allocated variables, this size is expressed in bits. In this example, we want to compare the execution time of the additions of two arrays in CPU and GPU. So for both these operations, a timer is created to measure the time. CUDA proposes to manipulate timers quite easily. 
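As a side remark, the fragment below shows one common way to time a kernel with CUDA events; it is not the code of Listing~\ref{ch2:lst:ex1}, and the kernel launch line is only a placeholder, but the helper timer used in the listing follows the same create/start/stop pattern described next.

\begin{lstlisting}[caption={}]
cudaEvent_t start, stop;
float elapsed_ms = 0.0f;
cudaEventCreate(&start);            /* create the two timing events          */
cudaEventCreate(&stop);
cudaEventRecord(start, 0);          /* "start": record an event in stream 0  */
/* ... launch the addition kernel here, as in the listing ... */
cudaEventRecord(stop, 0);           /* "stop": record a second event         */
cudaEventSynchronize(stop);         /* wait until the kernel has completed   */
cudaEventElapsedTime(&elapsed_ms, start, stop);  /* elapsed time, in ms      */
cudaEventDestroy(start);
cudaEventDestroy(stop);
\end{lstlisting}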
The first -step is to create the timer\index{CUDA~functions!timer}, then to start it, and at +step is to create the timer\index{CUDA functions!timer}, then to start it, and at the end to stop it. For each of these operations a dedicated function is used. In order to compute the same sum with a GPU, the first step consists of transferring the data from the CPU (considered as the host with CUDA) to the GPU (considered as the device with CUDA). A call to \texttt{cudaMemcpy} copies the content of an array allocated in the host to the device when the fourth parameter is set -to \texttt{cudaMemcpyHostToDevice}\index{CUDA~functions!cudaMemcpy}. The first +to \texttt{cudaMemcpyHostToDevice}\index{CUDA functions!cudaMemcpy}. The first parameter of the function is the destination array, the second is the source array, and the third is the number of elements to copy (expressed in bytes). @@ -52,26 +52,26 @@ two arrays in parallel (if the number of blocks and threads per blocks is sufficient). In Listing~\ref{ch2:lst:ex1} at the beginning, a simple kernel, called \texttt{addition} is defined to compute in parallel the summation of the two arrays. With CUDA, a kernel starts with the -keyword \texttt{\_\_global\_\_} \index{CUDA~keywords!\_\_shared\_\_} which +keyword \texttt{\_\_global\_\_} \index{CUDA keywords!\_\_shared\_\_} which indicates that this kernel can be called from the C code. The first instruction in this kernel is used to compute the variable \texttt{tid} which represents the -thread index. This thread index\index{thread index} is computed according to +thread index. This thread index\index{CUDA keywords!thread index} is computed according to the values of the block index -(called \texttt{blockIdx} \index{CUDA~keywords!blockIdx} in CUDA) and of the -thread index (called \texttt{threadIdx}\index{CUDA~keywords!threadIdx} in +(called \texttt{blockIdx} \index{CUDA keywords!blockIdx} in CUDA) and of the +thread index (called \texttt{threadIdx}\index{CUDA keywords!threadIdx} in CUDA). Blocks of threads and thread indexes can be decomposed into 1 dimension, 2 dimensions, or 3 dimensions. {\bf A REGARDER} According to the dimension of manipulated data, the appropriate dimension can be useful. In our example, only one dimension is used. Then using the notation \texttt{.x}, we can access the first dimension (\texttt{.y} and \texttt{.z}, respectively allow access to the second and -third dimension). The variable \texttt{blockDim}\index{CUDA~keywords!blockDim} +third dimension). The variable \texttt{blockDim}\index{CUDA keywords!blockDim} gives the size of each block. -\section{Second example: using CUBLAS} +\section{Second example: using CUBLAS \index{CUBLAS}} \label{ch2:2ex} The Basic Linear Algebra Subprograms (BLAS) allows programmers to use efficient @@ -81,7 +81,7 @@ operations, and matrix-matri operations~\cite{ch2:journals/ijhpca/Dongarra02}. Some of those operations seem to be easy to implement with CUDA. Nevertheless, as soon as a reduction is needed, implementing an efficient reduction routine with CUDA is far from being -simple. Roughly speaking, a reduction operation\index{reduction~operation} is an +simple. Roughly speaking, a reduction operation\index{reduction operation} is an operation which combines all the elements of an array and extracts a number computed from all the elements. For example, a sum, a maximum, or a dot product are reduction operations. 
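To show how such a reduction is delegated to CUBLAS instead of being rewritten by hand, the small self-contained program below computes a dot product with \texttt{cublasSdot}. It is an illustration only, independent of this chapter's own listings; it uses the CUBLAS v2 interface (\texttt{cublas\_v2.h}) and can be compiled with \texttt{nvcc -lcublas}.

\begin{lstlisting}[caption={}]
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    /* (error checking omitted for brevity) */
    const int n = 1024;
    float h_a[n], h_b[n];
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b;
    cudaMalloc((void **)&d_a, n * sizeof(float));
    cudaMalloc((void **)&d_b, n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);
    /* copy both vectors to the device (stride 1 on both sides) */
    cublasSetVector(n, sizeof(float), h_a, 1, d_a, 1);
    cublasSetVector(n, sizeof(float), h_b, 1, d_b, 1);

    /* the reduction itself: result = sum_i h_a[i]*h_b[i] */
    float result = 0.0f;
    cublasSdot(handle, n, d_a, 1, d_b, 1, &result);
    printf("dot product = %f\n", result);   /* expected: 2048.0 */

    cublasDestroy(handle);
    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}
\end{lstlisting}

The same pattern (create a handle, transfer the data, call the routine) applies to the other CUBLAS calls used in this chapter.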
diff --git a/BookGPU/Chapters/chapter3/ch3.tex b/BookGPU/Chapters/chapter3/ch3.tex index 7078aaf..95bce29 100755 --- a/BookGPU/Chapters/chapter3/ch3.tex +++ b/BookGPU/Chapters/chapter3/ch3.tex @@ -61,7 +61,7 @@ The Makefile given in Listing \ref{lst:mkfile} shows how to adapt examples given \section{Performance measurements} As our goal is to design very fast implementations of basic image processing algorithms, we need to make quite accurate time-measurements, within the order of magnitude of $0.01$~ms. Again, the easiest way of doing so is to use the helper functions of the \textbf{cutil} library. As usual, because the durations we are measuring are short and possibly subject to non negligible variations, a good practice is to measure multiple executions and report the mean runtime. All time results given in this chapter have been obtained through 1000 calls to each kernel. -Listing \ref{lst:chronos} shows how to use the dedicated \textbf{cutil} functions \index{Cutil library!Timer usage}. Timer declaration and creation need to be performed only once while reset, start and stop functions can be used as often as necessary. Synchronization is mandatory before stopping the timer (Line 7), to avoid runtime measurement being biased. +Listing \ref{lst:chronos} shows how to use the dedicated \textbf{cutil} functions \index{Cutil library!timer usage}. Timer declaration and creation need to be performed only once while reset, start and stop functions can be used as often as necessary. Synchronization is mandatory before stopping the timer (Line 7), to avoid runtime measurement being biased. \lstinputlisting[label={lst:chronos},caption=Time measurement technique using cutil functions]{Chapters/chapter3/code/exChronos.cu} In an attempt to provide relevant speedup values, we either implemented CPU versions of the algorithms studied or used the values found in existing literature. Still, the large number and diversity of hardware platforms and GPU cards makes it impossible to benchmark every possible combination and significant differences may occur between the speedups we report and those obtained with different devices. As a reference, our developing platform details as follows: @@ -204,7 +204,7 @@ To overcome this, the most frequent choice made in efficient implementations fou As for registers, designing a generic median filter that would use only that type of memory seems difficult, due to the above mentioned 63 register-per-thread limitation. \index{register count} Yet, nothing forbids us to design fixed-size filters, each of them specific to one of the most popular window sizes. It might be worth the effort as dramatic increase in performance could be expected. -Another track to follow in order to improve performance of GPU implementations consists of hiding latencies generated by arithmetic instruction calls and memory accesses. Both can be partially hidden by introducing Instruction-Level Parallelism \index{Instruction-Level Parallelism}(ILP) and by increasing the data count outputted by each thread. Though such techniques may seem to break the NVIDIA occupancy paradigm, they can lead to dramatically higher data throughput values. +Another track to follow in order to improve performance of GPU implementations consists of hiding latencies generated by arithmetic instruction calls and memory accesses. Both can be partially hidden by introducing Instruction-Level Parallelism \index{instruction-level parallelism}(ILP) and by increasing the data count outputted by each thread. 
Though such techniques may seem to break the NVIDIA occupancy paradigm, they can lead to dramatically higher data throughput values. The following sections illustrate these ideas and detail the design of the fastest CUDA median filter known to date. \section{A 3$\times$3 median filter: using registers} @@ -252,7 +252,7 @@ In our $3\times 3$ pixel window example, the minimum register count becomes $k_9 This iterative process is illustrated in Figure \ref{fig:forgetful3}, where it achieves one entire $3\times 3$ median selection, beginning with $k_9=6$ elements. The \textit{forgetful selection} method, used in \cite{mcguire2008median}, does not imply full sorting of values, but only selecting minimum and maximum values, which, at the price of a few iteration steps ($n^2-k$), reduces arithmetic complexity. -Listing \ref{lst:medianForget1pix3} details this process where forgetful selection is achieved by use of simple 2-value swapping function ($s()$, lines 1 to 5) that swaps input values if necessary, so as to achieve the first steps of an incomplete sorting network \cite{Batcher:1968:SNA:1468075.1468121}. Moreover, whenever possible, in order to increase the ILP, \index{Instruction-Level Parallelism} successive calls to $s()$ are done with independant elements as arguments. This is illustrated by the macro definitions of lines 7 to 12 and by Figure \ref{fig:bitonic} which details the first iteration of the $5\times 5$ selection, starting with $k_{25}=14$ elements. +Listing \ref{lst:medianForget1pix3} details this process where forgetful selection is achieved by use of simple 2-value swapping function ($s()$, lines 1 to 5) that swaps input values if necessary, so as to achieve the first steps of an incomplete sorting network \cite{Batcher:1968:SNA:1468075.1468121}. Moreover, whenever possible, in order to increase the ILP, \index{instruction-level parallelism} successive calls to $s()$ are done with independant elements as arguments. This is illustrated by the macro definitions of lines 7 to 12 and by Figure \ref{fig:bitonic} which details the first iteration of the $5\times 5$ selection, starting with $k_{25}=14$ elements. \begin{figure}[b] \centering \includegraphics[width=6cm]{Chapters/chapter3/img/forgetful_selection.png} diff --git a/BookGPU/Chapters/chapter4/ch4.tex b/BookGPU/Chapters/chapter4/ch4.tex index 0a0d6cb..805de25 100644 --- a/BookGPU/Chapters/chapter4/ch4.tex +++ b/BookGPU/Chapters/chapter4/ch4.tex @@ -8,7 +8,7 @@ \section{Overview} In this chapter, after dealing with GPU median filter implementations, -we propose to explore how convolutions\index{Convolution} can be implemented on modern +we propose to explore how convolutions\index{convolution} can be implemented on modern GPUs. Widely used in digital image processing filters, the \emph{convolution operation} basically consists of taking the sum of products of elements from two 2D functions, letting one of the two functions move over @@ -81,7 +81,7 @@ This first implementation consists of a rather naive application to convolutions of the techniques applied to median filters in the previous chapter, as a reminder: texture memory used with incoming data, pinned memory with output data, optimized use of registers -while processing data and multiple output per thread\index{Multiple output per thread}. +while processing data and multiple output per thread\index{multiple output per thread}. 
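Before detailing those optimizations, it may help to keep in mind what the unoptimized operation looks like. The CUDA kernel below is a minimal sketch, not the chapter's kernel: names are illustrative, everything resides in global memory, each thread produces a single output pixel, and image borders are simply clamped.

\begin{lstlisting}[caption={}]
// Naive 2D convolution: one output pixel per thread, global memory only.
// The square mask has radius h_r, i.e. (2*h_r+1)x(2*h_r+1) coefficients
// stored row by row (mask flipping is omitted here).
__global__ void convo_naive(const float *in, float *out, const float *mask,
                            int width, int height, int h_r)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float sum = 0.0f;
    for (int dy = -h_r; dy <= h_r; dy++) {
        for (int dx = -h_r; dx <= h_r; dx++) {
            int ix = min(max(x + dx, 0), width - 1);   // clamp to image edge
            int iy = min(max(y + dy, 0), height - 1);
            sum += in[iy * width + ix]
                 * mask[(dy + h_r) * (2 * h_r + 1) + (dx + h_r)];
        }
    }
    out[y * width + x] = sum;
}
\end{lstlisting}

Every optimized variant discussed below computes this same sum of products; the differences lie in where the pixels and the mask coefficients are fetched from and in how many output values each thread produces.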
One significant difference lies in the fact that the median filter uses only one parameter, the size of the window mask, which can be hard-coded, while a convolution mask requires referring to several parameters; hard-coding @@ -239,7 +239,7 @@ However, our technique requires writing one kernel per mask size, which can be s \lstinputlisting[label={lst:convoGene8x8pL3},caption=CUDA kernel achieving a $3\times 3$ convolution operation with the mask in symbol memory and direct data fetches in texture memory]{Chapters/chapter4/code/convoGene8x8pL3.cu} -\subsection{Using shared memory to store prefetched data\index{Prefetching}.} +\subsection{Using shared memory to store prefetched data\index{prefetching}.} \index{memory~hierarchy!shared~memory} A more convenient way of coding a convolution kernel is to use shared memory to perform a prefetching stage of the whole halo before computing the convolution sums. This proves to be quite efficient and more versatile, but it obviously generates some overhead because @@ -356,7 +356,7 @@ $\mathbf{4096\times 4096}$&1.533 \\\hline \label{tab:cpyToArray} \end{table} \lstinputlisting[label={lst:convoSepSh},caption=data copy between the calls to 1D convolution kernels achieving a 2D separable convolution operation]{Chapters/chapter4/code/convoSepSh.cu} -\lstinputlisting[label={lst:convoSepShV},caption=CUDA kernel achieving a horizontal 1D convolution operation after a preloading \index{Prefetching} of data into shared memory]{Chapters/chapter4/code/convoSepShV.cu} +\lstinputlisting[label={lst:convoSepShV},caption=CUDA kernel achieving a horizontal 1D convolution operation after a preloading \index{prefetching} of data into shared memory]{Chapters/chapter4/code/convoSepShV.cu} \lstinputlisting[label={lst:convoSepShH},caption=CUDA kernel achieving a vertical 1D convolution operation after a preloading of data into shared memory]{Chapters/chapter4/code/convoSepShH.cu} \section{Conclusion} diff --git a/BookGPU/Chapters/chapter5/ch5.tex b/BookGPU/Chapters/chapter5/ch5.tex index dea460f..fed9a80 100644 --- a/BookGPU/Chapters/chapter5/ch5.tex +++ b/BookGPU/Chapters/chapter5/ch5.tex @@ -160,7 +160,7 @@ We refer the reader to Chapter \ref{ch7} for an example of a scientific applicat \subsection{Heat conduction equation}\index{heat conduction} First, we consider a two-dimensional heat conduction problem defined on a unit square. The heat conduction equation is a parabolic partial differential diffusion equation, including both spatial and temporal derivatives. It describes how the diffusion of heat in a medium changes with time. Diffusion equations are of great importance in many fields of sciences, e.g., fluid dynamics, where the fluid motion is uniquely described by the Navier-Stokes equations, which include a diffusive viscous term~\cite{ch5:chorin1993,ch5:Ferziger1996}.%, or in financial science where diffusive terms are present in the Black-Scholes equations for estimation of option price trends~\cite{}. -The heat problem is an IVP \index{initial value problem}, it describes how the heat distribution evolves from a specified initial state. Together with homogeneous Dirichlet boundary conditions\index{boundary conditions}, the heat problem in the unit square is given as +The heat problem is an IVP \index{initial value problem}, it describes how the heat distribution evolves from a specified initial state. 
diff --git a/BookGPU/Chapters/chapter5/ch5.tex b/BookGPU/Chapters/chapter5/ch5.tex
index dea460f..fed9a80 100644
--- a/BookGPU/Chapters/chapter5/ch5.tex
+++ b/BookGPU/Chapters/chapter5/ch5.tex
@@ -160,7 +160,7 @@ We refer the reader to Chapter \ref{ch7} for an example of a scientific applicat
\subsection{Heat conduction equation}\index{heat conduction}
First, we consider a two-dimensional heat conduction problem defined on a unit square. The heat conduction equation is a parabolic partial differential diffusion equation, including both spatial and temporal derivatives. It describes how the diffusion of heat in a medium changes with time. Diffusion equations are of great importance in many fields of sciences, e.g., fluid dynamics, where the fluid motion is uniquely described by the Navier-Stokes equations, which include a diffusive viscous term~\cite{ch5:chorin1993,ch5:Ferziger1996}.%, or in financial science where diffusive terms are present in the Black-Scholes equations for estimation of option price trends~\cite{}.
-The heat problem is an IVP \index{initial value problem}, it describes how the heat distribution evolves from a specified initial state. Together with homogeneous Dirichlet boundary conditions\index{boundary conditions}, the heat problem in the unit square is given as
+The heat problem is an IVP\index{initial value problem}; it describes how the heat distribution evolves from a specified initial state. Together with homogeneous Dirichlet boundary conditions\index{boundary condition}, the heat problem in the unit square is given as
\begin{subequations}\begin{align} \frac{\partial u}{\partial t} - \kappa\nabla^2u = 0, & \qquad (x,y)\in \Omega([0,1]\times[0,1]),\quad t\geq 0, \label{ch5:eq:heateqdt}\\ u = 0, & \qquad (x,y) \in \partial\Omega,\label{ch5:eq:heateqbc}
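A minimal sketch can make the time integration of this IVP concrete: the kernel below advances the discretized problem by one explicit forward-Euler step on an $N\times N$ grid, leaving the homogeneous Dirichlet boundary values at zero. It is our own illustration (the explicit scheme, the names, and the row-major layout are assumptions made here), not code taken from the chapter or its library.
\begin{lstlisting}[caption={illustrative sketch of one explicit time step for the 2D heat equation with homogeneous Dirichlet boundaries (not the chapter's code)}]
// u and unew are N*N row-major grids covering the unit square with
// spacing h = 1/(N-1); boundary entries of both buffers are assumed to
// have been set to zero once on the host (homogeneous Dirichlet).
__global__ void heatStep(const float *u, float *unew, int N,
                         float kappa, float dt, float h)
{
  int i = blockIdx.x*blockDim.x + threadIdx.x;   // x index
  int j = blockIdx.y*blockDim.y + threadIdx.y;   // y index

  if (i < 1 || j < 1 || i > N - 2 || j > N - 2)  // interior nodes only
    return;

  float lap = (u[j*N + i - 1] + u[j*N + i + 1]      // 5-point Laplacian
             + u[(j - 1)*N + i] + u[(j + 1)*N + i]
             - 4.0f*u[j*N + i]) / (h*h);
  unew[j*N + i] = u[j*N + i] + kappa*dt*lap;        // forward Euler update
}
\end{lstlisting}
Between steps the host swaps the two buffers; for this explicit scheme the time step must respect the usual stability bound $\Delta t \leq h^2/(4\kappa)$.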
diff --git a/BookGPU/Chapters/chapter6/PartieAsync.tex b/BookGPU/Chapters/chapter6/PartieAsync.tex
index 3365b41..0253c9c 100644
--- a/BookGPU/Chapters/chapter6/PartieAsync.tex
+++ b/BookGPU/Chapters/chapter6/PartieAsync.tex
@@ -6,7 +6,7 @@ In the previous section, we have seen how to efficiently implement overlap of computations (CPU and GPU) with communications (GPU transfers and internode communications). However, we have previously shown that for some parallel iterative algorithms, it is sometimes even more efficient to use an asynchronous
-scheme of iterations\index{iterations!asynchronous} \cite{HPCS2002,ParCo05,Para10}. In that case, the nodes do
+scheme of iterations\index{iterations asynchronous} \cite{HPCS2002,ParCo05,Para10}. In that case, the nodes do
not wait for each other but they perform their iterations using the last external data they have received from the other nodes, even if this data was produced \emph{before} the previous iteration on the other nodes.
@@ -887,7 +887,7 @@ the CPU may vary depending on the application. For example, when processing data streams (pipelines), pre-processing of the next data item and/or post-processing of the previous result can be done on the CPU while the GPU is processing the current data item. In other cases, the CPU can perform \emph{auxiliary}
-computations\index{computation!auxiliary}
+computations\index{computation auxiliary}
that are not absolutely required to obtain the result but that may accelerate the entire iterative process. Another possibility would be to distribute the main computations between the GPU and CPU. However, this
diff --git a/BookGPU/Chapters/chapter8/biblio8.bib b/BookGPU/Chapters/chapter8/biblio8.bib
index 3e10526..7aadc57 100644
--- a/BookGPU/Chapters/chapter8/biblio8.bib
+++ b/BookGPU/Chapters/chapter8/biblio8.bib
@@ -8,7 +8,7 @@
@InProceedings{ch8:Carneiro_2011,
 author = {T. Carneiro and A. E. Muritiba and M. Negreiros and G. A. Lima de Campos},
- title = {A New Parallel Schema for Branch-and-Bound Algorithms Using GPGPU},
+ title = {A New Parallel Schema for Branch-and-Bound Algorithms Using {GPGPU}},
 booktitle = {23rd International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)},
 year = {2011}
}
@@ -17,14 +17,15 @@
 author = "L. G. Casado and J. A. Martínez and I. García and E. M. T. Hendrix",
 title = "Branch-and-Bound interval global optimization on shared memory multiprocessors",
 journal = "Optimization Methods and Software",
- volume = "23, No.5",
+ volume = "23",
+ number = "5",
 pages = "689-701",
 year = "2008"
}
@InProceedings{ch8:Fung,
 author = {W. Fung and I. Sham and G. Yuan and T. Aamodt},
- title = {Dynamic warp formation and scheduling for efficient gpu control flow},
+ title = {Dynamic warp formation and scheduling for efficient {GPU} control flow},
 booktitle = {{In MICRO '07: Proceedings of the 40th Annual IEEE/ACM International Symposium on Micro-architecture}},
 year = {2007},
 pages = {407-420},
@@ -43,7 +44,7 @@
@Article{ch8:Gendron_1994,
 author = {B. Gendron and T. G. Crainic},
- title = {Parallel {B}ranch and {B}ound {A}lgorithms: {S}urvey and {S}ynthesis},
+ title = {Parallel Branch and Bound Algorithms: Survey and Synthesis},
 journal = {Operations Research},
 year = {1994},
 volume = {42},
@@ -53,15 +54,15 @@
@InProceedings{ch8:Han,
 author = {T. Han and T. S. Abdelrahman},
- title = {Reducing branch divergence in GPU programs},
- booktitle = {{In Proceedings of the Fourth Workshop on General Purpose Processing on Graphics Processing Units (GPGPU-4), ACM}},
+ title = {Reducing branch divergence in {GPU} programs},
+ booktitle = {{Proceedings of the Fourth Workshop on General Purpose Processing on Graphics Processing Units (GPGPU-4), ACM}},
 year = {2011},
 publisher = {New York, USA}
}
@Article{ch8:Johnson_1954,
 author = {S. M. Johnson},
- title = {{Optimal two and three-stage production schedules with setup times included}},
+ title = {{Optimal two- and three-stage production schedules with setup times included}},
 journal = {Naval Research Logistics Quarterly},
 year = {1954},
 volume = {1},
@@ -69,16 +70,16 @@ pages = {61--68}
}
-@ARTICLE{ch8:Kurzak_2010,
+@BOOK{ch8:Kurzak_2010,
 author = "J. Kurzak and D. A. Bader and J. Dongarra.",
- title = "Scientific Computing with Multicore and Accelerators",
- journal = "Chapman \& Hall / CRC Press",
+ title = {{Scientific Computing with Multicore and Accelerators}},
+ publisher = {{Chapman \& Hall / CRC Press}},
 year = "2010"
}
@Article{ch8:Lenstra_1978,
 author = {J. K. Lenstra and B. J. Lageweg and A. H. G. Rinnooy Kan},
- title = {{A General bounding scheme for the permutation flow-shop problem}},
+ title = {{A general bounding scheme for the permutation flow-shop problem}},
 journal = {Operations Research},
 year = {1978},
 volume = {26},
@@ -92,7 +93,7 @@ TITLE = "Contributions \`a la r\'esolution de probl\`emes d'optimisation combin
HOWPUBLISHED = "LIFL, USTL",
MONTH = "November",
year = "2005",
-NOTE = "Th\`ese HDR"
+NOTE = "Habilitation to Direct Research"
}
@ARTICLE{ch8:Taillard_1993,
@@ -109,7 +110,7 @@ NOTE = "Th\`ese HDR"
@ARTICLE{ch8:JRJackson_1956,
 AUTHOR ="J. R. Jackson",
- TITLE ="An Extension of Johnson's results on Job-Lot Scheduling",
+ TITLE ="An Extension of {J}ohnson's results on Job-Lot Scheduling",
 JOURNAL ="Naval Research Logistics Quarterly",
 YEAR ="1956",
 NOTE ="3:3"
@@ -117,15 +118,15 @@ NOTE = "Th\`ese HDR"
@ARTICLE{ch8:LGMitten_1959,
 AUTHOR ="L. G. Mitten",
- TITLE ="Sequencing n jobs on two machines with arbitrary time lags",
+ TITLE ="Sequencing $n$ jobs on two machines with arbitrary time lags",
 JOURNAL ="Management Science",
 YEAR ="1959"
}
@InProceedings{ch8:Mezmaz_2007,
- author = {M. Mezmaz and N. Melab and E-G. Talbi.},
+ author = {M. Mezmaz and N. Melab and E.-G. Talbi},
 title = {A grid-enabled branch and bound algorithm for solving challenging combinatorial optimization problems},
- booktitle = {{In Proc. of 21th IEEE Intl. Parallel and Distributed Processing Symp. (IPDPS)}},
+ booktitle = {{Proceedings of the 21st IEEE International Parallel and Distributed Processing Symposium (IPDPS)}},
 year = {2007},
 month = {March},
 publisher = {Long Beach, California}
@@ -134,16 +135,17 @@ NOTE = "Th\`ese HDR"
@ARTICLE{ch8:Quinn_1990,
 author = "M. J. Quinn.",
 title = "Analysis and implementation of branch-and-bound algorithms on a hypercube multicomputer",
- journal = "IEEE transactions on computers",
- volume = "39, No3",
+ journal = "IEEE Transactions on Computers",
+ volume = "39",
+ number = "3",
 pages = "384-387",
 year = "1990"
}
@InProceedings{ch8:Zhang,
 author = {E. Z. Zhang and Y. Jiang and Z. Guo and X. Shen},
- title = {Streamlining GPU applications on the fly: thread divergence elimination through runtime thread-data remapping},
- booktitle = {{In Proceedings of the 24th ACM International Conference on Supercomputing (ICS'10), ACM.}},
+ title = {Streamlining {GPU} applications on the fly: {T}hread divergence elimination through runtime thread-data remapping},
+ booktitle = {{Proceedings of the 24th ACM International Conference on Supercomputing (ICS'10), ACM}},
 year = {2010},
 pages = {115-126},
 publisher = {New York, NY, USA}
@@ -152,7 +154,6 @@ NOTE = "Th\`ese HDR"
@misc{ch8:cuda,
 author = {{NVIDIA Corporation}},
 keywords = {CUDA},
- note = {Version 4.0},
- title = {{NVIDIA CUDA C} Programming Guide},
+ title = {{NVIDIA CUDA C} Programming Guide, Version 4.0},
 year = 2011
}
diff --git a/BookGPU/Chapters/chapter8/ch8.tex b/BookGPU/Chapters/chapter8/ch8.tex
index f79178a..e0b7ddd 100644
--- a/BookGPU/Chapters/chapter8/ch8.tex
+++ b/BookGPU/Chapters/chapter8/ch8.tex
@@ -415,16 +415,16 @@ The data access optimization challenge is to find the best mapping of the data s
\subsection{Complexity analysis of the memory usage of the lower bound }
-In this section, the characteristics of the data structures used by the lower bound function are studied in terms of sizes and access frequencies. For an efficient implementation of the LB, six data structures are required: the matrix $PTM$ of the processing times of the jobs, the matrix of lags $LM$, the Johnson's matrix $JM$, the matrix $RM$ of the earliest starting times of jobs, the matrix $QM$ of their lowest latency times, and the matrix $MM$ containing the couples of machines. The complexities of the different data structures are summarized in Table~\ref{ch8:tabMemComplex} where the columns represent, respectively, the name of the data structure, its size, and the number of times it is accessed.\\
+In this section, the characteristics of the data structures used by the lower bound function are studied in terms of sizes and access frequencies. For an efficient implementation of the LB, six data structures are required: the matrix PTM of the processing times of the jobs, the matrix of lags LM, Johnson's matrix JM, the matrix RM of the earliest starting times of jobs, the matrix QM of their lowest latency times, and the matrix MM containing the couples of machines. The complexities of the different data structures are summarized in Table~\ref{ch8:tabMemComplex}, where the columns represent, respectively, the name of the data structure, its size, and the number of times it is accessed.\\
-In the LB expression, the computation of the term $P_{Ja}^*(\jmath,M_k,M_l)$ requires the calculation of the lag of each remaining job to be scheduled on the couple $(M_k,M_l)$ of machines using its processing times on these machines (Johnson's rule with lags). Such computation is repeated for each couple $(M_k,M_l)$ of machines with $1 \leq k,l \leq m$ and $k