\section{Architecture of current GPUs}
-The architecture \index{architecture of a GPU} of current GPUs is constantly
+The architecture \index{GPU!architecture of a} of current GPUs is constantly
evolving. Nevertheless, some trends remain constant throughout this evolution.
The processing units composing a GPU are far simpler than those of a traditional CPU, and
it is much easier to integrate many computing units inside a GPU card than to do
\section{Memory hierarchy}
-The memory hierarchy of GPUs\index{memory~hierarchy} is different from that of CPUs. In practice, there are registers\index{memory~hierarchy!registers}, local
-memory\index{memory~hierarchy!local~memory}, shared
-memory\index{memory~hierarchy!shared~memory}, cache
-memory\index{memory~hierarchy!cache~memory}, and global
-memory\index{memory~hierarchy!global~memory}.
+The memory hierarchy of GPUs\index{memory hierarchy} is different from that of CPUs. In practice, there are registers\index{memory hierarchy!registers}, local
+memory\index{memory hierarchy!local memory}, shared
+memory\index{memory hierarchy!shared memory}, cache
+memory\index{memory hierarchy!cache memory}, and global
+memory\index{memory hierarchy!global memory}.
As previously mentioned, each thread can access its own registers. It is
%%--------------------------%%
\section{Krylov iterative methods}
\label{ch12:sec:02}
-Let us consider the following system of $n$ linear equations\index{Sparse~linear~system}
+Let us consider the following system of $n$ linear equations\index{sparse linear system}
in $\mathbb{R}$:
\begin{equation}
Ax=b,
is the solution vector, $b\in\mathbb{R}^{n}$ is the right-hand side, and $n\in\mathbb{N}$ is a
large integer.
-The iterative methods\index{Iterative~method} for solving the large sparse linear system~(\ref{ch12:eq:01})
+The iterative methods\index{iterative method} for solving the large sparse linear system~(\ref{ch12:eq:01})
proceed by successive iterations of the same block of elementary operations, during which a
sequence of approximate solutions $\{x_k\}_{k\geq 0}$ is computed. Indeed, from an
initial guess $x_0$, an iterative method determines at each iteration $k>0$ an approximate
\end{equation}
The number of iterations necessary to reach the exact solution $x^{*}$ is not known beforehand
and can be infinite. In practice, an iterative method often finds an approximate solution $\tilde{x}$
-after a fixed number of iterations and/or when a given convergence criterion\index{Convergence}
+after a fixed number of iterations and/or when a given convergence criterion\index{convergence}
is satisfied as follows:
\begin{equation}
\|b-A\tilde{x}\| < \varepsilon,
\label{ch12:eq:03}
\end{equation}
-where $\varepsilon<1$ is the required convergence tolerance threshold\index{Convergence!Tolerance~threshold}.
+where $\varepsilon<1$ is the required convergence tolerance threshold\index{convergence!tolerance threshold}.
Some of the iterative methods that have proven the most efficient for solving large sparse
-linear systems are those called \textit{Krylov subspace methods}~\cite{ch12:ref1}\index{Iterative~method!Krylov~subspace}.
+linear systems are those called \textit{Krylov subspace methods}~\cite{ch12:ref1}\index{iterative method!Krylov subspace}.
In the present chapter, we describe two Krylov methods which are widely used: the CG method (conjugate
gradient method) and the GMRES method (generalized minimal residual method). In practice, the
Krylov subspace methods are usually used with preconditioners that improve their
-convergence. So, in what follows, the CG and GMRES methods are used to solve the left-preconditioned\index{Sparse~linear~system!Preconditioned}
+convergence. So, in what follows, the CG and GMRES methods are used to solve the left-preconditioned\index{sparse linear system!preconditioned}
sparse linear system:
\begin{equation}
M^{-1}Ax=M^{-1}b,
can be adapted to solve nonlinear equations and optimization problems. However, it can only be applied
to problems with positive definite symmetric matrices.
-The main idea of the CG method\index{Iterative~method!CG} is the computation of a sequence of approximate
-solutions $\{x_k\}_{k\geq 0}$ in a Krylov subspace\index{Iterative~method!Krylov~subspace} of order $k$ as
+The main idea of the CG method\index{iterative method!CG} is the computation of a sequence of approximate
+solutions $\{x_k\}_{k\geq 0}$ in a Krylov subspace\index{iterative method!Krylov subspace} of order $k$ as
follows:
\begin{equation}
x_k \in x_0 + \mathcal{K}_k(A,r_0),
\label{ch12:eq:04}
\end{equation}
-such that the Galerkin condition\index{Galerkin~condition} must be satisfied:
+such that the Galerkin condition\index{Galerkin condition} must be satisfied:
\begin{equation}
r_k \bot \mathcal{K}_k(A,r_0),
\label{ch12:eq:05}
\end{algorithm}
Algorithm~\ref{ch12:alg:01} shows the main key points of the preconditioned CG method. It allows
-the solving the left-preconditioned\index{Sparse~linear~system!Preconditioned} sparse linear system~(\ref{ch12:eq:11}).
+solving the left-preconditioned\index{sparse linear system!preconditioned} sparse linear system~(\ref{ch12:eq:11}).
In this algorithm, $\varepsilon$ is the convergence tolerance threshold, $maxiter$ is the maximum
number of iterations, and $(\cdot,\cdot)$ defines the dot product between two vectors in $\mathbb{R}^{n}$.
At every iteration, a direction vector $p_k$ is determined, so that it is orthogonal to the preconditioned
residual $z_k$ and to the direction vectors $\{p_i\}_{i<k}$ previously determined (from line~$8$ to
line~$13$). Then, at lines~$16$ and~$17$, the iterate $x_k$ and the residual $r_k$ are computed using
formulas~(\ref{ch12:eq:07}) and~(\ref{ch12:eq:08}), respectively. The CG method converges after, at
-most, $n$ iterations. In practice, the CG algorithm stops when the tolerance threshold\index{Convergence!Tolerance~threshold}
-$\varepsilon$ and/or the maximum number of iterations\index{Convergence!Maximum~number~of~iterations}
+most, $n$ iterations. In practice, the CG algorithm stops when the tolerance threshold\index{convergence!tolerance threshold}
+$\varepsilon$ and/or the maximum number of iterations\index{convergence!maximum number of iterations}
$maxiter$ is reached.
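For readability, the structure of this iteration can also be sketched in plain C as follows; this is only a serial illustration of the method, not the GPU implementation discussed later, and the routines \verb+spmv()+ and \verb+apply_precond()+ are assumed to be provided elsewhere.
\begin{lstlisting}[caption={A serial sketch of the preconditioned CG iteration (illustration only)}]
#include <math.h>

/* assumed helpers (hypothetical): y = A*x and z = M^{-1}*r */
void spmv(const void *A, const double *x, double *y, int n);
void apply_precond(const void *M, const double *r, double *z, int n);

void pcg(const void *A, const void *M, const double *b, double *x, int n,
         double eps, int maxiter, double *r, double *z, double *p, double *q)
{
  spmv(A, x, q, n);
  for (int i = 0; i < n; i++) r[i] = b[i] - q[i];     /* r_0 = b - A x_0  */
  apply_precond(M, r, z, n);                          /* z_0 = M^{-1} r_0 */
  for (int i = 0; i < n; i++) p[i] = z[i];            /* p_0 = z_0        */
  double rho = 0.0;
  for (int i = 0; i < n; i++) rho += r[i] * z[i];     /* rho = (r_0, z_0) */
  for (int k = 0; k < maxiter; k++) {
    spmv(A, p, q, n);                                 /* q = A p_k        */
    double pq = 0.0;
    for (int i = 0; i < n; i++) pq += p[i] * q[i];
    double alpha = rho / pq;                          /* step length      */
    double rnorm = 0.0;
    for (int i = 0; i < n; i++) {
      x[i] += alpha * p[i];                           /* x_{k+1}          */
      r[i] -= alpha * q[i];                           /* r_{k+1}          */
      rnorm += r[i] * r[i];
    }
    if (sqrt(rnorm) < eps) break;                     /* convergence test */
    apply_precond(M, r, z, n);                        /* z_{k+1}          */
    double rho_new = 0.0;
    for (int i = 0; i < n; i++) rho_new += r[i] * z[i];
    double beta = rho_new / rho;
    for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];  /* new direction */
    rho = rho_new;
  }
}
\end{lstlisting}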
\subsection{GMRES method}
\label{ch12:sec:02.02}
The iterative GMRES method was developed by Saad and Schultz in 1986~\cite{ch12:ref3} as a generalization
-of the minimum residual method MINRES~\cite{ch12:ref4}\index{Iterative~method!MINRES}. Indeed, GMRES can
+of the minimum residual method MINRES~\cite{ch12:ref4}\index{iterative method!MINRES}. Indeed, GMRES can
be applied for solving symmetric or nonsymmetric linear systems.
-The main principle of the GMRES method\index{Iterative~method!GMRES} is to find an approximation minimizing
+The main principle of the GMRES method\index{iterative method!GMRES} is to find an approximation minimizing
the residual norm. In fact, GMRES computes a sequence of approximate solutions $\{x_k\}_{k>0}$ in
-a Krylov subspace\index{Iterative~method!Krylov~subspace} $\mathcal{K}_k$ as follows:
+a Krylov subspace\index{iterative method!Krylov subspace} $\mathcal{K}_k$ as follows:
\begin{equation}
\begin{array}{ll}
x_k \in x_0 + \mathcal{K}_k(A, v_1),& v_1=\frac{r_0}{\|r_0\|_2},
\end{array}
\label{ch12:eq:12}
\end{equation}
-so that the Petrov-Galerkin condition\index{Petrov-Galerkin~condition} is satisfied:
+so that the Petrov-Galerkin condition\index{Petrov-Galerkin condition} is satisfied:
\begin{equation}
\begin{array}{ll}
r_k \bot A \mathcal{K}_k(A, v_1).
\end{array}
\label{ch12:eq:13}
\end{equation}
-GMRES uses the Arnoldi process~\cite{ch12:ref5}\index{Iterative~method!Arnoldi~process} to construct an
-orthonormal basis $V_k$ for the Krylov subspace $\mathcal{K}_k$ and an upper Hessenberg matrix\index{Hessenberg~matrix}
+GMRES uses the Arnoldi process~\cite{ch12:ref5}\index{iterative method!Arnoldi process} to construct an
+orthonormal basis $V_k$ for the Krylov subspace $\mathcal{K}_k$ and an upper Hessenberg matrix\index{Hessenberg matrix}
$\bar{H}_k$ of size $(k+1)\times k$:
\begin{equation}
\begin{array}{ll}
\end{algorithm}
Algorithm~\ref{ch12:alg:02} shows the key points of the GMRES method with restarts.
-It solves the left-preconditioned\index{Sparse~linear~system!Preconditioned} sparse linear
+It solves the left-preconditioned\index{sparse linear system!preconditioned} sparse linear
system~(\ref{ch12:eq:11}), such that $M$ is the preconditioning matrix. At each iteration
-$k$, GMRES uses the Arnoldi process\index{Iterative~method!Arnoldi~process} (defined from
+$k$, GMRES uses the Arnoldi process\index{iterative method!Arnoldi process} (defined from
line~$7$ to line~$17$) to construct a basis $V_m$ of $m$ orthonormal vectors and an upper
-Hessenberg matrix\index{Hessenberg~matrix} $\bar{H}_m$ of size $(m+1)\times m$. Then, it
+Hessenberg matrix\index{Hessenberg matrix} $\bar{H}_m$ of size $(m+1)\times m$. Then, it
solves the linear least-squares problem of size $m$ to find the vector $y\in\mathbb{R}^{m}$
which minimizes the residual norm (line~$18$). Finally, it computes an approximate
solution $x_m$ in the Krylov subspace spanned by $V_m$ (line~$19$). The GMRES algorithm is
stopped when the residual norm is sufficiently small ($\|r_m\|_2<\varepsilon$) and/or the
-maximum number of iterations\index{Convergence!Maximum~number~of~iterations} ($maxiter$)
+maximum number of iterations\index{convergence!maximum number of iterations} ($maxiter$)
is reached.
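To make the structure of the Arnoldi process more concrete, a serial sketch (modified Gram-Schmidt variant) is given below; it is only an illustration, not the GPU code of this chapter, and the helper \verb+spmv_precond()+, assumed to compute $w=M^{-1}Av$, is hypothetical.
\begin{lstlisting}[caption={A serial sketch of the Arnoldi process (illustration only)}]
#include <math.h>

/* assumed helper (hypothetical): w = M^{-1} A v */
void spmv_precond(double *w, const double *v, int n);

/* V holds m+1 basis vectors of size n; H is the (m+1) x m Hessenberg
   matrix stored row-major; w is a work vector of size n. The breakdown
   case (norm = 0) is not handled in this sketch.                        */
void arnoldi(int n, int m, double **V, double *H, double *w)
{
  for (int j = 0; j < m; j++) {
    spmv_precond(w, V[j], n);                            /* w = M^{-1} A v_j   */
    for (int i = 0; i <= j; i++) {
      double h = 0.0;
      for (int k = 0; k < n; k++) h += w[k] * V[i][k];   /* h_{i,j} = (w, v_i) */
      H[i * m + j] = h;
      for (int k = 0; k < n; k++) w[k] -= h * V[i][k];   /* orthogonalization  */
    }
    double norm = 0.0;
    for (int k = 0; k < n; k++) norm += w[k] * w[k];
    norm = sqrt(norm);
    H[(j + 1) * m + j] = norm;
    for (int k = 0; k < n; k++) V[j + 1][k] = w[k] / norm;  /* v_{j+1} */
  }
}
\end{lstlisting}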
%%--------------------------%%
\section{Parallel implementation on a GPU cluster}
\label{ch12:sec:03}
-In this section, we present the parallel algorithms of both iterative CG\index{Iterative~method!CG}
-and GMRES\index{Iterative~method!GMRES} methods for GPU clusters. The implementation is performed on
+In this section, we present the parallel algorithms of both iterative CG\index{iterative method!CG}
+and GMRES\index{iterative method!GMRES} methods for GPU clusters. The implementation is performed on
a GPU cluster composed of different computing nodes, such that each node is a CPU core managed by one
MPI (message passing interface) process and equipped with a GPU card. The parallelization of these algorithms is carried out by
-using the MPI communication routines between the GPU computing nodes\index{Computing~node} and the
+using the MPI communication routines between the GPU computing nodes\index{computing node} and the
CUDA (compute unified device architecture) programming environment inside each node. In what follows, the algorithms of the iterative methods
are called iterative solvers.
vector $x$.
The least-squares problem in the GMRES method is solved by performing a QR factorization
-on the Hessenberg matrix\index{Hessenberg~matrix} $\bar{H}_m$ with plane rotations and,
+on the Hessenberg matrix\index{Hessenberg matrix} $\bar{H}_m$ with plane rotations and,
then, solving the triangular system by backward substitutions to compute $y$. Consequently,
solving the least-squares problem on the GPU is not efficient. Indeed, the triangular
solves are not easy to parallelize and inefficient on GPUs. However, the least-squares
problem to solve in the GMRES method with restarts has, generally, a very small size $m$.
Therefore, we develop an inexpensive kernel which must be executed by a single CUDA thread.
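A possible shape for such a single-thread kernel is sketched below: it applies Givens rotations to the small Hessenberg matrix and finishes with a backward substitution. It is an illustration under simplifying assumptions (row-major storage of $\bar{H}_m$, restart parameter bounded by a compile-time constant), not the exact kernel of our implementation.
\begin{lstlisting}[caption={Sketch of a single-thread least-squares kernel (illustration only)}]
#define MAX_M 64   /* assumed upper bound on the restart parameter m */

/* Solves min || beta*e1 - Hbar*y || for the (m+1) x m matrix Hbar stored
   row-major in h (overwritten); the result y has size m. Launched with
   one single thread: solve_least_squares<<<1,1>>>(m, d_h, beta, d_y);   */
__global__ void solve_least_squares(int m, double *h, double beta, double *y)
{
  double g[MAX_M + 1];                 /* rotated right-hand side */
  for (int i = 0; i <= m; i++) g[i] = 0.0;
  g[0] = beta;
  for (int j = 0; j < m; j++) {        /* zero the subdiagonal entry h_{j+1,j} */
    double a = h[j * m + j], b = h[(j + 1) * m + j];
    double r = sqrt(a * a + b * b);    /* r > 0 for an unreduced Hessenberg    */
    double c = a / r, s = b / r;
    for (int k = j; k < m; k++) {      /* apply the rotation to rows j and j+1 */
      double t1 = h[j * m + k], t2 = h[(j + 1) * m + k];
      h[j * m + k]       =  c * t1 + s * t2;
      h[(j + 1) * m + k] = -s * t1 + c * t2;
    }
    double t1 = g[j], t2 = g[j + 1];
    g[j]     =  c * t1 + s * t2;
    g[j + 1] = -s * t1 + c * t2;
  }
  for (int i = m - 1; i >= 0; i--) {   /* backward substitution on R y = g */
    double sum = g[i];
    for (int k = i + 1; k < m; k++) sum -= h[i * m + k] * y[k];
    y[i] = sum / h[i * m + i];
  }
}
\end{lstlisting}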
-The most important operation in CG\index{Iterative~method!CG} and GMRES\index{Iterative~method!GMRES}
-methods is the SpMV multiplication (sparse matrix-vector multiplication)\index{SpMV~multiplication},
+The most important operation in CG\index{iterative method!CG} and GMRES\index{iterative method!GMRES}
+methods is the SpMV multiplication (sparse matrix-vector multiplication)\index{SpMV multiplication},
because it is often an expensive operation in terms of execution time and memory space.
Moreover, it requires taking care of the storage format of the sparse matrix in the
memory. Indeed, the naive storage, row-by-row or column-by-column, of a sparse matrix
nature of the matrix often leads to irregular memory accesses to read the matrix nonzero
values. So, the computation of the SpMV multiplication on GPUs can involve noncoalesced
accesses to the global memory, which slows down its performance even more. One of the
-most efficient compressed storage formats\index{Compressed~storage~format} of sparse
-matrices on GPUs is the HYB (hybrid)\index{Compressed~storage~format!HYB} format~\cite{ch12:ref7}.
+most efficient compressed storage formats\index{compressed storage format} of sparse
+matrices on GPUs is the HYB (hybrid)\index{compressed storage format!HYB} format~\cite{ch12:ref7}.
It is a combination of ELLpack (ELL) and Coordinate (COO) formats. Indeed, it stores
-a typical number of nonzero values per row in ELL\index{Compressed~storage~format!ELL}
+a typical number of nonzero values per row in ELL\index{compressed storage format!ELL}
format and the remaining entries of exceptional rows in COO format. It combines the efficiency
-of ELL due to the regularity of its memory accesses and the flexibility of COO\index{Compressed~storage~format!COO}
+of ELL due to the regularity of its memory accesses and the flexibility of COO\index{compressed storage format!COO}
which is insensitive to the matrix structure. Consequently, we use the HYB kernel~\cite{ch12:ref8}
developed by NVIDIA to implement the SpMV multiplication of CG and GMRES methods on GPUs.
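To fix ideas, the regular (ELL) part of such a hybrid kernel can be sketched as follows; this is a simplified illustration, not the NVIDIA kernel itself, and it assumes that the ELL arrays are stored column by column and padded with $-1$.
\begin{lstlisting}[caption={Sketch of the ELL part of a hybrid SpMV kernel (illustration only)}]
__global__ void spmv_ell(int nrows, int max_nnz_per_row, const int *col_index,
                         const double *values, const double *x, double *y)
{
  int row = blockIdx.x * blockDim.x + threadIdx.x;
  if (row < nrows) {
    double sum = 0.0;
    for (int k = 0; k < max_nnz_per_row; k++) {
      /* column-major storage: consecutive threads read consecutive addresses */
      int col = col_index[k * nrows + row];       /* -1 marks a padding entry */
      if (col >= 0)
        sum += values[k * nrows + row] * x[col];
    }
    y[row] = sum;
  }
}
\end{lstlisting}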
Moreover, to avoid the noncoalesced accesses to the high-latency global memory, we fill
\label{ch12:sec:03.03}
All the computing nodes of the GPU cluster execute in parallel the same iterative solver
(Algorithm~\ref{ch12:alg:01} or Algorithm~\ref{ch12:alg:02}) adapted to GPUs, but on their
-own portions of the sparse linear system\index{Sparse~linear~system}: $M^{-1}_iA_ix_i=M^{-1}_ib_i$,
+own portions of the sparse linear system\index{sparse linear system}: $M^{-1}_iA_ix_i=M^{-1}_ib_i$,
$0\leq i<p$. However, in order to solve the complete sparse linear system~(\ref{ch12:eq:11}),
synchronizations must be performed between the local computations of the computing nodes over
-the cluster. In what follows, two computing nodes sharing data are called neighboring nodes\index{Neighboring~node}.
+the cluster. In what follows, two computing nodes sharing data are called neighboring nodes\index{neighboring node}.
As already mentioned, the most important operation of CG and GMRES methods is the SpMV multiplication.
In the parallel implementation of the iterative methods, each computing node $i$ performs the
$p$ and $v$ in CG and GMRES methods, respectively) is composed of a local subvector and a shared
subvector.
-Therefore, before computing the SpMV multiplication\index{SpMV~multiplication}, the neighboring
-nodes\index{Neighboring~node} over the GPU cluster must exchange between them the shared vector
+Therefore, before computing the SpMV multiplication\index{SpMV multiplication}, the neighboring
+nodes\index{neighboring node} over the GPU cluster must exchange between them the shared vector
elements necessary to compute this multiplication. First, each computing node determines, in its
local subvector, the vector elements needed by other nodes. Then, the neighboring nodes exchange
between them these shared vector elements. The data exchanges are implemented by using the MPI
-point-to-point communication routines: blocking\index{MPI~subroutines!Blocking} sends with \verb+MPI_Send()+
-and nonblocking\index{MPI~subroutines!Nonblocking} receives with \verb+MPI_Irecv()+. Figure~\ref{ch12:fig:02}
+point-to-point communication routines: blocking\index{MPI subroutines!blocking} sends with \verb+MPI_Send()+
+and nonblocking\index{MPI subroutines!nonblocking} receives with \verb+MPI_Irecv()+. Figure~\ref{ch12:fig:02}
shows an example of data exchanges between \textit{Node 1} and its neighbors \textit{Node 0}, \textit{Node 2},
and \textit{Node 3}. In this example, the iterate matrix $A$ split between these four computing
nodes is that presented in Figure~\ref{ch12:fig:01}.
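A sketch of such an exchange is given below with hypothetical buffer and neighbor arrays: the nonblocking receives are posted first so that the blocking sends can be matched immediately, and a final wait ensures that all shared elements have arrived before the SpMV multiplication.
\begin{lstlisting}[caption={Sketch of the exchange of shared vector elements (illustration only)}]
#include <mpi.h>

void exchange_shared_elements(int nb_neighbors, const int *neighbor,
                              double **send_buf, const int *send_count,
                              double **recv_buf, const int *recv_count,
                              MPI_Request *requests)
{
  /* nonblocking receives posted first */
  for (int i = 0; i < nb_neighbors; i++)
    MPI_Irecv(recv_buf[i], recv_count[i], MPI_DOUBLE, neighbor[i], 0,
              MPI_COMM_WORLD, &requests[i]);
  /* blocking sends of the local shared elements */
  for (int i = 0; i < nb_neighbors; i++)
    MPI_Send(send_buf[i], send_count[i], MPI_DOUBLE, neighbor[i], 0,
             MPI_COMM_WORLD);
  /* wait for all receives before computing the SpMV multiplication */
  MPI_Waitall(nb_neighbors, requests, MPI_STATUSES_IGNORE);
}
\end{lstlisting}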
\label{ch12:fig:03}
\end{figure}
-A GPU cluster\index{GPU~cluster} is a parallel platform with a distributed memory. So, the synchronizations
+A GPU cluster\index{GPU!cluster}\index{multi-GPU} is a parallel platform with a distributed memory. So, the synchronizations
and data communications between GPU nodes are carried out by passing messages. However, a GPU cannot exchange data
with other GPUs in a direct way. Then, CPUs via MPI processes are in charge of the synchronizations within the GPU
cluster. Consequently, the vector elements to be exchanged must be copied from the GPU memory to the CPU memory
and vice versa before and after the synchronization operation between CPUs. We have used the CUBLAS\index{CUBLAS}
communication subroutines to perform the data transfers between a CPU core and its GPU: \verb+cublasGetVector()+
and \verb+cublasSetVector()+. Finally, in addition to the data exchanges, GPU nodes perform reduction operations
-to compute in parallel the dot products and Euclidean norms. This is implemented by using the MPI global communication\index{MPI~subroutines!Global}
+to compute in parallel the dot products and Euclidean norms. This is implemented by using the MPI global communication\index{MPI subroutines!global}
\verb+MPI_Allreduce()+.
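These data movements can be sketched as follows (hypothetical variable names, legacy CUBLAS API assumed); the local dot product passed to \verb+MPI_Allreduce()+ is supposed to have been computed beforehand on the GPU.
\begin{lstlisting}[caption={Sketch of the CPU/GPU copies and of the global reduction (illustration only)}]
#include <mpi.h>
#include <cublas.h>  /* legacy CUBLAS API: cublasGetVector(), cublasSetVector() */

/* device -> host copy of the elements to send, host -> device copy of the
   elements received from the neighbors (offsets and buffers are hypothetical) */
void copy_shared_elements(double *d_x, int send_offset, int nb_send,
                          double *h_send, int recv_offset, int nb_recv,
                          double *h_recv)
{
  cublasGetVector(nb_send, sizeof(double), d_x + send_offset, 1, h_send, 1);
  /* ... MPI exchange of h_send/h_recv with the neighboring nodes ... */
  cublasSetVector(nb_recv, sizeof(double), h_recv, 1, d_x + recv_offset, 1);
}

/* reduction of the dot products computed locally by the GPU nodes */
double global_dot(double local_dot)
{
  double result;
  MPI_Allreduce(&local_dot, &result, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  return result;
}
\end{lstlisting}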
providing $12$GB of RAM with a memory bandwidth of $25.6$GB/s. In addition, two Tesla C1060 GPUs are
connected to each machine via a PCI-Express 16x Gen 2.0 interface with a throughput of $8$GB/s. A
Tesla C1060 GPU contains $240$ cores running at $1.3$GHz and providing a global memory of $4$GB with
-a memory bandwidth of $102$GB/s. Figure~\ref{ch12:fig:04} shows the general scheme of the GPU cluster\index{GPU~cluster}
+a memory bandwidth of $102$GB/s. Figure~\ref{ch12:fig:04} shows the general scheme of the GPU cluster\index{GPU!cluster}
that we used in the experimental tests.
A Linux OS (kernel version 2.6.39) is installed on the CPUs. The C programming language is used to code
All tests are made on double-precision floating point operations. The parameters of both linear
solvers are initialized as follows: the residual tolerance threshold $\varepsilon=10^{-12}$, the
maximum number of iterations $maxiter=500$, the right-hand side $b$ is filled with $1.0$, and the
-initial guess $x_0$ is filled with $0.0$. In addition, we limited the Arnoldi process\index{Iterative~method!Arnoldi~process}
+initial guess $x_0$ is filled with $0.0$. In addition, we limited the Arnoldi process\index{iterative method!Arnoldi process}
used in the GMRES method to $16$ iterations ($m=16$). For the sake of simplicity, we have chosen
the preconditioner $M$ as the main diagonal of the sparse matrix $A$. Indeed, it allows us to easily
compute the required inverse matrix $M^{-1}$, and it provides a relatively good preconditioning for
columns give, respectively, the execution times in seconds obtained on $24$ CPU cores
($Time_{cpu}$) and those obtained on $12$ GPUs ($Time_{gpu}$). Moreover, we take into account
the relative gains $\tau$ of a solver implemented on the GPU cluster compared to the same
-solver implemented on the CPU cluster. The relative gains\index{Relative~gain}, presented
+solver implemented on the CPU cluster. The relative gains\index{relative gain}, presented
in the fourth column, are computed as a ratio of the CPU execution time over the GPU
execution time:
\begin{equation}
a GPU cluster is more efficient than on a CPU cluster (see relative gains $\tau$). We can also
notice that the execution times of the CG method, whether on a CPU cluster or on a GPU cluster,
are better than those of the GMRES method for solving large symmetric linear systems. In fact, the
-CG method is characterized by a better convergence\index{Convergence} rate and a shorter execution
+CG method is characterized by a better convergence\index{convergence} rate and a shorter execution
time per iteration than the GMRES method. Moreover, an iteration of the parallel GMRES
method requires more data exchanges between computing nodes compared to the parallel CG method.
%%*******************
\subsection{Mathematical model}
\label{ch13:sec:02.01}
-An obstacle problem\index{Obstacle~problem}, arising for example in mechanics or financial
-derivatives, consists of solving a time-dependent nonlinear equation\index{Nonlinear}:
+An obstacle problem\index{obstacle problem}, arising for example in mechanics or financial
+derivatives, consists of solving a time-dependent nonlinear equation\index{nonlinear}:
\begin{equation}
\left\{
\begin{array}{l}
The time-dependent equation~(\ref{ch13:eq:01}) is numerically solved by considering an
implicit or a semi-implicit time marching, where at each time step $k$ a stationary nonlinear
-problem\index{Nonlinear} is solved:
+problem\index{nonlinear} is solved:
\begin{equation}
\left\{
\begin{array}{l}
convection-diffusion operator is not self-adjoint. Moreover, the fact that
the operator is self-adjoint or not plays an important role in the choice
of the appropriate algorithm for solving nonlinear systems derived from the
-discretization of the obstacle problem\index{Obstacle~problem}. Nevertheless,
+discretization of the obstacle problem\index{obstacle problem}. Nevertheless,
since the convection coefficients arising in the operator~(\ref{ch13:eq:02})
are constant, we can formulate the same problem with a self-adjoint operator
by performing a classical change of variables. Then, we can replace the stationary
Consequently, the numerical resolution of the diffusion problem (the self-adjoint
operator~(\ref{ch13:eq:04})) is done by optimization algorithms, in contrast to that
of the convection-diffusion problem (non self-adjoint operator~(\ref{ch13:eq:03}))
-which is done by relaxation algorithms. In the case of our studied algorithm, the convergence\index{Convergence}
+which is done by relaxation algorithms. In the case of our studied algorithm, the convergence\index{convergence}
is ensured by the M-matrix property; the performance is then linked to the magnitude of
the spectral radius of the iteration matrix, which is independent of the condition
number.
Owing to the large size of the previous discrete complementarity problem~(\ref{ch13:eq:05}),
we will solve it by parallel synchronous or asynchronous iterative algorithms (see~\cite{ch13:ref3,ch13:ref4,ch13:ref5}).
In this chapter, we aim at harnessing the computing power of GPU clusters for solving these
-large nonlinear systems\index{Nonlinear}. Then, we choose to use the projected Richardson
-iterative method\index{Iterative~method!Projected~Richardson} for solving the diffusion
+large nonlinear systems\index{nonlinear}. Then, we choose to use the projected Richardson
+iterative method\index{iterative method!projected Richardson} for solving the diffusion
problem~(\ref{ch13:eq:04}). Indeed, this method is based on the iterations of the Jacobi
-method\index{Iterative~method!Jacobi}, which are easy to parallelize on parallel computers
+method\index{iterative method!Jacobi}, which are easy to parallelize on parallel computers
and easy to adapt to GPU architectures. Then, according to the boundary value problem
formulation with a self-adjoint operator~(\ref{ch13:eq:04}), we can consider here the
equivalent optimization problem and the fixed point mapping associated to its solution.
-Assume that $E=\mathbb{R}^{M}$ is a Hilbert space\index{Hilbert~space}, in which $\scalprod{.}{.}$
+Assume that $E=\mathbb{R}^{M}$ is a Hilbert space\index{Hilbert space}, in which $\scalprod{.}{.}$
is the scalar product and $\|.\|$ its associated norm. So, the general fixed point problem
to be solved is defined as follows:
\begin{equation}
K = \{U | U \geq \Phi \mbox{~everywhere in~} E\},
\label{ch13:eq:07}
\end{equation}
-where $\Phi$ is the discrete obstacle function. In fact, the obstacle problem~(\ref{ch13:eq:05})\index{Obstacle~problem}
+where $\Phi$ is the discrete obstacle function. In fact, the obstacle problem~(\ref{ch13:eq:05})\index{obstacle problem}
is formulated as the following constrained optimization problem:
\begin{equation}
\left\{
self-adjoint operator~(\ref{ch13:eq:04}) after change of variables.
For any $U\in E$, let $P_K(U)$ be the projection of $U$ on $K$. For any $\gamma\in\mathbb{R}$,
-$\gamma>0$, the fixed point mapping $F_{\gamma}$ of the projected Richardson method\index{Iterative~method!Projected~Richardson}
+$\gamma>0$, the fixed point mapping $F_{\gamma}$ of the projected Richardson method\index{iterative method!projected Richardson}
is defined as follows:
\begin{equation}
U^{*} = F_{\gamma}(U^{*}) = P_K(U^{*} - \gamma(\mathcal{A}.U^{*} - G)).
Let $\alpha\in\mathbb{N}$ be a positive integer. We consider that the space $E=\displaystyle\prod_{i=1}^{\alpha} E_i$
is a product of $\alpha$ subspaces $E_i$ where $i\in\{1,\ldots,\alpha\}$. Note that $E_i=\mathbb{R}^{m_i}$,
-where $\displaystyle\sum_{i=1}^{\alpha} m_{i}=M$, is also a Hilbert space\index{Hilbert~space}
+where $\displaystyle\sum_{i=1}^{\alpha} m_{i}=M$, is also a Hilbert space\index{Hilbert space}
in which $\scalprod{.}{.}_i$ denotes the scalar product and $|.|_i$ the associated norm, for
all $i\in\{1,\ldots,\alpha\}$. Then, for all $u,v\in E$, $\scalprod{u}{v}=\displaystyle\sum_{i=1}^{\alpha}\scalprod{u_i}{v_i}_i$
is the scalar product on $E$.
and $K_i$ is a closed convex set. Let also $G=(G_1,\ldots,G_{\alpha})\in E$; for any
$U\in E$, $P_K(U)=(P_{K_1}(U_1),\ldots,P_{K_{\alpha}}(U_{\alpha}))$ is the projection of $U$
on $K$ where $\forall i\in\{1,\ldots,\alpha\},P_{K_i}$ is the projector from $E_i$ onto
-$K_i$. So, the fixed point mapping of the projected Richardson method~(\ref{ch13:eq:10})\index{Iterative~method!Projected~Richardson}
+$K_i$. So, the fixed point mapping of the projected Richardson method~(\ref{ch13:eq:10})\index{iterative method!projected Richardson}
can be written in the following way:
\begin{equation}
\forall U\in E\mbox{,~}\forall i\in\{1,\ldots,\alpha\}\mbox{,~}F_{i,\gamma}(U) = P_{K_i}(U_i - \gamma(\mathcal{A}_i.U - G_i)).
\label{ch13:eq:15}
\end{equation}
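On a GPU, one relaxation of this mapping reduces to a componentwise update followed by a projection. A minimal kernel sketch is given below, assuming that the product $\mathcal{A}.U$ has already been computed into an auxiliary array and that the projection on $K$ amounts to taking the maximum with the obstacle function $\Phi$; it is an illustration, not the exact kernel of our implementation.
\begin{lstlisting}[caption={Sketch of one projected Richardson relaxation on the GPU (illustration only)}]
__global__ void projected_richardson_update(int n, double gamma, const double *Au,
                                            const double *g, const double *phi,
                                            double *u)
{
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  if (tid < n) {
    double v = u[tid] - gamma * (Au[tid] - g[tid]);  /* Richardson step */
    u[tid] = fmax(v, phi[tid]);                      /* projection on K */
  }
}
\end{lstlisting}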
-The previous asynchronous scheme\index{Asynchronous} of the projected Richardson
+The previous asynchronous scheme\index{asynchronous} of the projected Richardson
method models computations that are carried out in parallel without order or
synchronization (according to the behavior of the parallel iterative method) and
describes a subdomain method without overlapping. It is a general model that takes
into account all possible situations of parallel computations and nonblocking message
-passing. So, the synchronous iterative scheme\index{Synchronous} is defined by
+passing. So, the synchronous iterative scheme\index{synchronous} is defined by
\begin{equation}
\forall j\in\{1,\ldots,\alpha\} \mbox{,~} \forall p\in\mathbb{N} \mbox{,~} \rho_j(p)=p.
\label{ch13:eq:16}
relaxed components to be used in the computational process may be guided by any criterion,
and in particular, a natural criterion is to pick up the most recently available
values of the components computed by the processors. Furthermore, the asynchronous
-iterations are implemented by means of nonblocking MPI communication subroutines\index{MPI~subroutines!Nonblocking}
+iterations are implemented by means of nonblocking MPI communication subroutines\index{MPI subroutines!nonblocking}
(asynchronous communications).
The important property ensuring the convergence of the parallel projected Richardson
method, both synchronous and asynchronous algorithms, is the fact that $\mathcal{A}$
-is an M-matrix. Moreover, the convergence\index{Convergence} proceeds from a result
+is an M-matrix. Moreover, the convergence\index{convergence} proceeds from a result
of~\cite{ch13:ref6}. Indeed, there exists a value $\gamma_0>0$, such that $\forall\gamma\in ]0,\gamma_0[$,
the parallel iterations~(\ref{ch13:eq:13}), (\ref{ch13:eq:14}), and~(\ref{ch13:eq:15}),
associated to the fixed point mapping $F_\gamma$~(\ref{ch13:eq:12}), converge to the
or asynchronous using the MPI communication subroutines, whereas inside each GPU node,
a CUDA parallelization is performed.
-Let $S$ denote the number of computing nodes\index{Computing~node} on the GPU cluster,
+Let $S$ denote the number of computing nodes\index{computing node} on the GPU cluster,
where a computing node is composed of a CPU core holding one MPI process and a GPU card.
So, before starting computations, the obstacle problem of size $(NX\times NY\times NZ)$
is split into $S$ parallelepipedic subproblems, each for a node (MPI process, GPU), as
All the computing nodes of the GPU cluster execute Algorithm~\ref{ch13:alg:01} in parallel,
each on its own three-dimensional subproblem of size $(NX\times ny\times nz)$.
-This algorithm gives the main key points for solving an obstacle problem\index{Obstacle~problem}
+This algorithm gives the main key points for solving an obstacle problem\index{obstacle problem}
defined in a three-dimensional domain, where $A$ is the discretization matrix, $G$
is the right-hand side, and $U$ is the solution vector. After the initialization step,
all the data generated from the partitioning operation are copied from the CPU memories
to the GPU global memories to be processed on the GPUs. Next, the algorithm uses $NbSteps$
time steps to solve the global obstacle problem. In fact, it uses a parallel algorithm
adapted to GPUs from the projected Richardson iterative method for solving the nonlinear
-systems\index{Nonlinear} of the obstacle problem. This function is defined by {\it Solve()}
+systems\index{nonlinear} of the obstacle problem. This function is defined by {\it Solve()}
in Algorithm~\ref{ch13:alg:01}. At every time step, the initial guess $U^0$ for the iterative
algorithm is set to the solution found at the previous time step. Moreover, the right-hand
side $G$ is computed as follows: \[G = \frac{1}{k}.U^{prev} + F\] where $k$ is the time step,
\end{algorithm}
Like many other iterative methods, the algorithm of the projected Richardson
-method\index{Iterative~method!Projected~Richardson} is based on algebraic
+method\index{iterative method!projected Richardson} is based on algebraic
functions operating on vectors and/or matrices, which are more efficient on
parallel computers when they work on large vectors. Its parallel implementation
on the GPU cluster is carried out so that the GPUs execute the vector operations
as kernels and the CPUs execute the serial codes, supervise the kernel executions
-and the data exchanges with the neighboring nodes\index{Neighboring~node}, and
+and the data exchanges with the neighboring nodes\index{neighboring node}, and
supply the GPUs with data. Algorithm~\ref{ch13:alg:02} shows the main key points
of the parallel iterative algorithm (function $Solve()$ in Algorithm~\ref{ch13:alg:01}).
All the vector operations inside the main loop ({\bf repeat} ... {\bf until})
\end{itemize}
As mentioned previously, we develop the \emph{synchronous} and \emph{asynchronous}
algorithms of the projected Richardson method. Obviously, in this scope, the
-synchronous\index{Synchronous} or asynchronous\index{Asynchronous} communications
+synchronous\index{synchronous} or asynchronous\index{asynchronous} communications
refer to the communications between the CPU cores (MPI processes) on the GPU cluster,
in order to exchange the vector elements associated to subdomain boundaries. For
the memory copies between a CPU core and its GPU, we use the synchronous communication
and \verb+cublasGetVectorAsync()+ in the asynchronous algorithm. Moreover, we
use the communication routines of the MPI library to carry out the data exchanges
between the neighboring nodes. We use the following communication routines: \verb+MPI_Isend()+
-and \verb+MPI_Irecv()+ to perform nonblocking\index{MPI~subroutines!Nonblocking}
+and \verb+MPI_Irecv()+ to perform nonblocking\index{MPI subroutines!nonblocking}
sends and receives, respectively. For the synchronous algorithm, we use the MPI
routine \verb+MPI_Waitall()+ which puts the MPI process of a computing node in
blocking status until all data exchanges with neighboring nodes (sends and receives)
are completed. In contrast, for the asynchronous algorithms, we use the MPI routine
\verb+MPI_Test()+ which tests the completion of a data exchange (send or receive)
-without putting the MPI process in blocking status\index{MPI~subroutines!Blocking}.
+without putting the MPI process in blocking status\index{MPI subroutines!blocking}.
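These two completion strategies can be sketched as follows (hypothetical request array, illustration only):
\begin{lstlisting}[caption={Completion of the data exchanges in the synchronous and asynchronous algorithms (sketch)}]
#include <mpi.h>

/* synchronous algorithm: block until all sends and receives are completed */
void wait_exchanges_sync(int nb_requests, MPI_Request *requests)
{
  MPI_Waitall(nb_requests, requests, MPI_STATUSES_IGNORE);
}

/* asynchronous algorithm: only test the completion of one exchange and
   return immediately, so that the computations can go on                */
int test_exchange_async(MPI_Request *request)
{
  int flag;
  MPI_Test(request, &flag, MPI_STATUS_IGNORE);
  return flag;
}
\end{lstlisting}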
The function $Compute\_New\_Vector\_Elements()$ (line~$6$ in Algorithm~\ref{ch13:alg:02})
computes, at each iteration, the new elements of the iterate vector $U$. Its general code
is presented in Listing~\ref{ch13:list:01} (CPU function). The iterations of the projected
-Richardson method\index{Iterative~method!Projected~Richardson}, based on those of the Jacobi
-method\index{Iterative~method!Jacobi}, are defined as follows:
+Richardson method\index{iterative method!projected Richardson}, based on those of the Jacobi
+method\index{iterative method!Jacobi}, are defined as follows:
\begin{equation}
\begin{array}{ll}
u^{p+1}(x,y,z) =& \frac{1}{Center}(g(x,y,z) - (Center\cdot u^{p}(x,y,z) + \\
The function $Convergence()$ (line~$11$ in Algorithm~\ref{ch13:alg:02}) allows us
to detect the convergence of the parallel iterative algorithm and is based on
-the tolerance threshold\index{Convergence!Tolerance~threshold} $\varepsilon$
-and the maximum number of relaxations\index{Convergence!Maximum~number~of~relaxations}
+the tolerance threshold\index{convergence!tolerance threshold} $\varepsilon$
+and the maximum number of relaxations\index{convergence!maximum number of relaxations}
$MaxRelax$. We take into account the number of relaxations since that of iterations
cannot be computed in the asynchronous case. Indeed, a relaxation is the update~(\ref{ch13:eq:13})
of a local iterate vector $U_i$ according to $F_i$. Then, counting the number
other hand, an iteration is the update of at least all vector components with
$F_i$.
-In the synchronous\index{Synchronous} algorithm, the global convergence is detected
+In the synchronous\index{synchronous} algorithm, the global convergence is detected
when the maximal value of the absolute error, $error$, is sufficiently small and/or
the maximum number of relaxations, $MaxRelax$, is reached, as follows:
$$
conv \leftarrow true;
\end{array}
$$
-where the function $AllReduce()$ uses the MPI global reduction subroutine\index{MPI~subroutines!Global}
+where the function $AllReduce()$ uses the MPI global reduction subroutine\index{MPI subroutines!global}
\verb+MPI_Allreduce()+ to compute the maximal value, $maxerror$, among the local
absolute errors, $error$, of all computing nodes, and $p$ (in Algorithm~\ref{ch13:alg:02})
is used as a counter of the local relaxations carried out by a computing node. In
-the asynchronous\index{Asynchronous} algorithms, the global convergence is detected
+the asynchronous\index{asynchronous} algorithms, the global convergence is detected
when all computing nodes locally converge. For this, we use a token ring architecture
around which a boolean token travels, in one direction, from one computing node to another.
Starting from node $0$, the boolean token is set to $true$ by node $i$ if the local
%%--------------------------%%
\section{Experimental tests on a GPU cluster}
\label{ch13:sec:05}
-The GPU cluster\index{GPU~cluster} of tests that we used in this chapter is an $20GB/s$
+The GPU cluster\index{GPU!cluster} used for the tests in this chapter is a $20$GB/s
InfiniBand network of six machines. Each machine has a quad-core Xeon E5530 CPU running at
$2.4$GHz. It provides $12$GB of RAM with a memory bandwidth of $25.6$GB/s and it
is equipped with two NVIDIA Tesla C1060 GPUs. A Tesla GPU contains in total $240$ cores
The fourth and seventh columns of Table~\ref{ch13:tab:02} show the relative gains
obtained by executing the parallel algorithms on the cluster of $12$ GPUs instead of
-on the cluster of $24$ CPU cores. We compute the relative gain\index{Relative~gain}
+on the cluster of $24$ CPU cores. We compute the relative gain\index{relative gain}
$\tau$ as the ratio of the execution time $T_{cpu}$ spent on the CPU cluster to the time
$T_{gpu}$ spent on the GPU cluster: \[\tau=\frac{T_{cpu}}{T_{gpu}}.\] We can see
from these ratios that solving large obstacle problems is faster on the GPU cluster
%%--------------------------%%
\section{Red-black ordering technique}
\label{ch13:sec:06}
-As is wellknown, the Jacobi method\index{Iterative~method!Jacobi} is characterized
-by a slow convergence\index{Convergence} rate compared to some iterative methods\index{Iterative~method}
-(for example, Gauss-Seidel method\index{Iterative~method!Gauss-Seidel}). So, in this
+As is well known, the Jacobi method\index{iterative method!Jacobi} is characterized
+by a slow convergence\index{convergence} rate compared to some iterative methods\index{iterative method}
+(for example, the Gauss-Seidel method\index{iterative method!Gauss-Seidel}). So, in this
section, we present some solutions to reduce the execution time and the number of
relaxations and, more specifically, to speed up the convergence of the parallel
projected Richardson method on the GPU cluster. We propose to use the point red-black
-ordering technique\index{Iterative~method!Red-Black~ordering} to accelerate the
+ordering technique\index{iterative method!red-black ordering} to accelerate the
convergence. This technique is often used to increase the parallelism of iterative
methods for solving linear systems~\cite{ch13:ref13,ch13:ref14,ch13:ref15}. We
apply it to the projected Richardson method as a compromise between the Jacobi
However, in both solutions, for each memory transaction, only half of the memory
segment addressed by a half-warp is used. So, the computation of the red and black
vector elements leads to using twice the initial number of memory transactions. Then,
-we apply the point red-black ordering\index{Iterative~method!Red-Black~ordering}
+we apply the point red-black ordering\index{iterative method!red-black ordering}
according to the $y$-coordinate, as shown in Figure~\ref{ch13:fig:06.02}. In
this case, the vector elements having an even $y$-coordinate are computed in parallel
using the values of those having an odd $y$-coordinate and then vice versa. Moreover,
between the time of the computation over that of the communication is reduced when
the computations are performed on GPUs. Indeed, GPUs compute faster than CPUs and
communications are more time-consuming. In this context, asynchronous algorithms
-are more scalable than synchronous ones. So, with large scale GPU clusters, synchronous\index{Synchronous}
+are more scalable than synchronous ones. So, with large-scale GPU clusters, synchronous\index{synchronous}
algorithms might be more penalized by communications, as can be deduced from Figure~\ref{ch13:fig:07}.
-That is why we think that asynchronous\index{Asynchronous} iterative algorithms
+That is why we think that asynchronous\index{asynchronous} iterative algorithms
are all the more interesting in this case.
simulation performance but GPUs are also a promising technology with
an attractive performance/cost ratio.
-Conceptually a MAS\index{Multi-Agent System} is a distributed system
+Conceptually, a MAS\index{multi-agent system} is a distributed system
as it favors the definition and description of large sets of
individuals, the agents, that can be run in parallel. As a large set
of agents could have the same behavior, a Single Instruction Multiple
algorithm which illustrates the case of agents with a simple behavior
and few synchronization problems.
-\subsection{The Collembola model\index{Collembola model}}
+\subsection{The Collembola model\index{Collembola model}}
\label{ch17:subsec:collembolamodel}
The Collembola model is an example of a multi-agent system using GIS
Chaos theory studies the behavior of dynamical systems that are fully deterministic, yet appear to be wildly amorphous and meaningless.
-Chaotic systems\index{chaotic systems} are highly sensitive to initial conditions,
+Chaotic systems\index{chaotic!systems} are highly sensitive to initial conditions,
which is popularly referred to as the butterfly effect.
In other words, small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes,
in general rendering long-term prediction impossible \cite{kellert1994wake}. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved \cite{kellert1994wake}. That is, the deterministic nature of these systems does not make them predictable \cite{kellert1994wake,Werndl01032009}. This behavior is known as deterministic chaos, or simply chaos. It has been well-studied in mathematics and
-\subsection{Chaotic iterations}\index{chaotic iterations}
+\subsection{Chaotic iterations}\index{chaotic!iterations}
\label{subsection:Chaotic iterations}
Let us now introduce an example of a family of dynamical systems that has
\section{Introduction}
\label{ch19:intro}
-The Number Field Sieve (NFS)\index{iterative methods!Number Field Sieve} is the current state-of-the-art integer factorization method. It requires the solution of a large sparse linear system over Galois Field GF(2) (called the linear algebra step). The Block Wiedemann\index{Number Field Sieve!Block Wiedemann} (BW)\cite{ch19:bw} algorithm can be used to solve such a large sparse linear system efficiently using iterative sparse matrix vector multiplication (SpMV).
+The Number Field Sieve (NFS)\index{iterative method!number field sieve} is the current state-of-the-art integer factorization method. It requires the solution of a large sparse linear system over the Galois field GF(2) (called the linear algebra step). The Block Wiedemann\index{number field sieve!block Wiedemann} (BW)~\cite{ch19:bw} algorithm can be used to solve such a large sparse linear system efficiently using iterative sparse matrix-vector multiplication (SpMV).
Recent integer factorization efforts have been using CPU clusters to solve the large sparse linear system \cite{ch19:kilobit,ch19:rsa768}. The RSA-768 factorization \cite{ch19:rsa768}, for example, reported a runtime of 3 months for the linear algebra step on a cluster with 48 AMD dual hex-core CPUs. Previous work on parallelizing the linear algebra step focused on using CPU clusters and grids \cite{ch19:aoki,ch19:hwang,ch19:grid,ch19:hetero768}. In this chapter, we present a CUDA approach that can be used to accelerate the costly iterative SpMV operation for matrices derived from NFS.
\section{Block Wiedemann algorithm}
\label{ch19:block-wiedemann}
-The BW algorithm heuristically finds $n$ vectors in the kernel space \index{Number Field Sieve!kernel space}of a $d \times d$ binary matrix $B$; $n$ is one of two parameters $m, n$, called blocking factors\index{Number Field Sieve!blocking factors}. BW consists of the following steps:
+The BW algorithm heuristically finds $n$ vectors in the kernel space \index{number field sieve!kernel space}of a $d \times d$ binary matrix $B$; $n$ is one of two parameters $m, n$, called blocking factors\index{number field sieve!blocking factors}. BW consists of the following steps:
\begin{itemize}
\item \textbf{Step 1 (BW1):} Compute the matrix sequence
\end{equation}
where $x,y$ are randomly chosen binary matrices of size $m \times d$ and $d \times n$, respectively.
-\item \textbf{Step 2 (BW2):} The Berlekamp-Massey\index{Number Field Sieve!Berlekamp-Massey} algorithm \cite{ch19:Thome:subqad} is used to compute a generating polynomial of the matrix sequence $A$ from BW1 in the form
+\item \textbf{Step 2 (BW2):} The Berlekamp-Massey\index{number field sieve!Berlekamp-Massey} algorithm \cite{ch19:Thome:subqad} is used to compute a generating polynomial of the matrix sequence $A$ from BW1 in the form
\begin{equation}
F(X)= \sum_{i=1}^{{ \frac{d}{n} }+O(1)} {C_i \cdot X^i},
\end{equation}
\label{fig:ex_matrix}
\end{figure}
- \subsubsection*{Coordinate list (COO)\index{Compressed storage format!COO}}
+ \subsubsection*{Coordinate list (COO)\index{compressed storage format!COO}}
For each nonzero, both its column and row indices are explicitly stored. The Cusp implementation \cite{ch19:cusp} stores elements in sorted order of row indices ensuring that entries with the same row index are stored contiguously.
\begin{lstlisting}[caption={}]
coo.value = {3, 1, 5, 2, 4, 6, 8, 10, 9, 7, 11}
\end{lstlisting}
- \subsubsection*{Compressed sparse row (CSR)\index{Compressed storage format!CSR}} Nonzeros are sorted by the row index, and only their column indices are explicitly stored in a column array. Additionally, the vector $row\_start$ stores indices of the first nonzero element of each row in the column array.
+ \subsubsection*{Compressed sparse row (CSR)\index{compressed storage format!CSR}} Nonzeros are sorted by the row index, and only their column indices are explicitly stored in a column array. Additionally, the vector $row\_start$ stores indices of the first nonzero element of each row in the column array.
\begin{lstlisting}[caption={}]
csr.row_start = {0, 1, 3, 5, 8, 9, 12}
csr.value = {3, 1, 5, 2, 4, 6, 8, 10, 9, 7, 11}
\end{lstlisting}
- \subsubsection*{Ellpack (ELL)\index{Compressed storage format!ELL}} Let $K$ be the maximum number of nonzero elements in any row of the matrix. Then, for each row, ELL stores exactly $K$ elements (extra padding is required for rows that contain fewer than $K$ nonzero elements). Only column indices are required to store in an array, the row index can be implied since exactly $K$ elements are stored per row. The Cusp implementation stores the column indices in a transposed manner so that consecutive threads can access consecutive memory addresses.
+ \subsubsection*{Ellpack (ELL)\index{compressed storage format!ELL}} Let $K$ be the maximum number of nonzero elements in any row of the matrix. Then, for each row, ELL stores exactly $K$ elements (extra padding is required for rows that contain fewer than $K$ nonzero elements). Only the column indices need to be stored in an array; the row index is implicit since exactly $K$ elements are stored per row. The Cusp implementation stores the column indices in a transposed manner so that consecutive threads can access consecutive memory addresses.
\begin{lstlisting}[caption={}]
ell.col_index = {
*, *, *, 10, *, *}
\end{lstlisting}
- \subsubsection*{Hybrid (HYB)\index{Compressed storage format!HYB}} The HYB format heuristically computes a value $K$ and stores $K$ nonzeros per rows in the ELL format. When a row has more than $K$ non-zeros, the trailing nonzeros are stored in COO. This design decreases the storage overhead due to ELL padding elements and thus improves the overall performance.
+ \subsubsection*{Hybrid (HYB)\index{compressed storage format!HYB}} The HYB format heuristically computes a value $K$ and stores $K$ nonzeros per row in the ELL format. When a row has more than $K$ nonzeros, the trailing nonzeros are stored in COO. This design decreases the storage overhead due to ELL padding elements and thus improves the overall performance.
\begin{lstlisting}[caption={}]
hyb.nnz_per_row = 2
hyb.ell.col_index = {2, 1, 1, 0, 2, 0, *, 4, 3, 2, *, 5}
hyb.coo.value = {10}
\end{lstlisting}
- \subsubsection*{Sliced Ellpack (SLE)\index{Compressed storage format!SLE}} This format partitions the matrix into horizontal slices of $S$ adjacent rows \cite{ch19:sle}. Each slice is stored in ELLPACK format. The maximum number of nonzeros may be different for each slice. An additional array $slice\_start$ is used to index the first element in each slice. The matrix rows are usually sorted by the number of nonzeros per row in order to move rows with similar number of nonzeros together.
+ \subsubsection*{Sliced Ellpack (SLE)\index{compressed storage format!SLE}} This format partitions the matrix into horizontal slices of $S$ adjacent rows \cite{ch19:sle}. Each slice is stored in ELLPACK format. The maximum number of nonzeros may be different for each slice. An additional array $slice\_start$ is used to index the first element in each slice. The matrix rows are usually sorted by the number of nonzeros per row in order to move rows with a similar number of nonzeros together.
\begin{lstlisting}[caption={}]
sle.slice_size = 2
sle.col_index = {
\section{A hybrid format for SpMV on GPUs}
\label{Implementation}
-As a preprocessing step, we reorder the rows of the matrix by their \emph{row weight}, in nonincreasing order. The row weight of row $j$ of $B$ is defined as the total number of nonzero elements in row $j$. We then partition the sorted matrix rows into at most four consecutive parts. Each part uses a different format. The different formats are optimized for the sparseness properties of each partition as shown in Figure \ref{fig:partitioning}. For the densest part, we use a dense format. When the matrix gets less dense, we switch to another format which we call \index{Compressed storage format!Sliced COO} Sliced COO (SCOO). SCOO has three variants, small, medium, and large. Our formats are now described in more detail.
+As a preprocessing step, we reorder the rows of the matrix by their \emph{row weight}, in nonincreasing order. The row weight of row $j$ of $B$ is defined as the total number of nonzero elements in row $j$. We then partition the sorted matrix rows into at most four consecutive parts. Each part uses a different format. The different formats are optimized for the sparseness properties of each partition as shown in Figure \ref{fig:partitioning}. For the densest part, we use a dense format. When the matrix gets less dense, we switch to another format which we call \index{compressed storage format!sliced COO} Sliced COO (SCOO). SCOO has three variants, small, medium, and large. Our formats are now described in more detail.
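Before that, note that the reordering step itself is straightforward; a possible host-side sketch, assuming that the row weights have already been counted, is the following:
\begin{lstlisting}[caption={Sketch of the row reordering by nonincreasing row weight (illustration only)}]
#include <stdlib.h>

typedef struct { int row; int weight; } row_weight_t;

static int cmp_weight_desc(const void *a, const void *b)
{
  const row_weight_t *ra = (const row_weight_t *)a;
  const row_weight_t *rb = (const row_weight_t *)b;
  return rb->weight - ra->weight;           /* nonincreasing order */
}

/* perm[i] gives the original index of the i-th row of the reordered matrix */
void sort_rows_by_weight(int nrows, const int *row_weight, int *perm)
{
  row_weight_t *tmp = (row_weight_t *)malloc(nrows * sizeof(row_weight_t));
  for (int i = 0; i < nrows; i++) {
    tmp[i].row = i;
    tmp[i].weight = row_weight[i];
  }
  qsort(tmp, nrows, sizeof(row_weight_t), cmp_weight_desc);
  for (int i = 0; i < nrows; i++) perm[i] = tmp[i].row;
  free(tmp);
}
\end{lstlisting}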
\begin{figure}[t]
\centering
As GPUs have their own memory, the first step consists of allocating memory on
-the GPU. A call to \texttt{cudaMalloc}\index{CUDA~functions!cudaMalloc}
+the GPU. A call to \texttt{cudaMalloc}\index{CUDA functions!cudaMalloc}
allocates memory on the GPU. The second parameter represents the size of the
allocated variables; this size is expressed in bytes.
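For instance, allocating an array of \texttt{size} integers on the GPU can be written as follows (\verb+d_A+ and \texttt{size} are hypothetical names used only for this illustration):
\begin{lstlisting}[caption={}]
int *d_A;                                       /* pointer to GPU memory     */
cudaMalloc((void **)&d_A, size * sizeof(int));  /* second parameter in bytes */
\end{lstlisting}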
In this example, we want to compare the execution time of the addition of two
arrays on the CPU and on the GPU. So, for both of these operations, a timer is created to
measure the time. CUDA allows timers to be manipulated quite easily. The first
-step is to create the timer\index{CUDA~functions!timer}, then to start it, and at
+step is to create the timer\index{CUDA functions!timer}, then to start it, and at
the end to stop it. For each of these operations a dedicated function is used.
In order to compute the same sum with a GPU, the first step consists of
transferring the data from the CPU (considered as the host with CUDA) to the GPU
(considered as the device with CUDA). A call to \texttt{cudaMemcpy} copies the content of an array allocated in the host to the device when the fourth
parameter is set
-to \texttt{cudaMemcpyHostToDevice}\index{CUDA~functions!cudaMemcpy}. The first
+to \texttt{cudaMemcpyHostToDevice}\index{CUDA functions!cudaMemcpy}. The first
parameter of the function is the destination array, the second is the
source array, and the third is the size of the data to copy (expressed in
bytes).
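For instance, with the hypothetical arrays of the previous illustration (\verb+h_A+, \verb+d_A+, \verb+h_C+, \verb+d_C+), the two directions of the copy look as follows:
\begin{lstlisting}[caption={}]
/* host -> device copy of size integers from h_A to d_A (size in bytes) */
cudaMemcpy(d_A, h_A, size * sizeof(int), cudaMemcpyHostToDevice);
/* device -> host copy, e.g., to retrieve the result of a kernel        */
cudaMemcpy(h_C, d_C, size * sizeof(int), cudaMemcpyDeviceToHost);
\end{lstlisting}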
sufficient). At the beginning of Listing~\ref{ch2:lst:ex1}, a simple kernel,
called \texttt{addition}, is defined to compute in parallel the summation of the
two arrays. With CUDA, a kernel starts with the
-keyword \texttt{\_\_global\_\_} \index{CUDA~keywords!\_\_shared\_\_} which
+keyword \texttt{\_\_global\_\_} \index{CUDA keywords!\_\_global\_\_} which
indicates that this kernel can be called from the C code. The first instruction
in this kernel is used to compute the variable \texttt{tid} which represents the
-thread index. This thread index\index{thread index} is computed according to
+thread index. This thread index\index{CUDA keywords!thread index} is computed according to
the values of the block index
-(called \texttt{blockIdx} \index{CUDA~keywords!blockIdx} in CUDA) and of the
-thread index (called \texttt{threadIdx}\index{CUDA~keywords!threadIdx} in
+(called \texttt{blockIdx} \index{CUDA keywords!blockIdx} in CUDA) and of the
+thread index (called \texttt{threadIdx}\index{CUDA keywords!threadIdx} in
CUDA). Blocks of threads and thread indexes can be decomposed into 1 dimension,
2 dimensions, or 3 dimensions. Depending on the dimensionality of the manipulated data,
the appropriate dimensionality of the blocks of threads can be chosen. In our example, only one dimension is
used. Then, using the notation \texttt{.x}, we can access the first dimension
(\texttt{.y} and \texttt{.z}, respectively, allow access to the second and
-third dimension). The variable \texttt{blockDim}\index{CUDA~keywords!blockDim}
+third dimension). The variable \texttt{blockDim}\index{CUDA keywords!blockDim}
gives the size of each block.
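Putting these elements together, a kernel of this shape computes the elementwise sum of the two arrays (a sketch close to, but not necessarily identical to, the kernel of Listing~\ref{ch2:lst:ex1}):
\begin{lstlisting}[caption={}]
__global__ void addition(int size, int *d_C, const int *d_A, const int *d_B)
{
  /* global thread index built from the block and thread indexes */
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  if (tid < size)
    d_C[tid] = d_A[tid] + d_B[tid];
}
/* launched, for instance, with: addition<<<nb_blocks,block_size>>>(size,d_C,d_A,d_B); */
\end{lstlisting}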
-\section{Second example: using CUBLAS}
+\section{Second example: using CUBLAS \index{CUBLAS}}
\label{ch2:2ex}
The BLAS (Basic Linear Algebra Subprograms) library allows programmers to use efficient
operations~\cite{ch2:journals/ijhpca/Dongarra02}. Some of those operations seem
to be easy to implement with CUDA. Nevertheless, as soon as a reduction is
needed, implementing an efficient reduction routine with CUDA is far from being
-simple. Roughly speaking, a reduction operation\index{reduction~operation} is an
+simple. Roughly speaking, a reduction operation\index{reduction operation} is an
operation which combines all the elements of an array and extracts a number
computed from all the elements. For example, a sum, a maximum, or a dot product
are reduction operations.
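To illustrate why an efficient reduction is not trivial, a basic blockwise sum in shared memory is sketched below; it computes only one partial sum per block (a second pass, or a copy to the CPU, is still needed), assumes a power-of-two block size, and omits the optimizations required to reach good performance.
\begin{lstlisting}[caption={A naive blockwise sum reduction (sketch)}]
__global__ void block_sum(const double *in, double *out, int n)
{
  extern __shared__ double sdata[];   /* blockDim.x doubles, given at launch */
  int tid = threadIdx.x;
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  sdata[tid] = (i < n) ? in[i] : 0.0;
  __syncthreads();
  for (int s = blockDim.x / 2; s > 0; s >>= 1) {   /* tree-based reduction */
    if (tid < s) sdata[tid] += sdata[tid + s];
    __syncthreads();
  }
  if (tid == 0) out[blockIdx.x] = sdata[0];        /* one partial sum per block */
}
\end{lstlisting}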
\section{Performance measurements}
As our goal is to design very fast implementations of basic image processing algorithms, we need to make quite accurate time measurements, of the order of $0.01$~ms. Again, the easiest way of doing so is to use the helper functions of the \textbf{cutil} library. As usual, because the durations we are measuring are short and possibly subject to nonnegligible variations, a good practice is to measure multiple executions and report the mean runtime. All time results given in this chapter have been obtained through 1000 calls to each kernel.
-Listing \ref{lst:chronos} shows how to use the dedicated \textbf{cutil} functions \index{Cutil library!Timer usage}. Timer declaration and creation need to be performed only once while reset, start and stop functions can be used as often as necessary. Synchronization is mandatory before stopping the timer (Line 7), to avoid runtime measurement being biased.
+Listing \ref{lst:chronos} shows how to use the dedicated \textbf{cutil} functions \index{Cutil library!timer usage}. Timer declaration and creation need to be performed only once, while the reset, start, and stop functions can be used as often as necessary. Synchronization is mandatory before stopping the timer (Line 7) to avoid biasing the runtime measurement.
\lstinputlisting[label={lst:chronos},caption=Time measurement technique using cutil functions]{Chapters/chapter3/code/exChronos.cu}
In an attempt to provide relevant speedup values, we either implemented CPU versions of the algorithms studied or used the values found in the existing literature. Still, the large number and diversity of hardware platforms and GPU cards make it impossible to benchmark every possible combination, and significant differences may occur between the speedups we report and those obtained with different devices. As a reference, our development platform is as follows:
As for registers, designing a generic median filter that would use only that type of memory seems difficult, due to the above-mentioned limitation of 63 registers per thread. \index{register count}
Yet, nothing prevents us from designing fixed-size filters, each of them specific to one of the most popular window sizes. It might be worth the effort, as a dramatic increase in performance could be expected.
-Another track to follow in order to improve performance of GPU implementations consists of hiding latencies generated by arithmetic instruction calls and memory accesses. Both can be partially hidden by introducing Instruction-Level Parallelism \index{Instruction-Level Parallelism}(ILP) and by increasing the data count outputted by each thread. Though such techniques may seem to break the NVIDIA occupancy paradigm, they can lead to dramatically higher data throughput values.
+Another track to follow in order to improve the performance of GPU implementations consists of hiding latencies generated by arithmetic instruction calls and memory accesses. Both can be partially hidden by introducing Instruction-Level Parallelism \index{instruction-level parallelism} (ILP) and by increasing the data count output by each thread. Though such techniques may seem to break the NVIDIA occupancy paradigm, they can lead to dramatically higher data throughput values.
The following sections illustrate these ideas and detail the design of the fastest CUDA median filter known to date.
\section{A 3$\times$3 median filter: using registers}
This iterative process is illustrated in Figure \ref{fig:forgetful3}, where it achieves one entire $3\times 3$ median selection, beginning with $k_9=6$ elements.
The \textit{forgetful selection} method, used in \cite{mcguire2008median}, does not imply full sorting of values, but only selecting minimum and maximum values, which, at the price of a few iteration steps ($n^2-k$), reduces arithmetic complexity.
-Listing \ref{lst:medianForget1pix3} details this process where forgetful selection is achieved by use of simple 2-value swapping function ($s()$, lines 1 to 5) that swaps input values if necessary, so as to achieve the first steps of an incomplete sorting network \cite{Batcher:1968:SNA:1468075.1468121}. Moreover, whenever possible, in order to increase the ILP, \index{Instruction-Level Parallelism} successive calls to $s()$ are done with independant elements as arguments. This is illustrated by the macro definitions of lines 7 to 12 and by Figure \ref{fig:bitonic} which details the first iteration of the $5\times 5$ selection, starting with $k_{25}=14$ elements.
+Listing \ref{lst:medianForget1pix3} details this process, where forgetful selection is achieved by use of a simple 2-value swapping function ($s()$, lines 1 to 5) that swaps input values if necessary, so as to achieve the first steps of an incomplete sorting network \cite{Batcher:1968:SNA:1468075.1468121}. Moreover, whenever possible, in order to increase the ILP, \index{instruction-level parallelism} successive calls to $s()$ are done with independent elements as arguments. This is illustrated by the macro definitions of lines 7 to 12 and by Figure \ref{fig:bitonic}, which details the first iteration of the $5\times 5$ selection, starting with $k_{25}=14$ elements.
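+As a hedged illustration of this mechanism (a sketch of the idea only, not the actual code of Listing~\ref{lst:medianForget1pix3}), the swapping function and the ILP-oriented macros could look like the following:
+\begin{lstlisting}[caption=Sketch of a compare-and-swap function and ILP-oriented macros]
+// Orders two values: after the call a holds the minimum and b the maximum.
+__device__ inline void s(int &a, int &b)
+{
+  int t = a;
+  a = min(a, b);
+  b = max(t, b);
+}
+// Macros grouping calls to s() on independent pairs, so that the compiler
+// can schedule the comparisons in parallel (ILP).
+#define minmax4(a, b, c, d)        s(a, b); s(c, d);
+#define minmax6(a, b, c, d, e, f)  s(a, b); s(c, d); s(e, f);
+\end{lstlisting}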
\begin{figure}[b]
\centering
\includegraphics[width=6cm]{Chapters/chapter3/img/forgetful_selection.png}
\section{Overview}
In this chapter, after dealing with GPU median filter implementations,
-we propose to explore how convolutions\index{Convolution} can be implemented on modern
+we propose to explore how convolutions\index{convolution} can be implemented on modern
GPUs. Widely used in digital image processing filters, the \emph{convolution
operation} basically consists of taking the sum of products of elements
from two 2D functions, letting one of the two functions move over
convolutions of the techniques applied to median filters in the
previous chapter, as a reminder: texture memory used with incoming
data, pinned memory with output data, optimized use of registers
-while processing data and multiple output per thread\index{Multiple output per thread}.
+while processing data and multiple output per thread\index{multiple output per thread}.
One significant difference lies in the fact
that the median filter uses only one parameter, the size of the window mask,
which can be hard-coded, while a convolution mask requires referring to several parameters; hard-coding
\lstinputlisting[label={lst:convoGene8x8pL3},caption=CUDA kernel achieving a $3\times 3$ convolution operation with the mask in symbol memory and direct data fetches in texture memory]{Chapters/chapter4/code/convoGene8x8pL3.cu}
-\subsection{Using shared memory to store prefetched data\index{Prefetching}.}
+\subsection{Using shared memory to store prefetched data\index{prefetching}.}
-\index{memory~hierarchy!shared~memory}
+\index{memory hierarchy!shared memory}
A more convenient way of coding a convolution kernel is to use shared memory to perform a prefetching stage of the whole halo before computing the convolution sums.
This proves to be quite efficient and more versatile, but it obviously generates some overhead because
\label{tab:cpyToArray}
\end{table}
-\lstinputlisting[label={lst:convoSepSh},caption=data copy between the calls to 1D convolution kernels achieving a 2D separable convolution operation]{Chapters/chapter4/code/convoSepSh.cu}
+\lstinputlisting[label={lst:convoSepSh},caption=Data copy between the calls to 1D convolution kernels achieving a 2D separable convolution operation]{Chapters/chapter4/code/convoSepSh.cu}
-\lstinputlisting[label={lst:convoSepShV},caption=CUDA kernel achieving a horizontal 1D convolution operation after a preloading \index{Prefetching} of data into shared memory]{Chapters/chapter4/code/convoSepShV.cu}
+\lstinputlisting[label={lst:convoSepShV},caption=CUDA kernel achieving a horizontal 1D convolution operation after a preloading \index{prefetching} of data into shared memory]{Chapters/chapter4/code/convoSepShV.cu}
\lstinputlisting[label={lst:convoSepShH},caption=CUDA kernel achieving a vertical 1D convolution operation after a preloading of data into shared memory]{Chapters/chapter4/code/convoSepShH.cu}
\section{Conclusion}
\subsection{Heat conduction equation}\index{heat conduction}
First, we consider a two-dimensional heat conduction problem defined on a unit square. The heat conduction equation is a parabolic partial differential diffusion equation, including both spatial and temporal derivatives. It describes how the diffusion of heat in a medium changes with time. Diffusion equations are of great importance in many fields of sciences, e.g., fluid dynamics, where the fluid motion is uniquely described by the Navier-Stokes equations, which include a diffusive viscous term~\cite{ch5:chorin1993,ch5:Ferziger1996}.%, or in financial science where diffusive terms are present in the Black-Scholes equations for estimation of option price trends~\cite{}.
-The heat problem is an IVP \index{initial value problem}, it describes how the heat distribution evolves from a specified initial state. Together with homogeneous Dirichlet boundary conditions\index{boundary conditions}, the heat problem in the unit square is given as
+The heat problem is an IVP\index{initial value problem}; it describes how the heat distribution evolves from a specified initial state. Together with homogeneous Dirichlet boundary conditions\index{boundary condition}, the heat problem in the unit square is given as
\begin{subequations}\begin{align}
\frac{\partial u}{\partial t} - \kappa\nabla^2u = 0, & \qquad (x,y)\in \Omega([0,1]\times[0,1]),\quad t\geq 0, \label{ch5:eq:heateqdt}\\
u = 0, & \qquad (x,y) \in \partial\Omega,\label{ch5:eq:heateqbc}
computations (CPU and GPU) with communications (GPU transfers and internode
communications). However, we have previously shown that for some parallel
iterative algorithms, it is sometimes even more efficient to use an asynchronous
-scheme of iterations\index{iterations!asynchronous} \cite{HPCS2002,ParCo05,Para10}. In that case, the nodes do
+scheme of iterations\index{asynchronous iterations} \cite{HPCS2002,ParCo05,Para10}. In that case, the nodes do
not wait for each other but they perform their iterations using the last
external data they have received from the other nodes, even if this
data was produced \emph{before} the previous iteration on the other nodes.
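+A rough host-side sketch of such a scheme (hypothetical function names; MPI point-to-point primitives are assumed for the internode exchanges) is the following: the node keeps iterating with the neighbor data already received and only integrates fresher data when it happens to be available:
+\begin{lstlisting}[caption=Host-side sketch of an asynchronous iteration scheme]
+#include <mpi.h>
+
+// local_iteration() stands for the local update (e.g., a GPU kernel launch)
+// and update_halo() for the integration of the received neighbor data;
+// both are hypothetical application-specific routines.
+void async_iterations(double *halo, int count, int neighbor,
+                      int  (*not_converged)(void),
+                      void (*local_iteration)(void),
+                      void (*update_halo)(const double *))
+{
+  MPI_Request recv_req;
+  MPI_Irecv(halo, count, MPI_DOUBLE, neighbor, 0, MPI_COMM_WORLD, &recv_req);
+
+  while (not_converged()) {
+    int arrived = 0;
+    MPI_Test(&recv_req, &arrived, MPI_STATUS_IGNORE);
+    if (arrived) {                 // fresher data: integrate it and repost
+      update_halo(halo);
+      MPI_Irecv(halo, count, MPI_DOUBLE, neighbor, 0,
+                MPI_COMM_WORLD, &recv_req);
+    }
+    local_iteration();             // proceeds with the last data received
+  }
+  MPI_Cancel(&recv_req);
+  MPI_Request_free(&recv_req);
+}
+\end{lstlisting}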
streams (pipelines), pre-processing of the next data item and/or post-processing
of the previous result can be done on the CPU while the GPU is processing the current
data item. In other cases, the CPU can perform \emph{auxiliary}
-computations\index{computation!auxiliary}
+computations\index{auxiliary computations}
that are not absolutely required to obtain the result but that may accelerate
the entire iterative process. Another possibility would be to distribute the
main computations between the GPU and CPU. However, this
@InProceedings{ch8:Carneiro_2011,
-author = {T. Carneiro and A. E. Muritibab and M. Negreirosc and G. A. Lima de Campos},
+author = {T. Carneiro and A. E. Muritiba and M. Negreiros and G. A. Lima de Campos},
- title = {A New Parallel Schema for Branch-and-Bound Algorithms Using GPGPU},
+ title = {A New Parallel Schema for Branch-and-Bound Algorithms Using {GPGPU}},
booktitle = {23rd International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)},
year = {2011}
}
author = "L. G. Casadoa and J. A. Martíneza and I. Garcíaa and E. M. T. Hendrixb.",
title = "Branch-and-Bound interval global optimization on shared memory multiprocessors",
journal = "Optimization Methods and Software",
- volume = "23, No.5",
+ volume = "23",
+ number= "5",
pages = "689-701",
year = "2008"
}
@InProceedings{ch8:Fung,
author = {W. Fung and I. Sham and G. Yuan and T. Aamodt},
- title = {Dynamic warp formation and scheduling for efficient gpu control flow},
+ title = {Dynamic warp formation and scheduling for efficient {GPU} control flow},
-booktitle = {{In MICRO '07: Proceedings of the 40th Annual IEEE/ACM International Symposium on Micro-architecture}},
+booktitle = {{MICRO '07: Proceedings of the 40th Annual IEEE/ACM International Symposium on Microarchitecture}},
year = {2007},
pages = {407-420},
@Article{ch8:Gendron_1994,
author = {B. Gendron and T. G. Crainic},
- title = {Parallel {B}ranch and {B}ound {A}lgorithms: {S}urvey and {S}ynthesis},
+ title = {Parallel Branch and Bound Algorithms: Survey and Synthesis},
journal = {Operations Research},
year = {1994},
volume = {42},
@InProceedings{ch8:Han,
author = {T. Han and T. S. Abdelrahman},
- title = {Reducing branch divergence in GPU programs},
- booktitle = {{In Proceedings of the Fourth Workshop on General Purpose Processing on Graphics Processing Units (GPGPU-4), ACM}},
+ title = {Reducing branch divergence in {GPU} programs},
+ booktitle = {{Proceedings of the Fourth Workshop on General Purpose Processing on Graphics Processing Units (GPGPU-4), ACM}},
year = {2011},
publisher = {New York, USA}
}
@Article{ch8:Johnson_1954,
author = {S. M. Johnson},
- title = {{Optimal two and three-stage production schedules with setup times included}},
+ title = {{Optimal two- and three-stage production schedules with setup times included}},
-journal = {Naval Research Logistis Quarterly},
+journal = {Naval Research Logistics Quarterly},
year = {1954},
volume = {1},
pages = {61--68}
}
-@ARTICLE{ch8:Kurzak_2010,
+@BOOK{ch8:Kurzak_2010,
author = "J. Kurzak and D. A. Bader and J. Dongarra.",
- title = "Scientific Computing with Multicore and Accelerators",
- journal = "Chapman \& Hall / CRC Press",
+ title = {{Scientific Computing with Multicore and Accelerators}},
+ publisher = {{Chapman \& Hall / CRC Press}},
year = "2010"
}
@Article{ch8:Lenstra_1978,
author = {J. K. Lenstra and B. J. Lageweg and A. H. G. Rinnooy Kan},
- title = {{A General bounding scheme for the permutation flow-shop problem}},
+ title = {{A general bounding scheme for the permutation flow-shop problem}},
journal = {Operations Research},
year = {1978},
volume = {26},
HOWPUBLISHED = "LIFL, USTL",
MONTH = "Novembre",
year = "2005",
-NOTE = "Th\`ese HDR"
+NOTE = "Habilitation to Direct Research"
}
@ARTICLE{ch8:Taillard_1993,
@ARTICLE{ch8:JRJackson_1956,
AUTHOR ="J. R. Jackson",
- TITLE ="An Extension of Johnson's results on Job-Lot Scheduling",
+ TITLE ="An Extension of {J}ohnson's results on Job-Lot Scheduling",
JOURNAL ="Naval Research Logistis Quarterly",
YEAR ="1956",
NOTE ="3:3"
@ARTICLE{ch8:LGMitten_1959,
AUTHOR ="L. G. Mitten",
- TITLE ="Sequencing n jobs on two machines with arbitrary time lags",
+ TITLE ="Sequencing $n$ jobs on two machines with arbitrary time lags",
JOURNAL ="Management Science",
YEAR ="1959"
}
@InProceedings{ch8:Mezmaz_2007,
- author = {M. Mezmaz and N. Melab and E-G. Talbi.},
+ author = {M. Mezmaz and N. Melab and E.-G. Talbi},
title = {A grid-enabled branch and bound algorithm for solving challenging combinatorial optimization problems},
- booktitle = {{In Proc. of 21th IEEE Intl. Parallel and Distributed Processing Symp. (IPDPS)}},
+ booktitle = {{Proceedings of the 21st IEEE International Parallel and Distributed Processing Symposium (IPDPS)}},
year = {2007},
month = {March},
publisher = {Long Beach, California}
@ARTICLE{ch8:Quinn_1990,
author = "M. J. Quinn.",
title = "Analysis and implementation of branch-and-bound algorithms on a hypercube multicomputer",
- journal = "IEEE transactions on computers",
- volume = "39, No3",
+ journal = "IEEE Transactions on Computers",
+ volume = "39",
+ number ="3",
pages = "384-387",
year = "1990"
}
@InProceedings{ch8:Zhang,
author = {E. Z. Zhang and Y. Jiang and Z. Guo and X. Shen},
- title = {Streamlining GPU applications on the fly: thread divergence elimination through runtime thread-data remapping},
- booktitle = {{In Proceedings of the 24th ACM International Conference on Supercomputing (ICS'10), ACM.}},
+ title = {Streamlining {GPU} applications on the fly: {T}hread divergence elimination through runtime thread-data remapping},
+ booktitle = {{Proceedings of the 24th ACM International Conference on Supercomputing (ICS'10), ACM}},
year = {2010},
pages = {115-126},
publisher = {New York, NY, USA}
@misc{ch8:cuda,
author = {{NVIDIA Corporation}},
keywords = {CUDA},
- note = {Version 4.0},
- title = {{NVIDIA CUDA C} Programming Guide},
+ title = {{NVIDIA CUDA C} Programming Guide, Version 4.0 },
year = 2011
}
\subsection{Complexity analysis of the memory usage of the lower bound }
-In this section, the characteristics of the data structures used by the lower bound function are studied in terms of sizes and access frequencies. For an efficient implementation of the LB, six data structures are required: the matrix $PTM$ of the processing times of the jobs, the matrix of lags $LM$, the Johnson's matrix $JM$, the matrix $RM$ of the earliest starting times of jobs, the matrix $QM$ of their lowest latency times, and the matrix $MM$ containing the couples of machines. The complexities of the different data structures are summarized in Table~\ref{ch8:tabMemComplex} where the columns represent, respectively, the name of the data structure, its size, and the number of times it is accessed.\\
+In this section, the characteristics of the data structures used by the lower bound function are studied in terms of sizes and access frequencies. For an efficient implementation of the LB, six data structures are required: the matrix PTM of the processing times of the jobs, the matrix of lags LM, the Johnson's matrix JM, the matrix RM of the earliest starting times of jobs, the matrix QM of their lowest latency times, and the matrix MM containing the couples of machines. The complexities of the different data structures are summarized in Table~\ref{ch8:tabMemComplex} where the columns represent, respectively, the name of the data structure, its size, and the number of times it is accessed.\\
-In the LB expression, the computation of the term $P_{Ja}^*(\jmath,M_k,M_l)$ requires the calculation of the lag of each remaining job to be scheduled on the couple $(M_k,M_l)$ of machines using its processing times on these machines (Johnson's rule with lags). Such computation is repeated for each couple $(M_k,M_l)$ of machines with $1 \leq k,l \leq m$ and $k<l$. To avoid the repetitive computation of the lags, they are computed once at the beginning of the algorithm and stored in the matrix $LM$. The dimension of $LM$ is $n \times \frac{m\times (m-1)}{2}$, where $n$ and $m$ are respectively the number of jobs to be scheduled and $m$ the number of machines. $LM$ is accessed $n' \times \frac{m \times (m-1)}{2}$ times, $n'$ being the number of remaining jobs to be scheduled in the subproblem for which the lower bound is being calculated. The processing times of all the jobs on all the machines are stored in the matrix $PTM$. This matrix has a dimension of $n \times m$ and is accessed $n' \times m \times (m-1)$ times.\\
+In the LB expression, the computation of the term $P_{Ja}^*(\jmath,M_k,M_l)$ requires the calculation of the lag of each remaining job to be scheduled on the couple $(M_k,M_l)$ of machines using its processing times on these machines (Johnson's rule with lags). Such a computation is repeated for each couple $(M_k,M_l)$ of machines with $1 \leq k,l \leq m$ and $k<l$. To avoid the repetitive computation of the lags, they are computed once at the beginning of the algorithm and stored in the matrix LM. The dimension of LM is $n \times \frac{m\times (m-1)}{2}$, where $n$ and $m$ are, respectively, the number of jobs to be scheduled and the number of machines. LM is accessed $n' \times \frac{m \times (m-1)}{2}$ times, $n'$ being the number of remaining jobs to be scheduled in the subproblem for which the lower bound is being calculated. The processing times of all the jobs on all the machines are stored in the matrix PTM. This matrix has a dimension of $n \times m$ and is accessed $n' \times m \times (m-1)$ times.\\
-In addition, in order to avoid relaunching the Johnson's algorithm for each couple of machines and each subset of jobs, the Johnson's algorithm is computed once to find the optimal solutions on the couples of machines. These optimal solutions are then stored in the Johnson's matrix $JM$. This matrix has the same dimension as $LM$ and is accessed $n \times \frac{m \times (m-1)}{2}$ times during the computation of the lower bound. Finally, the $MM$ matrix that contains all the couples of machines has a dimension and access frequency of $m \times (m-1)$. \\
+In addition, in order to avoid relaunching Johnson's algorithm for each couple of machines and each subset of jobs, Johnson's algorithm is run once to find the optimal solutions on the couples of machines. These optimal solutions are then stored in the Johnson's matrix JM. This matrix has the same dimension as LM and is accessed $n \times \frac{m \times (m-1)}{2}$ times during the computation of the lower bound. Finally, the MM matrix that contains all the couples of machines has a dimension and access frequency of $m \times (m-1)$. \\
-To reduce the computation time cost of the term $\min\limits_{(i,j)\in \jmath^2, i \neq j}(r_{i,k}+q_{j,l})$ in the LB expression, two matrices are defined, namely $RM$ and $QM$. They are used to store, respectively, the lowest starting and latency times of all the jobs on each machine. Their dimension is $m$ and, are accessed $ m \times (m-1)$ times and $ \frac{m \times (m-1)}{2}$ times, respectively.
+To reduce the computation time cost of the term $\min\limits_{(i,j)\in \jmath^2, i \neq j}(r_{i,k}+q_{j,l})$ in the LB expression, two matrices are defined, namely RM and QM. They are used to store, respectively, the lowest starting and latency times of all the jobs on each machine. Their dimension is $m$, and they are accessed $m \times (m-1)$ times and $\frac{m \times (m-1)}{2}$ times, respectively.
\begin{table}
\centering
\end{table}
-Taking into consideration the sizes of each data structure presented in Table \ref{ch8:tabMemSizes}, our challenge is to find which data structure has to be mapped onto which memory and in some cases how to split the data structures onto different memories and efficiently manage their accesses. The sizes in bytes reported in Table \ref{ch8:tabMemSizes} are computed knowing that in our implementation the elements of $JM$ and $PTM$ are unsigned chars (one byte) and that the elements of $LM$, $RM$, $QM$, and $MM$ are unsigned short ints (2 bytes). It is important here to highlight that the types of the data of the used matrices impact the size of each matrix. For instance, a matrix of $100$ integers has a size of $400$ octets while the same matrix with $100$ unsigned chars has a size of $100$ octets. In order to minimize the size of each of the used matrices, we analyzed the ranges of their values and defined their data types accordingly. For instance, in PTM all the processing times have positive values varying between $0$ and $100$. Therefore, we defined PTM as a matrix of \verb|unsigned char| having values in the range $[0, 255]$. Using the \verb|unsigned char| type instead of the integer type allows us to reduce by $4$ times the memory space occupied by PTM.\\
+Taking into consideration the sizes of each data structure presented in Table \ref{ch8:tabMemSizes}, our challenge is to find which data structure has to be mapped onto which memory and, in some cases, how to split the data structures onto different memories and efficiently manage their accesses. The sizes in bytes reported in Table \ref{ch8:tabMemSizes} are computed knowing that in our implementation the elements of JM and PTM are unsigned chars (one byte) and that the elements of LM, RM, QM, and MM are unsigned short ints (2 bytes). It is important here to highlight that the data types of the used matrices impact the size of each matrix. For instance, a matrix of $100$ integers has a size of $400$ bytes while the same matrix with $100$ unsigned chars has a size of $100$ bytes. In order to minimize the size of each of the used matrices, we analyzed the ranges of their values and defined their data types accordingly. For instance, in PTM all the processing times have positive values varying between $0$ and $100$. Therefore, we defined PTM as a matrix of \verb|unsigned char| having values in the range $[0, 255]$. Using the \verb|unsigned char| type instead of the integer type allows us to reduce the memory space occupied by PTM by a factor of $4$.\\
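+As an illustrative calculation (for a hypothetical instance with $n=20$ jobs and $m=20$ machines, not one of the benchmark instances reported in Table~\ref{ch8:tabMemSizes}), there are $\frac{m \times (m-1)}{2}=190$ couples of machines, so JM and LM both contain $20 \times 190 = 3800$ elements; with the data types above, this amounts to $3800$ bytes for JM (unsigned chars) and $7600$ bytes for LM (unsigned short ints), while PTM occupies $20 \times 20 = 400$ bytes.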
-According to the Table \ref{ch8:tabMemSizes} :
+According to Table~\ref{ch8:tabMemSizes}:
\begin{itemize}
- \item The data structures $RM$, $QM$ and $MM$ are small sized-matrices. Therefore, their impact on the performances is not significant whatever is the memory to which they are off-loaded. In particular, preliminary experiments prove that putting them on the shared memory would allows a very poor performance improvement.
-\item The $LM$ data structure is the double of the $JM$ in memory size but with a much lower access frequency. It is thus better to map $JM$ on the shared memory.
-\item The $PTM$ has almost the same access frequency than $JM$ but requires less memory space.
+ \item The data structures RM, QM, and MM are small-sized matrices. Therefore, their impact on the performance is not significant whatever memory they are off-loaded to. In particular, preliminary experiments show that putting them in the shared memory yields only a very poor performance improvement.
+\item The LM data structure is twice the size of JM in memory but has a much lower access frequency. It is thus better to map JM onto the shared memory.
+\item PTM has almost the same access frequency as JM but requires less memory space.
\end{itemize}
-Consequently, the focus is put on the study of the performance impact of the placement of $JM$ and $PTM$ on the shared memory. Three placement scenarios of $JM$ and $PTM$ are experimented and studied: (1) Only $PTM$ is stored in shared memory and all others are placed in global memory~; (2) Only $JM$ is stored in shared memory and all others are placed on global memory~; (3) $PTM$ and $JM$ are stored together in shared memory and all others are placed on global memory. \\
+Consequently, the focus is put on the study of the performance impact of the placement of JM and PTM on the shared memory. Three placement scenarios of JM and PTM are experimented with and studied: (1) only PTM is stored in shared memory and all others are placed in global memory; (2) only JM is stored in shared memory and all others are placed in global memory; (3) PTM and JM are stored together in shared memory and all others are placed in global memory. \\
-Taking profit from the configurable storage space provided in the new Fermi-based devices, the $64$ KB of local storage was split between the shared memory and the L1 cache according to the experimented scenario.
+Taking advantage of the configurable storage space provided by the new Fermi-based devices, the $64$~KB of on-chip storage was split between the shared memory and the L1 cache according to the scenario being tested.
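+For illustration, on Fermi this split can be requested per kernel through the CUDA runtime API (the kernel name below is hypothetical, not the actual bounding kernel of our implementation):
+\begin{lstlisting}[caption=Selecting the shared memory/L1 split on a Fermi device]
+#include <cuda_runtime.h>
+
+__global__ void lb_kernel(/* bound computation, omitted */) { }
+
+void configure_onchip_memory(void)
+{
+  // Favor the 48 KB shared memory / 16 KB L1 split when JM and/or PTM are
+  // placed in shared memory; cudaFuncCachePreferL1 selects the opposite
+  // 16 KB shared / 48 KB L1 split.
+  cudaFuncSetCacheConfig(lb_kernel, cudaFuncCachePreferShared);
+}
+\end{lstlisting}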
\subsection{Experimental protocol: computing the speedup}
\label{ch8:Protocol}
-We need to compute the speed up of our approach to evaluate its performances. This speed up is obtained by comparing our GPU B\&B version to a sequential B\&B version deployed on one CPU core. However, all the instances used in our experiments are extremely hard to solve. Indeed, the resolution of each of these instances requires several months of computation on one CPU core. For example, the optimal solution of one of these instances defined by $50$ jobs and $20$ machines is obtained after $25$ days of computation using an average of $328$ CPU cores \cite{ch8:Mezmaz_2007}. \\
+We need to compute the speedup of our approach to evaluate its performance. This speedup is obtained by comparing our GPU B\&B version to a sequential B\&B version deployed on one CPU core. However, all the instances used in our experiments are extremely hard to solve. Indeed, the resolution of each of these instances requires several months of computation on one CPU core. For example, the optimal solution of one of these instances defined by $50$ jobs and $20$ machines is obtained after $25$ days of computation using an average of $328$ CPU cores \cite{ch8:Mezmaz_2007}. \\
-Using the approach defined in \cite{ch8:Mezmaz_2007}, it is possible to obtain a random list $L$ of subproblems such that the resolution of $L$ lasts $T$ minutes with a sequential B\&B. So by initializing the pool of our sequential B\&B with the subproblems of this list $L$, we are sure that the resolution of the sequential B\&B will last $T{cpu}$ minutes such as $T{cpu}$ will be approximately equal to $T$. Therefore, it will be possible to initialize the pool of our GPU B\&B with the same list $L$ of subproblems in order to compute the speed up. Let us suppose that the resolution of the GPU B\&B will last $T{gpu}$ minutes. So the speed up of our GPU algorithm will be equal to $Tcpu/Tgpu$. With this experimental protocol, the subproblems explored by the GPU and CPU B\&B versions will be exactly the same. So to find the speed up associated to an instance, we:
+Using the approach defined in \cite{ch8:Mezmaz_2007}, it is possible to obtain a random list $L$ of subproblems such that the resolution of $L$ lasts $T$ minutes with a sequential B\&B. So by initializing the pool of our sequential B\&B with the subproblems of this list $L$, we are sure that the resolution of the sequential B\&B will last $T_{cpu}$ minutes, with $T_{cpu}$ approximately equal to $T$. Therefore, it is possible to initialize the pool of our GPU B\&B with the same list $L$ of subproblems in order to compute the speedup. Let us suppose that the resolution of the GPU B\&B lasts $T_{gpu}$ minutes. Then the speedup of our GPU algorithm is equal to $T_{cpu}/T_{gpu}$. With this experimental protocol, the subproblems explored by the GPU and CPU B\&B versions are exactly the same. So to find the speedup associated with an instance, we:
\begin{itemize}
-\item compute, using the approach defined in \cite{ch8:Mezmaz_2007}, a list $L$ of subproblems such as the resolution of $L$ lasts $T$ minutes with a sequential B\&B;
+\item compute, using the approach defined in \cite{ch8:Mezmaz_2007}, a list $L$ of subproblems such that the resolution of $L$ lasts $T$ minutes with a sequential B\&B;
\item solve the subproblems of this pool with our GPU B\&B;
-\item get the GPU resolution time $T{gpu}$ and the number of explored subproblems $N{gpu}$;
-\item check that $N{gpu}$ is exactly equal to $N{cpu}$;
+\item get the GPU resolution time $T_{gpu}$ and the number of explored subproblems $N_{gpu}$;
+\item check that $N_{gpu}$ is exactly equal to $N_{cpu}$;
-\item and finally compute the speed up associated to this instance by dividing $T{cpu}$ by $T{gpu}$ (i.e., $Tcpu/Tgpu$).
+\item and finally compute the speedup associated with this instance by dividing $T_{cpu}$ by $T_{gpu}$ (i.e., $T_{cpu}/T_{gpu}$).
\end{itemize}
\footnotesize
\begin{tabular}{|r|r|r|r|r|}
\hline
-Instance (No. of jobs x No. of machines) & 20$\times$20 & 50$\times$20 & 100$\times$20 & 200$\times$20 \\
+Instance (No. of jobs $\times$ No. of machines) & 20$\times$20 & 50$\times$20 & 100$\times$20 & 200$\times$20 \\
\hline
Sequential resolution time (minutes) & 10 & 50 & 150 & 300 \\
\hline
-The objective of the experimental study presented in this section is to compared the performances of both proposed approaches for designing B\&B on top of GPUs.
+The objective of the experimental study presented in this section is to compare the performance of both proposed approaches for designing B\&B on top of GPUs.
-Table \ref{ch8:ParaGPU1} and Table~\ref{ch8:ParaGPU2} report respectively the speedups obtained with the GPU-PTE-BB and GPU-PEB-BB approaches for different problem instances. The first part of both tables gives the size of the pool generated and evaluated on the GPU. The second part of the tables gives the average speedup for each group of instances and for each pool size. Each line corresponds to a group of $10$ instances defined by the same number of jobs and the same number of machines.
+Table \ref{ch8:ParaGPU1} and Table~\ref{ch8:ParaGPU2} report the speedups obtained with the GPU-PTE-BB and GPU-PEB-BB approaches, respectively, for different problem instances. The first part of both tables gives the size of the pool generated and evaluated on the GPU. The second part of the tables gives the average speedup for each group of instances and for each pool size. Each line corresponds to a group of $10$ instances defined by the same number of jobs and the same number of machines. \\
-The results obtained with the GPU-PTE-BB approach (see Table \ref{ch8:ParaGPU1}) show that exploring in parallel the tree search allows to speedup the execution of the B\&B compared to a CPU-based execution. Indeed, an acceleration factor up to 40.50 is obtained for the 20 $\times$ 20 problem instances using a pool of 262144 subproblems.
+The results obtained with the GPU-PTE-BB approach (see Table \ref{ch8:ParaGPU1}) show that exploring the search tree in parallel speeds up the execution of the B\&B compared to a CPU-based execution. Indeed, an acceleration factor of up to 40.50 is obtained for the 20 $\times$ 20 problem instances using a pool of 262144 subproblems. \\
-The results show also that the parallel efficiency decreases with the size of the problem instance. For a fixed number of machines (here 20 machines) and a fixed pool size, the obtained speedup decline accordingly with the number of jobs. For instance for a pool size of 262144, the acceleration factor obtained with 200 jobs (13.4) while it is (40.50) for the instances with 20 jobs. This behavior is mainly due to the overhead induced by the transfer of the pool of resulting subproblems between the CPU and the GPU. For example, for the instances with 200 jobs the size of the pool to exchange between the CPU and the GPU is ten times bigger than the size of the pool for the instances with 20 jobs.
-
-\begin{table}[htbp]
+\begin{table}[h]
\setlength{\tabcolsep}{0.2cm}
\renewcommand{\arraystretch}{1.2}
\centering
Pool size & 4096 & 8192 & 16384 & 32768 & 65536 & 131072 & 262144\\
\hline
\hline
-(NJobs $\times$ NMachines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
+($N$ Jobs $\times$ $N$ Machines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
\hline
$200 \times $20 & 1.12 & 2.89 & 3.57 & 4.23 & 6.442 & 8.32 & 13.4\\
\hline
\label{ch8:ParaGPU1}
\end{table}
-The results obtained with the GPU-PEB-BB approach (see Table \ref{ch8:ParaGPU2}) show that evaluating in parallel the bounds of a selected pool, allow to significantly speedup the execution of the B\&B. Indeed, an acceleration factor up to 71.69 is obtained for the 200 $\times$ 20 problem instances using a pool of 262144 subproblems. The results show also that the parallel efficiency grows with the size of the problem instance. For a fixed number of machines (here 20 machines) and a fixed pool size, the obtained speedup grows accordingly with the number of jobs. For instance for a pool size of 262144, the acceleration factor obtained with 200 jobs (71.69) is almost the double of the one obtained with 20 jobs (38.40).
-As far the pool size tuning is considered, we could notice that this parameter depends strongly on the problem instance being solved. Indeed, while the best acceleration is obtained with a pool size of 8192 subproblems for the instances 50 $\times$ 20 and 20 $\times$ 20, the best speedups are obtained with a pool size of 262144 subproblems with the instances 200 $\times$ 20 and 100 $\times$ 20.\\
+The results show also that the parallel efficiency decreases with the size of the problem instance. For a fixed number of machines (here 20 machines) and a fixed pool size, the obtained speedup declines as the number of jobs increases. For instance, for a pool size of 262144, the acceleration factor obtained with 200 jobs is 13.4 while it is 40.50 for the instances with 20 jobs. This behavior is mainly due to the overhead induced by the transfer of the pool of resulting subproblems between the CPU and the GPU. For example, for the instances with 200 jobs the size of the pool to exchange between the CPU and the GPU is ten times bigger than the size of the pool for the instances with 20 jobs.\\
-\begin{table}
+
+The results obtained with the GPU-PEB-BB approach (see Table \ref{ch8:ParaGPU2}) show that evaluating the bounds of a selected pool in parallel allows a significant speedup of the execution of the B\&B. Indeed, an acceleration factor of up to 71.69 is obtained for the 200 $\times$ 20 problem instances using a pool of 262144 subproblems. The results show also that the parallel efficiency grows with the size of the problem instance. For a fixed number of machines (here 20 machines) and a fixed pool size, the obtained speedup grows with the number of jobs. For instance, for a pool size of 262144, the acceleration factor obtained with 200 jobs (71.69) is almost double the one obtained with 20 jobs (38.40). \\
+
+\begin{table}[h]
\setlength{\tabcolsep}{0.2cm}
\renewcommand{\arraystretch}{1.2}
\centering
\label{ch8:ParaGPU2}
\end{table}
-Compared to the parallel tree exploration-based GPU-accelerated B\&B approach, the parallel evaluation of bounds approach is by far much more efficient wherever the instance is. For example, while the GPU-PEB-BB approach reaches speedup of $\times$71.69 for the instance with 200 jobs on 20 machines, a speedup of a $\times$13.4 is measured with the parallel tree exploration-based approach which corresponds to an acceleration of $\times$5.56 . Moreover, on the contrary to the GPU-PEB-BB approach, in the GPU-PTE-BB the speedups decrease when the problem instance becomes higher. Remember here that while in the GPU-PEB-BB approach all threads evaluate only one node each whatever the permutation size is. In the GPU-PTE-BB, each thread branches all the children of its assigned parent node. Therefore, the bigger the size of the permutation is, the bigger the amount of work performed by each thread is and the bigger the difference between the workload is. Indeed, let us suppose that for the instance with $200$ jobs, the thread $0$ handles a node from the level $2$ of the tree and the thread $100$ handles a node from the level $170$ of the tree. In this case, the thread $0$ generates and evaluates $198$ nodes while the thread $100$ decomposes and bounds only $30$ nodes. The problem in this example is that the kernel execution would last until the thread $0$ finishes its work while the other threads might have ended their works and stayed idle.
+As far as the pool size tuning is concerned, we notice that this parameter depends strongly on the problem instance being solved. Indeed, while the best acceleration is obtained with a pool size of 8192 subproblems for the instances 50 $\times$ 20 and 20 $\times$ 20 (Table~\ref{ch8:ParaGPU2}, in bold), the best speedups are obtained with a pool size of 262144 subproblems for the instances 200 $\times$ 20 and 100 $\times$ 20 (Table~\ref{ch8:ParaGPU2}, in bold).\\
+
+
+
+Compared to the parallel tree-exploration-based GPU-accelerated B\&B approach, the parallel evaluation of bounds approach is by far more efficient, whatever the instance. For example, while the GPU-PEB-BB approach reaches a speedup of $\times$71.69 for the instance with 200 jobs on 20 machines, a speedup of $\times$13.4 is measured with the parallel tree-exploration-based approach, which corresponds to an acceleration of $\times$5.56. Moreover, contrary to the GPU-PEB-BB approach, in the GPU-PTE-BB approach the speedups decrease when the problem instance becomes larger. Remember here that in the GPU-PEB-BB approach all threads evaluate only one node each, whatever the permutation size is, while in the GPU-PTE-BB approach each thread branches all the children of its assigned parent node. Therefore, the bigger the size of the permutation, the bigger the amount of work performed by each thread and the bigger the difference between the workloads. Indeed, let us suppose that for the instance with $200$ jobs, the thread $0$ handles a node from the level $2$ of the tree and the thread $100$ handles a node from the level $170$ of the tree. In this case, the thread $0$ generates and evaluates $198$ nodes while the thread $100$ decomposes and bounds only $30$ nodes. The problem in this example is that the kernel execution lasts until the thread $0$ finishes its work, while the other threads might have completed their work and remained idle.
\subsection{Thread divergence reduction}
-The objective of this section is to demonstrate that the thread divergence reduction mechanisms we propose has an impact on the performance of the GPU accelerated B\&B and to evaluate how this impact is significant.
+The objective of this section is to demonstrate that the thread divergence reduction mechanisms we propose have an impact on the performance of the GPU-accelerated B\&B and to evaluate how significant this impact is.
In the following, the reported results are obtained with the GPU-accelerated B\&B based on the parallel evaluation of bounds.
-\begin{table}[!h]
+\begin{table}[h]
\setlength{\tabcolsep}{0.2cm}
\renewcommand{\arraystretch}{1.2}
\centering
Pool size & 4096 & 8192 & 16384 & 32768 & 65536 & 131072 & 262144\\
\hline
\hline
-(NJobs $\times$ NMachines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
+($N$ Jobs $\times$ $N$ Machines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
\hline
\hline
$200 \times $20 & 46.63 & 60.88 & 63.80 & 67.51 & 73.47 & 75.94 & \textbf{77.46}\\
\label{ch8:ParaDivergence}
\end{table}
-Table~\ref{ch8:ParaDivergence} shows the experimental results obtained using the sorting process and the refactoring approach presented in Section \ref{ch8:ThreadDivergence}. Results show that the proposed optimizations emphasize the GPU acceleration reported in Table~\ref{ch8:ParaGPU2} and obtained without thread divergence reduction. For example, for the instances of 200 jobs over 20 machines and a pool size of 262144, the average reported speedup is 77.46 while the average acceleration factor obtained without thread divergence management for the same instances and the same pool size is 71.69 which corresponds to an improvement of 7.68\%. Such considerable but not outstanding improvement is predictable, as claimed in \cite{ch8:Han}, since the factorized part of the branches in the FSP lower bound is very small.
+Table~\ref{ch8:ParaDivergence} shows the experimental results obtained using the sorting process and the refactoring approach presented in Section \ref{ch8:ThreadDivergence}. Results show that the proposed optimizations improve on the GPU acceleration reported in Table~\ref{ch8:ParaGPU2}, which was obtained without thread divergence reduction. For example, for the instances of 200 jobs over 20 machines and a pool size of 262144, the average reported speedup is 77.46 (Table~\ref{ch8:ParaDivergence}, in bold) while the average acceleration factor obtained without thread divergence management for the same instances and the same pool size is 71.69, which corresponds to an improvement of 7.68\%. Such a considerable but not outstanding improvement is predictable, as claimed in \cite{ch8:Han}, since the factorized part of the branches in the FSP lower bound is very small.
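+To give a flavor of the branch refactoring idea (a generic illustration, not the exact transformation applied to the lower bound kernel in Section~\ref{ch8:ThreadDivergence}), a data-dependent conditional can be rewritten into uniform arithmetic so that all the threads of a warp execute the same instruction stream:
+\begin{lstlisting}[caption=Generic illustration of branch refactoring]
+// Divergent version: threads of the same warp may take different paths.
+__device__ int term_divergent(int time, int lag)
+{
+  if (lag > 0) return time + lag;
+  else         return time;
+}
+
+// Refactored version: the condition becomes a 0/1 coefficient, so every
+// thread executes exactly the same instructions.
+__device__ int term_uniform(int time, int lag)
+{
+  int c = (lag > 0);   // 1 if the branch would have been taken, 0 otherwise
+  return time + c * lag;
+}
+\end{lstlisting}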
\subsection{Data access optimization}
-The objective of the experimental study presented in this section is to find the best mapping of the six data structures of the lower bound LB kernel on the memories of the GPU device. In the following, the reported results are obtained with the GPU-accelerated B\&B based on the parallel evaluation of bounds.
+The objective of the experimental study presented in this section is to find the best mapping of the six data structures of the LB kernel on the memories of the GPU device. In the following, the reported results are obtained with the GPU-accelerated B\&B based on the parallel evaluation of bounds.
-Table~\ref{ch8:PTM-on-SM} reports the speedups obtained for the first experimented scenario where only the matrix $PTM$ is put on the shared memory. Results show that the speedup grows on average with the growing of the pool size in the same way as in Table~\ref{ch8:ParaDivergence}. For the largest problem instance and pool size, putting the PTM matrix on the shared memory improves the speedups up to ($14\%$) compared to those obtained when $PTM$ is on global memory reaching an acceleration of $\times 90.51$ for the problem instances $200 \times 20$ and a pool size of $262144$ subproblems .
+Table~\ref{ch8:PTM-on-SM} reports the speedups obtained for the first experimental scenario, where only the matrix PTM is put in the shared memory. Results show that the speedup grows on average with the pool size, in the same way as in Table~\ref{ch8:ParaDivergence}. For the largest problem instance and pool size, putting the PTM matrix in the shared memory improves the speedups by up to $14\%$ compared to those obtained when PTM is in global memory, reaching an acceleration of $\times 90.51$ for the problem instances $200 \times 20$ and a pool size of $262144$ subproblems.
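+As a hedged sketch of scenario (1) (hypothetical kernel and placeholder sizes, not the actual implementation), the threads of a block can cooperatively copy PTM from global memory into shared memory before computing their bounds:
+\begin{lstlisting}[caption=Sketch of the cooperative copy of PTM into shared memory]
+#define N_JOBS     20   // placeholder compile-time sizes
+#define N_MACHINES 20
+
+__global__ void lb_kernel_ptm_shared(const unsigned char *ptm_gmem /*, ... */)
+{
+  __shared__ unsigned char sPTM[N_JOBS * N_MACHINES];
+
+  // Cooperative, coalesced copy: each thread loads a strided subset.
+  for (int i = threadIdx.x; i < N_JOBS * N_MACHINES; i += blockDim.x)
+    sPTM[i] = ptm_gmem[i];
+  __syncthreads();
+
+  // ... lower bound computation reading the processing times from sPTM ...
+}
+\end{lstlisting}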
\begin{table}
\centering
Pool size & 4096 & 8192 & 16384 & 32768 & 65536 & 131072 & 262144\\
\hline
\hline
-(NJobs $\times$ NMachines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
+($N$ Jobs $\times$ $N$ Machines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
\hline
\hline
$200 \times $20 & 54.03 & 67.75 & 68.43 & 72.17 & 82.01 & 88.35 & \textbf{90.51}\\
% \hline
% \hline
\end{tabular}
- \caption[Speedup for different FSP instances and pool sizes obtained with data access optimization.]{Speedup for different FSP instances and pool sizes obtained with data access optimization. $PTM$ is placed in shared memory and all others are placed in global memory.}
+ \caption[Speedup for different FSP instances and pool sizes obtained with data access optimization.]{Speedup for different FSP instances and pool sizes obtained with data access optimization. PTM is placed in shared memory and all others are placed in global memory.}
\label{ch8:PTM-on-SM}
\end{table}
-Table~\ref{ch8:JM-on-SM} reports the behavior of the speedup averaged on the different problem instances (sizes) as a function of the pool size for the scenario where the Johnson's matrix is put on the shared memory. Results show that putting the $JM$ matrix on the shared matrix improves more the performances comparing to the first scenario where $PTM$ is put on the shared memory. Indeed, according to Table~\ref{ch8:tabMemComplex}, matrix $JM$ is accessed more frequently than matrix $PTM$. Putting $JM$ matrix on the shared memory allows accelerations up to $\times 97.83$ for the problem instances $200 \times 20$.
+Table~\ref{ch8:JM-on-SM} reports the behavior of the speedup, averaged over the different problem instances (sizes), as a function of the pool size for the scenario where the Johnson's matrix is put in the shared memory. Results show that putting the JM matrix in the shared memory improves the performance more than in the first scenario, where PTM is put in the shared memory. Indeed, according to Table~\ref{ch8:tabMemComplex}, the matrix JM is accessed more frequently than the matrix PTM. Putting the JM matrix in the shared memory allows accelerations of up to $\times 97.83$ for the problem instances $200 \times 20$.
\begin{table}
\centering
Pool size & 4096 & 8192 & 16384 & 32768 & 65536 & 131072 & 262144\\
\hline
\hline
-(NJobs $\times$ NMachines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
+($N$ Jobs $\times$ $N$ Machines) & \multicolumn{7}{|c|}{Average speedup for each group of 10 instances}\\
\hline
\hline
$200 \times $20 & 63.01 & 79.40 & 81.40 & 84.02 & 93.61 & 96.56 & \textbf{97.83}\\
% \hline
\end{tabular}
\caption[Speedup for different FSP instances and pool sizes obtained with data access optimization.]{Speedup for different FSP instances and pool sizes obtained with data access optimization.
-$JM$ is placed in shared memory and all others are placed in global memory.}
+JM is placed in shared memory and all others are placed in global memory.}
\label{ch8:JM-on-SM}
\end{table}
-Table~\ref{ch8:JM-PTM-on-SM} reports the behavior of the average speedup for the different problem instances (sizes) with $20$ machines for the data placement scenario where both $PTM$ and $JM$ are put on shared memory. According to the underlying Table, the scenarios~(3) ($JM$ together or without $PTM$ in shared memory) is clearly better than the scenarii~(1)and~(2) (respectively $PTM$ in shared memory and $JM$ in shared memory) whatever is the problem instance (size).
+Table~\ref{ch8:JM-PTM-on-SM} reports the behavior of the average speedup for the different problem instances (sizes) with $20$ machines for the data placement scenario where both PTM and JM are put in shared memory. According to this table, scenario~3 (JM together with PTM in shared memory) is clearly better than scenarios~1 and~2 (respectively, PTM in shared memory and JM in shared memory) whatever the problem instance (size).
\begin{table}
\centering
% \hline
% \hline
\end{tabular}
- \caption[Speedup for different FSP instances and pool sizes obtained with data access optimization.]{Speedup for different FSP instances and pool sizes obtained with data access optimization. $PTM$ and $JM$ are placed together in shared memory and all others are placed in global memory.}
+ \caption[Speedup for different FSP instances and pool sizes obtained with data access optimization.]{Speedup for different FSP instances and pool sizes obtained with data access optimization. PTM and JM are placed together in shared memory and all others are placed in global memory.}
\label{ch8:JM-PTM-on-SM}
\end{table}
-By carefully analyzing each of the scenarii of data placement on the memory hierarchies of the GPU, the recommendation is to put in the shared memory the Johnson's and the processing time matrices ($JM$ and $PTM$) if they fit in together. Otherwise, the whole or a part of the Johnson's matrix has to be put in priority in the shared memory. The other data structures are mapped to the global memory.
+By carefully analyzing each of the scenarios of data placement on the memory hierarchy of the GPU, the recommendation is to put the Johnson's and the processing time matrices (JM and PTM) in the shared memory if they fit in together. Otherwise, the whole Johnson's matrix, or a part of it, should be put in the shared memory in priority. The other data structures are mapped to the global memory.
\section{Conclusion and future work}
\label{ch8:Conclusion}
-In this chapter, we have revisited the design of parallel B\&B algorithms on GPU accelerators to allow highly efficient solving of permutation-based COPs. To do so, our contributions consist in: (1) rethinking two approaches for parallel B\&B on top of GPUs, discussing the performances of each and identifying which best suits the GPU accelerators. (2) proposing a new approach for thread/branch divergence reduction through a thorough analysis of the different loops and conditional instructions of the bounding function. (3) defining an optimal mapping of the data structures of the bounding function on the hierarchy of memories provided in the GPU device through a careful analysis of both the data structures (size and access frequency) and the GPU memories (size and access latency).
+In this chapter, we have revisited the design of parallel B\&B algorithms on GPU accelerators to allow highly efficient solving of permutation-based COPs. To do so, our contributions consisted of: (1) rethinking two approaches for parallel B\&B on top of GPUs, discussing the performances of each and identifying which best suits the GPU accelerators; (2) proposing a new approach for thread/branch divergence reduction through a thorough analysis of the different loops and conditional instructions of the bounding function; and (3) defining an optimal mapping of the data structures of the bounding function on the hierarchy of memories provided in the GPU device through a careful analysis of both the data structures (size and access frequency) and the GPU memories (size and access latency).
-In the first parallel tree-exploration-based B\&B, a set of pending nodes is selected from this list according to their depth and off-loaded to the GPU where each thread builds its own local search tree by applying
-the branching, bounding and pruning operators to the assigned node. In the GPU-accelerated B\&B based on the parallel evaluation of bounds, the generation of the subproblems (branching, selection and pruning operations) is performed on CPU and the evaluation of their lower bounds (bounding operation) is executed on the GPU device. Pools of subproblems are off-loaded from CPU to GPU to be evaluated by blocks of threads. After evaluation, the lower bounds are returned to the CPU.
+In the first parallel tree-exploration-based B\&B, a set of pending nodes is selected from this list according to their depth and off-loaded to the GPU where each thread builds its own local search tree by applying
+the branching, bounding, and pruning operators to the assigned node. In the GPU-accelerated B\&B based on the parallel evaluation of bounds, the generation of the subproblems (branching, selection, and pruning operations) is performed on CPU and the evaluation of their lower bounds (bounding operation) is executed on the GPU device. Pools of subproblems are off-loaded from CPU to GPU to be evaluated by blocks of threads. After evaluation, the lower bounds are returned to the CPU.
-In both considered approaches, our focus is on the GPU-based lower bound's implementation and the associated thread divergence and data placement challenges. The proposed mechanisms for reducing the thread divergence issue are based on a thorough analysis of the different loops and conditional instructions of the lower bound function. On the one hand, the sorting process aims to homogenize the data of the subproblems off-loaded to the GPU to minimize the number of threads that diverge on loop instructions. On the other hand, the technique of branch refactoring rewrite the conditional instructions into uniform instructions so that threads of the same warp execute a same code. The proposed data access optimization is based on a preliminary analysis of the lower bound function. Such analysis allowed us to identify six data structures for which we have proposed a complexity analysis in terms of memory size and access frequency. Due to the limited size of the shared memory the matrices do not fit in all together. According to the complexity study, the recommendation is to put in the shared memory the Johnson's and the processing time matrices ($JM$ and $PTM$) if they fit in together. Otherwise, the whole or a part of the Johnson's matrix has to be put in priority in the shared memory. The other data structures are mapped to the global memory. Such recommendation has been confirmed through extensive experiments using a recent C2050 Tesla GPU card.
+In both considered approaches, our focus is on the GPU-based lower bound's implementation and the associated thread divergence and data placement challenges. The proposed mechanisms for reducing the thread divergence issue are based on a thorough analysis of the different loops and conditional instructions of the lower bound function. On the one hand, the sorting process aims to homogenize the data of the subproblems off-loaded to the GPU to minimize the number of threads that diverge on loop instructions. On the other hand, the technique of branch refactoring rewrites the conditional instructions into uniform instructions so that threads of the same warp execute the same code. The proposed data access optimization is based on a preliminary analysis of the lower bound function. Such analysis allowed us to identify six data structures for which we have proposed a complexity analysis in terms of memory size and access frequency. Due to the limited size of the shared memory, the matrices do not all fit in together. According to the complexity study, the recommendation is to put the Johnson's and the processing time matrices (JM and PTM) in the shared memory if they fit in together. Otherwise, the whole Johnson's matrix, or a part of it, should be put in the shared memory in priority. The other data structures are mapped to the global memory. This recommendation has been confirmed through extensive experiments using a recent C2050 Tesla GPU card.
-The Flowshop Scheduling Problem has been considered as a case study. The proposed approaches have been experimented using a Tesla C2050 GPU card on different classes of FSP instances. The experimental results show that the parallel evaluation of bounds is the parallelization paradigm that performs better on top of GPU accelerators. Compared to the parallel tree-exploration model, accelerations up to $\times$5.56 are achieved.
+The flowshop scheduling problem has been considered as a case study. The proposed approaches have been evaluated using a Tesla C2050 GPU card on different classes of FSP instances. The experimental results show that the parallel evaluation of bounds is the parallelization paradigm that performs better on top of GPU accelerators. Compared to the parallel tree-exploration model, accelerations of up to $\times$5.56 are achieved.
-Experiments show also that the proposed refactoring approach improves the parallel efficiency whatever the FSP instance and the pool size are. However, the improvement was not significant because the factorized part of the branches in the FSP lower bound is very small. The optimizations obtained with the proposed thread reduction mechanisms allowed us to achieve accelerations up to $\times$77.46 compared to a sequential B\&B. The data access optimizations grant accelerations up to $\times 100$ compared to a single CPU-based B\&B.
+Experiments show also that the proposed refactoring approach improves the parallel efficiency whatever the FSP instance and pool size. However, the improvement was not significant because the factorized part of the branches in the FSP lower bound is very small. The optimizations obtained with the proposed thread reduction mechanisms allowed us to achieve accelerations up to $\times$77.46 compared to a sequential B\&B. The data access optimizations allow accelerations up to $\times 100$ compared to a single CPU-based B\&B.
-In the near future, we plan to extend this work to a cluster of GPU-accelerated multi-core processors. From the application point of view, the objective is to optimally solve challenging and unsolved Flow-Shop instances as we did it for one 50$\times$20 problem instance with grid computing \cite{ch8:Mezmaz_2007}. Finally, we plan to investigate other lower bound functions to deal with other combinatorial optimization problems.
+In the near future, we plan to extend this work to a cluster of GPU-accelerated multicore processors. From the application point of view, the objective is to optimally solve challenging and unsolved flowshop instances, as we did for one 50$\times$20 problem instance with grid computing \cite{ch8:Mezmaz_2007}. Finally, we plan to investigate other lower bound functions to deal with other combinatorial optimization problems.
\putbib[Chapters/chapter8/biblio8]