From: couturie
Date: Fri, 4 Oct 2013 20:55:02 +0000 (+0200)
Subject: new
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_gpu.git/commitdiff_plain/063fd4437e9bfbefc2f6ed6c932744bb20514751

new
---

diff --git a/BookGPU/Chapters/chapter11/ch11.tex b/BookGPU/Chapters/chapter11/ch11.tex
index 736d68a..9c16acd 100644
--- a/BookGPU/Chapters/chapter11/ch11.tex
+++ b/BookGPU/Chapters/chapter11/ch11.tex
@@ -1,5 +1,5 @@
-\chapterauthor{Gleb Beliakov and Shaowu Liu}{School of Information Technology, Deakin University, Burwood 3125, Australia}
+\chapterauthor{Gleb Beliakov and Shaowu Liu}{School of Information Technology, Deakin University, Burwood, Australia}
 %\chapterauthor{Shaowu Liu}{School of Information Technology, Deakin University, Burwood 3125, Australia}
diff --git a/BookGPU/Chapters/chapter12/ch12.tex b/BookGPU/Chapters/chapter12/ch12.tex
index 5c0a5b2..4bc95a6 100755
--- a/BookGPU/Chapters/chapter12/ch12.tex
+++ b/BookGPU/Chapters/chapter12/ch12.tex
@@ -19,7 +19,7 @@
 \label{ch12:sec:01}
 Sparse linear systems are used to model many scientific and industrial problems, such as the environmental simulations or the industrial processing of the complex or
-non-Newtonian fluids. Moreover, the resolution of these problems often involves the
+nonNewtonian fluids. Moreover, the resolution of these problems often involves the
 solving of such linear systems that are considered the most expensive process in terms of execution time and memory space. Therefore, solving sparse linear systems must be as efficient as possible in order to deal with problems of ever increasing
diff --git a/BookGPU/Chapters/chapter13/ch13.tex b/BookGPU/Chapters/chapter13/ch13.tex
index ce2c7c2..db2ab78 100755
--- a/BookGPU/Chapters/chapter13/ch13.tex
+++ b/BookGPU/Chapters/chapter13/ch13.tex
@@ -134,7 +134,7 @@ where $b=\{b_{1},b_{2},b_{3}\}$, $\|b\|_{2}$ denotes the Euclidean norm of $b$,
 $v=e^{-a}.u$ represents the general change of variables such that $a=\frac{b^{t}(x,y,z)}{2\eta}$. Consequently, the numerical resolution of the diffusion problem (the self-adjoint operator~(\ref{ch13:eq:04})) is done by optimization algorithms, in contrast to that
-of the convection-diffusion problem (non self-adjoint operator~(\ref{ch13:eq:03}))
+of the convection-diffusion problem (nonself-adjoint operator~(\ref{ch13:eq:03}))
 which is done by relaxation algorithms. In the case of our studied algorithm, the convergence\index{convergence} is ensured by M-matrix property; then, the performance is linked to the magnitude of the spectral radius of the iteration matrix, which is independent of the condition
@@ -747,7 +747,7 @@ consequently it also depends on the number of computing nodes.
 %%--------------------------%%
 \section{Red-black ordering technique}
 \label{ch13:sec:06}
-As is well-known, the Jacobi method\index{iterative method!Jacobi} is characterized
+As is well known, the Jacobi method\index{iterative method!Jacobi} is characterized
 by a slow convergence\index{convergence} rate compared to some iterative methods\index{iterative method} (for example, Gauss-Seidel method\index{iterative method!Gauss-Seidel}). So, in this section, we present some solutions to reduce the execution time and the number of
diff --git a/BookGPU/Chapters/chapter14/ch14.tex b/BookGPU/Chapters/chapter14/ch14.tex
index 65177f4..89a72fd 100755
--- a/BookGPU/Chapters/chapter14/ch14.tex
+++ b/BookGPU/Chapters/chapter14/ch14.tex
@@ -156,7 +156,7 @@ site, may be thought of as follows.
 A matrix-vector multiplication ${\cal M}_{ij}f_j$ is used to transform the distributions into the hydrodynamic quantities, where ${\cal M}_{ij}$ is a constant 19x19 matrix related to the choice of
-$\mathbf{c}_i$. The non-conserved hydrodynamic quantities are then
+$\mathbf{c}_i$. The nonconserved hydrodynamic quantities are then
 relaxed toward their (known) equilibrium values and are transformed back to new post-collision distributions via the inverse transformation ${\cal M}^{-1}_{ij}$. This gives rise to the need for a minimum of $2\times 19^2$
@@ -744,7 +744,7 @@ we have tried to maintain the modular structure of the CPU where possible.
 For each data structure, such as the distribution, a separate analogue is maintained in both the CPU and GPU memory spaces. However, the GPU copy does not include the complete CPU structure: in
-particular, non-intrinsic datatypes such as MPI datatypes are not
+particular, nonintrinsic datatypes such as MPI datatypes are not
 required on the GPU. Functions to marshal data between CPU and GPU are provided for each data structure, abstracting the underlying CUDA implementation. (This reasonably lightweight abstraction layer
diff --git a/BookGPU/Chapters/chapter16/bdf.tex b/BookGPU/Chapters/chapter16/bdf.tex
index 83abbc3..6035b0b 100644
--- a/BookGPU/Chapters/chapter16/bdf.tex
+++ b/BookGPU/Chapters/chapter16/bdf.tex
@@ -117,8 +117,8 @@ J\! =\! \frac{\ud x_M}{\ud x_0}
 \end{equation}
 \begin{algorithm}
-\caption{The matrix-free\index{matrix-free} method for
-  Krylov subspace\index{iterative method!Krylov subspace} construction.}
+\caption{the matrix-free\index{matrix-free} method for
+  Krylov subspace\index{iterative method!Krylov subspace} construction}
 \label{alg:mf_Gear}
 \KwIn{ current Krylov subspace basis vector $v$, time step lengths $h_i$,
diff --git a/BookGPU/Chapters/chapter16/ch16.tex b/BookGPU/Chapters/chapter16/ch16.tex
index ac52b32..a08c901 100644
--- a/BookGPU/Chapters/chapter16/ch16.tex
+++ b/BookGPU/Chapters/chapter16/ch16.tex
@@ -1,7 +1,7 @@
-\chapterauthor{Xuexin Liu, Sheldon Xiang-Dong Tan}{Dept. Electrical Engineering,
-  University of California, Riverside, CA 92521, USA}
+\chapterauthor{Xuexin Liu, Sheldon Xiang-Dong Tan}{Department of Electrical Engineering,
+  University of California, Riverside, CA, USA}
 %\chapterauthor{Sheldon Xiang-Dong Tan}{Dept. Electrical Engineering, University of California, Riverside, CA 92521}
-\chapterauthor{Hai Wang}{Univ. of Electronics Science and Technology of China,
+\chapterauthor{Hai Wang}{University of Electronics Science and Technology of China,
 Chengdu, Sichuan, China}
 \chapterauthor{Hao Yu}{School of Electrical \& Electronic Engineering, Nanyang Technological University, Singapore}
@@ -18,7 +18,7 @@
 %\renewcommand{\algorithmicensure}{\textbf{Output:}}
 %\renewcommand{\algorithmiccomment}[1]{\% \textit{#1}}
-\chapter[GPU-Accelerated Envelope-Following Method]{A GPU-Accelerated Envelope-Following Method for Switching Power Converter Simulation}
+\chapter[GPU-accelerated envelope-following method]{A GPU-accelerated envelope-following method for switching power converter simulation}
 % \section{Abstract}
 %
 % Power converters have seen a surge of new trends and novel
diff --git a/BookGPU/Chapters/chapter16/exp.tex b/BookGPU/Chapters/chapter16/exp.tex
index 7fda02b..a527ba4 100644
--- a/BookGPU/Chapters/chapter16/exp.tex
+++ b/BookGPU/Chapters/chapter16/exp.tex
@@ -84,11 +84,6 @@ produces a envelope matching the original waveform well.
 \begin{table}
 \centering
-\caption{CPU and GPU time comparisons (in seconds) for solving Newton update equation
-  with the proposed Gear-2 sensitivity.
-}
-\vspace{0.1in}
-\label{table:circuit}
 {%\normalsize
 \begin{tabular}{@{}c|c|c|c|c|c|c@{}}
 \hline\hline
@@ -102,6 +97,10 @@ Boost-2 & 1093 & 729.5 & 720.8 & 71.0 & 8.5 & 9.9$\times$ \\
 \hline\hline
 \end{tabular}
 }
+\caption{CPU and GPU time comparisons (in seconds) for solving Newton update equation
+  with the proposed Gear-2 sensitivity.
+}
+\label{table:circuit}
 \end{table}
 For the comparison of running time spent in solving
diff --git a/BookGPU/Chapters/chapter16/gpu.tex b/BookGPU/Chapters/chapter16/gpu.tex
index bdd4ad0..4d4d6ef 100644
--- a/BookGPU/Chapters/chapter16/gpu.tex
+++ b/BookGPU/Chapters/chapter16/gpu.tex
@@ -78,7 +78,7 @@ a preset tolerance~\cite{Golub:Book'96}.
 %% \end{algorithm}
 \begin{algorithm}
-\caption{Standard GMRES\index{iterative method!GMRES} algorithm.} \label{alg:GMRES}
+\caption{standard GMRES\index{iterative method!GMRES} algorithm} \label{alg:GMRES}
 \KwIn{ $ A \in \mathbb{R}^{N \times N}$, $b \in \mathbb{R}^N$, and initial guess $x_0 \in \mathbb{R}^N$}
 \KwOut{ $x \in \mathbb{R}^N$: $\| b - A x\|_2 < tol$}
diff --git a/BookGPU/Chapters/chapter16/intro.tex b/BookGPU/Chapters/chapter16/intro.tex
index 875093f..6d021c3 100644
--- a/BookGPU/Chapters/chapter16/intro.tex
+++ b/BookGPU/Chapters/chapter16/intro.tex
@@ -76,7 +76,7 @@ next envelope step.
 {\resizebox{.9\textwidth}{!}{\input{./Chapters/chapter16/figures/envelope.pdf_t}}
 \label{fig:ef2} }
 \caption[Transient envelope-following\index{envelope-following} analysis.]{Transient envelope-following\index{envelope-following} analysis.
-  (Both two figures reflect backward Euler\index{Euler!backward Euler} style envelope-following.)}
+  (Both figures reflect backward Euler\index{Euler!backward Euler} style envelope-following.)}
 \label{fig:ef_intro}
 \end{figure}
diff --git a/BookGPU/Chapters/chapter17/ch17.tex b/BookGPU/Chapters/chapter17/ch17.tex
index 6cab2e3..8d3c8d7 100755
--- a/BookGPU/Chapters/chapter17/ch17.tex
+++ b/BookGPU/Chapters/chapter17/ch17.tex
@@ -13,7 +13,7 @@
 \setlength\hangindent{\mylen}%
 \hspace*{\mylen}#1\\}
-\chapter{Implementing Multi-Agent Systems on GPU}
+\chapter{Implementing multi-agent systems on GPU}
 \label{chapter17}
@@ -622,7 +622,7 @@ optimized representation of this matrix based
 on~\cite{Gomez-Luna:2009:PVS:1616772.1616869}, as illustrated in Figure~\ref{ch17:fig:csr_representation}. This compact representation considers each line of the matrix as an index list, and only stores
-accessible agents compactly, to reduce the number of non-productive
+accessible agents compactly, to reduce the number of nonproductive
 accesses.
 \begin{figure}[h]
diff --git a/BookGPU/Chapters/chapter18/ch18.tex b/BookGPU/Chapters/chapter18/ch18.tex
index 155cc04..5f0df4b 100755
--- a/BookGPU/Chapters/chapter18/ch18.tex
+++ b/BookGPU/Chapters/chapter18/ch18.tex
@@ -391,7 +391,7 @@ array\_comb1, array\_comb2: Arrays containing combinations of size combination\_
 store internal variables in InternalVarXorLikeArray[threadIdx]\;
 }
 \end{small}
-\caption{Main kernel for the chaotic iterations based PRNG GPU efficient
+\caption{main kernel for the chaotic iterations based PRNG GPU efficient
 version\label{IR}}
 \label{algo:gpu_kernel2}
 \end{algorithm}
diff --git a/BookGPU/Chapters/chapter7/ch7.tex b/BookGPU/Chapters/chapter7/ch7.tex
index 5d9c0e3..f20975d 100644
--- a/BookGPU/Chapters/chapter7/ch7.tex
+++ b/BookGPU/Chapters/chapter7/ch7.tex
@@ -1,5 +1,5 @@
-\chapterauthor{Allan P. Engsig-Karup, Stefan L. Glimberg, Allan S. Nielsen, Ole Lindberg}{Technical University of Denmark}
+\chapterauthor{Allan P. Engsig-Karup, Stefan L. Glimberg, Allan S. Nielsen, and Ole Lindberg}{Technical University of Denmark}
 %\chapterauthor{Stefan L. Glimberg}{Technical University of Denmark}
 %\chapterauthor{Allan S. Nielsen}{Technical University of Denmark}
 %\chapterauthor{Ole Lindberg}{Technical University of Denmark}
@@ -14,21 +14,21 @@
 \end{figure}
-In this chapter, we use our library for heterogenous and massively parallel GPU implementations. The library is written in Compute Unified Device Architecture (CUDA) C/C++ and a fully nonlinear and dispersive free surface water wave model \cite{ch7:EngsigKarupEtAl2011} is implemented. We describe how flexible-order finite difference\index{finite difference} (stencil) approximations to the partial differential equations of the model can be prototyped using library components provided in an in-house library. In this library hardware-specific implementation details are hidden via template-based components, as described in Chapter \ref{ch5}. We provide details of the modeling basis and important unique numerical properties which have been made tunable to create a powerful and robust tool that can be tailored for specific purposes in engineering analysis. The model is based on unified potential flow theory and can be applied in scientific applications related to maritime engineering. It can be applied for cost-efficient estimation of broad banded wave propagation, transformation of irregular multidirectional waves over variable depth, kinematics and structural wave loads over large areas and scales.
+In this chapter, we use our library for heterogeneous and massively parallel GPU implementations. The library is written in Compute Unified Device Architecture (CUDA) C/C++ and a fully nonlinear and dispersive free surface water wave model \cite{ch7:EngsigKarupEtAl2011} is implemented. We describe how flexible-order finite difference\index{finite difference} (stencil) approximations to the partial differential equations of the model can be prototyped using library components provided in an in-house library. In this library hardware-specific implementation details are hidden via template-based components, as described in Chapter \ref{ch5}. We provide details of the modeling basis and important unique numerical properties which have been made tunable to create a powerful and robust tool that can be tailored for specific purposes in engineering analysis. The model is based on unified potential flow theory and can be applied in scientific applications related to maritime engineering. It can be applied for cost-efficient estimation of broad banded wave propagation, transformation of irregular multidirectional waves over variable depth, kinematics and structural wave loads over large areas and scales.
-A main motivation of this work is to deliver exceptional performance to minimize calculation times, using modern parallel hardware technologies in combination with a proper choice of discretization methods and data-local algorithms with optimal complexity. This enable work and memory requirements to grow (scale) linearly with problem size on a suitable hardware system. For the wave model this is achieved by explicit time integration and iterative solution of a large nonsymmetric and sparse linear $\sigma$-transformed Laplace problem. For the latter, we use an iterative Preconditioned Defect Correction (PDC) method, accelerated using a geometric multigrid preconditioning strategy. We use modern programming paradigms in the form of Message Passing Interface (MPI) and CUDA for development of a novel massively parallel wave modeling tool, executable on modern heterogenous many-core hardware.
+A main motivation of this work is to deliver exceptional performance to minimize calculation times, using modern parallel hardware technologies in combination with a proper choice of discretization methods and data-local algorithms with optimal complexity. This enable work and memory requirements to grow (scale) linearly with problem size on a suitable hardware system. For the wave model this is achieved by explicit time integration and iterative solution of a large nonsymmetric and sparse linear $\sigma$-transformed Laplace problem. For the latter, we use an iterative Preconditioned Defect Correction (PDC) method, accelerated using a geometric multigrid preconditioning strategy. We use modern programming paradigms in the form of Message Passing Interface (MPI) and CUDA for development of a novel massively parallel wave modeling tool, executable on modern heterogeneous many-core hardware.
 One purpose of the developed numerical model is to ultimately perform hydrodynamic calculations in the time domain for practical analysis and simulation, e.g., to enable computationally intensive interactive real-time simulations. Realistic interactive simulations are, with present technology and available computational resources, a tremendous challenge in this setting. Yet, our aim is to take a first step in this direction and compute first-order accurate hydrodynamics for near-realistic simulations of unsteady ship-wave dynamics in a large ship simulator, used for training purposes in seakeeping operations. For this type of application, a mandatory ingredient for real-time and interactive simulation is a truly high-performance parallel implementation to ensure data processing in time for interactive visualization and responses. Details of the model properties, implementation, and promising novel combinations of techniques and algorithms for acceleration of performance are presented. Numerical experiments and benchmarks are provided to demonstrate the accuracy and efficiency of the model across recent generations of many-core CUDA-enabled hardware.
 \section{On hardware trends and challenges in scientific applications}
-During the last two decades we have seen how computer graphics hardware has been developed from fixed pipeline processors with no level of programmability, to flexible high-performance hardware platforms, suitable for general purpose scientific computations other than computer graphics. This trend has contributed to a disruptive breakthrough in high-performance computing on mass-produced commodity hardware and fuelled new opportunities for computational science and engineering for a broad range of scientific as well as modern business applications. This emphasizes the increasingly important role of computers in simulation of real world dynamics \cite{ch7:Keyes201170}. In recent years, the CUDA programming model, based on the standard C/C++ programming language and introduced by NVIDIA Corporation worldwide, has become popular as a proprietary and widely used standard in high performance communities. It is, by design and supported functionality, easy and sufficient to be deployed for wide improvement of existing and new applications across science and engineering fields, that can benefit from the the use of heterogenous hardware.
+During the last two decades we have seen how computer graphics hardware has been developed from fixed pipeline processors with no level of programmability, to flexible high-performance hardware platforms, suitable for general purpose scientific computations other than computer graphics. This trend has contributed to a disruptive breakthrough in high-performance computing on mass-produced commodity hardware and fuelled new opportunities for computational science and engineering for a broad range of scientific as well as modern business applications. This emphasizes the increasingly important role of computers in simulation of real world dynamics \cite{ch7:Keyes201170}. In recent years, the CUDA programming model, based on the standard C/C++ programming language and introduced by NVIDIA Corporation worldwide, has become popular as a proprietary and widely used standard in high performance communities. It is, by design and supported functionality, easy and sufficient to be deployed for wide improvement of existing and new applications across science and engineering fields, that can benefit from the the use of heterogeneous hardware.
 We should be careful about speculating about the future and extrapolating from current trends. The TOP 500 list\footnote{\url{http://www.top500.org.}} of supercomputers shows that there are some general noticeable hardware trends and gives indication of what to expect in the near future. First, since 2005 we have seen how power constraints and resulting heat dissipation problems forced chip producers to increase the number of cores rather than clock frequency. Multicore processors have become the new standard and many-core processors are becoming available as a standard in commodity hardware, from personal laptops to supercomputing clusters. This trend suggests that there will be less fast low-latency memory available per core in the future, favoring data-locality in algorithms. In addition, we have also seen how communication speed to computation speed ratio decreases, making it increasingly difficult to supply data to hungry floating point units. In addition, there will likely be increasing amounts of data to store as a result of increasing processing capabilities. The rapidly increasing floating point performance following Moore's law for transistor production has resulted in a significant memory gap which leaves most scientific applications based on partial differential equations (PDEs) bandwidth bound rather than compute bound. This trend is driven by pure commercial needs and not the needs of high-performance computing.
 Roads to better performance include standardization of software infrastructure, rethinking algorithms to better exploit memory hierarchies optimally to boost strong scaling properties, increasing locality in algorithms, and introducing as much concurrency and work as possible to both utilize and exploit the many cores. Also, software that can utilize many cores should be fault-tolerant to maximize time to solution for application users. We should also expect to see multiple layers of parallelism that will have to be exploited and possibly autotuned to optimally utilize available hardware resources. This introduces new challenges in compilers, requires programming experts with hardware knowledge, and introduces new trends in software developments to leverage productivity and utilize available computational resources in more optimal ways. We have observed a fundamental paradigm shift of underlying hardware design towards much more heterogeneity and parallelism.
-A key problem is that improvements in performance require porting legacy codes\footnote{In the worst case, a legacy code is an undocumented serial code developed a long time ago by a developer no longer around.} to new hardware, and possibly changing algorithms which have been developed for the conventional single core processors decades ago. Without this, it may be impossible to utilize and scale algorithmic work optimally to achieve high performance on modern and emerging hardware. This problem is currently addressed with rapid progress by researchers and industry by development of new optimized libraries that can utilize such new hardware. While we have seen significant improvements in such efforts, and today see much more rapid development of applications, there are still relatively few scientific applications running entirely on heterogenous hardware.
+A key problem is that improvements in performance require porting legacy codes\footnote{In the worst case, a legacy code is an undocumented serial code developed a long time ago by a developer no longer around.} to new hardware, and possibly changing algorithms which have been developed for the conventional single core processors decades ago. Without this, it may be impossible to utilize and scale algorithmic work optimally to achieve high performance on modern and emerging hardware. This problem is currently addressed with rapid progress by researchers and industry by development of new optimized libraries that can utilize such new hardware. While we have seen significant improvements in such efforts, and today see much more rapid development of applications, there are still relatively few scientific applications running entirely on heterogeneous hardware.
 %The main justification for porting or developing application on such hardware is a significant performance band expected (significant)
@@ -266,7 +266,7 @@ where $S$ is a structural surface.
 \subsection{Finite difference approximations}\index{finite difference}
-The numerical scheme is implemented as a flexible-order finite difference collocation scheme where all finite difference approximations of derivatives are constructed from one-dimensional approximations in a standard way, each having the maximum possible accuracy. In explicit numerical schemes, finite difference approximations can be implemented using a matrix-free technique to exploit that only a few different stencils are in fact needed. This can significantly reduce memory requirements of the implemented model by exploiting that the same small set of stencils can be reused. See Chapter \ref{ch5} for more details about matrix-free stencil operations supported in our in-house library for heterogenous and massively parallel computing using GPUs.
+The numerical scheme is implemented as a flexible-order finite difference collocation scheme where all finite difference approximations of derivatives are constructed from one-dimensional approximations in a standard way, each having the maximum possible accuracy. In explicit numerical schemes, finite difference approximations can be implemented using a matrix-free technique to exploit that only a few different stencils are in fact needed. This can significantly reduce memory requirements of the implemented model by exploiting that the same small set of stencils can be reused. See Chapter \ref{ch5} for more details about matrix-free stencil operations supported in our in-house library for heterogeneous and massively parallel computing using GPUs.
 %\newpage
 \subsection{Time integration}\index{time integration}
@@ -429,7 +429,7 @@ where $\Phi^{[k]},{\boldsymbol \delta}^{[k]},{\bf r}^{[k]}\in\mathbb{R}^n$ are t
 \subsection{Distributed computations via domain decomposition}\label{ch7:sec:dd}\index{domain decomposition}
-Numerical modeling of large ocean areas to account for nonlinear wave-wave interactions and wave-structure interactions requires large degrees of spatial resolution, significant computational resources, and parallel computations to be practical. The recent generations of programmable GPUs are heavily optimized for on-chip bandwidth performance but not capacity. This implies that for the solution of large-scale PDE problems, distributed computing on multiple GPU devices is required due to limited capacity in the global memory space of current GPUs. Via a combination of MPI and CUDA we have recently demonstrated how both small and large systems can be solved efficiently by heterogenous computations using a data domain decomposition technique in parallel \cite{ch7:GlimbergEtAl2012}. The idea is to distribute the computational tasks to multiple GPUs, to enable reduced computational times and increased problem sizes. A homogenous partitioning of the data is used to ensure that the load balance across the GPUs is uniform. Data distribution and message passing introduce a data transfer bottleneck in the form of the Peripheral Component Interconnect express (PCIe) link and network interconnection. Thus, if the computational intensity of the local problem is not large enough to enable sufficient latency hiding of this bottleneck, the whole application is likely to be (severely) limited by the PCIe link or network bandwidth performance rather than the high on-chip bandwidth of the individual GPUs.
+Numerical modeling of large ocean areas to account for nonlinear wave-wave interactions and wave-structure interactions requires large degrees of spatial resolution, significant computational resources, and parallel computations to be practical. The recent generations of programmable GPUs are heavily optimized for on-chip bandwidth performance but not capacity. This implies that for the solution of large-scale PDE problems, distributed computing on multiple GPU devices is required due to limited capacity in the global memory space of current GPUs. Via a combination of MPI and CUDA we have recently demonstrated how both small and large systems can be solved efficiently by heterogeneous computations using a data domain decomposition technique in parallel \cite{ch7:GlimbergEtAl2012}. The idea is to distribute the computational tasks to multiple GPUs, to enable reduced computational times and increased problem sizes. A homogenous partitioning of the data is used to ensure that the load balance across the GPUs is uniform. Data distribution and message passing introduce a data transfer bottleneck in the form of the Peripheral Component Interconnect express (PCIe) link and network interconnection. Thus, if the computational intensity of the local problem is not large enough to enable sufficient latency hiding of this bottleneck, the whole application is likely to be (severely) limited by the PCIe link or network bandwidth performance rather than the high on-chip bandwidth of the individual GPUs.
 The ratio between necessary data transfers and computational work for the proposed numerical model for free surface water waves is high enough to expect reasonable latency hiding. The data domain decomposition method consists of a logically structured division of the computational domain into multiple subdomains. Each of these subdomains is connected via fictitious ghost layers at the artificial boundaries of width corresponding to the half-width of the finite difference stencils employed. This results in a favorable volume-to-boundary ratio as the problem size increases, and diminishing communication overhead for message passing. Information between subdomains is exchanged through ghost layers at every step of the iterative PDC method, in connection with the matrix-vector evaluation for the $\sigma$-transformed Laplace problem, and before relaxation steps in the multigrid method. A single global synchronization point occurs at most once each iteration, if convergence is monitored, where a global reduction step (inner product) between all processor nodes takes place. The main advantage of this decomposition strategy is that the decomposition into multiple subdomains is straightforward. However, it comes with the cost of extra data transfers to update the set of fictitious ghost layers.
@@ -444,7 +444,7 @@ The ratio between necessary data transfers and computational work for the propos
 {\scriptsize\input{Chapters/chapter7/figures/TeslaK20SpeedupGPUvsCPU3D.tikz}}
 }
 \end{center}
-  \caption[Performance timings per PDC iteration as a function of increasing problem size $N$, for single, mixed, and double precision arithmetics.]{Performance timings per PDC iteration as a function of increasing problem size $N$, for single, mixed, and double precision arithmetics. Three-dimensional nonlinear waves, using sixth order finite difference approximations, preconditioned with one multigrid V-cycle and with one pre- and post- red-black Gauss-Seidel smoothing operation. Speedup compared to fastest known serial implementation. Using test environment 4, CPU timings represent starting points for our investigations and have been obtained using the Fortran 90 code. These references results are based on a single-core (non-parallel) run on a Intel Core i7, 2.80GHz processor.}\label{ch7:fig:perftimings}
+  \caption[Performance timings per PDC iteration as a function of increasing problem size $N$, for single, mixed, and double precision arithmetics.]{Performance timings per PDC iteration as a function of increasing problem size $N$, for single, mixed, and double precision arithmetics. Three-dimensional nonlinear waves, using sixth order finite difference approximations, preconditioned with one multigrid V-cycle and with one pre- and post- red-black Gauss-Seidel smoothing operation. Speedup compared to fastest known serial implementation. Using test environment 4, CPU timings represent starting points for our investigations and have been obtained using the Fortran 90 code. These references results are based on a single-core (nonparallel) run on a Intel Core i7, 2.80GHz processor.}\label{ch7:fig:perftimings}
 \end{figure}
 The parallel domain decomposition solver has been validated against the sequential solvers with respect to algorithmic efficiency to establish that the code produces correct results. An analysis of the numerical efficiency has also been carried out on different GPU systems to identify comparative behaviors as both the problems sizes and number of compute nodes vary. For example, performance scalings on Test environment 1 and Test environment 3 are presented in Figure \ref{ch7:fig:multigpuperformance}. The figure confirms that there is only a limited benefit from using multiple GPUs for small problem sizes, since the computational intensity is simply too low to efficiently hide the latency of message passing. A substantial speedup is achieved compared to the single GPU version, while being able to solve even larger systems.
@@ -469,7 +469,7 @@ Future work involves extending the domain decomposition method to include suppor
 \subsection{Assembling the wave model from library components}
-It is described in Chapter \ref{ch5} how we have developed a heterogenous library has for fast prototyping of PDE solvers, utilizing the massively parallel architecture of CUDA-enabled GPUs. The objective is to provide a set of generic components within a single framework, such that software developers can assemble application-specific solvers efficiently at a high abstraction level, requiring a minimum of CUDA specific kernel implementations and parameter tuning.
+It is described in Chapter \ref{ch5} how we have developed a heterogeneous library has for fast prototyping of PDE solvers, utilizing the massively parallel architecture of CUDA-enabled GPUs. The objective is to provide a set of generic components within a single framework, such that software developers can assemble application-specific solvers efficiently at a high abstraction level, requiring a minimum of CUDA specific kernel implementations and parameter tuning.
 The CUDA-based numerical wave model has been developed based on all the numerical techniques described in preceding sections. These techniques are a part of the library implemented as generic components, which makes them useful for the numerical solutions of PDE systems. The components of the numeral model as described in Section \ref{ch7:sec:nummodel} include an ERK4 time integrator, flexible-order finite difference approximations on regular grids, and an iterative multigrid PDC solver for the $\sigma$-transformed Laplace equation \eqref{ch7:TransformedLaplace}. Application developers either either can use these components directly from the library or are free to combine their own implementations with library components, if they need alternative strategies that are not present in the library.
@@ -844,7 +844,7 @@ Ideally, the ratio $\mathcal{C}_\mathcal{G}/\mathcal{C}_\mathcal{F}$ is small an
 % What did we do and what are the results
 We have performed a scalability study for parareal using 2D nonlinear stream function waves based on a discretization with $(N_x,N_z)=(33,9)$ collocation points, cf. Figure \ref{ch7:fig:DDPA_SPEEDUP}. The study shows that moderate speedup is possible for this hyperbolic system. Using four GPU nodes, a speedup of slightly more than two was achieved while using sixteen GPU nodes resulted in a speedup of slightly less than five. As noticed in Figure \ref{ch7:fig:DDPA_SPEEDUP}, parallel efficiency decreases quite fast when using more GPUs. This limitation is due to the usages of a fairly slow and accurate coarse propagator and is linked to a known difficulty with parareal applied to hyperbolic systems. For hyperbolic systems, instabilities tend to arise when using a very inaccurate coarse propagator. This prevents using a large number of time subdomains, as this by Amdahl's law also requires a very fast coarse propagator. The numbers are still impressive though, considering that the speedup due to parareal comes as additional speedup to an already efficient and fast code.
-Performance results for the Whalin test case are also shown in Figure \ref{ch7:fig:whalinparareal}. There is a natural limitation to how much we can increase $R$ (the ratio between the complexity of the fine and coarse propagators), because of stability issues with the coarse propagator. In this test case we simulate from $t=[0,1]$s, using up to $32$ GPUs. For low $R$ and only two GPUs, there is no speedup gain, but for the configuration with eight or more GPUs and $R\geq6$, we are able to get more than $2$ times speedup. Though these hyperbolic systems are not optimal for performance tuning using the parareal method, results still confirm that reasonable speedups are in fact possible on heterogenous systems.
+Performance results for the Whalin test case are also shown in Figure \ref{ch7:fig:whalinparareal}. There is a natural limitation to how much we can increase $R$ (the ratio between the complexity of the fine and coarse propagators), because of stability issues with the coarse propagator. In this test case we simulate from $t=[0,1]$s, using up to $32$ GPUs. For low $R$ and only two GPUs, there is no speedup gain, but for the configuration with eight or more GPUs and $R\geq6$, we are able to get more than $2$ times speedup. Though these hyperbolic systems are not optimal for performance tuning using the parareal method, results still confirm that reasonable speedups are in fact possible on heterogeneous systems.
 \begin{figure}[!htb]
 \setlength\figureheight{0.29\textwidth}
@@ -922,7 +922,7 @@ The modified numerical model can still be based on flexible-order finite differe
 %\newpage
 \section{Conclusion and future work}
-We have presented implementation details together with several novel results on development of a new massively parallel and scalable tool for simulation of nonlinear free surface water waves on heterogenous hardware. The tool is based on the unified potential flow model referred to as OceanWave3D \cite{ch7:EBL08} which provides the basis for efficient and scalable simulation of water waves over uneven bottoms on arbitrary domain sizes. We have demonstrated in a few examples how we can accelerate performance by using single-precision math without compromising accuracy. We have shown that performance can be accelerated by introducing concurrency in the time integration using the parareal algorithm and for the first time in a heterogenous setup based on the use of multiple GPUs. Interestingly, we find that parallel computations using parareal may be more efficient than using conventional data-parallel distributed computations in a multi-GPU setup for moderate problem sizes. We have measured absolute performance and scalability using several of the most recent generations of NVIDIA GPUs to detail the efficiency of the current code. This is useful to predict time to results as explained in \cite{ch7:EngsigKarupEtAl2011} and may be compared against other wave models in fair comparisons.
+We have presented implementation details together with several novel results on development of a new massively parallel and scalable tool for simulation of nonlinear free surface water waves on heterogeneous hardware. The tool is based on the unified potential flow model referred to as OceanWave3D \cite{ch7:EBL08} which provides the basis for efficient and scalable simulation of water waves over uneven bottoms on arbitrary domain sizes. We have demonstrated in a few examples how we can accelerate performance by using single-precision math without compromising accuracy. We have shown that performance can be accelerated by introducing concurrency in the time integration using the parareal algorithm and for the first time in a heterogeneous setup based on the use of multiple GPUs. Interestingly, we find that parallel computations using parareal may be more efficient than using conventional data-parallel distributed computations in a multi-GPU setup for moderate problem sizes. We have measured absolute performance and scalability using several of the most recent generations of NVIDIA GPUs to detail the efficiency of the current code. This is useful to predict time to results as explained in \cite{ch7:EngsigKarupEtAl2011} and may be compared against other wave models in fair comparisons.
 Work in progress focuses on extending the governing equations to account for lack of physics such as wave runup and wave breaking. Also, we plan to extend the domain decomposition method to unstructured grids of blocks that can be boundary-fitted to more general bottom-mounted structures to be able to address wave-structure problems, cf. \cite{ch7:EHBM06,ch7:EHBW08}. For example, this will provide the basis for simulations of wave transformations in large harbor areas or predict wave climates in near-coastal areas.
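
The chapter 7 hunks above repeatedly refer to the iterative Preconditioned Defect Correction (PDC) solve of the $\sigma$-transformed Laplace problem. As a rough, illustrative sketch only (not the chapters' actual library code or API), a minimal host-side C++ defect-correction loop could look as follows; the names pdc_solve, A, and Minv are hypothetical, and the matrix-free operator apply and the multigrid V-cycle preconditioner are assumed to be supplied as caller-provided callbacks.

// Minimal illustrative sketch of a preconditioned defect correction (PDC)
// iteration of the kind described in chapter 7. The operator apply (A) and
// the preconditioner (Minv, e.g., one geometric multigrid V-cycle) are
// hypothetical callbacks here, not the book's library interfaces.
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
using Apply = std::function<void(const Vec&, Vec&)>;  // y = op(x), matrix-free

static double norm2(const Vec& v) {
    double s = 0.0;
    for (double x : v) s += x * x;
    return std::sqrt(s);
}

// Approximately solve A x = b: r = b - A x; delta = Minv(r); x += delta.
// Returns the number of iterations performed.
int pdc_solve(const Apply& A, const Apply& Minv, const Vec& b, Vec& x,
              double rel_tol, int max_iter) {
    const std::size_t n = b.size();
    Vec Ax(n), r(n), delta(n);
    const double bnorm = norm2(b);
    for (int k = 0; k < max_iter; ++k) {
        A(x, Ax);                                                  // matrix-free operator evaluation
        for (std::size_t i = 0; i < n; ++i) r[i] = b[i] - Ax[i];   // residual (defect)
        if (norm2(r) <= rel_tol * bnorm) return k;                 // relative-residual stopping test
        Minv(r, delta);                                            // preconditioner, e.g., one V-cycle
        for (std::size_t i = 0; i < n; ++i) x[i] += delta[i];      // defect correction update
    }
    return max_iter;
}

In the distributed setup described in the chapter 7 text, the operator evaluation A(x, Ax) is where ghost-layer exchanges between subdomains would take place, and the residual-norm check corresponds to the single global reduction per iteration mentioned there.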