From: couturie
Date: Tue, 16 Jul 2013 18:05:41 +0000 (+0200)
Subject: new
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_gpu.git/commitdiff_plain/453cfeb12ac41d9063d5fb8f41ee725905b3adec

new
---

diff --git a/BookGPU/Chapters/chapter1/ch1.tex b/BookGPU/Chapters/chapter1/ch1.tex
index 17a7e47..200efb1 100755
--- a/BookGPU/Chapters/chapter1/ch1.tex
+++ b/BookGPU/Chapters/chapter1/ch1.tex
@@ -1,7 +1,7 @@
 \chapterauthor{Raphaël Couturier}{Femto-ST Institute, University of Franche-Comte, France}
 
-\chapter{Presentation of the GPU architecture and of the Cuda environment}
+\chapter{Presentation of the GPU architecture and of the CUDA environment}
 \label{chapter1}
 
 \section{Introduction}\label{ch1:intro}
 
@@ -9,7 +9,7 @@
 This chapter introduces the Graphics Processing Unit (GPU) architecture and
 all the concepts needed to understand how GPUs work and can be used to speed up
 the execution of some algorithms. First of all this chapter gives a brief history of
-the development of Graphics card until they have been used in order to make
+the development of graphics cards until they have been used in order to make
 general purpose computation. Then the architecture of a GPU is illustrated.
 There are many fundamental differences between a GPU and a tradition
 processor. In order to benefit from the power of a GPU, a Cuda
@@ -18,7 +18,7 @@
 Cuda model to be efficient and scalable when some constraints are addressed.
 
 
-\section{Brief history of Video Card}
+\section{Brief history of video cards}
 
 Video cards or Graphics cards have been introduced in personal computers to
 produce high quality graphics faster than classical Central Processing Units
@@ -174,7 +174,7 @@
 Task parallelism is the common parallelism achieved out on clusters and grids and
 high performance architectures where different tasks are executed by different
 computing units.
 
-\section{Cuda Multithreading}
+\section{CUDA multithreading}
 The data parallelism of Cuda is more precisely based on the Single Instruction
 Multiple Thread (SIMT) model. This is due to the fact that a programmer accesses
diff --git a/BookGPU/frontmatter/preface.tex b/BookGPU/frontmatter/preface.tex
index a2bd0f2..e36a430 100644
--- a/BookGPU/frontmatter/preface.tex
+++ b/BookGPU/frontmatter/preface.tex
@@ -1,39 +1,40 @@
+\chapter{Preface}
+
 This book is intended to present the design of significant scientific
 applications on GPUs. Scientific applications require more and more
-computational power in a large vaariety of fields: biology, physics,
-chemisty, phenomon model and prediction, simulation, mathematics, ...
+computational power in a large variety of fields: biology, physics,
+chemistry, phenomenon modeling and prediction, simulation, mathematics, etc.
 In order to be able to handle more complex applications, the use of
 parallel architectures is the solution to decrease the execution
 times of theses applications. Using simulataneously many computing
 cores can significantly speed up the processing time.
 
-Nevertheless using parallel architectures is not so easy and has
-always required an endeavor to parallelize an application. Nowadays
-with general purpose graphics processing units (GPGPU), it is possible
-to use either general graphic cards or dedicated graphic cards to benefit from
-the computational power of all the cores available inside these
-cards. The NVidia company introduced CUDA in 2007 to unify the
-programming model to use their video card. CUDA is currently the most
-used environment for designing GPU applications although some
-alternatives are available, for example OpenCL. According to
-applications and the GPU considered, a speed up from 5 up to 50 or even more can be
-expected using a GPU instead of computing with a CPU.
+Nevertheless, using parallel architectures is not so easy and has always required
+an endeavor to parallelize an application. Nowadays with general purpose
+graphics processing units (GPGPU), it is possible to use either general graphics
+cards or dedicated graphics cards to benefit from the computational power of all
+the cores available inside these cards. The NVidia company introduced Compute
+Unified Device Architecture (CUDA) in 2007 to unify the programming model to use
+their video cards. CUDA is currently the most used environment for designing GPU
+applications, although some alternatives are available, for example,
+Open Computing Language (OpenCL). Depending on the application and the GPU
+considered, a speedup of 5 up to 50, or even more, can be expected using a GPU
+over computing with a CPU.
 
 The programming model of GPU is quite different from the one of
 CPU. It is well adapted to data parallelism applications. Several
 books present the CUDA programming models and multi-core applications
 design. This book is only focused on scientific applications on GPUs. It
-contains 19 chapters gathered in 5 parts.
+contains 20 chapters gathered in 6 parts.
 
 The first part presents the GPUs. The second part focuses on two
 significant image processing applications on GPUs. Part three
 presents two general methodologies for software development on GPUs. Part four
-describes three optmitization problems on GPUs. The fifth part, the
-longuest one, presents 7 numerical applications. Finally part six
-illustrates 3 other applications that are not included in the previous
+describes three optimization problems on GPUs. The fifth part, the
+longest one, presents seven numerical applications. Finally, part six
+illustrates three other applications that are not included in the previous
 parts.
 
 Some codes presented in this book are available online on my webpage:
-http://members.femto-st.fr/raphael-couturier/gpu-book/.
+http://members.femto-st.fr/raphael-couturier/gpu-book/
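
The chapter 1 hunk above refers to CUDA's Single Instruction Multiple Thread (SIMT) model, in which every thread executes the same kernel code on its own data element. As a minimal sketch of that idea, not taken from the book's sources and with illustrative names (vecAdd, n), a vector-addition kernel looks like this:

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Each thread computes one element: same instructions, different data (SIMT).
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // threads launched past the end of the array do nothing
        c[i] = a[i] + b[i];
}

// Host-side launch: enough 256-thread blocks to cover the n elements, e.g.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

The guard on i is the usual way to handle array sizes that are not a multiple of the block size.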