X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_gpu.git/blobdiff_plain/44b8f845847505b81dc0f1199c49e67a495ed7a0..6318153555fcb28c475d77850cce474032d79f5a:/BookGPU/Chapters/chapter6/Intro.tex?ds=sidebyside

diff --git a/BookGPU/Chapters/chapter6/Intro.tex b/BookGPU/Chapters/chapter6/Intro.tex
index 5a8d0e7..77b314d 100644
--- a/BookGPU/Chapters/chapter6/Intro.tex
+++ b/BookGPU/Chapters/chapter6/Intro.tex
@@ -1,28 +1,28 @@
 \section{Introduction}\label{ch6:intro}
-This chapter proposes to draw several development methodologies to obtain
+This chapter proposes to draw upon several development methodologies to obtain
 efficient codes in classical scientific applications. Those methodologies are
-based on the feedback from several research works involving GPUs, either alone
-in a single machine or in a cluster of machines. Indeed, our past
-collaborations with industries have allowed us to point out that in their
-economical context, they can adopt a parallel technology only if its
-implementation and maintenance costs are small according to the potential
-benefits (performance, accuracy,...). So, in such contexts, GPU programming is
-still regarded with some distance according to its specific field of
-applicability (SIMD/SIMT model) and its still higher programming complexity and
-maintenance. In the academic domain, things are a bit different but studies for
-efficiently integrating GPU computations in multi-core clusters with maximal
-overlapping of computations with communications and/or other computations, are
-still rare.
+based on the feedback from several research works involving GPUs, either in a
+single machine or in a cluster of machines. Indeed, our past collaborations
+with industries have allowed us to point out that in their economical context,
+they can adopt a parallel technology only if its implementation and maintenance
+costs are small compared with the potential benefits (performance,
+accuracy, etc.). So, in such contexts, GPU programming is still regarded with
+some distance due to its specific field of applicability (SIMD/SIMT model:
+Single Instruction Multiple Data/Thread) and its still higher programming
+complexity and maintenance. In the academic domain, things are a bit different,
+but studies for efficiently integrating GPU computations in multicore clusters
+with maximal overlapping of computations with communications and/or other
+computations are still rare.
 
-For these reasons, the major aim of that chapter is to propose as simple as
-possible general programming patterns that can be followed or adapted in
+For these reasons, the major aim of that chapter is to propose general
+programming patterns, as simple as possible, that can be followed or adapted in
 practical implementations of parallel scientific applications.
 % Also, according to our experience in industrial collaborations, we propose a
 % small prospect analysis about the perenity of such accelerators in the
 % middle/long term.
-Also, we propose in a third part, a prospect analysis together with a particular
-programming tool that is intended to ease multi-core GPU cluster programming.
+In addition, we propose a prospect analysis together with a particular
+programming tool that is intended to ease multicore GPU cluster programming.
 %%% Local Variables: