-based on the feedback from several research works involving GPUs, either alone
-in a single machine or in a cluster of machines. Indeed, our past
-collaborations with industries have allowed us to point out that in their
-economical context, they can adopt a parallel technology only if its
-implementation and maintenance costs are small according to the potential
-benefits (performance, accuracy,...). So, in such contexts, GPU programming is
-still regarded with some distance according to its specific field of
-applicability (SIMD/SIMT model) and its still higher programming complexity and
-maintenance. In the academic domain, things are a bit different but studies for
-efficiently integrating GPU computations in multi-core clusters with maximal
-overlapping of computations with communications and/or other computations, are
-still rare.
+based on the feedback from several research works involving GPUs, either in a
+single machine or in a cluster of machines. Indeed, our past collaborations
+with industry have shown that, in their economic context, companies can adopt
+a parallel technology only if its implementation and maintenance costs are
+small compared with the potential benefits (performance, accuracy, etc.). So,
+in such contexts, GPU programming is still regarded with some reluctance, due
+to its specific field of applicability (the SIMD/SIMT model: Single
+Instruction Multiple Data/Threads) and its still higher programming and
+maintenance complexity. In the academic domain, things are a bit different,
+but studies on efficiently integrating GPU computations in multicore clusters,
+with maximal overlapping of computations with communications and/or other
+computations, are still rare.