-In the GPU, the schduler assigns the execution of this loop to a group of threads organised as a grid of blocks with block containing a number of threads. All threads within a block are executed concurrently in parallel. The instructions run on the GPU are grouped in special function called kernels. It's up to the programmer, to describe the execution context, that is the size of the Grid, the number of blocks and the number of threads per block upon the call of a given kernel, according to a special syntax defined by CUDA.
+In the GPU, the scheduler assigns the execution of this loop to a
+group of threads organised as a grid of blocks, with each block
+containing a number of threads. All threads within a block execute
+concurrently. The instructions run on the GPU are grouped in special
+functions called kernels. With CUDA, the programmer must describe the
+kernel execution context when calling the kernel: the size of the
+grid, i.e. the number of blocks, and the number of threads per block.
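A minimal sketch of such a kernel and its execution context, assuming the loop is an element-wise vector addition (the kernel name `addVectors`, its arguments, and the sizes chosen here are illustrative, not taken from the text):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: each thread handles one iteration of the loop.
__global__ void addVectors(const float *a, const float *b, float *c, int n)
{
    // Global index computed from the block and thread coordinates.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // guard: the grid may overshoot n
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // CUDA's execution-configuration syntax: the triple chevrons give
    // the grid size (number of blocks) and the threads per block.
    int threadsPerBlock = 256;
    int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;
    addVectors<<<blocksPerGrid, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The grid size is rounded up so that every element is covered even when `n` is not a multiple of the block size; the `i < n` guard discards the surplus threads.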