As GPUs have their own memory, the first step consists of allocating memory on
the GPU. A call to \texttt{cudaMalloc}\index{CUDA functions!cudaMalloc}
allocates memory on the GPU. The first parameter of this function is a pointer
to memory on the device, i.e., the GPU. The second parameter represents the
size of the allocated variables; this size is expressed in bytes.
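As a minimal sketch of this allocation step (the variable names here are illustrative, not those of the chapter's listing), allocating an array of \texttt{n} floats on the device might look like:

\begin{lstlisting}
float *d_A;                        /* pointer to device (GPU) memory */
int n = 1024;                      /* illustrative array length */
size_t size = n * sizeof(float);   /* size in bytes, not bits */
cudaError_t err = cudaMalloc((void **)&d_A, size);
if (err != cudaSuccess) {
  /* handle the allocation failure */
}
/* ... use d_A in kernels ... */
cudaFree(d_A);                     /* release the device memory */
\end{lstlisting}

Checking the returned \texttt{cudaError\_t} is good practice, since \texttt{cudaMalloc} fails silently otherwise when the device runs out of memory.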
\pagebreak
\lstinputlisting[label=ch2:lst:ex1,caption=simple example]{Chapters/chapter2/ex1.cu}
\pagebreak
\section{Second example: using CUBLAS \index{CUBLAS}}
\label{ch2:2ex}