X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_gpu.git/blobdiff_plain/17bff40b83bcdcc39769f9e59c70ffae1c525b72..ecd01808b5702d940bd77107a2bf829d3832179b:/BookGPU/Chapters/chapter13/ch13.tex

diff --git a/BookGPU/Chapters/chapter13/ch13.tex b/BookGPU/Chapters/chapter13/ch13.tex
index 90dd946..b60105d 100755
--- a/BookGPU/Chapters/chapter13/ch13.tex
+++ b/BookGPU/Chapters/chapter13/ch13.tex
@@ -322,7 +322,7 @@ each component of the vector must be relaxed an infinite number of times. The ch
 relaxed components to be used in the computational process may be guided by any
 criterion, and in particular, a natural criterion is to pick up the most recently
 available values of the components computed by the processors. Furthermore, the asynchronous
-iterations are implemented by means of nonblocking MPI communication subroutines\index{MPI subroutines!nonblocking}
+iterations are implemented by means of nonblocking MPI communication subroutines\index{MPI!nonblocking}
 (asynchronous communications).

 The important property ensuring the convergence of the parallel projected Richardson
@@ -486,13 +486,13 @@ in the synchronous algorithm and the asynchronous ones: \verb+cublasSetVectorAsy
 and \verb+cublasGetVectorAsync()+ in the asynchronous algorithm. Moreover, we
 use the communication routines of the MPI library to carry out the data exchanges
 between the neighboring nodes. We use the following communication routines: \verb+MPI_Isend()+
-and \verb+MPI_Irecv()+ to perform nonblocking\index{MPI subroutines!nonblocking}
+and \verb+MPI_Irecv()+ to perform nonblocking\index{MPI!nonblocking}
 sends and receives, respectively. For the synchronous algorithm, we use the MPI
 routine \verb+MPI_Waitall()+, which puts the MPI process of a computing node in
 blocking status until all data exchanges with neighboring nodes (sends and receives)
 are completed. In contrast, for the asynchronous algorithms, we use the MPI routine
 \verb+MPI_Test()+, which tests the completion of a data exchange (send or receive)
-without putting the MPI process in blocking status\index{MPI subroutines!blocking}.
+without putting the MPI process in blocking status\index{MPI!blocking}.

 The function $Compute\_New\_Vector\_Elements()$ (line~$6$ in Algorithm~\ref{ch13:alg:02})
 computes, at each iteration, the new elements of the iterate vector $U$. Its general code
@@ -598,7 +598,7 @@ AllReduce(error,\hspace{0.1cm}maxerror,\hspace{0.1cm}MAX); \\
 conv \leftarrow true;
 \end{array}
 $$
-where the function $AllReduce()$ uses the MPI global reduction subroutine\index{MPI subroutines!global}
+where the function $AllReduce()$ uses the MPI global reduction subroutine\index{MPI!global}
 \verb+MPI_Allreduce()+ to compute the maximal value, $maxerror$, among the local
 absolute errors, $error$, of all computing nodes, and $p$ (in Algorithm~\ref{ch13:alg:02})
 is used as a counter of the local relaxations carried out by a computing node. In
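
The second hunk describes the communication pattern in prose: both variants post their
exchanges with MPI_Isend()/MPI_Irecv(), and differ only in how completion is handled,
MPI_Waitall() blocking until every exchange finishes versus MPI_Test() probing without
blocking. A minimal C sketch of that pattern follows; every identifier in it
(exchange_boundaries, MAX_NEIGHBORS, the per-neighbor buffer layout, the asynchronous
flag) is an illustrative assumption, and only the four MPI routines come from the excerpt.

    #include <mpi.h>

    #define MAX_NEIGHBORS 8   /* illustrative bound on the number of neighbors */

    /* Post nonblocking sends and receives toward every neighboring node, then
     * either block until completion (synchronous scheme, MPI_Waitall) or only
     * test completion and return immediately (asynchronous scheme, MPI_Test). */
    void exchange_boundaries(double *sendbuf, double *recvbuf, int count,
                             const int *neighbors, int nb_neighbors,
                             int asynchronous)
    {
        MPI_Request reqs[2 * MAX_NEIGHBORS];   /* assumes nb_neighbors <= MAX_NEIGHBORS */
        int i, done;

        for (i = 0; i < nb_neighbors; i++) {
            MPI_Irecv(recvbuf + i * count, count, MPI_DOUBLE, neighbors[i],
                      0, MPI_COMM_WORLD, &reqs[2 * i]);
            MPI_Isend(sendbuf + i * count, count, MPI_DOUBLE, neighbors[i],
                      0, MPI_COMM_WORLD, &reqs[2 * i + 1]);
        }

        if (!asynchronous) {
            /* Synchronous algorithm: the process stays in blocking status
             * until all exchanges with the neighbors are completed. */
            MPI_Waitall(2 * nb_neighbors, reqs, MPI_STATUSES_IGNORE);
        } else {
            /* Asynchronous algorithm: test each exchange without blocking;
             * the relaxations go on with the most recent values received. */
            for (i = 0; i < 2 * nb_neighbors; i++)
                MPI_Test(&reqs[i], &done, MPI_STATUS_IGNORE);
        }
    }

Sharing one request array between the two branches keeps the posting loop identical, so
the synchronous and asynchronous variants differ only in the completion step, exactly the
distinction the excerpt draws.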
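The same hunk mentions cublasSetVectorAsync()/cublasGetVectorAsync() for the CPU-GPU
transfers of the asynchronous algorithm. These cuBLAS routines copy a vector between host
and device on a CUDA stream, so the copies can overlap with kernel execution. A minimal
sketch, assuming illustrative buffer names and a caller-managed stream:

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    /* Copy boundary data between host and device on a CUDA stream, as the
     * asynchronous variant does with cublasSetVectorAsync()/cublasGetVectorAsync().
     * All buffer names here are illustrative, not taken from the chapter. */
    void copy_boundaries(int n, const double *host_recv, double *dev_recv,
                         const double *dev_send, double *host_send,
                         cudaStream_t stream)
    {
        /* host -> device: values just received from the neighboring nodes */
        cublasSetVectorAsync(n, sizeof(double), host_recv, 1, dev_recv, 1, stream);
        /* device -> host: local boundary values to be sent to the neighbors */
        cublasGetVectorAsync(n, sizeof(double), dev_send, 1, host_send, 1, stream);
    }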
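Finally, the third hunk explains that $AllReduce()$ wraps MPI_Allreduce() to compute the
maximum of the local absolute errors over all computing nodes. A minimal sketch of that
convergence test; the helper name convergence_reached and the tolerance parameter eps are
hypothetical:

    #include <mpi.h>

    /* Global convergence test: reduce the local absolute errors of all
     * computing nodes to their maximum, as AllReduce() in the excerpt does
     * with MPI_Allreduce(). The tolerance eps is an assumed parameter. */
    int convergence_reached(double error, double eps)
    {
        double maxerror;
        MPI_Allreduce(&error, &maxerror, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
        return maxerror < eps;   /* conv <- true once the global max error is small */
    }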