the computations~\cite{ChVCV13,Hoefler08a}. So, the logical and classical way
to implement such an overlap is to use three threads: one for
computing, one for sending, and one for receiving. Moreover, since
the communication is performed by threads, blocking synchronous communications\index{MPI!blocking}\index{MPI!synchronous}
can be used without deteriorating the overall performance.
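As an illustration, the following sketch shows how such a three-thread organization could be set up with pthreads. It is only a minimal skeleton, assuming an MPI library that provides the \texttt{MPI\_THREAD\_MULTIPLE} level; the thread bodies are placeholders, not the actual application code.

// Minimal sketch of the three-thread organization (illustrative only;
// thread bodies are placeholders for the actual application code).
#include <mpi.h>
#include <pthread.h>
#include <stdlib.h>

void *computingThread(void *arg) { /* iterative updates */ return NULL; }
void *sendingThread(void *arg)   { /* blocking synchronous sends (MPI_Ssend) */ return NULL; }
void *receivingThread(void *arg) { /* blocking receives (MPI_Recv) */ return NULL; }

int main(int argc, char *argv[]) {
  int provided;
  // Full multithread support is required since several threads call MPI
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  if (provided < MPI_THREAD_MULTIPLE)
    MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);

  pthread_t tComp, tSend, tRecv;
  pthread_create(&tComp, NULL, computingThread, NULL);
  pthread_create(&tSend, NULL, sendingThread, NULL);
  pthread_create(&tRecv, NULL, receivingThread, NULL);

  pthread_join(tComp, NULL);
  pthread_join(tSend, NULL);
  pthread_join(tRecv, NULL);

  MPI_Finalize();
  return EXIT_SUCCESS;
}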
In this basic version, the termination\index{termination} detection of the global process is
performed at the end of the entire process. Line~23 directly updates the
number of other nodes that have reached local convergence by adding the
received state of the source node. This is possible due to the encoding used to
represent local convergence (1) and nonconvergence (0).
%\begin{algorithm}[H]
% \caption{Reception function in the synchronized scheme.}
case tagState: // Management of local state messages
// Actual reception of the message
MPI_Recv(&recvdState, 1, MPI_CHAR, status.MPI_SOURCE, tagState, MPI_COMM_WORLD, &status);
// Update the numbers of stabilized nodes and received state msgs
nbOtherCVs += recvdState;
nbStateMsg++;
// Unlocking of the computing thread when states of all other
// nodes have been received
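// (Illustrative completion, not the chapter's actual code: the total
// number of processes "nbProcs" and the condition variable
// "stateReceivedCond" are assumed names.)
if (nbStateMsg == nbProcs - 1) {
  pthread_cond_signal(&stateReceivedCond); // wake up the computing thread
}
break;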
With this overlap of computations and communications, the iterative process
should be accelerated a little bit.
We compare the performance obtained with overlapped Jacobian updates and
nonoverlapped ones for several problem sizes (see~\Fig{fig:ch6p2aux}).
\begin{figure}[h]
\centering
\includegraphics[width=.75\columnwidth]{Chapters/chapter6/curves/recouvs.pdf}