\{\texttt{lilia.ziane\_khoja},~\texttt{raphael.couturier},~\texttt{arnaud.giersch},~\texttt{jacques.bahi}\}\texttt{@univ-fcomte.fr}
}
\newcommand{\Iter}{\mathit{iter}}
\newcommand{\Max}{\mathit{max}}
\newcommand{\Offset}{\mathit{offset}}
which its local sub-matrix has nonzero values. Consequently, each computing node manages a global
vector composed of a local vector of size $\frac{n}{p}$ and a shared vector of size $S$:
\begin{equation}
S = bw - \frac{n}{p},
\end{equation}
where $bw$ is the bandwidth of its local sparse
sub-matrix which represents the number of columns between the minimum and the maximum column indices
(see Figure~\ref{fig:01}). To improve memory accesses, we cache the elements of the
global vector in the GPU texture memory.
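As a concrete illustration (a sketch, not the authors' code), the shared-vector size of one
computing node can be derived from the column span of its local sub-matrix. The helper below
assumes a CSR-like array of column indices for the local nonzeros; the name
\texttt{shared\_size} is hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper: given the column indices of the nonzeros of one
 * node's local sub-matrix, compute the bandwidth bw (number of columns
 * between the minimum and maximum column indices, inclusive) and return
 * the shared-vector size S = bw - n/p, where local_n = n/p is the size
 * of the local vector.  Assumes nnz > 0 and bw >= local_n (the column
 * span covers at least the diagonal block). */
static size_t shared_size(const int *col_idx, size_t nnz, size_t local_n)
{
    int min_col = col_idx[0], max_col = col_idx[0];
    for (size_t k = 1; k < nnz; k++) {
        if (col_idx[k] < min_col) min_col = col_idx[k];
        if (col_idx[k] > max_col) max_col = col_idx[k];
    }
    size_t bw = (size_t)(max_col - min_col + 1); /* bandwidth of the sub-matrix */
    return bw - local_n;                         /* S = bw - n/p */
}
```

For example, a node whose local nonzeros touch columns 2 through 9 has $bw = 8$; with a local
vector of size $n/p = 4$, the shared vector holds the remaining $S = 4$ elements, which are the
ones worth caching in texture memory since they are read but not owned locally.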