}
@Article{Ostrowski41,
- title = " On a Theorem by {J. L. Walsh} Concerning the Moduli of Roots of Algebraic Equations,Bull. A.M.S.",
+ title = "On a Theorem by {J. L. Walsh} Concerning the Moduli of Roots of Algebraic Equations",
 author = "A. M. Ostrowski",
 journal = "Bulletin of the American Mathematical Society",
 volume = "47",
 year = "1941",
}
@Article{Kahinall14,
- title = "Parallel implementation of the {D}urand-{K}erner algorithm for polynomial root-finding on GPU",
+ title = "Parallel implementation of the {D}urand-{K}erner algorithm for polynomial root-finding on {GPU}",
 journal = "IEEE Int. Conf. on Advanced Networking, Distributed Systems and Applications",
 year = "2014",
}
@InProceedings{Winogard72,
title = "Parallel Iteration Methods",
- author = "Shmuel Winograd",
+ author = "S. Winograd",
bibdate = "2011-09-13",
bibsource = "DBLP,
http://dblp.uni-trier.de/db/conf/coco/coco1972.html#Winograd72",
 booktitle = "Complexity of Computer Computations",
 year = "1972",
}
\end{equation}
This solution is applied when the modulus of a root exceeds the unit circle, represented by the radius $R$ evaluated in the C language as:
+\begin{equation}
\label{R.EL}
-\begin{center}
-\begin{verbatim}
-R = exp(log(DBL_MAX)/(2*n) );
+R = \exp\left( \log(\mathit{DBL\_MAX}) / (2n) \right)
-\end{verbatim}
-\end{center}
+\end{equation}
+
%\begin{equation}
%In CUDA programming, all the instructions of the \verb=for= loop are executed by the GPU as a kernel. A kernel is a function written in CUDA and defined by the \verb=__global__= qualifier added before a usual \verb=C= function, which instructs the compiler to generate appropriate code to pass it to the CUDA runtime in order to be executed on the GPU.
-Algorithm~\ref{alg2-cuda} shows steps of the Ehrlich-Aberth algorithm using CUDA.
+Algorithm~\ref{alg2-cuda} shows a sketch of the Ehrlich-Aberth algorithm using CUDA.
\begin{enumerate}
\begin{algorithm}[H]
must be copied from the CPU memory to the GPU global memory. Next, all
the data-parallel arithmetic operations inside the main loop
\verb=(while(...))= are executed as kernels by the GPU. The
-first kernel named \textit{save} in line 6 of
+first kernel named \textit{save} in line 7 of
Algorithm~\ref{alg2-cuda} consists in saving, in GPU memory, the
vector of the polynomial's roots found at the previous iteration, in
order to check the convergence of the roots after each iteration (line
-8, Algorithm~\ref{alg2-cuda}).
+10, Algorithm~\ref{alg2-cuda}).
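Such a save kernel is essentially a parallel copy of the root vector; a minimal CUDA sketch, under assumed names (`Z`, `ZPrev`, roots stored as `cuDoubleComplex`):

```cuda
#include <cuComplex.h>

// Each thread saves one root of the current vector Z into ZPrev, so the
// convergence kernel can later compare the two successive iterates.
__global__ void save(const cuDoubleComplex *Z, cuDoubleComplex *ZPrev, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        ZPrev[i] = Z[i];
}
```

It would be launched, e.g., as \verb=save<<<(n + 255) / 256, 256>>>(dZ, dZPrev, n);= before each update of $Z$.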
The second kernel executes the iterative function and updates
$Z$, according to Algorithm~\ref{alg3-update}. We notice that the
kernel executes the classical EA function when the modulus of the
current complex root is less than a certain value called the radius,
i.e. $|z^{k}_{i}| \leq R$; else the kernel executes the EA.EL
function of Eq.~\ref{Log_H2}
-(with Eq.~\ref{deflncomplex}, Eq.~\ref{defexpcomplex}). The radius $R$ is evaluated as in ~\ref{R.EL} :
-
-$$R = \exp( \log(DBL\_MAX) / (2*n) )$$ where $DBL\_MAX$ stands for the maximum representable double value.
+(with Eq.~\ref{deflncomplex}, Eq.~\ref{defexpcomplex}). The radius $R$ is evaluated as in Eq.~\ref{R.EL}.
The last kernel checks the convergence of the roots after each update
-of $Z^{(k)}$, according to formula Eq.~\ref{eq:Aberth-Conv-Cond}. We used the functions of the CUBLAS Library (CUDA Basic Linear Algebra Subroutines) to implement this kernel.
+of $Z^{k}$, according to Eq.~\ref{eq:Aberth-Conv-Cond}. We used the functions of the CUBLAS library (CUDA Basic Linear Algebra Subroutines) to implement this kernel.
The kernel terminates its computations when all the roots have
converged. It should be noticed that, as blocks of threads are
%polynomials. The execution time remains the
%element-key which justifies our work of parallelization.
For our tests, an Intel(R) Xeon(R) CPU
-E5620@2.40GHz and a GPU K40 (with 6 Go of ram) is used.
+E5620 @ 2.40 GHz and an NVIDIA K40 GPU (with 6 GB of RAM) are used.
%\subsection{Comparative study}
In Figure~\ref{fig:01}, we report the execution times of the
Ehrlich-Aberth method on one core of a Quad-Core Xeon E5620 CPU, on
four cores on the same machine with \textit{OpenMP} and on a Nvidia
-Tesla K40c GPU. We chose different sparse polynomials with degrees
+Tesla K40 GPU. We chose different sparse polynomials with degrees
ranging from 100,000 to 1,000,000. We can see that the GPU
implementation is faster than those on the CPU.
However, the execution time for the