@@ -95,7+96,7 @@ root finding of polynomials, high degree, iterative methods, Durand-Kerner, GPU,
Polynomials are algebraic structures used in mathematics that capture physical phenomena and express the outcome as a function of some unknown variable. Formally speaking, a polynomial $p(x)$ of degree \textit{n} has $n$ coefficients in the complex plane \textit{C} and zeros $\alpha_{i},\textit{i=1,...,n}$
%%\begin{center}
\begin{equation}
p(x) = \sum_{i=0}^{n} a_{i} x^{i} = a_{n} \prod_{i=1}^{n} (x - \alpha_{i}), \quad a_{n} \neq 0.
\end{equation}
In this sequential algorithm, a single CPU thread executes all the steps. Let us look at the $3^{rd}$ step, i.e. the execution of the iterative function: two sub-steps are needed. The first sub-step \textit{save}s the solution vector of the previous iteration; the second sub-step \textit{update}s, i.e. computes, the new values of the roots vector.
~\\
-There exists two ways to execute the iterative function that we call a Jacobi one and a Gauss-Seidel one. With the Jacobi iteration, at iteration $k+1$ we need all the previous values $z^{(k)}_{i}$ to compute the new values $z^{(k+1)}_{i}$, taht is :
+There exists two ways to execute the iterative function that we call a Jacobi one and a Gauss-Seidel one. With the Jacobi iteration, at iteration $k+1$ we need all the previous values $z^{(k)}_{i}$ to compute the new values $z^{(k+1)}_{i}$, that is :
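Written out explicitly (our generic notation, with $H_{i}$ denoting the $i$-th component of the iterative function), the two schemes are:
\begin{equation*}
\text{Jacobi:}\qquad z^{(k+1)}_{i} = H_{i}\bigl(z^{(k)}_{1},\ldots,z^{(k)}_{n}\bigr), \qquad i = 1,\ldots,n,
\end{equation*}
\begin{equation*}
\text{Gauss-Seidel:}\qquad z^{(k+1)}_{i} = H_{i}\bigl(z^{(k+1)}_{1},\ldots,z^{(k+1)}_{i-1}, z^{(k)}_{i},\ldots,z^{(k)}_{n}\bigr).
\end{equation*}
The Gauss-Seidel scheme reuses components already updated in the current sweep, which typically reduces the number of iterations but introduces a dependency between the components.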
@@ -579,34+580,30 @@ The last kernel verifies the convergence of the roots after each update of $Z^{(
The kernels terminate their computations when all the roots have converged. Finally, the solution of the root finding problem is copied back from the GPU global memory to the CPU memory. We use the communication functions of CUDA for the memory allocation on the GPU \verb=(cudaMalloc())= and for the data transfers from the CPU memory to the GPU memory \verb=(cudaMemcpyHostToDevice)=
or from the GPU memory to the CPU memory \verb=(cudaMemcpyDeviceToHost)=.
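The allocation and transfer pattern described above can be sketched as follows. This is an illustrative CUDA C fragment, not the paper's code: the names \verb=ZHost= and \verb=ZDevice= and the single-precision complex type (from \verb=cuComplex.h=) are our assumptions.

```c
/* Sketch of the CPU <-> GPU transfer pattern (assumes the CUDA runtime API). */
cuFloatComplex *ZDevice;                    /* root vector in GPU global memory */
size_t bytes = n * sizeof(cuFloatComplex);

cudaMalloc((void **)&ZDevice, bytes);                       /* GPU allocation */
cudaMemcpy(ZDevice, ZHost, bytes, cudaMemcpyHostToDevice);  /* CPU -> GPU     */

/* ... launch the kernels repeatedly until all roots converge ... */

cudaMemcpy(ZHost, ZDevice, bytes, cudaMemcpyDeviceToHost);  /* GPU -> CPU     */
cudaFree(ZDevice);
```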
-\subsection{Experimental study}
+\section{Experimental study}
-\subsubsection{Definition of the polynomial used}
-We use a polynomial of the following form for which the
-roots are distributed on 2 distinct circles:
+\subsection{Definition of the polynomial used}
+We study two forms of polynomials: sparse polynomials and full polynomials:
+\paragraph{Sparse polynomial}: in the following form, the roots are distributed on 2 distinct circles:
-with this formula, we can have until \textit{n} non zero terms.
-
-\subsubsection{The study condition}
-In order to have representative average values, for each
-point of our curves we measured the roots finding of 10
-different polynomials.
+with this form, we can have up to \textit{n} non-zero terms.
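As a concrete illustration (our example, not necessarily the exact polynomial used in the experiments), a sparse polynomial whose roots lie on two distinct circles of radii $r_{1}$ and $r_{2}$ is
\begin{equation*}
p(x) = \bigl(x^{n_{1}} - r_{1}^{\,n_{1}}\bigr)\bigl(x^{n_{2}} - r_{2}^{\,n_{2}}\bigr),
\end{equation*}
whose $n_{1}+n_{2}$ roots are the scaled roots of unity $r_{1}e^{2\pi \mathrm{i} k/n_{1}}$, $k=0,\ldots,n_{1}-1$, and $r_{2}e^{2\pi \mathrm{i} k/n_{2}}$, $k=0,\ldots,n_{2}-1$.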
+\subsection{The study conditions}
Our experimental results concern two parameters: the
polynomial degree and the execution time our program needs
to converge to the solution. The polynomial degree allows us
@@ -614,12+611,13 @@ to validate that our algorithm is powerful with high degree
polynomials. The execution time remains the
key element that justifies our parallelization work.
For our tests we used an Intel(R) Xeon(R) CPU
-E5620@2.40GHz and a GPU Tesla C2070 (with 6 Go of ram)
+E5620@2.40GHz and a GPU K40 (with 6 GB of RAM).
-\subsubsection{Comparative study}
+
+\subsection{Comparative study}
We first study the convergence of the Aberth algorithm with various polynomial sizes; second, we evaluate the influence of the number of threads per block.
-\paragraph{Aberth algorithm on CPU and GPU}
+\subsubsection{Aberth algorithm on CPU and GPU}
%\begin{table}[!ht]
% \centering
@@ -646,7+644,7 @@ We initially carried out the convergence of Aberth algorithm with various sizes
\end{figure}
-\paragraph{The impact of the thread's number into the convergence of Aberth algorithm}
+\subsubsection{The impact of the number of threads on the convergence of the Aberth algorithm}
%\begin{table}[!h]
% \centering
@@ -674,9+672,14 @@ We initially carried out the convergence of Aberth algorithm with various sizes