\label{fig:04}
\end{figure}
-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{Sparse}
-\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving sparse plynomials on GPUs}
-\label{fig:05}
-\end{figure}
-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{Full}
-\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving full polynomials on GPUs}
-\label{fig:06}
-\end{figure}
+This figure shows the execution time of the EA algorithm on a single GPU and on multiple GPUs (2, 3 and 4 GPUs) for full polynomials. With the CUDA-MPI approach, we notice that the three multi-GPU curves are clearly distinct from one another: the more GPUs we use, the lower the execution time, whereas the single-GPU curve lies well above the others.
+This is due to the MPI parallelization paradigm, which divides the polynomial into sub-polynomials assigned to each GPU, unlike the single-GPU version, which solves the whole polynomial on one device and consequently needs more execution time.
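+To make this distribution concrete, the following minimal sketch (hypothetical code, not our exact implementation; the kernel name \texttt{ea\_kernel} is a placeholder and a CUDA-aware MPI library is assumed) shows how each MPI process can drive one GPU, update its own slice of the roots and then exchange the updated values with the other processes:
+\begin{verbatim}
+/* Hypothetical sketch: one MPI process per GPU, each process
+   updates only its slice of the root vector.
+   Assumes CUDA-aware MPI and n divisible by size. */
+#include <mpi.h>
+#include <cuda_runtime.h>
+#include <cuComplex.h>
+
+/* placeholder for the actual Ehrlich-Aberth update kernel */
+__global__ void ea_kernel(cuDoubleComplex *z, int n,
+                          int start, int chunk);
+
+void ea_iteration(cuDoubleComplex *d_roots, int n,
+                  int rank, int size)
+{
+  int ndev;
+  cudaGetDeviceCount(&ndev);
+  cudaSetDevice(rank % ndev);   /* bind this process to one GPU */
+
+  int chunk = n / size;         /* sub-polynomial of this rank  */
+  int start = rank * chunk;
+  ea_kernel<<<(chunk + 255) / 256, 256>>>(d_roots, n, start, chunk);
+  cudaDeviceSynchronize();
+
+  /* every process gathers the slices updated by the others;
+     a complex root travels as two doubles */
+  MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
+                d_roots, 2 * chunk, MPI_DOUBLE, MPI_COMM_WORLD);
+}
+\end{verbatim}
+Gathering all slices on every process keeps the complete root vector available to each GPU for the next iteration, at the cost of one collective communication per iteration.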
-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{MPI}
-\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with distributed memory paradigm using MPI}
-\label{fig:07}
-\end{figure}
+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{Sparse}
+%\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving sparse plynomials on GPUs}
+%\label{fig:05}
+%\end{figure}
-\begin{figure}[htbp]
-\centering
- \includegraphics[angle=-90,width=0.5\textwidth]{OMP}
-\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with shared memory paradigm using OpenMP}
-\label{fig:08}
-\end{figure}
+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{Full}
+%\caption{Comparaison between MPI and OpenMP versions of the Ehrlich-Aberth method for solving full polynomials on GPUs}
+%\label{fig:06}
+%\end{figure}
+
+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{MPI}
+%\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with distributed memory paradigm using MPI}
+%\label{fig:07}
+%\end{figure}
+
+%\begin{figure}[htbp]
+%\centering
+ % \includegraphics[angle=-90,width=0.5\textwidth]{OMP}
+%\caption{Comparaison of execution times of the Ehrlich-Aberth method for solving sparse and full polynomials on GPUs with shared memory paradigm using OpenMP}
+%\label{fig:08}
+%\end{figure}
% An example of a floating figure using the graphicx package.
% Note that \label must occur AFTER (or within) \caption.
\section{Conclusion}
-The conclusion goes here~\cite{IEEEexample:bibtexdesign}.
+In this paper, we have presented a parallel implementation of the Ehrlich-Aberth algorithm for solving full and sparse polynomials, on a single GPU with CUDA and on multiple GPUs using two parallel paradigms: shared memory with OpenMP (the CUDA-OpenMP approach) and distributed memory with MPI (the CUDA-MPI approach).
+We have performed many experiments with the Ehrlich-Aberth method on a single GPU, on multiple GPUs with the CUDA-OpenMP approach and on multiple GPUs with the CUDA-MPI approach, for both sparse and full polynomials. The experiments show that a parallel programming model such as OpenMP or MPI can effectively manage multiple graphics cards working together to solve the same problem and accelerate parallel applications; for instance, the CUDA-MPI approach with 4 GPUs solves a polynomial of degree 1,000,000 about 4 times faster than a single GPU.
+
+
+In future work, we will evaluate our parallel implementation of the Ehrlich-Aberth algorithm on other parallel programming models.
+
+