X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper2.git/blobdiff_plain/19056f53cfad07a463aa7197cef66838543b2777..53e4e1ca95c2a0805ab15a71eb47157aeb187aed:/paper.tex?ds=inline

diff --git a/paper.tex b/paper.tex
index 02bb637..d1ead14 100644
--- a/paper.tex
+++ b/paper.tex
@@ -351,22 +351,16 @@
 % author names and affiliations
 % use a multiple column layout for up to three different
 % affiliations
-\author{\IEEEauthorblockN{Michael Shell}
-\IEEEauthorblockA{School of Electrical and\\Computer Engineering\\
-Georgia Institute of Technology\\
-Atlanta, Georgia 30332--0250\\
-Email: http://www.michaelshell.org/contact.html}
+\author{\IEEEauthorblockN{Kahina Guidouche, Abderrahmane Sider}
+\IEEEauthorblockA{Laboratoire LIMED\\
+Faculté des sciences exactes\\
+Université de Bejaia, 06000, Algeria\\
+Email: \{kahina.ghidouche,ar.sider\}@univ-bejaia.dz}
 \and
-\IEEEauthorblockN{Homer Simpson}
-\IEEEauthorblockA{Twentieth Century Fox\\
-Springfield, USA\\
-Email: homer@thesimpsons.com}
-\and
-\IEEEauthorblockN{James Kirk\\ and Montgomery Scott}
-\IEEEauthorblockA{Starfleet Academy\\
-San Francisco, California 96678--2391\\
-Telephone: (800) 555--1212\\
-Fax: (888) 555--1212}}
+\IEEEauthorblockN{Lilia Ziane Khodja, Raphaël Couturier}
+\IEEEauthorblockA{FEMTO-ST Institute\\
+University of Bourgogne Franche-Comte, France\\
+Email: zianekhodja.lilia@gmail.com\\ raphael.couturier@univ-fcomte.fr}}
 
 % conference papers do not typically use \thanks and this command
 % is locked out in conference mode. If really needed, such as for
@@ -1224,13 +1218,25 @@ Under 1 million, OpenMPI and MPI are almost equivalent.
 
 \section{Conclusion}
 \label{sec6}
-In this paper, we have presented a parallel implementation of Ehrlich-Aberth algorithm for solving full and sparse polynomials, on single GPU with CUDA and on multiple GPUs using two parallel paradigms : shared memory with OpenMP and distributed memory with MPI. These architectures were addressed by a CUDA-OpenMP approach and CUDA-MPI approach, respectively.
-The experiments show that, using parallel programming model like (OpenMP, MPI), we can efficiently manage multiple graphics cards to work together to solve the same problem and accelerate the parallel execution with 4 GPUs and solve a polynomial of degree 1,000,000, four times faster than on single GPU, that is a quasi-linear speedup.
+In this paper, we have presented a parallel implementation of the
+Ehrlich-Aberth algorithm to solve full and sparse polynomials, on a
+single GPU with CUDA and on multiple GPUs using two parallel
+paradigms: shared memory with OpenMP and distributed memory with
+MPI. These architectures were addressed by a CUDA-OpenMP approach and
+a CUDA-MPI approach, respectively. The experiments show that, using
+parallel programming models such as OpenMP and MPI, we can
+efficiently manage multiple graphics cards working together on the
+same problem and accelerate the parallel execution: with 4 GPUs we
+can solve a polynomial of degree up to 5,000,000 four times faster
+than on a single GPU.
 
 %In future, we will evaluate our parallel implementation of Ehrlich-Aberth algorithm on other parallel programming model
 
-Our next objective is to extend the model presented here at clusters of nodes featuring multiple GPUs, with a three-level scheme: inter-node communication via MPI processes (distributed memory), management of multi-GPU node by OpenMP threads (shared memory).
+Our next objective is to extend the model presented here to clusters of
+GPU nodes, with a three-level scheme: inter-node communication via MPI
+processes (distributed memory), management of each multi-GPU node by
+OpenMP threads (shared memory), and computation on each GPU with CUDA.
 
 
 %present a communication approach between multiple GPUs. The comparison between MPI and OpenMP as GPUs controllers shows that these
 %solutions can effectively manage multiple graphics cards to work together
@@ -1248,8 +1254,10 @@ Our next objective is to extend the model presented here at clusters of nodes fe
 
 
 % use section* for acknowledgment
 \section*{Acknowledgment}
 
+Computations have been performed on the supercomputer facilities of the
+Mésocentre de calcul de Franche-Comté. We would also like to thank
+Nvidia for its hardware donation under the CUDA Research Center 2014.
-The authors would like to thank...
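For readers unfamiliar with the CUDA-OpenMP pattern the revised conclusion refers to, the sketch below illustrates how one OpenMP thread can be bound to each GPU with cudaSetDevice. It is only an illustration under stated assumptions, not the paper's implementation: the kernel dummy_iteration, the chunk size, and the file name are hypothetical.

/*
 * Minimal sketch (not the paper's code): one OpenMP thread drives one GPU,
 * as in a CUDA-OpenMP multi-GPU setup.  Kernel and data layout are
 * hypothetical placeholders.
 * Build with: nvcc -Xcompiler -fopenmp multigpu_sketch.cu
 */
#include <omp.h>
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void dummy_iteration(double *chunk, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        chunk[i] += 1.0;   /* stands in for one update step on this GPU's share */
}

int main(void)
{
    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);
    if (ngpu < 1)
        return 1;

    /* one OpenMP thread per GPU; each thread owns its device context */
    #pragma omp parallel num_threads(ngpu)
    {
        int dev = omp_get_thread_num();
        cudaSetDevice(dev);

        const int n = 1 << 20;            /* this GPU's share of the data */
        double *d_chunk = NULL;
        cudaMalloc((void **)&d_chunk, n * sizeof(double));
        cudaMemset(d_chunk, 0, n * sizeof(double));

        dummy_iteration<<<(n + 255) / 256, 256>>>(d_chunk, n);
        cudaDeviceSynchronize();

        cudaFree(d_chunk);
        printf("GPU %d finished its chunk\n", dev);
    }
    return 0;
}

The same per-device pattern carries over to the CUDA-MPI variant, with MPI processes in place of OpenMP threads and explicit messages replacing shared memory.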