X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/kahina_paper2.git/blobdiff_plain/19056f53cfad07a463aa7197cef66838543b2777..3afb8ef68cbf6eadd54fd00ebcfcef99abdf42e5:/paper.tex

diff --git a/paper.tex b/paper.tex
index 02bb637..93aeced 100644
--- a/paper.tex
+++ b/paper.tex
@@ -178,7 +178,7 @@
 % *** SPECIALIZED LIST PACKAGES ***
 %
-\usepackage{algorithmic}
+
 % algorithmic.sty was written by Peter Williams and Rogerio Brito.
 % This package provides an algorithmic environment for describing algorithms.
 % You can use the algorithmic environment in-text or within a figure
@@ -351,22 +351,16 @@
 % author names and affiliations
 % use a multiple column layout for up to three different
 % affiliations
-\author{\IEEEauthorblockN{Michael Shell}
-\IEEEauthorblockA{School of Electrical and\\Computer Engineering\\
-Georgia Institute of Technology\\
-Atlanta, Georgia 30332--0250\\
-Email: http://www.michaelshell.org/contact.html}
-\and
-\IEEEauthorblockN{Homer Simpson}
-\IEEEauthorblockA{Twentieth Century Fox\\
-Springfield, USA\\
-Email: homer@thesimpsons.com}
+\author{\IEEEauthorblockN{Kahina Guidouche, Abderrahmane Sider}
+  \IEEEauthorblockA{Laboratoire LIMED\\
+    Faculté des sciences exactes\\
+    Université de Bejaia, 06000, Algeria\\
+Email: \{kahina.ghidouche,ar.sider\}@univ-bejaia.dz}
 \and
-\IEEEauthorblockN{James Kirk\\ and Montgomery Scott}
-\IEEEauthorblockA{Starfleet Academy\\
-San Francisco, California 96678--2391\\
-Telephone: (800) 555--1212\\
-Fax: (888) 555--1212}}
+\IEEEauthorblockN{Lilia Ziane Khodja, Raphaël Couturier}
+\IEEEauthorblockA{FEMTO-ST Institute\\
+  University of Bourgogne Franche-Comte, France\\
+Email: zianekhodja.lilia@gmail.com\\ raphael.couturier@univ-fcomte.fr}}
 
 % conference papers do not typically use \thanks and this command
 % is locked out in conference mode. If really needed, such as for
@@ -415,10 +409,11 @@ using different parallel paradigms: OpenMP or MPI. The experiments show a
 quasi-linear speedup by using up to 4 GPU devices compared to 1 GPU to
 find roots of polynomials of degree up to 1.4 million. Moreover, other
 experiments show it is possible to find roots
-of polynomials of degree up to 5 millions.
+of polynomials of degree up to 5 million.
 \end{abstract}
 
 % no keywords
+\LZK{Shouldn't we add keywords?}
 
 
 
@@ -447,7 +442,7 @@ of polynomials of degree up to 5 millions.
 Finding roots of polynomials of very high degrees arises in many complex problems in various domains such as algebra, biology or physics. A polynomial $p(x)$ in $\mathbb{C}$ in one variable $x$ is an algebraic expression in $x$ of the form:
 \begin{equation}
-p(x) = \displaystyle\sum^n_{i=0}{a_ix^i},a_n\neq 0.
+p(x) = \displaystyle\sum^n_{i=0}{a_ix^i}, \quad a_n\neq 0,
 \end{equation}
 where $\{a_i\}_{0\leq i\leq n}$ are complex coefficients and $n$ is a large integer. If $a_n\neq0$ then $n$ is called the degree of the polynomial. The root-finding problem consists in finding the $n$ different values of the unknown variable $x$ for which $p(x)=0$. Such values are called roots of $p(x)$. Let $\{z_i\}_{1\leq i\leq n}$ be the roots of polynomial $p(x)$, then $p(x)$ can be written as:
 \begin{equation}
@@ -497,11 +492,11 @@ of parallelization for a shared memory architecture with OpenMP and
 for a distributed memory one with MPI. They are able to compute the
 roots of sparse polynomials of degree 10,000 in 116 seconds with
 OpenMP and 135 seconds with MPI only by using 8 personal computers and
-2 communications per iteration. \RC{si on donne des temps faut donner
-  le proc, comme c'est vieux à mon avis faut supprimer ca, votre avis?} The authors showed an interesting
+2 communications per iteration. The authors showed an interesting
 speedup compared to the sequential implementation, which takes up to
 3,300 seconds to obtain the same results.
-\LZK{``only by using 8 personal computers and 2 communications per iteration''. Pour MPI? et Pour OpenMP: Rep: c'est MPI seulement}
+\RC{If we give timings we must also give the processor used; since these results are old, in my opinion we should remove them. What do you think?}
+\LZK{Let us remove these details and put a reference if there is one}
 
 Very little work was done from then until the advent of the Compute Unified Device Architecture (CUDA)~\cite{CUDA15}, a parallel computing platform and programming model invented by NVIDIA. The computing power of GPUs (Graphics Processing Units) has exceeded that of traditional CPUs. However, CUDA adopts a totally new computing architecture that exploits the hardware resources of the GPU in order to offer much greater computing power for massively data-parallel computations. Ghidouche et al.~\cite{Kahinall14} proposed an implementation of the Durand-Kerner method on a single GPU. Their main results showed that a parallel CUDA implementation is about 10 times faster than the sequential implementation on a single CPU for sparse polynomials of degree 48,000.
 
@@ -509,19 +504,22 @@ Very few work had been performed since then until the appearing of the Compute U
 %\LZK{Cette partie est réécrite. \\ Sinon qu'est ce qui a été fait pour l'accuracy dans ce papier (Finding polynomial roots rapidly and accurately is the main objective of our work.)?}
 %\LZK{Les contributions ne sont pas définies !!}
 
-In this paper we propose the parallelization of Ehrlich-Aberth method using two parallel programming paradigms OpenMP and MPI on CUDA multi-GPU platforms. Our CUDA/MPI and CUDA/OpenMP codes are the first implementations of Ehrlich-Aberth method with multiple GPUs for finding roots of polynomials. Our major contributions include:
-\LZK{Pourquoi la méthode Ehrlich-Aberth et pas autres? the Ehrlich-Aberth have very good convergence and it is suitable to be implemented in parallel computers.}
+%In this paper we propose the parallelization of Ehrlich-Aberth method using two parallel programming paradigms OpenMP and MPI on CUDA multi-GPU platforms. Our CUDA-MPI and CUDA-OpenMP codes are the first implementations of Ehrlich-Aberth method with multiple GPUs for finding roots of polynomials. Our major contributions include:
+%\LZK{Pourquoi la méthode Ehrlich-Aberth et pas autres? the Ehrlich-Aberth have very good convergence and it is suitable to be implemented in parallel computers.}
+In this paper we propose the parallelization of the Ehrlich-Aberth method, which has good convergence properties and is well suited to implementation on parallel computers. We use two parallel programming paradigms, OpenMP and MPI, on CUDA multi-GPU platforms. Our CUDA-MPI and CUDA-OpenMP codes are the first implementations of the Ehrlich-Aberth method with multiple GPUs for finding roots of polynomials. Our major contributions include:
+\LZK{I added a sentence to justify our choice of the Ehrlich-Aberth method. To be re-checked.}
 \begin{itemize}
 	
-  \item An improvements for the Ehrlich-Aberth method using the exponential logarithm in order to be able to solve sparse and full polynomial of degree up to 1, 000, 000.\RC{j'ai envie de virer ca, car c'est pas la nouveauté dans ce papier}
-  \item A parallel implementation of Ehrlich-Aberth method on single GPU with CUDA.\RC{idem}
+  %\item An improvements for the Ehrlich-Aberth method using the exponential logarithm in order to be able to solve sparse and full polynomial of degree up to 1, 000, 000.\RC{j'ai envie de virer ca, car c'est pas la nouveauté dans ce papier}
+  %\item A parallel implementation of Ehrlich-Aberth method on single GPU with CUDA.\RC{idem}
 \item The parallel implementation of the Ehrlich-Aberth algorithm on a multi-GPU platform with shared memory, using the OpenMP API. It is based on threads created from the same system process, such that each thread is attached to one GPU. In this case the communications between GPUs are done by the OpenMP threads through shared memory.
-\item The parallel implementation of Ehrlich-Aberth algorithm on a multi-GPU platform with a distributed memory using MPI API, such that each GPU is attached and managed by a MPI process. The GPUs exchange their data by message-passing communications. This latter approach is more used on clusters to solve very complex problems that are too large for traditional supercomputers, which are very expensive to build and run.
+\item The parallel implementation of the Ehrlich-Aberth algorithm on a multi-GPU platform with distributed memory, using the MPI API, such that each GPU is attached to and managed by an MPI process. The GPUs exchange their data by message-passing communications.
 \end{itemize}
-\LZK{Pas d'autres contributions possibles?: j'ai rajouté 2}
+The latter approach is more commonly used on clusters to solve very complex problems that are too large for traditional supercomputers, which are very expensive to build and run.
+\LZK{No other possible contributions? I removed the first two points proposed previously.}
 
 %This paper is organized as follows. In Section~\ref{sec2} we recall the Ehrlich-Aberth method. In section~\ref{sec3} we present EA algorithm on single GPU. In section~\ref{sec4} we propose the EA algorithm implementation on Multi-GPU for (OpenMP-CUDA) approach and (MPI-CUDA) approach. In sectioné\ref{sec5} we present our experiments and discus it. Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions in this topic.}
 
-The paper is organized as follows. In Section~\ref{sec2} we present three different parallel programming models OpenMP, MPI and CUDA. In Section~\ref{sec3} we present the implementation of the Ehrlich-Aberth algorithm on a single GPU. In Section~\ref{sec4} we present the parallel implementations of the Ehrlich-Aberth algorithm on Multi-GPU using the OpenMP and MPI approaches. In section\ref{sec5} we present our experiments and discus it. Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions in this topic.
+The paper is organized as follows. In Section~\ref{sec2} we present three different parallel programming models: OpenMP, MPI and CUDA. In Section~\ref{sec3} we present the implementation of the Ehrlich-Aberth algorithm on a single GPU. In Section~\ref{sec4} we present the parallel implementations of the Ehrlich-Aberth algorithm on multiple GPUs using the OpenMP and MPI approaches. In Section~\ref{sec5} we present our experiments and discuss them.
 Finally, Section~\ref{sec6} concludes this paper and gives some hints for future research directions in this topic.
 %\LZK{This whole organization needed review: I have just revised it}
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -661,11 +659,11 @@ CUDA (Compute Unified Device Architecture) is a parallel computing architecture
 The Ehrlich-Aberth method is a simultaneous method~\cite{Aberth73} using the following iteration
 \begin{equation}
 \label{Eq:EA1}
-EA: z^{k+1}_{i}=z_{i}^{k}-\frac{\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}}
-{1-\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}\sum_{j=1,j\neq i}^{j=n}{\frac{1}{(z_{i}^{k}-z_{j}^{k})}}}, i=1,. . . .,n
+z^{k+1}_{i}=z_{i}^{k}-\frac{\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}}
+{1-\frac{p(z_{i}^{k})}{p'(z_{i}^{k})}\sum_{j=1,j\neq i}^{j=n}{\frac{1}{(z_{i}^{k}-z_{j}^{k})}}}, \quad i=1,\ldots,n.
 \end{equation}
 
-This methods contains 4 steps. The first step consists of the initial
+This method contains 4 steps. The first step computes initial
 approximations of all the roots of the polynomial. The second step
 initializes the solution vector $Z$ using the Guggenheimer
 method~\cite{Gugg86} to ensure the distinction of the initial vector
@@ -778,10 +776,10 @@ CUDA running threads like threads on a CPU host.
 
 Algorithm~\ref{alg1-cuda} shows the GPU parallel
 implementation of the Ehrlich-Aberth method.
 
-\begin{enumerate}
 \begin{algorithm}[htpb]
 \label{alg1-cuda}
-%\LinesNumbered
+\LinesNumbered
+\SetAlgoNoLine
 \caption{CUDA Algorithm to find roots with the Ehrlich-Aberth method}
 
 \KwIn{$Z^{0}$ (Initial root's vector), $\varepsilon$ (Error tolerance
@@ -791,27 +789,19 @@ implementation of Ehrlich-Aberth method.
 
 %\BlankLine
 
-\item Initialization of P\;
-\item Initialization of Pu\;
-\item Initialization of the solution vector $Z^{0}$\;
-\item Allocate and copy initial data to the GPU global memory\;
-\item k=0\;
-\item \While {$\Delta z_{max} > \epsilon$}{
-\item Let $\Delta z_{max}=0$\;
-\item $ kernel\_save(ZPrec,Z)$\;
-\item k=k+1\;
-\item $ kernel\_update(Z,P,Pu)$\;
-\item $kernel\_testConverge(\Delta z_{max},Z,ZPrec)$\;
+Initialization of P\;
+Initialization of Pu\;
+Initialization of the solution vector $Z^{0}$\;
+Allocate and copy initial data to the GPU global memory\;
+\While {$\Delta z_{max} > \epsilon$}{
+ $kernel\_save(ZPrec,Z)$\;
+ $kernel\_update(Z,P,Pu)$\;
+ $\Delta z_{max}=kernel\_testConverge(Z,ZPrec)$\;
 }
-\item Copy results from GPU memory to CPU memory\;
+Copy results from GPU memory to CPU memory\;
 \end{algorithm}
-\end{enumerate}
-~\\
-\RC{Au final, on laisse ce code, on l'explique, si c'est kahina qui
-  rajoute l'explication, il faut absolument ajouter \KG{dfsdfsd}, car
-  l'anglais sera à relire et je ne veux pas tout relire... }
 
 \section{The EA algorithm on Multiple GPUs}
 \label{sec4}
@@ -868,43 +858,41 @@ shared memory arrays containing all the roots.
 %% roots sufficiently converge.
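+To make the thread-to-GPU binding concrete, the following is a
+minimal host-side sketch of this scheme, complementary to
+Algorithm~\ref{alg2-cuda-openmp} below. It is only an illustration:
+the kernel calls are elided, and every name apart from the OpenMP and
+CUDA API calls is hypothetical.
+\begin{verbatim}
+/* Sketch only: one OpenMP thread per GPU.
+   Compile e.g. with: nvcc -Xcompiler -fopenmp sketch.cu */
+#include <omp.h>
+#include <cuda_runtime.h>
+#include <stdio.h>
+
+int main(void) {
+  const int size = 1000000;      /* total number of roots (example) */
+  int num_gpus = 0;
+  cudaGetDeviceCount(&num_gpus); /* number of available GPUs */
+  if (num_gpus == 0) return 1;
+  omp_set_num_threads(num_gpus); /* one OpenMP thread per GPU */
+  #pragma omp parallel
+  {
+    int gpu_id = omp_get_thread_num();
+    cudaSetDevice(gpu_id);       /* attach this thread to one GPU */
+    int loc = size / num_gpus;   /* local number of roots */
+    int off = gpu_id * loc;      /* offset of this thread's portion */
+    printf("thread %d drives GPU %d: roots [%d,%d)\n",
+           gpu_id, gpu_id, off, off + loc);
+    /* here: allocate Z_loc and ZPrec_loc on this GPU, then iterate
+       kernel_save / kernel_update / kernel_testConv as in
+       Algorithm 2, with Z and the error vector in shared memory */
+  }
+  return 0;
+}
+\end{verbatim}
+Within the parallel region each thread issues its CUDA calls
+independently; the shared array $Z$ and the vector of local errors
+$\Delta z$ are the only points of interaction between the threads.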
-%% \begin{enumerate}
-%% \begin{algorithm}[htpb]
-%% \label{alg2-cuda-openmp}
-%% %\LinesNumbered
-%% \caption{CUDA-OpenMP Algorithm to find roots with the Ehrlich-Aberth method}
-
-%% \KwIn{$Z^{0}$ (Initial root's vector), $\varepsilon$ (Error tolerance
-%%   threshold), P (Polynomial to solve), Pu (Derivative of P), $n$ (Polynomial degree), $\Delta z$ ( Vector of errors for stop condition), $num_gpus$ (number of OpenMP threads/ Number of GPUs), $Size$ (number of roots)}
-
-%% \KwOut {$Z$ ( Root's vector), $ZPrec$ (Previous root's vector)}
-
-%% \BlankLine
-
-%% \item Initialization of P\;
-%% \item Initialization of Pu\;
-%% \item Initialization of the solution vector $Z^{0}$\;
-%% \verb=omp_set_num_threads(num_gpus);=
-%% \verb=#pragma omp parallel shared(Z,$\Delta$ z,P);=
-%% \verb=cudaGetDevice(gpu_id);=
-%% \item Allocate and copy initial data from CPU memory to the GPU global memories\;
-%% \item index= $Size/num\_gpus$\;
-%% \item k=0\;
-%% \While {$error > \epsilon$}{
-%% \item Let $\Delta z=0$\;
-%% \item $ kernel\_save(ZPrec,Z)$\;
-%% \item k=k+1\;
-%% \item $ kernel\_update(Z,P,Pu,index)$\;
-%% \item $kernel\_testConverge(\Delta z[gpu\_id],Z,ZPrec)$\;
-%% %\verb=#pragma omp barrier;=
-%% \item error= Max($\Delta z$)\;
-%% }
-
-%% \item Copy results from GPU memories to CPU memory\;
-%% \end{algorithm}
-%% \end{enumerate}
-%% ~\\
-%% \RC{C'est encore pire ici, on ne voit pas les comm CPU <-> GPU }
+\begin{algorithm}[h]
+\label{alg2-cuda-openmp}
+\LinesNumbered
+\SetAlgoNoLine
+\caption{CUDA-OpenMP Algorithm to find roots with the Ehrlich-Aberth method}
+
+\KwIn{$Z^{0}$ (Initial root's vector), $\varepsilon$ (Error tolerance
+  threshold), P (Polynomial to solve), Pu (Derivative of P), $n$ (Polynomial degree), $\Delta z$ (vector of errors for stop condition), $num\_gpus$ (number of OpenMP threads/number of GPUs), $Size$ (number of roots)}
+
+\KwOut{$Z$ (Root's vector), $ZPrec$ (Previous root's vector)}
+
+\BlankLine
+
+Initialization of P\;
+Initialization of Pu\;
+Initialization of the solution vector $Z^{0}$\;
+omp\_set\_num\_threads(num\_gpus)\;
+\#pragma omp parallel shared(Z,$\Delta z$,P)\;
+\Indp
+{
+gpu\_id=omp\_get\_thread\_num()\;
+cudaSetDevice(gpu\_id)\;
+Allocate memory on GPU\;
+Compute local size and offset according to gpu\_id\;
+\While {$error > \epsilon$}{
+  copy Z from CPU to GPU\;
+$ZPrec_{loc}=kernel\_save(Z_{loc})$\;
+$Z_{loc}=kernel\_update(Z,P,Pu)$\;
+$\Delta z[gpu\_id] = kernel\_testConv(Z_{loc},ZPrec_{loc})$\;
+$error=\max(\Delta z)$\;
+  copy $Z_{loc}$ from GPU to $Z$ in CPU\;
+}
+\Indm}
+\RC{Do we show the pragma? I am hesitating...}
+\end{algorithm}
+

 \subsection{Multi-GPU: an MPI-CUDA approach}
@@ -918,40 +906,33 @@ Our parallel implementation of EA to find root of polynomials using a CUDA-MPI a
 Since a GPU works only on data already allocated in its memory, all local input data, $Z_{k}$, $ZPrec$ and $\Delta z_{k}$, must be transferred from CPU memories to the corresponding GPU memories. Afterwards, the same EA algorithm (Algorithm~\ref{alg1-cuda}) is run by all processes, but each process $k$, $k=1,\ldots,p$, works only on its own subset of the roots of the polynomial $p(x)=\sum_{i=0}^{n} a_{i}x^{i}$. Each MPI process executes the loop \verb=(While(...)...do)= containing the CUDA kernels, but computes only its own portion of the roots according to the rule ``owner computes''. The local range of roots handled by each process is fixed when $Z$ is distributed (Algorithm~\ref{alg2-cuda-mpi}) and passed as an input to $kernel\_update$. After each iteration, the MPI processes synchronize with a reduction on $\Delta z_{k}$ (\verb=MPI_Allreduce= function) in order to compute the maximum error related to the stop condition. Finally, the processes copy the newly computed roots from GPU memories to CPU memories, then exchange their results with the other processes using the \verb=MPI_Alltoall= collective. If the stop condition is not verified ($error > \epsilon$), the processes stay within the loop \verb=while(...)...do= until all the roots sufficiently converge.
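+To make the process-to-GPU binding and the collective error reduction
+concrete, the following is a minimal host-side sketch of this scheme,
+complementary to Algorithm~\ref{alg2-cuda-mpi} below. Again it is
+only an illustration: the kernel calls are elided, and every name
+apart from the MPI and CUDA API calls is hypothetical.
+\begin{verbatim}
+/* Sketch only: one MPI process per GPU.
+   Run e.g. with: mpirun -np <number of GPUs> ./sketch */
+#include <mpi.h>
+#include <cuda_runtime.h>
+#include <stdio.h>
+
+int main(int argc, char **argv) {
+  MPI_Init(&argc, &argv);
+  int rank, nprocs, num_gpus = 0;
+  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
+  cudaGetDeviceCount(&num_gpus);
+  if (num_gpus > 0)
+    cudaSetDevice(rank % num_gpus); /* attach process to one GPU */
+
+  const int size = 1000000;     /* total number of roots (example) */
+  int loc = size / nprocs;      /* local portion: "owner computes" */
+
+  double dz_loc = 0.0, error = 0.0;
+  /* here: copy the local roots to this GPU and run kernel_save,
+     kernel_update and kernel_testConv on them, setting dz_loc */
+
+  /* stop condition: the global error is the maximum local error */
+  MPI_Allreduce(&dz_loc, &error, 1, MPI_DOUBLE, MPI_MAX,
+                MPI_COMM_WORLD);
+  /* then: copy the local roots back to the CPU and exchange them
+     with the other processes (all-to-all), as in Algorithm 3 */
+
+  if (rank == 0)
+    printf("error %g, %d roots per process\n", error, loc);
+  MPI_Finalize();
+  return 0;
+}
+\end{verbatim}
+In this scheme there is no shared memory: every interaction between
+GPUs goes through an explicit MPI collective, which is what makes the
+approach suitable for clusters.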
+\begin{algorithm}[htpb]
+\label{alg2-cuda-mpi}
+\LinesNumbered
+\SetAlgoNoLine
+\caption{CUDA-MPI Algorithm to find roots with the Ehrlich-Aberth method}
+
+\KwIn{$Z^{0}$ (Initial root's vector), $\varepsilon$ (Error tolerance
+  threshold), P (Polynomial to solve), Pu (Derivative of P), $n$ (Polynomial degree), $\Delta z$ (error of stop condition), $num\_gpus$ (number of MPI processes/number of GPUs), $Size$ (number of roots)}
+
+\KwOut{$Z$ (Solution root's vector), $ZPrec$ (Previous solution root's vector)}
+
+\BlankLine
+Initialization of P\;
+Initialization of Pu\;
+Initialization of the solution vector $Z^{0}$\;
+Distribution of Z\;
+Allocate memory on GPU\;
+\While {$error > \epsilon$}{
+copy Z from CPU to GPU\;
+$ZPrec_{loc}=kernel\_save(Z_{loc})$\;
+$Z_{loc}=kernel\_update(Z,P,Pu)$\;
+$\Delta z=kernel\_testConv(Z_{loc},ZPrec_{loc})$\;
+$error=MPI\_Allreduce(\Delta z)$\;
+Copy $Z_{loc}$ from GPU to CPU\;
+$Z=MPI\_AlltoAll(Z_{loc})$\;
+}
+\end{algorithm}
+
 
 \section{Experiments}
 \label{sec5}
@@ -1116,11 +1097,12 @@ sparse and full polynomials ranging from 1,000,000 to 5,000,000.
 \label{fig:09}
 \end{figure}
 In Figure~\ref{fig:09} we can see that both approaches are scalable
-and can solve very high degree polynomials. With full polynomial both
-approaches give very similar results. However, for sparse polynomials
-there are a noticeable difference in favour of MPI when the degree is
-above 4 millions. Between 1 and 3 millions, OpenMP is more effecient.
-Under 1 million, OpenMPI and MPI are almost equivalent.
+and can solve very high degree polynomials. In addition, with full polynomials as well as sparse ones, both
+approaches give very similar results.
+
+%SIDER: I have just removed this: For sparse polynomials there is a noticeable difference in favour of MPI when the degree is
+%above 4 million. Between 1 and 3 million, OpenMP is more efficient.
+%Under 1 million, OpenMP and MPI are almost equivalent.
 
 %SIDER: an explanation of the differences is needed here too.
 
@@ -1224,13 +1206,25 @@ Under 1 million, OpenMPI and MPI are almost equivalent.
 \section{Conclusion}
 \label{sec6}
-In this paper, we have presented a parallel implementation of Ehrlich-Aberth algorithm for solving full and sparse polynomials, on single GPU with CUDA and on multiple GPUs using two parallel paradigms : shared memory with OpenMP and distributed memory with MPI. These architectures were addressed by a CUDA-OpenMP approach and CUDA-MPI approach, respectively.
-The experiments show that, using parallel programming model like (OpenMP, MPI), we can efficiently manage multiple graphics cards to work together to solve the same problem and accelerate the parallel execution with 4 GPUs and solve a polynomial of degree 1,000,000, four times faster than on single GPU, that is a quasi-linear speedup.
+In this paper, we have presented a parallel implementation of the
+Ehrlich-Aberth algorithm to solve full and sparse polynomials, on a
+single GPU with CUDA and on multiple GPUs using two parallel
+paradigms: shared memory with OpenMP and distributed memory with
+MPI. These architectures were addressed by a CUDA-OpenMP approach and
+a CUDA-MPI approach, respectively. Experiments show that, using
+parallel programming models such as OpenMP and MPI, we can
+efficiently manage multiple graphics cards working together on the
+same problem and accelerate the parallel execution: with 4 GPUs we
+can solve a polynomial of degree up to 5,000,000 four times faster
+than on a single GPU.
 
 %In future, we will evaluate our parallel implementation of Ehrlich-Aberth algorithm on other parallel programming model
 
-Our next objective is to extend the model presented here at clusters of nodes featuring multiple GPUs, with a three-level scheme: inter-node communication via MPI processes (distributed memory), management of multi-GPU node by OpenMP threads (shared memory).
+Our next objective is to extend the model presented here to clusters
+of GPU nodes, with a three-level scheme: inter-node communication via
+MPI processes (distributed memory), management of the GPUs of each
+node by OpenMP threads (shared memory), and computation on each GPU
+with CUDA.
 
 %present a communication approach between multiple GPUs. The comparison between MPI and OpenMP as GPUs controllers shows that these
 %solutions can effectively manage multiple graphics cards to work together
@@ -1248,8 +1242,10 @@ Our next objective is to extend the model presented here at clusters of nodes fe
 
 % use section* for acknowledgment
 \section*{Acknowledgment}
+Computations have been performed on the supercomputer facilities of
+the Mésocentre de calcul de Franche-Comté. We would also like to
+thank Nvidia for its hardware donation under the CUDA Research
+Center 2014 program.
 
-The authors would like to thank...