From: raphael couturier
Date: Sun, 27 Apr 2014 14:37:22 +0000 (+0200)
Subject: move the explanation of the Poisson problem into the multisplitting section
X-Git-Tag: hpcc2014_submission~57^2
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/hpcc2014.git/commitdiff_plain/6785b9ef58de0db67c33ca901c7813f3dfdc76e0?ds=inline

move the explanation of the Poisson problem into the multisplitting section
---

diff --git a/hpcc.tex b/hpcc.tex
index e043c78..49caa2f 100644
--- a/hpcc.tex
+++ b/hpcc.tex
@@ -402,6 +402,42 @@ where $\MI$ is the maximum number of outer iterations and $\epsilon$ is the
 tolerance threshold of the error computed between two successive local solutions
 $X_\ell^k$ and $X_\ell^{k+1}$.
+
+
+In this paper, we solve the 3D Poisson problem whose mathematical model is
+\begin{equation}
+\left\{
+\begin{array}{l}
+\nabla^2 u = f \text{~in~} \Omega \\
+u = 0 \text{~on~} \Gamma = \partial\Omega
+\end{array}
+\right.
+\label{eq:02}
+\end{equation}
+where $\nabla^2$ is the Laplace operator, $f$ and $u$ are real-valued functions, and $\Omega=[0,1]^3$. The spatial discretization with a finite difference scheme reduces problem~(\ref{eq:02}) to a sparse system of linear equations. The general iteration scheme of our multisplitting method in a 3D domain using a seven-point stencil can be written as
+\begin{equation}
+\begin{array}{ll}
+u^{k+1}(x,y,z)= & u^k(x,y,z) - \frac{1}{6}\times\\
+ & (u^k(x-1,y,z) + u^k(x+1,y,z) + \\
+ & u^k(x,y-1,z) + u^k(x,y+1,z) + \\
+ & u^k(x,y,z-1) + u^k(x,y,z+1)),
+\end{array}
+\label{eq:03}
+\end{equation}
+where the iteration matrix $A$ of the discretized linear system, of size $N_x\times N_y\times N_z$, is sparse, symmetric, and positive definite.
+
+Solving the 3D Poisson problem in parallel with our multisplitting method requires partitioning the data between clusters and between processors within a cluster. We have chosen a 3D partitioning instead of a row-by-row partitioning in order to reduce the data exchanges at sub-domain boundaries. Figure~\ref{fig:4.2} shows an example of the data partitioning of the 3D Poisson problem between two clusters of processors, where each sub-problem is assigned to a processor. In this context, a processor has at most six neighbors, within its own cluster or in distant clusters, with which it shares data at sub-domain boundaries.
+
+\begin{figure}[!t]
+\centering
+  \includegraphics[width=80mm,keepaspectratio]{partition}
+\caption{Example of the 3D data partitioning between two clusters of processors.}
+\label{fig:4.2}
+\end{figure}
+
+
+
+
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 We did not encounter major blocking problems when adapting the multisplitting algorithm previously described to a simulation environment like SimGrid, apart from some code debugging. Indeed, beyond the review of the program sequence for asynchronous exchanges between processors within a cluster or between clusters, the algorithm was executed successfully with SMPI and provided outputs identical to those obtained with direct execution under MPI. In synchronous
@@ -420,7 +456,7 @@ environment. We have successfully executed the code in synchronous mode using pa
 
 
-\section{Experimental results}
+\section{Simulation results}
 
 When the \textit{real} application runs in the simulation environment and produces the expected results, varying the input parameters and the program arguments allows us to compare outputs from the code execution.
 We have noticed from this
@@ -452,40 +488,6 @@ synchronous mode allowing to get a relative gain greater than 1.
 This action simulates the case of distant clusters linked with long distance
 network like Internet.
 
-\AG{Cette partie sur le poisson 3D
-  % on sait donc que ce n'est pas une plie ou une sole (/me fatigué)
-  n'est pas à sa place.  Elle devrait être placée plus tôt.}
-In this paper, we solve the 3D Poisson problem whose the mathematical model is
-\begin{equation}
-\left\{
-\begin{array}{l}
-\nabla^2 u = f \text{~in~} \Omega \\
-u =0 \text{~on~} \Gamma =\partial\Omega
-\end{array}
-\right.
-\label{eq:02}
-\end{equation}
-where $\nabla^2$ is the Laplace operator, $f$ and $u$ are real-valued functions, and $\Omega=[0,1]^3$. The spatial discretization with a finite difference scheme reduces problem~(\ref{eq:02}) to a system of sparse linear equations. The general iteration scheme of our multisplitting method in a 3D domain using a seven point stencil could be written as
-\begin{equation}
-\begin{array}{ll}
-u^{k+1}(x,y,z)= & u^k(x,y,z) - \frac{1}{6}\times\\
- & (u^k(x-1,y,z) + u^k(x+1,y,z) + \\
- & u^k(x,y-1,z) + u^k(x,y+1,z) + \\
- & u^k(x,y,z-1) + u^k(x,y,z+1)),
-\end{array}
-\label{eq:03}
-\end{equation}
-where the iteration matrix $A$ of size $N_x\times N_y\times N_z$ of the discretized linear system is sparse, symmetric and positive definite.
-
-The parallel solving of the 3D Poisson problem with our multisplitting method requires a data partitioning of the problem between clusters and between processors within a cluster. We have chosen the 3D partitioning instead of the row-by-row partitioning in order to reduce the data exchanges at sub-domain boundaries. Figure~\ref{fig:4.2} shows an example of the data partitioning of the 3D Poisson problem between two clusters of processors, where each sub-problem is assigned to a processor. In this context, a processor has at most six neighbors within a cluster or in distant clusters with which it shares data at sub-domain boundaries.
-
-\begin{figure}[!t]
-\centering
-  \includegraphics[width=80mm,keepaspectratio]{partition}
-\caption{Example of the 3D data partitioning between two clusters of processors.}
-\label{fig:4.2}
-\end{figure}
-
 As a first step, the algorithm was run on a network consisting of two clusters
 containing 50 hosts each, totaling 100 hosts. Various combinations of the above
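
For readers who want to experiment with the iteration scheme added in the first hunk, the following is a minimal, self-contained sketch (not taken from the hpcc2014 code) of a Jacobi-style sweep over the seven-point stencil of the discretized 3D Poisson problem on [0,1]^3 with zero Dirichlet boundary values. The grid size, right-hand side, and the names IDX and jacobi_sweep are illustrative assumptions; the sweep uses the classical update u^{k+1} = (sum of the six neighbours - h^2 f)/6, and the outer loop stops on a tolerance or a maximum iteration count, in the spirit of the $\epsilon$ tolerance and $\MI$ iteration limit quoted in the first hunk.

/* Sketch of a serial Jacobi sweep for the seven-point 3D Poisson stencil.
 * Illustrative only: array layout, sizes and names are assumptions, not
 * the code of the paper.  Compile with: cc -O2 poisson_sketch.c -lm      */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define IDX(i, j, k, n) ((((size_t)(i) * (n)) + (j)) * (n) + (k))

/* One sweep: reads u, writes u_new on interior points, returns the largest
 * pointwise change, used below as the error between successive iterates.  */
static double jacobi_sweep(const double *u, double *u_new, const double *f,
                           int n, double h)
{
    double max_diff = 0.0;
    for (int i = 1; i < n - 1; i++)
        for (int j = 1; j < n - 1; j++)
            for (int k = 1; k < n - 1; k++) {
                double s = u[IDX(i - 1, j, k, n)] + u[IDX(i + 1, j, k, n)]
                         + u[IDX(i, j - 1, k, n)] + u[IDX(i, j + 1, k, n)]
                         + u[IDX(i, j, k - 1, n)] + u[IDX(i, j, k + 1, n)];
                double v = (s - h * h * f[IDX(i, j, k, n)]) / 6.0;
                double d = fabs(v - u[IDX(i, j, k, n)]);
                if (d > max_diff) max_diff = d;
                u_new[IDX(i, j, k, n)] = v;
            }
    return max_diff;
}

int main(void)
{
    const int    n        = 32;            /* illustrative grid size        */
    const double h        = 1.0 / (n - 1); /* mesh spacing on [0,1]^3       */
    const double eps      = 1e-6;          /* tolerance threshold           */
    const int    max_iter = 10000;         /* maximum number of iterations  */

    size_t size = (size_t)n * n * n;
    double *u  = calloc(size, sizeof *u);   /* zeroed: Dirichlet u = 0 kept */
    double *un = calloc(size, sizeof *un);
    double *f  = calloc(size, sizeof *f);
    if (!u || !un || !f) return 1;
    for (size_t p = 0; p < size; p++) f[p] = 1.0;  /* sample right-hand side */

    int it = 0;
    double diff = eps;
    while (it < max_iter && diff >= eps) {
        diff = jacobi_sweep(u, un, f, n, h);
        double *tmp = u; u = un; un = tmp;         /* swap the two iterates  */
        it++;
    }
    printf("stopped after %d iterations, last change %.3e\n", it, diff);

    free(u); free(un); free(f);
    return 0;
}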
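
The 3D partitioning with at most six neighbours per processor, illustrated by Figure~\ref{fig:4.2} in the added hunk, can be expressed with MPI's standard Cartesian topology routines. The sketch below is a hedged illustration of that bookkeeping only (process grid and neighbour ranks); it does not reproduce the two-level cluster/processor decomposition of the paper, and all variable names are assumptions.

/* Sketch: a 3D Cartesian process grid in which every process owns one
 * sub-block of the domain and has at most six neighbours.  Uses standard
 * MPI topology calls; not the decomposition code of the paper.
 * Compile/run with: mpicc cart_sketch.c && mpirun -np 8 ./a.out          */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int dims[3] = {0, 0, 0};           /* let MPI pick px x py x pz        */
    MPI_Dims_create(nprocs, 3, dims);

    int periods[3] = {0, 0, 0};        /* non-periodic: physical boundary  */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);

    int rank, coords[3];
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 3, coords);

    /* The six potential neighbours; MPI_PROC_NULL marks a missing one
     * when the sub-block touches the boundary of the global domain.       */
    int xm, xp, ym, yp, zm, zp;
    MPI_Cart_shift(cart, 0, 1, &xm, &xp);
    MPI_Cart_shift(cart, 1, 1, &ym, &yp);
    MPI_Cart_shift(cart, 2, 1, &zm, &zp);

    printf("rank %d at (%d,%d,%d) of %dx%dx%d: x %d/%d  y %d/%d  z %d/%d\n",
           rank, coords[0], coords[1], coords[2], dims[0], dims[1], dims[2],
           xm, xp, ym, yp, zm, zp);

    /* Exchanges of boundary planes with these ranks (MPI_Sendrecv or
     * MPI_Isend/MPI_Irecv) would be inserted here in an actual solver.     */

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}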