X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/loba-papers.git/blobdiff_plain/6f6ebdac1613681f5093eb2303424044b5f60f12..f13866736c2d1dda2bc227e4a05626ea5535e7d6:/supercomp11/supercomp11.tex

diff --git a/supercomp11/supercomp11.tex b/supercomp11/supercomp11.tex
index 3a1ec31..1faf1b0 100644
--- a/supercomp11/supercomp11.tex
+++ b/supercomp11/supercomp11.tex
@@ -29,12 +29,12 @@ }
 \institute{R. Couturier \and A. Giersch \at
-  LIFC, University of Franche-Comté, Belfort, France \\
+  FEMTO-ST, University of Franche-Comté, Belfort, France \\
 % Tel.: +123-45-678910\\
 % Fax: +123-45-678910\\
   \email{%
-    raphael.couturier@univ-fcomte.fr,
-    arnaud.giersch@univ-fcomte.fr}
+    raphael.couturier@femto-st.fr,
+    arnaud.giersch@femto-st.fr}
 }
 
 \maketitle
@@ -468,38 +468,144 @@ available at
 
 In order to assess the performance of our algorithms, we ran our
 simulator with various parameters, and extracted several metrics that
-we will describe in this section.  Overall, the experiments represent
-more than 240 hours of computing time.
+we will describe in this section.
 
 \paragraph{Load balancing strategies}
 
-We ran the experiments with the \emph{Best effort}, and with the \emph{Makhoul}
-strategies.  \emph{Best effort} was tested with parameter $k = 1$, $k = 2$, and
-$k = 4$.  Secondly, each strategy was run in its two variants: with, and without
-the management of \emph{virtual load}.  Finally, we tested each configuration
-with \emph{real}, and with \emph{integer} load.
-This gives us as many as 32 different strategies.
+Several load balancing strategies were compared.  We ran the experiments with
+the \emph{Best effort} and the \emph{Makhoul} strategies.  \emph{Best effort}
+was tested with parameter $k = 1$, $k = 2$, and $k = 4$.  Moreover, each
+strategy was run in two variants: with and without the management of
+\emph{virtual load}.  Finally, we tested each configuration with \emph{real}
+and with \emph{integer} loads.
+
+To summarize the different load balancing strategies, we have:
+\begin{description}
+\item[\textbf{strategies:}] \emph{Makhoul}, or \emph{Best effort} with $k\in
+  \{1,2,4\}$
+\item[\textbf{variants:}] with, or without virtual load
+\item[\textbf{domain:}] real load, or integer load
+\end{description}
+%
+This gives us as many as $4\times 2\times 2 = 16$ different strategies.
+
+\paragraph{End of the simulation}
+
+The simulations were run until the load was nearly balanced among the
+participating nodes.  More precisely, the simulation stops when each node holds
+an amount of load within 1\% of the average load, during an arbitrarily chosen
+number of computing iterations (2000 in our case).
+
+Note that this convergence detection was implemented in a centralized manner.
+This is easy to do within the simulator, but it is obviously not realistic.  In
+a real application we would have chosen a decentralized convergence detection
+algorithm, like the one described in \cite{10.1109/TPDS.2005.2}.
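+
+To make this stopping criterion concrete, here is a minimal sketch of such a
+centralized convergence test (the function and variable names are ours, not
+the simulator's actual code):
+\begin{verbatim}
+// Sketch of the centralized convergence test (names are ours).
+#include <cmath>
+#include <vector>
+
+// Returns true once every node has stayed within 1% of the average
+// load for `required' consecutive computing iterations.
+bool converged(const std::vector<double>& load, double average,
+               int& stable_iterations, int required = 2000)
+{
+    for (double l : load)
+        if (std::fabs(l - average) > 0.01 * average) {
+            stable_iterations = 0;  // balance broken: restart counting
+            return false;
+        }
+    return ++stable_iterations >= required;
+}
+\end{verbatim}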
+
+\paragraph{Platforms}
+
+In order to show the behavior of the different strategies in different
+settings, we simulated the executions on two sorts of platforms, which differ
+by their underlying network topology.  On the one hand, we have homogeneous
+platforms, modeled as a cluster.  On the other hand, we have heterogeneous
+platforms, modeled as the interconnection of a number of clusters.
+
+The heterogeneous platform descriptions were created by taking a subset of the
+Grid'5000 infrastructure\footnote{Grid'5000 is a French large scale experimental
+  Grid (see \url{https://www.grid5000.fr/}).}, as described in the platform file
+\texttt{g5k.xml} distributed with SimGrid.  Note that the heterogeneity of the
+platform comes only from the network topology.  The processor speeds and
+network bandwidths were normalized, since our algorithms are currently not
+aware of such heterogeneity.  We arbitrarily chose to fix the processor speed
+to 1~GFlop/s, and the network bandwidth to 125~MB/s, with a latency of
+50~$\mu$s, except for the links between geographically distant sites, where
+the network bandwidth was fixed to 2.25~GB/s, with a latency of 500~$\mu$s.
+
+Then we derived each sort of platform with four different numbers of computing
+nodes: 16, 64, 256, and 1024 nodes.
 
 \paragraph{Configurations}
+
+The distributed processes of the application were then logically organized
+along three possible topologies: a line, a torus, or a hypercube.  We ran
+tests where the total load was initially on a single node (at one end for the
+line topology), and other tests where the load was initially randomly
+distributed across all the participating nodes.  The total amount of load was
+fixed to a number of load units equal to 1000 times the number of nodes.  The
+average load is then 1000 load units per node.
+
+For each of the preceding configurations, we finally had to choose the
+computation and communication costs of a load unit.  We chose them so as to
+have three different computation/communication cost ratios, and hence to model
+three different kinds of applications:
+\begin{itemize}
+\item mainly communicating, with a computation/communication cost ratio of $1/10$;
+\item mainly computing, with a computation/communication cost ratio of $10/1$;
+\item balanced, with a computation/communication cost ratio of $1/1$.
+\end{itemize}
+
+To summarize the various configurations, we have:
 \begin{description}
-\item[\textbf{platforms}] homogeneous (cluster); heterogeneous (subset
-  of Grid5000)
-\item[\textbf{platform size}] platforms with 16, 64, 256, and 1024 nodes
-\item[\textbf{topologies}] line; torus; hypercube
-\item[\textbf{initial load distribution}] initially on a only node;
-  initially on all nodes
-\item[\textbf{comp/comm ratio}] $10/1$, $1/1$, $1/10$
+\item[\textbf{platforms:}] homogeneous (cluster), or heterogeneous (subset of
+  Grid'5000)
+\item[\textbf{platform sizes:}] platforms with 16, 64, 256, or 1024 nodes
+\item[\textbf{process topologies:}] line, torus, or hypercube
+\item[\textbf{initial load distribution:}] initially on a single node, or
+  initially randomly distributed over all nodes
+\item[\textbf{computation/communication ratio:}] $10/1$, $1/1$, or $1/10$
 \end{description}
+%
+This gives us as many as $2\times 4\times 3\times 2\times 3 = 144$ different
+configurations.
+%
+Combined with the various load balancing strategies, we had $16\times 144 =
+2304$ distinct settings to evaluate.  In fact, as will be shown later, we did
+not run all the strategies, nor all the configurations for the bigger
+platforms with 1024 nodes, since the simulations would have taken too long.
+
+Altogether, these experiments represent more than 240 hours of computing time.
 
 \paragraph{Metrics}
 
+In order to evaluate and compare the different load balancing strategies, we
+had to define several metrics.  Our goal, when choosing these metrics, was to
+have something tending to a constant value, i.e.\ a measure that does not
+change anymore once the convergence state is reached.  Moreover, we wanted
+normalized values, in order to be able to compare them across different
+settings.
+
+With these constraints in mind, we defined the following metrics:
+%
 \begin{description}
-\item[\textbf{average idle time}]
-\item[\textbf{average convergence date}]
-\item[\textbf{maximum convergence date}]
-\item[\textbf{data transfer amount}] relative to the total data amount
+\item[\textbf{average idle time:}] the total time during which the nodes do
+  not hold any share of the load, and thus have nothing to compute.  This
+  total time is divided by the number of participating nodes, so as to obtain
+  a number that can be compared between simulations of different sizes.
+
+  This metric is expected to give an idea of the ability of the strategy to
+  diffuse the load quickly.  A smaller value is better.
+
+\item[\textbf{average convergence date:}] the average of the dates at which
+  the individual nodes reached the convergence state.  The dates are measured
+  as a number of (simulated) seconds since the beginning of the simulation.
+
+\item[\textbf{maximum convergence date:}] the date at which the last node
+  reached the convergence state.
+
+  These two dates give an idea of the time needed by the strategy to reach the
+  equilibrium state.  A smaller value is better.
+
+\item[\textbf{data transfer amount:}] the sum of the amounts of all data
+  transfers during the simulation.  This sum is then normalized by dividing it
+  by the total amount of data present in the system.
+
+  This metric is expected to give an idea of the efficiency of the strategy in
+  terms of data movements, i.e.\ its ability to reach the equilibrium with
+  fewer transfers.  Again, a smaller value is better.
+
 \end{description}
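+
+As an illustration, here is a minimal sketch of how these quantities could be
+computed from per-node traces (the data structure and names are ours, not the
+simulator's actual code):
+\begin{verbatim}
+// Sketch: computing the normalized metrics (names are ours).
+#include <algorithm>
+#include <vector>
+
+struct NodeTrace {
+    double idle_time;         // total time with no load to compute
+    double data_sent;         // sum of all outgoing data transfers
+    double convergence_date;  // simulated time of local convergence
+};
+
+// Total idle time divided by the number of nodes, so that
+// simulations of different sizes can be compared.
+double average_idle_time(const std::vector<NodeTrace>& t)
+{
+    double total = 0.0;
+    for (const NodeTrace& n : t) total += n.idle_time;
+    return total / t.size();
+}
+
+// Date at which the last node reached the convergence state.
+double maximum_convergence_date(const std::vector<NodeTrace>& t)
+{
+    double date = 0.0;
+    for (const NodeTrace& n : t) date = std::max(date, n.convergence_date);
+    return date;
+}
+
+// Sum of all data transfers, normalized by the total amount of
+// data present in the system.
+double data_transfer_amount(const std::vector<NodeTrace>& t, double total_load)
+{
+    double sum = 0.0;
+    for (const NodeTrace& n : t) sum += n.data_sent;
+    return sum / total_load;
+}
+\end{verbatim}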
+
 \subsection{Validation of our approaches}
 \label{Results}
@@ -552,4 +658,4 @@ Taille : 10 100 très gros
 % LocalWords:  Raphaël Couturier Arnaud Giersch Abderrahmane Sider Franche ij
 % LocalWords:  Bertsekas Tsitsiklis SimGrid DASUD Comté Béjaïa asynchronism ji
-% LocalWords:  ik isend irecv Cortés et al chan ctrl fifo
+% LocalWords:  ik isend irecv Cortés et al chan ctrl fifo Makhoul