X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/loba-papers.git/blobdiff_plain/857b46f3102ac8bd783b10f189c18ac937ba7fe2..3e0139b6d08c5073cf7665a95af6cb6ae750c923:/supercomp11/supercomp11.tex?ds=inline

diff --git a/supercomp11/supercomp11.tex b/supercomp11/supercomp11.tex
index edf5207..2a6c04b 100644
--- a/supercomp11/supercomp11.tex
+++ b/supercomp11/supercomp11.tex
@@ -2,9 +2,12 @@
 \usepackage[utf8]{inputenc}
 \usepackage[T1]{fontenc}
 \usepackage{mathptmx}
+\usepackage{amsmath}
 \usepackage{courier}
 \usepackage{graphicx}
 
+\newcommand{\abs}[1]{\lvert#1\rvert} % \abs{x} -> |x|
+
 \begin{document}
 
 \title{Best effort strategy and virtual load
@@ -94,17 +97,17 @@ message at each neighbor at each iteration. Latter messages contains data that
 migrates from one node to another one. Depending on the application, it may or
 may not make sense that nodes try to balance a part of their load at each
 computing iteration. But the time to transfer a load message from a node to another one is
-often much nore longer that to time to transfer a load information message. So,
+often much longer than the time to transfer a load information message. So,
 when a node receives the information that later it will receive a data message,
 it can take this information into account and it can consider that its new load
 is larger. Consequently, it can send a part of its real load to some of its
-neighbors if required. We call this trick the \emph{virtual load} mecanism.
+neighbors if required. We call this trick the \emph{virtual load} mechanism.
 
 So, in this work, we propose a new strategy for improving the distribution of
 the load and a simple but efficient trick that also improves the load
-balacing. Moreover, we have conducted many simulations with simgrid in order to
+balancing. Moreover, we have conducted many simulations with SimGrid in order to
 validate that our improvements are really efficient. Our simulations consider that
 in order to send a message, a latency delays the sending and, according to the
 network performance and the message size, the time of the reception of the
@@ -114,7 +117,7 @@ In the following of this paper, Section~\ref{BT algo} describes the Bertsekas
 and Tsitsiklis' asynchronous load balancing algorithm. Moreover, we present a
 possible problem in the convergence conditions. Section~\ref{Best-effort}
 presents the best effort strategy which provides an efficient way to reduce the
-execution times. In Section~\ref{Virtual load}, the virtual load mecanism is
+execution times. In Section~\ref{Virtual load}, the virtual load mechanism is
 proposed. Simulations allowed us to show that both our approaches are valid using
 a quite realistic model detailed in Section~\ref{Simulations}. Finally, we give a
 conclusion and some perspectives to this work.
@@ -126,11 +129,11 @@ conclusion and some perspectives to this work.
 \label{BT algo}
 
 In order to prove the convergence of asynchronous iterative load balancing
-Bertesekas and Tsitsiklis proposed a model
+Bertsekas and Tsitsiklis proposed a model
 in~\cite{bertsekas+tsitsiklis.1997.parallel}. Here we recall some notations.
 Consider that $N=\{1,\ldots,n\}$ processors are connected through a network.
 Communication links are represented by a connected undirected graph $G=(N,V)$
-where $V$ is the set of links connecting differents processors. In this work, we
+where $V$ is the set of links connecting different processors. In this work, we
 consider that processors are homogeneous for the sake of simplicity. It is quite
 easy to tackle the heterogeneous case~\cite{ElsMonPre02}.
 Load of processor $i$ at time $t$ is represented by $x_i(t)\geq 0$. Let $V(i)$ be the set of
@@ -141,7 +144,7 @@ consider that the load is described by a continuous variable.
 
 When a processor sends a part of its load to one or some of its neighbors, the
 transfer takes time to be completed. Let $s_{ij}(t)$ be the amount of load that
-processor $i$ has transfered to processor $j$ at time $t$ and let $r_{ij}(t)$ be the
+processor $i$ has transferred to processor $j$ at time $t$ and let $r_{ij}(t)$ be the
 amount of load received by processor $j$ from processor $i$ at time $t$. Then the
 amount of load of processor $i$ at time $t+1$ is given by:
 \begin{equation}
@@ -156,13 +159,13 @@ called the \emph{ping-pong} condition which specifies that:
   x_i(t)-\sum _{k\in V(i)} s_{ik}(t) \geq x_j^i(t)+s_{ij}(t)
 \end{equation}
 for any processor $i$ and any $j \in V(i)$ such that $x_i(t)>x_j^i(t)$. This
-condition aims at avoiding a processor to send a part of its load and beeing
+condition aims at preventing a processor from sending a part of its load and being
 less loaded after that. Nevertheless, we think that this condition may lead to
 deadlocks in some cases.
 
 For example, if we consider only three processors such that processor $1$ is
 linked to processor $2$ which is also linked to processor $3$ (i.e. a simple
-chain wich 3 processors). Now consider we have the following values at time $t$:
+chain with 3 processors). Now consider that we have the following values at time $t$:
 \begin{eqnarray*}
   x_1(t)=10 \\
   x_2(t)=100 \\
@@ -180,16 +183,60 @@ condition or with a weaker condition.
 \section{Best effort strategy}
 \label{Best-effort}
 
-\textbf{À traduire} Ordonne les voisins du moins chargé au plus chargé.
-Trouve ensuite, en les prenant dans ce ordre, le nombre maximal de
-voisins tels que tous ont une charge inférieure à la moyenne des
-charges des voisins sélectionnés, et de soi-même.
-
-Les transferts de charge sont ensuite fait en visant cette moyenne pour
-tous les voisins sélectionnés. On envoie une quantité de charge égale
-à (moyenne - charge\_du\_voisin).
-
-~\\\textbf{Question} faut-il décrire les stratégies makhoul et simple ?
+We will describe here a new load-balancing strategy that we call
+\emph{best effort}. The general idea behind this strategy is, for a
+processor, to send some load to most of its neighbors, doing its
+best to reach the equilibrium between those neighbors and itself.
+
+More precisely, when a processor $i$ is in its load-balancing phase,
+it proceeds as follows.
+\begin{enumerate}
+\item First, the neighbors are sorted in non-decreasing order of their
+  known loads $x^i_j(t)$.
+
+\item Then, this sorted list is traversed in order to find its largest
+  prefix such that the load of each selected neighbor is less than:
+  \begin{itemize}
+  \item the processor's own load, and
+  \item the mean of the loads of the selected neighbors and of the
+    processor's load.
+  \end{itemize}
+  Let us call $S_i(t)$ the set of the selected neighbors, and
+  $\bar{x}(t)$ the mean of the loads of the selected neighbors and of
+  the processor's load:
+  \begin{equation*}
+    \bar{x}(t) = \frac{1}{\abs{S_i(t)} + 1}
+      \left( x_i(t) + \sum_{j\in S_i(t)} x^i_j(t) \right)
+  \end{equation*}
+  The following properties hold:
+  \begin{equation*}
+    \begin{cases}
+      S_i(t) \subset V(i) \\
+      x^i_j(t) < x_i(t) & \forall j \in S_i(t) \\
+      x^i_j(t) < \bar{x}(t) & \forall j \in S_i(t) \\
+      x^i_j(t) \leq x^i_k(t) & \forall j \in S_i(t), \forall k \in V(i) \setminus S_i(t) \\
+      \bar{x}(t) \leq x_i(t)
+    \end{cases}
+  \end{equation*}
+
+\item Once this selection is completed, processor $i$ sends to each of
+  the selected neighbors $j\in S_i(t)$ an amount of load $s_{ij}(t) =
+  \bar{x}(t) - x^i_j(t)$.
+
+  From the above equations, and notably from the definition of
+  $\bar{x}(t)$, it can easily be verified that:
+  \begin{equation*}
+    \begin{cases}
+      x_i(t) - \sum_{j\in S_i(t)} s_{ij}(t) = \bar{x}(t) \\
+      x^i_j(t) + s_{ij}(t) = \bar{x}(t) & \forall j \in S_i(t)
+    \end{cases}
+  \end{equation*}
+\end{enumerate}
+
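To make the three steps added above concrete, here is a small Python sketch of one load-balancing phase of the best-effort strategy, using the notations of this section. It is only an illustration: the function name, argument names and data structures are chosen here and are not taken from the paper or from its SimGrid implementation.

def best_effort_selection(x_i, neighbor_loads):
    """One load-balancing phase of the best-effort strategy (illustrative).

    x_i            -- load x_i(t) of processor i
    neighbor_loads -- dict {j: x^i_j(t)} of the neighbor loads known by i
    Returns the selected neighbors S_i(t) and the amounts s_ij(t) to send.
    """
    # Step 1: sort the neighbors by non-decreasing known load.
    ordered = sorted(neighbor_loads, key=neighbor_loads.get)

    # Step 2: find the largest prefix whose members are all less loaded
    # than processor i and than the mean of their loads and of x_i.
    selected = []
    for m in range(1, len(ordered) + 1):
        prefix = ordered[:m]
        loads = [neighbor_loads[j] for j in prefix]
        mean = (x_i + sum(loads)) / (m + 1)
        if max(loads) < x_i and max(loads) < mean:
            selected = prefix

    if not selected:
        return [], {}

    # Step 3: send to each selected neighbor what brings it up to the mean.
    mean = (x_i + sum(neighbor_loads[j] for j in selected)) / (len(selected) + 1)
    transfers = {j: mean - neighbor_loads[j] for j in selected}
    return selected, transfers

For instance, best_effort_selection(100, {"a": 10, "b": 40}) selects both neighbors and returns the transfers {"a": 40.0, "b": 10.0}, which leaves processor i and both neighbors with a load of 50.
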
+\section{Other strategies}
+\label{Other}
+
+\textbf{Question} should we describe the makhoul and simple strategies?
 
 \paragraph{simple} An attempt to simply respect Bertsekas' conditions. Among
 the neighbors that are less loaded than the processor itself, we select:
@@ -332,5 +379,6 @@ Size: 10 100 very large
 
 %%% ispell-local-dictionary: "american"
 %%% End:
-% LocalWords:  Raphaël Couturier Arnaud Giersch Abderrahmane Sider
-% LocalWords:  Bertsekas Tsitsiklis SimGrid DASUD
+% LocalWords:  Raphaël Couturier Arnaud Giersch Abderrahmane Sider Franche ij
+% LocalWords:  Bertsekas Tsitsiklis SimGrid DASUD Comté Béjaïa asynchronism ji
+% LocalWords:  ik
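
Coming back to the ping-pong condition recalled in the hunks above, a similarly small Python helper can check whether a planned set of transfers respects it. Again this is only an illustrative sketch with names chosen here, not code from the paper; the example below reuses the two values visible in the three-processor chain ($x_2(t)=100$ and $x_1(t)=10$) and assumes that processor 2's estimate $x^2_1(t)$ equals 10.

def respects_ping_pong(x_i, known_loads, transfers):
    """Check the ping-pong condition for processor i.

    x_i         -- load x_i(t) of processor i
    known_loads -- dict {j: x^i_j(t)}, neighbor loads as known by i
    transfers   -- dict {j: s_ij(t)}, amounts that i plans to send

    The condition requires, for every neighbor j believed to be less
    loaded than i, that x_i(t) - sum_k s_ik(t) >= x^i_j(t) + s_ij(t).
    """
    remaining = x_i - sum(transfers.values())
    return all(remaining >= x_j + transfers.get(j, 0)
               for j, x_j in known_loads.items()
               if x_j < x_i)

For example, respects_ping_pong(100, {1: 10}, {1: 45}) is True (both processors would end up with a load of 55), whereas sending 50 would leave processor 2 with 50 against 60 for processor 1, and the condition would be violated.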