\begin{tabular}[t]{@{}l@{:~}l@{}}}{%
\end{tabular}}
-\newcommand{\FIXME}[1]{%
- \textbf{$\triangleright$\marginpar{\textbf{[FIXME]}}~#1}}
+\newcommand{\FIXMEmargin}[1]{%
+ \marginpar{\textbf{[FIXME]} {\footnotesize #1}}}
+\newcommand{\FIXME}[2][]{%
+ \ifx #2\relax\relax \FIXMEmargin{#1}%
+ \else \textbf{$\triangleright$\FIXMEmargin{#1}~#2}\fi}
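+% Usage: \FIXME{some text} typesets ``some text'' inline, flagged with a
+% [FIXME] note in the margin; \FIXME[note]{} only puts ``note'' in the margin.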
\newcommand{\VAR}[1]{\textit{#1}}
\section{Best effort strategy}
\label{Best-effort}
-In this section we describe a new load-balancing strategy that we call
-\emph{best effort}. The general idea behind this strategy is that each
-processor, that detects it has more load than some of its neighbors,
-sends some load to the most of its less loaded neighbors, doing its
-best to reach the equilibrium between those neighbors and himself.
+In this section we describe a new load-balancing strategy that we call
+\emph{best effort}. First, we explain the general idea behind this strategy,
+and then we describe some variants of this basic strategy.
+
+\subsection{Basic strategy}
+
+The general idea behind the \emph{best effort} strategy is that each
+processor that detects it has more load than some of its neighbors sends
+some load to most of its less loaded neighbors, doing its best to reach
+equilibrium between those neighbors and itself.
More precisely, when a processor $i$ is in its load-balancing phase,
it proceeds as follows.
\end{equation*}
\end{enumerate}
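+
+As a simple illustration (with hypothetical loads), consider a node with load
+$10$ whose two less loaded neighbors hold loads $4$ and $1$. The balanced
+value is then
+\begin{equation*}
+  \bar{x} = \frac{10 + 4 + 1}{3} = 5,
+\end{equation*}
+so the node sends $5-4=1$ to the first neighbor and $5-1=4$ to the second,
+keeping $10-1-4=5$ for itself: all three nodes then end up with the same load.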
-\FIXME{describe parameter $k$}
-
-\section{Other strategies}
-\label{Other}
+\subsection{Leveling the amount to send}
-\FIXME{Réécrire en angliche.}
+With the aforementioned basic strategy, each node does its best to reach the
+equilibrium with its neighbors. Since each node may take the same kind of
+decision at the same moment, there is a risk that a node receives load from
+several of its neighbors, and then temporarily drifts away from the
+equilibrium state. This is particularly true with strongly connected
+applications.
-% \FIXME{faut-il décrire les stratégies makhoul et simple ?}
+In order to reduce this effect, we add the ability to level the amount of
+load to send. The idea here is to make smaller steps toward the equilibrium,
+such that a potentially wrong decision has a lower impact.
-% \paragraph{simple} Tentative de respecter simplement les conditions de Bertsekas.
-% Parmi les voisins moins chargés que soi, on sélectionne :
-% \begin{itemize}
-% \item un des moins chargés (vmin) ;
-% \item un des plus chargés (vmax),
-% \end{itemize}
-% puis on équilibre avec vmin en s'assurant que notre charge reste
-% toujours supérieure à celle de vmin et à celle de vmax.
+Concretely, once $s_{ij}$ has been evaluated as before, it is simply divided
+by some configurable factor. This is what we call the ``parameter $k$'' in
+Section~\ref{Results}. The amount of load to send is then
+$s_{ij}(t) = (\bar{x} - x^i_j(t))/k$.
+\FIXME[check that it's still named $k$ in Sec.~\ref{Results}]{}
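+
+Continuing the same hypothetical example, with $k=2$ the node would only send
+$(5-4)/2 = 0.5$ and $(5-1)/2 = 2$ to its two neighbors, thus moving halfway
+toward the equilibrium at each load-balancing phase.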
-% On envoie donc (avec "self" pour soi-même) :
-% \[
-% \min\left(\frac{load(self) - load(vmin)}{2}, load(self) - load(vmax)\right)
-% \]
+\section{Other strategies}
+\label{Other}
-\paragraph{makhoul} Ordonne les voisins du moins chargé au plus chargé
-puis calcule les différences de charge entre soi-même et chacun des
-voisins.
+Another load-balancing strategy, working under the same conditions, was
+previously developed by Bahi, Giersch, and Makhoul in
+\cite{bahi+giersch+makhoul.2008.scalable}. In order to assess the performance
+of the new \emph{best effort} strategy, we naturally chose to compare it with
+this previous work. More precisely, we will use algorithm~2 from
+\cite{bahi+giersch+makhoul.2008.scalable} and, in the following, we will refer
+to it as Makhoul's algorithm.
-Ensuite, pour chaque voisin, dans l'ordre, et tant qu'on reste plus
-chargé que le voisin en question, on lui envoie 1/(N+1) de la
-différence calculée au départ, avec N le nombre de voisins.
+Here is an outline of Makhoul's algorithm. When a given node needs to take a
+load-balancing decision, it starts by sorting its neighbors by increasing
+order of their load. Then, it computes the difference between its own load
+and the load of each of its neighbors. Finally, taking the neighbors in the
+order defined before, the amount of load to send $s_{ij}$ is computed as
+$1/(N+1)$ of the load difference, with $N$ being the number of neighbors.
+This process continues as long as the node is more loaded than the considered
+neighbor.
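+
+As an illustration (again with hypothetical loads), consider a node with load
+$12$ and $N=2$ neighbors holding loads $3$ and $6$. The initial differences
+are $12-3=9$ and $12-6=6$, and each send amounts to $1/(N+1) = 1/3$ of the
+corresponding difference. The node thus sends $9/3=3$ to the first neighbor,
+dropping to a load of $9$; since it is still more loaded than the second
+neighbor, it then sends $6/3=2$ to it, and ends up with a load of $7$.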
-C'est l'algorithme~2 dans~\cite{bahi+giersch+makhoul.2008.scalable}.
\section{Virtual load}
\label{Virtual load}
Note that this convergence detection was implemented in a centralized manner.
This is easy to do within the simulator, but it is obviously not realistic. In a
real application we would have chosen a decentralized convergence detection
-algorithm, like the one described in \cite{10.1109/TPDS.2005.2}.
+algorithm, like the one described by Bahi, Contassot-Vivier, Couturier, and
+Vernier in \cite{10.1109/TPDS.2005.2}.
\paragraph{Platforms}
sorts of platforms differ by their underlying network topology. On the one hand,
we have homogeneous platforms, modeled as a cluster. On the other hand, we have
heterogeneous platforms, modeled as the interconnection of a number of clusters.
+
+The clusters were modeled by a fixed number of computing nodes interconnected
+through a backbone link. Each computing node has a computing power of
+1~GFlop/s, and is connected to the backbone by a network link with a bandwidth
+of 125~MB/s and a latency of 50~$\mu$s. The backbone has a network bandwidth
+of 2.25~GB/s and a latency of 500~$\mu$s.
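+
+As a sketch, such a cluster could be described by a SimGrid platform file
+along the following lines (this is only an assumed example: the exact
+attribute names and units depend on the SimGrid version, and the identifiers
+are arbitrary):
+\begin{verbatim}
+<?xml version="1.0"?>
+<platform version="3">
+  <!-- 64 nodes at 1 GFlop/s; links: 125 MB/s, 50 us;
+       backbone: 2.25 GB/s, 500 us -->
+  <cluster id="cluster" prefix="node-" suffix="" radical="0-63"
+           power="1E9" bw="125E6" lat="50E-6"
+           bb_bw="2.25E9" bb_lat="500E-6"/>
+</platform>
+\end{verbatim}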
+
The heterogeneous platform descriptions were created by taking a subset of the
Grid'5000 infrastructure\footnote{Grid'5000 is a French large scale experimental
Grid (see \url{https://www.grid5000.fr/}).}, as described in the platform file
\texttt{g5k.xml} distributed with SimGrid. Note that the heterogeneity of the
-platform only comes from the network topology. The processor speeds, and
-network bandwidths were normalized since our algorithms currently are not aware
-of such heterogeneity. We arbitrarily chose to fix the processor speed to
-1~GFlop/s, and the network bandwidth to 125~MB/s, with a latency of 50~$\mu$s,
-except for the links between geographically distant sites, where the network
-bandwidth was fixed to 2.25~GB/s, with a latency of 500~$\mu$s.
+platform here only comes from the network topology. Indeed, since our
+algorithms currently do not handle heterogeneous computing resources, the
+processor speeds were normalized, and we arbitrarily chose to fix them to
+1~GFlop/s.
Then we derived each sort of platform with four different numbers of computing
nodes: 16, 64, 256, and 1024 nodes.
% LocalWords: Raphaël Couturier Arnaud Giersch Abderrahmane Sider Franche ij
% LocalWords: Bertsekas Tsitsiklis SimGrid DASUD Comté Béjaïa asynchronism ji
-% LocalWords: ik isend irecv Cortés et al chan ctrl fifo Makhoul
+% LocalWords: ik isend irecv Cortés et al chan ctrl fifo Makhoul GFlop xml