Load balancing algorithms are extensively used in parallel and distributed
applications to reduce their execution times. They can be applied in
different scientific fields, from high performance computing to micro sensor
networks. They are iterative by nature.\FIXME{really?}
In the literature, many kinds of load
balancing algorithms have been studied. They can be classified according to
different criteria: centralized or decentralized, in a static or dynamic
environment, with homogeneous or heterogeneous load, using synchronous or
asynchronous iterations, with a fixed or varying amount of load. Moreover, when
real asynchronous applications are considered, using asynchronous load
balancing algorithms can reduce the execution times. Most of the time, it is
simpler to distinguish load information messages
from data migration messages. The former allow a node to inform its
neighbors of its current load. These messages are very small, so they can be sent
quite often. For example, if a computing iteration takes a significant time
(ranging from seconds to minutes), it is possible to send a new load information
message to each neighbor at each iteration. The latter contain data that
migrates from one node to another. Depending on the application, it may or may
not make sense for nodes to balance a part of their load at each computing
iteration. But the time to transfer a load message from one node to another is
often much longer than the time to transfer a load information message. So, when
a node is informed that it will later receive a data message, it can take this
information into account and consider that its load is virtually
larger. Consequently, it can send a part of its real load to some of its
neighbors if required. We call this trick the \emph{virtual load} mechanism.
So, in this work, we propose a new strategy to improve the distribution of the
load and a simple but efficient trick that also improves the load
balancing. Moreover, we have conducted many simulations with SimGrid in order to
validate that our improvements are really efficient. Our simulations consider
that in order to send a message, a latency delays the sending, and that,
depending on the network performance and the message size, the reception time of the
message also varies.
In the remainder of this paper, Section~\ref{sec.bt-algo} describes the
Bertsekas and Tsitsiklis proposed a model
in~\cite{bertsekas+tsitsiklis.1997.parallel}. Here we recall some notations.
Consider that $N=\{1,\ldots,n\}$ processors are connected through a network.
Communication links are represented by a connected undirected graph $G=(N,A)$
where $A$ is the set of links connecting different processors. In this work, we
consider that processors are homogeneous for the sake of simplicity. It is quite
easy to tackle the heterogeneous case~\cite{ElsMonPre02}. The load of processor $i$
at time $t$ is represented by $x_i(t)\geq 0$. Let $V(i)$ be the set of
neighbors of processor $i$.
Unfortunately, condition~\eqref{eq.ping-pong} may block useful load transfers in
some cases. For example, consider only three processors, where processor $1$
is linked to processor $2$, which is itself linked to processor $3$ (i.e. a
simple chain of 3 processors). Now consider that we have the following values at
time $t$:
\begin{align*}
  x_1(t) &= 10 \\
  x_2(t) &= 100 \\
  x_3(t) &= 99.99 \\
  x_3^2(t) &= 99.99
\end{align*}
In this case, processor $2$ can either send load to processor $1$ or to
processor $3$. If it sends load to processor $1$, it will not satisfy condition
\eqref{eq.ping-pong} because after the sending it will be less loaded than
$x_3^2(t)$. So we consider that the \emph{ping-pong} condition is probably too
strong. Currently, we have not tried to make another convergence proof without
this condition or with a weaker one.
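To make the blocked transfer explicit, here is the arithmetic, under our reading
that condition~\eqref{eq.ping-pong} requires the sender, after sending, to
remain at least as loaded as its estimate of each neighbor. Sending an amount
$s_{21}(t)$ from processor $2$ to processor $1$ would require
\begin{align*}
  x_2(t) - s_{21}(t) &\geq x_3^2(t), \\
  100 - s_{21}(t) &\geq 99.99,
\end{align*}
i.e. $s_{21}(t) \leq 0.01$: processor $2$ may send almost nothing to processor
$1$, although processor $1$ is by far the least loaded of the three.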
order of their load. Then, it computes the difference between its own load and
the load of each of its neighbors. Finally, taking the neighbors in the
order defined before, the amount of load to send $s_{ij}$ is computed as
$1/(n+1)$ of the load difference, with $n$ being the number of neighbors. This
process continues as long as the node is more loaded than the considered
neighbor.
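As an illustration, here is a minimal sketch of this computation in the spirit
of our C++ simulator; the function and variable names are hypothetical, and the
fact that the sender's load estimate is decreased after each scheduled transfer
is our interpretation of the description above.
\begin{verbatim}
#include <algorithm>
#include <utility>
#include <vector>

// For each neighbor (id, load estimate), in increasing order of load,
// schedule a transfer of 1/(n+1) of the load difference, as long as
// this node stays more loaded than the considered neighbor.
std::vector<std::pair<int, double>>
computeTransfers(double myLoad, std::vector<std::pair<int, double>> neighbors)
{
  std::sort(neighbors.begin(), neighbors.end(),
            [](const auto& a, const auto& b) { return a.second < b.second; });
  const double n = static_cast<double>(neighbors.size());
  std::vector<std::pair<int, double>> transfers; // (neighbor id, s_ij)
  for (const auto& [id, load] : neighbors) {
    if (myLoad <= load)   // stop: no more loaded than this neighbor
      break;
    double s = (myLoad - load) / (n + 1.0);
    transfers.emplace_back(id, s);
    myLoad -= s;          // account for the scheduled transfer
  }
  return transfers;
}
\end{verbatim}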
In this section, we present the concept of \emph{virtual load}. In order to
use this concept, load balancing messages must be sent using two different kinds
of messages: load information messages and load balancing messages. More
precisely, a node wanting to send a part of its load to one of its neighbors
can first send a load information message containing the load it will send, and
then it can send the load balancing message containing data to be transferred.
Load information messages are really short, consequently they will be received
very quickly. In contrast, load balancing messages are often bigger and thus
require more time to be transferred. The concept of virtual load allows a node
that has received a load information message to integrate (virtually) the
announced load into its own and, if required, to send a part of its load to some
of its neighbors without waiting for the reception of the load
balancing message.
Doing this, we can expect a faster convergence since nodes have earlier
information about the load they will receive, so they can take it into account.
\FIXME{Do we give the algorithm with virtual load?}
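Although we do not detail the full algorithm here, the bookkeeping needed for
virtual load is small; here is a minimal sketch, with hypothetical names that do
not correspond to the simulator's actual API.
\begin{verbatim}
// Virtual load bookkeeping for one node.
struct Node {
  double real_load     = 0.0;  // load actually present on this node
  double incoming_load = 0.0;  // load announced but not yet received

  // Load used by the balancing decisions: real plus announced.
  double virtual_load() const { return real_load + incoming_load; }

  // A neighbor announced it will send `amount' (small, fast message).
  void on_load_info(double amount) { incoming_load += amount; }

  // The corresponding data message (bigger, slower) finally arrives.
  void on_load_data(double amount) {
    incoming_load -= amount;
    real_load     += amount;
  }
};
\end{verbatim}
Balancing decisions then operate on the virtual load instead of the real load,
so a node that knows it will receive load can immediately forward a part of its
real load to its own neighbors.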
In order to test and validate our approaches, we wrote a simulator
using the SimGrid
framework~\cite{simgrid.web,casanova+giersch+legrand+al.2014.simgrid}. This
simulator, which consists of about 2,700 lines of C++, allows us to run
the different load-balancing strategies under various parameters, such
as the initial distribution of load, the interconnection topology, the
these descriptions. For an exhaustive presentation, we refer to the
actual source code that was used for the experiments%
\footnote{As mentioned before, our simulator relies on the SimGrid
  framework~\cite{casanova+giersch+legrand+al.2014.simgrid}. For the
experiments, we used a pre-release of SimGrid 3.7 (Git commit
67d62fca5bdee96f590c942b50021cdde5ce0c07, available from
\url{https://gforge.inria.fr/scm/?group_id=12})}, and which is
processor speeds were normalized, and we arbitrarily chose to fix them to
1~GFlop/s.
Then we derived each kind of platform with four different numbers of computing
nodes: 16, 64, 256, and 1024 nodes.
\subsubsection{Configurations}
\FIXME{announce the plan of what follows}
\subsubsection{The \besteffort{} and \makhoul{} strategies without virtual load}
Before looking at the different variations, we will first show that the plain
\besteffort{} strategy is valuable, and may be as good as the \makhoul{}
strategy. On Figures~\ref{fig.results1} and~\ref{fig.resultsN},
these strategies are respectively labeled ``b'' and ``a''.
We can see that the relative performance of these strategies is mainly
influenced by the application topology. It is for the line topology that the
difference is the most important. In this case, the \besteffort{} strategy is
clearly faster than the \makhoul{} strategy. This can be explained by the
fact that the \besteffort{} strategy tries to distribute the load fairly between
all the nodes, and with the line topology it is easy to balance the load
fairly.
On the contrary, for the hypercube topology, the \besteffort{} strategy performs
worse than the \makhoul{} strategy. In this case, the \makhoul{} strategy, which
tries to give more load to a few neighbors, reaches the equilibrium faster.
For the torus topology, for which the number of links is between the line and
the hypercube, the \makhoul{} strategy is slightly better but the difference is
more nuanced when the initial load is only on one node. The only case where the
\makhoul{} strategy is really faster than the \besteffort{} strategy is with a
random initial distribution when communications are slow.
Globally, the number of interconnections is very important. The more
interconnection links there are, the faster the \makhoul{} strategy is, because
it quickly distributes a significant amount of load, even if unfairly, between
all the neighbors. In contrast, the \besteffort{} strategy distributes the
load fairly, so it is better suited to sparsely connected topologies.
\subsubsection{Virtual load}

The influence of virtual load is most of the time really significant, compared
to the same configuration without it. Sometimes it has no effect, but it never
had a negative effect on the load balancing in the cases we tested.\FIXME{to be
verified}

On Figure~\ref{fig.results1}, when the load is initially on one node, it can be
noticed that the average idle times are generally longer with virtual load
than without it. This can be explained by the fact that, with virtual load,
processors exchange all the load they need to exchange as soon as the virtual
load has been balanced between all the processors; consequently they cannot
compute at the beginning. This is especially noticeable when communications are
slow (on the left part of Figure~\ref{fig.results1}).
%In this case, slight improvement of the max. convergence. Average convergence
%time improved, but more time spent idle, especially when communications are
%expensive.
%\subsubsection{The \besteffort{} strategy with an initial random load
%  distribution, and larger platforms}
%Same conclusions for line and hcube.
%On the torus, BE gets crushed when communications are expensive.
%\FIXME{drop the 1024 nodes?}
%\subsubsection{With the virtual load extension with an initial random load
%  distribution}
%Either it is equivalent, or we win -> especially when communications are
%expensive and there are many neighbors.
\subsubsection{The $k$ parameter}
\label{results-k}
As explained previously, when communications are slow the \besteffort{}
strategy is less efficient. This is due to the fact that it tries to balance the
load fairly, and consequently a significant amount of load is transferred
between processors. In this situation, it is possible to reduce the convergence
time by using the leveler parameter (parameter $k$). This solution is
particularly efficient when the initial load is randomly distributed over the
nodes with the torus and hypercube topologies and slow communications. When the
virtual load mechanism is used, the effect of this parameter is also visible
under the same conditions.

\subsubsection{With integer load}

We also performed some experiments with integer load instead of real-valued
load. In this case, the results globally show the same behavior. The most
interesting result, from our point of view, is that the virtual load mechanism
allows processors in a line topology to converge to the uniform load
distribution. Without virtual load, most of the time, processors converge to
what we call the ``stairway effect'', that is to say that there is only a
difference of one between the loads of neighboring processors (for example with
10 processors, we obtain 10 9 8 7 6 6 7 8 9 10 instead of 8 8 8 8 8 8 8 8 8 8).
%Normal case, line -> does not converge (stairway effect).
%With vload, it converges.
%In the other cases, results similar to the real-valued case: repeat that vload
%is interesting.
\FIXME{add a curve with integer load balancing}
\FIXME{drop the communication volume metric}