+\label{Sim model}
+
+In the simulation model the processors exchange messages which are of
+two kinds. First, there are \emph{control messages} which only carry
+information that is exchanged between the processors, such as the
+current load, or the virtual load transfers if this option is
+selected. These messages are rather small, and their size is
+constant. Then, there are \emph{data messages} that carry the real
+load transferred between the processors. The size of a data message
+is a function of the amount of load that it carries, and it can be
+quite large. In order to receive the messages, each processor has
+two receiving channels, one for each kind of message. Finally, all
+messages are sent and received through the non-blocking communication
+primitives of SimGrid\footnote{Namely, \texttt{MSG\_task\_isend()}
+  and \texttt{MSG\_task\_irecv()}.}.
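+
+For illustration, the following C sketch shows how the two kinds of
+messages could be built and sent with these primitives. It is only a
+sketch, not the actual simulator code: the mailbox naming scheme, the
+size constants, and the payload structure are assumptions introduced
+here for the example.
+
+\begin{verbatim}
+#include <stdio.h>
+#include <msg/msg.h>       /* SimGrid MSG API (3.x series) */
+#include <xbt/sysdep.h>    /* xbt_new() */
+
+#define CTRL_MSG_SIZE  64.0   /* constant size of a control message (bytes) */
+#define DATA_UNIT_SIZE 1e6    /* bytes per unit of load (assumed) */
+
+/* payload of a control message: sender id and its current load (assumed) */
+typedef struct { int sender; double load; } ctrl_payload_t;
+
+/* Send a control message to a neighbor: small, constant size.  The
+   msg_comm_t returned by MSG_task_isend() should be kept and
+   eventually destroyed; this is omitted here for brevity. */
+static void send_ctrl(int my_id, int neighbor, double load)
+{
+    char mailbox[64];
+    snprintf(mailbox, sizeof mailbox, "ctrl-%d", neighbor);
+    ctrl_payload_t *p = xbt_new(ctrl_payload_t, 1);
+    p->sender = my_id;
+    p->load = load;
+    MSG_task_isend(MSG_task_create("ctrl", 0.0, CTRL_MSG_SIZE, p), mailbox);
+}
+
+/* Send a data message: its size is a function of the load it carries. */
+static void send_data(int neighbor, double amount)
+{
+    char mailbox[64];
+    snprintf(mailbox, sizeof mailbox, "data-%d", neighbor);
+    double *p = xbt_new(double, 1);
+    *p = amount;
+    MSG_task_isend(MSG_task_create("data", 0.0, amount * DATA_UNIT_SIZE, p),
+                   mailbox);
+}
+\end{verbatim}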
+
+During the simulation, each processor concurrently runs three threads:
+a \emph{receiving thread}, a \emph{computing thread}, and a
+\emph{load-balancing thread}, which we now briefly describe.
+
+For the sake of simplicity, a few details were voluntarily omitted
+from these descriptions. For an exhaustive presentation, we refer the
+reader to the actual source code that was used for the experiments%
+\footnote{As mentioned before, our simulator relies on the SimGrid
+  framework~\cite{casanova+legrand+quinson.2008.simgrid}. For the
+  experiments, we used a pre-release of SimGrid 3.7 (Git commit
+  67d62fca5bdee96f590c942b50021cdde5ce0c07, available from
+  \url{https://gforge.inria.fr/scm/?group_id=12}).}, which is
+available at
+\url{http://info.iut-bm.univ-fcomte.fr/staff/giersch/software/loba.tar.gz}.
+
+\subsubsection{Receiving thread}
+
+The receiving thread is in charge of waiting for incoming messages,
+either on the control channel or on the data channel. Its behavior is
+sketched by Algorithm~\ref{algo.recv}. When a message is received, it
+is pushed into a buffer of received messages, to be later consumed by
+one of the other threads. There are two such buffers, one for the
+control messages and one for the data messages. The buffers are
+implemented with a lock-free FIFO \cite{sutter.2008.writing} to avoid
+contention between the threads. A C sketch of a possible
+implementation follows Algorithm~\ref{algo.recv}.
+
+\begin{algorithm}
+ \caption{Receiving thread}
+ \label{algo.recv}
+ \KwData{
+ \begin{algodata}
+ \VAR{ctrl\_chan}, \VAR{data\_chan}
+ & communication channels (control and data) \\
+ \VAR{ctrl\_fifo}, \VAR{data\_fifo}
+ & buffers of received messages (control and data) \\
+ \end{algodata}}
+ \While{true}{%
+ wait for a message to be available on either \VAR{ctrl\_chan},
+ or \VAR{data\_chan}\;
+ \If{a message is available on \VAR{ctrl\_chan}}{%
+ get the message from \VAR{ctrl\_chan}, and push it into \VAR{ctrl\_fifo}\;
+ }
+ \If{a message is available on \VAR{data\_chan}}{%
+ get the message from \VAR{data\_chan}, and push it into \VAR{data\_fifo}\;
+ }
+ }
+\end{algorithm}
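+
+For illustration, here is how this loop could be written with the MSG
+primitives. It is a sketch, not the actual simulator code:
+\texttt{ctrl\_fifo}, \texttt{data\_fifo}, and \texttt{fifo\_push()}
+stand for the lock-free buffers described above, and the mailbox
+naming scheme matches the assumption made in the previous sketch.
+
+\begin{verbatim}
+#include <stdio.h>
+#include <msg/msg.h>       /* SimGrid MSG API (3.x series) */
+#include <xbt/dynar.h>
+
+/* hypothetical lock-free FIFOs, shared with the other threads */
+extern void *ctrl_fifo, *data_fifo;
+extern void fifo_push(void *fifo, m_task_t task);
+
+/* Receiving thread: wait on both channels, push each received
+   message into the matching FIFO, then repost the receive. */
+static void receiving_loop(int my_id)
+{
+    char ctrl_mbox[64], data_mbox[64];
+    snprintf(ctrl_mbox, sizeof ctrl_mbox, "ctrl-%d", my_id);
+    snprintf(data_mbox, sizeof data_mbox, "data-%d", my_id);
+
+    m_task_t ctrl_task = NULL, data_task = NULL;
+    msg_comm_t comm;
+    xbt_dynar_t comms = xbt_dynar_new(sizeof(msg_comm_t), NULL);
+    comm = MSG_task_irecv(&ctrl_task, ctrl_mbox);   /* slot 0: control */
+    xbt_dynar_push(comms, &comm);
+    comm = MSG_task_irecv(&data_task, data_mbox);   /* slot 1: data */
+    xbt_dynar_push(comms, &comm);
+
+    while (1) {
+        /* wait for a message to arrive on either channel */
+        int idx = MSG_comm_waitany(comms);
+        msg_comm_t done;
+        xbt_dynar_get_cpy(comms, idx, &done);
+        MSG_comm_destroy(done);
+        if (idx == 0) {                 /* control channel */
+            fifo_push(ctrl_fifo, ctrl_task);
+            ctrl_task = NULL;
+            comm = MSG_task_irecv(&ctrl_task, ctrl_mbox);
+        } else {                        /* data channel */
+            fifo_push(data_fifo, data_task);
+            data_task = NULL;
+            comm = MSG_task_irecv(&data_task, data_mbox);
+        }
+        xbt_dynar_set(comms, idx, &comm);  /* repost on that channel */
+    }
+}
+\end{verbatim}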
+
+\subsubsection{Computing thread}
+
+The computing thread is in charge of the real load management. As
+shown in Algorithm~\ref{algo.comp}, it iteratively runs the following
+operations:
+\begin{itemize}
+\item if some load was received from the neighbors, get it;
+\item if there is some load to send to the neighbors, send it;
+\item run some computation, whose duration is a function of the
+  current load of the processor.
+\end{itemize}
+In practice, after the computation, the computing thread waits for a
+small amount of time if the iterations are looping too fast (for
+example, when the current load is close to zero). A corresponding C
+sketch follows Algorithm~\ref{algo.comp}.
+
+\begin{algorithm}
+ \caption{Computing thread}
+ \label{algo.comp}
+ \KwData{
+ \begin{algodata}
+ \VAR{data\_fifo} & buffer of received data messages \\
+ \VAR{real\_load} & current load \\
+ \end{algodata}}
+ \While{true}{%
+ \If{\VAR{data\_fifo} is empty and $\VAR{real\_load} = 0$}{%
+ wait until a message is pushed into \VAR{data\_fifo}\;
+ }
+ \While{\VAR{data\_fifo} is not empty}{%
+ pop a message from \VAR{data\_fifo}\;
+ get the load embedded in the message, and add it to \VAR{real\_load}\;
+ }
+ \ForEach{neighbor $n$}{%
+ \If{there is some amount of load $a$ to send to $n$}{%
+ send $a$ units of load to $n$, and subtract it from \VAR{real\_load}\;
+ }
+ }
+ \If{$\VAR{real\_load} > 0.0$}{
+      simulate some computation, whose duration is a function of \VAR{real\_load}\;
+ ensure that the main loop does not iterate too fast\;
+ }
+ }
+\end{algorithm}
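+
+A possible shape for this loop with the MSG primitives is sketched
+below. The names \texttt{fifo\_pop()} (returning \texttt{NULL} on an
+empty FIFO), \texttt{load\_to\_send[]} (the per-neighbor transfer
+decisions, filled by the load-balancing thread),
+\texttt{FLOPS\_PER\_UNIT}, and \texttt{MIN\_ITER\_DURATION} are
+hypothetical; \texttt{real\_load} is assumed to be shared with the
+load-balancing thread.
+
+\begin{verbatim}
+#include <stdlib.h>
+#include <msg/msg.h>       /* SimGrid MSG API (3.x series) */
+
+extern void *data_fifo;                /* lock-free FIFO (hypothetical) */
+extern m_task_t fifo_pop(void *fifo);  /* returns NULL when empty */
+extern void send_data(int neighbor, double amount);  /* first sketch */
+extern double real_load;               /* shared with the LB thread */
+extern double load_to_send[];          /* filled by the LB thread */
+
+#define FLOPS_PER_UNIT    1e9   /* flops per load unit (assumed) */
+#define MIN_ITER_DURATION 0.1   /* seconds (assumed) */
+
+/* Computing thread: absorb received load, send outgoing load, then
+   simulate a computation whose duration depends on the current load. */
+static void computing_loop(int nb_neighbors)
+{
+    while (1) {
+        /* get the load received from the neighbors; when real_load is
+           zero and the FIFO is empty, a real implementation would
+           block here until a message arrives (omitted) */
+        m_task_t msg;
+        while ((msg = fifo_pop(data_fifo)) != NULL) {
+            double *amount = MSG_task_get_data(msg);
+            real_load += *amount;
+            free(amount);
+            MSG_task_destroy(msg);
+        }
+        /* send the load transfers decided by the LB thread */
+        for (int n = 0; n < nb_neighbors; n++) {
+            if (load_to_send[n] > 0.0) {
+                send_data(n, load_to_send[n]);
+                real_load -= load_to_send[n];
+                load_to_send[n] = 0.0;
+            }
+        }
+        if (real_load > 0.0) {
+            double start = MSG_get_clock();
+            /* computation duration is a function of the current load */
+            m_task_t work = MSG_task_create("work",
+                                real_load * FLOPS_PER_UNIT, 0.0, NULL);
+            MSG_task_execute(work);
+            MSG_task_destroy(work);
+            /* throttle the loop when the iterations get too short */
+            double elapsed = MSG_get_clock() - start;
+            if (elapsed < MIN_ITER_DURATION)
+                MSG_process_sleep(MIN_ITER_DURATION - elapsed);
+        }
+    }
+}
+\end{verbatim}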
+
+\subsubsection{Load-balancing thread}
+
+The load-balancing thread is in charge of running the load-balancing
+algorithm and of exchanging the control messages. As shown in
+Algorithm~\ref{algo.lb}, it iteratively runs the following operations:
+\begin{itemize}
+\item get the control messages that were received from the neighbors;
+\item run the load-balancing algorithm;
+\item send control messages to the neighbors, to inform them of the
+  processor's current load, and possibly of virtual load transfers;
+\item wait a minimum (configurable) amount of time, to avoid
+  iterating too fast.
+\end{itemize}
+A corresponding C sketch follows Algorithm~\ref{algo.lb}.
+
+\begin{algorithm}
+  \caption{Load-balancing thread}
+ \label{algo.lb}
+ \While{true}{%
+ \While{\VAR{ctrl\_fifo} is not empty}{%
+ pop a message from \VAR{ctrl\_fifo}\;
+ identify the sender of the message,
+ and update the current knowledge of its load\;
+ }
+ run the load-balancing algorithm to make the decision about load transfers\;
+ \ForEach{neighbor $n$}{%
+      send a control message to $n$\;
+ }
+ ensure that the main loop does not iterate too fast\;
+ }
+\end{algorithm}
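+
+A corresponding sketch for this last thread is given below, with the
+same conventions as in the previous sketches; \texttt{lb\_decide()}
+stands for the load-balancing algorithm itself,
+\texttt{neighbor\_load[]} for the current knowledge of the neighbors'
+loads, and \texttt{LB\_PERIOD} for the configurable minimum iteration
+time.
+
+\begin{verbatim}
+#include <stdlib.h>
+#include <msg/msg.h>       /* SimGrid MSG API (3.x series) */
+
+typedef struct { int sender; double load; } ctrl_payload_t;  /* cf. above */
+
+extern void *ctrl_fifo;                /* lock-free FIFO (hypothetical) */
+extern m_task_t fifo_pop(void *fifo);  /* returns NULL when empty */
+extern void send_ctrl(int my_id, int neighbor, double load);
+extern double real_load;
+extern double neighbor_load[], load_to_send[];
+extern void lb_decide(double load, double *neighbors, double *to_send);
+
+#define LB_PERIOD 1.0   /* minimum iteration time, in seconds (assumed) */
+
+/* Load-balancing thread: refresh the knowledge of the neighbors'
+   loads, decide the transfers, and advertise the current load. */
+static void load_balancing_loop(int my_id, int nb_neighbors)
+{
+    while (1) {
+        /* consume the pending control messages */
+        m_task_t msg;
+        while ((msg = fifo_pop(ctrl_fifo)) != NULL) {
+            ctrl_payload_t *p = MSG_task_get_data(msg);
+            neighbor_load[p->sender] = p->load;  /* update our knowledge */
+            free(p);
+            MSG_task_destroy(msg);
+        }
+        /* run the load-balancing algorithm: fills load_to_send[] */
+        lb_decide(real_load, neighbor_load, load_to_send);
+        /* inform every neighbor of our current load */
+        for (int n = 0; n < nb_neighbors; n++)
+            send_ctrl(my_id, n, real_load);
+        /* configurable minimum delay between two iterations */
+        MSG_process_sleep(LB_PERIOD);
+    }
+}
+\end{verbatim}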
+
+\paragraph{}\FIXME{add details on the management of the virtual load?
+e.g., give the general idea of the implementation. The general idea is
+already described in Section~\ref{Virtual load}.}
+
+\subsection{Experimental contexts}
+\label{Contexts}
+
+In order to assess the performance of our algorithms, we ran our
+simulator with various parameters, and extracted several metrics,
+which we describe in this section.
+
+\subsubsection{Load balancing strategies}
+
+Several load balancing strategies were compared. We ran the
+experiments with the \emph{Best effort} and the \emph{Makhoul}
+strategies. \emph{Best effort} was tested with parameters $k = 1$,
+$k = 2$, and $k = 4$. Then, each strategy was run in its two
+variants: with and without the management of \emph{virtual load}.
+Finally, we tested each configuration with \emph{real} and with
+\emph{integer} loads.
+
+To summarize the different load balancing strategies, we have:
+\begin{description}
+\item[\textbf{strategies:}] \emph{Makhoul}, or \emph{Best effort} with $k\in
+ \{1,2,4\}$
+\item[\textbf{variants:}] with or without virtual load
+\item[\textbf{domain:}] real or integer load
+\end{description}
+%
+This gives us as many as $4\times 2\times 2 = 16$ different strategies.
+
+\subsubsection{End of the simulation}
+
+The simulations were run until the load was nearly balanced among the
+participating nodes. More precisely, the simulation stops when each
+node holds an amount of load within 1\% of the average load, for a
+given number of computing iterations (2000 in our case).
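+
+Stated formally, with notation introduced here for convenience: if
+$L_i(t)$ denotes the load held by node $i$ at time $t$, and $\bar L$
+the average load, the simulation stops at the first time $t$ such
+that
+\[
+  \forall i, \quad \bigl| L_i(t) - \bar L \bigr| < 0.01\,\bar L
+\]
+has held during the last 2000 computing iterations.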
+
+Note that this convergence detection was implemented in a centralized
+manner. This is easy to do within the simulator, but it is obviously
+not realistic. In a real application we would have chosen a
+decentralized convergence detection algorithm, like the one described
+by Bahi, Contassot-Vivier, Couturier, and Vernier
+in~\cite{10.1109/TPDS.2005.2}.
+
+\subsubsection{Platforms}
+
+In order to show the behavior of the different strategies in
+different settings, we simulated the executions on two sorts of
+platforms, which differ by their underlying network topology. On the
+one hand, we have homogeneous platforms, modeled as a cluster. On the
+other hand, we have heterogeneous platforms, modeled as the
+interconnection of a number of clusters.
+
+The clusters were modeled as a fixed number of computing nodes
+interconnected through a backbone link. Each computing node has a
+computing power of 1~GFlop/s, and is connected to the backbone by a
+network link with a bandwidth of 125~MB/s and a latency of
+50~$\mu$s. The backbone has a bandwidth of 2.25~GB/s and a latency
+of 500~$\mu$s.
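+
+For illustration, such a cluster could be declared along the
+following lines in SimGrid's XML platform syntax. This is a sketch
+based on the \texttt{<cluster>} tag of the version~3 platform format;
+the exact attribute names and units should be checked against the
+SimGrid version in use.
+
+\begin{verbatim}
+<?xml version="1.0"?>
+<!DOCTYPE platform SYSTEM "http://simgrid.gforge.inria.fr/simgrid.dtd">
+<platform version="3">
+  <!-- 16 nodes of 1 GFlop/s; 125 MB/s / 50 us links;
+       2.25 GB/s / 500 us backbone -->
+  <cluster id="cluster" prefix="node-" suffix="" radical="0-15"
+           power="1e9" bw="125e6" lat="50e-6"
+           bb_bw="2.25e9" bb_lat="500e-6"/>
+</platform>
+\end{verbatim}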
+
+The heterogeneous platform descriptions were created by taking a subset of the
+Grid'5000 infrastructure\footnote{Grid'5000 is a French large scale experimental
+ Grid (see \url{https://www.grid5000.fr/}).}, as described in the platform file
+\texttt{g5k.xml} distributed with SimGrid. Note that the heterogeneity of the
+platform here only comes from the network topology. Indeed, since our
+algorithms currently do not handle heterogeneous computing resources, the
+processor speeds were normalized, and we arbitrarily chose to fix them to
+1~GFlop/s.
+
+Then we derived each sort of platform with four different numbers of
+computing nodes: 16, 64, 256, and 1024 nodes.
+
+\subsubsection{Configurations}
+
+The distributed processes of the application were then logically
+organized along three possible topologies: a line, a torus, or a
+hypercube. We ran tests where the total load was initially on a
+single node (at one end for the line topology), and other tests where
+the load was initially randomly distributed across all the
+participating nodes. The total amount of load was fixed to a number
+of load units equal to 1000 times the number of nodes. The average
+load is thus 1000 load units.
+
+For each of the preceding configurations, we finally had to choose
+the computation and communication costs of a load unit. We chose them
+so as to obtain three different computation/communication cost
+ratios, and hence to model three different kinds of applications:
+\begin{itemize}
+\item mainly communicating, with a computation/communication cost
+  ratio of $1/10$;
+\item mainly computing, with a computation/communication cost ratio
+  of $10/1$;
+\item balanced, with a computation/communication cost ratio of $1/1$.
+\end{itemize}
+
+To summarize the various configurations, we have:
+\begin{description}
+\item[\textbf{platforms:}] homogeneous (cluster), or heterogeneous (subset of
+ Grid'5000)
+\item[\textbf{platform sizes:}] platforms with 16, 64, 256, or 1024 nodes
+\item[\textbf{process topologies:}] line, torus, or hypercube
+\item[\textbf{initial load distribution:}] initially on a single
+  node, or initially randomly distributed over all nodes
+\item[\textbf{computation/communication ratio:}] $10/1$, $1/1$, or $1/10$
+\end{description}
+%
+This gives us as many as $2\times 4\times 3\times 2\times 3 = 144$ different
+configurations.
+%
+Combined with the various load balancing strategies, we had
+$16\times 144 = 2304$ distinct settings to evaluate. In fact, as
+will be shown later, we did not run all the strategies, nor all the
+configurations for the bigger platforms with 1024 nodes, since the
+simulations would have taken too long.
+
+Altogether, these experiments represent more than 240 hours of
+computing time.
+
+\subsubsection{Metrics}
+
+In order to evaluate and compare the different load balancing
+strategies, we had to define several metrics. Our goal, when choosing
+these metrics, was to have values tending to a constant, that is,
+measures which no longer change once the convergence state is
+reached. Moreover, we wanted normalized values, in order to be able
+to compare them across different settings.
+
+With these constraints in mind, we defined the following metrics:
+%
+\begin{description}
+\item[\textbf{average idle time:}] the total time spent while the
+  nodes do not hold any share of the load, and thus have nothing to
+  compute. This total time is divided by the number of participating
+  nodes, so as to obtain a number that can be compared between
+  simulations of different sizes.
+
+  This metric is expected to give an idea of the ability of the
+  strategy to diffuse the load quickly. A smaller value is better.
+
+\item[\textbf{average convergence date:}] the average, over all
+  nodes, of the dates at which they reached the convergence state.
+  The dates are measured as a number of (simulated) seconds since
+  the beginning of the simulation.
+
+\item[\textbf{maximum convergence date:}] the date at which the last
+  node reached the convergence state.
+
+  These two dates give an idea of the time needed by the strategy to
+  reach the equilibrium state. A smaller value is better.
+
+\item[\textbf{data transfer amount:}] the sum of the amounts of all
+  data transfers during the simulation. This sum is normalized by
+  dividing it by the total amount of data present in the system.
+
+  This metric is expected to give an idea of the efficiency of the
+  strategy in terms of data movements, that is, its ability to reach
+  the equilibrium state with fewer transfers. Again, a smaller value
+  is better.
+
+\end{description}
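+
+With notation introduced here only for this summary: let $N$ be the
+number of participating nodes, $t^{\mathrm{idle}}_i$ the total idle
+time of node $i$, $c_i$ the convergence date of node $i$, and $L$ the
+total amount of load in the system. The four metrics can then be
+written as
+\[
+  \frac{1}{N}\sum_{i=1}^{N} t^{\mathrm{idle}}_i, \qquad
+  \frac{1}{N}\sum_{i=1}^{N} c_i, \qquad
+  \max_{1\le i\le N} c_i, \qquad
+  \frac{1}{L}\sum_{m} \mathrm{load}(m),
+\]
+where the last sum runs over all data transfers $m$.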
+
+
+\subsection{Experimental results}
+\label{Results}
+
+In this section, we present the results of the different simulations,
+and try to explain our observations.
+
+\FIXME{cluster vs. grid: the cluster is slightly faster, but that's
+all -> chose to give results for the grid only.}
+
+\FIXME{explain how to read the graphs}
+
+\subsubsection{Main results}
+
+\begin{figure}
+ \centering
+ \includegraphics[width=.5\linewidth]{data/graphs/R1-1:10-grid-line}%
+ \includegraphics[width=.5\linewidth]{data/graphs/R1-10:1-grid-line}
+ \includegraphics[width=.5\linewidth]{data/graphs/R1-1:10-grid-torus}%
+ \includegraphics[width=.5\linewidth]{data/graphs/R1-10:1-grid-torus}
+ \includegraphics[width=.5\linewidth]{data/graphs/R1-1:10-grid-hcube}%
+ \includegraphics[width=.5\linewidth]{data/graphs/R1-10:1-grid-hcube}
+  \caption{Real mode, load initially on a single node, comp/comm ratio = $1/10$ (left), or $10/1$ (right).}
+ \label{fig.results1}
+\end{figure}
+
+In Figure~\ref{fig.results1}, \dots
+
+\subsubsection{With the virtual load extension}
+
+\subsubsection{The $k$ parameter}
+
+\subsubsection{With an initial random distribution, and larger platforms}
+
+\begin{figure}
+ \centering
+ \includegraphics[width=.5\linewidth]{data/graphs/RN-1:10-grid-line}%
+ \includegraphics[width=.5\linewidth]{data/graphs/RN-10:1-grid-line}
+ \includegraphics[width=.5\linewidth]{data/graphs/RN-1:10-grid-torus}%
+ \includegraphics[width=.5\linewidth]{data/graphs/RN-10:1-grid-torus}
+ \includegraphics[width=.5\linewidth]{data/graphs/RN-1:10-grid-hcube}%
+ \includegraphics[width=.5\linewidth]{data/graphs/RN-10:1-grid-hcube}
+  \caption{Real mode, random initial distribution, comp/comm ratio = $1/10$ (left), or $10/1$ (right).}
+ \label{fig.resultsN}
+\end{figure}
+
+\subsubsection{With integer load}
+
+\FIXME{what about the amount of data?}
+
+\begin{itshape}
+\FIXME{remove this part}
+In this order:
+...
+- compare be/makhoul -> be holds up
+  -> in real mode only
+- validate the virtual load extension -> it works really well
+- propose the -k -> it can help in some cases
+- conclude with the integer version -> we don't get the staircase effect!
+Q: how to include the platform types/sizes?
+Q: how to compute the averages?
+Q: how to introduce the 1/N distributions?
+...