1 \documentclass[smallextended]{svjour3}
2 \usepackage[utf8]{inputenc}
3 \usepackage[T1]{fontenc}
9 \usepackage[ruled,lined]{algorithm2e}
11 \newcommand{\abs}[1]{\lvert#1\rvert} % \abs{x} -> |x|
13 \newenvironment{algodata}{%
14 \begin{tabular}[t]{@{}l@{:~}l@{}}}{%
17 \newcommand{\FIXME}[1]{%
18 \textbf{$\triangleright$\marginpar{\textbf{[FIXME]}}~#1}}
20 \newcommand{\VAR}[1]{\textit{#1}}
24 \title{Best effort strategy and virtual load
25 for asynchronous iterative load balancing}
27 \author{Raphaël Couturier \and
31 \institute{R. Couturier \and A. Giersch \at
32 FEMTO-ST, University of Franche-Comté, Belfort, France \\
33 % Tel.: +123-45-678910\\
34 % Fax: +123-45-678910\\
36 raphael.couturier@femto-st.fr,
37 arnaud.giersch@femto-st.fr}
Asynchronous load balancing algorithms have mostly been studied from a
theoretical point of view. The Bertsekas and Tsitsiklis
algorithm~\cite[section~7.4]{bertsekas+tsitsiklis.1997.parallel}
is certainly the best known algorithm for which a convergence proof is
given. From a practical point of view, however, when a node wants to transfer a
part of its load to some of its neighbors, the strategy to use is not described. In this
paper, we propose a strategy called \emph{best effort} which tries to balance
the load of a node with all of its less loaded neighbors, while ensuring that all the
nodes involved in the load balancing phase end up with the same amount of load.
Moreover, asynchronous iterative applications in which an asynchronous load
balancing algorithm is embedded can usually dissociate messages
concerning load transfers from messages concerning load information. In order to
speed up the convergence of a load balancing algorithm, we propose a simple
heuristic called \emph{virtual load} which allows a node that receives a load
information message to (virtually) integrate the load that it will receive later
into its current load and, consequently, to send a (real) part of its load to some of its
neighbors. In order to validate our approaches, we developed a simulator
based on SimGrid which allowed us to conduct many experiments.
67 \section{Introduction}
Load balancing algorithms are extensively used in parallel and distributed
applications in order to reduce execution times. They can be applied in
different scientific fields, from high performance computing to micro sensor
networks. They are iterative by nature. In the literature, many kinds of load
balancing algorithms have been studied. They can be classified according to
different criteria: centralized or decentralized, in a static or dynamic
environment, with homogeneous or heterogeneous load, using synchronous or
asynchronous iterations, with a static topology or a dynamic one which evolves
over time. In this work, we focus on asynchronous load balancing algorithms
where computing nodes are considered homogeneous, with homogeneous load and
no external load. In this context, Bertsekas and Tsitsiklis have proposed an
algorithm which is definitely a reference for many works. In their work, they
proved that, under the classical hypotheses of asynchronous iterative algorithms
and a special constraint avoiding the \emph{ping-pong} effect, an asynchronous
iterative algorithm converges to the uniform load distribution. This work has
been extended by many authors. For example, Cortés et al., with
DASUD~\cite{cortes+ripoll+cedo+al.2002.asynchronous}, propose a
version working with integer load. This work was later generalized by
the same authors in \cite{cedo+cortes+ripoll+al.2007.convergence}.
\FIXME{Add more things here.}
Although the Bertsekas and Tsitsiklis algorithm describes the conditions that
ensure convergence, it gives no indication or strategy on how to actually implement
the load distribution. In other words, a node can send a part of its load to one
or several of its neighbors while all the convergence conditions are still
satisfied. Consequently, we propose a new strategy called \emph{best effort}
that tries to balance the load of a node with all of its less loaded neighbors, while
ensuring that all the nodes involved in the load balancing phase end up with the same
amount of load. Moreover, when real asynchronous applications are considered,
using asynchronous load balancing algorithms can reduce the execution
times. Most of the time, it is simpler to distinguish load information messages
from data migration messages. The former allow a node to inform its
neighbors of its current load. These messages are very small and can be sent
quite often. For example, if a computing iteration takes a significant time
(ranging from seconds to minutes), it is possible to send a new load information
message to each neighbor at each iteration. The latter contain the data that
migrates from one node to another. Depending on the application, it may or may not
make sense for nodes to try to balance a part of their load at each computing
iteration. However, the time to transfer a data message from one node to another is
often much longer than the time to transfer a load information message. So,
when a node receives the information that it will later receive a data message,
it can take this information into account and consider that its load is virtually
larger. Consequently, it can send a part of its real load to some of its
neighbors if required. We call this trick the \emph{virtual load} mechanism.
So, in this work, we propose a new strategy to improve the load distribution,
and a simple but efficient trick that further improves the load
balancing. Moreover, we have conducted many simulations with SimGrid in order to
validate that our improvements are really effective. Our simulations consider that,
in order to send a message, a latency delays the sending, and that, depending on the
network performance and the message size, the time of reception of the message varies.
In the remainder of this paper, Section~\ref{BT algo} describes the Bertsekas
and Tsitsiklis asynchronous load balancing algorithm. Moreover, we present a
possible problem in the convergence conditions. Section~\ref{Best-effort}
presents the best effort strategy, which provides an efficient way to reduce the
execution times. This strategy is compared with other ones presented in
Section~\ref{Other}. In Section~\ref{Virtual load}, the virtual load mechanism
is proposed. Simulations, based on a quite realistic model detailed in
Section~\ref{Simulations}, show that both our approaches are valid. Finally, we
give a conclusion and some perspectives on this work.
136 \section{Bertsekas and Tsitsiklis' asynchronous load balancing algorithm}
In order to prove the convergence of asynchronous iterative load balancing,
Bertsekas and Tsitsiklis proposed a model
in~\cite{bertsekas+tsitsiklis.1997.parallel}. Here we recall some notations.
Consider that $N=\{1,\dots,n\}$ processors are connected through a network.
Communication links are represented by a connected undirected graph $G=(N,V)$
where $V$ is the set of links connecting different processors. In this work, we
consider that processors are homogeneous for the sake of simplicity. It is quite
easy to tackle the heterogeneous case~\cite{ElsMonPre02}. The load of processor $i$
at time $t$ is represented by $x_i(t)\geq 0$. Let $V(i)$ be the set of
neighbors of processor $i$. Each processor $i$ has an estimate of the load of
each of its neighbors $j \in V(i)$, represented by $x_j^i(t)$. Due to
asynchronism and communication delays, this estimate may be outdated. We also
consider that the load is described by a continuous variable.
When a processor sends a part of its load to one or some of its neighbors, the
transfer takes time to be completed. Let $s_{ij}(t)$ be the amount of load that
processor $i$ has transferred to processor $j$ at time $t$, and let $r_{ij}(t)$ be the
amount of load received by processor $j$ from processor $i$ at time $t$. Then
the amount of load of processor $i$ at time $t+1$ is given by:
159 x_i(t+1)=x_i(t)-\sum_{j\in V(i)} s_{ij}(t) + \sum_{j\in V(i)} r_{ji}(t)
Some conditions are required to ensure convergence. One of them, which can be
called the \emph{ping-pong} condition, specifies that:
167 x_i(t)-\sum _{k\in V(i)} s_{ik}(t) \geq x_j^i(t)+s_{ij}(t)
for any processor $i$ and any $j \in V(i)$ such that $x_i(t)>x_j^i(t)$. This
condition aims at preventing a processor from sending a part of its load and
ending up less loaded afterwards.
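To make this condition concrete, the following C++ snippet is a minimal sketch
(our own illustration, with hypothetical types and names, not taken from an
existing implementation) of how a processor could check that a set of planned
transfers respects condition~(\ref{eq:ping-pong}) before sending anything:
\begin{verbatim}
// Minimal sketch (hypothetical types/names): check that the planned
// transfers s_ij from processor i satisfy the ping-pong condition.
#include <map>

using Load = double;
using LoadMap = std::map<int, Load>;  // neighbor id -> value

bool satisfies_ping_pong(Load x_i,
                         const LoadMap& estimates,  // x_j^i(t)
                         const LoadMap& transfers)  // s_ij(t)
{
    Load sent = 0.0;                  // sum over k of s_ik(t)
    for (const auto& kv : transfers)
        sent += kv.second;

    for (const auto& kv : estimates) {
        int j = kv.first;
        Load x_j = kv.second;
        Load s_ij = transfers.count(j) ? transfers.at(j) : 0.0;
        // the condition must hold for every neighbor believed less loaded
        if (x_i > x_j && x_i - sent < x_j + s_ij)
            return false;
    }
    return true;
}
\end{verbatim}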
Nevertheless, we think that this condition may lead to deadlocks in some
cases. For example, consider only three processors, where processor $1$
is linked to processor $2$, which is itself linked to processor $3$ (i.e. a simple
chain of 3 processors). Now consider that we have the following values at time $t$:
In this case, processor $2$ can send load either to processor $1$ or to processor
$3$. If it sends load to processor $1$, it will not satisfy condition
(\ref{eq:ping-pong}) because, after the sending, it will be less loaded than
$x_3^2(t)$. So we consider that the \emph{ping-pong} condition is probably too
strong. We have not yet tried to establish another convergence proof without this
condition or with a weaker one.
190 \FIXME{Develop: We have the feeling that such a weaker condition
191 exists, because (it's not a proof, but) we have never seen any
192 scenario that is not leading to convergence, even with LB-strategies
193 that are not fulfilling these two conditions.}
195 \section{Best effort strategy}
198 In this section we describe a new load-balancing strategy that we call
199 \emph{best effort}. First, we explain the general idea behind this strategy,
200 and then we describe some variants of this basic strategy.
202 \subsection{Basic strategy}
The general idea behind the \emph{best effort} strategy is that each processor
that detects it has more load than some of its neighbors sends some load to
most of its less loaded neighbors, doing its best to reach an equilibrium
between those neighbors and itself.

More precisely, when a processor $i$ is in its load-balancing phase,
it proceeds as follows.
212 \item First, the neighbors are sorted in non-decreasing order of their
213 known loads $x^i_j(t)$.
\item Then, this sorted list is traversed in order to find its largest
  prefix such that the load of each selected neighbor is less than:
218 \item the processor's own load, and
\item the mean of the loads of the selected neighbors and of the processor itself.
222 Let's call $S_i(t)$ the set of the selected neighbors, and
$\bar{x}(t)$ the mean of the loads of the selected neighbors and of processor $i$:
226 \bar{x}(t) = \frac{1}{\abs{S_i(t)} + 1}
227 \left( x_i(t) + \sum_{j\in S_i(t)} x^i_j(t) \right)
229 The following properties hold:
232 S_i(t) \subset V(i) \\
233 x^i_j(t) < x_i(t) & \forall j \in S_i(t) \\
234 x^i_j(t) < \bar{x} & \forall j \in S_i(t) \\
235 x^i_j(t) \leq x^i_k(t) & \forall j \in S_i(t), \forall k \in V(i) \setminus S_i(t) \\
\item Once this selection is completed, processor $i$ sends to each
  selected neighbor $j\in S_i(t)$ an amount of load $s_{ij}(t) =
  \bar{x} - x^i_j(t)$.
244 From the above equations, and notably from the definition of
245 $\bar{x}$, it can easily be verified that:
248 x_i(t) - \sum_{j\in S_i(t)} s_{ij}(t) = \bar{x} \\
249 x^i_j(t) + s_{ij}(t) = \bar{x} & \forall j \in S_i(t)
254 \subsection{Leveling the amount to send}
With the aforementioned basic strategy, each node does its best to reach the
equilibrium with its neighbors. Since each node may be taking the same kind of
decision at the same moment, there is a risk that a node receives load from
several of its neighbors, and then temporarily drifts away from the equilibrium state.
This is particularly true with strongly connected applications.

In order to reduce this effect, we add the ability to level the amount of load to send.
The idea, here, is to make smaller steps toward the equilibrium, so that a
potentially wrong decision has a lower impact.
Concretely, once $s_{ij}$ has been evaluated as before, it is simply divided by
some configurable factor. That's what we named the ``parameter $k$'' in
Section~\ref{Results}. The amount of load to send is then
$s_{ij}(t) = (\bar{x} - x^i_j(t))/k$.
270 \FIXME{check the name ($k$)}
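To illustrate the selection and the leveling described above, here is a minimal
C++ sketch (with names and an interface of our own choosing; the actual
simulator code may differ) that computes the amounts to send:
\begin{verbatim}
// Sketch of the best-effort decision: sort the neighbors by estimated
// load, take the largest admissible prefix, then send each selected
// neighbor the difference to the mean, divided by the leveling factor k.
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

std::map<int, double> best_effort(double my_load,
                                  const std::map<int, double>& estimates,
                                  double k = 1.0)
{
    std::vector<std::pair<int, double>> sorted(estimates.begin(),
                                               estimates.end());
    std::sort(sorted.begin(), sorted.end(),
              [](const std::pair<int, double>& a,
                 const std::pair<int, double>& b)
              { return a.second < b.second; });

    // Largest prefix such that each selected load stays below our own
    // load and below the mean of the selected loads plus our own.
    double sum = my_load;
    std::size_t selected = 0;
    for (std::size_t i = 0; i < sorted.size(); ++i) {
        double load = sorted[i].second;
        double mean_if_added = (sum + load) / (selected + 2);
        if (load >= my_load || load >= mean_if_added)
            break;
        sum += load;
        ++selected;
    }
    double mean = sum / (selected + 1);

    std::map<int, double> transfers;   // s_ij(t) for each selected j
    for (std::size_t i = 0; i < selected; ++i)
        transfers[sorted[i].first] = (mean - sorted[i].second) / k;
    return transfers;
}
\end{verbatim}
With $k = 1$, this reduces to the basic strategy; larger values of $k$ make the
node take smaller steps toward the equilibrium.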
272 \section{Other strategies}
% \FIXME{should we describe the makhoul and simple strategies?}
% \paragraph{simple} An attempt to simply satisfy Bertsekas' conditions.
% Among the neighbors less loaded than oneself, we select:
% \item one of the least loaded ones (vmin);
% \item one of the most loaded ones (vmax),
% then we balance with vmin while making sure that our own load always
% remains greater than that of vmin and that of vmax.
% We thus send (with "self" denoting oneself):
% \min\left(\frac{load(self) - load(vmin)}{2}, load(self) - load(vmax)\right)
\paragraph{makhoul} The neighbors are ordered from the least loaded to the most
loaded, and the load differences between the processor itself and each of its
neighbors are computed.

Then, for each neighbor, taken in this order, and as long as the processor
remains more loaded than the neighbor in question, it sends it $1/(N+1)$ of the
difference computed at the beginning, with $N$ the number of neighbors.

This is Algorithm~2 in~\cite{bahi+giersch+makhoul.2008.scalable}.
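For reference, the following C++ sketch (hypothetical names; Algorithm~2
of~\cite{bahi+giersch+makhoul.2008.scalable} remains the authoritative
description) summarizes this strategy:
\begin{verbatim}
// Sketch of the makhoul strategy: neighbors are visited from the least
// to the most loaded; each one receives 1/(N+1) of the load difference
// computed up front, as long as we stay more loaded than it.
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

std::map<int, double> makhoul(double my_load,
                              const std::map<int, double>& estimates)
{
    const std::size_t N = estimates.size();
    std::vector<std::pair<int, double>> sorted(estimates.begin(),
                                               estimates.end());
    std::sort(sorted.begin(), sorted.end(),
              [](const std::pair<int, double>& a,
                 const std::pair<int, double>& b)
              { return a.second < b.second; });

    // Load differences computed once, before any transfer.
    std::vector<double> diff(sorted.size());
    for (std::size_t i = 0; i < sorted.size(); ++i)
        diff[i] = my_load - sorted[i].second;

    std::map<int, double> transfers;
    for (std::size_t i = 0; i < sorted.size(); ++i) {
        double amount = diff[i] / (N + 1);
        // Send only while we remain more loaded than this neighbor.
        if (diff[i] > 0 && my_load - amount > sorted[i].second) {
            transfers[sorted[i].first] = amount;
            my_load -= amount;
        }
    }
    return transfers;
}
\end{verbatim}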
303 \section{Virtual load}
In this section, we present the concept of \emph{virtual load}. In order to
use this concept, load balancing messages must be sent using two different kinds
of messages: load information messages and load balancing messages. More
precisely, a node wanting to send a part of its load to one of its neighbors
can first send a load information message containing the load it will send, and
then send the load balancing message containing the data to be transferred.
Load information messages are really short, so they will be received
very quickly. In contrast, load balancing messages are often bigger and thus
require more time to be transferred.
The concept of \emph{virtual load} allows a node that has received a load
information message to (virtually) integrate the load that it will receive later
into its load, and consequently to send a (real) part of its load to some of its
neighbors. In fact, a node that receives a load information message knows that
it will later receive the corresponding load balancing message containing the
corresponding data. So if this node detects that it is too loaded compared to some
of its neighbors, and if it has enough (real) load, then it can send more
load to some of its neighbors without waiting for the reception of the load
balancing message.

By doing this, we can expect a faster convergence, since nodes get earlier
information about the load they will receive, and can take it into account.
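As an illustration of the bookkeeping involved, the following C++ sketch (a
hypothetical structure of our own, not the simulator's actual code) separates
the real load from the announced, not yet received, load:
\begin{verbatim}
// Sketch of the virtual load bookkeeping: a control message announcing
// a future transfer immediately increases the load used by the balancing
// decision, while the real load only grows when the data arrives.
struct NodeLoad {
    double real_load = 0.0;  // load physically present on the node
    double incoming  = 0.0;  // announced, but not yet received

    // value fed to the load-balancing algorithm
    double virtual_load() const { return real_load + incoming; }

    void on_control_message(double announced) { incoming += announced; }

    void on_data_message(double received) {
        real_load += received;
        incoming  -= received;  // the announced load has now arrived
    }

    // a node may only send what it really holds, even if its
    // virtual load is larger
    double send(double requested) {
        double sent = requested < real_load ? requested : real_load;
        real_load -= sent;
        return sent;
    }
};
\end{verbatim}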
\FIXME{Do we give the algorithm with virtual load?}
331 \FIXME{describe integer mode}
333 \section{Simulations}
In order to test and validate our approaches, we wrote a simulator
based on the SimGrid
framework~\cite{casanova+legrand+quinson.2008.simgrid}. This
simulator, which consists of about 2,700 lines of C++, allows us to run
the different load-balancing strategies under various parameters, such
as the initial distribution of load, the interconnection topology, the
characteristics of the running platform, etc. Several metrics are then
computed to compare the strategies.
345 The simulation model is detailed in the next section (\ref{Sim
346 model}), and the experimental contexts are described in
Section~\ref{Contexts}. Then the results of the simulations are
presented in Section~\ref{Results}.
350 \subsection{Simulation model}
353 In the simulation model the processors exchange messages which are of
354 two kinds. First, there are \emph{control messages} which only carry
355 information that is exchanged between the processors, such as the
356 current load, or the virtual load transfers if this option is
357 selected. These messages are rather small, and their size is
358 constant. Then, there are \emph{data messages} that carry the real
359 load transferred between the processors. The size of a data message
360 is a function of the amount of load that it carries, and it can be
361 pretty large. In order to receive the messages, each processor has
two receiving channels, one for each kind of message. Finally,
messages are sent and received using the non-blocking
primitives of SimGrid\footnote{Namely, \texttt{MSG\_task\_isend()}
  and \texttt{MSG\_task\_irecv()}.}.
367 During the simulation, each processor concurrently runs three threads:
368 a \emph{receiving thread}, a \emph{computing thread}, and a
369 \emph{load-balancing thread}, which we will briefly describe now.
371 \paragraph{Receiving thread} The receiving thread is in charge of
372 waiting for messages to come, either on the control channel, or on the
373 data channel. Its behavior is sketched by Algorithm~\ref{algo.recv}.
When a message is received, it is pushed into a buffer of
received messages, to be consumed later by one of the other threads.
376 There are two such buffers, one for the control messages, and one for
377 the data messages. The buffers are implemented with a lock-free FIFO
378 \cite{sutter.2008.writing} to avoid contention between the threads.
381 \caption{Receiving thread}
385 \VAR{ctrl\_chan}, \VAR{data\_chan}
386 & communication channels (control and data) \\
387 \VAR{ctrl\_fifo}, \VAR{data\_fifo}
388 & buffers of received messages (control and data) \\
391 wait for a message to be available on either \VAR{ctrl\_chan},
392 or \VAR{data\_chan}\;
393 \If{a message is available on \VAR{ctrl\_chan}}{%
394 get the message from \VAR{ctrl\_chan}, and push it into \VAR{ctrl\_fifo}\;
396 \If{a message is available on \VAR{data\_chan}}{%
397 get the message from \VAR{data\_chan}, and push it into \VAR{data\_fifo}\;
402 \paragraph{Computing thread} The computing thread is in charge of the
403 real load management. As exposed in Algorithm~\ref{algo.comp}, it
404 iteratively runs the following operations:
406 \item if some load was received from the neighbors, get it;
407 \item if there is some load to send to the neighbors, send it;
\item run some computation, whose duration is a function of the current
  load of the processor.
411 Practically, after the computation, the computing thread waits for a
412 small amount of time if the iterations are looping too fast (for
413 example, when the current load is near zero).
416 \caption{Computing thread}
420 \VAR{data\_fifo} & buffer of received data messages \\
421 \VAR{real\_load} & current load \\
424 \If{\VAR{data\_fifo} is empty and $\VAR{real\_load} = 0$}{%
425 wait until a message is pushed into \VAR{data\_fifo}\;
427 \While{\VAR{data\_fifo} is not empty}{%
428 pop a message from \VAR{data\_fifo}\;
429 get the load embedded in the message, and add it to \VAR{real\_load}\;
431 \ForEach{neighbor $n$}{%
432 \If{there is some amount of load $a$ to send to $n$}{%
433 send $a$ units of load to $n$, and subtract it from \VAR{real\_load}\;
436 \If{$\VAR{real\_load} > 0.0$}{
simulate some computation, whose duration is a function of \VAR{real\_load}\;
438 ensure that the main loop does not iterate too fast\;
\paragraph{Load-balancing thread} The load-balancing thread is in
charge of running the load-balancing algorithm and of exchanging the
control messages. As shown in Algorithm~\ref{algo.lb}, it iteratively
runs the following operations:
448 \item get the control messages that were received from the neighbors;
449 \item run the load-balancing algorithm;
450 \item send control messages to the neighbors, to inform them of the
451 processor's current load, and possibly of virtual load transfers;
\item wait a minimum (configurable) amount of time, to avoid iterating
  too fast.
457 \caption{Load-balancing}
460 \While{\VAR{ctrl\_fifo} is not empty}{%
461 pop a message from \VAR{ctrl\_fifo}\;
462 identify the sender of the message,
463 and update the current knowledge of its load\;
465 run the load-balancing algorithm to make the decision about load transfers\;
466 \ForEach{neighbor $n$}{%
send a control message to $n$\;
469 ensure that the main loop does not iterate too fast\;
For the sake of simplicity, a few details were voluntarily omitted from
these descriptions. For an exhaustive presentation, we refer the reader to the
actual source code that was used for the experiments%
\footnote{As mentioned before, our simulator relies on the SimGrid
  framework~\cite{casanova+legrand+quinson.2008.simgrid}. For the
  experiments, we used a pre-release of SimGrid 3.7 (Git commit
  67d62fca5bdee96f590c942b50021cdde5ce0c07, available from
  \url{https://gforge.inria.fr/scm/?group_id=12})}, which is available at
\url{http://info.iut-bm.univ-fcomte.fr/staff/giersch/software/loba.tar.gz}.
\FIXME{add details about the management of the virtual load?}
487 \subsection{Experimental contexts}
In order to assess the performance of our algorithms, we ran our
simulator with various parameters, and extracted several metrics, which
we describe in this section.
494 \paragraph{Load balancing strategies}
Several load balancing strategies were compared. We ran the experiments with
the \emph{Best effort} and the \emph{Makhoul} strategies. \emph{Best
  effort} was tested with parameters $k = 1$, $k = 2$, and $k = 4$. Secondly,
each strategy was run in its two variants: with and without the management of
\emph{virtual load}. Finally, we tested each configuration with \emph{real}
and with \emph{integer} load.
503 To summarize the different load balancing strategies, we have:
\item[\textbf{strategies:}] \emph{Makhoul}, or \emph{Best effort} with $k\in
  \{1, 2, 4\}$
507 \item[\textbf{variants:}] with, or without virtual load
508 \item[\textbf{domain:}] real load, or integer load
511 This gives us as many as $4\times 2\times 2 = 16$ different strategies.
513 \paragraph{End of the simulation}
The simulations were run until the load was nearly balanced among the
participating nodes. More precisely, the simulation stops when each node holds
an amount of load that is within 1\% of the load average, during an arbitrary
number of computing iterations (2000 in our case).
520 Note that this convergence detection was implemented in a centralized manner.
521 This is easy to do within the simulator, but it's obviously not realistic. In a
522 real application we would have chosen a decentralized convergence detection
523 algorithm, like the one described in \cite{10.1109/TPDS.2005.2}.
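For illustration, this centralized test can be sketched as follows (C++, with
names and an interface of our own choosing):
\begin{verbatim}
// Sketch of the centralized convergence test: the simulation stops once
// every node has stayed within 1% of the average load for a given number
// of consecutive computing iterations (2000 in our experiments).
#include <cmath>
#include <numeric>
#include <vector>

bool converged(const std::vector<double>& loads, int& stable_iterations,
               int required = 2000, double tolerance = 0.01)
{
    double average = std::accumulate(loads.begin(), loads.end(), 0.0)
                     / loads.size();
    bool balanced = true;
    for (double l : loads) {
        if (std::abs(l - average) > tolerance * average) {
            balanced = false;
            break;
        }
    }
    stable_iterations = balanced ? stable_iterations + 1 : 0;
    return stable_iterations >= required;
}
\end{verbatim}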
525 \paragraph{Platforms}
527 In order to show the behavior of the different strategies in different
528 settings, we simulated the executions on two sorts of platforms. These two
sorts of platforms differ by their underlying network topology. On the one hand,
530 we have homogeneous platforms, modeled as a cluster. On the other hand, we have
531 heterogeneous platforms, modeled as the interconnection of a number of clusters.
533 The clusters were modeled by a fixed number of computing nodes interconnected
534 through a backbone link. Each computing node has a computing power of
1~GFlop/s, and is connected to the backbone by a network link with a bandwidth
of 125~MB/s and a latency of 50~$\mu$s. The backbone has a bandwidth of
2.25~GB/s and a latency of 500~$\mu$s.
539 The heterogeneous platform descriptions were created by taking a subset of the
540 Grid'5000 infrastructure\footnote{Grid'5000 is a French large scale experimental
541 Grid (see \url{https://www.grid5000.fr/}).}, as described in the platform file
542 \texttt{g5k.xml} distributed with SimGrid. Note that the heterogeneity of the
platform here comes only from the network topology. Indeed, since our
algorithms currently do not handle heterogeneous computing resources, the
processor speeds were normalized, and we arbitrarily chose to fix them to
1~GFlop/s.

We then derived each sort of platform with four different numbers of computing
nodes: 16, 64, 256, and 1024 nodes.
551 \paragraph{Configurations}
The distributed processes of the application were then logically organized along
three possible topologies: a line, a torus, or a hypercube. We ran tests where
the total load was initially on a single node (at one end for the line topology),
and other tests where the load was initially randomly distributed across all the
participating nodes. The total amount of load was fixed to a number of load
units equal to 1000 times the number of nodes. The average load is then 1000
load units per node.
For each of the preceding configurations, we finally had to choose the
computation and communication costs of a load unit. We chose them so as to
have three different computation over communication cost ratios, and hence model
three different kinds of applications:
\item mainly communicating, with a computation/communication cost ratio of $1/10$;
\item mainly computing, with a computation/communication cost ratio of $10/1$;
\item balanced, with a computation/communication cost ratio of $1/1$.
571 To summarize the various configurations, we have:
573 \item[\textbf{platforms:}] homogeneous (cluster), or heterogeneous (subset of
575 \item[\textbf{platform sizes:}] platforms with 16, 64, 256, or 1024 nodes
576 \item[\textbf{process topologies:}] line, torus, or hypercube
\item[\textbf{initial load distribution:}] initially on a single node, or
  initially randomly distributed over all nodes
579 \item[\textbf{computation/communication ratio:}] $10/1$, $1/1$, or $1/10$
582 This gives us as many as $2\times 4\times 3\times 2\times 3 = 144$ different
Combined with the various load balancing strategies, we had $16\times 144 =
2304$ distinct settings to evaluate. In fact, as will be shown later, we
did not run all the strategies, nor all the configurations for the biggest
platforms with 1024 nodes, since the simulations would have taken too long.

Anyway, all these experiments represent more than 240 hours of computing
time.
In order to evaluate and compare the different load balancing strategies, we had
to define several metrics. Our goal, when choosing these metrics, was to have
something tending to a constant value, i.e. to have a measure which does not
change anymore once the convergence state is reached. Moreover, we wanted to
have normalized values, in order to be able to compare them across different
configurations.

With these constraints in mind, we defined the following metrics:
\item[\textbf{average idle time:}] the total time spent when the nodes
  do not hold any share of load, and thus have nothing to compute. This total
  time is divided by the number of participating nodes, so as to have a number
  that can be compared between simulations of different sizes.
611 This metric is expected to give an idea of the ability of the strategy to
612 diffuse the load quickly. A smaller value is better.
\item[\textbf{average convergence date:}] the average of the dates at which
  the nodes reached the convergence state. The dates are measured as a number
616 of (simulated) seconds since the beginning of the simulation.
\item[\textbf{maximum convergence date:}] the date at which the last node
619 reached the convergence state.
621 These two dates give an idea of the time needed by the strategy to reach the
622 equilibrium state. A smaller value is better.
\item[\textbf{data transfer amount:}] the sum of the amounts of all the data
  transfers during the simulation. This sum is then normalized by dividing it
626 by the total amount of data present in the system.
628 This metric is expected to give an idea of the efficiency of the strategy in
629 terms of data movements, i.e. its ability to reach the equilibrium with fewer
630 transfers. Again, a smaller value is better.
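For instance, with notations of our own (for illustration only), if $d_m$
denotes the amount of load carried by the $m$-th data message and $L$ the total
amount of load present in the system, the reported value is
\[
  \frac{1}{L} \sum_m d_m .
\]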
635 \subsection{Validation of our approaches}
What do we want to show?:

1) best effort is faster than the other strategies (simple, makhoul)
2) the advantage of virtual load

Can we find counter-examples?

Simulations with a rather long fixed duration, measuring the quality with: the
amount of computation performed, the volume of data exchanged.
But also simulations with a short duration, showing that only best effort converges.

Experiments with fast and slow computation/communication ratios.

A few experiments with a random initial load rather than all the load on the
first processor.

Setting: homogeneous processors.

We do not take the link speeds into account, so we consider them homogeneous.

Take a heterogeneous network and make the processors homogeneous.

Sizes: 10, 100, very large.
666 \section{Conclusion and perspectives}
668 \begin{acknowledgements}
669 Computations have been performed on the supercomputer facilities of
670 the Mésocentre de calcul de Franche-Comté.
671 \end{acknowledgements}
673 \bibliographystyle{spmpsci}
674 \bibliography{biblio}