Simulating MPI Applications
===========================
SimGrid can not only :ref:`simulate algorithms <Tutorial_Algorithms>`, but
it can also be used to execute real MPI applications on top of
virtual, simulated platforms with the SMPI module. Even complex
C/C++/F77/F90 applications should run out of the box in this
environment. In fact, almost all proxy apps provided by the `ExaScale
Project <https://proxyapps.exascaleproject.org/>`_ only require minor
modifications to `run on top of SMPI
<https://framagit.org/simgrid/SMPI-proxy-apps>`_.

This setting lets you debug your MPI applications in a perfectly
reproducible setup, with no Heisenbugs. Enjoy the full clairvoyance
provided by the simulator while running what-if analyses on platforms
that are still to be built! Several `production-grade MPI applications
<https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
use SimGrid for their integration and performance testing.

MPI 2.2 is already partially covered: over 160 primitives are
supported. Some parts of the standard are still missing: MPI-IO, MPI3
collectives, spawning ranks, inter-communicators, and some others. If
one of the functions you use is still missing, please drop us an
email. We may find the time to implement it for you.

Multi-threading support is very limited in SMPI. Only funneled
applications are supported: at most one thread per rank can issue
MPI calls. For better timing predictions, your application should even
be completely mono-threaded. Using OpenMP (or pthreads directly) may
greatly decrease SimGrid's predictive power. That may still be OK if you
only plan to debug your application in a reproducible setup, without
any performance-related analysis.
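
If you are unsure which threading level your code actually requests, a
quick check at startup may help. The following sketch is not part of the
tutorial template; it only uses standard MPI calls to request
``MPI_THREAD_FUNNELED`` (the level described above) and to warn when a
lower level is granted.

.. code-block:: c

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char* argv[])
   {
     int provided;
     /* Request the funneled level: only the thread that called
      * MPI_Init_thread will ever issue MPI calls. */
     MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
     if (provided < MPI_THREAD_FUNNELED)
       fprintf(stderr, "Warning: MPI_THREAD_FUNNELED not granted (got %d)\n", provided);

     /* ... application code, with all MPI calls kept in this thread ... */

     MPI_Finalize();
     return 0;
   }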

In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would in
the real systems, communication calls are intercepted and carried out
by the simulator.

To start using SMPI, you just need to compile your application with
``smpicc`` instead of ``mpicc``, with ``smpiff`` instead of your
Fortran wrapper, or with ``smpicxx`` instead of ``mpicxx``. Then, the
only difference between the classical ``mpirun`` and the new
``smpirun`` is that ``smpirun`` requires an additional parameter
``-platform``, pointing to a file that describes the simulated
platform on which your application shall run.

Internally, all ranks of your application are executed as threads of a
single Unix process. That's not a problem if your application has
global variables, because ``smpirun`` loads one application instance
per MPI rank, as if each rank were a separate dynamic library. Then, MPI
communication calls are implemented using SimGrid: data is exchanged
through memory copies, while the simulator's performance models are used
to predict the time taken by each communication. Any computation
occurring between two MPI calls is benchmarked, and the corresponding
time is reported into the simulator.

.. image:: /tuto_smpi/img/big-picture.svg

Describing Your Platform
------------------------

As an SMPI user, you are expected to provide a description of your
simulated platform, which is mostly a set of simulated hosts and network
links with some performance characteristics. SimGrid provides plenty
of :ref:`documentation <platform>` and examples (in the
`examples/platforms <https://framagit.org/simgrid/simgrid/tree/master/examples/platforms>`_
source directory), and this section only shows a small set of
introductory examples.

Feel free to skip this section if you want to jump right away to usage
examples.

Simple Example with 3 hosts
...........................

Imagine you want to describe a little platform with three hosts,
interconnected as follows:

.. image:: /tuto_smpi/3hosts.png

This can be done with the following platform file, which describes the
simulated platform as a graph of hosts and network links.

.. literalinclude:: /tuto_smpi/3hosts.xml

The basic elements (with :ref:`pf_tag_host` and
:ref:`pf_tag_link`) are described first, and then the routes between
any pair of hosts are explicitly given with :ref:`pf_tag_route`. Any
host must be given a computational speed (in flops) while links must
be given a latency (in seconds) and a bandwidth (in bytes per
second). Note that you can write 1Gflops instead of 1000000000flops,
and similar. One last point: routes declared with :ref:`pf_tag_route`
are symmetrical by default (but this can be changed).

Cluster with a Crossbar
.......................

A very common parallel computing platform is a homogeneous cluster in
which hosts are interconnected via a crossbar switch with as many
ports as hosts, so that any disjoint pairs of hosts can communicate
concurrently at full speed. For instance:

.. literalinclude:: ../../examples/platforms/cluster_crossbar.xml

One specifies a name prefix and suffix for each host, and then gives an
integer range. In the example the cluster contains 65535 hosts (!),
named ``node-0.simgrid.org`` to ``node-65534.simgrid.org``. All hosts
have the same power (1 Gflop/sec) and are connected to the switch via
links with the same bandwidth (125 MBytes/sec) and latency (50
microseconds).

Cluster with a Shared Backbone
..............................

Another popular model for a parallel platform is that of a set of
homogeneous hosts connected to a shared communication medium, a
backbone, with some finite bandwidth capacity and on which
communicating host pairs can experience contention. For instance:

.. literalinclude:: ../../examples/platforms/cluster_backbone.xml

The only differences with the crossbar cluster above are the ``bb_bw``
and ``bb_lat`` attributes that specify the backbone characteristics
(here, a 500 microseconds latency and a 2.25 GByte/sec
bandwidth). This link is used for every communication within the
cluster. The route from ``node-0.simgrid.org`` to ``node-1.simgrid.org``
counts 3 links: the private link of ``node-0.simgrid.org``, the backbone,
and the private link of ``node-1.simgrid.org``.

Torus Cluster
.............

Many HPC facilities use torus clusters to reduce sharing and
performance loss on concurrent internal communications. Modeling this
in SimGrid is very easy. Simply add a ``topology="TORUS"`` attribute
to your cluster. Configure it with the ``topo_parameters="X,Y,Z"``
attribute, where ``X``, ``Y`` and ``Z`` are the dimensions of your
torus.

.. image:: ../../examples/platforms/cluster_torus.svg

.. literalinclude:: ../../examples/platforms/cluster_torus.xml

Note that in this example, we used ``loopback_bw`` and
``loopback_lat`` to specify the characteristics of the loopback link
of each node (i.e., the link allowing each node to communicate with
itself). We could have done so in the previous examples too. When no
loopback is given, the communication from a node to itself is handled
as if it were between two distinct nodes: it goes twice through the
private link and through the backbone (if any).

Fat-Tree Cluster
................

This topology was introduced to reduce the number of links in the
cluster (and thus reduce its price) while maintaining a high bisection
bandwidth and a relatively low diameter. To model this in SimGrid,
pass a ``topology="FAT_TREE"`` attribute to your cluster. The
``topo_parameters=#levels;#downlinks;#uplinks;link count`` follows the
semantics introduced in `Figure 1B of this article
<http://webee.eedev.technion.ac.il/wp-content/uploads/2014/08/publication_574.pdf>`_.

Here is the meaning of this example: ``2 ; 4,4 ; 1,2 ; 1,2``

- That's a two-level cluster (thus the initial ``2``).
- Routers are connected to 4 elements below them, regardless of their
  level. Thus the ``4,4`` component that is used as
  ``#downlinks``. This means that the hosts are grouped by 4 on a
  given router, and that there are 4 level-1 routers (in the middle of
  the figure).
- Hosts are connected to only 1 router above them, while these routers
  are connected to 2 routers above them (thus the ``1,2`` used as
  ``#uplinks``).
- Hosts have only one link to their router, while every path between a
  level-1 router and a level-2 router uses 2 parallel links. Thus the
  ``1,2`` that is used as ``link count``.

.. image:: ../../examples/platforms/cluster_fat_tree.svg

.. literalinclude:: ../../examples/platforms/cluster_fat_tree.xml

Dragonfly Cluster
.................

This topology was introduced to further reduce the number of links
while maintaining a high bandwidth for local communications. To model
this in SimGrid, pass a ``topology="DRAGONFLY"`` attribute to your
cluster. It's based on the implementation of the topology used on
Cray XC systems, described in the paper
`Cray Cascade: A scalable HPC system based on a Dragonfly network <https://dl.acm.org/citation.cfm?id=2389136>`_.

The system description follows the format ``topo_parameters=#groups;#chassis;#routers;#nodes``.
For example, ``3,4 ; 3,2 ; 3,1 ; 2``:

- ``3,4``: There are 3 groups with 4 links between each (blue level).
  Links to the nth group are attached to the nth router of the group
  in our implementation.
- ``3,2``: In each group, there are 3 chassis with 2 links between each nth router
  of each group (black level).
- ``3,1``: In each chassis, 3 routers are connected together with a single link
  (green level).
- ``2``: Each router has two nodes attached (single link).

.. image:: ../../examples/platforms/cluster_dragonfly.svg

.. literalinclude:: ../../examples/platforms/cluster_dragonfly.xml

Final notes
...........

We only glanced over the abilities offered by SimGrid to describe the
platform topology. Other networking zones model non-HPC platforms
(such as wide area networks, ISP networks comprising set-top boxes, or
even your own routing schema). You can interconnect several networking
zones in your platform to form a tree of zones, which is both a time-
and memory-efficient representation of distributed platforms. Please
head to the dedicated :ref:`documentation <platform>` for more
information.

It is time to start using SMPI yourself. For that, you first need to
install it somehow, and then you will need an MPI application to play with.

Using Docker
............

The easiest way to take the tutorial is to use the dedicated Docker
image. Once you have `installed Docker itself
<https://docs.docker.com/install/>`_, simply do the following:

.. code-block:: shell

   docker pull simgrid/tuto-smpi
   docker run -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash

This will start a new container with all you need to take this
tutorial, and create an ``smpi-tutorial`` directory in your home on
your host machine that will be visible as ``/source/tutorial`` within the
container. You can then edit the files you want with your favorite
editor in ``~/smpi-tutorial``, and compile them within the
container to enjoy the provided dependencies.

Any change to the container out of ``/source/tutorial`` will be lost
when you log out of the container, so don't edit the other files!

All needed dependencies are already installed in this container
(SimGrid, the C/C++/Fortran compilers, make, pajeng, R and pajengr). Since
Vite is only optional in this tutorial, it is not installed, to keep the
image small.

The container also includes the example platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. These files are available under
``/source/simgrid-template-smpi`` in the image. You should copy them to
your working directory when you first log in:

.. code-block:: shell

   cp -r /source/simgrid-template-smpi/* /source/tutorial

Using your Computer Natively
............................

To take the tutorial on your machine, you first need to :ref:`install
SimGrid <install>`, the C/C++/Fortran compilers, and also ``pajeng`` to
visualize the traces. You may want to install `Vite
<http://vite.gforge.inria.fr/>`_ to get a first glance at the
traces. The provided code template requires ``make`` to compile. On
Debian or Ubuntu, for example, you can get them as follows:

.. code-block:: shell

   sudo apt install simgrid pajeng make gcc g++ gfortran vite

For R analysis of the produced traces, you may want to install R
and the `pajengr <https://github.com/schnorr/pajengr#installation/>`_ package.

.. code-block:: shell

   sudo apt install r-base r-cran-devtools cmake flex bison
   Rscript -e "library(devtools); install_github('schnorr/pajengr');"

To take this tutorial, you will also need the platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. Just clone `this repository
<https://framagit.org/simgrid/simgrid-template-smpi>`_ to get them all:

.. code-block:: shell

   git clone https://framagit.org/simgrid/simgrid-template-smpi.git
   cd simgrid-template-smpi/

If you struggle with the compilation, then you should double-check
your :ref:`SimGrid installation <install>`. If needed, please refer to
the :ref:`Troubleshooting your Project Setup
<install_yours_troubleshooting>` section.

Lab 0: Hello World
------------------

It is time to simulate your first MPI program. Use the simplistic
`roundtrip.c
<https://framagit.org/simgrid/simgrid-template-smpi/raw/master/roundtrip.c?inline=false>`_
that comes with the template.

.. literalinclude:: /tuto_smpi/roundtrip.c

Compiling and Executing
.......................

Compiling the program is straightforward (double-check your
:ref:`SimGrid installation <install>` if you get an error message):

.. code-block:: shell

   $ smpicc -O3 roundtrip.c -o roundtrip

Once compiled, you can simulate the execution of this program on 16
nodes from the ``cluster_crossbar.xml`` platform as follows:

.. code-block:: shell

   $ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile ./roundtrip

- The ``-np 16`` option, just like in regular MPI, specifies the
  number of MPI processes to use.
- The ``-hostfile cluster_hostfile`` option, just like in regular
  MPI, specifies the host file. If you omit this option, ``smpirun``
  will deploy the application on the first machines of your platform.
- The ``-platform cluster_crossbar.xml`` option, **which doesn't exist
  in regular MPI**, specifies the platform configuration to be
  simulated.
- At the end of the line, one finds the executable name and
  command-line arguments (if any -- roundtrip does not expect any arguments).

Feel free to tweak the content of the XML platform file and the
program to see the effect on the simulated execution time. It may be
easier to compare the executions with the extra option
``--cfg=smpi/display-timing:yes``. Note that the simulation accounts
for realistic network protocol effects and MPI implementation
effects. As a result, you may see "unexpected behavior" like in the
real world (e.g., sending a message 1 byte larger may lead to a
significantly higher execution time).

Lab 1: Visualizing LU
---------------------

We will now simulate a larger application: the LU benchmark of the NAS
suite. The version provided in the code template was modified to
compile with SMPI instead of the regular MPI. Compare the difference
between the original ``config/make.def.template`` and the
``config/make.def`` that was adapted to SMPI. We use ``smpiff`` and
``smpicc`` as compilers, and don't pass any additional library.

Now compile and execute the LU benchmark, class S (i.e., for `small
instances <https://www.nas.nasa.gov/publications/npb_problem_sizes.html>`_)
with 4 nodes:

.. code-block:: shell

   $ make lu NPROCS=4 CLASS=S
   $ smpirun -np 4 -platform ../cluster_backbone.xml bin/lu.S.4

To get a better understanding of what is going on, activate the
visualization tracing, and convert the produced trace for later
analysis:

.. code-block:: shell

   smpirun -np 4 -platform ../cluster_backbone.xml -trace --cfg=tracing/filename:lu.S.4.trace bin/lu.S.4

You can then produce a Gantt chart with the following R chunk. You can
either copy/paste it into an R session, or `turn it into an executable Rscript
<https://swcarpentry.github.io/r-novice-inflammation/05-cmdline/>`_ to
run it again and again.

.. code-block:: r

   # pajengr parses the Paje trace; ggplot2 draws the chart
   library(pajengr)
   library(ggplot2)

   df_state = pajeng_read("lu.S.4.trace")
   names(df_state) = c("Type", "Rank", "Container", "Start", "End", "Duration", "Level", "State")
   df_state = df_state[!(names(df_state) %in% c("Type", "Container", "Level"))]
   df_state$Rank = as.numeric(gsub("rank-", "", df_state$Rank))

   # Draw the Gantt Chart and print it (this writes Rplots.pdf when run with Rscript)
   gc = ggplot(data=df_state) + geom_rect(aes(xmin=Start, xmax=End, ymin=Rank, ymax=Rank+1, fill=State))
   print(gc)

This produces a file called ``Rplots.pdf`` with the following
content. You can find more visualization examples `online
<https://simgrid.org/contrib/R_visualization.html>`_.

.. image:: /tuto_smpi/img/lu.S.4.png

Lab 2: Tracing and Replay of LU
-------------------------------

Now compile and execute the LU benchmark, class A, with 32 nodes.

.. code-block:: shell

   $ make lu NPROCS=32 CLASS=A

This takes several minutes to simulate, because all code from all
processes has to be really executed, and everything is serialized.

SMPI provides several methods to speed things up. One of them is to
capture a time-independent trace of the running application and
replay it on a different platform with the same number of nodes. The
replay is much faster than live simulation, as the computations are
skipped (the application must be network-dependent for this to work).

You can even generate the trace during the live simulation as follows:

.. code-block:: shell

   $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32

The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. You can replay this trace with SMPI thanks to ``smpirun``.
For example, the following command replays the trace on a different platform:

.. code-block:: shell

   $ smpirun -np 32 -platform ../cluster_crossbar.xml -hostfile ../cluster_hostfile -replay LU.A.32

All the outputs are gone, as the application is not really executed
here: its trace is simply replayed. But if you visualize the live
simulation and the replay, you will see that the behavior is
unchanged. The simulation does not run much faster on this particular
example, but this becomes very interesting when your application
is computationally hungry.

..
   TODO: The commands should be separated and executed by some CI to make sure
   the documentation is up-to-date.

Lab 3: Execution Sampling on Matrix Multiplication example
-----------------------------------------------------------

The second method to speed up simulations is to sample the computation
parts in the code. This means that the person doing the simulation
needs to know the application and identify parts that are compute
intensive and take time, while being regular enough not to ruin
simulation accuracy. Furthermore, there should not be any MPI calls
inside such parts of the code.

For this part, use the `gemm_mpi.cpp
<https://gitlab.com/PRACE-4IP/CodeVault/raw/master/hpc_kernel_samples/dense_linear_algebra/gemm/mpi/src/gemm_mpi.cpp>`_
example, which is provided by the `PRACE Codevault repository
<http://www.prace-ri.eu/prace-codevault/>`_.

The computing part of this example is the matrix multiplication routine.

.. literalinclude:: /tuto_smpi/gemm_mpi.cpp

.. code-block:: shell

   $ smpicc -O3 gemm_mpi.cpp -o gemm
   $ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/running-power:1000000000 ./gemm

This should end quite quickly, as the size of each matrix is only 1000x1000.
But what happens if we want to simulate larger runs?
Replace the size by 2000, 3000, and try again.

The simulation time increases a lot, even though no additional MPI calls are
performed; only the computation grows.

The ``--cfg=smpi/display-timing`` option gives more details about the execution,
and advises you to use sampling if the time spent in computing loops seems too high.

The ``--cfg=smpi/running-power:1000000000`` option sets the speed of the processor used for
running the simulation. Here we say that its speed is the same as one of the
processors we are simulating (1Gf), so that 1 second of computation is injected
as 1 second in the simulation.

.. code-block:: shell

   [5.568556] [smpi_kernel/INFO] Simulated time: 5.56856 seconds.

   The simulation took 24.9403 seconds (after parsing and platform setup)
   24.0764 seconds were actual computation of the application
   [5.568556] [smpi_kernel/INFO] More than 75% of the time was spent inside the application code.
   You may want to use sampling functions or trace replay to reduce this.

So in our case (size 3000), the simulation ran for 25 seconds, and the simulated time was 5.57s at the end.
Computation by itself took 24 seconds, and can quickly grow with larger sizes
(as computation is really performed, there will be variability between similar runs).

SMPI provides sampling macros to accelerate simulation by sampling iterations
of large computation loops, skipping the computation after a certain number of
iterations, or once the sampling is stable enough.

The two macros only slightly differ:

- ``SMPI_SAMPLE_GLOBAL``: the specified number of samples is produced by all processors
- ``SMPI_SAMPLE_LOCAL``: each process executes a specified number of iterations

So if the size of the computed part varies between processes (imbalance),
it's safer to use the LOCAL one.

To use one of them, replace the outer for loop of the multiply routine:

.. code-block:: c

   for (int i = istart; i <= iend; ++i)

by:

.. code-block:: c

   SMPI_SAMPLE_GLOBAL(int i = istart, i <= iend, ++i, 10, 0.005)

The first three parameters are the ones from the loop, while the last two are for sampling.
They mean that at most 10 iterations will be performed, and that the sampling phase can be
exited earlier if a certain stability is reached after fewer samples.
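
For illustration, here is a minimal sketch of what the sampled multiply
routine could look like. The routine name, the flat array layout, and the
explicit ``#include <smpi/smpi.h>`` are assumptions made to keep the example
self-contained (the include may be redundant when compiling with
``smpicc``); only the macro line itself comes from the tutorial.

.. code-block:: c

   #include <smpi/smpi.h> /* assumed location of the SMPI_SAMPLE_* macros */

   /* Sketch of a sampled matrix multiplication: at most 10 outer iterations
    * are really executed (or fewer if the measured time stabilizes within
    * 0.5%); the remaining iterations are skipped and their duration is
    * extrapolated from the samples. */
   static void multiply_sampled(const double* a, const double* b, double* c,
                                int size, int istart, int iend)
   {
     SMPI_SAMPLE_GLOBAL(int i = istart, i <= iend, ++i, 10, 0.005)
     {
       for (int j = 0; j < size; j++)
         for (int k = 0; k < size; k++)
           c[i * size + j] += a[i * size + k] * b[k * size + j];
     }
   }

``SMPI_SAMPLE_LOCAL`` is used with the same syntax; the difference is that
each rank then counts its own samples, which is safer when the amount of
work per rank is imbalanced.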

Now run the code again with various sizes and parameters and check the time taken for the
simulation, as well as the resulting simulated time.

.. code-block:: shell

   [5.575691] [smpi_kernel/INFO] Simulated time: 5.57569 seconds.
   The simulation took 1.23698 seconds (after parsing and platform setup)
   0.0319454 seconds were actual computation of the application

In this case the simulation only took 1.2 seconds, while the simulated time
stayed almost identical.

Obviously, the results of the computation will be altered, as most of it is skipped.
These macros therefore cannot be used when the results are critical for the application's
behavior (for instance, convergence estimation will be wrong on some codes).

Lab 4: Memory folding on large allocations
------------------------------------------

Another issue that can be encountered when simulating with SMPI is a lack of memory.
Indeed, we are executing all MPI processes on a single node, which can lead to crashes.
We will use the DT benchmark of the NAS suite to illustrate how to avoid such issues.

With 85 processes and class C, the simulated DT benchmark will try to allocate 35GB of
memory, which may not be available on the node you are using.

To avoid this, we can simply replace the largest calls to malloc and free with calls
to ``SMPI_SHARED_MALLOC`` and ``SMPI_SHARED_FREE``.
This means that all processes will share one single instance of this buffer.
As for sampling, the results will be altered, and this should not be used for control structures.
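
As a hedged illustration (the helper names and the explicit include are made
up for this sketch; only the two macros come from the tutorial), replacing a
plain ``malloc``/``free`` pair could look like this:

.. code-block:: c

   #include <smpi/smpi.h> /* assumed location of the SMPI_SHARED_* macros */
   #include <stdlib.h>

   /* Sketch: allocate a large scratch buffer that every simulated rank can
    * share, since its content never drives the control flow. All ranks then
    * map the same memory, so the footprint no longer grows with -np. */
   static double* allocate_scratch(size_t count)
   {
     return SMPI_SHARED_MALLOC(count * sizeof(double)); /* instead of malloc() */
   }

   static void release_scratch(double* buffer)
   {
     SMPI_SHARED_FREE(buffer); /* instead of free() */
   }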

In the DT example, there are three different calls to malloc in the file, and one of them is
for a structure that is actually needed. Find it, and replace the two other calls with
``SMPI_SHARED_MALLOC`` (there is only one free to replace for both of them).

Once done, you can now run:

.. code-block:: shell

   $ make dt NPROCS=85 CLASS=C
   $ smpirun -np 85 -platform ../cluster_backbone.xml bin/dt.C.x BH

The simulation should finish without swapping or crashing (ignore the warning about the return value).

If control structures are also problematic, you can use the
``SMPI_PARTIAL_SHARED_MALLOC(size, offsets, offsetscount)`` macro, which shares only
specific parts of the structure between processes and keeps private memory for the
important parts. Such a buffer can be freed afterwards with ``SMPI_SHARED_FREE``.
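
The following sketch shows one plausible use. It assumes that the offsets
array lists the byte ranges to be shared as consecutive begin/end pairs, and
the 64-byte private header is an invented example; check the SMPI reference
documentation for the exact semantics before relying on it.

.. code-block:: c

   #include <smpi/smpi.h> /* assumed location of the SMPI_* allocation macros */
   #include <stdlib.h>

   /* Sketch: keep the first 64 bytes (say, a header that drives control flow)
    * private to each rank, and share the bulk of the buffer between ranks.
    * The offsets array is assumed to list shared ranges as begin/end pairs. */
   static char* allocate_mostly_shared(size_t size)
   {
     size_t shared_ranges[] = {64, size}; /* one shared block: bytes [64, size) */
     return SMPI_PARTIAL_SHARED_MALLOC(size, shared_ranges, 1);
   }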

Further Readings
----------------

You may also be interested in the `SMPI reference article
<https://hal.inria.fr/hal-01415484>`_ or these `introductory slides
<http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. The :ref:`SMPI
reference documentation <app_smpi>` covers much more content than
this tutorial.

Finally, we regularly use SimGrid in our teachings on MPI. This way,
our students can experiment with platforms that they do not have access
to, and the associated visualization tools help them to understand
their work. The whole material is available online, in a separate
project: the `SMPI CourseWare <https://simgrid.github.io/SMPI_CourseWare/>`_.

.. LocalWords:  SimGrid