Simulating MPI Applications
===========================

SimGrid can not only :ref:`simulate algorithms <usecase_simalgo>`, but
it can also be used to execute real MPI applications on top of
virtual, simulated platforms with the SMPI module. Even complex
C/C++/F77/F90 applications should run out of the box in this
environment. In fact, almost all proxy apps provided by the `ExaScale
Project <https://proxyapps.exascaleproject.org/>`_ only require minor
modifications to `run on top of SMPI
<https://framagit.org/simgrid/SMPI-proxy-apps>`_.

This setting lets you debug your MPI applications in a perfectly
reproducible environment, with no Heisenbugs. Enjoy the full
clairvoyance provided by the simulator while running what-if analyses
on platforms that are still to be built! Several `production-grade MPI
applications
<https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
use SimGrid for their integration and performance testing.

MPI 2.2 is already partially covered: over 160 primitives are
supported. Some parts of the standard are still missing: MPI-IO, MPI-3
collectives, spawning ranks, inter-communicators, and some others. If
one of the functions you use is still missing, please drop us an
email. We may find the time to implement it for you.

Multi-threading support is very limited in SMPI. Only funneled
applications are supported: at most one thread per rank can issue any
MPI calls. For better timing predictions, your application should even
be completely mono-threaded. Using OpenMP (or pthreads directly) may
greatly decrease SimGrid's predictive power. That may still be OK if
you only plan to debug your application in a reproducible setup,
without any performance-related analysis.

In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would in
the real system, communication calls are intercepted and achieved by
the simulator.

To start using SMPI, you just need to compile your application with
``smpicc`` instead of ``mpicc``, or with ``smpiff`` instead of
``mpiff``, or with ``smpicxx`` instead of ``mpicxx``. Then, the only
difference between the classical ``mpirun`` and the new ``smpirun`` is
that it requires a new parameter ``-platform`` with a file describing
the simulated platform on which your application shall run.

Internally, all ranks of your application are executed as threads of a
single Unix process. That's not a problem even if your application has
global variables, because ``smpirun`` loads one application instance
per MPI rank, as if each were a separate dynamic library. The MPI
communication calls are then implemented using SimGrid: data is
exchanged through memory copy, while the simulator's performance
models are used to predict the time taken by each communication. Any
computation occurring between two MPI calls is benchmarked, and the
corresponding time is reported into the simulator.

.. image:: /tuto_smpi/img/big-picture.svg

Describing Your Platform
------------------------

As an SMPI user, you are supposed to provide a description of your
simulated platform, that is, mostly a set of simulated hosts and
network links with some performance characteristics. SimGrid provides
plenty of :ref:`documentation <platform>` and examples (in the
`examples/platforms <https://framagit.org/simgrid/simgrid/tree/master/examples/platforms>`_
source directory), and this section only shows a small set of
introductory examples.

Feel free to skip this section if you want to jump right away to the
usage examples.

Simple Example with 3 hosts
...........................

Imagine you want to describe a little platform with three hosts,
interconnected as follows:

.. image:: /tuto_smpi/3hosts.png

This can be done with the following platform file, which considers the
simulated platform as a graph of hosts and network links.

.. literalinclude:: /tuto_smpi/3hosts.xml

The basic elements (with :ref:`pf_tag_host` and
:ref:`pf_tag_link`) are described first, and then the routes between
any pair of hosts are explicitly given with :ref:`pf_tag_route`.

Any host must be given a computational speed in flops while links must
be given a latency and a bandwidth. You can write 1Gf for
1,000,000,000 flops (the full list of units is in the reference guide
of :ref:`pf_tag_host` and :ref:`pf_tag_link`).

Routes defined with :ref:`pf_tag_route` are symmetrical by default,
meaning that the list of traversed links from A to B is the same as
from B to A. Explicitly define non-symmetrical routes if you prefer.

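
For instance, a pair of explicitly asymmetric routes could look as
follows (a sketch: the host and link ids are made up, and must match
elements declared elsewhere in the platform file):

.. code-block:: xml

   <route src="host1" dst="host2" symmetrical="NO">
     <link_ctn id="link_up"/>
   </route>
   <route src="host2" dst="host1" symmetrical="NO">
     <link_ctn id="link_down"/>
   </route>
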
Cluster with a Crossbar
.......................

A very common parallel computing platform is a homogeneous cluster in
which hosts are interconnected via a crossbar switch with as many
ports as hosts, so that any disjoint pairs of hosts can communicate
concurrently at full speed. For instance:

.. literalinclude:: ../../examples/platforms/cluster_crossbar.xml

One specifies a name prefix and suffix for each host, and then gives
an integer range. In the example the cluster contains 65535 hosts (!),
named ``node-0.simgrid.org`` to ``node-65534.simgrid.org``. All hosts
have the same power (1 Gflop/sec) and are connected to the switch via
links with the same bandwidth (125 MBytes/sec) and latency (50
microseconds).

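
Such a cluster boils down to a single ``<cluster>`` tag; here is a
minimal sketch mirroring the values above (the ``id`` is made up):

.. code-block:: xml

   <cluster id="my_cluster" prefix="node-" suffix=".simgrid.org"
            radical="0-65534" speed="1Gf" bw="125MBps" lat="50us"/>
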
Cluster with a Shared Backbone
..............................

Another popular model for a parallel platform is that of a set of
homogeneous hosts connected to a shared communication medium, a
backbone, with some finite bandwidth capacity and on which
communicating host pairs can experience contention. For instance:

.. literalinclude:: ../../examples/platforms/cluster_backbone.xml

The only differences with the crossbar cluster above are the ``bb_bw``
and ``bb_lat`` attributes that specify the backbone characteristics
(here, a 500 microseconds latency and a 2.25 GByte/sec
bandwidth). This link is used for every communication within the
cluster. The route from ``node-0.simgrid.org`` to ``node-1.simgrid.org``
counts 3 links: the private link of ``node-0.simgrid.org``, the
backbone, and the private link of ``node-1.simgrid.org``.

Torus Cluster
.............

Many HPC facilities use torus clusters to reduce sharing and
performance loss on concurrent internal communications. Modeling this
in SimGrid is very easy. Simply add a ``topology="TORUS"`` attribute
to your cluster. Configure it with the ``topo_parameters="X,Y,Z"``
attribute, where ``X``, ``Y`` and ``Z`` are the dimensions of your
torus.

.. image:: ../../examples/platforms/cluster_torus.svg

.. literalinclude:: ../../examples/platforms/cluster_torus.xml

Note that in this example, we used ``loopback_bw`` and
``loopback_lat`` to specify the characteristics of the loopback link
of each node (i.e., the link allowing each node to communicate with
itself). We could have done so in the previous examples too. When no
loopback is given, the communication from a node to itself is handled
as if it occurred between two distinct nodes: it goes twice through
the private link and through the backbone (if any).

Fat-Tree Cluster
................

This topology was introduced to reduce the number of links in the
cluster (and thus its price) while maintaining a high bisection
bandwidth and a relatively low diameter. To model this in SimGrid,
pass a ``topology="FAT_TREE"`` attribute to your cluster. The
``topo_parameters=#levels;#downlinks;#uplinks;link count`` follows the
semantics introduced in `Figure 1B of this article
<http://webee.eedev.technion.ac.il/wp-content/uploads/2014/08/publication_574.pdf>`_.

Here is the meaning of this example: ``2 ; 4,4 ; 1,2 ; 1,2``

- That's a two-level cluster (thus the initial ``2``).
- Routers are connected to 4 elements below them, regardless of their
  level. Thus the ``4,4`` component that is used as
  ``#downlinks``. This means that the hosts are grouped by 4 on a
  given router, and that there are 4 level-1 routers (in the middle of
  the figure).
- Hosts are connected to only 1 router above them, while these routers
  are connected to 2 routers above them (thus the ``1,2`` used as
  ``#uplinks``).
- Hosts have only one link to their router, while every path between
  level-1 and level-2 routers uses 2 parallel links. Thus the
  ``1,2`` that is used as ``link count``.

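
Putting it together, a fat-tree cluster with these parameters could be
sketched as follows (the ``id`` and performance values are made up;
``2;4,4;1,2;1,2`` yields 16 hosts):

.. code-block:: xml

   <cluster id="fat_tree_cluster" prefix="node-" suffix=".simgrid.org"
            radical="0-15" speed="1Gf" bw="125MBps" lat="50us"
            topology="FAT_TREE" topo_parameters="2;4,4;1,2;1,2"/>
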
.. image:: ../../examples/platforms/cluster_fat_tree.svg

.. literalinclude:: ../../examples/platforms/cluster_fat_tree.xml

Dragonfly Cluster
.................

This topology was introduced to further reduce the number of links
while maintaining a high bandwidth for local communications. To model
this in SimGrid, pass a ``topology="DRAGONFLY"`` attribute to your
cluster. It's based on the implementation of the topology used on
Cray XC systems, described in the paper
`Cray Cascade: A scalable HPC system based on a Dragonfly network <https://dl.acm.org/citation.cfm?id=2389136>`_.

The system description follows the format
``topo_parameters=#groups;#chassis;#routers;#nodes``.
For example, ``3,4 ; 3,2 ; 3,1 ; 2``:

- ``3,4``: There are 3 groups with 4 links between each (blue level).
  Links to the nth group are attached to the nth router of the group
  in our implementation.
- ``3,2``: In each group, there are 3 chassis with 2 links between
  each nth router of each group (black level).
- ``3,1``: In each chassis, 3 routers are connected together with a
  single link.
- ``2``: Each router has two nodes attached (single link).

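
These parameters translate into a cluster declaration like the
following sketch (the ``id`` and performance values are made up; 3
groups x 3 chassis x 3 routers x 2 nodes gives 54 hosts):

.. code-block:: xml

   <cluster id="dragonfly_cluster" prefix="node-" suffix=".simgrid.org"
            radical="0-53" speed="1Gf" bw="125MBps" lat="50us"
            topology="DRAGONFLY" topo_parameters="3,4;3,2;3,1;2"/>
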
.. image:: ../../examples/platforms/cluster_dragonfly.svg

.. literalinclude:: ../../examples/platforms/cluster_dragonfly.xml

Final Notes
...........

We only glanced over the abilities offered by SimGrid to describe the
platform topology. Other networking zones model non-HPC platforms
(such as wide area networks, ISP networks comprising set-top boxes, or
even your own routing schema). You can interconnect several networking
zones in your platform to form a tree of zones, which is both a time-
and memory-efficient representation of distributed platforms. Please
head to the dedicated :ref:`documentation <platform>` for more
details.

Using SMPI
----------

It is time to start using SMPI yourself. For that, you first need to
install it somehow, and then you will need an MPI application to play
with.

Using Docker
............

The easiest way to take the tutorial is to use the dedicated Docker
image. Once you have `installed Docker itself
<https://docs.docker.com/install/>`_, simply do the following:

.. code-block:: shell

   docker pull simgrid/tuto-smpi
   docker run -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash

This will start a new container with all you need to take this
tutorial, and create a ``smpi-tutorial`` directory in your home on
your host machine that will be visible as ``/source/tutorial`` within
the container. You can then edit the files you want with your favorite
editor in ``~/smpi-tutorial``, and compile them within the container
to enjoy the provided dependencies.

.. warning::

   Any change to the container out of ``/source/tutorial`` will be lost
   when you log out of the container, so don't edit the other files!

All needed dependencies are already installed in this container
(SimGrid, the C/C++/Fortran compilers, make, pajeng, R and pajengr).
Since Vite is only optional in this tutorial, it is not installed to
reduce the image size.

The container also includes the example platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. These files are available under
``/source/simgrid-template-smpi`` in the image. You should copy them
to your working directory when you first log in:

.. code-block:: shell

   cp -r /source/simgrid-template-smpi/* /source/tutorial

Using your Computer Natively
............................

To take the tutorial on your machine, you first need to :ref:`install
SimGrid <install>`, the C/C++/Fortran compilers and also ``pajeng`` to
visualize the traces. You may want to install `Vite
<http://vite.gforge.inria.fr/>`_ to get a first glance at the
traces. The provided code template requires make to compile. On
Debian and Ubuntu for example, you can get them as follows:

.. code-block:: shell

   sudo apt install simgrid pajeng make gcc g++ gfortran vite

For the R analysis of the produced traces, you may want to install R
and the `pajengr <https://github.com/schnorr/pajengr#installation/>`_ package:

.. code-block:: shell

   sudo apt install r-base r-cran-devtools cmake flex bison
   Rscript -e "library(devtools); install_github('schnorr/pajengr');"

To take this tutorial, you will also need the platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. Just clone `this repository
<https://framagit.org/simgrid/simgrid-template-smpi>`_ to get them all:

.. code-block:: shell

   git clone https://framagit.org/simgrid/simgrid-template-smpi.git
   cd simgrid-template-smpi/

If you struggle with the compilation, then you should double check
your :ref:`SimGrid installation <install>`. If needed, please refer to
the :ref:`Troubleshooting your Project Setup
<install_yours_troubleshooting>` section.

Lab 0: Hello World
------------------

It is time to simulate your first MPI program. Use the simplistic
`roundtrip.c
<https://framagit.org/simgrid/simgrid-template-smpi/raw/master/roundtrip.c?inline=false>`_
that comes with the template.

.. literalinclude:: /tuto_smpi/roundtrip.c

Compiling and Executing
.......................

Compiling the program is straightforward (double check your
:ref:`SimGrid installation <install>` if you get an error message):

.. code-block:: shell

   $ smpicc -O3 roundtrip.c -o roundtrip

Once compiled, you can simulate the execution of this program on 16
nodes from the ``cluster_crossbar.xml`` platform as follows:

.. code-block:: shell

   $ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile ./roundtrip

- The ``-np 16`` option, just like in regular MPI, specifies the
  number of MPI processes to use.
- The ``-hostfile cluster_hostfile`` option, just like in regular
  MPI, specifies the host file. If you omit this option, ``smpirun``
  will deploy the application on the first machines of your platform.
- The ``-platform cluster_crossbar.xml`` option, **which doesn't exist
  in regular MPI**, specifies the platform configuration to be
  simulated.
- At the end of the line, one finds the executable name and
  command-line arguments (if any -- roundtrip does not expect any
  arguments).

Feel free to tweak the content of the XML platform file and the
program to see the effect on the simulated execution time. It may be
easier to compare the executions with the extra option
``--cfg=smpi/display-timing:yes``. Note that the simulation accounts
for realistic network protocol effects and MPI implementation
effects. As a result, you may see "unexpected behavior" like in the
real world (e.g., sending a message 1 byte larger may lead to a
significantly higher execution time).

Lab 1: Visualizing LU
---------------------

We will now simulate a larger application: the LU benchmark of the NAS
suite. The version provided in the code template was modified to
compile with SMPI instead of the regular MPI. Compare the difference
between the original ``config/make.def.template`` and the
``config/make.def`` that was adapted to SMPI. We use ``smpiff`` and
``smpicc`` as compilers, and don't pass any additional library.

Now compile and execute the LU benchmark, class S (i.e., for `small
instances
<https://www.nas.nasa.gov/publications/npb_problem_sizes.html>`_) with
4 nodes:

.. code-block:: shell

   $ make lu NPROCS=4 CLASS=S
   $ smpirun -np 4 -platform ../cluster_backbone.xml bin/lu.S.4

To get a better understanding of what is going on, activate the
visualization tracing, and convert the produced trace for later
analysis:

.. code-block:: shell

   smpirun -np 4 -platform ../cluster_backbone.xml -trace --cfg=tracing/filename:lu.S.4.trace bin/lu.S.4

You can then produce a Gantt chart with the following R chunk. You can
either copy/paste it in an R session, or `turn it into an executable
Rscript
<https://swcarpentry.github.io/r-novice-inflammation/05-cmdline/>`_ to
run it again and again.

.. code-block:: R

   library(tidyverse)
   library(pajengr)

   df_state = pajeng_read("lu.S.4.trace")
   names(df_state) = c("Type", "Rank", "Container", "Start", "End", "Duration", "Level", "State")
   df_state = df_state[!(names(df_state) %in% c("Type", "Container", "Level"))]
   df_state$Rank = as.numeric(gsub("rank-", "", df_state$Rank))

   # Draw the Gantt Chart
   gc = ggplot(data=df_state) + geom_rect(aes(xmin=Start, xmax=End, ymin=Rank, ymax=Rank+1, fill=State))

   # Produce a PDF file with the plot
   plot(gc)

This produces a file called ``Rplots.pdf`` with the following
content. You can find more visualization examples `online
<https://simgrid.org/contrib/R_visualization.html>`_.

.. image:: /tuto_smpi/img/lu.S.4.png

Lab 2: Tracing and Replay of LU
-------------------------------

Now compile and execute the LU benchmark, class A, with 32 nodes.

.. code-block:: shell

   $ make lu NPROCS=32 CLASS=A

This takes several minutes to simulate, because all the code from all
processes has to be really executed, and everything is serialized.

SMPI provides several methods to speed things up. One of them is to
capture a time-independent trace of the running application, and
replay it on a different platform with the same number of nodes. The
replay is much faster than live simulation, as the computations are
skipped (the application must be network-dependent for this to work).

You can even generate the trace during the live simulation as follows:

.. code-block:: shell

   $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32

The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. You can replay this trace with SMPI thanks to
``smpirun``. For example, the following command replays the trace on a
different platform:

.. code-block:: shell

   $ smpirun -np 32 -platform ../cluster_crossbar.xml -hostfile ../cluster_hostfile -replay LU.A.32

All the outputs are gone, as the application is not really simulated
here. Its trace is simply replayed. But if you visualize the live
simulation and the replay, you will see that the behavior is
unchanged. The simulation does not run much faster on this particular
example, but this becomes very interesting when your application
is computationally hungry.

Lab 3: Execution Sampling on Matrix Multiplication example
----------------------------------------------------------

The second method to speed up simulations is to sample the computation
parts in the code. This means that the person doing the simulation
needs to know the application and identify the parts that are compute
intensive and take time, while being regular enough not to ruin the
simulation accuracy. Furthermore, there should not be any MPI calls
inside such parts of the code.

Use for this part the `gemm_mpi.cpp
<https://gitlab.com/PRACE-4IP/CodeVault/raw/master/hpc_kernel_samples/dense_linear_algebra/gemm/mpi/src/gemm_mpi.cpp>`_
example, which is provided by the `PRACE CodeVault repository
<http://www.prace-ri.eu/prace-codevault/>`_.

The computing part of this example is the matrix multiplication
routine:

.. literalinclude:: /tuto_smpi/gemm_mpi.cpp

.. code-block:: shell

   $ smpicc -O3 gemm_mpi.cpp -o gemm
   $ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/running-power:1000000000 ./gemm

This should end quite quickly, as the size of each matrix is only
1000x1000. But what happens if we want to simulate larger runs?
Replace the size by 2000, 3000, and try again.

The simulation time increases a lot, while no more MPI calls are
performed, only computation.

The ``--cfg=smpi/display-timing`` option gives more details about the
execution, and advises using sampling if the time spent in computing
loops seems too high.

The ``--cfg=smpi/running-power:1000000000`` option sets the speed of
the processor used for running the simulation. Here we say that its
speed is the same as one of the processors we are simulating (1Gf), so
that 1 second of computation is injected as 1 second in the
simulation.

.. code-block:: shell

   [5.568556] [smpi_kernel/INFO] Simulated time: 5.56856 seconds.
   The simulation took 24.9403 seconds (after parsing and platform setup)
   24.0764 seconds were actual computation of the application
   [5.568556] [smpi_kernel/INFO] More than 75% of the time was spent inside the application code.
   You may want to use sampling functions or trace replay to reduce this.

So in our case (size 3000) the simulation ran for 25 seconds, while
the simulated time was 5.57s at the end. Computation by itself took 24
seconds, and this can quickly grow with larger sizes (as the
computation is really performed, there will be variability between
similar runs).

SMPI provides sampling macros to accelerate the simulation by sampling
iterations of large computation loops, skipping the computation after
a certain number of iterations, or once the sampling is stable enough.

The two macros only slightly differ:

- ``SMPI_SAMPLE_GLOBAL``: the specified number of samples is produced
  by all processors
- ``SMPI_SAMPLE_LOCAL``: each process executes a specified number of
  iterations

So if the size of the computed part varies between processes
(imbalance), it's safer to use the LOCAL one.

To use one of them, replace the outer for loop of the multiply
routine:

.. code-block:: cpp

   for (int i = istart; i <= iend; ++i)

by:

.. code-block:: cpp

   SMPI_SAMPLE_GLOBAL(int i = istart, i <= iend, ++i, 10, 0.005)

The first three parameters are the ones from the loop, while the last
two are for sampling. They mean that at most 10 iterations will be
performed, and that the sampling phase can be exited earlier if a
certain stability is reached after fewer samples.

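
To get a feel for how the macro slots into the loop header, here is a
standalone sketch. It is not SMPI code: the fallback macro below
simply degrades ``SMPI_SAMPLE_GLOBAL`` to a plain loop so that the
sketch compiles with a regular C compiler, while under ``smpicc`` the
real macro would run only a few sampled iterations and extrapolate the
time of the others; the toy workload is made up for illustration.

.. code-block:: c

   #include <stdio.h>

   /* Illustrative fallback: the real macro comes from SMPI when
      compiling with smpicc. */
   #ifndef SMPI_SAMPLE_GLOBAL
   #define SMPI_SAMPLE_GLOBAL(init, cond, inc, n_samples, threshold) \
     for (init; cond; inc)
   #endif

   /* Toy stand-in for a compute-heavy kernel */
   double compute(int n)
   {
     double acc = 0;
     /* At most 10 sampled iterations, or fewer once the timing
        estimate is stable within 0.005 (effective under SMPI only) */
     SMPI_SAMPLE_GLOBAL(int i = 0, i < n, ++i, 10, 0.005) {
       acc += i * 0.5;
     }
     return acc;
   }

   int main(void)
   {
     printf("%.1f\n", compute(1000));
     return 0;
   }
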
Now run the code again with various sizes and parameters and check the
time taken for the simulation, as well as the resulting simulated
time.

.. code-block:: shell

   [5.575691] [smpi_kernel/INFO] Simulated time: 5.57569 seconds.
   The simulation took 1.23698 seconds (after parsing and platform setup)
   0.0319454 seconds were actual computation of the application

In this case the simulation only took 1.2 seconds, while the simulated
time stayed almost identical.

Obviously the results of the computation will be altered, as most of
it is skipped, so these macros cannot be used when the results are
critical for the application behavior (convergence estimation, for
instance, will be wrong on some codes).

Lab 4: Memory folding on large allocations
------------------------------------------

Another issue that can be encountered when simulating with SMPI is a
lack of memory. Indeed we are executing all MPI processes on a single
node, which can lead to crashes. We will use the DT benchmark of the
NAS suite to illustrate how to avoid such issues.

With 85 processes and class C, the DT simulated benchmark will try to
allocate 35GB of memory, which may not be available on the node you
are using.

To avoid this, we can simply replace the largest calls to malloc and
free by calls to ``SMPI_SHARED_MALLOC`` and ``SMPI_SHARED_FREE``.
This means that all processes will share one single instance of this
buffer. As with sampling, the results will be altered, and this
shouldn't be used for control structures.

For the DT example, there are three different calls to malloc in the
file, and one of them is for a needed structure. Find it and replace
the two other ones by ``SMPI_SHARED_MALLOC`` (there is only one free
to replace for both of them).

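
The change itself is mechanical, as in this standalone sketch. It is
not SMPI code: the buffer name and size are made up, and the fallback
macros below only exist so that the sketch compiles with a regular C
compiler; under ``smpicc``, SMPI maps all such allocations onto one
buffer shared by all ranks.

.. code-block:: c

   #include <stdio.h>
   #include <stdlib.h>

   /* Illustrative fallback: the real macros come from SMPI when
      compiling with smpicc. */
   #ifndef SMPI_SHARED_MALLOC
   #define SMPI_SHARED_MALLOC(size) malloc(size)
   #define SMPI_SHARED_FREE(ptr) free(ptr)
   #endif

   int main(void)
   {
     size_t n = 1 << 20; /* made-up buffer size */
     double *buf = SMPI_SHARED_MALLOC(n * sizeof(double));
     if (buf == NULL)
       return 1;
     /* Under SMPI the contents are shared across ranks, so never use
        such a buffer for control flow. */
     buf[0] = 42.0;
     printf("%g\n", buf[0]);
     SMPI_SHARED_FREE(buf);
     return 0;
   }
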
Once done, you can now run:

.. code-block:: shell

   $ make dt NPROCS=85 CLASS=C
   $ smpirun -np 85 -platform ../cluster_backbone.xml bin/dt.C.x BH

The simulation should now finish without swapping/crashing (ignore the
warning about the return value).

If control structures are also problematic, you can use the
``SMPI_PARTIAL_SHARED_MALLOC(size, offsets, offsetscount)`` macro,
which will share only specific parts of the structure between
processes, and use private memory for the important parts. It can be
freed afterwards with ``SMPI_SHARED_FREE``.

Further Readings
----------------

You may also be interested in the `SMPI reference article
<https://hal.inria.fr/hal-01415484>`_ or these `introductory slides
<http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. The :ref:`SMPI
reference documentation <SMPI_doc>` covers much more content than this
tutorial.

Finally, we regularly use SimGrid in our teachings on MPI. This way,
our students can experiment with platforms that they do not have
access to, and the associated visualization tools help them understand
their work. The whole material is available online, in a separate
project: the `SMPI CourseWare <https://simgrid.github.io/SMPI_CourseWare/>`_.

.. LocalWords:  SimGrid