===============================
SMPI: Simulate MPI Applications
===============================
<object id="TOC" data="graphical-toc.svg" type="image/svg+xml"></object>
window.onload=function() { // Wait for the SVG to be loaded before changing it
  var elem=document.querySelector("#TOC").contentDocument.getElementById("SMPIBox")
  elem.style="opacity:0.93999999;fill:#ff0000;fill-opacity:0.1;stroke:#000000;stroke-width:0.35277778;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1";
SMPI enables the study of MPI applications by emulating them on top of
the SimGrid simulator. This is particularly handy for studying
existing MPI applications within the comfort of the simulator.
To get started with SMPI, you should head to :ref:`the SMPI tutorial
<usecase_smpi>`. You may also want to read the `SMPI reference
article <https://hal.inria.fr/hal-01415484>`_ or these `introductory
slides <http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. If you
are new to MPI, you should first take our online `SMPI CourseWare
<https://simgrid.github.io/SMPI_CourseWare/>`_. It consists of several
projects that progressively introduce the MPI concepts. It uses
SimGrid and SMPI to run the experiments, but the learning
objectives are centered on MPI itself.
Our goal is to enable the study of **unmodified MPI applications**.
Some constructs and features are still missing, but we can probably
add them on demand. If you have already used MPI before, SMPI should feel
very familiar: use ``smpicc`` instead of mpicc, and ``smpirun`` instead
of mpirun. The main difference is that smpirun takes a :ref:`simulated
platform <platform>` as an extra parameter.
For **further scalability**, you may modify your code to speed up your
studies or save memory space. Maximal **simulation accuracy**
requires some specific care from you.
In this mode, your application is actually executed. Every computation
occurs for real while every communication is simulated. In addition,
the executions are automatically benchmarked so that their timings can
be applied within the simulator.
SMPI can also go offline by replaying a trace. :ref:`Trace replay
<SMPI_offline>` is usually way faster than online simulation (because
the computations are skipped), but it can only be applied to applications
with constant execution and communication patterns (for the exact same
If your application is in C, then simply use ``smpicc`` as a
compiler, just like you would use mpicc with other MPI implementations. This
script still calls your default compiler (gcc, clang, ...) and adds
the right compilation flags along the way. If your application is in
C++, Fortran 77 or Fortran 90, use ``smpicxx``,
``smpiff`` or ``smpif90``, respectively.
If you use cmake, set the variables ``MPI_C_COMPILER``, ``MPI_CXX_COMPILER`` and
``MPI_Fortran_COMPILER`` to the full path of smpicc, smpicxx and smpiff (or
smpif90), respectively. Example:

   cmake -DMPI_C_COMPILER=/opt/simgrid/bin/smpicc -DMPI_CXX_COMPILER=/opt/simgrid/bin/smpicxx -DMPI_Fortran_COMPILER=/opt/simgrid/bin/smpiff .
Use the ``smpirun`` script as follows:

   smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program -blah
- ``my_hostfile.txt`` is a classical MPI hostfile (that is, this file
  lists the machines on which the processes must be dispatched, one
- ``my_platform.xml`` is a classical SimGrid platform file. Of course,
  the hosts of the hostfile must exist in the provided platform.
- ``./program`` is the MPI program to simulate, that you compiled with ``smpicc``.
- ``-blah`` is a command-line parameter passed to this program.
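For reference, a hostfile is a plain-text file with one machine name per
line. A minimal sketch (the host names below are placeholders; they must
match ``<host>`` entries of your platform file):

```text
node-0
node-1
node-2
node-3
```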
``smpirun`` accepts other parameters, such as ``-np`` if you don't
want to use all the hosts defined in the hostfile, ``-map`` to display
on which host each rank gets mapped, or ``-trace`` to activate
tracing during the simulation. You can get the full list by running
Finally, you can pass :ref:`any valid SimGrid parameter <options>` to your
program. In particular, you can pass ``--cfg=network/model:ns-3`` to
switch to :ref:`model_ns3`. These parameters should be placed after
the name of your binary on the command line.
...............................
Debugging your Code within SMPI
...............................
If you want to explore the automatic platform and deployment files
that are generated by ``smpirun``, add ``-keep-temps`` to the command
You can also run your simulation within valgrind or gdb using the
following commands. Once in GDB, each MPI rank will be represented as
a regular thread, and you can explore the state of each of them as
.. code-block:: shell

   smpirun -wrapper valgrind ...other args...
   smpirun -wrapper "gdb --args" --cfg=contexts/factory:thread ...other args...
Some shortcuts are available:

- ``-gdb`` is equivalent to ``-wrapper "gdb --args" -keep-temps``, to run within the gdb debugger
- ``-lldb`` is equivalent to ``-wrapper "lldb --" -keep-temps``, to run within the lldb debugger
- ``-vgdb`` is equivalent to ``-wrapper "valgrind --vgdb=yes --vgdb-error=0"
  -keep-temps``, to run within valgrind and allow attaching a debugger
To help locate bottlenecks and the largest allocations in the simulated application,
the ``-analyze`` flag can be passed to smpirun. It activates the
:ref:`smpi/display-timing<cfg=smpi/display-timing>` and
:ref:`smpi/display-allocs<cfg=smpi/display-allocs>` options and provides hints
at the end of the execution.
SMPI also reports leaks of MPI handles (MPI_Comm, MPI_Request, MPI_Datatype)
at the end of the execution. This can help identify memory leaks that can trigger
crashes and slowdowns.
By default, only the number of leaked items detected is displayed.
The :ref:`smpi/list-leaks:n<cfg=smpi/list-leaks>` option can be used to display the
first n leaks encountered and their type. To get more information, running smpirun
with ``-wrapper "valgrind --leak-check=full --track-origins=yes"`` should show
the exact origin of the leaked handles.

Known issue: MPI_Cancel may trigger internal leaks within SMPI.
................................
Simulating Collective Operations
................................
MPI collective operations are crucial to the performance of MPI
applications and must be carefully optimized according to many
parameters. Every existing implementation provides several algorithms
for each collective operation, and selects by default the best suited
one, depending on the message sizes, the number of nodes, the
communicator, or the communication library being used. These
decisions are based on empirical results and theoretical complexity
estimations, and differ widely between MPI implementations. In
most cases, the user can also manually tune the algorithm used for
each collective operation.
SMPI can simulate the behavior of several MPI implementations:
OpenMPI, MPICH, `STAR-MPI <http://star-mpi.sourceforge.net/>`_, and
MVAPICH2. For that, it provides 115 collective algorithms and several
selector algorithms that were collected directly from the source code
of the targeted MPI implementations.
You can switch the automatic selector through the
``smpi/coll-selector`` configuration item. Possible values:

- **ompi**: default selection logic of OpenMPI (version 3.1.2)
- **mpich**: default selection logic of MPICH (version 3.3b)
- **mvapich2**: selection logic of MVAPICH2 (version 1.9) tuned
  on the Stampede cluster
- **impi**: preliminary version of an Intel MPI selector (version
  4.1.3, also tuned for the Stampede cluster). Due to the closed-source
  nature of Intel MPI, some of the algorithms described in the
  documentation are not available; they are replaced by mvapich ones.
- **default**: legacy algorithms used in the earlier days of
  SimGrid. Do not use them for serious performance studies.

.. todo:: default should not even exist.
You can also pick the algorithm used for each collective with the
corresponding configuration item. For example, to use the pairwise
alltoall algorithm, one should add ``--cfg=smpi/alltoall:pair`` to the
command line. This will override the selector (if any) for this algorithm. It
means that the selected algorithm will be used
.. warning:: Some collectives may require specific conditions to be
   executed correctly (for instance, having a communicator with only a
   power-of-two number of nodes), which are currently not enforced by
   SimGrid. Some crashes can be expected while trying these algorithms
   with unusual sizes/parameters.
Most of these are best described in `STAR-MPI's white paper <https://doi.org/10.1145/1183401.1183431>`_.
- default: naive one, by default
- ompi: use openmpi selector for the alltoall operations
- mpich: use mpich selector for the alltoall operations
- mvapich2: use mvapich2 selector for the alltoall operations
- impi: use intel mpi selector for the alltoall operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: described by Bruck et al. in `this paper <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- 2dmesh: organizes the nodes as a two dimensional mesh, and performs allgather
- 3dmesh: adds a third dimension to the previous algorithm
- rdb: recursive doubling: extends the mesh to an nth dimension, each one
- pair: pairwise exchange, only works for power of 2 procs, size-1 steps,
  each process sends to and receives from the same process at each step
- pair_light_barrier: same, with small barriers between steps to avoid
- pair_mpi_barrier: same, with MPI_Barrier used
- pair_one_barrier: only one barrier at the beginning
- ring: size-1 steps, at each step a process sends to process (n+i)%size, and receives from (n-i)%size
- ring_light_barrier: same, with small barriers between some phases to avoid contention
- ring_mpi_barrier: same, with MPI_Barrier used
- ring_one_barrier: only one barrier at the beginning
- basic_linear: posts all receives and all sends,
  starts the communications, and waits for all communications to finish
- mvapich2_scatter_dest: isend/irecv with scattered destinations, posting only a few messages at the same time
- default: naive one, by default
- ompi: use openmpi selector for the alltoallv operations
- mpich: use mpich selector for the alltoallv operations
- mvapich2: use mvapich2 selector for the alltoallv operations
- impi: use intel mpi selector for the alltoallv operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: same as alltoall
- pair: same as alltoall
- pair_light_barrier: same as alltoall
- pair_mpi_barrier: same as alltoall
- pair_one_barrier: same as alltoall
- ring: same as alltoall
- ring_light_barrier: same as alltoall
- ring_mpi_barrier: same as alltoall
- ring_one_barrier: same as alltoall
- ompi_basic_linear: same as alltoall
- default: naive one, by default
- ompi: use openmpi selector for the gather operations
- mpich: use mpich selector for the gather operations
- mvapich2: use mvapich2 selector for the gather operations
- impi: use intel mpi selector for the gather operations
- automatic (experimental): use an automatic self-benchmarking algorithm which will iterate over all implemented versions and output the best
- ompi_basic_linear: basic linear algorithm from openmpi, each process sends to the root
- ompi_binomial: binomial tree algorithm
- ompi_linear_sync: same as basic linear, but with a synchronization at the
  beginning and the message cut into two segments
- mvapich2_two_level: SMP-aware version from MVAPICH. Gather first intra-node (defaults to mpich's gather), and then exchange with only one process/node. Use the mvapich2 selector to change these to tuned algorithms for the Stampede cluster.
- default: naive one, by default
- ompi: use openmpi selector for the barrier operations
- mpich: use mpich selector for the barrier operations
- mvapich2: use mvapich2 selector for the barrier operations
- impi: use intel mpi selector for the barrier operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_linear: all processes send to root
- ompi_two_procs: special case for two processes
- ompi_bruck: nsteps = sqrt(size), at each step, exchange data with rank-2^k and rank+2^k
- ompi_recursivedoubling: recursive doubling algorithm
- ompi_tree: recursive doubling type algorithm, with tree structure
- ompi_doublering: double ring algorithm
- mvapich2_pair: pairwise algorithm
- mpich_smp: barrier intra-node, then inter-node
- default: naive one, by default
- ompi: use openmpi selector for the scatter operations
- mpich: use mpich selector for the scatter operations
- mvapich2: use mvapich2 selector for the scatter operations
- impi: use intel mpi selector for the scatter operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_linear: basic linear scatter
- ompi_binomial: binomial tree scatter
- mvapich2_two_level_direct: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a basic linear inter-node stage. Use the mvapich2 selector to change these to tuned algorithms for the Stampede cluster.
- mvapich2_two_level_binomial: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a binomial phase. Use the mvapich2 selector to change these to tuned algorithms for the Stampede cluster.
- default: naive one, by default
- ompi: use openmpi selector for the reduce operations
- mpich: use mpich selector for the reduce operations
- mvapich2: use mvapich2 selector for the reduce operations
- impi: use intel mpi selector for the reduce operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- binomial: uses a binomial tree
- flat_tree: uses a flat tree
- NTSL: Non-topology-specific pipelined linear-bcast function
  0->1, 1->2, 2->3, ..., ->last node: in a pipeline fashion, with segments
- scatter_gather: scatter then gather
- ompi_chain: openmpi reduce algorithms are built on the same basis, but the
  topology is generated differently for each flavor;
  chain = chain with spacing of size/2, and segment size of 64KB
- ompi_pipeline: same with pipeline (chain with spacing of 1), segment size
  depends on the communicator size and the message size
- ompi_binary: same with binary tree, segment size of 32KB
- ompi_in_order_binary: same with binary tree, enforcing order on the
- ompi_binomial: same with binomial algo (redundant with default binomial
- ompi_basic_linear: basic algorithm, each process sends to root
- mvapich2_knomial: k-nomial algorithm. Default factor is 4 (the mvapich2 selector adapts it through tuning)
- mvapich2_two_level: SMP-aware reduce, with default set to mpich both for intra and inter communicators. Use the mvapich2 selector to change these to tuned algorithms for the Stampede cluster.
- rab: `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_'s reduce algorithm
- default: naive one, by default
- ompi: use openmpi selector for the allreduce operations
- mpich: use mpich selector for the allreduce operations
- mvapich2: use mvapich2 selector for the allreduce operations
- impi: use intel mpi selector for the allreduce operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- lr: logical ring reduce-scatter then logical ring allgather
- rab1: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: reduce_scatter then allgather
- rab2: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: alltoall then allgather
- rab_rsag: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: recursive doubling
  reduce_scatter then recursive doubling allgather
- rdb: recursive doubling
- smp_binomial: binomial tree with smp: binomial intra
  SMP reduce, inter reduce, inter broadcast then intra broadcast
- smp_binomial_pipeline: same with segment size = 4096 bytes
- smp_rdb: intra: binomial allreduce, inter: recursive
  doubling allreduce, intra: binomial broadcast
- smp_rsag: intra: binomial allreduce, inter: reduce-scatter,
  inter: allgather, intra: binomial broadcast
- smp_rsag_lr: intra: binomial allreduce, inter: logical ring
  reduce-scatter, logical ring inter: allgather, intra: binomial broadcast
- smp_rsag_rab: intra: binomial allreduce, inter: rab
  reduce-scatter, rab inter: allgather, intra: binomial broadcast
- redbcast: reduce then broadcast, using default or tuned algorithms if specified
- ompi_ring_segmented: ring algorithm used by OpenMPI
- mvapich2_rs: rdb for small messages, otherwise reduce-scatter then allgather
- mvapich2_two_level: SMP-aware algorithm, with mpich as intra algorithm, and rdb as inter (change this behavior by using the mvapich2 selector to use tuned values)
- rab: default `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ implementation
- default: naive one, by default
- ompi: use openmpi selector for the reduce_scatter operations
- mpich: use mpich selector for the reduce_scatter operations
- mvapich2: use mvapich2 selector for the reduce_scatter operations
- impi: use intel mpi selector for the reduce_scatter operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_recursivehalving: recursive halving version from OpenMPI
- ompi_ring: ring version from OpenMPI
- mpich_pair: pairwise exchange version from MPICH
- mpich_rdb: recursive doubling version from MPICH
- mpich_noncomm: only works for power of 2 procs, recursive doubling for noncommutative ops
- default: naive one, by default
- ompi: use openmpi selector for the allgather operations
- mpich: use mpich selector for the allgather operations
- mvapich2: use mvapich2 selector for the allgather operations
- impi: use intel mpi selector for the allgather operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- 2dmesh: see alltoall
- 3dmesh: see alltoall
- bruck: described by Bruck et al. in `Efficient algorithms for all-to-all communications in multiport message-passing systems <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- GB: Gather - Broadcast (uses tuned version if specified)
- loosely_lr: Logical Ring with grouping by core (hardcoded, default
- NTSLR: Non Topology Specific Logical Ring
- NTSLR_NB: Non Topology Specific Logical Ring, Non Blocking operations
- rhv: only power of 2 number of processes
- SMP_NTS: gather to root of each SMP, then every root of each SMP node
  posts INTER-SMP Sendrecv, then does an INTRA-SMP Bcast for each received message,
  using the logical ring algorithm (hardcoded, default processes/SMP: 8)
- smp_simple: gather to root of each SMP, then every root of each SMP node
  posts INTER-SMP Sendrecv, then does an INTRA-SMP Bcast for each received message,
  using the simple algorithm (hardcoded, default processes/SMP: 8)
- spreading_simple: from node i, order of communications is i -> i + 1, i ->
  i + 2, ..., i -> (i + p - 1) % P
- ompi_neighborexchange: Neighbor Exchange algorithm for allgather,
  described by Chen et al. in `Performance Evaluation of Allgather
  Algorithms on Terascale Linux Cluster with Fast Ethernet <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1592302>`_
- mvapich2_smp: SMP aware algorithm, performing intra-node gather, inter-node allgather with one process/node, and intra-node bcast
- default: naive one, by default
- ompi: use openmpi selector for the allgatherv operations
- mpich: use mpich selector for the allgatherv operations
- mvapich2: use mvapich2 selector for the allgatherv operations
- impi: use intel mpi selector for the allgatherv operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- GB: Gatherv - Broadcast (uses tuned version if specified, but only for Bcast; gatherv is not tuned)
- ompi_neighborexchange: see allgather
- ompi_bruck: see allgather
- mpich_rdb: recursive doubling algorithm from MPICH
- mpich_ring: ring algorithm from MPICH - performs differently from the one from STAR-MPI
- default: naive one, by default
- ompi: use openmpi selector for the bcast operations
- mpich: use mpich selector for the bcast operations
- mvapich2: use mvapich2 selector for the bcast operations
- impi: use intel mpi selector for the bcast operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- arrival_pattern_aware_wait: same with slight variation
- binomial_tree: binomial tree exchange
- flattree: flat tree exchange
- flattree_pipeline: flat tree exchange, message split into 8192 bytes pieces
- NTSB: Non-topology-specific pipelined binary tree with 8192 bytes pieces
- NTSL: Non-topology-specific pipelined linear with 8192 bytes pieces
- NTSL_Isend: Non-topology-specific pipelined linear with 8192 bytes pieces, asynchronous communications
- scatter_LR_allgather: scatter followed by logical ring allgather
- scatter_rdb_allgather: scatter followed by recursive doubling allgather
- arrival_scatter: arrival pattern aware scatter-allgather
- SMP_binary: binary tree algorithm with 8 cores/SMP
- SMP_binomial: binomial tree algorithm with 8 cores/SMP
- SMP_linear: linear algorithm with 8 cores/SMP
- ompi_split_bintree: binary tree algorithm from OpenMPI, with message split in 8192 bytes pieces
- ompi_pipeline: pipeline algorithm from OpenMPI, with message split in 128KB pieces
- mvapich2_inter_node: Inter-node default mvapich worker
- mvapich2_intra_node: Intra-node default mvapich worker
- mvapich2_knomial_intra_node: k-nomial intra-node default mvapich worker. Default factor is 4.
.. warning:: This is still very experimental.

An automatic version is available for each collective (or even as a selector). This specific
version will loop over all other implemented algorithms for this particular collective, and apply
them while benchmarking the time taken by each process. It will then output the quickest for
each process, and the global quickest. This is still unstable, and a few algorithms which need a
specific number of nodes may crash.
To add a new algorithm, one should check in the src/smpi/colls folder
how other algorithms are coded. Using plain MPI code inside SimGrid
is not possible, so algorithms have to be changed to use the SMPI version of
the calls instead (MPI_Send will become smpi_mpi_send). Some functions
may have different signatures than their MPI counterparts; please check
the other algorithms or contact us using the `SimGrid
developers mailing list <http://lists.gforge.inria.fr/mailman/listinfo/simgrid-devel>`_.
Example: adding a "pair" version of the Alltoall collective.

- Implement it in a file called alltoall-pair.c in the src/smpi/colls folder. This file should include colls_private.hpp.

- The name of the new algorithm function should be smpi_coll_tuned_alltoall_pair, with the same signature as MPI_Alltoall.

- Once the adaptation to SMPI code is done, add a reference to the file ("src/smpi/colls/alltoall-pair.c") in the SMPI_SRC part of the DefinePackages.cmake file inside buildtools/cmake, to allow the file to be built and distributed.

- To register the new version of the algorithm, simply add a line to the corresponding macro in src/smpi/colls/colls.h (add a "COLL_APPLY(action, COLL_ALLTOALL_SIG, pair)" to the COLL_ALLTOALLS macro). The algorithm should now be compiled and selected when using --cfg=smpi/alltoall:pair at runtime.

- To add a test for the algorithm inside SimGrid's test suite, just add the new algorithm name in the ALLTOALL_COLL list found inside teshsuite/smpi/CMakeLists.txt. When running ctest, a test for the new algorithm should be generated and executed. If it does not pass, please check your code or contact us.

- Please submit your patch for inclusion in SMPI, for example through a pull request on GitHub or directly by email.
Tracing of Internal Communications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, the collective operations are traced as a unique operation,
because tracing all the point-to-point communications composing them could
result in overloaded, hard-to-interpret traces. If you want to debug
and compare collective algorithms, you should set the
``tracing/smpi/internals`` configuration item to 1 instead of 0.

Here are examples of two alltoall collective algorithm runs on 16 nodes,
the first one with a ring algorithm, the second with a pairwise one.
.. image:: /img/smpi_simgrid_alltoall_ring_16.png

Alltoall on 16 Nodes with the Ring Algorithm.

.. image:: /img/smpi_simgrid_alltoall_pair_16.png

Alltoall on 16 Nodes with the Pairwise Algorithm.
-------------------------
What can run within SMPI?
-------------------------
You can run unmodified MPI applications (both C/C++ and Fortran) within
SMPI, provided that you only use MPI calls that we implemented. Global
variables should be handled correctly on Linux systems.
Our coverage of the interface is very decent, but still incomplete;
given the size of the MPI standard, we may well never manage to
implement absolutely all existing primitives. Currently, we have
almost no support for I/O primitives, but we still pass a very large
amount of the MPICH coverage tests.
The full list of not-yet-implemented functions is documented in the
file `include/smpi/smpi.h
<https://framagit.org/simgrid/simgrid/tree/master/include/smpi/smpi.h>`_
in your version of SimGrid, between two lines containing the ``FIXME``
marker. If you really miss a feature, please get in touch with us: we
can guide you through the SimGrid code to help you implement it, and
we'd be glad to integrate your contribution to the main project.
.. _SMPI_what_globals:

.................................
Privatization of global variables
.................................
Concerning the globals, the problem comes from the fact that usually,
MPI processes run as real UNIX processes while they are all folded
into threads of a unique system process in SMPI. Global variables are
usually private to each MPI process while they become shared between
the processes in SMPI. The problem and some potential solutions are
discussed in this article: `Automatic Handling of Global Variables for
Multi-threaded MPI Programs
<http://charm.cs.illinois.edu/newPapers/11-23/paper.pdf>`_ (note that
this article does not deal with SMPI but with a competing solution
called AMPI that suffers from the same issue). This point used to be
problematic in SimGrid, but the problem should now be handled
automatically on Linux.
Older versions of SimGrid came with a script that automatically
privatized the globals through static analysis of the source code. But
our implementation was not robust enough to be used in production, so
it was removed at some point. Currently, SMPI comes with two
privatization mechanisms that you can :ref:`select at runtime
<cfg=smpi/privatization>`. The dlopen approach is used by
default as it is much faster and still very robust. The mmap approach
is an older mechanism that is slower.
With the **mmap approach**, SMPI duplicates and dynamically switches the
``.data`` and ``.bss`` segments of the ELF process when switching
MPI ranks. This allows each rank to have its own copy of the global
variables. No copy actually occurs as this mechanism uses ``mmap()``
for efficiency. This mechanism is considered to be very robust on all
systems supporting ``mmap()`` (Linux and most BSDs). Its performance
is questionable since each context switch between MPI ranks induces
several syscalls to change the ``mmap`` that redirects the ``.data``
and ``.bss`` segments to the copies of the new rank. The code will
also be copied several times in memory, inducing a slight increase of
Another limitation is that SMPI only accounts for global variables
defined in the executable. If the processes use external global
variables from dynamic libraries, they won't be switched
correctly. The easiest way to solve this is to statically link against
the library with these globals. This way, each MPI rank will get its
own copy of these libraries. Of course you should never statically
link against the SimGrid library itself.
With the **dlopen approach**, SMPI loads several copies of the same
executable in memory as if it were a library, so that the global
variables get naturally duplicated. It first requires the executable
to be compiled as a relocatable binary, which is less common for
programs than for libraries. But most distributions are now compiled
this way for security reasons, as it allows one to randomize the address
space layout. It should thus be safe to compile most (any?) program
this way. The second trick is that the dynamic linker refuses to link
the exact same file several times, be it a library or a relocatable
executable. This makes perfect sense in the general case, but we need
to circumvent this rule of thumb in our case. To that end, the
binary is copied to a temporary file before being re-linked against.
``dlmopen()`` cannot be used as it only allows 256 contexts, and as it
would also duplicate SimGrid itself.
This approach greatly speeds up the context switching, down to about
40 CPU cycles with our raw contexts, instead of requesting several
syscalls with the ``mmap()`` approach. Another advantage is that it
permits one to run the SMPI contexts in parallel, which is obviously not
possible with the ``mmap()`` approach. It was tricky to implement, but
we are not aware of any flaws, so smpirun activates it by default.
In the future, it may be possible to further reduce the memory and
disk consumption. It seems that we could `punch holes
<https://lwn.net/Articles/415889/>`_ in the files before dl-loading
them to remove the code and constants, and mmap these areas onto a
unique copy. If done correctly, this would reduce the disk and
memory usage to the bare minimum, and would also reduce the pressure
on the CPU instruction cache. See the `relevant bug
<https://github.com/simgrid/simgrid/issues/137>`_ on github for
implementation leads.
Also, currently, only the binary is copied and dlopen-ed for each MPI
rank. We could probably extend this to external dependencies, but for
now, any external dependencies must be statically linked into your
application. As usual, SimGrid itself shall never be statically linked
in your app. You don't want to give a copy of SimGrid to each MPI rank:
that's way too much for them to deal with.
.. todo: speak of smpi/privatize-libs here
----------------------------------------------
Adapting your MPI code for further scalability
----------------------------------------------
As detailed in the `reference article
<http://hal.inria.fr/hal-01415484>`_, you may want to adapt your code
to improve the simulation performance. But these tricks may seriously
hinder the result quality (or even prevent the app from running) if used
wrongly. We assume that if you want to simulate an HPC application,
you know what you are doing. Don't prove us wrong!
..............................
Reducing your memory footprint
..............................
If you get short on memory (the whole app is executed on a single node when
simulated), you should have a look at the SMPI_SHARED_MALLOC and
SMPI_SHARED_FREE macros. They allow you to share memory areas between
processes: the same malloc line in each process will return a pointer to
the exact same memory area. So if you have a malloc of 2M and 16
processes, these macros will reduce your memory consumption from 2M*16 to
2M only: a single block for all processes.
If your program is OK with a block containing garbage values, because all
processes write and read to the same place without any kind of coordination,
then this macro can dramatically shrink your memory consumption. For example,
it will be very beneficial to a matrix multiplication code, as all blocks will
be stored in the same area. Of course, the resulting computations will be
useless, but you can still study the application behavior this way.
Naturally, this won't work if your code is data-dependent. For example, a Jacobi
iterative computation depends on the result computed by the code to detect
convergence conditions, so turning them into garbage by sharing the same memory
area between processes does not seem very wise. You cannot use the
SMPI_SHARED_MALLOC macro in this case, sorry.
This feature is demoed by the example file
`examples/smpi/NAS/dt.c <https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/NAS/dt.c>`_
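The following sketch illustrates how these macros are typically used. The plain-``malloc`` fallbacks are only there so that the snippet also compiles outside of SMPI; when building with ``smpicc``, the real macros come from SMPI itself:

```c
#include <stdlib.h>

/* Under smpicc, SMPI provides these macros and every rank reaching this
 * allocation gets the SAME memory block. The fallbacks below are only so
 * that this sketch compiles with a regular compiler. */
#ifndef SMPI_SHARED_MALLOC
#define SMPI_SHARED_MALLOC(size) malloc(size)
#define SMPI_SHARED_FREE(ptr)    free(ptr)
#endif

/* Each rank fills and reduces a scratch block of n doubles. Under SMPI,
 * all ranks share one block (n*8 bytes total instead of one copy per
 * rank), so the computed values are garbage -- but the memory and timing
 * behavior of the application is preserved. */
double use_shared_block(size_t n) {
  double *block = SMPI_SHARED_MALLOC(n * sizeof(double));
  for (size_t i = 0; i < n; i++)
    block[i] = 1.0;               /* racy on purpose: no coordination */
  double sum = 0.0;
  for (size_t i = 0; i < n; i++)
    sum += block[i];
  SMPI_SHARED_FREE(block);
  return sum;
}
```

Compile with ``smpicc`` and run under ``smpirun`` to actually benefit from the sharing; with a regular compiler, the fallback simply behaves like a private allocation.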
.........................
Toward Faster Simulations
.........................
If your application is too slow, try using SMPI_SAMPLE_LOCAL,
SMPI_SAMPLE_GLOBAL and friends to indicate which computation loops can
be sampled. Some of the loop iterations will be executed to measure
their duration, and this duration will be used for the subsequent
iterations. These samples are done per processor with
SMPI_SAMPLE_LOCAL, and shared between all processors with
SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution
times of your loop iterations are not stable.
This feature is demoed by the example file
`examples/smpi/NAS/ep.c <https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/NAS/ep.c>`_
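The sampling macros are written as annotations around the loop body, as sketched below. The no-op fallback only lets the snippet compile outside of SMPI, and the ``(10, 0.05)`` parameters (number of samples, precision threshold) are an assumption: check the ``smpi.h`` of your SimGrid version for the exact signature.

```c
/* Under smpicc, SMPI_SAMPLE_LOCAL comes from SMPI: it executes the body
 * for the first iterations to measure their duration, then skips the body
 * and charges the measured duration instead. The no-op fallback below is
 * only so this sketch compiles with a regular compiler (where every
 * iteration thus runs for real). */
#ifndef SMPI_SAMPLE_LOCAL
#define SMPI_SAMPLE_LOCAL(iters, thres)
#endif

double expensive_kernel(int i) { return (double)i * i; } /* stand-in workload */

double run_loop(int n) {
  double acc = 0.0;
  for (int i = 0; i < n; i++) {
    /* Hypothetical parameters: sample 10 iterations, or stop sampling once
     * the relative error drops below 5%. Note that under SMPI the skipped
     * iterations do NOT update acc: like SMPI_SHARED_MALLOC, this trades
     * result correctness for simulation speed. */
    SMPI_SAMPLE_LOCAL(10, 0.05) {
      acc += expensive_kernel(i);
    }
  }
  return acc;
}
```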
.............................
Ensuring Accurate Simulations
.............................
Out of the box, SimGrid may give you fairly accurate results, but
there are plenty of factors that could go wrong and make your results
inaccurate or even plainly wrong. Actually, you can only get accurate
results out of a carefully built model, including both the system
hardware and your application. Such models are hard to carry over and
reuse in other settings, because elements that are not relevant to one
application (say, the latency of point-to-point communications,
collective operation implementation details or CPU-network
interaction) may be crucial to another one. The dream of the perfect
model, encompassing every aspect, is only a chimera, as the only
perfect model of reality is reality itself. If you go for simulation,
then you have to ignore some aspects of reality, but which aspects are
irrelevant is actually application-dependent...
The only way to assess whether your settings provide accurate results
is to double-check these results. If possible, you should first run
the same experiment in simulation and in real life, gathering as much
information as you can. Try to understand the discrepancies in the
results that you observe between both settings (visualization can be
precious for that). Then, try to modify your model (of the platform,
of the collective operations) to reduce the most prominent differences.
If the discrepancies come from the computing time, try adapting the
``smpi/host-speed`` parameter: reduce it if your simulation runs
faster than in reality. If the error comes from the communication,
then you need to fiddle with your platform file.
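For instance, the host speed can be overridden from the ``smpirun`` command line, without touching the platform file. The value, platform file and binary names below are placeholders; calibrate the speed against your own measurements:

```shell
# If simulated compute phases run twice too fast compared to the real
# runs, try halving the assumed speed of the benchmarking host (in flop/s).
smpirun --cfg=smpi/host-speed:500000000 \
        -np 16 -platform platform.xml ./my_mpi_app
```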
Be inventive in your modeling. Don't be afraid if the names given by
SimGrid do not match the real names: we got very good results by
modeling multicore/GPU machines with a set of separate hosts
interconnected with very fast networks (but don't trust your model
just because it has the right names in the right places, either).
Finally, you may want to check `this article
<https://hal.inria.fr/hal-00907887>`_ on the classical pitfalls in
modeling distributed systems.
----------------------
Examples of SMPI Usage
----------------------
A small number of examples can be found directly in the SimGrid
archive, under `examples/smpi <https://framagit.org/simgrid/simgrid/-/tree/master/examples/smpi>`_.
Some show how to simply run MPI code in SimGrid, how to use the
tracing/replay mechanism, or how to use plugins written in S4U to
extend the simulator abilities.
Another source of examples lies in the SimGrid archive, under
`teshsuite/smpi <https://framagit.org/simgrid/simgrid/-/tree/master/teshsuite/smpi>`_.
They are not in the ``examples`` directory because they probably don't
constitute pedagogical examples. Instead, they are intended to stress
our implementation during the tests. Some of you may be interested
anyway.
But the best source of SMPI examples is certainly the `proxy app
<https://framagit.org/simgrid/SMPI-proxy-apps>`_ external project.
Proxy apps are scale models of real, massive HPC applications: each of
them exhibits the same communication and computation patterns as the
massive application that it stands for, but in only a few thousand
lines instead of several millions of lines. These proxy apps are
usually provided for educational purposes, and also to ensure that the
represented large HPC applications will correctly work with the next
generation of runtimes and hardware. `This project
<https://framagit.org/simgrid/SMPI-proxy-apps>`_ gathers proxy apps
from different sources, along with the patches needed (if any) to run
them on top of SMPI.
-------------------------
Troubleshooting with SMPI
-------------------------
.................................
./configure refuses to use smpicc
.................................
If your ``./configure`` reports that the compiler is not
functional or that you are cross-compiling, try to define the
``SMPI_PRETEND_CC`` environment variable before running the
configuration:
.. code-block:: shell

   SMPI_PRETEND_CC=1 ./configure # here come the configure parameters
Indeed, the programs compiled with ``smpicc`` cannot be executed
without ``smpirun`` (they are shared libraries and do weird things on
startup), while configure wants to test them directly. With
``SMPI_PRETEND_CC``, smpicc does not compile as shared, and the SMPI
initialization stops and returns 0 before doing anything that would
fail without ``smpirun``.
Make sure that ``SMPI_PRETEND_CC`` is only set when calling
``./configure``, not during the actual execution, or any program
compiled with ``smpicc`` will stop before starting.
..............................................
./configure does not pick smpicc as a compiler
..............................................
In addition to the previous answers, some projects also need to be
explicitly told what compiler to use, as follows:
.. code-block:: shell

   SMPI_PRETEND_CC=1 ./configure CC=smpicc # here come the other configure parameters
Maybe your configure is using another variable, such as ``cc`` (in
lower case) or similar. Just check the logs.
.....................................
error: unknown type name 'useconds_t'
.....................................
Try to add ``-D_GNU_SOURCE`` to your compilation line to get rid
of this error.
The reason is that SMPI provides its own version of ``usleep(3)``
to override it and to block in the simulation world, not in the real
one. It needs the ``useconds_t`` type for that, which is declared
only if you define ``_GNU_SOURCE`` before including
``unistd.h``. If your project includes that header file before
SMPI's headers, then you need to ensure that you pass the right
configuration defines as advised above.
-----------------------------
Trace Replay and Offline SMPI
-----------------------------
Although SMPI is often used for :ref:`online simulation
<SMPI_online>`, where the application is executed for real, you can
also go for offline simulation through trace replay.
SimGrid uses time-independent traces, in which each actor is given a
script of the actions to do sequentially. These trace files can
actually be captured with the online version of SMPI, as follows:
.. code-block:: shell

   $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. The file names don't match the MPI ranks, but
that's expected.
To replay this with SMPI, you need to first compile the provided
``smpi_replay.cpp`` file, which comes from
`simgrid/examples/smpi/replay
<https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/replay>`_.
.. code-block:: shell

   $ smpicxx ../replay.cpp -O3 -o ../smpi_replay
Afterward, you can replay your trace in SMPI as follows:

.. code-block:: shell

   $ smpirun -np 32 -platform ../cluster_torus.xml -ext smpi_replay ../smpi_replay LU.A.32
All the outputs are gone, as the application is not really simulated
here. Its trace is simply replayed. But if you visualize the live
simulation and the replay, you will see that the behavior is
unchanged. The simulation does not run much faster on this very
example, but this becomes very interesting when your application
is computationally hungry.