3 ===============================
4 SMPI: Simulate MPI Applications
5 ===============================
9 <object id="TOC" data="graphical-toc.svg" width="100%" type="image/svg+xml"></object>
11 window.onload=function() { // Wait for the SVG to be loaded before changing it
12 var elem=document.querySelector("#TOC").contentDocument.getElementById("SMPIBox")
13 elem.style="opacity:0.93999999;fill:#ff0000;fill-opacity:0.1";
SMPI enables the study of MPI applications by emulating them on top of
the SimGrid simulator. This is particularly useful for studying
existing MPI applications within the comfort of the simulator.
To get started with SMPI, you should head to :ref:`the SMPI tutorial
<usecase_smpi>`. You may also want to read the `SMPI reference
25 article <https://hal.inria.fr/hal-01415484>`_ or these `introductory
26 slides <http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. If you
27 are new to MPI, you should first take our online `SMPI CourseWare
<https://simgrid.github.io/SMPI_CourseWare/>`_. It consists of several
projects that progressively introduce MPI concepts. It relies on
SimGrid and SMPI to run the experiments, but the learning
objectives are centered on MPI itself.
33 Our goal is to enable the study of **unmodified MPI applications**.
34 Some constructs and features are still missing, but we can probably
add them on demand. If you have used MPI before, SMPI should feel
very familiar: use ``smpicc`` instead of ``mpicc``, and ``smpirun``
instead of ``mpirun``. The main difference is that ``smpirun`` takes a :ref:`simulated
38 platform <platform>` as an extra parameter.
40 For **further scalability**, you may modify your code to speed up your
41 studies or save memory space. Maximal **simulation accuracy**
42 requires some specific care from you.
52 If your application is in C, then simply use ``smpicc`` as a
53 compiler just like you use mpicc with other MPI implementations. This
54 script still calls your default compiler (gcc, clang, ...) and adds
55 the right compilation flags along the way. If your application is in
56 C++, Fortran 77 or Fortran 90, use respectively ``smpicxx``,
57 ``smpiff`` or ``smpif90``.
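For example, to compile a C program (the source file name is just a placeholder):

```shell
# smpicc calls your default compiler with the right SMPI flags added:
smpicc -O3 masterworker.c -o masterworker
```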
63 Use the ``smpirun`` script as follows:
67 smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program -blah
- ``my_hostfile.txt`` is a classical MPI hostfile (that is, this file
  lists the machines on which the processes must be dispatched, one
  per line).
72 - ``my_platform.xml`` is a classical SimGrid platform file. Of course,
73 the hosts of the hostfile must exist in the provided platform.
74 - ``./program`` is the MPI program to simulate, that you compiled with ``smpicc``
75 - ``-blah`` is a command-line parameter passed to this program.
``smpirun`` accepts other parameters, such as ``-np`` if you don't
want to use all the hosts defined in the hostfile, ``-map`` to display
on which host each rank gets mapped, or ``-trace`` to activate
tracing during the simulation. You can get the full list by running
``smpirun -help``.
83 ...............................
84 Debugging your Code within SMPI
85 ...............................
If you want to explore the automatic platform and deployment files
that are generated by ``smpirun``, add ``-keep-temps`` to the command
line.
You can also run your simulation within valgrind or gdb using the
following commands. Once in GDB, each MPI rank will be represented as
a regular thread, and you can explore the state of each of them as
usual.
98 smpirun -wrapper valgrind ...other args...
99 smpirun -wrapper "gdb --args" --cfg=contexts/factory:thread ...other args...
103 ................................
104 Simulating Collective Operations
105 ................................
107 MPI collective operations are crucial to the performance of MPI
108 applications and must be carefully optimized according to many
109 parameters. Every existing implementation provides several algorithms
110 for each collective operation, and selects by default the best suited
111 one, depending on the sizes sent, the number of nodes, the
112 communicator, or the communication library being used. These
113 decisions are based on empirical results and theoretical complexity
114 estimation, and are very different between MPI implementations. In
115 most cases, the users can also manually tune the algorithm used for
116 each collective operation.
118 SMPI can simulate the behavior of several MPI implementations:
119 OpenMPI, MPICH, `STAR-MPI <http://star-mpi.sourceforge.net/>`_, and
MVAPICH2. For that, it provides 115 collective algorithms and several
selector algorithms, which were collected directly from the source
code of the targeted MPI implementations.
124 You can switch the automatic selector through the
125 ``smpi/coll-selector`` configuration item. Possible values:
127 - **ompi:** default selection logic of OpenMPI (version 3.1.2)
128 - **mpich**: default selection logic of MPICH (version 3.3b)
129 - **mvapich2**: selection logic of MVAPICH2 (version 1.9) tuned
130 on the Stampede cluster
- **impi**: preliminary version of an Intel MPI selector (version
  4.1.3, also tuned for the Stampede cluster). Due to the closed-source
  nature of Intel MPI, some of the algorithms described in its
  documentation are not available; they are replaced by MVAPICH ones.
- **default**: legacy algorithms used in the earlier days of
  SimGrid. Do not use them for serious performance studies.
138 .. todo:: default should not even exist.
144 You can also pick the algorithm used for each collective with the
145 corresponding configuration item. For example, to use the pairwise
alltoall algorithm, add ``--cfg=smpi/alltoall:pair`` to the command
line. This overrides the selector (if any) for this collective, meaning
that the chosen algorithm will be used for every call to it.
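For example, to force the pairwise alltoall algorithm on top of the MPICH selection logic (the hostfile, platform, and program names are placeholders, as in the ``smpirun`` example above):

```shell
smpirun -hostfile my_hostfile.txt -platform my_platform.xml \
        --cfg=smpi/coll-selector:mpich \
        --cfg=smpi/alltoall:pair \
        ./program
```

The per-collective item overrides the selector only for that collective; all other operations still follow the selector's choices.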
.. Warning:: Some collectives may require specific conditions to be
   executed correctly (for instance, a communicator with a power-of-two
   number of nodes only), which are currently not enforced by
   SimGrid. Crashes can be expected when trying these algorithms
   with unusual sizes/parameters.
159 Most of these are best described in `STAR-MPI <http://www.cs.arizona.edu/~dkl/research/papers/ics06.pdf>`_.
161 - default: naive one, by default
162 - ompi: use openmpi selector for the alltoall operations
163 - mpich: use mpich selector for the alltoall operations
164 - mvapich2: use mvapich2 selector for the alltoall operations
165 - impi: use intel mpi selector for the alltoall operations
166 - automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: described by Bruck et al. in `this paper <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- 2dmesh: organizes the nodes as a two-dimensional mesh, and performs allgather
170 - 3dmesh: adds a third dimension to the previous algorithm
- rdb: recursive doubling: extends the mesh to an n-th dimension, each one
173 - pair: pairwise exchange, only works for power of 2 procs, size-1 steps,
174 each process sends and receives from the same process at each step
- pair_light_barrier: same, with small barriers between steps to avoid
  contention
177 - pair_mpi_barrier: same, with MPI_Barrier used
178 - pair_one_barrier: only one barrier at the beginning
- ring: size-1 steps; at each step a process sends to process (n+i)%size and receives from (n-i)%size
180 - ring_light_barrier: same, with small barriers between some phases to avoid contention
181 - ring_mpi_barrier: same, with MPI_Barrier used
182 - ring_one_barrier: only one barrier at the beginning
183 - basic_linear: posts all receives and all sends,
184 starts the communications, and waits for all communication to finish
185 - mvapich2_scatter_dest: isend/irecv with scattered destinations, posting only a few messages at the same time
189 - default: naive one, by default
190 - ompi: use openmpi selector for the alltoallv operations
191 - mpich: use mpich selector for the alltoallv operations
192 - mvapich2: use mvapich2 selector for the alltoallv operations
193 - impi: use intel mpi selector for the alltoallv operations
194 - automatic (experimental): use an automatic self-benchmarking algorithm
195 - bruck: same as alltoall
196 - pair: same as alltoall
197 - pair_light_barrier: same as alltoall
198 - pair_mpi_barrier: same as alltoall
199 - pair_one_barrier: same as alltoall
200 - ring: same as alltoall
201 - ring_light_barrier: same as alltoall
202 - ring_mpi_barrier: same as alltoall
203 - ring_one_barrier: same as alltoall
204 - ompi_basic_linear: same as alltoall
209 - default: naive one, by default
210 - ompi: use openmpi selector for the gather operations
211 - mpich: use mpich selector for the gather operations
212 - mvapich2: use mvapich2 selector for the gather operations
213 - impi: use intel mpi selector for the gather operations
214 - automatic (experimental): use an automatic self-benchmarking algorithm which will iterate over all implemented versions and output the best
215 - ompi_basic_linear: basic linear algorithm from openmpi, each process sends to the root
216 - ompi_binomial: binomial tree algorithm
217 - ompi_linear_sync: same as basic linear, but with a synchronization at the
218 beginning and message cut into two segments.
219 - mvapich2_two_level: SMP-aware version from MVAPICH. Gather first intra-node (defaults to mpich's gather), and then exchange with only one process/node. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
224 - default: naive one, by default
225 - ompi: use openmpi selector for the barrier operations
226 - mpich: use mpich selector for the barrier operations
227 - mvapich2: use mvapich2 selector for the barrier operations
228 - impi: use intel mpi selector for the barrier operations
229 - automatic (experimental): use an automatic self-benchmarking algorithm
230 - ompi_basic_linear: all processes send to root
231 - ompi_two_procs: special case for two processes
232 - ompi_bruck: nsteps = sqrt(size), at each step, exchange data with rank-2^k and rank+2^k
233 - ompi_recursivedoubling: recursive doubling algorithm
234 - ompi_tree: recursive doubling type algorithm, with tree structure
235 - ompi_doublering: double ring algorithm
236 - mvapich2_pair: pairwise algorithm
237 - mpich_smp: barrier intra-node, then inter-node
242 - default: naive one, by default
243 - ompi: use openmpi selector for the scatter operations
244 - mpich: use mpich selector for the scatter operations
245 - mvapich2: use mvapich2 selector for the scatter operations
246 - impi: use intel mpi selector for the scatter operations
247 - automatic (experimental): use an automatic self-benchmarking algorithm
248 - ompi_basic_linear: basic linear scatter
249 - ompi_binomial: binomial tree scatter
250 - mvapich2_two_level_direct: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a basic linear inter node stage. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
251 - mvapich2_two_level_binomial: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a binomial phase. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
256 - default: naive one, by default
257 - ompi: use openmpi selector for the reduce operations
258 - mpich: use mpich selector for the reduce operations
259 - mvapich2: use mvapich2 selector for the reduce operations
260 - impi: use intel mpi selector for the reduce operations
261 - automatic (experimental): use an automatic self-benchmarking algorithm
262 - arrival_pattern_aware: root exchanges with the first process to arrive
263 - binomial: uses a binomial tree
264 - flat_tree: uses a flat tree
- NTSL: non-topology-specific pipelined linear broadcast function:
  0->1, 1->2, 2->3, ..., -> last node, in a pipelined fashion, with segments
268 - scatter_gather: scatter then gather
269 - ompi_chain: openmpi reduce algorithms are built on the same basis, but the
270 topology is generated differently for each flavor
271 chain = chain with spacing of size/2, and segment size of 64KB
272 - ompi_pipeline: same with pipeline (chain with spacing of 1), segment size
273 depends on the communicator size and the message size
274 - ompi_binary: same with binary tree, segment size of 32KB
275 - ompi_in_order_binary: same with binary tree, enforcing order on the
277 - ompi_binomial: same with binomial algo (redundant with default binomial
279 - ompi_basic_linear: basic algorithm, each process sends to root
280 - mvapich2_knomial: k-nomial algorithm. Default factor is 4 (mvapich2 selector adapts it through tuning)
281 - mvapich2_two_level: SMP-aware reduce, with default set to mpich both for intra and inter communicators. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
282 - rab: `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_'s reduce algorithm
287 - default: naive one, by default
288 - ompi: use openmpi selector for the allreduce operations
289 - mpich: use mpich selector for the allreduce operations
290 - mvapich2: use mvapich2 selector for the allreduce operations
291 - impi: use intel mpi selector for the allreduce operations
292 - automatic (experimental): use an automatic self-benchmarking algorithm
293 - lr: logical ring reduce-scatter then logical ring allgather
- rab1: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: reduce_scatter then allgather
- rab2: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: alltoall then allgather
- rab_rsag: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: recursive doubling
  reduce_scatter then recursive doubling allgather
298 - rdb: recursive doubling
299 - smp_binomial: binomial tree with smp: binomial intra
300 SMP reduce, inter reduce, inter broadcast then intra broadcast
301 - smp_binomial_pipeline: same with segment size = 4096 bytes
302 - smp_rdb: intra: binomial allreduce, inter: Recursive
303 doubling allreduce, intra: binomial broadcast
304 - smp_rsag: intra: binomial allreduce, inter: reduce-scatter,
305 inter:allgather, intra: binomial broadcast
306 - smp_rsag_lr: intra: binomial allreduce, inter: logical ring
307 reduce-scatter, logical ring inter:allgather, intra: binomial broadcast
308 - smp_rsag_rab: intra: binomial allreduce, inter: rab
309 reduce-scatter, rab inter:allgather, intra: binomial broadcast
310 - redbcast: reduce then broadcast, using default or tuned algorithms if specified
311 - ompi_ring_segmented: ring algorithm used by OpenMPI
312 - mvapich2_rs: rdb for small messages, reduce-scatter then allgather else
- mvapich2_two_level: SMP-aware algorithm, with mpich as the intra-node algorithm and rdb as the inter-node one (change this behavior by using the mvapich2 selector to use tuned values)
314 - rab: default `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ implementation
319 - default: naive one, by default
320 - ompi: use openmpi selector for the reduce_scatter operations
321 - mpich: use mpich selector for the reduce_scatter operations
322 - mvapich2: use mvapich2 selector for the reduce_scatter operations
323 - impi: use intel mpi selector for the reduce_scatter operations
324 - automatic (experimental): use an automatic self-benchmarking algorithm
325 - ompi_basic_recursivehalving: recursive halving version from OpenMPI
326 - ompi_ring: ring version from OpenMPI
327 - mpich_pair: pairwise exchange version from MPICH
328 - mpich_rdb: recursive doubling version from MPICH
329 - mpich_noncomm: only works for power of 2 procs, recursive doubling for noncommutative ops
335 - default: naive one, by default
336 - ompi: use openmpi selector for the allgather operations
337 - mpich: use mpich selector for the allgather operations
338 - mvapich2: use mvapich2 selector for the allgather operations
339 - impi: use intel mpi selector for the allgather operations
340 - automatic (experimental): use an automatic self-benchmarking algorithm
341 - 2dmesh: see alltoall
342 - 3dmesh: see alltoall
- bruck: described by Bruck et al. in `Efficient algorithms for all-to-all communications in multiport message-passing systems <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
345 - GB: Gather - Broadcast (uses tuned version if specified)
- loosely_lr: Logical Ring with grouping by core (hardcoded, default
  processes/SMP: 8)
348 - NTSLR: Non Topology Specific Logical Ring
349 - NTSLR_NB: Non Topology Specific Logical Ring, Non Blocking operations
- rhv: only works for power-of-2 numbers of processes
354 - SMP_NTS: gather to root of each SMP, then every root of each SMP node
355 post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
356 using logical ring algorithm (hardcoded, default processes/SMP: 8)
357 - smp_simple: gather to root of each SMP, then every root of each SMP node
358 post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
359 using simple algorithm (hardcoded, default processes/SMP: 8)
360 - spreading_simple: from node i, order of communications is i -> i + 1, i ->
361 i + 2, ..., i -> (i + p -1) % P
362 - ompi_neighborexchange: Neighbor Exchange algorithm for allgather.
  Described by Chen et al. in `Performance Evaluation of Allgather
364 Algorithms on Terascale Linux Cluster with Fast Ethernet <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1592302>`_
365 - mvapich2_smp: SMP aware algorithm, performing intra-node gather, inter-node allgather with one process/node, and bcast intra-node
370 - default: naive one, by default
371 - ompi: use openmpi selector for the allgatherv operations
372 - mpich: use mpich selector for the allgatherv operations
373 - mvapich2: use mvapich2 selector for the allgatherv operations
374 - impi: use intel mpi selector for the allgatherv operations
375 - automatic (experimental): use an automatic self-benchmarking algorithm
376 - GB: Gatherv - Broadcast (uses tuned version if specified, but only for Bcast, gatherv is not tuned)
379 - ompi_neighborexchange: see allgather
380 - ompi_bruck: see allgather
381 - mpich_rdb: recursive doubling algorithm from MPICH
- mpich_ring: ring algorithm from MPICH; performs differently from the one from STAR-MPI
387 - default: naive one, by default
388 - ompi: use openmpi selector for the bcast operations
389 - mpich: use mpich selector for the bcast operations
390 - mvapich2: use mvapich2 selector for the bcast operations
391 - impi: use intel mpi selector for the bcast operations
392 - automatic (experimental): use an automatic self-benchmarking algorithm
393 - arrival_pattern_aware: root exchanges with the first process to arrive
394 - arrival_pattern_aware_wait: same with slight variation
395 - binomial_tree: binomial tree exchange
396 - flattree: flat tree exchange
397 - flattree_pipeline: flat tree exchange, message split into 8192 bytes pieces
398 - NTSB: Non-topology-specific pipelined binary tree with 8192 bytes pieces
399 - NTSL: Non-topology-specific pipelined linear with 8192 bytes pieces
400 - NTSL_Isend: Non-topology-specific pipelined linear with 8192 bytes pieces, asynchronous communications
401 - scatter_LR_allgather: scatter followed by logical ring allgather
402 - scatter_rdb_allgather: scatter followed by recursive doubling allgather
403 - arrival_scatter: arrival pattern aware scatter-allgather
404 - SMP_binary: binary tree algorithm with 8 cores/SMP
405 - SMP_binomial: binomial tree algorithm with 8 cores/SMP
406 - SMP_linear: linear algorithm with 8 cores/SMP
407 - ompi_split_bintree: binary tree algorithm from OpenMPI, with message split in 8192 bytes pieces
408 - ompi_pipeline: pipeline algorithm from OpenMPI, with message split in 128KB pieces
409 - mvapich2_inter_node: Inter node default mvapich worker
410 - mvapich2_intra_node: Intra node default mvapich worker
411 - mvapich2_knomial_intra_node: k-nomial intra node default mvapich worker. default factor is 4.
416 .. warning:: This is still very experimental.
An automatic version is available for each collective (or even as a selector). This specific
version will loop over all other implemented algorithms for this particular collective, and apply
them while benchmarking the time taken by each process. It will then output the quickest for
each process, and the globally quickest one. This is still unstable, and a few algorithms which need
a specific number of nodes may crash.
To add a new algorithm, one should check in the src/smpi/colls folder
how other algorithms are coded. Plain MPI code cannot be used inside
SimGrid, so algorithms have to be changed to use the SMPI versions of
the calls instead (``MPI_Send`` becomes ``smpi_mpi_send``). Some functions
may have different signatures than their MPI counterparts; please check
the other algorithms or contact us using the `SimGrid
developers mailing list <http://lists.gforge.inria.fr/mailman/listinfo/simgrid-devel>`_.
435 Example: adding a "pair" version of the Alltoall collective.
437 - Implement it in a file called alltoall-pair.c in the src/smpi/colls folder. This file should include colls_private.hpp.
439 - The name of the new algorithm function should be smpi_coll_tuned_alltoall_pair, with the same signature as MPI_Alltoall.
441 - Once the adaptation to SMPI code is done, add a reference to the file ("src/smpi/colls/alltoall-pair.c") in the SMPI_SRC part of the DefinePackages.cmake file inside buildtools/cmake, to allow the file to be built and distributed.
- To register the new version of the algorithm, simply add a line to the corresponding macro in src/smpi/colls/colls.h (add a ``COLL_APPLY(action, COLL_ALLTOALL_SIG, pair)`` to the ``COLL_ALLTOALLS`` macro). The algorithm should now be compiled and selected when using ``--cfg=smpi/alltoall:pair`` at runtime.

- To add a test for the algorithm to SimGrid's test suite, just add the new algorithm name to the ALLTOALL_COLL list found in teshsuite/smpi/CMakeLists.txt. When running ctest, a test for the new algorithm should be generated and executed. If it does not pass, please check your code or contact us.

- Please submit your patch for inclusion in SMPI, for example through a pull request on GitHub or directly by email.
450 Tracing of Internal Communications
451 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
453 By default, the collective operations are traced as a unique operation
454 because tracing all point-to-point communications composing them could
455 result in overloaded, hard to interpret traces. If you want to debug
456 and compare collective algorithms, you should set the
457 ``tracing/smpi/internals`` configuration item to 1 instead of 0.
Here are examples of two alltoall collective algorithm runs on 16 nodes:
the first one with a ring algorithm, the second one with a pairwise algorithm.
462 .. image:: /img/smpi_simgrid_alltoall_ring_16.png
465 Alltoall on 16 Nodes with the Ring Algorithm.
467 .. image:: /img/smpi_simgrid_alltoall_pair_16.png
470 Alltoall on 16 Nodes with the Pairwise Algorithm.
472 -------------------------
473 What can run within SMPI?
474 -------------------------
476 You can run unmodified MPI applications (both C/C++ and Fortran) within
477 SMPI, provided that you only use MPI calls that we implemented. Global
478 variables should be handled correctly on Linux systems.
Our coverage of the interface is quite decent, but still incomplete:
given the size of the MPI standard, we may well never manage to
implement absolutely all existing primitives. Currently, we have
almost no support for I/O primitives, but we still pass a very large
fraction of the MPICH coverage tests.
490 The full list of not yet implemented functions is documented in the
491 file `include/smpi/smpi.h
492 <https://framagit.org/simgrid/simgrid/tree/master/include/smpi/smpi.h>`_
493 in your version of SimGrid, between two lines containing the ``FIXME``
marker. If you really miss a feature, please get in touch with us: we
can guide you through the SimGrid code to help you implement it, and
we'd be glad to integrate your contribution into the main project.
498 .. _SMPI_what_globals:
500 .................................
501 Privatization of global variables
502 .................................
Concerning the globals, the problem comes from the fact that MPI
processes usually run as real UNIX processes, while they are all folded
into threads of a unique system process in SMPI. Global variables are
thus private to each MPI process in the usual setting, but become
shared between the ranks in SMPI. The problem and some potential
solutions are discussed in the article `Automatic Handling of Global
Variables for Multi-threaded MPI Programs
<http://charm.cs.illinois.edu/newPapers/11-23/paper.pdf>`_ (note that
this article does not deal with SMPI but with a competing solution
called AMPI that suffers from the same issue). This point used to be
problematic in SimGrid, but the problem should now be handled
automatically on Linux.
517 Older versions of SimGrid came with a script that automatically
518 privatized the globals through static analysis of the source code. But
519 our implementation was not robust enough to be used in production, so
it was removed at some point. Currently, SMPI comes with two
privatization mechanisms that you can :ref:`select at runtime
<options_smpi_privatization>`. The dlopen approach is used by
default as it is much faster and still very robust. The mmap approach
is an older mechanism that proves to be slower.
With the **mmap approach**, SMPI duplicates the ``.data`` and
``.bss`` segments of the ELF process and dynamically switches them
when changing MPI ranks. This allows each rank to have its own copy of the global
variables. No copy actually occurs, as this mechanism uses ``mmap()``
for efficiency. This mechanism is considered to be very robust on all
systems supporting ``mmap()`` (Linux and most BSDs). Its performance
is questionable, since each context switch between MPI ranks induces
several syscalls to change the ``mmap`` that redirects the ``.data``
and ``.bss`` segments to the copies of the new rank. The code will
also be copied several times in memory, inducing a slight increase of
the memory footprint.
538 Another limitation is that SMPI only accounts for global variables
539 defined in the executable. If the processes use external global
540 variables from dynamic libraries, they won't be switched
541 correctly. The easiest way to solve this is to statically link against
542 the library with these globals. This way, each MPI rank will get its
543 own copy of these libraries. Of course you should never statically
544 link against the SimGrid library itself.
With the **dlopen approach**, SMPI loads several copies of the same
executable in memory, as if it were a library, so that the global
variables get naturally duplicated. This first requires the executable
to be compiled as a relocatable binary, which is less common for
programs than for libraries. But most distributions are now compiled
this way for security reasons, as it allows randomizing the address
space layout. It should thus be safe to compile most (any?) program
this way. The second trick is that the dynamic linker refuses to link
the exact same file several times, be it a library or a relocatable
executable. This makes perfect sense in the general case, but we need
to circumvent this rule in our case. To that end, the
binary is copied to a temporary file before being re-linked against.
``dlmopen()`` cannot be used, as it only allows 256 contexts and
would also duplicate SimGrid itself.
This approach greatly speeds up the context switching, down to about
40 CPU cycles with our raw contexts, instead of requiring several
syscalls as with the ``mmap()`` approach. Another advantage is that it
permits running the SMPI contexts in parallel, which is obviously not
possible with the ``mmap()`` approach. It was tricky to implement, but
we are not aware of any flaws, so smpirun activates it by default.
568 In the future, it may be possible to further reduce the memory and
569 disk consumption. It seems that we could `punch holes
570 <https://lwn.net/Articles/415889/>`_ in the files before dl-loading
them to remove the code and constants, and mmap these areas onto a
unique copy. If done correctly, this would reduce the disk and
memory usage to the bare minimum, and would also reduce the pressure
on the CPU instruction cache. See the `relevant bug
<https://github.com/simgrid/simgrid/issues/137>`_ on github for
implementation leads.
Also, currently, only the binary is copied and dlopen-ed for each MPI
rank. We could probably extend this to external dependencies, but for
now, any external dependencies must be statically linked into your
application. As usual, SimGrid itself shall never be statically linked
into your app: you don't want to give a copy of SimGrid to each MPI rank,
that's way too much for them to deal with.
.. todo:: speak of smpi/privatize-libs here
587 ----------------------------------------------
588 Adapting your MPI code for further scalability
589 ----------------------------------------------
591 As detailed in the `reference article
592 <http://hal.inria.fr/hal-01415484>`_, you may want to adapt your code
to improve the simulation performance. But these tricks may seriously
hinder the result quality (or even prevent the app from running) if
used wrongly. We assume that if you want to simulate an HPC
application, you know what you are doing. Don't prove us wrong!
598 ..............................
599 Reducing your memory footprint
600 ..............................
If you get short on memory (the whole app is executed on a single node when
simulated), you should have a look at the SMPI_SHARED_MALLOC and
SMPI_SHARED_FREE macros. They allow sharing memory areas between processes: the
purpose of these macros is that the same malloc line on each process will point
to the exact same memory area. So if you have a malloc of 2M and you have 16
processes, this macro will change your memory consumption from 2M*16 to 2M
only: a single block for all processes.
If your program is fine with a block containing garbage values, because all
processes write and read to the same place without any kind of coordination,
then this macro can dramatically shrink your memory consumption. For example,
it will be very beneficial to a matrix multiplication code, as all blocks will
be stored in the same area. Of course, the resulting computations will be useless,
but you can still study the application behavior this way.
617 Naturally, this won't work if your code is data-dependent. For example, a Jacobi
618 iterative computation depends on the result computed by the code to detect
619 convergence conditions, so turning them into garbage by sharing the same memory
620 area between processes does not seem very wise. You cannot use the
621 SMPI_SHARED_MALLOC macro in this case, sorry.
623 This feature is demoed by the example file
624 `examples/smpi/NAS/dt.c <https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/NAS/dt.c>`_
626 .........................
627 Toward Faster Simulations
628 .........................
If your application is too slow, try using SMPI_SAMPLE_LOCAL,
SMPI_SAMPLE_GLOBAL and friends to indicate which computation loops can
be sampled. Some of the loop iterations will be executed to measure
their duration, and this duration will be used for the subsequent
iterations. These samples are done per process with
SMPI_SAMPLE_LOCAL, and shared between all processes with
SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution
times of your loop iterations are not stable.
This feature is demoed by the example file
`examples/smpi/NAS/ep.c <https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/NAS/ep.c>`_.

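
As a rough sketch, the macros wrap the body of the loop. The arguments shown
below (a maximal number of timed iterations and a relative precision
threshold) follow the historical API; double-check ``smpi/smpi.h`` in your
SimGrid version for the exact signature:

.. code-block:: c

   #include <mpi.h>
   #include <smpi/smpi.h> /* SMPI_SAMPLE_LOCAL / SMPI_SAMPLE_GLOBAL */

   void heavy_kernel(void); /* hypothetical computation kernel */

   void compute(int n)
   {
     for (int i = 0; i < n; i++) {
       /* Execute and time the block for at most 10 iterations (or until
        * the measured durations are stable within 5%), then inject the
        * measured duration instead of running the remaining iterations. */
       SMPI_SAMPLE_LOCAL(10, 0.05) {
         heavy_kernel();
       }
     }
   }
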

.............................
Ensuring Accurate Simulations
.............................

Out of the box, SimGrid may give you fairly accurate results, but there are
plenty of factors that could go wrong and make your results inaccurate or
even plainly wrong. Actually, you can only get accurate results out of a
nicely built model, covering both the system hardware and your application.
Such models are hard to pass over and reuse in other settings, because
elements that are relevant to one application (say, the latency of
point-to-point communications, collective operation implementation details
or CPU-network interaction) may be irrelevant to another one. The dream of
the perfect model, encompassing every aspect, is only a chimera, as the only
perfect model of reality is reality itself. If you go for simulation, then
you have to ignore some aspects of reality, but which aspects can safely be
ignored is actually application-dependent...

The only way to assess whether your settings provide accurate results is to
double-check them. If possible, you should first run the same experiment in
simulation and in real life, gathering as much information as you can. Try
to understand the discrepancies that you observe between the two settings
(visualization can be precious for that). Then, try to modify your model (of
the platform, of the collective operations) to reduce the most prominent
differences.

If the discrepancies come from the computing time, try adapting
``smpi/host-speed``: reduce it if your simulation runs faster than the real
execution. If the error comes from the communications, then you need to
fiddle with your platform file.

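
For instance, the host speed can be adjusted on the ``smpirun`` command line
without recompiling anything (the platform, hostfile and application names
below are placeholders):

.. code-block:: shell

   # Tell SMPI that the machine running the simulation computes at
   # 2.5 Gflop/s; lower this value if the simulated run is faster
   # than the real one.
   smpirun --cfg=smpi/host-speed:2500000000f \
           -platform my_platform.xml -hostfile my_hostfile.txt \
           -np 16 ./my_app
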
Be inventive in your modeling. Don't be afraid if the names used by SimGrid
do not match the real ones: we got very good results by modeling
multicore/GPU machines as sets of separate hosts interconnected with very
fast networks (but don't trust your model just because it has the right
names in the right places either).

Finally, you may want to check `this article
<https://hal.inria.fr/hal-00907887>`_ on the classical pitfalls in modeling
distributed systems.


-------------------------
Troubleshooting with SMPI
-------------------------


.................................
./configure refuses to use smpicc
.................................

If your ``./configure`` reports that the compiler is not functional or that
you are cross-compiling, try to define the ``SMPI_PRETEND_CC`` environment
variable before running the configuration:

.. code-block:: shell

   SMPI_PRETEND_CC=1 ./configure # here come the configure parameters

Indeed, programs compiled with ``smpicc`` cannot be executed without
``smpirun`` (they are shared libraries that do weird things on startup),
while ``./configure`` wants to test them directly. With ``SMPI_PRETEND_CC``
set, ``smpicc`` does not compile as shared, and the SMPI initialization
stops and returns 0 before doing anything that would fail without
``smpirun``.

.. warning::

   Make sure that ``SMPI_PRETEND_CC`` is only set when calling
   ``./configure``, not during the actual execution, or any program compiled
   with ``smpicc`` will stop before starting.


..............................................
./configure does not pick smpicc as a compiler
..............................................

In addition to the previous answers, some projects also need to be
explicitly told what compiler to use, as follows:

.. code-block:: shell

   SMPI_PRETEND_CC=1 ./configure CC=smpicc # here come the other configure parameters

Maybe your configure is using another variable, such as ``cc`` (in lower
case) or similar. Just check the logs.


.....................................
error: unknown type name 'useconds_t'
.....................................

Try to add ``-D_GNU_SOURCE`` to your compilation line to get rid of that
problem.

The reason is that SMPI provides its own version of ``usleep(3)`` to
override it and block in the simulated world rather than in the real one. It
needs the ``useconds_t`` type for that, which is declared only if you define
``_GNU_SOURCE`` before including ``unistd.h``. If your project includes that
header file before SMPI, then you need to ensure that you pass the right
configuration defines as advised above.

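
For instance (``my_code.c`` stands for your own source file):

.. code-block:: shell

   # Define _GNU_SOURCE on the command line so that it takes effect
   # before any header is included.
   smpicc -D_GNU_SOURCE -O2 -c my_code.c
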