``MPI_Fortran_COMPILER`` to the full path of smpicc, smpicxx and smpiff (or
smpif90), respectively. Example:
-.. code-block:: shell
+.. code-block:: console
- cmake -DMPI_C_COMPILER=/opt/simgrid/bin/smpicc -DMPI_CXX_COMPILER=/opt/simgrid/bin/smpicxx -DMPI_Fortran_COMPILER=/opt/simgrid/bin/smpiff .
+ $ cmake -DMPI_C_COMPILER=/opt/simgrid/bin/smpicc -DMPI_CXX_COMPILER=/opt/simgrid/bin/smpicxx -DMPI_Fortran_COMPILER=/opt/simgrid/bin/smpiff .
....................
Simulating your Code
....................
Use the ``smpirun`` script as follows:
-.. code-block:: shell
+.. code-block:: console
- smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program -blah
+ $ smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program -blah
- ``my_hostfile.txt`` is a classical MPI hostfile (that is, this file
  lists the machines on which the processes must be dispatched, one
  per line)
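+
+For instance, a minimal hostfile just lists the machines to use, one per
+line (hypothetical host names, which must exist in the provided platform):
+
+.. code-block:: text
+
+ node-0
+ node-1
+ node-2
+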
``smpirun`` accepts many other parameters, for example ``-trace`` to activate
tracing during the simulation. You can get the full list by running
``smpirun -help``.
+Finally, you can pass :ref:`any valid SimGrid parameter <options>` to your
+program. In particular, you can pass ``--cfg=network/model:ns-3`` to
+switch to the :ref:`ns-3 network model <model_ns3>`. These parameters should
+be placed after the name of your binary on the command line.
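+
+For example, reusing the command from above (this requires a SimGrid build
+with ns-3 support):
+
+.. code-block:: console
+
+ $ smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program --cfg=network/model:ns-3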
+
...............................
Debugging your Code within SMPI
...............................
a regular thread, and you can explore the state of each of them as
usual.
-.. code-block:: shell
+.. code-block:: console
+
+ $ smpirun -wrapper valgrind ...other args...
+ $ smpirun -wrapper "gdb --args" --cfg=contexts/factory:thread ...other args...
+
+Some shortcuts are available:
+
+- ``-gdb`` is equivalent to ``-wrapper "gdb --args" -keep-temps``, to run within the gdb debugger
+- ``-lldb`` is equivalent to ``-wrapper "lldb --" -keep-temps``, to run within the lldb debugger
+- ``-vgdb`` is equivalent to ``-wrapper "valgrind --vgdb=yes --vgdb-error=0" -keep-temps``,
+  to run within valgrind while allowing one to attach a debugger
+
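+For example, to run the simulation under gdb using the shortcut above:
+
+.. code-block:: console
+
+ $ smpirun -gdb -hostfile my_hostfile.txt -platform my_platform.xml ./program
+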
+To help locate bottlenecks and the largest allocations in the simulated
+application, the ``-analyze`` flag can be passed to ``smpirun``. It activates the
+:ref:`smpi/display-timing<cfg=smpi/display-timing>` and
+:ref:`smpi/display-allocs<cfg=smpi/display-allocs>` options and provides hints
+at the end of the execution.
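+
+For instance, reusing the arguments from the previous examples:
+
+.. code-block:: console
+
+ $ smpirun -analyze -hostfile my_hostfile.txt -platform my_platform.xml ./program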
+
+SMPI will also report leaks of MPI handles (Comm, Request, Op, Datatype...)
+at the end of the execution. This can help identify memory leaks, which can
+trigger crashes and slowdowns.
+By default, only the number of leaked items detected is displayed.
+The :ref:`smpi/list-leaks:n<cfg=smpi/list-leaks>` option can be used to display
+the first ``n`` leaks encountered and their type. To get more information, running
+``smpirun`` with ``-wrapper "valgrind --leak-check=full --track-origins=yes"``
+should show the exact origin of the leaked handles.
+Known issue: ``MPI_Cancel`` may trigger internal leaks within SMPI.
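+
+For example, to chase the origin of leaked handles with valgrind:
+
+.. code-block:: console
+
+ $ smpirun -wrapper "valgrind --leak-check=full --track-origins=yes" -hostfile my_hostfile.txt -platform my_platform.xml ./program
+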
- smpirun -wrapper valgrind ...other args...
- smpirun -wrapper "gdb --args" --cfg=contexts/factory:thread ...other args...
.. _SMPI_use_colls:
- mvapich2: use mvapich2 selector for the alltoall operations
- impi: use intel mpi selector for the alltoall operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- - bruck: Described by Bruck et.al. in <a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949">this paper</a>
+ - bruck: described by Bruck et al. in `this paper <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- 2dmesh: organizes the nodes as a two-dimensional mesh, and performs allgather
  along the dimensions
- 3dmesh: adds a third dimension to the previous algorithm
- impi: use intel mpi selector for the allreduce operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- lr: logical ring reduce-scatter then logical ring allgather
- - rab1: variations of the <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> algorithm: reduce_scatter then allgather
- - rab2: variations of the <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> algorithm: alltoall then allgather
- - rab_rsag: variation of the <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> algorithm: recursive doubling
+ - rab1: variations of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: reduce_scatter then allgather
+ - rab2: variations of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: alltoall then allgather
+ - rab_rsag: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: recursive doubling
reduce_scatter then recursive doubling allgather
- rdb: recursive doubling
- smp_binomial: binomial tree with smp: binomial intra
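+
+As a sketch, a specific algorithm can be selected for a given run through the
+configuration item named after the operation (assuming here ``smpi/allreduce``;
+``rdb`` is the recursive doubling algorithm listed above):
+
+.. code-block:: console
+
+ $ smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program --cfg=smpi/allreduce:rdb
+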
<https://framagit.org/simgrid/simgrid/tree/master/include/smpi/smpi.h>`_
in your version of SimGrid, between two lines containing the ``FIXME``
marker. If you really miss a feature, please get in touch with us: we
-can guide you though the SimGrid code to help you implementing it, and
+can guide you through the SimGrid code to help you implement it, and
we'd be glad to integrate your contribution to the main project.
.. _SMPI_what_globals:
iterations. These samples are done per processor with
SMPI_SAMPLE_LOCAL, and shared between all processors with
SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution
-time of your loop iteration are not stable.
+time of your loop iterations is not stable. If some parameters influence
+the timing of a kernel, and if they are reused often
+(the same kernel launched with a few different sizes during the run, for example),
+SMPI_SAMPLE_LOCAL_TAG and SMPI_SAMPLE_GLOBAL_TAG can be used, with a tag
+as their last parameter, to differentiate between calls. The tag is a character
+string crafted by the user, with a maximum length of 128 characters, and should
+include whatever is necessary to group calls of a given size together.
This feature is demoed by the example file
`examples/smpi/NAS/ep.c <https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/NAS/ep.c>`_
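+
+Below is a minimal sketch of the idea. The exact signatures of the
+``SMPI_SAMPLE_*`` macros vary between SimGrid versions, so treat the argument
+lists below as assumptions and check ``include/smpi/smpi.h`` for your version:
+
+.. code-block:: c
+
+ #include <stdio.h>
+ #include <smpi/smpi.h>
+
+ void compute_once(double* data, int size); /* hypothetical computation kernel */
+
+ void expensive_loop(double* data, int size)
+ {
+   /* The tag (at most 128 characters) groups the measurements of all
+    * calls made with the same kernel size. */
+   char tag[128];
+   snprintf(tag, sizeof tag, "kernel-%d", size);
+
+   for (int i = 0; i < 1000; i++) {
+     /* Assumed form SMPI_SAMPLE_LOCAL_TAG(iters, threshold, tag): benchmark
+      * the body for a few iterations, then skip it and inject the average
+      * measured time into the simulated clock instead. */
+     SMPI_SAMPLE_LOCAL_TAG(10, 0.05, tag) {
+       compute_once(data, size);
+     }
+   }
+ }
+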
<https://hal.inria.fr/hal-00907887>`_ on the classical pitfalls in
modeling distributed systems.
+----------------------
+Examples of SMPI Usage
+----------------------
+
+A few examples can be found directly in the SimGrid
+archive, under `examples/smpi <https://framagit.org/simgrid/simgrid/-/tree/master/examples/smpi>`_.
+Some show how to simply run MPI code in SimGrid, how to use the
+tracing/replay mechanism, or how to use plugins written in S4U to
+extend the simulator's abilities.
+
+Another source of examples lies in the SimGrid archive, under
+`teshsuite/smpi <https://framagit.org/simgrid/simgrid/-/tree/master/teshsuite/smpi>`_.
+They are not in the ``examples`` directory because they probably don't
+constitute pedagogical examples. Instead, they are intended to stress
+our implementation during the tests. Some of you may be interested
+anyway.
+
+But the best source of SMPI examples is certainly the `proxy app
+<https://framagit.org/simgrid/SMPI-proxy-apps>`_ external project.
+Proxy apps are scale models of real, massive HPC applications: each of
+them exhibits the same communication and computation patterns as the
+massive application that it stands for, but in only a few thousand
+lines instead of several million. These proxy apps
+are usually provided for educational purposes, and also to ensure that
+the represented large HPC applications will correctly work with the
+next generation of runtimes and hardware. `This project
+<https://framagit.org/simgrid/SMPI-proxy-apps>`_ gathers proxy apps
+from different sources, along with the patches needed (if any) to run
+them on top of SMPI.
+
-------------------------
Troubleshooting with SMPI
-------------------------
``SMPI_PRETEND_CC`` environment variable before running the
configuration.
-.. code-block:: shell
+.. code-block:: console
- SMPI_PRETEND_CC=1 ./configure # here come the configure parameters
- make
+ $ SMPI_PRETEND_CC=1 ./configure # here come the configure parameters
+ $ make
Indeed, the programs compiled with ``smpicc`` cannot be executed
without ``smpirun`` (they are shared libraries and do weird things on
In addition to the previous answers, some projects also need to be
explicitly told what compiler to use, as follows:
-.. code-block:: shell
+.. code-block:: console
- SMPI_PRETEND_CC=1 ./configure CC=smpicc # here come the other configure parameters
- make
+ $ SMPI_PRETEND_CC=1 ./configure CC=smpicc # here come the other configure parameters
+ $ make
Maybe your configure is using another variable, such as ``cc`` (in
lower case) or similar. Just check the logs.
error: unknown type name 'useconds_t'
.....................................
-Try to add ``-D_GNU_SOURCE`` to your compilation line to get ride
+Try to add ``-D_GNU_SOURCE`` to your compilation line to get rid
of that error.
The reason is that SMPI provides its own version of ``usleep(3)``
script of the actions to do sequentially. These trace files can
actually be captured with the online version of SMPI, as follows:
-.. code-block:: shell
+.. code-block:: console
$ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
`simgrid/examples/smpi/replay
<https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/replay>`_.
-.. code-block:: shell
+.. code-block:: console
$ smpicxx ../replay.cpp -O3 -o ../smpi_replay
Afterward, you can replay your trace in SMPI as follows:
+.. code-block:: console
+
$ smpirun -np 32 -platform ../cluster_torus.xml -ext smpi_replay ../smpi_replay LU.A.32
All the outputs are gone, as the application is not really simulated