- ``my_hostfile.txt`` is a classical MPI hostfile (that is, this file
lists the machines on which the processes must be dispatched, one
  per line). Using the ``hostname:num_procs`` syntax will deploy num_procs
  MPI processes on the host, sharing available cores (equivalent to listing
  the same host num_procs times on different lines).
- ``my_platform.xml`` is a classical SimGrid platform file. Of course,
the hosts of the hostfile must exist in the provided platform.
- ``./program`` is the MPI program to simulate, that you compiled with ``smpicc``
Finally, you can pass :ref:`any valid SimGrid parameter <options>` to your
program. In particular, you can pass ``--cfg=network/model:ns-3`` to
switch to use :ref:`models_ns3`. These parameters should be placed after
the name of your binary on the command line.
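
For instance, assuming the file names used above, the full command line could look
as follows (this is only a sketch; adapt the paths and parameters to your setup):

.. code-block:: console

   $ smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program --cfg=network/model:ns-3
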
...............................
You can switch the automatic selector through the
``smpi/coll-selector`` configuration item. Possible values:
- **ompi:** default selection logic of OpenMPI (version 4.1.2)
- **mpich**: default selection logic of MPICH (version 3.3b)
- **mvapich2**: selection logic of MVAPICH2 (version 1.9) tuned
on the Stampede cluster
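
For example, to force the MPICH selection logic for the whole simulation (reusing
the command line sketched above), one could write:

.. code-block:: console

   $ smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program --cfg=smpi/coll-selector:mpich
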
.. Warning:: Some collectives may require specific conditions to be
   executed correctly (for instance, having a communicator whose number of
   nodes is a power of two), which are currently not enforced by
   SimGrid. Some crashes can be expected while trying these algorithms
   with unusual sizes/parameters.
To retrieve the full list of implemented algorithms in your version of SimGrid, simply use ``smpirun --help-coll``.

MPI_Alltoall
^^^^^^^^^^^^
``impi``: use intel mpi selector for the scatter operations. |br|
``automatic (experimental)``: use an automatic self-benchmarking algorithm. |br|
``ompi_basic_linear``: basic linear scatter. |br|
``ompi_linear_nb``: linear scatter, non-blocking sends. |br|
``ompi_binomial``: binomial tree scatter. |br|
``mvapich2_two_level_direct``: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a basic linear inter node stage. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster. |br|
``mvapich2_two_level_binomial``: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a binomial phase. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster. |br|
``automatic (experimental)``: use an automatic self-benchmarking algorithm. |br|
``ompi_basic_recursivehalving``: recursive halving version from OpenMPI. |br|
``ompi_ring``: ring version from OpenMPI. |br|
``ompi_butterfly``: butterfly version from OpenMPI. |br|
``mpich_pair``: pairwise exchange version from MPICH. |br|
``mpich_rdb``: recursive doubling version from MPICH. |br|
``mpich_noncomm``: only works for power of 2 procs, recursive doubling for noncommutative ops. |br|
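
Each collective also has its own configuration item to force one specific
implementation for the whole run, as in the following sketch (the ``pair``
implementation of MPI_Alltoall is the one used as an example below):

.. code-block:: console

   $ smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program --cfg=smpi/alltoall:pair
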

Adding an algorithm
^^^^^^^^^^^^^^^^^^^
To add a new algorithm, one should check in the src/smpi/colls folder
how other algorithms are coded. Using plain MPI code inside SimGrid
can't be done, so algorithms have to be changed to use the SMPI version of
the calls instead (MPI_Send will become smpi_mpi_send). Some functions
may have different signatures than their MPI counterparts, so please check
the other algorithms or contact us using the `SimGrid
user mailing list <https://sympa.inria.fr/sympa/info/simgrid-community>`_,
or on `Mattermost <https://framateam.org/simgrid/channels/town-square>`_.
Example: adding a "pair" version of the Alltoall collective.
- To register the new version of the algorithm, simply add a line to the corresponding macro in src/smpi/colls/colls.h (add a "COLL_APPLY(action, COLL_ALLTOALL_SIG, pair)" to the COLL_ALLTOALLS macro). The algorithm should now be compiled and selectable with ``--cfg=smpi/alltoall:pair`` at runtime.
- To add a test for the algorithm inside SimGrid's test suite, just add the new algorithm name to the ALLTOALL_COLL list found inside teshsuite/smpi/CMakeLists.txt. When running ctest, a test for the new algorithm should be generated and executed (see the example after this list). If it does not pass, please check your code or contact us.
- Please submit your patch for inclusion in SMPI, for example through a pull request on GitHub or directly by email.
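
Once the new algorithm is registered and listed in the CMake file, a quick way to
exercise it from your build directory is to filter the generated tests by name
(the exact test names depend on your SimGrid version):

.. code-block:: console

   $ ctest -R alltoall   # run every test whose name matches "alltoall"
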
executable. It makes perfect sense in the general case, but we need
to circumvent this rule of thumb in our case. To that end, the
binary is copied into a temporary file before being re-linked against.
``dlmopen()`` cannot be used as it only allows 256 contexts, and as it
would also duplicate SimGrid itself.
This approach greatly speeds up the context switching, down to about
40 CPU cycles with our raw contexts, instead of requesting several
syscalls with the ``mmap()`` approach. Another advantage is that it
permits one to run the SMPI contexts in parallel, which is obviously not
possible with the ``mmap()`` approach. It was tricky to implement, but
Also, currently, only the binary is copied and dlopen-ed for each MPI
rank. We could probably extend this to external dependencies, but for
now, any external dependencies must be statically linked into your
application. As usual, SimGrid itself shall never be statically linked
in your app. You don't want to give a copy of SimGrid to each MPI rank:
that's way too much for them to deal with.
Troubleshooting with SMPI
-------------------------
.........................................
./configure or cmake refuse to use smpicc
.........................................
If your configuration script (such as ``./configure`` or ``cmake``) reports that the compiler is not
functional or that you are cross-compiling, try to define the
``SMPI_PRETEND_CC`` environment variable before running the
configuration.
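
For instance (a minimal sketch; the remaining arguments depend on your project):

.. code-block:: console

   $ SMPI_PRETEND_CC=1 ./configure   # or: SMPI_PRETEND_CC=1 cmake <path/to/sources>
   $ make                            # run without SMPI_PRETEND_CC (see the warning below)
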
.. warning::
   Make sure that SMPI_PRETEND_CC is only set when calling the configuration script but
not during the actual execution, or any program compiled with smpicc
will stop before starting.
.....................................................
./configure or cmake do not pick smpicc as a compiler
.....................................................
In addition to the previous answers, some projects also need to be
explicitly told what compiler to use, as follows:
.. code-block:: console
   $ SMPI_PRETEND_CC=1 ./configure CC=smpicc               # here come the other configure parameters
   $ SMPI_PRETEND_CC=1 cmake -DCMAKE_C_COMPILER=smpicc .   # equivalent for a CMake-based project
   $ make
Maybe your configure is using another variable, such as ``cc`` (in