Discover SMPI
-------------
SimGrid can not only :ref:`simulate algorithms <usecase_simalgo>`, but
it can also be used to execute real MPI applications on top of
virtual, simulated platforms with the SMPI module. Even complex
C/C++/F77/F90 applications should run out of the box in this
environment; in fact, many only require minor modifications to `run on top of SMPI
<https://framagit.org/simgrid/SMPI-proxy-apps>`_.

This setting lets you debug your MPI applications in a perfectly
reproducible setup, with no Heisenbugs. Enjoy the full Clairvoyance
provided by the simulator while running what-if analyses on platforms
that are still to be built! Several `production-grade MPI applications
<https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
use SimGrid for their integration and performance testing.

In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would in
the real systems, communication calls are intercepted and achieved by
the simulator.

To start using SMPI, you just need to compile your application with
``smpicc`` instead of ``gcc`` (or with ``smpicxx``/``smpiff`` for C++
and Fortran code), and to launch it with ``smpirun`` instead of
``mpirun``. Under the hood, the communication calls are implemented
using SimGrid: data is exchanged through memory copy, while the
simulator's performance models are used to predict the time taken by
each communication. Any computations
occurring between two MPI calls are benchmarked, and the corresponding
time is reported into the simulator.
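
To give an intuition of this benchmarking, here is a conceptual sketch
in plain C++ (this is *not* SMPI's actual implementation): a compute
region runs natively, its wall-clock duration is measured, and that
duration is injected into a simulated clock.

.. code-block:: cpp

   #include <chrono>
   #include <cstdio>

   static double simulated_clock = 0.0;  // simulated time, in seconds

   // Conceptual sketch only, not SMPI's real machinery. The compute
   // region executes for real (emulation); its measured duration then
   // advances the simulated clock.
   template <typename Compute>
   void benchmarked(Compute&& compute) {
     auto start = std::chrono::steady_clock::now();
     compute();  // runs natively, as SMPI emulates computations
     std::chrono::duration<double> elapsed =
         std::chrono::steady_clock::now() - start;
     simulated_clock += elapsed.count();  // "reported into the simulator"
   }

   int main() {
     benchmarked([] {
       volatile double acc = 0.0;  // computation between two MPI calls
       for (int i = 0; i < 1000000; ++i) acc += i * 0.5;
     });
     std::printf("simulated clock advanced by %g s\n", simulated_clock);
     return 0;
   }
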

.. image:: /tuto_smpi/img/big-picture.svg

Describing Your Platform
------------------------

As an SMPI user, you are supposed to provide a description of your
simulated platform, which is mostly a set of simulated hosts and network
links with some performance characteristics. SimGrid provides plenty
of :ref:`documentation <platform>` and examples (in the
``examples/platforms`` directory of the source tree).

The basic elements (with :ref:`pf_tag_host` and
:ref:`pf_tag_link`) are described first, and then the routes between
any pair of hosts are explicitly given with :ref:`pf_tag_route`.

Any host must be given a computational speed in flops while links must
be given a latency and a bandwidth. You can write 1Gf for
1,000,000,000 flops (full list of units in the reference guide of
:ref:`pf_tag_host` and :ref:`pf_tag_link`).

Routes defined with :ref:`pf_tag_route` are symmetrical by default,
meaning that the list of traversed links from A to B is the same as
from B to A. Explicitly define non-symmetrical routes if you prefer.
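
Putting these pieces together, a minimal platform file could look as
follows (host and link names, and all performance values, are made up
for illustration):

.. code-block:: xml

   <?xml version='1.0'?>
   <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
   <platform version="4.1">
     <zone id="world" routing="Full">
       <host id="host1" speed="1Gf"/>
       <host id="host2" speed="2Gf"/>
       <link id="link1" bandwidth="125MBps" latency="50us"/>
       <!-- symmetrical by default: also used from host2 to host1 -->
       <route src="host1" dst="host2">
         <link_ctn id="link1"/>
       </route>
     </zone>
   </platform>
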

Cluster with a Crossbar
.......................
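
A regular cluster whose hosts are interconnected through a
contention-free crossbar switch can be described compactly with
SimGrid's :ref:`pf_tag_cluster` tag. A minimal sketch, with
illustrative names and values:

.. code-block:: xml

   <?xml version='1.0'?>
   <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
   <platform version="4.1">
     <!-- 16 hosts named node-0.acme.org ... node-15.acme.org, each
          connected through a private link to a crossbar switch -->
     <cluster id="acme" prefix="node-" suffix=".acme.org" radical="0-15"
              speed="1Gf" bw="125MBps" lat="50us"/>
   </platform>
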

Hands-on!
---------

It is time to start using SMPI yourself. For that, you first need to
install it somehow, and then you will need an MPI application to play with.

Using Docker
............

.. code-block:: shell

   sudo apt install simgrid pajeng make gcc g++ gfortran python3 vite

For R analysis of the produced traces, you may want to install R,
and the `pajengr <https://github.com/schnorr/pajengr#installation/>`_ package.

simulation accuracy. Furthermore, there should not be any MPI calls
inside such parts of the code.

Use for this part the `gemm_mpi.cpp
<https://gitlab.com/PRACE-4IP/CodeVault/raw/master/hpc_kernel_samples/dense_linear_algebra/gemm/mpi/src/gemm_mpi.cpp>`_
example, which is provided by the `PRACE Codevault repository
<http://www.prace-ri.eu/prace-codevault/>`_.

The computing part of this example is the matrix multiplication routine

.. literalinclude:: /tuto_smpi/gemm_mpi.cpp
   :language: cpp
   :lines: 4-19

.. code-block:: shell

   $ smpicxx -O3 gemm_mpi.cpp -o gemm
   $ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/running-power:1000000000 ./gemm

This should end quite quickly, as the size of each matrix is only 1000x1000.

You may also be interested in the `SMPI reference article
<https://hal.inria.fr/hal-01415484>`_ or these `introductory slides
<http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. The :ref:`SMPI
reference documentation <SMPI_doc>` covers much more content than
this short tutorial.

Finally, we regularly use SimGrid in our teaching on MPI. This way,