example, to set the item ``Item`` to the value ``Value``, simply
type the following on the command-line:
.. code-block:: console

   $ my_simulator --cfg=Item:Value (other arguments)
Several ``--cfg`` command line arguments can naturally be used. If you
need to include spaces in the argument, don't forget to quote the
- **smpi/or:** :ref:`cfg=smpi/or`
- **smpi/os:** :ref:`cfg=smpi/os`
- **smpi/papi-events:** :ref:`cfg=smpi/papi-events`
- **smpi/pedantic:** :ref:`cfg=smpi/pedantic`
- **smpi/privatization:** :ref:`cfg=smpi/privatization`
- **smpi/privatize-libs:** :ref:`cfg=smpi/privatize-libs`
- **smpi/send-is-detached-thresh:** :ref:`cfg=smpi/send-is-detached-thresh`
end, you have two host models: The default one allows aggregation of
an existing CPU model with an existing network model, but does not
allow parallel tasks because these beasts need some collaboration
between the network and CPU model.
- **default:** Default host model. Currently, CPU:Cas01 and
network:LV08 (with cross traffic enabled)
be retrieved using the following commands. Both give a set of values,
and you should use the last one, which is the maximal size.
.. code-block:: console

   $ cat /proc/sys/net/ipv4/tcp_rmem # gives the receiver window
   $ cat /proc/sys/net/ipv4/tcp_wmem # gives the sender window
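The retrieved maximum can then be handed over to SimGrid. Assuming the configuration item discussed here is ``network/TCP-gamma`` (the name used by SimGrid for the TCP window size; double-check against your version), a session could look as follows, with illustrative values:

.. code-block:: console

   $ cat /proc/sys/net/ipv4/tcp_rmem
   4096    131072  6291456
   $ my_simulator --cfg=network/TCP-gamma:6291456 (other arguments)

Here ``6291456`` is the last (maximal) value reported on this particular machine.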
.. _cfg=network/bandwidth-factor:
.. _cfg=network/latency-factor:
Study and Improvement of Network Simulation in the SimGrid Framework
<http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf>`_.
- **network/latency-factor**: apply a multiplier to latency.
  Models the TCP slow-start mechanism.
- **network/bandwidth-factor**: actual bandwidth perceived by the
  user.
- **network/weight-S**: bottleneck sharing constant parameter. Used
  to calculate RTT.

These parameters are the same for all communications in your simulation,
independently of message size or source/destination hosts. A more flexible
mechanism based on callbacks was introduced in SimGrid. It provides the user
a callback that is invoked for each communication, allowing the user to set
different latency and bandwidth factors based on the message size, the links
used, or the zones traversed. For more details on how to use it, please look at
`examples/cpp/network-factors/s4u-network-factors.cpp <https://framagit.org/simgrid/simgrid/tree/master/examples/cpp/network-factors/s4u-network-factors.cpp>`_.

If you are using the SMPI model, these correction coefficients are
themselves corrected by constant values depending on the size of the
exchange. By default SMPI uses factors computed on the Stampede
Supercomputer at TACC, with optimal deployment of processes on
nodes. Again, only hardcore experts should bother about this fact.
For more details, see the SMPI sections about :ref:`cfg=smpi/bw-factor` and :ref:`cfg=smpi/lat-factor`.
.. _cfg=smpi/IB-penalty-factors:
Infiniband model
even though that paper only describes models for Myrinet and Ethernet.
You can see in Fig 2 some results for Infiniband, for example. This model
may be outdated by now for modern Infiniband anyway, so a new
validation would be good.
The three parameters are defined as follows:
To determine the penalty for a communication, two values need to be calculated. First, the penalty caused by the conflict in transmission, noted ps.
- if ∆s (i) = 1 then ps = 1.
- if ∆s (i) ≥ 2 and ∆e (i) ≥ 3 then ps = ∆s (i) × βs × γr
- else, ps = ∆s (i) × βs
Then, the penalty caused by the conflict in reception (noted pe) should be computed as follows:
- if ∆e (i) = 1 then pe = 1
- else, pe = Φ (e) × βe × Ω (s, e)
Finally, the penalty associated with the communication is:
p = max (ps ∈ s, pe)
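As a quick sanity check of these rules, consider a communication with ∆s (i) = 2 and ∆e (i) = 3 (illustrative values; βs, βe, γr, Φ and Ω are the model's calibrated parameters):

.. math::

   p_s = \Delta_s(i) \times \beta_s \times \gamma_r = 2\beta_s\gamma_r,
   \qquad
   p_e = \Phi(e) \times \beta_e \times \Omega(s, e),
   \qquad
   p = \max(p_s, p_e)

The second rule for ps applies since ∆s (i) ≥ 2 and ∆e (i) ≥ 3, and the second rule for pe applies since ∆e (i) ≠ 1.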
Configuring loopback link
^^^^^^^^^^^^^^^^^^^^^^^^^
Several network models provide an implicit loopback link to account for local
communication on a host. By default it has a 10GBps bandwidth and a null latency.
This can be changed with the ``network/loopback-lat`` and ``network/loopback-bw``
items.
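For instance, to model hosts whose local communications are slower than the default, one could pass (purely illustrative values; bandwidth in bytes per second, latency in seconds):

.. code-block:: console

   $ my_simulator --cfg=network/loopback-bw:1e9 --cfg=network/loopback-lat:1e-6 (other arguments)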
.. _cfg=smpi/async-small-thresh:
To enable SimGrid's model-checking support, the program should
be executed using the simgrid-mc wrapper:
.. code-block:: console

   $ simgrid-mc ./my_program
Safety properties are expressed as assertions using the function
:cpp:func:`void MC_assert(int prop)`.
property, as formatted by the `ltl2ba <https://github.com/utwente-fmt/ltl2ba>`_ program.
Note that ltl2ba is not part of SimGrid and must be installed separately.
.. code-block:: console

   $ simgrid-mc ./my_program --cfg=model-check/property:<filename>
.. _cfg=model-check/checkpoint:
execution graph (where a safety or liveness property is violated), it
generates an identifier for this path. Here is an example of the output:
.. code-block:: console

   [ 0.000000] (0:@) Check a safety property
   [ 0.000000] (0:@) **************************
8192 (in KiB), while our Chord simulation works with stacks as small
as 16 KiB, for example. You can ensure that some actors have a specific
size by simply changing the value of this configuration item before
creating these actors. The :cpp:func:`simgrid::s4u::Engine::set_config`
functions are handy for that.
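The same effect can also be obtained globally from the command line. Assuming the item discussed here is ``contexts/stack-size`` (with the value expressed in KiB), the small Chord-sized stacks mentioned above would be requested with:

.. code-block:: console

   $ my_simulator --cfg=contexts/stack-size:16 (other arguments)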
This *setting is ignored* when using the thread factory (because there
Configuring the Tracing
-----------------------
The :ref:`tracing subsystem <outcome_vizu>` can be configured in
several different ways depending on the interface used (S4U, SMPI)
and the kind of traces that needs to be obtained. See the
:ref:`Tracing Configuration Options subsection
<tracing_tracing_options>` for a full description of each
you never used the tracing API.
- Any SimGrid-based simulator (MSG, SMPI, ...) and raw traces:
  .. code-block:: none

     --cfg=tracing:yes --cfg=tracing/uncategorized:yes
tells it to trace host and link utilization (without any
categorization).
- MSG-based simulator and categorized traces (you need to
  declare categories and classify your tasks according to them)
  .. code-block:: none

     --cfg=tracing:yes --cfg=tracing/categorized:yes
- SMPI simulator and traces for a space/time view:
  .. code-block:: console

     $ smpirun -trace ...
The ``-trace`` parameter for the smpirun script runs the simulation
with ``--cfg=tracing:yes --cfg=tracing/smpi:yes``. Check the
- Add a string on top of the trace file as comment:
  .. code-block:: none

     --cfg=tracing/comment:my_simulation_identifier
- Add the contents of a textual file on top of the trace file as comment:
  .. code-block:: none

     --cfg=tracing/comment-file:my_file_with_additional_information.txt
first column defines the section that will be subject to a speedup;
the second column is the speedup. For instance:
.. code-block:: none

   "start:stop","ratio"
   "exchange_1.f:30:exchange_1.f:130",1.18244559422142
The first draft, however, just implements a "global" (i.e., for all processes) set
of counters, the "default" set.
.. code-block:: none

   --cfg=smpi/papi-events:"default:PAPI_L3_LDM:PAPI_L2_LDM"
or full names. Check with ldd the name of the library you want to
use. For example:
.. code-block:: console

   $ ldd allpairf90
         ...
         libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007fbb4d91b000)
         ...
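The short name reported by ``ldd`` is then what the option expects. A hypothetical invocation (the platform file, process count and binary name are placeholders):

.. code-block:: console

   $ smpirun --cfg=smpi/privatize-libs:libgfortran.so.3 -np 4 -platform platform.xml ./allpairf90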
.. TODO:: All available collective algorithms will be made available
via the ``smpirun --help-coll`` command.
.. _cfg=smpi/finalization-barrier:

Add a barrier in MPI_Finalize
.............................
**Option** ``smpi/finalization-barrier`` **default:** off
By default, SMPI processes are destroyed as soon as their code ends,
.. _cfg=smpi/errors-are-fatal:
Disable MPI fatal errors
........................

**Option** ``smpi/errors-are-fatal`` **default:** on
By default, SMPI processes will crash if an MPI error code is returned. MPI allows
will turn on this behaviour by default (for all concerned types and errhandlers).
This can ease debugging by going after the first reported error.
.. _cfg=smpi/pedantic:

Disable pedantic MPI errors
...........................

**Option** ``smpi/pedantic`` **default:** on

By default, SMPI will report all errors it finds in MPI codes. Some of these errors
may not be considered errors by all developers. This flag can be turned off to
avoid reporting some usually harmless mistakes.
The list of concerned errors (to be expanded in the future):

 - Calling MPI_Win_fence only once in a program, hence just opening an epoch without
   ever closing it.

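To silence these reports, turn the flag off; for example (the platform file, process count and binary name are placeholders):

.. code-block:: console

   $ smpirun --cfg=smpi/pedantic:off -np 2 -platform platform.xml ./my_mpi_app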
.. _cfg=smpi/iprobe:
Inject constant times for MPI_Iprobe
assignment). In practice, change the calls to malloc() and free() into
SMPI_SHARED_MALLOC() and SMPI_SHARED_FREE().
SMPI provides two algorithms for this feature. The first one, called
``local``, allocates one block per call to SMPI_SHARED_MALLOC()
(each call site gets its own block), and this block is shared
among all MPI ranks. This is implemented with the shm_* functions
To activate this, you must mount a hugetlbfs on your system and allocate
at least one huge page:
.. code-block:: console

   $ mkdir /home/huge
   $ sudo mount none /home/huge -t hugetlbfs -o rw,mode=0777
   $ sudo sh -c 'echo 1 > /proc/sys/vm/nr_hugepages' # echo more if you need more
Then, you can pass the option
``--cfg=smpi/shared-malloc-hugepage:/home/huge`` to smpirun to
writing to the global variable simgrid::simix::breakpoint. For example,
with gdb:
.. code-block:: none

   set variable simgrid::simix::breakpoint = 3.1416
If you want to have spaces in your log format, you should protect it. Otherwise, SimGrid will consider that this is a space-separated list of several parameters. But you should
also protect it from the shell, which also splits command line arguments on spaces. In the end, you should use something such as ``--log="'root.fmt:%l: [%p/%c]: %m%n'"``.
Another option is to use the ``%e`` directive for spaces, as in ``--log=root.fmt:%l:%e[%p/%c]:%e%m%n``.
Category appender
.................