</config>
A last solution is to pass your configuration directly in your program
-with :cpp:func:`simgrid::s4u::Engine::set_config` or :cpp:func:`MSG_config`.
+with :cpp:func:`simgrid::s4u::Engine::set_config`.
.. code-block:: cpp
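A minimal sketch of such a call (the option item and value shown are only illustrative; any item documented on this page works the same way, and it should be set before the platform is loaded):

```cpp
#include <simgrid/s4u/Engine.hpp>

int main(int argc, char* argv[])
{
  simgrid::s4u::Engine e(&argc, argv);
  // Illustrative item:value pair; set it before loading the platform
  // so that the models pick it up.
  simgrid::s4u::Engine::set_config("network/model:CM02");
  e.load_platform(argv[1]);
  e.run();
  return 0;
}
```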
- **host/model:** :ref:`options_model_select`
-- **maxmin/precision:** :ref:`cfg=maxmin/precision`
- **maxmin/concurrency-limit:** :ref:`cfg=maxmin/concurrency-limit`
-- **msg/debug-multiple-use:** :ref:`cfg=msg/debug-multiple-use`
-
- **model-check:** :ref:`options_modelchecking`
- **model-check/checkpoint:** :ref:`cfg=model-check/checkpoint`
- **model-check/communications-determinism:** :ref:`cfg=model-check/communications-determinism`
- **storage/max_file_descriptors:** :ref:`cfg=storage/max_file_descriptors`
-- **surf/precision:** :ref:`cfg=surf/precision`
+- **precision/timing:** :ref:`cfg=precision/timing`
+- **precision/work-amount:** :ref:`cfg=precision/work-amount`
- **For collective operations of SMPI,** please refer to Section :ref:`cfg=smpi/coll-selector`
- **smpi/auto-shared-malloc-thresh:** :ref:`cfg=smpi/auto-shared-malloc-thresh`
<https://hal.inria.fr/inria-00071989/document>`_.
- **ns-3** (only available if you compiled SimGrid accordingly):
Use the packet-level network
- simulators as network models (see :ref:`model_ns3`).
+ simulators as network models (see :ref:`models_ns3`).
This model can be :ref:`further configured <options_pls>`.
-- ``cpu/model``: specify the used CPU model. We have only one model
- for now:
+- ``cpu/model``: specify the used CPU model. We have only one model for now:
- **Cas01:** Simplistic CPU model (time=size/speed)
-- ``host/model``: The host concept is the aggregation of a CPU with a
- network card. Three models exists, but actually, only 2 of them are
- interesting. The "compound" one is simply due to the way our
- internal code is organized, and can easily be ignored. So at the
- end, you have two host models: The default one allows aggregation of
- an existing CPU model with an existing network model, but does not
- allow parallel tasks because these beasts need some collaboration
- between the network and CPU model.
-
- - **default:** Default host model. Currently, CPU:Cas01 and
- network:LV08 (with cross traffic enabled)
- - **compound:** Host model that is automatically chosen if
- you change the network and CPU models
- - **ptask_L07:** Host model somehow similar to Cas01+CM02 but
- allowing "parallel tasks", that are intended to model the moldable
- tasks of the grid scheduling literature.
+- ``host/model``: we have two such models for now.
+
+ - **default:** Default host model. It simply uses the otherwise configured models for cpu, disk and network (i.e. CPU:Cas01,
+ disk:S19 and network:LV08 by default)
+ - **ptask_L07:** This model is mandatory if you plan to use parallel tasks (and useless otherwise). ptasks are intended to
+ model the moldable tasks of the grid scheduling literature. A specific host model is necessary because each such activity
+ has both a compute and a communication component, so the CPU and network models must be mixed together.
- ``storage/model``: specify the used storage model. Only one model is
provided so far.
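For instance, switching the host model from the command line could look as follows (the simulator and platform file names are illustrative):

```console
$ ./my_simulator --cfg=host/model:ptask_L07 platform.xml
```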
The different models rely on a linear inequalities solver to share
the underlying resources. SimGrid allows you to change the solver, but
be cautious, **don't change it unless you are 100% sure**.
-
+
- items ``cpu/solver``, ``network/solver``, ``disk/solver`` and ``host/solver``
allow you to change the solver for each model:
and slow pattern that follows the actual dependencies.
.. _cfg=bmf/precision:
-.. _cfg=maxmin/precision:
-.. _cfg=surf/precision:
+.. _cfg=precision/timing:
+.. _cfg=precision/work-amount:
Numerical Precision
...................
-**Option** ``maxmin/precision`` **Default:** 1e-5 (in flops or bytes) |br|
-**Option** ``surf/precision`` **Default:** 1e-9 (in seconds) |br|
+**Option** ``precision/timing`` **Default:** 1e-9 (in seconds) |br|
+**Option** ``precision/work-amount`` **Default:** 1e-5 (in flops or bytes) |br|
**Option** ``bmf/precision`` **Default:** 1e-12 (no unit)
The analytical models handle a lot of floating point values. It is
**Option** ``network/TCP-gamma`` **Default:** 4194304
-The analytical models need to know the maximal TCP window size to take
-the TCP congestion mechanism into account. On Linux, this value can
-be retrieved using the following commands. Both give a set of values,
-and you should use the last one, which is the maximal size.
+The analytical models need to know the maximal TCP window size to take the TCP congestion mechanism into account (see
+:ref:`this page <understanding_cm02>` for details). On Linux, this value can be retrieved using the following commands.
+Both give a set of values, and you should use the last one, which is the maximal size.
.. code-block:: console
$ cat /proc/sys/net/ipv4/tcp_rmem # gives the receiver window
$ cat /proc/sys/net/ipv4/tcp_wmem # gives the sender window
+If you want to disable the TCP windowing mechanism, set this parameter to 0.
+
.. _cfg=network/bandwidth-factor:
.. _cfg=network/latency-factor:
.. _cfg=network/weight-S:
**Option** ``network/weight-S`` **Default:** depends on the model
Value used to account for RTT-unfairness when sharing a bottleneck (network connections with a large RTT are generally penalized
-against those with a small one). Described in `Accuracy Study and Improvement of Network Simulation in the SimGrid Framework
-<http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf>`_
+against those with a small one). See :ref:`models_TCP` and also this scientific paper: `Accuracy Study and Improvement of Network
+Simulation in the SimGrid Framework <http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf>`_
The default value for ``CM02`` is 0. ``LV08`` sets it to 20537 while both ``SMPI`` and ``IB`` set it to 8775.
InfiniBand network behavior can be modeled through 3 parameters
``smpi/IB-penalty-factors:"βe;βs;γs"``, as explained in `the PhD
-thesis of Jean-Marc Vincent
+thesis of Jérôme Vienne
<http://mescal.imag.fr/membres/jean-marc.vincent/index.html/PhD/Vienne.pdf>`_ (in French)
or more concisely in `this paper <https://hal.inria.fr/hal-00953618/document>`_,
even if that paper only describes models for Myrinet and Ethernet.
the ``smpi/async-small-thresh`` item. The default value is 0. This
behavior can also be manually set for mailboxes, by setting the
receiving mode of the mailbox with a call to
-:cpp:func:`MSG_mailbox_set_async`. After this, all messages sent to
+:cpp:func:`sg_mailbox_set_receiver`. After this, all messages sent to
this mailbox will have this behavior regardless of the message size.
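For instance, the receiving actor could declare itself as the permanent receiver of a mailbox (a sketch assuming the C interface; the mailbox name is illustrative):

```c
#include <simgrid/mailbox.h>

/* Executed by the receiving actor: after this call, every message sent
 * to "worker-0" uses the asynchronous behavior, whatever its size. */
static void declare_permanent_receiver(void)
{
  sg_mailbox_set_receiver("worker-0");
}
```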
This value needs to be smaller than or equal to the threshold set at
Configuring ns-3
^^^^^^^^^^^^^^^^
-**Option** ``ns3/TcpModel`` **Default:** "default" (ns-3 default)
+**Option** ``ns3/NetworkModel`` **Default:** "default" (ns-3 default TCP)
-When using ns-3, there is an extra item ``ns3/TcpModel``, corresponding
-to the ``ns3::TcpL4Protocol::SocketType`` configuration item in
-ns-3. The only valid values (enforced on the SimGrid side) are
-'default' (no change to the ns-3 configuration), 'NewReno' or 'Reno' or
-'Tahoe'.
+When using ns-3, the item ``ns3/NetworkModel`` can be used to switch between TCP and UDP, and to select the TCP variant in use. If
+the item is left unchanged, ns-3 uses its default TCP implementation. With a value of "UDP", ns-3 is set to use UDP instead.
+With a value of either 'NewReno' or 'Cubic', the ``ns3::TcpL4Protocol::SocketType`` configuration item in ns-3 is set to the
+corresponding protocol.
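For instance, selecting the NewReno variant from the command line (the simulator name is illustrative):

```console
$ ./my_simulator --cfg=ns3/NetworkModel:NewReno platform.xml
```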
**Option** ``ns3/seed`` **Default:** "" (don't set the seed in ns-3)
In SMPI, this depends on the message size, that is compared against two thresholds:
- if (size < :ref:`smpi/async-small-thresh <cfg=smpi/async-small-thresh>`) then
- MPI_Send returns immediately, even if the corresponding receive has not be issued yet.
-- if (:ref:`smpi/async-small-thresh <cfg=smpi/async-small-thresh>` < size < :ref:`smpi/send-is-detached-thresh <cfg=smpi/send-is-detached-thresh>`) then
- MPI_Send returns as soon as the corresponding receive has been issued. This is known as the eager mode.
+ MPI_Send returns immediately, and the message is sent even if the
+ corresponding receive has not been issued yet. This is known as the eager mode.
+- if (:ref:`smpi/async-small-thresh <cfg=smpi/async-small-thresh>` < size <
+ :ref:`smpi/send-is-detached-thresh <cfg=smpi/send-is-detached-thresh>`) then
+ MPI_Send also returns immediately, but SMPI waits for the corresponding
+ receive to be posted, in order to perform the communication operation.
- if (:ref:`smpi/send-is-detached-thresh <cfg=smpi/send-is-detached-thresh>` < size) then
MPI_Send returns only when the message has actually been sent over the network. This is known as the rendez-vous mode.
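Both thresholds can be changed at once on the smpirun command line (values in bytes, chosen here only as an illustration; other smpirun arguments omitted):

```console
$ smpirun --cfg=smpi/async-small-thresh:65536 \
          --cfg=smpi/send-is-detached-thresh:131072 \
          -np 4 -platform platform.xml ./my_mpi_app
```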
auto-detection fails for you). They are approximately sorted here from
the slowest to the most efficient:
- - **thread:** very slow factory using full featured threads (either
- pthreads or windows native threads). They are slow but very
- standard. Some debuggers or profilers only work with this factory.
- - **java:** Java applications are virtualized onto java threads (that
- are regular pthreads registered to the JVM)
+ - **thread:** very slow but very standard factory using full-featured threads.
+ Some debuggers or profilers only work with this factory.
- **ucontext:** fast factory using System V contexts (Linux and FreeBSD only)
- **boost:** This uses the `context
implementation <http://www.boost.org/doc/libs/1_59_0/libs/context/doc/html/index.html>`_
Disabling Stack Guard Pages
...........................
-**Option** ``contexts/guard-size`` **Default** 1 page in most case (0 pages on Windows or with MC)
+**Option** ``contexts/guard-size`` **Default:** 1 page in most cases (0 pages with MC)
Unless you use the threads context factory (see
:ref:`cfg=contexts/factory`), a stack guard page is usually used
.............................
Parallel execution of the user code is only considered stable in
-SimGrid v3.7 and higher, and mostly for MSG simulations. SMPI
+SimGrid v3.7 and higher, and mostly for S4U simulations. SMPI
simulations may well fail in parallel mode. It is described in
`INRIA RR-7653 <http://hal.inria.fr/inria-00602216/>`_.
you never used the tracing API.
-- Any SimGrid-based simulator (MSG, SMPI, ...) and raw traces:
+- Any SimGrid-based simulator (S4U, SMPI, ...) and raw traces:
.. code-block:: none
tells it to trace host and link utilization (without any
categorization).
-- MSG-based simulator and categorized traces (you need to
+- S4U-based simulator and categorized traces (you need to
declare categories and classify your tasks according to them)
.. code-block:: none
simulations. For additional details about this and all tracing
options, see :ref:`tracing_tracing_options`.
-Configuring MSG
----------------
-
-.. _cfg=msg/debug-multiple-use:
-
-Debugging MSG Code
-..................
-
-**Option** ``msg/debug-multiple-use`` **Default:** off
-
-Sometimes your application may try to send a task that is still being
-executed somewhere else, making it impossible to send this task. However,
-for debugging purposes, one may want to know what the other host is/was
-doing. This option shows a backtrace of the other process.
-
Configuring SMPI
----------------
-The SMPI interface provides several specific configuration items.
+The SMPI interface provides several specific configuration items.
These are not easy to see with ``--help-cfg``, since SMPI binaries are usually launched through the ``smpirun`` script.
.. _cfg=smpi/host-speed:
select the decision logic either of the OpenMPI or the MPICH libraries. (By
default SMPI uses naive version of collective operations.)
-Each collective operation can be manually selected with a
-``smpi/collective_name:algo_name``. Available algorithms are listed in
-:ref:`SMPI_use_colls`.
-
-.. TODO:: All available collective algorithms will be made available
- via the ``smpirun --help-coll`` command.
+Each collective operation can be manually selected with a ``smpi/collective_name:algo_name`` item. For example, if you want to
+use the Bruck algorithm for the Alltoall operation, pass ``--cfg=smpi/alltoall:bruck`` on the command line of smpirun. All
+available algorithms are listed in :ref:`SMPI_use_colls`, and you can get the full list implemented in your version using
+``smpirun --help-coll``.
.. _cfg=smpi/barrier-collectives: