- **host/model:** :ref:`options_model_select`
- **maxmin/concurrency-limit:** :ref:`cfg=maxmin/concurrency-limit`
- **model-check:** :ref:`options_modelchecking`
- **precision/timing:** :ref:`cfg=precision/timing`
- **precision/work-amount:** :ref:`cfg=precision/work-amount`
- **storage/max_file_descriptors:** :ref:`cfg=storage/max_file_descriptors`
- **For collective operations of SMPI,** please refer to Section :ref:`cfg=smpi/coll-selector`
- **smpi/auto-shared-malloc-thresh:** :ref:`cfg=smpi/auto-shared-malloc-thresh`
<https://hal.inria.fr/inria-00071989/document>`_.
- **ns-3** (only available if you compiled SimGrid accordingly):
  Use the packet-level network
  simulators as network models (see :ref:`models_ns3`).
  This model can be :ref:`further configured <options_pls>`.
- ``cpu/model``: specify the used CPU model. We have only one model for now:

  - **Cas01:** Simplistic CPU model (time=size/speed)

- ``host/model``: we have two such models for now.

  - **default:** Default host model. It simply uses the otherwise configured models for cpu, disk and network (i.e. CPU:Cas01,
    disk:S19 and network:LV08 by default)
  - **ptask_L07:** This model is mandatory if you plan to use parallel tasks (and useless otherwise). ptasks are intended to
    model the moldable tasks of the grid scheduling literature. A specific host model is necessary because each such activity
    has both compute and communicate components, so the CPU and network models must be mixed together.

- ``storage/model``: specify the used storage model. Only one model is
  provided so far.
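As a sketch of how a model is selected on the command line (the simulator binary, platform and deployment file names below are placeholders), you would enable parallel tasks like this:

.. code-block:: console

   $ ./my_simulator platform.xml deployment.xml --cfg=host/model:ptask_L07
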
The different models rely on a linear inequalities solver to share
the underlying resources. SimGrid allows you to change the solver, but
be cautious, **don't change it unless you are 100% sure**.
- items ``cpu/solver``, ``network/solver``, ``disk/solver`` and ``host/solver``
  allow you to change the solver for each model:
and slow pattern that follows the actual dependencies.
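For illustration (binary and platform names are placeholders, and ``bmf`` is assumed here to be an accepted solver value, consistent with the ``bmf/precision`` item below), switching the solver of the network model would look like:

.. code-block:: console

   $ ./my_simulator platform.xml --cfg=network/solver:bmf
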
.. _cfg=bmf/precision:
.. _cfg=precision/timing:
.. _cfg=precision/work-amount:
Numerical Precision
...................
**Option** ``precision/timing`` **Default:** 1e-9 (in seconds) |br|
**Option** ``precision/work-amount`` **Default:** 1e-5 (in flops or bytes) |br|
**Option** ``bmf/precision`` **Default:** 1e-12 (no unit)
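These thresholds can be tuned from the command line like any other configuration item, e.g. (binary and platform names are placeholders):

.. code-block:: console

   $ ./my_simulator platform.xml --cfg=precision/timing:1e-12 --cfg=precision/work-amount:1e-7
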
The analytical models handle a lot of floating point values. It is
**Option** ``network/TCP-gamma`` **Default:** 4194304
The analytical models need to know the maximal TCP window size to take the TCP congestion mechanism into account (see
:ref:`this page <understanding_cm02>` for details). On Linux, this value can be retrieved using the following commands.
Both give a set of values, and you should use the last one, which is the maximal size.
.. code-block:: console

   $ cat /proc/sys/net/ipv4/tcp_rmem # gives the receiver window
   $ cat /proc/sys/net/ipv4/tcp_wmem # gives the sender window
If you want to disable the TCP windowing mechanism, set this parameter to 0.

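As a sketch (the simulator binary and platform file are placeholders), disabling TCP windowing would thus be:

.. code-block:: console

   $ ./my_simulator platform.xml --cfg=network/TCP-gamma:0
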
.. _cfg=network/bandwidth-factor:
.. _cfg=network/latency-factor:
.. _cfg=network/weight-S:
**Option** ``network/weight-S`` **Default:** depends on the model
Value used to account for RTT-unfairness when sharing a bottleneck (network connections with a large RTT are generally penalized
against those with a small one). See :ref:`models_TCP` and also this scientific paper: `Accuracy Study and Improvement of Network
Simulation in the SimGrid Framework <http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf>`_
The default value for ``CM02`` is 0. ``LV08`` sets it to 20537 while both ``SMPI`` and ``IB`` set it to 8775.
InfiniBand network behavior can be modeled through 3 parameters
``smpi/IB-penalty-factors:"βe;βs;γs"``, as explained in `the PhD
thesis of Jérôme Vienne
<http://mescal.imag.fr/membres/jean-marc.vincent/index.html/PhD/Vienne.pdf>`_ (in French)
or more concisely in `this paper <https://hal.inria.fr/hal-00953618/document>`_,
even if that paper only describes models for Myrinet and Ethernet.
Configuring SMPI
----------------
The SMPI interface provides several specific configuration items.
These are not easy to see with ``--help-cfg``, since SMPI binaries are usually launched through the ``smpirun`` script.
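A typical ``smpirun`` invocation passing a configuration item might look as follows (the MPI program, platform file and hostfile names are placeholders for this sketch):

.. code-block:: console

   $ smpirun -np 4 -platform cluster.xml -hostfile hosts.txt ./my_mpi_app --cfg=smpi/host-speed:100Mf
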
.. _cfg=smpi/host-speed: