X-Git-Url: http://bilbo.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/f843921db769f36e1622f3925d4ea8b82ccf1d40..e2e8fe5601125b3c81e3eb51b5cea86ad51fbb66:/docs/source/Configuring_SimGrid.rst
diff --git a/docs/source/Configuring_SimGrid.rst b/docs/source/Configuring_SimGrid.rst
index db03b0a01c..e7a968578b 100644
--- a/docs/source/Configuring_SimGrid.rst
+++ b/docs/source/Configuring_SimGrid.rst
@@ -67,7 +67,7 @@ with :cpp:func:`simgrid::s4u::Engine::set_config` or :cpp:func:`MSG_config`.
   int main(int argc, char *argv[])
   {
     simgrid::s4u::Engine e(&argc, argv);
-    e->set_config("Item:Value");
+    simgrid::s4u::Engine::set_config("Item:Value");
     // Rest of your code
   }
@@ -87,7 +87,6 @@ Existing Configuration Items
 - **contexts/factory:** :ref:`cfg=contexts/factory`
 - **contexts/guard-size:** :ref:`cfg=contexts/guard-size`
 - **contexts/nthreads:** :ref:`cfg=contexts/nthreads`
-- **contexts/parallel-threshold:** :ref:`cfg=contexts/parallel-threshold`
 - **contexts/stack-size:** :ref:`cfg=contexts/stack-size`
 - **contexts/synchro:** :ref:`cfg=contexts/synchro`
@@ -124,6 +123,8 @@ Existing Configuration Items
 - **network/bandwidth-factor:** :ref:`cfg=network/bandwidth-factor`
 - **network/crosstraffic:** :ref:`cfg=network/crosstraffic`
 - **network/latency-factor:** :ref:`cfg=network/latency-factor`
+- **network/loopback-lat:** :ref:`cfg=network/loopback`
+- **network/loopback-bw:** :ref:`cfg=network/loopback`
 - **network/maxmin-selective-update:** :ref:`Network Optimization Level `
 - **network/model:** :ref:`options_model_select`
 - **network/optim:** :ref:`Network Optimization Level `
@@ -395,6 +396,16 @@ can be set to 0 (disable this feature) or 1 (enable it).
 Note that with the default host model this option is activated by default.
 
+.. _cfg=network/loopback:
+
+Configuring loopback link
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Several network models provide an implicit loopback link to account for local
+communication on a host. 
By default it has a 10GBps bandwidth and a null latency.
+This can be changed with ``network/loopback-lat`` and ``network/loopback-bw``
+items.
+
 .. _cfg=smpi/async-small-thresh:
 
 Simulating Asynchronous Send
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 (this configuration item is experimental and may change or disappear)
 
-It is possible to specify that messages below a certain size will be
+It is possible to specify that messages below a certain size (in bytes) will be
 sent as soon as the call to MPI_Send is issued, without waiting for
-the correspondant receive. This threshold can be configured through
+the corresponding receive. This threshold can be configured through
 the ``smpi/async-small-thresh`` item. The default value is 0. This
 behavior can also be manually set for mailboxes, by setting the
 receiving mode of the mailbox with a call to
@@ -433,7 +444,7 @@ Configuring the Storage model
 .. _cfg=storage/max_file_descriptors:
 
-File Descriptor Cound per Host
+File Descriptor Count per Host
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 **Option** ``storage/max_file_descriptors`` **Default:** 1024
@@ -504,10 +515,10 @@ In SMPI, this depends on the message size, that is compared against two threshol
 - if (:ref:`smpi/send-is-detached-thresh ` < size) then MPI_Send returns only when the message has actually been sent over the network. This is known as the rendez-vous mode.
 
-The ``smpi/buffering`` option gives an easier interface to choose between these semantics. It can take two values:
+The ``smpi/buffering`` option (only valid with the model checker) gives an easier interface to choose between these semantics. It can take two values:
 
-- **zero:** means that buffering should be disabled. Blocking communications are actually blocking.
-- **infty:** means that buffering should be made infinite. Blocking communications are non-blocking.
+- **zero:** means that buffering should be disabled. All communications are actually blocking.
+- **infty:** means that buffering should be made infinite. 
All communications are non-blocking.
 
 .. _cfg=model-check/property:
 
 Specifying a liveness property
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 **Option** ``model-check/property`` **Default:** unset
 
 If you want to specify liveness properties, you have to pass them on
 the command line, specifying the name of the file containing the
 property, as formatted by the `ltl2ba `_ program.
-Note that ltl2ba is not part of SimGrid and must be installed separatly.
+Note that ltl2ba is not part of SimGrid and must be installed separately.
 
 .. code-block:: shell
 
@@ -708,7 +719,7 @@ MC-related options, keep non-MC-related ones and add
 Currently, if the path is of the form ``X;Y;Z``, each number denotes
 the actor's pid that is selected at each indecision point. If it's of
 the form ``X/a;Y/b``, the X and Y are the selected pids while the a
-and b are the return values of their simcalls. In the previouse
+and b are the return values of their simcalls. In the previous
 example, ``1/3;1/4``, you can see from the full output that the actor
 1 is doing MC_RANDOM simcalls, so the 3 and 4 simply denote the values
 that these simcalls return.
@@ -782,10 +793,16 @@ stacks), leading to segfaults with corrupted stack traces.
 If you want to push the scalability limits of your code, you might
 want to reduce the ``contexts/stack-size`` item. Its default value is
 8192 (in KiB), while our Chord simulation works with stacks as small
-as 16 KiB, for example. This *setting is ignored* when using the
-thread factory. Instead, you should compile SimGrid and your
-application with ``-fsplit-stack``. Note that this compilation flag is
-not compatible with the model checker right now.
+as 16 KiB, for example. You can ensure that some actors have a specific
+stack size by simply changing the value of this configuration item before
+creating these actors. The :cpp:func:`simgrid::s4u::Engine::set_config`
+functions are handy for that.
+
+This *setting is ignored* when using the thread factory (because there
+is no way to modify the stack size with C++ system threads). 
Instead,
+you should compile SimGrid and your application with
+``-fsplit-stack``. Note that this compilation flag is not compatible
+with the model checker right now.
 
 The operating system should only allocate memory for the pages of the
 stack which are actually used and you might not need to use this in
@@ -814,7 +831,6 @@ on other parts of the memory if their size is too small for the application.
 .. _cfg=contexts/nthreads:
-.. _cfg=contexts/parallel-threshold:
 .. _cfg=contexts/synchro:
 
 Running User Code in Parallel
 
 run. To activate this, set the ``contexts/nthreads`` item to the
 amount of cores that you have in your computer (or lower than 1 to
 have the amount of cores auto-detected).
 
-Even if you asked several worker threads using the previous option,
-you can request to start the parallel execution (and pay the
-associated synchronization costs) only if the potential parallelism is
-large enough. For that, set the ``contexts/parallel-threshold``
-item to the minimal amount of user contexts needed to start the
-parallel execution. In any given simulation round, if that amount is
-not reached, the contexts will be run sequentially directly by the
-main thread (thus saving the synchronization costs). Note that this
-option is mainly useful when the grain of the user code is very fine,
-because our synchronization is now very efficient.
-
 When parallel execution is activated, you can choose the
 synchronization schema used with the ``contexts/synchro`` item,
 whose value is either:
@@ -876,24 +881,21 @@ you never used the tracing API.
 
 .. 
code-block:: shell
 
-     --cfg=tracing:yes --cfg=tracing/uncategorized:yes --cfg=triva/uncategorized:uncat.plist
+     --cfg=tracing:yes --cfg=tracing/uncategorized:yes
 
-  The first parameter activates the tracing subsystem, the second
+  The first parameter activates the tracing subsystem, and the second
   tells it to trace host and link utilization (without any
-  categorization) and the third creates a graph configuration file to
-  configure Triva when analysing the resulting trace file.
+  categorization).
 
- MSG or SimDag-based simulator and categorized traces (you need to
  declare categories and classify your tasks according to them)
 
  .. code-block:: shell
 
-     --cfg=tracing:yes --cfg=tracing/categorized:yes --cfg=triva/categorized:cat.plist
+     --cfg=tracing:yes --cfg=tracing/categorized:yes
 
-  The first parameter activates the tracing subsystem, the second
-  tells it to trace host and link categorized utilization and the
-  third creates a graph configuration file to configure Triva when
-  analysing the resulting trace file.
+  The first parameter activates the tracing subsystem, and the second
+  tells it to trace host and link categorized utilization.
 
- SMPI simulator and traces for a space/time view:
 
@@ -961,10 +963,16 @@ a ``MPI_Send()``, SMPI will automatically benchmark the duration
 of this code, and create an execution task within the simulator to
 take this into account. For that, the actual duration is measured on
 the host machine and then scaled to the power of the corresponding
-simulated machine. The variable ``smpi/host-speed`` allows one to specify
-the computational speed of the host machine (in flop/s) to use when
-scaling the execution times. It defaults to 20000, but you really want
-to adjust it to get accurate simulation results.
+simulated machine. The variable ``smpi/host-speed`` allows one to
+specify the computational speed of the host machine (in flop/s by
+default) to use when scaling the execution times. 
+
+The default value is ``smpi/host-speed=20kf`` (= 20,000 flop/s). This
+is probably underestimated for most machines, leading SimGrid to
+overestimate the amount of flops in the execution blocks that are
+automatically injected in the simulator. As a result, the execution
+time of the whole application will probably be overestimated until you
+use a realistic value.
 
 When the code consists of numerous consecutive MPI calls, the previous
 mechanism feeds the simulation kernel with numerous tiny
@@ -1050,7 +1058,7 @@ The possible throughput of network links is often dependent on the
 message sizes, as protocols may adapt to different message sizes. With
 this option, a series of message sizes and factors are given, helping
 the simulation to be more realistic. For instance, the current default
-value means that messages with size 65472 and more will get a total of
+value means that messages with size 65472 bytes and more will get a total of
 MAX_BANDWIDTH*0.940694, messages of size 15424 to 65471 will get
 MAX_BANDWIDTH*0.697866, and so on (where MAX_BANDWIDTH denotes the
 bandwidth of the link).
@@ -1289,7 +1297,7 @@ values, for example ``1:3:2;10:5:1``.
 The sections are divided by ";" so this example contains two sections.
 Furthermore, each section consists of three values.
 
-1. The first value denotes the minimum size for this section to take effect;
+1. The first value denotes the minimum size in bytes for this section to take effect;
    read it as "if message size is greater than this value (and other
    section has a larger first value that is also smaller than the message
    size), use this". In the first section above, this value is "1".
@@ -1382,7 +1390,7 @@ for each shared block.
 
 With the ``global`` algorithm, each call to SMPI_SHARED_MALLOC()
 returns a new address, but it only points to a shadow block: its memory
 area is mapped on a 1 MiB file on disk. If the returned block is of size
-N MiB, then the same file is mapped N times to cover the whole bloc. 
+N MiB, then the same file is mapped N times to cover the whole block. At the end, no matter how many times you call SMPI_SHARED_MALLOC, this will only consume 1 MiB in memory.
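
Editor's note: the configuration items touched by this diff can be combined on
the command line of any SimGrid-based simulator. The invocation below is an
illustrative sketch only, not part of the diff: the binary name
``./my_simulator``, the platform file, and the chosen values are hypothetical
placeholders.

.. code-block:: shell

   # Hypothetical invocation combining several items from this page:
   # a faster loopback link, 1 MiB actor stacks (value is in KiB),
   # auto-detected worker threads, and uncategorized tracing.
   ./my_simulator platform.xml \
     --cfg=network/loopback-bw:20GBps \
     --cfg=network/loopback-lat:2us \
     --cfg=contexts/stack-size:1024 \
     --cfg=contexts/nthreads:0 \
     --cfg=tracing:yes --cfg=tracing/uncategorized:yes

As the page notes, the same items can equivalently be set from C++ with
:cpp:func:`simgrid::s4u::Engine::set_config` (e.g.
``set_config("contexts/stack-size:1024")``) before the concerned actors are
created.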