my_simulator --cfg=Item:Value (other arguments)
\endverbatim
Several \c --cfg command line arguments can naturally be used. If you
need to include spaces in the argument, don't forget to quote the
argument. You can even escape the included quotes (write \' for ' if
you have your argument between ').
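For instance (a sketch of the quoting described above; \c my_simulator and the item names are placeholders):

```shell
# several --cfg arguments can be combined on one command line
my_simulator --cfg=Item:Value --cfg=OtherItem:OtherValue platform.xml
# quote the whole argument when the value contains spaces
my_simulator "--cfg=Item:a value with spaces" platform.xml
```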
within that tag, you can pass one or several \c \<prop\> tags to specify
the configuration to use. For example, setting \c Item to \c Value
can be done by adding the following to the beginning of your platform
file:
\verbatim
<config>
<prop id="Item" value="Value"/>
</config>
It is possible to specify a timing gap between consecutive emissions on
the same network card through the \b network/sender_gap item. This
is still under investigation as of this writing, and the default value is
to wait 10 microseconds (1e-5 seconds) between emissions.
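As a sketch (hypothetical invocation; the value is assumed to be in seconds, as the default suggests):

```shell
# force a 1 ms gap between consecutive emissions on the same card
my_simulator --cfg=network/sender_gap:0.001 platform.xml deployment.xml
```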
\subsubsection options_model_network_asyncsend Simulating asynchronous send
If you want to push the scalability limits of your code, you really
want to reduce the \b contexts/stack_size item. Its default value
is 8192 (in KiB), while our Chord simulation works with stacks as small
as 16 KiB, for example. For the thread factory, the default value
is the one of the system; if it is too large or too small, it has to be set
with this parameter.
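For example (hypothetical simulator name; the value is in KiB as described above):

```shell
# run with 16 KiB stacks instead of the 8192 KiB default
my_simulator --cfg=contexts/stack_size:16 platform.xml deployment.xml
```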
- Any SimGrid-based simulator (MSG, SimDag, SMPI, ...) and raw traces:
\verbatim
--cfg=tracing:yes --cfg=tracing/uncategorized:yes --cfg=triva/uncategorized:uncat.plist
\endverbatim
The first parameter activates the tracing subsystem, the second
tells it to trace host and link utilization (without any
- MSG or SimDag-based simulator and categorized traces (you need to declare categories and classify your tasks according to them)
\verbatim
--cfg=tracing:yes --cfg=tracing/categorized:yes --cfg=triva/categorized:cat.plist
\endverbatim
The first parameter activates the tracing subsystem, the second
tells it to trace host and link categorized utilization and the
smpirun -trace ...
\endverbatim
The <i>-trace</i> parameter for the smpirun script runs the
simulation with --cfg=tracing:yes and --cfg=tracing/smpi:yes. Check the
smpirun's <i>-help</i> parameter for additional tracing options.
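A typical invocation might look as follows (the \c -np, \c -platform and \c -hostfile arguments are the usual smpirun ones; the file names are placeholders):

```shell
smpirun -trace -np 4 -platform platform.xml -hostfile hostfile ./my_mpi_app
```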
Sometimes you might want to put additional information on the trace to
simulation kernel (default value: 1e-6). Please note that in some
circumstances, this optimization can hinder the simulation accuracy.
In some cases, however, one may wish to disable the simulation of
application computation. This is the case when SMPI is used not to
simulate a genuine MPI application, but instead an MPI code that performs
"live replay" of another MPI application (e.g., ScalaTrace's replay tool,
or various on-line simulators that run an application at scale). In this
case, the computation of the replay/simulation logic should not be
simulated by SMPI. Instead, the replay tool or on-line simulator issues
"computation events", which correspond to the actual application
being replayed/simulated. At the moment, these computation events can
be simulated through SMPI by calling the internal smpi_execute*() functions.

To disable the benchmarking/simulation of computation in the simulated
application, the variable \b smpi/simulation_computation should be set
to no.

\subsection options_smpi_timing Reporting simulation time
Most of the time, you run MPI code through SMPI to compute the time it
Simulation time: 1e3 seconds.
\endverbatim
\subsection options_smpi_global Automatic privatization of global variables

MPI executables are meant to be executed in separate processes, but SMPI
executes them all within a single process. The global variables of the
executable thus end up in the same memory zone and are shared between the
simulated processes, causing hard-to-find bugs. To avoid this, several
options are possible:
 - Manual edition of the code, for example to add the \c __thread keyword
   before the data declarations, which allows the resulting code to work
   with SMPI, but only if the thread factory (see \ref options_virt_factory)
   is used, as global variables are then placed in the TLS (thread-local
   storage) segment.
 - Source-to-source transformation, to add a level of indirection to the
   global variables. SMPI does this for F77 codes compiled with smpiff, and
   used to provide Coccinelle scripts for C codes, but these are no longer
   functional.
 - A compilation pass, to have the compiler automatically put the data in
   an adapted zone.
 - Runtime automatic switching of the data segments. SMPI stores a copy of
   each global data segment for each process, and at each context switch
   replaces the actual data with the copy of the right process. This
   mechanism uses mmap, and is for now limited to systems supporting this
   functionality (all Linux and some BSD systems should be compatible).
   Another limitation is that SMPI only accounts for global variables
   defined in the executable. If the processes use external global
   variables from dynamic libraries, these won't be switched correctly. To
   avoid this, static linking is advised (but not with the simgrid library
   itself, to avoid replicating its own global variables).

To use this runtime automatic switching, the variable \b
smpi/privatize_global_variables should be set to yes.

\subsection options_model_smpi_detached Simulating MPI detached send
This threshold specifies the size in bytes under which the send will return
immediately. This is different from the threshold detailed in \ref options_model_network_asyncsend
uses naive version of collective operations). Each collective operation can be manually selected with a
\b smpi/collective_name:algo_name. Available algorithms are listed in \ref SMPI_collective_algorithms .
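For instance (assuming the \c mpich selector and the \c pair all-to-all algorithm exist in your SimGrid version; check the list referenced above):

```shell
# use the whole MPICH selector logic for all collectives
smpirun -np 16 -platform platform.xml -hostfile hostfile \
        --cfg=smpi/coll_selector:mpich ./my_app
# or force one specific algorithm for a single collective
smpirun -np 16 -platform platform.xml -hostfile hostfile \
        --cfg=smpi/alltoall:pair ./my_app
```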
\section options_generic Configuring other aspects of SimGrid
\subsection options_generic_path XML file inclusion path
- \c surf/nthreads: \ref options_model_nthreads
- \c smpi/simulation_computation: \ref options_smpi_bench
- \c smpi/running_power: \ref options_smpi_bench
- \c smpi/display_timing: \ref options_smpi_timing
- \c smpi/cpu_threshold: \ref options_smpi_bench
- \c smpi/async_small_thres: \ref options_model_network_asyncsend
- \c smpi/send_is_detached: \ref options_model_smpi_detached
- \c smpi/coll_selector: \ref options_model_smpi_collectives
- \c smpi/privatize_global_variables: \ref options_smpi_global
- \c path: \ref options_generic_path
- \c verbose-exit: \ref options_generic_exit