* Change the correction factors used in the LMM model, according to
  the latest experiments described in INRIA RR-7821.
  This should improve accuracy.
* Use the partial invalidation optimization by default for the
network too. Should produce the exact same results, only faster.
* Major cleanup in surf to merge models and split some optimization
* Use the crosstraffic keyword instead of the terribly misleading
  fullduplex keyword. It is now activated by default in the current
  default model; use --cfg=network/crosstraffic:0 to turn it off.
* Deprecate the m_channel_t mechanism, i.e. the MSG_task_{get,put}
  functions and friends. This interface has been considered
  deprecated for over 2 years; it's time to inform our users that it is.
- contexts/synchro: Synchronization mode to use when running
contexts in parallel (either futex, posix or busy_wait)
- contexts/parallel_threshold: Minimal number of user contexts
configuration item). In our tests, running the models in parallel
never led to any speedup: the models are so fast that the gain
of computing each model in parallel does not amortize the
* Trace header updated according to the latest Paje file format
* Tracing network lazy updates, no longer obligate users to use full updates
* --cfg=tracing/platform:1 also registers power/bandwidth variables
Lua:
* Improve the API of the Lua MSG bindings, following the spirit of Lua.
* Each simulated process now lives in its own Lua world (globals are
  automatically duplicated). This helps when writing simulators, and will
  allow running Splay programs within SimGrid in the future.
necessary at this point to get MC working.
Turn model-checking OFF if simulation performance matters to you.