- Use addr2line as a fallback for stacktraces when backtrace is not available.
- Build option -Denable_documentation is now OFF by default.
 - Network model 'NS3' was renamed to 'ns-3'.
+ - SunOS and Haiku OS support. Because exotic platforms are fun.
+
+Python:
+ - Simgrid can now hopefully be installed with pip.
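Assuming the package is published on PyPI under the name simgrid (check PyPI for the actual package name on your platform), the installation would be:

```shell
pip install simgrid
```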
+
+S4U:
+ - wait_any can now be used for asynchronous executions too.
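A minimal sketch of what this enables, assuming the s4u names exec_async and Exec::wait_any (the exact wait_any signature may differ between releases; check the S4U reference documentation of your version):

```cpp
#include <simgrid/s4u.hpp>
#include <cstdio>
#include <vector>

// Sketch only: start several asynchronous executions, then block until
// the first one completes. this_actor::exec_async() and Exec::wait_any()
// are assumed from the s4u API of this release.
static void worker()
{
  std::vector<simgrid::s4u::ExecPtr> pending;
  for (int i = 1; i <= 3; i++)
    pending.push_back(simgrid::s4u::this_actor::exec_async(i * 1e9)); // amount in flops

  int index = simgrid::s4u::Exec::wait_any(&pending); // index of the completed execution
  std::printf("Execution %d finished first\n", index);
}
```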
XBT:
- New log appenders: stdout and stderr. Use stdout for xbt_help.
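For instance, assuming the usual --log appender syntax (`<category>.app:<appender>`), routing all logs to the new appender would look like this (my_simulation is a placeholder binary name):

```shell
./my_simulation --log=root.app:stdout
```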
SMPI:
 - SMPI now reports support of MPI 3.1. This does not mean that SMPI supports
   every MPI 3 call, but that was already the case with MPI 2.2.
 - MPI/IO is now supported over the Storage API (no files are actually written
   or read; storage accesses are simulated). All synchronous calls are supported.
 - The MPI interface is now const-correct for input parameters.
+ - MPI_Ireduce, MPI_Iallreduce, MPI_Iscan, MPI_Iexscan, MPI_Ireduce_scatter, MPI_Ireduce_scatter_block support
+ - Fortran bindings for async collectives.
+ - MPI_Comm_get_name, MPI_Comm_set_name, MPI_Count support.
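These nonblocking collectives follow the standard MPI 3 calling convention, so existing MPI code runs unchanged under SMPI. As a plain-MPI illustration (not SimGrid-specific), an MPI_Ireduce must later be completed with MPI_Wait before the result may be read:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int value = rank + 1; // each rank contributes rank+1
  int sum   = 0;
  MPI_Request req;

  // Nonblocking reduction: the result is only valid on rank 0 after MPI_Wait
  MPI_Ireduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD, &req);

  // ...independent computation could overlap with the reduction here...

  MPI_Wait(&req, MPI_STATUS_IGNORE);
  if (rank == 0)
    std::printf("rank 0 received the sum: %d\n", sum);

  MPI_Finalize();
  return 0;
}
```

With SMPI, such a program is compiled with smpicc and executed through smpirun.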
Model-checker:
 - Remove option 'model-check/record': paths are now recorded in all cases.
 - Remove the Lagrange-based models (Reno/Reno2/Vegas). The regular
   models proved to be more accurate than these old experiments.
-Python:
- - Simgrid can now hopefully be installed with pip.
-
-Fixed bugs (FG=FramaGit; GH=GitHub):
+Fixed bugs (FG=FramaGit; GH=GitHub -- Please prefer FramaGit for new bugs):
- FG#1: Broken link in error messages
- FG#2: missing installation documentation
- FG#3: missing documentation in smpirun
- FG#20: 'tesh --help' should return 0
- FG#21: Documentation link on http://simgrid.org/ broken
 - FG#22: Debian installation instructions are broken
+ - FG#26: Turning off a link should raise NetworkFailureException exceptions
- GH#133: Java: a process can run on a VM even if its host is off
- GH#320: Stacktrace: Avoid the backtrace variant of Boost.Stacktrace?
- GH#326: Valgrind-detected error for join() when energy plugin is activated
- MPI_Alltoallw support
- Partial MPI nonblocking collectives implementation: MPI_Ibcast, MPI_Ibarrier,
MPI_Iallgather, MPI_Iallgatherv, MPI_Ialltoall, MPI_Ialltoallv, MPI_Igather,
- MPI_Igatherv, MPI_Iscatter, MPI_Iscatterv, MPI_Ialltoallw, MPI_Ireduce,
- MPI_Iallreduce, MPI_Iscan, MPI_Iexscan, MPI_Ireduce_scatter,
- MPI_Ireduce_scatter_block, with fortran bindings.
+ MPI_Igatherv, MPI_Iscatter, MPI_Iscatterv, MPI_Ialltoallw.
- MPI_Request_get_status, MPI_Status_set_cancelled, MPI_Status_set_elements
- support, MPI_Comm_get_name, MPI_Comm_set_name
+ support
 - Basic implementation of generalized requests (SMPI doesn't allow
   MPI_THREAD_MULTIPLE): MPI_Grequest_complete, MPI_Grequest_start.