X-Git-Url: http://bilbo.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/b0ae92931a70bcd5e513d9f3f4400d2d3cbca196..eb6d2036ae2c22f06f32bf4aba3e165a0d110fbe:/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh

diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
index 3a93258773..91fb56e394 100644
--- a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
+++ b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
@@ -2,42 +2,32 @@ p Test allreduce
 $ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml -np 4 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --log=smpi_pmpi.thres:error --cfg=smpi/list-leaks:10 --log=no_loc
-> [rank 0] -> Tremblay
-> [rank 1] -> Tremblay
-> [rank 2] -> Tremblay
-> [rank 3] -> Tremblay
 > [0.000000] [mc_safety/INFO] Check a safety property. Reduction is: dpor.
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles : display types and addresses (n max) with --cfg=smpi/list-leaks:n.
-> Running smpirun with -wrapper "valgrind --leak-check=full" can provide more information
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 128 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 16 bytes, at sysdep.h:59. It was called 8 times during the whole simulation.
+> [0.000000] [smpi/INFO] [rank 0] -> Tremblay
+> [0.000000] [smpi/INFO] [rank 1] -> Tremblay
+> [0.000000] [smpi/INFO] [rank 2] -> Tremblay
+> [0.000000] [smpi/INFO] [rank 3] -> Tremblay
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
-> 
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles : display types and addresses (n max) with --cfg=smpi/list-leaks:n.
-> Running smpirun with -wrapper "valgrind --leak-check=full" can provide more information
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Comm
-> [0.000000] [smpi_utils/WARNING] Leaked handle of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 128 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 16 bytes, at sysdep.h:59. It was called 8 times during the whole simulation.
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
-> 
+>
 > [0.000000] [mc_safety/INFO] No property violation found.
-> [0.000000] [mc_safety/INFO] Expanded states = 63
-> [0.000000] [mc_safety/INFO] Visited states = 500
-> [0.000000] [mc_safety/INFO] Executed transitions = 484
+> [0.000000] [mc_safety/INFO] 63 unique states visited; 16 backtracks (500 transition replays, 422 states visited overall)
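For context, the expected output above comes from a deliberately leaky MPI test program. The following is a hypothetical sketch, not the actual coll-allreduce-with-leaks.c source: each rank duplicates a communicator and allocates buffers that are never freed, which is the kind of code that makes SMPI report unfreed MPI_Comm/MPI_Group handles and unfreed malloc'ed buffers at the end of the simulation. The exact handle counts, buffer sizes, and source locations in the log come from the real test source, not from this sketch.

/* Hypothetical sketch of a leaky allreduce test (assumed names; not the real test source). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* Leaked handle: the duplicated communicator (and the group it carries) is never freed,
   * so SMPI would count one MPI_Comm and one MPI_Group leak per rank at the end. */
  MPI_Comm dup;
  MPI_Comm_dup(MPI_COMM_WORLD, &dup);

  /* Leaked buffers: allocated through malloc but never passed to free(), so SMPI's
   * allocation tracking (--cfg=smpi/display-allocs:yes) lists them as unfreed. */
  int *sendbuf = malloc((rank + 1) * sizeof(int));
  int *recvbuf = malloc((rank + 1) * sizeof(int));
  for (int i = 0; i <= rank; i++)
    sendbuf[i] = rank;

  /* Reduce only the first element so the call is valid for every rank's buffer size. */
  MPI_Allreduce(sendbuf, recvbuf, 1, MPI_INT, MPI_SUM, dup);

  MPI_Finalize(); /* missing MPI_Comm_free(&dup), free(sendbuf), free(recvbuf) */
  return 0;
}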