@page inside_tests Testing SimGrid
This page explains how to run the existing tests, how to select the ones
you want, and how to add new tests to the archive.
SimGrid's code coverage is usually between 70% and 80%, which is much
higher than in most projects out there. This is because SimGrid is a
rather complex project, and a thorough test suite lets us modify it with less fear.
We have two sets of tests in SimGrid: each of the 10,000+ unit tests
checks one specific case of one specific function, while each of the 500+
integration tests runs a given simulation specifically intended to
exercise a larger set of functions together. Every example provided
in examples/ is used as an integration test, while other torture
tests and corner-case integration tests are located in teshsuite/.
For each integration test, we ensure that the output exactly matches
the defined expectations. Since SimGrid displays the timestamp of
every logged line, this ensures that any change in the models'
predictions will be noticed. All these tests should ensure that SimGrid
is safe to use and to depend on.
\section inside_tests_runintegration Running the tests
Running the tests is done using the ctest binary that comes with
cmake. These tests are run for every commit and the result is publicly
<a href="https://ci.inria.fr/simgrid/">available</a>.
\verbatim
ctest                     # Launch all tests
ctest -R msg              # Launch only the tests whose name matches the string "msg"
ctest -j4                 # Launch all tests in parallel, at most 4 at the same time
ctest --verbose           # Display all details on what's going on
ctest --output-on-failure # Only get verbose output for the tests that fail

ctest -R msg- -j5 --output-on-failure # You changed MSG and want to check that you didn't break anything, huh?
                                      # That's fine, I do so all the time myself.
\endverbatim
\section inside_tests_rununit Running the unit tests
All unit tests are packed into the testall binary, which lives in src/.
These tests are also run when you launch ctest, so don't worry.
\verbatim
make testall                        # Rebuild the test runner if needed
./src/testall                       # Launch all tests
./src/testall --help                # Display the help message if you forgot how it works
./src/testall --tests=-all          # Run no test at all (yeah, that's useless)
./src/testall --dump-only           # Display all existing test suites
./src/testall --tests=-all,+dict    # Only launch the tests from the dict test suite
./src/testall --tests=-all,+foo:bar # Run only the bar test from the foo suite
\endverbatim
\section inside_tests_add_units Adding unit tests
If you want to test a specific function or set of functions, you need
to edit <project/directory>/tools/cmake/UnitTesting.cmake to add your
source file to the TEST_CFILES list, and to add the corresponding unit
file to the TEST_UNITS list. For example, if your file is toto.c,
your unit file will be toto_unit.c. The full path to your file must be
provided, but the unit file is always generated directly in src/.
If you want to create unit tests in the file src/xbt/toto.c, your
changes should look similar to:
\verbatim
--- a/tools/cmake/UnitTesting.cmake
+++ b/tools/cmake/UnitTesting.cmake
@@ -11,6 +11,7 @@ set(TEST_CFILES
+  src/xbt/toto.c
   ${CMAKE_CURRENT_BINARY_DIR}/src/cunit_unit.c
@@ -22,6 +23,7 @@ set(TEST_UNITS
   ${CMAKE_CURRENT_BINARY_DIR}/src/xbt_strbuff_unit.c
   ${CMAKE_CURRENT_BINARY_DIR}/src/xbt_sha_unit.c
   ${CMAKE_CURRENT_BINARY_DIR}/src/config_unit.c
+  ${CMAKE_CURRENT_BINARY_DIR}/src/toto_unit.c
   ${CMAKE_CURRENT_BINARY_DIR}/src/simgrid_units_main.c
\endverbatim
Then, you want to actually add your tests in the source file. All the
tests must be protected by "#ifdef SIMGRID_TEST" so that they don't
get included in the regular build. The string SIMGRID_TEST must also be
written on the endif line for the extraction script to work properly.
Tests are subdivided in three levels. The top level, called a <b>test
suite</b>, is created with the macro #XBT_TEST_SUITE. There can be
only one suite per source file. A suite contains <b>test units</b>
that you create with the #XBT_TEST_UNIT macro. Finally, you start
<b>actual tests</b> with #xbt_test_add. There is no closing marker of
any sort: a unit is closed when the next unit starts, or when the
end of file is reached.
Once a given test is started with #xbt_test_add, use
#xbt_test_assert to check an assertion, or
#xbt_test_fail to report that it failed (if your test cannot easily
be written as an assert). #xbt_test_exception can be used to report
that it failed with an exception. There is nothing to do to report
that a given test succeeded: just start the next test without
reporting any issue. Finally, #xbt_test_log can be used to report
intermediate steps; the messages will be shown only if the
corresponding test fails.
Here is a recapping example, inspired by the dynar implementation.
\verbatim
/* The rest of your module implementation */

#ifdef SIMGRID_TEST

XBT_TEST_SUITE("dynar", "Dynar data container");
XBT_LOG_EXTERNAL_DEFAULT_CATEGORY(xbt_dyn); // Just the regular logging stuff

XBT_TEST_UNIT("int", test_dynar_int, "Dynars of integers")
{
  int i, cpt;
  unsigned int cursor;

  xbt_test_add("==== Traverse the empty dynar");
  xbt_dynar_t d = xbt_dynar_new(sizeof(int), NULL);
  xbt_dynar_foreach(d, cursor, i) {
    xbt_test_fail("Damnit, there is something in the empty dynar");
  }
  xbt_dynar_free(&d);

  xbt_test_add("==== Push %d int and re-read them", NB_ELEM);
  d = xbt_dynar_new(sizeof(int), NULL);
  for (cpt = 0; cpt < NB_ELEM; cpt++) {
    xbt_test_log("Push %d, length=%lu", cpt, xbt_dynar_length(d));
    xbt_dynar_push_as(d, int, cpt);
  }
  for (cursor = 0; cursor < NB_ELEM; cursor++) {
    int *iptr = xbt_dynar_get_ptr(d, cursor);
    xbt_test_assert(cursor == *iptr,
        "The retrieved value is not the same as the injected one (%u!=%d)", cursor, *iptr);
  }

  xbt_test_add("==== Check the size of that %d-long dynar", NB_ELEM);
  xbt_test_assert(xbt_dynar_size(d) == NB_ELEM);
  xbt_dynar_free(&d);
}

XBT_TEST_UNIT("insert", test_dynar_insert, "Using the xbt_dynar_insert and xbt_dynar_remove functions")
{
  xbt_dynar_t d = xbt_dynar_new(sizeof(unsigned int), NULL);

  xbt_test_add("==== Insert %d int, traverse them, remove them", NB_ELEM);
  /* ... */
}
#endif /* SIMGRID_TEST <-- that string must appear on the endif line */
\endverbatim
For more details on the macros used to write unit tests, see their
reference guide: @ref XBT_cunit. For details on how the tests are
extracted from the module source, check the tools/sg_unit_extractor.pl script.
Last note: please try to keep your tests fast. We run them very very
very often, and you should strive to make them as fast as possible so
as not to upset the other developers. Do not hesitate to stress test your
code with such unit tests, but make sure that it runs reasonably fast,
or nobody will run "ctest" before committing code.
\section inside_tests_add_integration Adding integration tests
TESH (the TEsting SHell) is the test runner that we wrote for our
integration tests. It is distributed with the SimGrid source, and
even comes with a man page. TESH ensures that the output produced by a
command perfectly matches the expected output. This is very precious
to ensure that no change modifies the timings computed by the models
without being noticed.
To add a new integration test, you thus have three things to do:
- <b>Write the code exercising the feature you target</b>. You should
  strive to make this code clear, well documented and informative for
  the users. If you manage to do so, put it somewhere under
  examples/ and modify the cmake files as explained on this page:
  \ref inside_cmake_examples. If you instead write a
  torture test that is not interesting to the users (because nobody
  would sanely write something similar in user code), then put it under
  teshsuite/ somewhere.
- <b>Write the tesh file</b>, containing the command to run, the
  provided input (if any, but almost no SimGrid test provides such an
  input) and the expected output. Check the tesh man page for more
  details.
  Tesh is sometimes annoying, as you have to ensure that the expected
  output will always be exactly the same. In particular, you should
  not output machine-dependent information nor memory addresses, as
  they would change on each run. Several tricks can be used here, such
  as the obfuscation of the memory addresses unless the verbose logs
  are displayed (using the #XBT_LOG_ISENABLED() macro), or the
  modification of the log formats to hide the timings when they
  depend on the host machine.
- <b>Add your test to the cmake infrastructure</b>. For that, modify
  the file <project/directory>/tools/cmake/Tests.cmake. Make sure to
  pick a wise name for your test. It is often useful to run a
  category of tests together; the only way to do so with ctest is to
  use the -R argument, which specifies a regular expression that the
  test names must match. For example, you can run all MSG tests with
  "ctest -R msg". That explains the importance of the test names.
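For reference, here is a minimal sketch of what such a tesh file could
look like. The binary name, platform file, log lines and timestamps
below are all made up for illustration; check the tesh man page for the
exact syntax and the available metacommands.

\verbatim
#! ./tesh
p Running my-test (hypothetical binary) on a hypothetical platform file

$ ${bindir:=.}/my-test ${srcdir:=.}/platform.xml
> [Tremblay:master:(1) 0.000000] [my_test/INFO] Spawning 5 workers
> [Tremblay:master:(1) 0.020000] [my_test/INFO] Simulation ended
\endverbatim

Lines starting with $ give the command to run, lines starting with &gt;
give the expected output, and the simulation fails the test as soon as
one output line (timestamps included) differs from the expectation.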
Once the name is chosen, create a new test by adding a line similar to
the following (assuming that you use tesh, as expected).
\verbatim
# Usage: ADD_TEST(test-name ${CMAKE_BINARY_DIR}/bin/tesh <options> <tesh-file>)
# option --setenv bindir  set the directory containing the binary
#        --setenv srcdir  set the directory containing the source file
#        --cd             set the working directory
ADD_TEST(my-test-name ${CMAKE_BINARY_DIR}/bin/tesh
         --setenv bindir=${CMAKE_BINARY_DIR}/examples/my-test/
         --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/my-test/
         --cd ${CMAKE_HOME_DIRECTORY}/examples/my-test/
         ${CMAKE_HOME_DIRECTORY}/examples/msg/io/io.tesh
)
\endverbatim
As usual, you must run "make distcheck" after modifying the cmake files,
to ensure that you did not forget any file in the distributed archive.
\section inside_tests_ci Continuous Integration
We use several systems to automatically test SimGrid with a large set
of parameters, across as many platforms as possible.
We use <a href="https://ci.inria.fr/simgrid/">Jenkins on Inria
servers</a> as a workhorse: it runs all of our tests for many
configurations. It takes a long time to answer and it often reports
issues, but when it's green, you know that SimGrid is very fit!
We use <a href="https://travis-ci.org/mquinson/simgrid">Travis</a> to
quickly run some tests on Linux and Mac. It answers quickly but may
miss issues. And we use <a href="https://ci.appveyor.com/project/mquinson/simgrid">AppVeyor</a>
to build and somehow test SimGrid on Windows.
\subsection inside_tests_jenkins Jenkins on the Inria CI servers
You should not have to change the configuration of the Jenkins tool
yourself, although you may have to change the slaves' configuration
using the <a href="https://ci.inria.fr">CI interface of INRIA</a> --
refer to the <a href="https://wiki.inria.fr/ciportal/">CI documentation</a>.
The result can be seen here: https://ci.inria.fr/simgrid/
We have 3 projects on Jenkins:
\li <a href="https://ci.inria.fr/simgrid/job/SimGrid-Multi/">SimGrid-Multi</a>
    is the main project, running the tests described above.\n It is
    configured (on Jenkins) to run the script <tt>tools/jenkins/build.sh</tt>
\li <a href="https://ci.inria.fr/simgrid/job/SimGrid-DynamicAnalysis/">SimGrid-DynamicAnalysis</a>
    runs the tests both under valgrind, to find the memory errors, and
    under gcovr, to report the achieved test coverage.\n It is configured
    (on Jenkins) to run the script <tt>tools/jenkins/DynamicAnalysis.sh</tt>
\li <a href="https://ci.inria.fr/simgrid/job/SimGrid-Windows/">SimGrid-Windows</a>
    is an ongoing attempt to get Windows tested on Jenkins too.
In each case, SimGrid gets built in
/builds/workspace/$PROJECT/build_mode/$CONFIG/label/$SERVER/build,
with $PROJECT being for instance "SimGrid-Multi", $CONFIG being "DEBUG" or
"ModelChecker", and $SERVER being for instance "simgrid-fedora20-64-clang".
If some configurations are known to fail on some systems (such as
model checking on non-Linux systems), go to your project and click on
"Configuration". There, find the field "combination filter" (if your
interface language is English) and tick the checkbox; then add a
Groovy expression to disable a specific configuration. For example, in
order to disable the "ModelChecker" build on host
"small-freebsd-64-clang", use:
\verbatim
(label=="small-freebsd-64-clang").implies(build_mode!="ModelChecker")
\endverbatim
\subsection inside_tests_travis Travis
Travis is a free (as in free beer) continuous integration system that
open-source projects can use freely. It is very well integrated in the
GitHub ecosystem, and there is plenty of documentation out there. Our
configuration is in the file .travis.yml, as it should be, and the
result is here: https://travis-ci.org/mquinson/simgrid
\subsection inside_tests_appveyor AppVeyor
AppVeyor aims at becoming the Travis of Windows. It is maybe less
mature than Travis, or maybe I am just less trained in
Windows. Our configuration is in the file appveyor.yml, as it should
be, and the result is here: https://ci.appveyor.com/project/mquinson/simgrid
It should be noted that I miserably failed to use the environment
provided by AppVeyor, since SimGrid does not build with Microsoft
Visual Studio. Instead, we download a whole development environment
from the internet at each build: an archive of already compiled
binaries that is unpacked on the AppVeyor systems each time we start.
We reuse the one from the
<a href="https://github.com/symengine/symengine">symengine</a>
project. Thanks to them for compiling sane tools and constituting that
archive; it saved my mind!
\subsection inside_tests_debian Debian builders
Since SimGrid is packaged in Debian, we benefit from their huge
testing infrastructure. That's an interesting torture test for our
code base. The downside is that it only covers released versions of
SimGrid. That is why the Debian build does not stop when the tests
fail: post-release fixes do not fit well in our workflow, and we fix
only the most important breakages.
The build results are here:
https://buildd.debian.org/status/package.php?p=simgrid