> [0.000000] [mc_explo/INFO] *** PROPERTY NOT VALID ***
> [0.000000] [mc_explo/INFO] **************************
> [0.000000] [mc_explo/INFO] Counter-example execution trace:
+> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_explo/INFO] 3: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_explo/INFO] 3: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'3;3;3;3;1;3;1'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 742 unique states visited; 145 backtracks (2228 transition replays, 1331 states visited overall)
+
+! expect return 1
+! timeout 20
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true --cfg=model-check/guided-mc:nb_wait ${bindir:=.}/s4u-mc-bugged2 ${platfdir:=.}/model_checker_platform.xml --log=root.thresh:critical --log=mc.thresh:info
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+> [0.000000] [mc_explo/INFO] **************************
+> [0.000000] [mc_explo/INFO] *** PROPERTY NOT VALID ***
+> [0.000000] [mc_explo/INFO] **************************
+> [0.000000] [mc_explo/INFO] Counter-example execution trace:
> [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
> [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
> [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
> [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;3;3;1'
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 348 unique states visited; 67 backtracks (1051 transition replays, 633 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 644 unique states visited; 157 backtracks (2487 transition replays, 1674 states visited overall)
\ No newline at end of file
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
-> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
-> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
-> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
-> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
-> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 169 unique states visited; 28 backtracks (283 transition replays, 64 states visited overall)
\ No newline at end of file
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 169 unique states visited; 28 backtracks (286 transition replays, 64 states visited overall)
\ No newline at end of file
> [0.000000] [mc_explo/INFO] *** PROPERTY NOT VALID ***
> [0.000000] [mc_explo/INFO] **************************
> [0.000000] [mc_explo/INFO] Counter-example execution trace:
-> [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_explo/INFO] 2: iSend(mbox=0)
> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
> [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
> [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
> [0.000000] [mc_explo/INFO] 2: iSend(mbox=0)
> [0.000000] [mc_explo/INFO] 1: WaitComm(from 2 to 1, mbox=0, no timeout)
-> [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;2;1'
+> [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'2;3;1;1;2;1'
> [0.000000] [mc_dfs/INFO] DFS exploration ended. 21 unique states visited; 2 backtracks (24 transition replays, 1 states visited overall)
+! expect return 1
+! timeout 300
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/reduction:none --cfg=model-check/guided-mc:nb_wait -- ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml --log=root.thresh:critical
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/reduction' to 'none'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/guided-mc' to 'nb_wait'
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: none.
+> [0.000000] [mc_explo/INFO] **************************
+> [0.000000] [mc_explo/INFO] *** PROPERTY NOT VALID ***
+> [0.000000] [mc_explo/INFO] **************************
+> [0.000000] [mc_explo/INFO] Counter-example execution trace:
+> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_explo/INFO] 2: iSend(mbox=0)
+> [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_explo/INFO] 3: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_explo/INFO] 2: WaitComm(from 2 to 1, mbox=0, no timeout)
+> [0.000000] [mc_explo/INFO] 1: WaitComm(from 2 to 1, mbox=0, no timeout)
+> [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'3;2;1;3;1;1;2;1'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 8 unique states visited; 0 backtracks (8 transition replays, 0 states visited overall)
> [Checker] INDEPENDENT Transitions:
> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2)
> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: 2) (stack depth: 4, state: 14, 0 interleaves)
> [Checker] INDEPENDENT Transitions:
> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=14)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=14)
> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 5, state: 15, 0 interleaves)
> [Checker] Dependent Transitions:
> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=14)
> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=14)
> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=16)
> [Checker] There remains 0 actors, but none to interleave (depth 8).
-> [Checker] Execution came to an end at 1;1;2;1;2;2;0 (state: 17, depth: 7)
-> [Checker] Backtracking from 1;1;2;1;2;2;0
+> [Checker] Execution came to an end at 2;1;2;1;2;2;0 (state: 17, depth: 7)
+> [Checker] Backtracking from 2;1;2;1;2;2;0
> [Checker] There remains 0 actors, but none to interleave (depth 8).
-> [Checker] Execution came to an end at 1;1;2;1;2;2;0 (state: 17, depth: 7)
-> [Checker] Backtracking from 1;1;2;1;2;2;0
+> [Checker] Execution came to an end at 2;1;2;1;2;2;0 (state: 17, depth: 7)
+> [Checker] Backtracking from 2;1;2;1;2;2;0
> [Checker] DFS exploration ended. 17 unique states visited; 2 backtracks (22 transition replays, 2 states visited overall)
$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:2 --log=s4u_test.thres:critical
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '2'
> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 135 unique states visited; 26 backtracks (339 transition replays, 151 states visited overall)
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 130 unique states visited; 27 backtracks (334 transition replays, 151 states visited overall)
$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true --cfg=model-check/guided-mc:nb_wait -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/guided-mc' to 'nb_wait'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '3'
> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 5622 unique states visited; 1169 backtracks (21253 transition replays, 13439 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 3492 unique states visited; 743 backtracks (13164 transition replays, 8263 states visited overall)
\ No newline at end of file
> [0.000000] [mc_comm_determinism/INFO] The recv communications pattern of the actor 0 is different! Different source for communication #1
> [0.000000] [mc_comm_determinism/INFO] Send-deterministic : Yes
> [0.000000] [mc_comm_determinism/INFO] Recv-deterministic : No
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 242 unique states visited; 67 backtracks (625 transition replays, 303 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 313 unique states visited; 89 backtracks (826 transition replays, 408 states visited overall)
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
> [0.000000] [sthread/INFO] Starting the simulation.
> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 719 unique states visited; 82 backtracks (1978 transition replays, 1053 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 1100 unique states visited; 135 backtracks (3155 transition replays, 1702 states visited overall)
+
+$ ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/sleep-set:true --cfg=model-check/guided-mc:nb_wait --cfg=model-check/setenv:LD_PRELOAD=${libdir:=.}/libsgmalloc.so:${libdir:=.}/libsthread.so ${bindir:=.}/pthread-producer-consumer -q -c 2 -C 1 -p 2 -P 1
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/guided-mc' to 'nb_wait'
+> [0.000000] [sthread/INFO] Starting the simulation.
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 1101 unique states visited; 136 backtracks (3233 transition replays, 1713 states visited overall)
\ No newline at end of file
long State::expended_states_ = 0;
-State::State(const State& other)
- : transition_(other.transition_)
- , num_(other.num_)
- , system_state_(other.system_state_)
- , parent_state_(nullptr)
- , guide_(other.guide_)
- , sleep_set_(other.sleep_set_)
-{
-}
-
State::State(RemoteApp& remote_app) : num_(++expended_states_)
{
XBT_VERB("Creating a guide for the state");
if (_sg_mc_guided == "nb_wait")
guide_ = std::make_shared<WaitGuide>();
+ recipe_ = std::list<Transition*>();
+
remote_app.get_actors_status(guide_->actors_to_run_);
/* Stateful model checking */
*remote_app.get_remote_process_memory());
}
-State::State(RemoteApp& remote_app, const State* parent_state) : num_(++expended_states_), parent_state_(parent_state)
+State::State(RemoteApp& remote_app, std::shared_ptr<State> parent_state)
+ : num_(++expended_states_), parent_state_(parent_state)
{
if (_sg_mc_guided == "none")
guide_ = std::make_shared<WaitGuide>();
*guide_ = *(parent_state->guide_);
+ recipe_ = std::list(parent_state_->get_recipe());
+ recipe_.push_back(parent_state_->get_transition());
+
remote_app.get_actors_status(guide_->actors_to_run_);
/* Stateful model checking */
*/
Transition* transition_ = default_transition_.get();
+ /** @brief The list of transitions to be replayed in order to reach this state. */
+ std::list<Transition*> recipe_;
+
/** Sequential state ID (used for debugging) */
long num_ = 0;
/** Unique parent of this state. Required both for sleep set computation
and for guided model-checking */
- const State* parent_state_;
+ std::shared_ptr<State> parent_state_ = nullptr;
std::shared_ptr<GuidedState> guide_;
public:
explicit State(RemoteApp& remote_app);
- explicit State(RemoteApp& remote_app, const State* parent_state);
- explicit State(const State& other);
+ explicit State(RemoteApp& remote_app, std::shared_ptr<State> parent_state);
/* Returns a positive number if there is another transition to pick, or -1 if not */
aid_t next_transition() const; // this function should disappear as it is redundant with the next one
bool is_actor_done(aid_t actor) const { return guide_->actors_to_run_.at(actor).is_done(); }
Transition* get_transition() const;
void set_transition(Transition* t) { transition_ = t; }
+ std::shared_ptr<State> get_parent_state() { return parent_state_; }
+ std::list<Transition*> get_recipe() const { return recipe_; }
+
std::map<aid_t, ActorState> const& get_actors_list() const { return guide_->actors_to_run_; }
unsigned long get_actor_count() const { return guide_->actors_to_run_.size(); }
return trace;
}
+void DFSExplorer::restore_stack(std::shared_ptr<State> state)
+{
+
+ stack_ = std::list<std::shared_ptr<State>>();
+ std::shared_ptr<State> current_state(state);
+ stack_.push_front(std::shared_ptr<State>(current_state));
+ // the loop condition corresponds to reaching the initial state
+ while (current_state->get_parent_state() != nullptr) {
+ current_state = current_state->get_parent_state();
+ stack_.push_front(std::shared_ptr<State>(current_state));
+ }
+ XBT_DEBUG("Replaced stack by %s", get_record_trace().to_string().c_str());
+}
+
void DFSExplorer::log_state() // override
{
on_log_state_signal(get_remote_app());
Exploration::log_state();
}
-/* Copy a given stack by deep-copying it at the State level : this is required so we can backtrack at different
- * points without interacting with the stacks in the opened_states_ waiting for their turn. On the other hand,
- * the exploration of one stack in opened_states_ could only slightly modify the sleep set of another stack in
- * opened_states_, so it is only a slight waste of performance in the exploration. */
-void DFSExplorer::add_to_opened_states(stack_t stack)
-{
- stack_t tmp_stack;
- for (auto& state : stack)
- tmp_stack.push_back(std::make_shared<State>(State(*state)));
- opened_states_.emplace(tmp_stack);
-}
-
void DFSExplorer::run()
{
on_exploration_start_signal(get_remote_app());
while (not stack_.empty()) {
/* Get current state */
- State* state = stack_.back().get();
+ std::shared_ptr<State> state(stack_.back());
XBT_DEBUG("**************************************************");
XBT_DEBUG("Exploration depth=%zu (state:#%ld; %zu interleaves todo)", stack_.size(), state->get_num(),
state->get_transition()->to_string().c_str(), stack_.size(), state->get_num(), state->count_todo());
/* Create the new expanded state (copy the state of MCed into our MCer data) */
- std::shared_ptr<State> next_state;
- next_state = std::make_shared<State>(get_remote_app(), state);
+ std::shared_ptr<State> next_state = std::make_shared<State>(get_remote_app(), state);
on_state_creation_signal(next_state.get(), get_remote_app());
/* Sleep set procedure:
if (prev_state->is_actor_enabled(issuer_id)) {
if (not prev_state->is_actor_done(issuer_id)) {
prev_state->consider_one(issuer_id);
- add_to_opened_states(tmp_stack);
+ opened_states_.push(std::shared_ptr<State>(tmp_stack.back()));
} else
XBT_DEBUG("Actor %ld is already in done set: no need to explore it again", issuer_id);
} else {
issuer_id);
if (prev_state->consider_all() >
0) // If we ended up marking at least a transition, explore it at some point
- add_to_opened_states(tmp_stack);
+ opened_states_.push(std::shared_ptr<State>(tmp_stack.back()));
}
break;
} else {
// Before leaving that state, if the transition we just took can be taken multiple times, we
// need to give it to the opened states
if (stack_.back()->count_todo_multiples() > 0)
- add_to_opened_states(stack_);
+ opened_states_.push(std::shared_ptr<State>(stack_.back()));
if (_sg_mc_termination)
this->check_non_termination(next_state.get());
// if no backtracking point, then set the stack_ to empty so we can end the exploration
if (opened_states_.empty()) {
+ XBT_DEBUG("No more opened points of exploration, the search will end");
stack_ = std::list<std::shared_ptr<State>>();
return;
}
- stack_t backtrack = opened_states_.top(); // Take the point with smallest distance
+ std::shared_ptr<State> backtracking_point = opened_states_.top(); // Take the point with smallest distance
opened_states_.pop();
// if the smallest distance corresponded to no enabled actor, remove this and let the
// exploration ask again for a backtrack
- if (backtrack.back()->next_transition_guided().first == -1)
+ if (backtracking_point->next_transition_guided().first == -1) {
+ XBT_DEBUG("Best backtracking candidate has already been explored. We may backtrack again");
return;
+ }
// We found a real backtracking point, let's go to it
backtrack_count_++;
-
+ XBT_DEBUG("Backtracking to state#%ld", backtracking_point->get_num());
/* If asked to rollback on a state that has a snapshot, restore it */
- State* last_state = backtrack.back().get();
- if (const auto* system_state = last_state->get_system_state()) {
+ if (const auto* system_state = backtracking_point->get_system_state()) {
system_state->restore(*get_remote_app().get_remote_process_memory());
- on_restore_system_state_signal(last_state, get_remote_app());
- stack_ = backtrack;
+ on_restore_system_state_signal(backtracking_point.get(), get_remote_app());
+ this->restore_stack(backtracking_point);
return;
}
get_remote_app().restore_initial_state();
on_restore_initial_state_signal(get_remote_app());
/* Traverse the stack from the state at position start and re-execute the transitions */
- for (std::shared_ptr<State> const& state : backtrack) {
- if (state == backtrack.back()) /* If we are arrived on the target state, don't replay the outgoing transition */
- break;
- state->get_transition()->replay(get_remote_app());
- on_transition_replay_signal(state->get_transition(), get_remote_app());
+ for (auto& state : backtracking_point->get_recipe()) {
+ state->replay(get_remote_app());
+ on_transition_replay_signal(state, get_remote_app());
visited_states_count_++;
}
- stack_ = backtrack;
- XBT_DEBUG(">> Backtracked to %s", get_record_trace().to_string().c_str());
+ this->restore_stack(backtracking_point);
}
DFSExplorer::DFSExplorer(const std::vector<char*>& args, bool with_dpor, bool need_memory_info)
} else
XBT_INFO("Start a DFS exploration. Reduction is: %s.", to_c_str(reduction_mode_));
- auto initial_state = std::make_unique<State>(get_remote_app());
+ auto initial_state = std::make_shared<State>(get_remote_app());
XBT_DEBUG("**************************************************");
stack_.push_back(std::move(initial_state));
/* Get an enabled actor and insert it in the interleave set of the initial state */
- XBT_DEBUG("Initial state. %lu actors to consider", initial_state->get_actor_count());
+ XBT_DEBUG("Initial state. %lu actors to consider", stack_.back()->get_actor_count());
if (reduction_mode_ == ReductionMode::dpor)
stack_.back()->consider_best();
else {
stack_.back()->consider_all();
}
if (stack_.back()->count_todo_multiples() > 1)
- add_to_opened_states(stack_);
+ opened_states_.push(std::shared_ptr<State>(stack_.back()));
}
Exploration* create_dfs_exploration(const std::vector<char*>& args, bool with_dpor)
* regarding the chosen guide in the last state. */
class OpenedStatesCompare {
public:
- bool operator()(stack_t const& lhs, stack_t const& rhs)
+ bool operator()(std::shared_ptr<State> const& lhs, std::shared_ptr<State> const& rhs)
{
- return lhs.back()->next_transition_guided().second < rhs.back()->next_transition_guided().second;
+ return lhs->next_transition_guided().second < rhs->next_transition_guided().second;
}
};
/** Opened states are states that still contain todo actors.
 * When backtracking, we pick a state from this set. */
- std::priority_queue<stack_t, std::vector<stack_t>, OpenedStatesCompare> opened_states_;
- void add_to_opened_states(stack_t stack);
+ std::priority_queue<std::shared_ptr<State>, std::vector<std::shared_ptr<State>>, OpenedStatesCompare> opened_states_;
+
+ /** Reset the current stack_ to the value it would have had if we had executed
+  * the transitions leading to the given state. This is required when backtracking,
+  * and is made possible by the fact that each state records its parent. */
+ void restore_stack(std::shared_ptr<State> state);
RecordTrace get_record_trace_of_stack(stack_t stack);
};
> If this is too much, consider sharing allocations for computation buffers.
> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
>
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
-> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
-> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
-> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
->
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 891 unique states visited; 236 backtracks (5614 transition replays, 4388 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 623 unique states visited; 173 backtracks (3980 transition replays, 3108 states visited overall)