/*! @page uhood_switch Process Synchronizations and Context Switching

@section uhood_switch_DES SimGrid as an Operating System
SimGrid is a discrete event simulator of distributed systems: it does
not simulate the world by small fixed-size steps but determines the
date of the next event (such as the end of a communication or the end
of a computation) and jumps directly to this date.
A number of actors executing user-provided code run on top of the
simulation kernel. The interactions between these actors and the
simulation kernel are very similar to the ones between system
processes and the Operating System (except that the actors and the
simulation kernel share the same address space in a single OS
process).
When an actor needs to interact with the outer world (e.g. to start a
communication), it issues a <i>simcall</i> (simulation call), just
like a system process issues a <i>syscall</i> to interact with its
environment through the Operating System. Any <i>simcall</i> freezes
the actor until it is woken up by the simulation kernel (e.g. when the
communication is finished).
Mimicking the OS behavior may seem over-engineered here, but it is
mandatory for the model checker. The simcalls, representing the actors'
actions, are the transitions of the formal system. Verifying the
system requires manipulating these transitions explicitly. This also
allows one to run the actors safely in parallel, even if this is less
commonly used by our users.
So, the key ideas here are:

- The simulator is a discrete event simulator (event-driven).

- An actor can issue a blocking simcall and will be suspended until
  it is woken up by the simulation kernel (when the operation is
  completed).

- In order to move forward in (simulated) time, the simulation kernel
  needs to know which actions the actors want to do.

- The simulated time will only move forward when all the actors are
  blocked, waiting on a simcall.
This leads to some very important consequences:

- An actor cannot synchronize with another actor using OS-level primitives
  such as `pthread_mutex_lock()` or `std::mutex`. The simulation kernel
  would wait for the actor to issue a simcall and would deadlock. Instead, it
  must use simulation-level synchronization primitives
  (such as `simcall_mutex_lock()`).

- Similarly, an actor cannot sleep using
  `std::this_thread::sleep_for()`, which waits in the real world. It
  must instead wait in the simulation with
  `simgrid::s4u::this_actor::sleep_for()`, which waits in simulated
  time.

- The simulation kernel cannot block.
  Only the actors can block (using simulation primitives).
@section uhood_switch_futures Futures and Promises

@subsection uhood_switch_futures_what What is a future?
Futures are a nice classical programming abstraction, present in many
languages. Wikipedia defines a
[future](https://en.wikipedia.org/wiki/Futures_and_promises) as an
object that acts as a proxy for a result that is initially unknown,
usually because the computation of its value is not yet complete. This
concept is thus perfectly adapted to represent, in the kernel, the
asynchronous operations corresponding to the actors' simcalls.
Futures can be manipulated using two kinds of APIs:

- a <b>blocking API</b> where we wait for the result to be available
  (`res = future.get()`);

- a <b>continuation-based API</b> where we say what should be done
  with the result when the operation completes
  (`future.then(something_to_do_with_the_result)`). This is heavily
  used in ECMAScript, which exhibits the same kind of never-blocking
  asynchronous model as our discrete event simulator.
C++11 includes a generic class (`std::future<T>`) which implements a
blocking API. The continuation-based API is not available in the
standard (yet) but is [already
described](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0159r0.html#futures.unique_future.6)
in the Concurrency Technical Specification.
`Promise`s are the counterparts of `Future`s: `std::future<T>` is used
<em>by the consumer</em> of the result. On the other hand,
`std::promise<T>` is used <em>by the producer</em> of the result. The
producer calls `promise.set_value(42)` or `promise.set_exception(e)`
in order to <em>set the result</em>, which will be made available to
the consumer by `future.get()`.
@subsection uhood_switch_futures_needs Which future do we need?

The blocking API provided by the standard C++11 futures does not suit
our needs since the simulation kernel <em>cannot</em> block, and since
we want to explicitly schedule the actors. Instead, we need to
reimplement a continuation-based API to be used in our event-driven
simulation kernel.

Our futures are based on the C++ Concurrency Technical Specification
API, with a few differences:
- The simulation kernel is single-threaded, so we do not need
  inter-thread synchronization for our futures.

- As the simulation kernel cannot block, `f.wait()` is not meaningful
  in this context.

- Similarly, `future.get()` does an implicit wait. Calling this method in the
  simulation kernel only makes sense if the future is already ready. If the
  future is not ready, this would deadlock the simulator, so an error is
  raised instead.

- We always call the continuations in the simulation loop (and not
  inside the `future.then()` or `promise.set_value()` calls). That
  way, we don't have to fear problems like invariants not being
  restored when the callbacks are called :fearful: or stack overflows
  triggered by deeply nested continuation chains :cold_sweat:. The
  continuations are all called in a nice and predictable place in the
  simulator with a nice and predictable state :relieved:.

- Some features of the standard (such as shared futures) are not
  needed in our context, and thus not considered here.
@subsection uhood_switch_futures_implem Implementing `Future` and `Promise`

The `simgrid::kernel::Future` and `simgrid::kernel::Promise` classes use a
shared state defined as follows:
```cpp
enum class FutureStatus {
  not_ready,
  ready,
  done,
};

class FutureStateBase : private boost::noncopyable {
public:
  void schedule(simgrid::xbt::Task<void()>&& job);
  void set_exception(std::exception_ptr exception);
  void set_continuation(simgrid::xbt::Task<void()>&& continuation);
  FutureStatus get_status() const;
  bool is_ready() const;
  // [...]
private:
  FutureStatus status_ = FutureStatus::not_ready;
  std::exception_ptr exception_;
  simgrid::xbt::Task<void()> continuation_;
};

template <class T>
class FutureState : public FutureStateBase {
public:
  void set_value(T value);
  T get();
private:
  boost::optional<T> value_;
};

template <class T>
class FutureState<T&> : public FutureStateBase {
  // [...]
};

template <>
class FutureState<void> : public FutureStateBase {
  // [...]
};
```
Both `Future` and `Promise` have a reference to the shared state:

```cpp
template<class T>
class Future {
  // [...]
private:
  std::shared_ptr<FutureState<T>> state_;
};

template<class T>
class Promise {
  // [...]
private:
  std::shared_ptr<FutureState<T>> state_;
  bool future_get_ = false;
};
```
The crux of `future.then()` is:

```cpp
template<class T>
template<class F>
auto simgrid::kernel::Future<T>::then_no_unwrap(F continuation)
-> Future<decltype(continuation(std::move(*this)))>
{
  typedef decltype(continuation(std::move(*this))) R;
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  auto state = std::move(state_);
  // Create a new future...
  Promise<R> promise;
  Future<R> future = promise.get_future();
  // ...and when the current future is ready...
  state->set_continuation(simgrid::xbt::makeTask(
    [](Promise<R> promise, std::shared_ptr<FutureState<T>> state,
        F continuation) {
      // ...set the new future value by running the continuation.
      Future<T> future(std::move(state));
      simgrid::xbt::fulfillPromise(promise,[&]{
        return continuation(std::move(future));
      });
    },
    std::move(promise), state, std::move(continuation)));
  return std::move(future);
}
```
We added a (much simpler) `future.then_()` method which does not
create a new future:

```cpp
template<class T>
template<class F>
void simgrid::kernel::Future<T>::then_(F continuation)
{
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  // Give shared-ownership to the continuation:
  auto state = std::move(state_);
  state->set_continuation(simgrid::xbt::makeTask(
    std::move(continuation), state));
}
```
The `.get()` delegates to the shared state. As we mentioned previously, an
error is raised if the future is not ready:

```cpp
template<class T>
T simgrid::kernel::Future<T>::get()
{
  if (state_ == nullptr)
    throw std::future_error(std::future_errc::no_state);
  std::shared_ptr<FutureState<T>> state = std::move(state_);
  return state->get();
}

template<class T>
T simgrid::kernel::FutureState<T>::get()
{
  xbt_assert(status_ == FutureStatus::ready, "Deadlock: this future is not ready");
  status_ = FutureStatus::done;
  if (exception_) {
    std::exception_ptr exception = std::move(exception_);
    std::rethrow_exception(std::move(exception));
  }
  xbt_assert(this->value_);
  auto result = std::move(this->value_.get());
  this->value_ = boost::optional<T>();
  return std::move(result);
}
```
@section uhood_switch_simcalls Implementing the simcalls

So a simcall is a way for the actor to push a request to the
simulation kernel and yield control until the request is
fulfilled. The performance requirements are very high because the
actors usually issue an inordinate number of simcalls during a
simulation.

As for real syscalls, the basic idea is to write the wanted call and
its arguments in a memory area that is specific to the actor, and to
yield control to the simulation kernel. Once in kernel mode, the
simcalls of every demanding actor are evaluated sequentially in a
strictly reproducible order. This makes the whole simulation
reproducible.
@subsection uhood_switch_simcalls_v2 The historical way

In the very first implementation, everything was written by hand and
highly optimized, making our software very hard to maintain and
evolve. We decided to sacrifice some performance for
maintainability. In a second attempt (still in use in SimGrid
v3.13), a lot of boilerplate code is generated from a Python script,
taking the [list of simcalls](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/simcalls.in)
as input. It looks like this:
```cpp
# This looks like C++ but it is a basic IDL-like language
# (one definition per line) parsed by a python script:

void process_kill(smx_actor_t process);
void process_killall(int reset_pid);
void process_cleanup(smx_actor_t process) [[nohandler]];
void process_suspend(smx_actor_t process) [[block]];
void process_resume(smx_actor_t process);
void process_set_host(smx_actor_t process, sg_host_t dest);
int process_is_suspended(smx_actor_t process) [[nohandler]];
int process_join(smx_actor_t process, double timeout) [[block]];
int process_sleep(double duration) [[block]];

smx_mutex_t mutex_init();
void mutex_lock(smx_mutex_t mutex) [[block]];
int mutex_trylock(smx_mutex_t mutex);
void mutex_unlock(smx_mutex_t mutex);

# [...]
```
At runtime, a simcall is represented by a structure containing a simcall
number and its arguments (among some other things):

```cpp
struct s_smx_simcall {
  // [...]
  // Arguments of the simcall:
  union u_smx_scalar args[11];
  // Result of the simcall:
  union u_smx_scalar result;
  // Some additional stuff:
  // [...]
};
```

with a scalar union type:

```cpp
union u_smx_scalar {
  // [...]
  unsigned long long ull;
  // [...]
};
```
When manually calling the relevant [Python
script](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/simcalls.py),
this generates a bunch of C++ files:

* an enum of all the [simcall numbers](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_enum.h#L19);

* [user-side wrappers](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_bodies.cpp)
  responsible for marshalling the parameters into the `struct s_smx_simcall`
  and unmarshalling the result;

* [accessors](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_accessors.hpp)
  to get/set values of `struct s_smx_simcall`;

* a simulation-kernel-side [big switch](https://github.com/simgrid/simgrid/blob/4ae2fd01d8cc55bf83654e29f294335e3cb1f022/src/simix/popping_generated.cpp#L106)
  handling all the simcall numbers.
Then one has to write the kernel-side handler for the simcall
and the code of the simcall itself (which calls the generated
marshalling/unmarshalling code).
In order to simplify this process, we added two generic simcalls which can be
used to execute a function in the simulation kernel:

```cpp
# This one should really be called run_immediate:
void run_kernel(std::function<void()> const* code) [[nohandler]];
void run_blocking(std::function<void()> const* code) [[block,nohandler]];
```
### Immediate simcall

The first one (`simcall_run_kernel()`) executes a function in the simulation
kernel context and returns immediately (without blocking the actor):

```cpp
void simcall_run_kernel(std::function<void()> const& code)
{
  simcall_BODY_run_kernel(&code);
}

template<class F> inline
void simcall_run_kernel(F& f)
{
  simcall_run_kernel(std::function<void()>(std::ref(f)));
}
```
On top of this, we add a wrapper which can be used to return a value of any
type and properly handles exceptions:

```cpp
template<class F>
typename std::result_of_t<F()> kernelImmediate(F&& code)
{
  // If we are in the simulation kernel, we take the fast path and
  // execute the code directly without simcall
  // marshalling/unmarshalling/dispatch:
  if (SIMIX_is_maestro())
    return std::forward<F>(code)();

  // If we are in the application, pass the code to the simulation
  // kernel which executes it for us and reports the result:
  typedef typename std::result_of_t<F()> R;
  simgrid::xbt::Result<R> result;
  simcall_run_kernel([&]{
    xbt_assert(SIMIX_is_maestro(), "Not in maestro");
    simgrid::xbt::fulfillPromise(result, std::forward<F>(code));
  });
  return result.get();
}
```

where [`Result<R>`](#result) can store either a `R` or an exception.
For example:

```cpp
xbt_dict_t Host::properties() {
  return simgrid::simix::kernelImmediate([&] {
    simgrid::kernel::resource::HostImpl* host =
      this->extension<simgrid::kernel::resource::HostImpl>();
    return host->getProperties();
  });
}
```
### Blocking simcall {#uhood_switch_v2_blocking}

The second generic simcall (`simcall_run_blocking()`) executes a function in
the SimGrid simulation kernel immediately but does not wake up the calling
actor automatically:

```cpp
void simcall_run_blocking(std::function<void()> const& code);

template<class F>
void simcall_run_blocking(F& f)
{
  simcall_run_blocking(std::function<void()>(std::ref(f)));
}
```
The `f` function is expected to set up some callbacks in the simulation
kernel which will wake up the actor (with
`simgrid::simix::unblock(actor)`) when the operation is completed.

This is wrapped in a higher-level primitive as well. The
`kernel_sync()` function expects a function object which is executed
immediately in the simulation kernel and returns a `Future<T>`. The
simulator blocks the actor and resumes it when the `Future<T>` becomes
ready with its result:
```cpp
template<class F>
auto kernel_sync(F code) -> decltype(code().get())
{
  typedef decltype(code().get()) T;
  xbt_assert(not SIMIX_is_maestro(), "Can't execute blocking call in kernel mode");

  auto self = simgrid::kernel::actor::ActorImpl::self();
  simgrid::xbt::Result<T> result;

  simcall_run_blocking([&result, self, &code]{
    try {
      auto future = code();
      future.then_([&result, self](simgrid::kernel::Future<T> value) {
        // Propagate the result from the future
        // to the simgrid::xbt::Result:
        simgrid::xbt::setPromise(result, value);
        simgrid::simix::unblock(self);
      });
    }
    catch (...) {
      // The code failed immediately. We can wake up the actor
      // immediately with the exception:
      result.set_exception(std::current_exception());
      simgrid::simix::unblock(self);
    }
  });

  // Get the result of the operation (which might be an exception):
  return result.get();
}
```
A contrived example of this would be:

```cpp
int res = simgrid::simix::kernel_sync([&] {
  return kernel_wait_until(30).then(
    [](simgrid::kernel::Future<void> future) {
      return 42;
    });
});
```
### Asynchronous operations {#uhood_switch_v2_async}

We can write the related `kernel_async()` which wakes up the actor immediately
and returns a future to the actor. As this future is used in the actor context,
it is a different future
(`simgrid::simix::Future` instead of `simgrid::kernel::Future`)
which implements a C++11 `std::future` wait-based API:

```cpp
template<class T>
class Future {
public:
  Future() {}
  Future(simgrid::kernel::Future<T> future) : future_(std::move(future)) {}

  bool valid() const { return future_.valid(); }
  T get();
  bool is_ready() const;
  // [...]
private:
  // We wrap an event-based kernel future:
  simgrid::kernel::Future<T> future_;
};
```
The `future.get()` method is implemented as[^getcompared]:

```cpp
template<class T>
T simgrid::simix::Future<T>::get()
{
  if (!valid())
    throw std::future_error(std::future_errc::no_state);
  auto self = simgrid::kernel::actor::ActorImpl::self();
  simgrid::xbt::Result<T> result;
  simcall_run_blocking([this, &result, self]{
    try {
      // When the kernel future is ready...
      this->future_.then_(
        [this, &result, self](simgrid::kernel::Future<T> value) {
          // ... wake up the process with the result of the kernel future.
          simgrid::xbt::setPromise(result, value);
          simgrid::simix::unblock(self);
        });
    }
    catch (...) {
      result.set_exception(std::current_exception());
      simgrid::simix::unblock(self);
    }
  });
  return result.get();
}
```
`kernel_async()` simply :wink: calls `kernelImmediate()` and wraps the
`simgrid::kernel::Future` into a `simgrid::simix::Future`:

```cpp
template<class F>
auto kernel_async(F code)
-> Future<decltype(code().get())>
{
  typedef decltype(code().get()) T;

  // Execute the code in the simulation kernel and get the kernel future:
  simgrid::kernel::Future<T> future =
    simgrid::simix::kernelImmediate(std::move(code));

  // Wrap the kernel future in a user future:
  return simgrid::simix::Future<T>(std::move(future));
}
```
A contrived example of this would be:

```cpp
simgrid::simix::Future<int> future = simgrid::simix::kernel_async([&] {
  return kernel_wait_until(30).then(
    [](simgrid::kernel::Future<void> future) {
      return 42;
    });
});
// [...]
int res = future.get();
```
`kernel_sync()` could be rewritten as:

```cpp
template<class F>
auto kernel_sync(F code) -> decltype(code().get())
{
  return kernel_async(std::move(code)).get();
}
```

The semantics are equivalent, but this form would require two simcalls
instead of one to do the same job (one in `kernel_async()` and one in
`future.get()`).
## Mutexes and condition variables

### Condition Variables

Similarly, SimGrid already had simulation-level condition variables,
which can be exposed using the same API as `std::condition_variable`:
```cpp
class ConditionVariable {
private:
  smx_cond_t cond_;
  // [...]
  ConditionVariable(smx_cond_t cond) : cond_(cond) {}
public:
  ConditionVariable(ConditionVariable const&) = delete;
  ConditionVariable& operator=(ConditionVariable const&) = delete;

  friend void intrusive_ptr_add_ref(ConditionVariable* cond);
  friend void intrusive_ptr_release(ConditionVariable* cond);
  using Ptr = boost::intrusive_ptr<ConditionVariable>;
  static Ptr createConditionVariable();

  void wait(std::unique_lock<Mutex>& lock);
  template<class P>
  void wait(std::unique_lock<Mutex>& lock, P pred);

  // Wait functions taking a plain double as time:

  std::cv_status wait_until(std::unique_lock<Mutex>& lock,
    double timeout_time);
  std::cv_status wait_for(
    std::unique_lock<Mutex>& lock, double duration);
  template<class P>
  bool wait_until(std::unique_lock<Mutex>& lock,
    double timeout_time, P pred);
  template<class P>
  bool wait_for(std::unique_lock<Mutex>& lock,
    double duration, P pred);

  // Wait functions taking a std::chrono time:

  template<class Rep, class Period, class P>
  bool wait_for(std::unique_lock<Mutex>& lock,
    std::chrono::duration<Rep, Period> duration, P pred);
  template<class Rep, class Period>
  std::cv_status wait_for(std::unique_lock<Mutex>& lock,
    std::chrono::duration<Rep, Period> duration);
  template<class Duration>
  std::cv_status wait_until(std::unique_lock<Mutex>& lock,
    const SimulationTimePoint<Duration>& timeout_time);
  template<class Duration, class P>
  bool wait_until(std::unique_lock<Mutex>& lock,
    const SimulationTimePoint<Duration>& timeout_time, P pred);

  // [...]
};
```
We currently accept both `double` (for simplicity and consistency with
the current codebase) and `std::chrono` types (for compatibility with
C++ code) as durations and timepoints. One important thing to notice here is
that `cond.wait_for()` and `cond.wait_until()` work in simulated time,
not in real time.
The simple `cond.wait()` and `cond.wait_for()` delegate to
pre-existing simcalls:

```cpp
void ConditionVariable::wait(std::unique_lock<Mutex>& lock)
{
  simcall_cond_wait(cond_, lock.mutex()->mutex_);
}

std::cv_status ConditionVariable::wait_for(
  std::unique_lock<Mutex>& lock, double timeout)
{
  // The simcall uses -1 for "any timeout" but we don't want this:
  if (timeout < 0)
    timeout = 0.0;
  try {
    simcall_cond_wait_timeout(cond_, lock.mutex()->mutex_, timeout);
    return std::cv_status::no_timeout;
  }
  catch (const simgrid::TimeoutException& e) {
    // If the exception was a timeout, we have to take the lock again:
    lock.mutex()->lock();
    return std::cv_status::timeout;
  }
}
```
Other methods are simple wrappers around these two:

```cpp
template<class P>
void ConditionVariable::wait(std::unique_lock<Mutex>& lock, P pred)
{
  while (!pred())
    this->wait(lock);
}

template<class P>
bool ConditionVariable::wait_until(std::unique_lock<Mutex>& lock,
  double timeout_time, P pred)
{
  while (!pred())
    if (this->wait_until(lock, timeout_time) == std::cv_status::timeout)
      return pred();
  return true;
}

template<class P>
bool ConditionVariable::wait_for(std::unique_lock<Mutex>& lock,
  double duration, P pred)
{
  return this->wait_until(lock,
    simgrid::s4u::Engine::get_clock() + duration, std::move(pred));
}
```
## Conclusion

We wrote two future implementations based on the `std::future` API:

* the first one is a non-blocking, event-based (`future.then(stuff)`)
  future used inside our (non-blocking, event-based) simulation kernel;

* the second one is a wait-based (`future.get()`) future used in the actors,
  which waits using a simcall.

These futures are used to implement `kernel_sync()` and `kernel_async()`, which
expose asynchronous operations in the simulation kernel to the actors.

In addition, we wrote variations of some other C++ standard library
classes (`SimulationClock`, `Mutex`, `ConditionVariable`) which work in
the simulated world:

* using simulated time;

* using simcalls for synchronization.
Reusing the same API as the C++ standard library is very useful because:

* we use a proven API with clearly defined semantics;

* people already familiar with those APIs can use ours easily;

* users can rely on documentation, examples, and tutorials made by others;

* we can reuse generic code with our types (`std::unique_lock`,
  `std::lock_guard`, etc.).
This type of approach might be useful for other libraries which define
their own contexts. An example of this is
[Mordor](https://github.com/mozy/mordor), an I/O library using fibers
(cooperative scheduling): it implements a cooperative/fiber
[mutex](https://github.com/mozy/mordor/blob/4803b6343aee531bfc3588ffc26a0d0fdf14b274/mordor/fibersynchronization.h#L70)
and [recursive
mutex](https://github.com/mozy/mordor/blob/4803b6343aee531bfc3588ffc26a0d0fdf14b274/mordor/fibersynchronization.h#L105)
which are compatible with the
[`BasicLockable`](http://en.cppreference.com/w/cpp/concept/BasicLockable)
concept (see
[`[thread.req.lockable.basic]`](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf#page=1175)
in the C++14 standard).
## Appendix: useful helpers

### Result {#result}

`Result` is like a mix of `std::future` and `std::promise` in a
single object, without shared state and synchronization:

```cpp
template<class T>
class Result {
public:
  bool is_valid() const;
  void set_exception(std::exception_ptr e);
  void set_value(T&& value);
  void set_value(T const& value);
  T get();
private:
  boost::variant<boost::blank, T, std::exception_ptr> value_;
};
```
These helpers are useful for dealing with generic future-based code:

```cpp
template<class R, class F>
auto fulfillPromise(R& promise, F&& code)
-> decltype(promise.set_value(code()))
{
  try {
    promise.set_value(std::forward<F>(code)());
  }
  catch(...) {
    promise.set_exception(std::current_exception());
  }
}

template<class P, class F>
auto fulfillPromise(P& promise, F&& code)
-> decltype(promise.set_value())
{
  try {
    std::forward<F>(code)();
    promise.set_value();
  }
  catch(...) {
    promise.set_exception(std::current_exception());
  }
}

template<class P, class F>
void setPromise(P& promise, F&& future)
{
  fulfillPromise(promise, [&]{ return std::forward<F>(future).get(); });
}
```
### Task

`Task<R(F...)>` is a type-erased callable object similar to
`std::function<R(F...)>` but works for move-only types. It is similar to
`std::packaged_task<R(F...)>` but does not wrap the result in a `std::future<R>`
(it is not <i>packaged</i>).

| |`std::function` |`std::packaged_task`|`simgrid::xbt::Task`
|---------------|----------------|--------------------|--------------------------
|Copyable | Yes | No | No
|Movable | Yes | Yes | Yes
|Call | `const` | non-`const` | non-`const`
|Callable | multiple times | once | once
|Sets a promise | No | Yes | No
It could be implemented as:

```cpp
template<class T>
class Task {
private:
  std::packaged_task<T> task_;
public:
  template<class F>
  Task(F f) :
    task_(std::forward<F>(f))
  {}
  template<class... ArgTypes>
  auto operator()(ArgTypes... args)
  -> decltype(task_.get_future().get())
  {
    task_(std::forward<ArgTypes>(args)...);
    return task_.get_future().get();
  }
};
```

but we don't need a shared state.
This is useful in order to bind move-only type arguments:

```cpp
template<class F, class... Args>
class TaskImpl {
private:
  F code_;
  std::tuple<Args...> args_;
  typedef decltype(simgrid::xbt::apply(
    std::move(code_), std::move(args_))) result_type;
public:
  TaskImpl(F code, std::tuple<Args...> args) :
    code_(std::move(code)),
    args_(std::move(args))
  {}
  result_type operator()()
  {
    // simgrid::xbt::apply is C++17 std::apply:
    return simgrid::xbt::apply(std::move(code_), std::move(args_));
  }
};

template<class F, class... Args>
auto makeTask(F code, Args... args)
-> Task< decltype(code(std::move(args)...))() >
{
  TaskImpl<F, Args...> task(
    std::move(code), std::make_tuple(std::move(args)...));
  return std::move(task);
}
```
[^getcompared]: You might want to compare this method with
    `simgrid::kernel::Future::get()` shown previously: the method of the
    kernel future does not block and raises an error if the future is not
    ready; the method of the actor future blocks after having set a
    continuation to wake the actor when the future is ready.
`std::lock()` might mostly work too, but it may not be such a good idea to
use it, as it may rely on a [<q>deadlock avoidance algorithm such as
try-and-back-off</q>](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf#page=1199).
A backoff would probably uselessly wait in real time instead of simulated
time. The deadlock avoidance algorithm might as well add non-determinism
to the simulation, which we would like to avoid.
`std::try_lock()` should be safe to use though.

*/