###
Document the fact that gras processes display the backtrace on sigusr and sigint
-Document XBT_LOG_EXTERNAL_DEFAULT_CATEGORY
Document host module
/* FIXME: better place? */
* maybe a memory pool so that we can cleanly kill an actor
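The backtrace-on-signal behavior to document could be illustrated with a sketch along these lines, assuming the glibc `backtrace` API; `install_backtrace_handlers` and the choice of SIGUSR1 are illustrative, not the actual GRAS code:

```c
#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

/* Illustrative sketch, NOT the GRAS implementation: on SIGUSR1 or
   SIGINT, dump the current call stack to stderr. */
static void dump_backtrace(int sig) {
  void *frames[64];
  int depth = backtrace(frames, 64);
  /* backtrace_symbols_fd writes straight to the fd without calling
     malloc, which matters inside a signal handler */
  backtrace_symbols_fd(frames, depth, STDERR_FILENO);
  (void)sig;
}

void install_backtrace_handlers(void) {
  signal(SIGUSR1, dump_backtrace);
  signal(SIGINT, dump_backtrace);
}
```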
[errors/exception]
- * Better split casual errors from programing errors.
- The first ones should be repported to the user, the second should kill
+ * Better split casual errors from programming errors.
+ The first ones should be reported to the user, the second should kill
the program (or, yet better, only the msg handler)
* Allow the use of an error handler depending on the current module (i.e.,
  the same philosophy as log4c, using GSL's error functions)
initializations, and more)
* Allow each actor to have its own setting
* an init/exit mechanism for logging appenders
- * Several appenders; fix the setting stuff to change the appender
* more logging appenders (take those from Ralf in l2)
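The per-module error handler item could follow GSL's model, where installing a handler returns the previous one (cf. `gsl_set_error_handler`). A minimal sketch, with all names (`set_error_handler`, the module ids) invented for illustration:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of per-module error handlers, NOT the GRAS API. */
typedef void (*error_handler_t)(const char *module, const char *msg);

enum { MOD_TRANSPORT, MOD_DATADESC, MOD_COUNT };

static void default_handler(const char *module, const char *msg) {
  fprintf(stderr, "[%s] error: %s\n", module, msg);
  exit(1);
}

static error_handler_t handlers[MOD_COUNT] =
    { default_handler, default_handler };

/* Install a handler for one module; returns the previous one, in the
   same style as gsl_set_error_handler. NULL restores the default. */
error_handler_t set_error_handler(int module, error_handler_t h) {
  error_handler_t old = handlers[module];
  handlers[module] = h ? h : default_handler;
  return old;
}

/* Modules report errors through their own handler. */
void raise_error(int module, const char *name, const char *msg) {
  handlers[module](name, msg);
}
```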
[modules]
examples, too
[transport]
- * Spawn threads handling the communication
- - Data sending cannot be delegated if we want to be kept informed
- (*easily*) of errors here.
- - Actor execution flow shouldn't be interrupted
- - It should be allowed to access (both in read and write access)
- any data available (ie, referenced) from the actor without
- requesting to check for a condition before.
- (in other word, no mutex or assimilated)
- - I know that enforcing those rules prevent the implementation of
-     really clever stuff. Keeping the stuff simple for the users is more
-     important to me than allowing them to do clever tricks. Black magic
- should be done *within* gras to reach a good performance level.
-
- - Data receiving can be delegated (and should)
-   The first step here is a "simple" mailbox mechanism, with a FIFO of
- messages protected by semaphore.
- The rest is rather straightforward too.
-
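The mailbox mechanism sketched above (a FIFO of messages protected by a semaphore) could look roughly like this; every name here is illustrative, not the GRAS API:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>

/* Illustrative sketch: a FIFO mailbox where a semaphore counts the
   queued messages and a mutex protects the list itself. */
typedef struct msg_node {
  void *payload;
  struct msg_node *next;
} msg_node_t;

typedef struct {
  msg_node_t *head, *tail;
  pthread_mutex_t lock;
  sem_t count; /* number of messages currently queued */
} mailbox_t;

void mailbox_init(mailbox_t *mb) {
  mb->head = mb->tail = NULL;
  pthread_mutex_init(&mb->lock, NULL);
  sem_init(&mb->count, 0, 0);
}

/* Called by the receiving thread to hand a message to the actor. */
void mailbox_post(mailbox_t *mb, void *payload) {
  msg_node_t *n = malloc(sizeof *n);
  n->payload = payload;
  n->next = NULL;
  pthread_mutex_lock(&mb->lock);
  if (mb->tail) mb->tail->next = n; else mb->head = n;
  mb->tail = n;
  pthread_mutex_unlock(&mb->lock);
  sem_post(&mb->count);
}

/* Called by the actor; blocks until a message is available. */
void *mailbox_get(mailbox_t *mb) {
  sem_wait(&mb->count);
  pthread_mutex_lock(&mb->lock);
  msg_node_t *n = mb->head;
  mb->head = n->next;
  if (!mb->head) mb->tail = NULL;
  pthread_mutex_unlock(&mb->lock);
  void *p = n->payload;
  free(n);
  return p;
}
```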
* use poll(2) instead of select(2) when available. (first need to check
the advantage of doing so ;)
Another idea we spoke about was to simulate this feature with a bunch of
- threads blocked in a read(1) on each incomming socket. The latency is
+ threads blocked in a read(1) on each incoming socket. The latency is
reduced by the cost of a syscall, but the more I think about it, the
less I find the idea adapted to our context.
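For reference, a poll(2)-based wait over a set of sockets might look like the sketch below; `wait_readable` is a hypothetical helper, and the advantages over select(2) it hints at are the absence of the FD_SETSIZE ceiling and not having to rebuild an fd_set before every call:

```c
#include <poll.h>

/* Illustrative sketch: block until one of the given sockets has data
   to read, or timeout_ms elapses. Returns the ready fd, or -1 on
   error/timeout. Assumes n <= 64 to keep the sketch short. */
int wait_readable(const int *socks, int n, int timeout_ms) {
  struct pollfd fds[64];
  for (int i = 0; i < n; i++) {
    fds[i].fd = socks[i];
    fds[i].events = POLLIN;
    fds[i].revents = 0;
  }
  int ready = poll(fds, (nfds_t)n, timeout_ms);
  if (ready <= 0)
    return -1; /* error or timeout */
  for (int i = 0; i < n; i++)
    if (fds[i].revents & POLLIN)
      return socks[i]; /* first socket with pending data */
  return -1;
}
```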
[bandwidth]
* add a version guessing the appropriate datasizes automatically
[other modules]
- * provide a way to retrieve the host load as in NWS
* log control, management, dynamic token ring
* a way, using SSH, to ask a remote host to open a socket back to me
-
\ No newline at end of file
+
+
+******
+* SURF
+******
+
+[maxmin]
+ * select the portion of the system that changed instead of recomputing
+   the whole system solution at each action change
+
+
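One cheap way to select only the changed portion of the system is a dirty flag per constraint plus a partial re-solve; this sketch is purely illustrative and far simpler than a real lmm solver:

```c
#include <stdbool.h>

#define NCONS 8

/* Illustrative sketch, not the SURF maxmin code: each constraint
   caches its last solution and is flagged when an action changes. */
typedef struct {
  double capacity;
  double usage;  /* cached result of the last solve */
  bool dirty;    /* set when an action on this constraint changed */
} constraint_t;

static constraint_t cons[NCONS];

void action_changed(int c) { cons[c].dirty = true; }

/* Stand-in for one local update; a real solver would redistribute
   shares among the actions on this constraint. */
static void solve_one(constraint_t *c) { c->usage = c->capacity; }

/* Re-solve only the dirty constraints instead of the whole system;
   returns how many were updated. */
int solve_partial(void) {
  int updated = 0;
  for (int i = 0; i < NCONS; i++)
    if (cons[i].dirty) {
      solve_one(&cons[i]);
      cons[i].dirty = false;
      updated++;
    }
  return updated;
}
```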