Document the fact that GRAS processes display the backtrace on SIGUSR and SIGINT
Document XBT_LOG_EXTERNAL_DEFAULT_CATEGORY
/* FIXME: better place? */
int vasprintf (char **ptr, const char *fmt, va_list ap);
char *bprintf(const char *fmt, ...) _XBT_GNUC_PRINTF(1,2);
- Rename SWAG to RING?
- Rename cursor to iterator
gras_socket_close should block until all the data sent has been
received by the other side (implemented with an ACK mechanism).
* Check the gcc version on powerpc. We disabled -floop-optimize on powerpc,
  but versions above 3.4.0 should be fine.
* Check whether we have something better than jmp_buf to implement exceptions,
  and use it (may need to generate a public .h, as glib does)
(errors, logs; dynars, dicts, hooks, pools; config, rrdb)
* Maybe a memory pool so that we can cleanly kill an actor
* Better separate casual errors from programming errors.
  The former should be reported to the user, the latter should kill
  the program (or, better yet, only the msg handler)
* Allow the use of an error handler depending on the current module (i.e.,
  the same philosophy as log4c, using GSL's error functions)
* Hijack messages from a given category to another for a while (to mask
  initializations, and more)
* Allow each actor to have its own settings
* An init/exit mechanism for logging appenders
* Several appenders; fix the settings stuff to change the appender
* More logging appenders (take those from Ralf in l2)
* Add configuration and dependencies to our module definition
* Allow modules to be loaded at runtime
  (check how Erlang upgrades modules without downtime)
* We may need a round-robin database module, and a statistical one
* A hook module *may* help cleaning up some parts. Not sure yet.
* Some of the data container modules seem to overlap. Kill some of them?
  - replace fifo with dynars
  - replace set with SWAG
* Implement the P2P protocols that macedon does. They constitute great
* Spawn threads handling the communication
  - Data sending cannot be delegated if we want to be kept informed
    (*easily*) of errors here.
  - Actor execution flow shouldn't be interrupted
  - It should be possible to access (for both reading and writing)
    any data available (ie, referenced) from the actor without
    having to check for a condition first
    (in other words, no mutexes or the like)
  - I know that enforcing those rules prevents the implementation of
    really clever stuff. Keeping things simple for the users is more
    important to me than allowing them to do clever tricks. Black magic
    should be done *within* gras to reach a good performance level.
  - Data receiving can be delegated (and should be)
    The first step here is a "simple" mailbox mechanism, with a fifo of
    messages protected by a semaphore.
    The rest is rather straightforward too.
* Use poll(2) instead of select(2) when available. (first we need to check
  the advantage of doing so ;)
  Another idea we discussed was to simulate this feature with a bunch of
  threads blocked in a read(1) on each incoming socket. The latency is
  reduced by the cost of a syscall, but the more I think about it, the
  less I find the idea suited to our context.
* Timeout the send/recv too (hard to do in RL)
* Multiplex incoming SOAP over HTTP (once datadesc can deal with it)
* The module syntax/API is too complex.
  - Everybody opens a server socket (or almost everybody), and nobody opens
    two of them. This should be done automatically without user intervention.
  - I'd like to offer the possibility to speak to someone, not to speak on
    a socket. Users shouldn't care about such technical details.
  - The idea of host_cookie in NWS seems to match my needs, but we still
    need a proper name ;)
  - This would allow exchanging a "socket" between peers :)
  - The creation needs to identify the peer actor within the process
* When a send fails because the socket was closed on the other side,
  try to reopen it seamlessly. Needs exceptions or another way to
  differentiate between the several kinds of system_error.
* Cache accepted sockets and close the old ones after a while.
  Depends on the previous item; difficult to achieve with firewalls
* Add an XML wire protocol alongside the binary one (for SOAP/HTTP)
* Inter-arch conversions
  - Convert in the same buffer when sizes increase
  - Exchange (on the net) structures in one shot when possible.
  - Port to really exotic platforms (Cray is not IEEE ;)
* datadesc_set_cste: give the value by default when receiving.
  - It's not transferred anymore, which is good for function pointers.
- Cleanup the code (bison?)
- Factorize code in union/struct field adding
- Handle typedefs (needs love from DataDesc/)
- Handle unions with annotate
- Handle long long and long double
- Forbid "char", allow "signed char" and "unsigned char", or user code won't
  be portable to ARM, at least.
- Handle struct/union/enum embedded within another container
  (needs modifications in DataDesc, too)
- Check struct { struct { int a; } b; }
* gras_datadesc_import_nws?
* Other message types than oneway & RPC are possible:
  - forwarding requests, group communication
* Group communication
* Message declarations in a tree manner (such as log channels)?
[GRASPE] (platform expander)
* Tool to visualize/deploy and manage in RL
* Pull method of source diffusion in graspe-slave
[Actors] (parallelism in GRAS)
* An actor is a user process.
  It has a highly sequential control flow from its birth until its death.
  Timers won't stop the current execution to branch elsewhere; they
  will be delayed until the actor is ready to listen. Likewise, no signal
  delivery. The goal is to KISS for users.
* You can fork a new actor, even on remote hosts.
* They are implemented as threads in RL, but this is still a distributed
  memory *model*. If you want to share data with another actor, send it
  using the message interface to make explicit who's responsible for this
  data.
* Data exchange between actors placed within the same UNIX process is
  *implemented* by memcpy, but that's an implementation detail.
[Other, more general issues]
* Watchdog in RL (i.e., while (1) { fork; exec the child; wait in father })
* Allow [homogeneous] dicts to be sent
* Make GRAS thread safe by mutexing what needs to be
* Add a version guessing the appropriate data sizes automatically
* Provide a way to retrieve the host load as in NWS
* Log control, management, dynamic token ring
* A way, using SSH, to ask a remote host to open a socket back to me