* the socket-specific TCP buffer-size setting (buffsize) is useless
* use exceptions everywhere
* Check the gcc version on powerpc. We disabled -floop-optimize on powerpc,
  but versions above 3.4.0 should be ok.
* check whether we have better than jmp_buf to implement exceptions, and
  use it (may need to generate a public .h, as glib does)
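  For comparison, here is a minimal sketch of what jmp_buf-based exceptions
  look like; TRY/CATCH/THROW and the xbt_ex_* names are hypothetical, and a
  real version would need a stack of contexts to allow nesting:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf xbt_ex_ctx;       /* one context only: no nesting here */
    static const char *xbt_ex_msg;

    #define TRY      if (!setjmp(xbt_ex_ctx))
    #define CATCH    else
    #define THROW(m) do { xbt_ex_msg = (m); longjmp(xbt_ex_ctx, 1); } while (0)

    int main(void) {
      TRY {
        THROW("socket closed by peer");
      } CATCH {
        fprintf(stderr, "caught: %s\n", xbt_ex_msg);
      }
      return 0;
    }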
  (errors, logs; dynars, dicts, hooks, pools; config, rrdb)
* Mallocators and/or memory pools so that we can cleanly kill an actor
* Better split casual errors from programming errors.
  The first ones should be reported to the user, the second should kill
  the program (or, better yet, only the msg handler)
* Allow the use of an error handler depending on the current module (i.e.,
  the same philosophy as log4c, applied to GSL's error functions)
* Hijack messages from a given category to another for a while (to mask
  initializations, and more)
* Allow each actor to have its own settings
* an init/exit mechanism for logging appenders
* Several appenders; fix the settings stuff so that the appender can be changed
* more logging appenders (take those from Ralf in l2)
* speed up the cursors, for example by using the contexts when available
* better formalisation of what modules are (amok deeply needs it):
  configuration + init() + exit() + dependencies
* allow loading them at runtime
  (check how Erlang upgrades modules without downtime)
* we may need a round-robin database module, and a statistical one
* a hook module *may* help clean up some parts. Not sure yet.
* Some of the data container modules seem to overlap. Kill some of them?
* add the token ring as an official example
* implement the P2P protocols that macedon does. They constitute great
  examples.
* Spawn threads handling the communication
  - Data sending cannot be delegated if we want to be kept informed
    (*easily*) of errors here.
  - Actor execution flow shouldn't be interrupted.
  - It should be possible to access (both in read and write mode)
    any data available (i.e., referenced) from the actor without
    having to check for a condition first
    (in other words, no mutexes or the like).
  - I know that enforcing those rules prevents the implementation of
    really clever stuff. Keeping things simple for the users is more
    important to me than allowing them to do clever tricks. Black magic
    should be done *within* gras to reach a good performance level.
  - Data receiving can be delegated (and should be).
    The first step here is a "simple" mailbox mechanism, with a FIFO of
    messages protected by a semaphore (a sketch follows this item).
    The rest is rather straightforward too.
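  A minimal sketch of such a semaphore-protected mailbox, assuming POSIX
  threads and semaphores; all names (mailbox_t, mailbox_put, mailbox_get)
  are hypothetical:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdlib.h>

    typedef struct msg_node {
      void *payload;
      struct msg_node *next;
    } msg_node_t;

    typedef struct {
      msg_node_t *head, *tail;  /* FIFO: pop at head, push at tail */
      pthread_mutex_t lock;     /* protects head/tail */
      sem_t avail;              /* counts queued messages */
    } mailbox_t;

    void mailbox_init(mailbox_t *mb) {
      mb->head = mb->tail = NULL;
      pthread_mutex_init(&mb->lock, NULL);
      sem_init(&mb->avail, 0, 0);
    }

    /* called by the thread handling the communication */
    void mailbox_put(mailbox_t *mb, void *payload) {
      msg_node_t *n = malloc(sizeof *n);
      n->payload = payload;
      n->next = NULL;
      pthread_mutex_lock(&mb->lock);
      if (mb->tail) mb->tail->next = n; else mb->head = n;
      mb->tail = n;
      pthread_mutex_unlock(&mb->lock);
      sem_post(&mb->avail);          /* wake the actor if it is waiting */
    }

    /* called by the actor; blocks until a message arrives */
    void *mailbox_get(mailbox_t *mb) {
      sem_wait(&mb->avail);
      pthread_mutex_lock(&mb->lock);
      msg_node_t *n = mb->head;
      mb->head = n->next;
      if (!mb->head) mb->tail = NULL;
      pthread_mutex_unlock(&mb->lock);
      void *payload = n->payload;
      free(n);
      return payload;
    }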
* use poll(2) instead of select(2) when available (first we need to check
  the advantage of doing so ;). A sketch of the poll(2) variant follows below.
  Another idea we spoke about was to simulate this feature with a bunch of
  threads blocked in a read(1) on each incoming socket. The latency is
  reduced by the cost of a syscall, but the more I think about it, the
  less I find the idea suited to our context.
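  For reference, here is what the poll(2)-based wait mentioned above could
  look like; wait_for_incoming and the filling of fds[] from the open gras
  sockets are illustrative, not an existing API:

    #include <poll.h>

    /* Return the index of the first readable socket, or -1 on
       timeout/error; fds[] would be built from the open gras sockets. */
    int wait_for_incoming(struct pollfd *fds, int nfds, int timeout_ms) {
      int ready = poll(fds, nfds, timeout_ms);  /* no FD_SETSIZE limit */
      if (ready <= 0)
        return -1;
      for (int i = 0; i < nfds; i++)
        if (fds[i].revents & POLLIN)
          return i;
      return -1;
    }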
* timeout the send/recv too (hard to do in RL)
* multiplex on incoming SOAP over HTTP (once datadesc can deal with it)
* The module syntax/API is too complex.
  - Everybody opens a server socket (or almost everybody), and nobody opens
    two of them. This should be done automatically, without user intervention.
  - I'd like to offer the possibility to speak to someone, not to speak on
    a socket. Users shouldn't care about such technical details.
  - the idea of host_cookie in NWS seems to match my needs, but we still
    need a proper name ;)
  - this would allow exchanging a "socket" between peers :)
  - the creation needs to identify the peer actor within the process
* when a send fails because the socket was closed on the other side,
  try to reopen it seamlessly. Needs exceptions or another way to
  differentiate between the several kinds of system_error.
* cache accepted sockets and close the old ones after a while.
  Depends on the previous item; difficult to achieve with firewalls
* Implement gras_datadesc_cpy to speed things up in the simulator
  (and to allow several "actors" within the same unix process).
  For now, we closely mimic the RL even in SG. It was easier to do
  since the datadesc layer is unchanged, but it is not needed and
  slows things down.
  gras_datadesc_cpy needs to provide the size of the corresponding
  messages, so that we can report it into the simulator (a possible
  prototype is sketched below).
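  Nothing here exists yet, but the interface could look like the following
  (hypothetical signature; the placeholder typedef stands for the existing
  datadesc type handle):

    /* Hypothetical prototype for gras_datadesc_cpy. The return value is
       the on-wire size of the equivalent message, so that SG can bill the
       simulated network accordingly; -1 on error. */
    typedef struct s_gras_datadesc_type *gras_datadesc_type_t;  /* placeholder */
    long gras_datadesc_cpy(gras_datadesc_type_t type, const void *src, void **dst);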
* Add an XML wire protocol alongside the binary one (for SOAP/HTTP)
* Inter-arch conversions
  - Convert in the same buffer when the size increases
  - Exchange (on the net) structures in one shot when possible.
  - Port to really exotic platforms (Cray is not IEEE ;)
* datadesc_set_cste: give the value by default when receiving.
  - It's not transferred anymore, which is good for function pointers.
  - Clean up the code (bison?)
  - Factorize the code adding fields to unions/structs
  - Handle typedefs (needs love from DataDesc/)
  - Handle unions with annotate
  - Handle long long and long double
  - Forbid "char", allow "signed char" and "unsigned char", or user code
    won't be portable to ARM, at least (see the example below)
  - Handle struct/union/enum embedded within another container
    (needs modifications in DataDesc, too)
  - Check struct { struct { int a; } b; }
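  The portability issue with plain "char" comes from its
  implementation-defined signedness: it is signed on x86 but unsigned in
  the default ARM ABI, so the same code behaves differently:

    #include <stdio.h>
    int main(void) {
      char c = (char)0xFF;
      printf("%d\n", (int)c);  /* -1 where char is signed (x86), 255 on ARM */
      return 0;
    }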
* gras_datadesc_import_nws?
* A proper RPC mechanism
  - gras_rpctype_declare_v(name, ver, payload_request, payload_answer)
    (or gras_msgtype_declare_rpc_v).
  - Attaching a cb works the same way.
  - gras_msg_rpc(peer, &request, &answer)
  - On the wire, a byte indicates the message type (sketched below):
    - 0: one-way message (what we have for now)
    - 1: method call (answer expected; sessionID attached)
    - 2: successful return (usual datatype attached, with sessionID)
    - 3: error return (payload = exception)
  - other message types are possible (forwarding request, group
    communication, ...)
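  The wire tag above could translate into something like this (a sketch
  only; the enum and header names are hypothetical):

    /* Possible on-wire tag for the RPC mechanism sketched above. */
    typedef enum {
      GRAS_MSG_ONEWAY = 0,  /* one-way message (what we have for now) */
      GRAS_MSG_CALL   = 1,  /* method call: answer expected, sessionID attached */
      GRAS_MSG_ANSWER = 2,  /* successful return (payload + sessionID) */
      GRAS_MSG_ERROR  = 3   /* error return (payload = exception) */
    } gras_msg_kind_t;

    struct gras_msg_header {
      unsigned char kind;        /* one byte on the wire: a gras_msg_kind_t */
      unsigned long session_id;  /* present when kind != GRAS_MSG_ONEWAY */
    };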
* Group communication
* Message declarations in a tree manner (such as log channels)?
[GRASPE] (platform expander)
* Tool to visualize/deploy and manage in RL
* pull method of source diffusion in graspe-slave
[Actors] (parallelism in GRAS)
* An actor is a user process.
  It has a highly sequential control flow from its birth until its death.
  The timers won't stop the current execution to branch elsewhere; they
  will be delayed until the actor is ready to listen. Likewise, no signal
  delivery. The goal is to KISS for users.
* You can fork a new actor, even on remote hosts.
* They are implemented as threads in RL, but this is still a distributed
  memory *model*. If you want to share data with another actor, send it
  using the message interface to make explicit who is responsible for this data.
* data exchange between actors placed within the same UNIX process is
  *implemented* by memcpy, but that's an implementation detail.
[Other, more general issues]
* watchdog in RL (i.e., while (1) { fork; exec the child; wait in the
  father }); see the sketch after this list
* Allow [homogeneous] dicts to be sent
* Make GRAS thread-safe by mutexing what needs to be
* Use an xbt_set for gras_procdata_t->libdata instead of a dict
  so that the search can be linear
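  A minimal sketch of that watchdog, assuming POSIX (the watchdog() name
  is illustrative):

    /* Fork, exec the child, wait in the father, restart on death. */
    #include <sys/wait.h>
    #include <unistd.h>

    void watchdog(char *const argv[]) {
      while (1) {
        pid_t pid = fork();
        if (pid == 0) {            /* child: become the monitored process */
          execvp(argv[0], argv);
          _exit(127);              /* exec failed */
        }
        if (pid < 0)               /* fork failed: give up */
          return;
        int status;
        waitpid(pid, &status, 0);  /* father: block until the child dies */
        sleep(1);                  /* avoid a tight respawn loop */
      }
    }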
* finish this module (still missing the saturate part)
* add a version guessing the appropriate datasizes automatically
* provide a way to retrieve the host load, as in NWS
* log control, management, dynamic token ring
* a way, using SSH, to ask a remote host to open a socket back to me