Arnaud Giersch [Mon, 13 Feb 2012 09:37:32 +0000 (10:37 +0100)]
Fix another race in log initializations.
Since setting the threshold is not the last thing done when a category
is initialized, there is a possibility that a message is logged with the
wrong parameters (e.g. format or appender).
Define a new field "initialized" which is set to 1 only once the
category is fully initialized.
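The pattern this describes can be sketched as follows. This is a minimal illustration, not SimGrid's actual xbt_log internals: the field names are hypothetical, and on weakly ordered CPUs a write barrier would also be needed before the final store to "initialized".

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical category record; "initialized" is written LAST, so a
 * reader that tests it first can never observe a half-built category. */
struct log_category {
  int threshold;
  const char *fmt;   /* format/appender parameters set up during init */
  int initialized;   /* 1 only once everything above is valid */
};

static void category_init(struct log_category *cat)
{
  cat->threshold = 3;     /* some priority, e.g. "info" */
  cat->fmt = "%m\n";      /* all parameters are filled in first... */
  cat->initialized = 1;   /* ...and the flag is set as the very last step */
}
```

A reader that checks the flag before touching any other field therefore sees either nothing or a fully set-up category, never the intermediate states.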
Arnaud Giersch [Wed, 8 Feb 2012 16:03:36 +0000 (17:03 +0100)]
Fix a race condition in _XBT_LOG_ISENABLEDV().
Without this change, catv.threshold can be initialized by another
thread between the test for the priority being high enough and the
test for catv.threshold being initialized, leading to a false positive
answer.
Hypothesis
==========
Initially, catv.threshold == xbt_log_priority_uninitialized == -1.
After initialization, priority < catv.threshold.
Two threads run _XBT_LOG_ISENABLEDV() for the same category with the
same priority.
Thread A                                  Thread B
========                                  ========
priority >= catv.threshold
  is TRUE
                                          priority >= catv.threshold
                                            is TRUE
catv.threshold != x.l.p._uninitialized
  is FALSE
call xbt_log_cat_init(...)
returns FALSE
                                          catv.threshold != x.l.p._uninitialized
                                            is TRUE
=> _XBT_LOG_ISENABLEDV(...)               => _XBT_LOG_ISENABLEDV(...)
     is FALSE                                  is TRUE
Martin Quinson [Wed, 8 Feb 2012 10:47:29 +0000 (11:47 +0100)]
Set up the framework allowing backtraces to be added to the malloc metadata
- implement a malloc-clean backtrace() function
- make some room to store the backtraces. Only for big blocks for now,
  as the memory consumption seems to be very high when doing so for
  fragments. Possible solutions include:
  - increasing the minimal fragment size to reduce the number of
    possible fragments per block. It will waste some space on very
    small fragments, but it will save metadata that is paid for EVERY
    block, including full blocks, through the union in the metadata
  - reducing the size of the saved backtraces. For now, we save up to
    10 calls; 5 or even 3 levels may be enough if space is scarce.
- use that framework to save the backtraces in one malloc execution
  path. The other malloc execution paths, as well as the realloc paths,
  should now be changed to store the backtrace too.
- implement a mmalloc_backtrace_display() function that displays the
  backtrace at which the block was malloc()ed. This is a bit crude for
  now, as we reuse the internals of exceptions that were not really
  designed for that, but it works.
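For illustration, a minimal backtrace capture and display in the spirit of this commit, using glibc's execinfo.h. This is a sketch, not SimGrid's mmalloc code; note that glibc's backtrace() may itself allocate on its first call (it lazily loads libgcc), so a truly malloc-clean variant needs extra care.

```c
#include <execinfo.h>   /* glibc-specific: backtrace(), backtrace_symbols_fd() */

#define BT_DEPTH 10     /* the commit saves up to 10 calls per block */

/* Capture the current call chain into `frames`; returns how many were saved. */
static int capture_backtrace(void *frames[BT_DEPTH])
{
  return backtrace(frames, BT_DEPTH);
}

/* Roughly what a display function would do: symbolize and write straight
 * to the file descriptor, so no malloc is needed at display time. */
static void display_backtrace(void *frames[], int count)
{
  backtrace_symbols_fd(frames, count, 2 /* stderr */);
}
```

Compiling with -rdynamic makes the symbolized output show function names rather than raw addresses.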
Martin Quinson [Mon, 6 Feb 2012 10:36:18 +0000 (11:36 +0100)]
Fixups in mrealloc
- make sure that it won't try to get a block smaller than what we are
  willing to give (this was a bug in the previous implementation!)
- update the requested-size markers in the metadata
Martin Quinson [Fri, 3 Feb 2012 14:17:14 +0000 (15:17 +0100)]
Further simplify mmalloc, and improve its introspection abilities
- Ensure that the mmallocation code will never return NULL (but die
  verbosely instead), and simplify the calling code accordingly.
- Stop using THROWF in there, because these functions probably need
  malloc to work, and that is exactly what is broken when we want to
  issue a message. Use printf/abort instead.
- Introduce a SMALLEST_POSSIBLE_MALLOC. It already existed (it was
  defined to sizeof(struct list) to ensure that free fragments can be
  enlisted), but I need it to declare the block metadata.
- Add a frag_size field within the block info structure. It may not be
  perfectly kept up to date yet (in particular, by realloc).
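The relationship described here can be sketched like this (illustrative definitions, assuming the free-fragment list is a doubly linked struct list as the message says; `clamp_request` is a hypothetical helper showing how the minimum would be enforced, e.g. by mrealloc):

```c
#include <stddef.h>

/* A free fragment must be able to hold a list cell so it can be
 * enlisted in the free list; hence the smallest size mmalloc will
 * ever hand out. */
struct list { struct list *next, *prev; };

#define SMALLEST_POSSIBLE_MALLOC (sizeof(struct list))

/* Clamping a request to this minimum ensures the allocator never tries
 * to get a block smaller than what it is willing to give back. */
static size_t clamp_request(size_t size)
{
  return size < SMALLEST_POSSIBLE_MALLOC ? SMALLEST_POSSIBLE_MALLOC : size;
}
```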
Arnaud Giersch [Fri, 3 Feb 2012 10:27:49 +0000 (11:27 +0100)]
Correctly initialize simdata->comm when marked as isused.
I hope that this is now the right fix for the segfault I got when running
chord --cfg=contexts/stack_size:5 --log=msg_chord.thres:critical --cfg=network/model:Constant ./cluster_with_100000_hosts.xml ./chord100000.xml
(see the backtrace in the message for commit 1380f1a).
Martin Quinson [Thu, 2 Feb 2012 20:44:21 +0000 (21:44 +0100)]
TODO--: the 'type' of each mmalloc block is guaranteed to be up to date at every point
Tomorrow, I'll add the fragment metadata, and the backtraces.
For that, I'll waste a lot of space by adding a static table to the
malloc_info structure, where the size of that table is the maximal
number of fragments per block.
Something like BLOCKSIZE/sizeof(struct list), since mmalloc refuses to
allocate anything smaller (to ensure that we can enlist free fragments).
This implementation of malloc is definitely not something that you want
to use unless you are forced to in order to get model checking working.
But it will provide all the information that MC needs.
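The sizing argument can be made concrete as follows. All names here are hypothetical, and the 4 KiB BLOCKSIZE is only an assumption to make the arithmetic tangible:

```c
#include <stddef.h>

#define BLOCKSIZE 4096   /* hypothetical block size, for illustration */
#define BT_DEPTH  10     /* backtrace depth saved per allocation */

struct list { struct list *next, *prev; };

/* mmalloc never hands out a fragment smaller than sizeof(struct list)
 * (free fragments must be enlistable), so this bounds the number of
 * fragments a block can be split into. */
#define MAX_FRAGMENTS (BLOCKSIZE / sizeof(struct list))

/* Shape of the "wasted space": one static backtrace slot per possible
 * fragment, paid in the metadata of every block. */
struct malloc_info_sketch {
  void *backtraces[MAX_FRAGMENTS][BT_DEPTH];
};
```

With these numbers, the table alone costs MAX_FRAGMENTS * BT_DEPTH pointers per block, which is why the message calls it a lot of space.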
Martin Quinson [Thu, 2 Feb 2012 16:45:21 +0000 (17:45 +0100)]
Simplify the malloc_info structure containing the metadata of a given block in mmalloc
* Less structure-in-union-in-structure-in-union inception madness.
  We are now using a less portable anonymous union, but gcc has handled
  that for maybe 15 years, so that should be cool.
* We can now determine, just by looking at it, whether a block is busy
  or free, without having to search which list the block is in.
  That was useless to the other users of mmalloc, but it is very
  interesting when comparing heaps.
  Moreover, it comes for free: it has exactly the same memory cost when
  the block is busy, and we have plenty of room in the block to record
  that it is free when it actually is.
Please note that this information is not yet updated when the block is
freed. (Splitting the commit just in case someone tries to read it
later: this one is almost automatic refactoring.)
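The busy/free discrimination can be sketched with such an anonymous union. These are illustrative field names, not the real malloc_info layout; the anonymous union is the "less portable" construct mentioned above, standardized only in C11 though long supported by gcc:

```c
#include <stddef.h>

enum block_type { BLOCK_FREE = 0, BLOCK_BUSY_LARGE, BLOCK_BUSY_FRAG };

/* One tag plus an anonymous union: a heap inspector can tell busy from
 * free by reading `type` alone, without searching the free list, and
 * the free variant costs no extra memory compared to the busy one. */
struct block_info {
  enum block_type type;
  union {               /* anonymous: members are accessed directly */
    struct {
      size_t nfree;     /* length of this free run */
    } free_block;
    struct {
      size_t busy_size; /* size requested by the user, kept for heap diffs */
    } busy_block;
  };
};
```

The tag is what makes the heap self-describing for the model checker: comparing two heaps can walk the blocks and interpret each one without consulting any external list.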