>> [ snipped ]
>>
>>Hope that makes sense,
>>Jeff
>
>It does, but I have no (practical) experience with
>kernel memory allocators so my question is - should I
>use an allocator that splits a 4kb page into objects
>of the same size, or should I rather try to write
>one that can allocate "random" (eg. the "power of two"
>buddy algorithm) chunks of memory ? Which one is better,
>in your opinion ?
Well, in my opinion, no matter what you do, every memory
allocation should be dword aligned. In other words,
if you make sure every memory allocation is a multiple
of 4 bytes, you're all good there.
Reason: misaligned 32-bit accesses do nasty things to the bus;
if a dword straddles an alignment boundary, the CPU has to
issue extra bus cycles to fetch it.
As for the other issue: myself, I'd definitely
try to divide up the 4k page as much as possible,
so you don't waste memory (but don't try to
eliminate _all_ memory waste... I think that
would end up too complicated and kludgy
for what it's worth).
So...
- If you use paging, you'll have to work around 4k pages.
- I'd recommend dividing them up into power-of-two chunk
  sizes (4, 8, 16, ...) whenever possible. In other words,
  if the program requests 12 bytes... allocate 16 (the
  smallest chunk size that fits).
That's my opinion anyway,
Jeff