OSDev.org
https://forum.osdev.org/

Dynamic Memory Allocation investigation for lists
https://forum.osdev.org/viewtopic.php?f=15&t=31365

Author:  prasoc [ Tue Feb 14, 2017 5:57 am ]
Post subject:  Dynamic Memory Allocation investigation for lists

I just found this interesting article on how to reduce wasted memory when growing lists that don't have a fixed size.

https://crntaylor.wordpress.com/2011/07 ... den-ratio/

Turns out that the trade-off between growing fast enough and keeping previously freed memory reusable gives a multiplicative growth factor of around the golden ratio: grow by less than phi (~1.618) and the sum of the blocks freed by earlier reallocations eventually exceeds the next request, so the allocator can hand that memory back.

Pretty neat, I'm going to use this in my C++ std::vector implementation!
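
For example, here's a minimal sketch of what that growth policy looks like (IntVec is a made-up toy on top of realloc, not real std::vector internals, which also have to handle exceptions and non-trivially-copyable types):

Code:
#include <cstddef>
#include <cstdlib>

// Toy growable array of int, just to show the growth factor.
struct IntVec {
    int*        data = nullptr;
    std::size_t size = 0;
    std::size_t cap  = 0;

    void push_back(int v) {
        if (size == cap) {
            // Grow by 3/2, which is below the golden ratio, so the sum
            // of previously freed blocks eventually exceeds the next
            // request and the allocator can reuse that memory.
            std::size_t new_cap = cap ? cap + cap / 2 : 4;
            int* p = static_cast<int*>(
                std::realloc(data, new_cap * sizeof(int)));
            if (!p) std::abort();  // a real container would throw bad_alloc
            data = p;
            cap  = new_cap;
        }
        data[size++] = v;
    }

    ~IntVec() { std::free(data); }
};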

Author:  dchapiesky [ Tue Feb 14, 2017 6:53 pm ]
Post subject:  Re: Dynamic Memory Allocation investigation for lists

An interesting post - thanks for it.

If you are writing your own STL, might I suggest studying the design of Electronic Arts' STL?

https://rawgit.com/electronicarts/EASTL ... 20FAQ.html

https://github.com/electronicarts/EASTL

They took the position of adding intrusive lists, where the amount of memory allocated is significantly reduced. They also made memory management and custom allocators much easier to implement, allowing, for example, a list to be bound to a pool allocator - thus moving the decision of how to deal with non-fixed-size allocations into the allocator rather than the list itself.
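
To make that concrete, here is a minimal sketch of the general technique - not EASTL's actual API - where the link node lives inside the element, so inserting into a list allocates nothing:

Code:
// Intrusive doubly linked list node; a sentinel node is the whole list.
struct ListNode {
    ListNode* prev;
    ListNode* next;
    ListNode() : prev(this), next(this) {}

    void insert_before(ListNode& pos) {  // link *this in front of pos
        prev = pos.prev;
        next = &pos;
        pos.prev->next = this;
        pos.prev = this;
    }
    void unlink() {                      // remove *this from its list
        prev->next = next;
        next->prev = prev;
        prev = next = this;
    }
};

struct Task {
    int priority = 0;
    ListNode hook;  // the hook is embedded in the element itself
};

int main() {
    ListNode run_queue;               // empty list: just the sentinel
    Task t;
    t.priority = 5;
    t.hook.insert_before(run_queue);  // links t in, allocates nothing
    t.hook.unlink();                  // removes t, frees nothing
    // Recovering the Task* from its hook is done with an offsetof-style
    // container_of macro, omitted here.
}

Compare that with std::list, which heap-allocates one node per element.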

One extremely useful aspect of better allocators for STL containers is that you can emit events tracking allocations by container type, data type, etc...
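
As a rough illustration - this is just a standard-conforming allocator sketch, not the tool EA built - something like this already makes every container allocation observable:

Code:
#include <cstdio>
#include <cstdlib>
#include <list>
#include <typeinfo>

// Minimal tracking allocator: a real tool would emit structured events
// instead of printf lines.
template <typename T>
struct TrackingAlloc {
    using value_type = T;

    TrackingAlloc() = default;
    template <typename U> TrackingAlloc(const TrackingAlloc<U>&) {}

    T* allocate(std::size_t n) {
        std::printf("alloc   %zu x %zu bytes (%s)\n",
                    n, sizeof(T), typeid(T).name());
        return static_cast<T*>(std::malloc(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        std::printf("dealloc %zu x %zu bytes\n", n, sizeof(T));
        std::free(p);
    }
};

template <typename T, typename U>
bool operator==(const TrackingAlloc<T>&, const TrackingAlloc<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const TrackingAlloc<T>&, const TrackingAlloc<U>&) { return false; }

int main() {
    std::list<int, TrackingAlloc<int>> l;
    l.push_back(1);  // prints one "alloc" line for the list node
}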

The tool EA developed to work with EASTL is described here...

https://www.youtube.com/watch?v=8KIvWJUYbDA


EASTL is well worth studying and I highly encourage it.

Good luck with your project

Cheers

Author:  OSwhatever [ Wed Feb 15, 2017 8:08 am ]
Post subject:  Re: Dynamic Memory Allocation investigation for lists

Interesting article. As dchapiesky mentions, intrusive algorithms are the way to go in operating system development (the kernel in particular). I actually can't think of any data structure in my operating system that uses a vector which is ever resized. They are usually allocated once, so a plain C array is enough. All other lists and structures use intrusive algorithms, some with dedicated allocators.

For user space applications, resizable vectors are used much more often. While the article derives the golden ratio, there are some additional dimensions to this. If the size of the objects goes up, is the golden ratio still valid? If each object is 200kB, is growing by 1.5x each time still a good idea? There might be a vector where objects aren't added very often, and then you waste a lot of memory. Then again, why would you create a vector of 200kB elements instead of a vector of pointers to your 200kB objects? Then the question comes up: where is the threshold at which a vector of pointers becomes the better approach?
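
For illustration, the two layouts being compared (Big and the code are made up; the 200kB figure is the one from the paragraph above):

Code:
#include <memory>
#include <vector>

struct Big { char payload[200 * 1024]; };  // one 200kB element

int main() {
    // By value: a reallocation moves every 200kB element, and the unused
    // capacity wastes (capacity - size) * 200kB.
    std::vector<Big> by_value;

    // By pointer: a reallocation moves only pointer-sized slots, and the
    // unused capacity wastes (capacity - size) * sizeof(void*), at the
    // cost of one extra heap allocation and an indirection per element.
    std::vector<std::unique_ptr<Big>> by_pointer;
    by_pointer.push_back(std::make_unique<Big>());
}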
