Sounds like you are talking about a memory manager. I just switched over from a descriptor-based approach to a bitmap-based approach.
In the descriptor-based approach, a process could request memory of any size in 4 KiB blocks, and the memory manager would create a 24-byte descriptor to track the allocation (a bit like an f-node on a file system). However, for a tight, efficient memory manager this turned out to be far too expensive in terms of code maintenance (complexity, in assembler, is very costly).
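To give a feel for the old scheme, the 24-byte descriptor might be laid out something like this. This is a C sketch for readability (my real code is assembler), and the particular fields are illustrative guesses; only the 24-byte size is from the actual design:

```c
#include <stdint.h>

/* Hypothetical 24-byte allocation descriptor, one per request. */
struct mem_desc {
    uint64_t base;        /* physical address of the first 4 KiB block */
    uint64_t block_count; /* length of the allocation, in 4 KiB blocks */
    uint32_t owner_pid;   /* process that made the request */
    uint32_t next;        /* index of the next descriptor in the chain */
};                        /* 8 + 8 + 4 + 4 = 24 bytes */
```

Per-descriptor bookkeeping like this is flexible, but every allocate and free path has to walk and maintain the chain, which is where the assembler complexity piled up.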
Instead, I switched to larger blocks (16 KiB) that are tracked in two bitmaps. The first bitmap tracks usage, and the second tracks which block is the first in a run of contiguous blocks, which makes deallocation easier and guards against blocks being freed out of sequence.
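Here is roughly how the two bitmaps cooperate, sketched in C (again, the real code is assembler; the names and the first-fit scan are just for illustration):

```c
#include <stdint.h>

#define BLOCK_SIZE (16u * 1024u)            /* 16 KiB blocks */
#define MEM_BYTES  (512u * 1024u * 1024u)   /* about half a gig */
#define NUM_BLOCKS (MEM_BYTES / BLOCK_SIZE) /* 32768 blocks */

static uint8_t used[NUM_BLOCKS / 8];  /* bit set = block in use */
static uint8_t first[NUM_BLOCKS / 8]; /* bit set = head of a run */

static int  test_bit(const uint8_t *bm, uint32_t i) { return (bm[i / 8] >> (i % 8)) & 1; }
static void set_bit (uint8_t *bm, uint32_t i) { bm[i / 8] |=  (uint8_t)(1u << (i % 8)); }
static void clr_bit (uint8_t *bm, uint32_t i) { bm[i / 8] &= (uint8_t)~(1u << (i % 8)); }

/* Allocate 'count' contiguous blocks (first-fit); returns the index
   of the first block, or -1 if no free run is long enough. */
int32_t alloc_blocks(uint32_t count)
{
    uint32_t run = 0;
    if (count == 0)
        return -1;
    for (uint32_t i = 0; i < NUM_BLOCKS; i++) {
        run = test_bit(used, i) ? 0 : run + 1;
        if (run == count) {
            uint32_t start = i - count + 1;
            for (uint32_t j = start; j <= i; j++)
                set_bit(used, j);
            set_bit(first, start);       /* mark the head of the run */
            return (int32_t)start;
        }
    }
    return -1;
}

/* Free the run starting at 'start'.  The first-bitmap both rejects a
   free that doesn't point at the head of a run and tells us where the
   run ends (at the next head, or the next already-free block). */
int free_blocks(uint32_t start)
{
    if (start >= NUM_BLOCKS || !test_bit(first, start))
        return -1;                       /* not the start of a run */
    clr_bit(first, start);
    clr_bit(used, start);
    for (uint32_t j = start + 1;
         j < NUM_BLOCKS && test_bit(used, j) && !test_bit(first, j);
         j++)
        clr_bit(used, j);
    return 0;
}
```

The guard in free_blocks is the whole point of the second bitmap: a stray free aimed at the middle of a run gets rejected instead of silently chewing into a neighbouring allocation.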
The upshot is that I need just 2 BITS of RAM to allocate and deallocate each 16 KiB block. If my calculator is not broken, one byte of bitmap manages 64 KiB, so 32 bytes of bitmap manage 2 MiB of RAM. My OS is being designed to run in about half a gig, which works out to 8 KiB of bitmap total, so that's a good trade-off for me.
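And to spell the arithmetic out (restating the constants so the snippet stands alone):

```c
/* Bitmap overhead, double-checked at compile time (C11). */
#define BLOCK_SIZE     (16u * 1024u)           /* 16 KiB blocks */
#define MEM_BYTES      (512u * 1024u * 1024u)  /* about half a gig */
#define NUM_BLOCKS     (MEM_BYTES / BLOCK_SIZE)
#define BITS_PER_BLOCK 2u                      /* one bit per bitmap */
#define BITMAP_BYTES   (NUM_BLOCKS * BITS_PER_BLOCK / 8u)

_Static_assert(BITMAP_BYTES == 8u * 1024u,
               "512 MiB / 16 KiB = 32768 blocks * 2 bits = 8 KiB of bitmap");
```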
Of course, I still have a problem to solve - what happens when a process dies without deallocating its memory? I may still need a descriptor for that, but I will worry about that down the road. For now, I can move forward to a TCP stack that can get and release the needed buffers. I'll cross that other bridge when the chickens have hatched. Or something.