ngx wrote:
1. Receive the start address of the first frame (start of the page frame allocation memory space, 0x0 on x86)
2. Receive the end address of the last frame (end of the PFA allocation space)
[...]
3. Mark all of the frames in which unfree zones are located as reserved (using the memory map and driver requests)
I don't see why the memory map only enters the picture at the third step. The memory map lists every region where there may be physical RAM the firmware isn't currently using, and that RAM is not guaranteed to be contiguous. For maximum portability (even to other architectures; I see nothing arch-specific here), assume as little as possible and integrate the entire memory map into your view of memory.
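One way to make the memory map the primary source of truth is to start from "everything reserved" and only free what the map explicitly declares usable; holes and unlisted regions then stay reserved for free. A minimal sketch of that idea, with a toy fixed-size bitmap and an invented `mmap_entry` layout (real bootloader formats such as Multiboot or UEFI differ):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_FRAME_SIZE 4096ULL
#define MAX_FRAMES      1024   /* toy limit for the sketch */

/* Hypothetical memory-map entry; field names are illustrative. */
struct mmap_entry {
    uint64_t base;    /* physical start address */
    uint64_t length;  /* length in bytes        */
    int      usable;  /* nonzero if free RAM    */
};

/* One bit per frame; 1 = reserved. */
static uint8_t bitmap[MAX_FRAMES / 8];

static void mark_free(uint64_t f)   { bitmap[f / 8] &= ~(1u << (f % 8)); }
static int  is_reserved(uint64_t f) { return bitmap[f / 8] & (1u << (f % 8)); }

/* Start from "everything reserved", then carve out only what the
 * memory map declares usable. Holes stay reserved automatically. */
void pmm_apply_memory_map(const struct mmap_entry *map, size_t count)
{
    for (size_t i = 0; i < MAX_FRAMES / 8; i++)
        bitmap[i] = 0xFF;
    for (size_t i = 0; i < count; i++) {
        if (!map[i].usable)
            continue;
        /* Round the start up and the end down so a partially
         * usable frame is never handed out. */
        uint64_t first = (map[i].base + PAGE_FRAME_SIZE - 1) / PAGE_FRAME_SIZE;
        uint64_t last  = (map[i].base + map[i].length) / PAGE_FRAME_SIZE;
        for (uint64_t f = first; f < last && f < MAX_FRAMES; f++)
            mark_free(f);
    }
}
```

The same pass can later clear or keep bits for the kernel image and driver-requested regions; nothing here assumes the usable regions are contiguous.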
ngx wrote:
4. Mark frames where kernel is located as reserved using variables in the linker script(start and end)
Depends on how leaky you want your abstractions to be. My main (stage 2) kernel is only used with virtual memory, and is therefore independent of its physical address. The stage 1 kernels (UEFI and GRUB adapters, respectively) load the stage 2 kernel however they want and add the physical addresses used for that into the "reserved" range. This way, I don't even depend on all of it being contiguous. Though it seems unnecessary in this age of multi-gigabyte RAM sizes, I am quite prepared to have my kernel hacked into page-sized blocks all over memory. Virtual memory means I don't have to care. Of course, the stage 1 kernels are unlikely to do that at this time.
The page tables are special. Those are added to the reserved ranges in stage 2, in order to make it easier on the stage 1 kernels. Stage 2 can discover all addresses those tables are using, after all.
But then, I am building a 64-bit kernel and can just map all physical memory to one place and map a logical view of the kernel to another. In a 32-bit kernel you likely don't have that luxury.
ngx wrote:
5. Find an address to store the bitmap
1. Calculate the number of frames needed to store the bitmap; the bitmap should be of size `RAM / PAGE_FRAME_SIZE`
Seems about right. But what is "RAM" here? It ought to be the span of possible addresses, so last address plus one minus first address. Remember there can be large holes in the middle.
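In other words, size the bitmap by span, not by the sum of usable regions. A small sketch of that calculation (names are illustrative):

```c
#include <stdint.h>

#define PAGE_FRAME_SIZE 4096ULL

/* "RAM" here is the addressable span: last address plus one minus
 * first address, holes included. One bit per frame. */
uint64_t bitmap_bytes(uint64_t lowest_addr, uint64_t highest_addr)
{
    uint64_t span   = highest_addr + 1 - lowest_addr;
    uint64_t frames = (span + PAGE_FRAME_SIZE - 1) / PAGE_FRAME_SIZE;
    return (frames + 7) / 8;   /* round up to whole bytes */
}
```

A 256 MB span, for example, needs 65536 bits, i.e. 8 KB of bitmap; a multi-gigabyte hole in the middle still costs its bits unless you use a more granular structure.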
ngx wrote:
2. Search for the first place where the right number of frames is free (they should be one after another)
Those who search have to deal with not finding it. You might want to choose a more granular approach, like a linked list of bitmaps. That way you can have multiple places to store the information, rather than requiring one large block.
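A sketch of what such a chained structure could look like, with each node covering one contiguous run of frames (the struct and field names are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* One node per contiguous run of frames the bitmap describes. */
struct bitmap_node {
    struct bitmap_node *next;
    uint64_t first_frame;   /* frame covered by bit 0 of this node */
    uint64_t frame_count;   /* number of bits in this node         */
    uint8_t *bits;          /* storage for frame_count bits        */
};

/* Walk the chain to the node covering a given frame, or NULL if the
 * frame falls into a hole between runs. */
struct bitmap_node *node_for_frame(struct bitmap_node *head, uint64_t frame)
{
    for (struct bitmap_node *n = head; n != NULL; n = n->next)
        if (frame >= n->first_frame && frame < n->first_frame + n->frame_count)
            return n;
    return NULL;
}
```

Besides tolerating the case where no single large free block exists, this also stops holes in the address space from costing bitmap bits at all.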
ngx wrote:
4. Zero out all of the frames where the bit/byte map is located
And then set all the bytes representing reserved memory areas. Wouldn't want to overwrite the kernel now, would we?
ngx wrote:
4. Return the start address of the page frame
Virtual or physical? If physical, how to map it into address space?
ngx wrote:
3. And also there would be functions for allocating pages that are reserved, freeing pages that are reserved, function to allocate RAM for PMM use
Unless you mean something else, you are describing a PMM (physical memory manager) here. What you will also need is a function to map fixed addresses, reserved or not, into the address space; drivers that want to talk to MMIO need that.
ngx wrote:
To allocate multiple frames one after another at a time, I thought of just putting a special bit flag in the byte-map entry for the page frame that would indicate that the frame is part of several frames in a row (so they are counted as one span), so pfree would free all of them
Bad idea. For one, there is precious little need to ever allocate more than one contiguous page of physical RAM. You can always use virtual memory to make it look contiguous in virtual space, and having bigger blocks just gives you fragmentation problems (if someone wants to allocate 4MB, and all you have left is 3MB here and 2MB there, you have to decline the request. But if they make the same request in pages, you can fulfill it).
Plus, what if multiple multi-page allocations happen to be adjacent at pfree() time? Then pfree() is going to free both allocations after being called on just the first one, right? You could conceivably try to limit this problem by using multiple bits to form a sort of allocation ID, so that only pages with the same ID become freed. But you would need to ensure that no two adjacent spots end up with the same ID if you did that. That is possible, as long as you don't allow resizing. Then you can solve the problem with only 3 IDs (when allocating a block, only at most two IDs can be already adjacent, namely in the block before and after the one you are allocating, so you only need a third to be unique).
But it does make the whole thing more complicated. Easier might be to put the burden of tracking the number of pages allocated on the caller. You get "phys_addr_t palloc(size_t);" and "void pfree(phys_addr_t, size_t);". Caller gives number of pages in both cases. Ought to be simple enough, especially when starting out.
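A minimal first-fit sketch of that API, using a toy fixed-size bitmap (sizes and helper names are illustrative, not from the post):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_FRAME_SIZE 4096ULL
#define MAX_FRAMES      256    /* toy size for the sketch */

typedef uint64_t phys_addr_t;
#define PHYS_NONE ((phys_addr_t)-1)

static uint8_t bitmap[MAX_FRAMES / 8];  /* 1 = allocated/reserved */

static int  test_bit(uint64_t f) { return bitmap[f / 8] & (1u << (f % 8)); }
static void set_bit(uint64_t f)  { bitmap[f / 8] |= (1u << (f % 8)); }
static void clr_bit(uint64_t f)  { bitmap[f / 8] &= ~(1u << (f % 8)); }

/* First-fit allocation of `count` contiguous frames. The caller keeps
 * the count and hands it back to pfree(); the bitmap itself stores
 * nothing about allocation sizes or IDs. */
phys_addr_t palloc(size_t count)
{
    for (uint64_t start = 0; start + count <= MAX_FRAMES; start++) {
        uint64_t run = 0;
        while (run < count && !test_bit(start + run))
            run++;
        if (run == count) {
            for (uint64_t f = start; f < start + count; f++)
                set_bit(f);
            return (phys_addr_t)(start * PAGE_FRAME_SIZE);
        }
        start += run;  /* skip past the frame that ended the run */
    }
    return PHYS_NONE;  /* no contiguous run of that size left */
}

void pfree(phys_addr_t addr, size_t count)
{
    uint64_t first = addr / PAGE_FRAME_SIZE;
    for (uint64_t f = first; f < first + count; f++)
        clr_bit(f);
}
```

Since the caller supplies the count on both sides, two adjacent allocations can never bleed into each other at pfree() time, and no ID bits are needed in the map.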
ngx wrote:
Is my algorithm very bad? I certainly think there are much easier and cleaner ways to do it (especially when the OS is only at the start of its life, like mine).
Easier? Barely. Cleaner? Eh, what is dirty about this one? You may in future need to add constrained allocation (some devices cannot deal with addresses beyond a certain point. For instance, for additional CPU startup on x86, you will likely need a page in the low 1MB of address space for the trampoline.). Unlike with a stack algorithm, constrained allocation is at least possible with a bitmap allocator.
And yes, faster algorithms exist. Your algorithm will spend a lot of time skipping over already allocated memory. You can mitigate this by keeping track of the lowest free address, but under high allocator pressure, you will still be skipping over already allocated memory a lot. But it will probably work, and when starting out, that is the most important thing.
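The lowest-free-address trick amounts to a few extra lines. A single-frame sketch, again on a toy self-contained bitmap (all names illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_FRAME_SIZE 4096ULL
#define MAX_FRAMES      256    /* toy size */

typedef uint64_t phys_addr_t;
#define PHYS_NONE ((phys_addr_t)-1)

static uint8_t  bitmap[MAX_FRAMES / 8]; /* 1 = allocated */
static uint64_t lowest_free_hint = 0;   /* no free frame below this */

static int  test_bit(uint64_t f) { return bitmap[f / 8] & (1u << (f % 8)); }
static void set_bit(uint64_t f)  { bitmap[f / 8] |= (1u << (f % 8)); }
static void clr_bit(uint64_t f)  { bitmap[f / 8] &= ~(1u << (f % 8)); }

/* Single-frame allocation that resumes scanning at the hint instead
 * of starting from frame 0 every time. */
phys_addr_t palloc_one(void)
{
    for (uint64_t f = lowest_free_hint; f < MAX_FRAMES; f++) {
        if (!test_bit(f)) {
            set_bit(f);
            lowest_free_hint = f + 1;
            return (phys_addr_t)(f * PAGE_FRAME_SIZE);
        }
    }
    return PHYS_NONE;
}

void pfree_one(phys_addr_t addr)
{
    uint64_t f = addr / PAGE_FRAME_SIZE;
    clr_bit(f);
    if (f < lowest_free_hint)   /* pull the hint back down */
        lowest_free_hint = f;
}
```

Under heavy fragmentation the scan from the hint still walks allocated frames, which is exactly the remaining cost described above; but for a young OS this is a fine place to start.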