sj95126 wrote:
SHZhang wrote:
Do you think that temporarily mapping the physical page to some virtual address on each allocation/free is a reasonable way to work around this?
That's actually what I used to do to add a new PT to a page table, in case I was in a situation where the page table had to reference itself. I used one fixed linear address to map and unmap a physical page during the time it needed updating. I didn't like it; it just felt clunky. Among other things, you have to invalidate the entry in the TLB every time you reuse the address, to make sure it's pointing to the correct page.
So long as your temporary mapping is protected by a lock, you don't need to invalidate anything on other CPUs - no TLB shootdown IPIs. The only invalidation needed is a local invlpg when the temporary entry is rewritten.
You take the lock, ensuring nothing else is doing PMM, safe in the knowledge you can map/unmap the temporary mapping without anyone else referencing that memory area. Your page alloc then becomes:
Code:
// Physical page number; 0 is reserved so it can mean "no page"
typedef uint32_t page_t;

spin_t pmmlock;
static page_t first_free;
page_t * tempmap;   // fixed virtual address reserved for the temporary mapping

page_t page_alloc()
{
    spinlock(&pmmlock);
    const page_t page = first_free;
    // If we have a page available, remove it from the stack
    if (page) {
        // Map the page at the temporary address (mmu_map is assumed
        // to invlpg that one address on the local CPU)
        mmu_map(page, tempmap);
        // tempmap now points to a page_t: the next free page after this one
        first_free = *tempmap;
    }
    spinunlock(&pmmlock);
    // At this point, page is our newly allocated page, or 0 if the stack was empty
    return page;
}

void page_free(page_t page)
{
    spinlock(&pmmlock);
    // Map the page being freed at the temporary address
    mmu_map(page, tempmap);
    // Store the old stack top in the newly freed page
    *tempmap = first_free;
    first_free = page;
    // Done - unlock
    spinunlock(&pmmlock);
}
Disclaimer - knocked out code, uncompiled, untested.
The beauty of this, though, is that the lock prevents concurrent access to the virtual address from all CPUs, and the first thing alloc and free do before updating the stack is remap the temporary mapping as required, overwriting the previous one. So we don't even need to unmap the data once we're done: we can just leave the temporary mapping in place, avoiding the unmap-time TLB flush and, more importantly, the cross-CPU TLB shootdown IPI. The only invalidation left is a single local invlpg when mmu_map() rewrites the entry.
sj95126 wrote:
There is another way. You could do something similar to filesystem cluster chaining. Create a consecutive chunk of memory (with a valid linear mapping) that's an array with one entry per physical page; the entry for a free page holds the number of the next free page. When you need one, take it just like the linked-list stack approach:
newpage = HEAD
HEAD = freechain[HEAD]
This would have about 0.1% overhead, e.g. for a system with 4GB of RAM, you'd need an array of a million 32-bit entries.
You'd probably want some per-page metadata anyway, for use in your VMM. It can hold page status (locked, dirty, accessed, age, usage count), but it can also union with your free-list chain.
Code:
struct page_info_t {
    union {
        // Valid while the page is in use by the VMM
        struct {
            // flags for dirty, locked, referenced etc.
            unsigned int flags;
            // aging information for page eviction
            unsigned int age;
            // COW usage count
            unsigned int copies;
            // reverse mapping: one entry for each mapping of this page
            unsigned int nrmap;
            struct {
                asid as;
                void * p;
            } * rmap;
        } vm;
        // Valid while the page is on the free chain
        page_t nextfree;
    } u;
};
You can allocate an array of the above at kernel bootstrap, with an entry per physical page, and sequentially free each page, which will also initialize the array as a side effect.