OSDev.org

The Place to Start for Operating System Developers
 Post subject: Troubles with understanding paging and malloc (free)
PostPosted: Sun Feb 19, 2017 3:27 am 
Member
Joined: Fri Aug 07, 2015 6:13 am
Posts: 1134
Hello!

I have two very important conceptual questions:

1.Paging:

What I know:
*Paging is used for mapping physical addresses to virtual ones. Virtual address space can be greater than the physical one. CR3 contains a pointer to a page directory which contains 1024 page tables and there are 1024 pages within those page tables. MMU uses that page directory to figure out how individual addresses are mapped. CR3 must be a physical address. CR0 is used for enabling paging by setting the PG bit to 1. Addresses must be page aligned (4KB aka 0x1000). I need to identity map a first few megabytes of my kernel. Page faults occur when you try to access an address that doesn't exist.

What I don't know:
The difference between pages and frames. What is the exact technical order of things that need to be done in order for paging to be enabled (identity mapping of the first few megabytes included)? Where am I supposed to put my page directory? (at which address) Do I need dynamic memory allocation for paging to be enabled? What does it mean to allocate a frame and free it? Why do we allocate pages and frames? How is it done?

2.Dynamic memory allocation:

What I know:
*Dynamic memory allocation is used for allocating contiguous chunks of memory. Most common functions are malloc, free, calloc, realloc. Malloc allocates memory aka returns a pointer to it. Free frees it. Calloc is similar to malloc but with zeros. Realloc is expanding already allocated memory. In order to allocate something I need to return a pointer to it. (also I need to increment its last address) People usually use lists for this. It is complicated.

What I don't know:
How it's done. How to use lists with it? How to free memory? What does it have to do with paging? Do I need to allocate pages? Will it mess up my paging? Can it override paging stuff aka pages? How do I actually code it? I don't seem to understand how it works from point A to point B. How do I keep track of allocated addresses? What happens when I free multiple bytes but in different locations, how do I merge them?

Please don't comment if it is not related. I have seen and read all the wiki pages + all the forum topics. Also I have a copy of Intel's manual laying around. (4000+ pages, such a mess)

_________________
OS: Basic OS
About: 32 Bit Monolithic Kernel Written in C++ and Assembly, Custom FAT 32 Bootloader


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Sun Feb 19, 2017 6:00 am 
Member
Joined: Tue Mar 04, 2014 5:27 am
Posts: 1108
octacone wrote:
What I know:
*Paging is used for mapping physical addresses to virtual ones. Virtual address space can be greater than the physical one. CR3 contains a pointer to a page directory which contains 1024 page tables


The PD doesn't contain PTs. It points to them.

octacone wrote:
and there are 1024 pages within those page tables.


Every PT points to up to 1024 pages. [Side note: the scheme you're talking about is the original/classical page translation introduced in the i80386. There are other schemes, for PSE/PAE, for large pages, for long (AKA 64-bit) mode.]
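
To make the classic two-level scheme concrete, here is a hedged sketch (plain C; the helper names are mine, not from any particular codebase) of how the MMU splits a 32-bit virtual address under the original i80386-style translation:

```c
#include <assert.h>
#include <stdint.h>

/* Classic 32-bit (non-PAE) translation: the MMU splits a virtual address
 * into a page-directory index (top 10 bits), a page-table index (next
 * 10 bits), and an offset within the 4 KiB page (low 12 bits). */
static uint32_t pd_index(uint32_t vaddr)    { return vaddr >> 22; }
static uint32_t pt_index(uint32_t vaddr)    { return (vaddr >> 12) & 0x3FFu; }
static uint32_t page_offset(uint32_t vaddr) { return vaddr & 0xFFFu; }
```

One directory (1024 PDEs) times 1024 PTEs per table times 4 KiB per page covers the whole 4 GiB address space.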

octacone wrote:
MMU uses that page directory to figure out how individual addresses are mapped. CR3 must be a physical address.


The low 12 bits of CR3 hold some control bits, AFAIR since the i80486.

octacone wrote:
Addresses must be page aligned (4KB aka 0x1000).


What addresses? Probably not all, right? CR3, PDEs and PTEs must contain addresses that are multiples of 4KB.

octacone wrote:
I need to identity map a first few megabytes of my kernel.

Maybe you do, maybe you don't. It depends.

octacone wrote:
What I don't know:
The difference between pages and frames.


In the scheme you're talking about, the page directory, every page table and every page that page tables point to are just 4KB-aligned blocks of memory. All of them are pages (and can be paged out and in if so designed). However, oftentimes the pages that page tables point to are called page frames.

octacone wrote:
What is the exact technical order of things that need to be done in order for paging to be enabled (identity mapping of the first few megabytes included)?

There's a dedicated section in the CPU manual and there must be a dedicated wiki page here. Have you looked for them? Now is the time.

octacone wrote:
Where am I supposed to put my page directory? (at which address)

Up to you. Provided, it's properly aligned (remember about 4KB?) and you don't trash it.

octacone wrote:
Do I need dynamic memory allocation for paging to be enabled?

I don't think there's such a feature in the CPU that you can enable. Could you elaborate?

octacone wrote:
What does it mean to allocate a frame and free it?

I suppose, if you have some sort of list of available pages/frames, you take one from there and then, when it's no longer needed, put it back?

octacone wrote:
Why do we allocate pages and frames?

If you know that you need page translation, the answer should be obvious, right? Or do you need it just because every cool kid has it, but you still aren't sure about its purpose?

octacone wrote:
How is it done?

Lists, stacks, bitmaps, wherever you can store or mark unused addresses to take from. And then when freeing a frame, you must have the address handy (perhaps, taken directly from a PDE or a PTE; do you remember that you put them there to establish translation?) and you put it back to your list/stack/bitmap/etc.
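
To illustrate the bitmap variant, here is a minimal, hedged sketch of a physical frame allocator: one bit per 4 KiB frame, first-fit search. All names and sizes are invented for the example (a real allocator would be initialised from the firmware memory map, not from a fixed 4 MiB):

```c
#include <assert.h>
#include <stdint.h>

#define FRAME_SIZE 0x1000u
#define MAX_FRAMES 1024u   /* toy example: track only the first 4 MiB */

static uint32_t frame_bitmap[MAX_FRAMES / 32];  /* 1 bit per frame */

static void set_frame(uint32_t f)   { frame_bitmap[f / 32] |=  (1u << (f % 32)); }
static void clear_frame(uint32_t f) { frame_bitmap[f / 32] &= ~(1u << (f % 32)); }

/* Returns the physical address of a free frame, or -1 when out of memory. */
static int32_t alloc_frame(void)
{
    for (uint32_t f = 0; f < MAX_FRAMES; f++) {
        if (!(frame_bitmap[f / 32] & (1u << (f % 32)))) {
            set_frame(f);
            return (int32_t)(f * FRAME_SIZE);
        }
    }
    return -1;
}

/* Freeing is just clearing the bit for that physical address again. */
static void free_frame(uint32_t phys_addr) { clear_frame(phys_addr / FRAME_SIZE); }
```

The address you pass to `free_frame()` is exactly the one you earlier stored into a PDE or PTE, which is why keeping it handy matters.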

octacone wrote:
2.Dynamic memory allocation:
What I don't know:
How it's done.

Google it up?

octacone wrote:
How to use lists with it?

What lists? Are there any lists? I mean, do you have a particular implementation in mind that uses lists to implement malloc()/free() or are you asking how to implement lists on top of existing malloc()/free() or what?

octacone wrote:
How to free memory?

Find a way to undo the changes that were made during allocation?

octacone wrote:
What does it have to do with paging?

It depends on how you implement both. Your malloc()/free() may explicitly call your page handling code to create or destroy translation. Or your malloc()/free() may never do that and instead rely on your paging code to do this semi-automatically (a page fault may be used to create the missing translation that caused said page fault). Or you may create fixed translation and never change it, in which case malloc() and free() will not need to bother at all.

octacone wrote:
Do I need to allocate pages?

Maybe. We don't know. Do you need to?

octacone wrote:
Will it mess up my paging?

Will page allocation mess your paging or will malloc()/free() mess your paging?

octacone wrote:
Can it override paging stuff aka pages?

Um... What?

octacone wrote:
How do I actually code it? I don't seem to understand how it works from point A to point B. How do I keep track of allocated addresses?

You use data structures. Unless it was just noise, you were kinda close to it when you mentioned lists. Google up an implementation or just read anything computer-science-introductory on memory management. You'll get some ideas. For *** sake, if you're gonna read about it anyway (you're on a forum, right?), read articles or books on the subject, don't ask us to restate our knowledge from our L1/L2/L3, or worse, from RAM, er, from those same articles and books. (Btw, did I mention the CPU manual? It's also meant to be read, unless used in printed form for its physical properties, e.g. weight)

octacone wrote:
What happens when I free multiple bytes but in different locations, how do I merge them?

If unused/freed regions aren't adjacent, you obviously don't merge them. Unless you want to defragment memory, which will add complexity and cost. Don't go there until you get the basics working.
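
For when you do get there: the usual shape of "merging" is to keep free regions sorted by address and coalesce a freed block with its neighbours when they touch. A hedged toy sketch (array-based for brevity; real allocators typically thread a linked list through the free memory itself, and all names here are invented):

```c
#include <assert.h>
#include <stdint.h>

/* Toy free-region tracker: a sorted array of {start, size} regions.
 * On free, merge with the next/previous region when they are adjacent. */
#define MAX_REGIONS 64

struct region { uint32_t start, size; };
static struct region regions[MAX_REGIONS];
static int nregions = 0;

static void free_region(uint32_t start, uint32_t size)
{
    int i = 0;
    while (i < nregions && regions[i].start < start)
        i++;

    /* Insert at position i, keeping the array sorted by start address. */
    for (int j = nregions; j > i; j--)
        regions[j] = regions[j - 1];
    regions[i].start = start;
    regions[i].size  = size;
    nregions++;

    /* Merge with the next region if it begins exactly where we end. */
    if (i + 1 < nregions && regions[i].start + regions[i].size == regions[i + 1].start) {
        regions[i].size += regions[i + 1].size;
        for (int j = i + 1; j < nregions - 1; j++)
            regions[j] = regions[j + 1];
        nregions--;
    }
    /* Merge with the previous region if we begin exactly where it ends. */
    if (i > 0 && regions[i - 1].start + regions[i - 1].size == regions[i].start) {
        regions[i - 1].size += regions[i].size;
        for (int j = i; j < nregions - 1; j++)
            regions[j] = regions[j + 1];
        nregions--;
    }
}
```

Non-adjacent regions simply stay as separate entries, which is exactly the "you don't merge them" case above.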

octacone wrote:
Please don't comment if it is not related.

Tried not to.

octacone wrote:
I have seen and read all the wiki pages + all the forum topics.

That just can't be.

http://wiki.osdev.org/Tutorials#Memory_Management ?
Alright, these particular ones aren't very detailed.

How about http://wiki.osdev.org/Memory_Allocation ? That one looks better. And there are some links for exploration at the end.

More stuff can be found through http://wiki.osdev.org/Category:Memory_management.

octacone wrote:
Also I have a copy of Intel's manual laying around. (4000+ pages, such a mess)

Did you locate the table of contents? Further, what about the text search feature of your PDF reader? Ctrl+F? Mouse click? Cortana/Alexa/etc?


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Sun Feb 19, 2017 6:33 am 
Member
Joined: Thu Apr 16, 2015 7:37 am
Posts: 64
octacone wrote:
What I know:
*Paging is used for mapping physical addresses to virtual ones. Virtual address space can be greater than the physical one. CR3 contains a pointer to a page directory which contains 1024 page tables and there are 1024 pages within those page tables. MMU uses that page directory to figure out how individual addresses are mapped. CR3 must be a physical address. CR0 is used for enabling paging by setting the PG bit to 1. Addresses must be page aligned (4KB aka 0x1000). I need to identity map a first few megabytes of my kernel. Page faults occur when you try to access an address that doesn't exist.

This sounds pretty much correct to me, apart from the last part, I think it would be more accurate to say "Virtual addresses which are not mapped to a physical address result in a page fault." In addition, page faults can also occur for a few other reasons, such as writing to a read only page, or not having the required privileges to read / write to a page. The main thing you should take away from that is that you can try to access any virtual address you want as soon as you turn paging on, but the CPU will page fault if you're doing something you're not supposed to.

Also of note, the addresses needing to be page aligned is kind of a side effect of two things. One, the bottom 12 bits are used for something other than describing where the page tables / page mappings go to (permissions, caching flags, etc), because flag bits of some kind are helpful to have in the structure and that's a good place to put them. Two, paging only ever operates on the top 20 bits of an address, so forcing all structures to be page aligned helps significantly in simplifying the design of the paging system in the CPU, since it never needs to do things to the bottom 12 bits of the address line.
octacone wrote:
What I don't know:
The difference between pages and frames. What is the exact technical order of things that need to be done in order for paging to be enabled (identity mapping of the first few megabytes included)? Where am I supposed to put my page directory? (at which address) Do I need dynamic memory allocation for paging to be enabled? What does it mean to allocate a frame and free it? Why do we allocate pages and frames? How is it done?

To enable paging, the gist of it is:
- Set up the paging structures somewhere in memory
- Load the CR3 Register with the address of said structures
- Set the Paging Enable (PG) bit in CR0

Of course, the first step is the real meat of the problem, which I'll get to in a moment. You mention that you know that identity mapping needs to be done in order for paging to work. Just to clarify, one way of thinking about it is that when you turn paging on, all the memory address lines in the CPU now go through a translation unit (that's what the MMU is) before going into the RAM electronics. So, as soon as you enable paging, the very next instruction fetch will go through the MMU for translation, and if your paging structures aren't set up correctly, the MMU will page fault, and at this point, you're probably going to triple fault because any subsequent accesses will also have problems and trigger page faults. You probably knew this already, but I'm just trying to make sure I cover everything relevant to the stuff you don't understand.

So, in order to set up paging, you first need to set up the identity paging for any code which is meant to be run without address translation, otherwise you'll run into the problems above. Once you've enabled paging, you can either map the rest of your kernel into somewhere else in the virtual address space, or you can map the kernel into its actual location so that any virtual memory accesses to the kernel make the same accesses to physical memory (AKA, identity mapping the kernel). While there's some debate about whether you should have the kernel identity mapped or moved somewhere else, it's my personal opinion that most of the arguments for either side are weak, and the best advice is that you shouldn't make the choice based on whether you're having trouble accomplishing it or not. Also, don't forget, if you move your kernel, you'll at least need to identity map the parts of it which enable paging, for the reasons described above, and then jump to your kernel's new location (this is the essence of what the "Higher half bare bones" tutorial is about, understand it thoroughly if you do decide to move your kernel.)

You seem pretty confident with the way the paging structures work, so to begin, you should probably just consider "I want my kernel to be mapped from XXX to XXX, how would I make the paging structures accomplish that?", and then create the paging structures which achieve this (drawing a diagram of the translation will likely help). Where you put this structure is completely 100% up to you, as long as you know where it is (in physical memory). Once you've done that, go through the wiki pages about paging and make sure you've set up the various flags and permissions correctly (if the wiki is ambiguous on something, sorry, but you'll have to consult the Intel manual to get the full details about a particular flag or field, section 4.3 of volume 3 of the manuals should help). Be sure that you're absolutely certain that you know what each bit does and why it's set the way it is (I know from experience it's extremely easy to make mistakes here, so double check if things aren't what they seem.)

Once you've set up the paging structures and enabled paging, you shouldn't notice anything different if you identity mapped, or, if you moved your kernel, you should be able to jump to your kernel's new location and continue as if it was loaded into physical memory there in the first place. Remember, the whole point of setting up this stuff with our kernel is just so that it works while we use address translation for other purposes. Enabling paging actually takes a little more than I mentioned above, section 4.1.2 of Vol. 3 of the Intel manuals describes the exact process.
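
To tie this together, here is a hedged sketch in C of the structure-building step, identity-mapping the first 4 MiB. The flag values follow the Intel manual (bit 0 = Present, bit 1 = Read/Write); the variable names are mine, and the actual CR3 load and CR0.PG write still have to happen in assembly once these arrays are in place:

```c
#include <assert.h>
#include <stdint.h>

#define PG_PRESENT 0x1u
#define PG_WRITE   0x2u

/* In a real kernel both structures must sit at 4 KiB-aligned *physical*
 * addresses, since CR3 and each PDE hold physical, page-aligned pointers. */
static uint32_t page_directory[1024] __attribute__((aligned(4096)));
static uint32_t first_table[1024]    __attribute__((aligned(4096)));

static void setup_identity_map(void)
{
    /* PTE i maps virtual page i to physical frame i, covering 0..4 MiB. */
    for (uint32_t i = 0; i < 1024; i++)
        first_table[i] = (i * 0x1000u) | PG_PRESENT | PG_WRITE;

    /* PDE 0 points at the table; every other PDE stays 0 (not present).
     * On a real 32-bit kernel the table's physical address fits in 32 bits. */
    page_directory[0] = (uint32_t)(uintptr_t)first_table | PG_PRESENT | PG_WRITE;
}
```

After this, moving the physical address of `page_directory` into CR3 and then setting bit 31 (PG) of CR0 switches translation on, and the identity mapping guarantees the next instruction fetch still resolves to the same physical address.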

In regards to frames, my understanding of the term (at least, the way the wiki seems to describe it to me), is that allocating and freeing page frames is the act of creating and removing areas of translation in the virtual address space, so I guess you could call a frame "an area of virtual memory where you're going to map something to". A simple example is that you want to run a new program that takes up 10k in code and 6k in R/W data, and the program runs under the assumption that it is loaded at 0x10000000. You'd allocate a frame by simply changing the paging structures so there's a read only area at 0x10000000 that's 12k long (we have to abide by page granularity, hence 12 and not 10) and a Read / Write area at 0x10003000 which is 8k long. To illustrate this, we can extend the little diagram we made before. It doesn't matter where we put the program in physical memory, because we mapped it to the right place in virtual memory. In fact, our program could be spread out in several pieces in physical memory as long as all the pieces were mapped back together in the right places again. You don't NEED dynamic memory allocation to do things with paging, but as you can see, being able to know where there's space to load the program in physical memory (so that you can later map it to the right spot it needs to be in) is going to be very helpful.
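
The "12 and not 10" rounding in that example is just ceiling division by the page size; as a small sketch (name is mine):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000u

/* Mapping always works in whole 4 KiB pages, so region sizes round up. */
static uint32_t pages_needed(uint32_t bytes)
{
    return (bytes + PAGE_SIZE - 1u) / PAGE_SIZE;
}
```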
octacone wrote:
2.Dynamic memory allocation:

What I know:
*Dynamic memory allocation is used for allocating contiguous chunks of memory. Most common functions are malloc, free, calloc, realloc. Malloc allocates memory aka returns a pointer to it. Free frees it. Calloc is similar to malloc but with zeros. Realloc is expanding already allocated memory. In order to allocate something I need to return a pointer to it. (also I need to increment its last address) People usually use lists for this. It is complicated.

What I don't know:
How it's done. How to use lists with it? How to free memory? What does it have to do with paging? Do I need to allocate pages? Will it mess up my paging? Can it override paging stuff aka pages? How do I actually code it? I don't seem to understand how it works from point A to point B. How do I keep track of allocated addresses? What happens when I free multiple bytes but in different locations, how do I merge them?

It's super important to realise there are several "kinds" of memory management depending on the situation you're in. The things you're mentioning in the "What I know" section above sound very much like a program's heap memory, which is where it stores arrays and other dynamically sized structures at runtime. Note that this is on the scale of bytes for things such as malloc and free (eg. "I need 12 bytes for an array of 3 ints, where can I put them?") An entirely different kind, although one you're probably going to want to consider now, is memory management on a page level. From my understanding, a process basically gets a small sandpit of memory to do its heap work (such as malloc and free). If it finds it's running out of room, it expands this sandpit area through some kind of system call to get more memory for the process (on linux, these are the sbrk and mmap system calls I believe). For example, a process may start off with 16K of heap space, and if a malloc call ends up using up all of that space, it will ask the kernel for more, eg. "kernel, please give me up to 32k to use for various variables and arrays and things on the heap". These calls ask the kernel to do memory management on the scale of pages, not bytes, and this is usually done by editing the page tables, as well as various other structures the OS is using to keep track of usage and locations of stuff.

It's critical to understand the differences between the various kinds of "memory" resources that are managed by the OS, and some basic theory for how this is done, and you seem to be a little confused, or at least, unaware, of some of these. That's OK, it's not easy, a lot of the terms change meaning depending on context and the kind of resource they're referring to. This seems to stem from a small problem I used to have when I was working on my OS, which was "This feature is important in an OS, therefore I should learn how to operate it." Instead, and maybe this is a bit radical, but my personal suggestion is to abandon paging for now and spend some time designing your system, especially in terms of the kernel and how it loads programs and gives them stack and heap space. A lot of areas of OSdev are based around solutions to problems which are not apparent until you're in the middle of designing a system, and so, you end up trying to learn a solution to a problem you don't have.

As an example, surely you're aware of the difference between polling the keyboard and waiting for interrupts from it. When you start off with this area, you might think "The CPU is very fast, so surely polling will work fine." That's OK, until you realise that not everybody wants to write code in their program to poll the keyboard, especially if they're not using it. It's almost as if you need a way for the hardware to signal that something happened without waiting for the active program to take care of it... Interrupts! In the same way, paging seems confusing at first, until you consider the logistics of loading and running multiple programs in the same address space, especially when their expectations of the location of data clash. If you have access to a library, I recommend you read the chapter on memory management in Andrew Tanenbaum's "Modern operating systems". He explains the problem from the ground up, and it made it really clear to me why paging and memory management exist in the first place, as well as outlining various methods of solving the problems that come up in those areas. If you read that, or take some time to encounter these problems yourself, I think it will help immensely with your understanding of all of the things you've mentioned in part two of your question.

- Mikumiku747

Side note: While I was writing this, alexfru already provided a lot of great answers. I'm going to admit upfront that I'm not nearly as experienced or well practised as they or other members of the forum, but I think my answers are nonetheless still useful, as I try to provide a more intuitive explanation and try to use my own learning experiences to try to help others find solutions that work for them. As always, take my answers with a large grain of salt, there's no warranties, I'm just trying to help where I feel I can. This doesn't sound great, but the best way to learn about how the CPU works is the Intel manuals, read chapter 4 of volume 3 at least a few times, take time to cross reference all the diagrams and their explanations, and maybe even take out a pencil and paper and draw some diagrams which intuitively explain to yourself what the manual is talking about. The manual is dry, not structured in the way you'd expect, and often words things in a seemingly strange way, because it's a technical document and correctness comes before readability. Just give it a shot and see how it goes, and come back with any parts from your first question you don't understand, or, for that matter, parts of the manual you don't understand. People will be surprisingly willing to help you read the manual as opposed to providing explanations of the manual.


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Sun Feb 19, 2017 11:26 am 
Member
Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
octacone: Part of the problem here is that you are confusing different levels of memory management, as Mikumiku747 stated. This is a common problem for people starting out in OS dev (in general, not just for memory management), and unfortunately, a lot of the tutorials (and even textbooks) make this worse by using the same term to mean different things, and worse still, use the same function name for different operations.

The functions malloc() and free() are a case in point. malloc() is a Standard C function for requesting a pointer to a new block of memory that can be used by the application, while free() releases the memory that that pointer points to, meaning it can be used for something else. This is to allow what is usually called dynamic memory allocation: getting a piece of memory at run time that can hold a variable, and then releasing that memory when it is no longer needed. The purpose of it is to allow you to create data structures whose size isn't known at compile time, such as lists, trees, and resizeable arrays.

These two functions are used at the application level, and usually start off by carving pieces out of a larger block of memory (called the heap) already set aside by the operating system when the application starts. If the application uses all of that memory, it can ask the OS for another larger block, often one larger than the original heap was by some scaling factor to reduce the number of times that the application's own memory manager has to make a system call to the operating system's memory manager. This is usually done behind the scenes, by malloc() itself, and the application programmer doesn't need to know anything about how it is doing things.
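
As a hedged illustration of that "carving pieces out of a larger block" idea, here is a toy bump allocator that grows its heap through a stand-in for the OS call (all names are invented; a real malloc also supports free, reuse, and much more):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy bump allocator: hands out pieces of a heap region, and calls a
 * hypothetical grow hook (standing in for sbrk/mmap) when it runs out. */
static uint8_t heap[4096];        /* backing memory for the example */
static size_t  heap_size = 1024;  /* pretend the OS initially gave 1 KiB */
static size_t  heap_used = 0;

static size_t grow_heap(size_t need)  /* stand-in for a system call */
{
    size_t new_size = heap_size * 2;  /* scale up to amortise "syscalls" */
    while (new_size - heap_used < need)
        new_size *= 2;
    if (new_size > sizeof heap)
        return 0;                     /* the "OS" refuses */
    heap_size = new_size;
    return new_size;
}

static void *toy_malloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7u;       /* keep allocations 8-byte aligned */
    if (heap_used + n > heap_size && !grow_heap(n))
        return NULL;
    void *p = &heap[heap_used];
    heap_used += n;
    return p;
}
```

Note how the caller never sees the growth step: that is the "behind the scenes" part described above.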

Note that none of this mentions paging. This is at a higher level than paging, and it doesn't matter here whether paging is used or not. A more sophisticated version of malloc might work closely with the OS to use the paged memory more efficiently, but in general, the level most applications' memory managers work on - and each application that uses malloc/free at all will have one of its own - means it doesn't need to know or care about paging.

The problem is that the term 'malloc' is sometimes used in a more general sense for any kind of memory allocation, not just the specific application-level function. This is all too likely to confuse someone who doesn't already have a clear grasp of the subject. If you see the term 'malloc' in a discussion of paging - or memory management in the OS at all - then that sort of confusion of levels is occurring.

While it is usually worded that way as an analogy to application level memory management, to make it easier for someone just getting into OS dev to understand, it has the opposite effect if the person reading it doesn't already understand malloc and free in the first place.

Thing is, paging is only a part of the memory management picture, and isn't actually a necessary part. Paging is a set of tools that the hardware provides, and its main purpose is to make it feasible to use what is called virtual memory - that is, it allows the OS to map some or all of the primary memory to a larger amount of secondary memory (on a disk or something similar), and automatically move sections of memory (that is, pages) between the two so that it can use the same memory for more than one thing. The point of all this is to allow the system to use more primary memory than it actually has, by swapping pages in and out when they aren't in use.

The paging systems in most modern CPUs have other features, mostly related to controlling access to pages, but virtual memory is the primary use of paging. This means that for a system that doesn't use virtual memory, there is no need to engage paging (though some such systems will use it for the other features). Furthermore, some kinds of systems - real time operating systems, in particular - can't use virtual memory easily or at all, so they wouldn't use paging for that purpose anyway. The reason for this is that there is no way in general to predict when a page fault might cause the system to swap, or how long swapping will take once it starts, and real-time applications need predictability.

You will notice that this is an entirely separate issue from the OS deciding how much memory to give each application, and when. The OS needs to do that regardless of whether it is using paging or not, and while paging changes how the OS might do that, it mostly affects which blocks of memory the OS gives the application, how much they can get total before the OS runs out of memory to give, and how long it takes for the application to access a given piece of memory (depending on whether it is swapped out or not).

Generally, an OS will provide one or more system calls which an application can use to ask for more memory. On Linux, as Mikumiku747 said, it would be sbrk(), or the newer mmap(). Also as Mikumiku747 said, these usually would be called by a standard function such as malloc(), rather than explicitly by the application programmer, for a number of reasons: not only does it make it easier for the application programmer, it makes it more portable (because malloc should work more or less the same on any system, regardless of what is underneath it), and lets the local application manage its own memory in larger pieces rather than making an expensive system call every time it needs, say, four bytes of additional memory.

The point is, the memory management done inside the OS, both for allocation and for paging, is almost entirely separate from the local memory management done by malloc and free, or by a garbage collector running in the application process (as would be the case for languages like Java or Python).

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 1:56 am 
Member
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

I'd strongly recommend splitting it into 3 separate pieces, each responsible for a different layer.

The lowest layer would be physical memory management. This manages pages of physical memory; including "normal RAM pages" (where there's no restrictions), and including buffers needed by certain devices where there are restrictions (e.g. "buffer must be below a certain physical address", or "buffer can't cross some sort of boundary").

The middle layer would be virtual memory management. This isn't just about mapping things (from the physical address space) into a virtual address space; it's about creating and maintaining an illusion where what software sees isn't reality. For example, software might think it allocated 123 MiB of RAM but that RAM might not actually be allocated (until software modifies it); and software might think a file is mapped into virtual memory but that file's data might not actually be loaded into RAM yet; and software might think it stored some data into RAM but that data may have been sent to swap space. Note that virtual memory management typically also includes permissions (read, write, execute) which are another part of the illusion (e.g. software might think it has "read/write access" for an area, but in reality it might only have "read access" because the virtual memory manager wants to know when the area is written to). It's important to understand that the illusion allows some very powerful tricks that allow the OS to use less RAM, make better use of RAM (e.g. use it for file caches, etc instead of wasting it), improve performance by postponing or avoiding work, etc.

The highest layer is "whatever the process wants". It might be "malloc()" and "free()" implemented in a C standard library; but it might be a less retarded alternative to "malloc()" and "free()" that's implemented as a generic C library, or it might be garbage collection implemented in a Java virtual machine, or could be something custom designed specifically for a specific program (e.g. a lot of database engines have their own memory management implemented directly on top of the OS's "middle layer" and don't use the language's memory management), or it could be something else. None of the highest layer has anything to do with the kernel itself - it's either part of a language's standard environment/library, or something an application developer is responsible for.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 3:26 am 
Member
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Brendan wrote:
The highest layer is "whatever the process wants". It might be "malloc()" and "free()" implemented in a C standard library; but it might be a less retarded alternative to "malloc()" and "free()" that's implemented as a generic C library....


I thought it'd be helpful if I clarify what I meant by "less retarded alternative".

For modern computers there are multiple types of caches (including TLBs) and multiple levels of caches (L1, L2, ...), and all these caches may have various performance artifacts (e.g. "aliasing" in associative caches where multiple pieces of data compete for the same fraction of the cache and won't use the entire cache). There is a large performance difference between fetching data from cache and fetching data from RAM (cache miss), and there's a huge performance difference between "best case" (translation is in L1 TLB, data is in L1 data cache) and "worst case" (TLB miss causes multiple fetches from RAM followed by data cache miss). Also note that (for systems using virtual memory tricks - e.g. memory mapped files and swap space) you can think of RAM as a kind of cache, where "RAM miss" implies fetching data from disk.

To avoid potentially severe performance consequences you want to pay attention to something called "cache locality"; which mostly just means "put pieces of data that are used at about the same time close to each other".

For a simple example; imagine you are iterating through a singly linked list with something like "while( (current != NULL) && (current->id != foo) ) { current = current->next; }". If the data involved is scattered everywhere you might get a TLB miss and a cache miss at every iteration; but if the data involved is kept within the least number of pages and the least number of cache lines you can avoid a huge number of TLB misses and cache misses, and it could be 100 times faster.
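A minimal sketch of the idea in C: carve all the list nodes out of one contiguous arena so that consecutive allocations land next to each other. The names `arena_alloc` and `find` are mine, not a real API; this is just an illustration of the locality argument:

```c
#include <stddef.h>
#include <stdint.h>

struct node {
    int id;
    struct node *next;
};

/* Toy bump allocator: every node comes from one contiguous arena, so
 * nodes allocated together share pages and cache lines. */
static uint8_t arena[4096];
static size_t arena_used;

static void *arena_alloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;       /* 16-byte granularity */
    if (arena_used + size > sizeof(arena))
        return NULL;                        /* arena exhausted */
    void *p = &arena[arena_used];
    arena_used += size;
    return p;
}

/* The exact loop from the post above. */
static struct node *find(struct node *current, int foo)
{
    while ((current != NULL) && (current->id != foo))
        current = current->next;
    return current;
}
```

Two nodes allocated back to back from the arena end up 16 bytes apart instead of scattered across the heap, so a traversal touches the minimum number of pages and cache lines.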

The best way to allow programmers to optimise for cache locality is to provide memory pools. For example; you might have one memory pool used for all data related to an "employee phone number data structure", and another pool used for all data related to a "customer phone number data structure", where all memory allocated from a given pool is close to the other memory allocated from that same pool. Note that grouping allocations by their size does not work, because the size of each allocation for the "employee phone number data structure" may be identical to the size of each allocation for the "customer phone number data structure".

The next thing to consider is that sometimes you want to allocate memory that will be used for a long time (where it makes sense to take longer to decide the best memory to allocate for performance), and sometimes you want to allocate memory that will be freed very soon (where it makes sense to allocate quickly).

The next thing to consider is security and encryption. Intel CPUs and AMD's upcoming CPUs (Zen) support "encrypted RAM", where you have to choose whether or not to sacrifice performance (latency) for security. Of course this compromise (how much slower encryption is) is going to be different for different CPUs; so (for future-proofing and portability) you don't want an "encrypted/unencrypted" flag, you want something more like (e.g.) a value from 0 to 255, where 0 means "do whatever is fastest regardless of security", 255 means "do whatever is most secure regardless of performance", and values from 1 to 254 allow the library or OS to make an intelligent choice for you.

The next thing to consider is that we're currently at the beginning of what I'll call "the on-chip high bandwidth RAM era". Intel has released CPUs with built-in high bandwidth memory (the latest Xeon Phi) and AMD are about to release CPUs with built-in high bandwidth memory (their upcoming Zen); and this is something that is going to become relatively commonplace in future CPUs. This means that for best performance you want to allocate normal RAM for some things (where you don't need the extra bandwidth) and built-in high bandwidth RAM for other things. Of course this compromise (how much you benefit from high bandwidth) is going to be different for different CPUs and is also going to be different depending on how much high bandwidth RAM is currently free; so it's another case where you don't want a flag but want something more like (e.g.) a value from 0 to 255 (where 0 means "always use normal RAM whenever possible" and 255 means "always use high bandwidth RAM whenever possible", and where values from 1 to 254 allow the library or OS to make an intelligent choice for you).

Then there's debugging. Ideally, when you allocate memory you want to be able to attach some information saying where it was allocated and/or what it is; so that debugging and profiling tools can help you see what is going on (including things like "Hey, every minute another 1 MiB of memory is being allocated for "person structures" at line 123 of file "foo.c" - by golly, I think that's a memory leak!"). Of course this would be controlled by compile time stuff (e.g. "#ifdef DEBUGGING") and would be suppressed for released code.

Mostly; if you combine all of the above, you might want something a little bit like "memAlloc(int alignment, int localityPool, int allocationSpeedHint, int securityHint, int bandwidthHint, char *description, int line, char *file)". Of course you also want a "createLocalityPool()" and you might want to shift some of the parameters into that. The end result might be more like:
Code:
int createLocalityPool(int allocationSpeedHint, int securityHint, int bandwidthHint);

void *memAlloc(int alignment, int localityPool, char *description, int line, char *file);
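To make the sketch concrete, here is a toy fill-in for those two functions where each "locality pool" is just a fixed-size arena. Note the assumptions: I've added a size parameter (the prototype above omits one), the hint and debug parameters are accepted but ignored, and the pool sizes are arbitrary. This illustrates the interface, not a real allocator:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_POOLS 8
#define POOL_SIZE 4096

static _Alignas(64) uint8_t pool_mem[MAX_POOLS][POOL_SIZE];
static size_t pool_used[MAX_POOLS];
static int pool_count;

/* The hints are where a smarter implementation would choose placement
 * (fast vs secure vs high-bandwidth RAM); this toy version ignores them. */
int createLocalityPool(int allocationSpeedHint, int securityHint,
                       int bandwidthHint)
{
    (void)allocationSpeedHint; (void)securityHint; (void)bandwidthHint;
    if (pool_count >= MAX_POOLS)
        return -1;                              /* out of pools */
    return pool_count++;
}

void *memAlloc(size_t size, int alignment, int localityPool,
               const char *description, int line, const char *file)
{
    (void)description; (void)line; (void)file;  /* debug info, unused */
    if (localityPool < 0 || localityPool >= pool_count)
        return NULL;
    size_t used = pool_used[localityPool];
    used = (used + (size_t)alignment - 1) & ~((size_t)alignment - 1);
    if (used + size > POOL_SIZE)
        return NULL;                            /* pool exhausted */
    pool_used[localityPool] = used + size;
    return &pool_mem[localityPool][used];
}
```

The point of the test below is that two "employee" allocations land 32 bytes apart in the same arena, while the "customer" allocation lands in a completely separate arena.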



Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 4:49 am 
Offline
Member
Member

Joined: Sat Nov 10, 2012 1:16 pm
Posts: 62
Mikumiku747 wrote:
Also of note, the addresses needing to be page aligned is kind of a side effect of two things. One, the bottom 12 bits are used for something other than describing where the page tables / page mappings point (permissions, caching flags, etc), because flag bits of some kind are helpful to have in the structure and that's a good place to put them. Two, paging only ever operates on the top 20 bits of an address, so forcing all structures to be page aligned significantly simplifies the design of the paging hardware, since it never needs to do anything with the bottom 12 bits of the address.

Minor nitpick: it's more than 20 bits when you're working in long mode, obviously. (When processors get to the point that the full 64-bit addressing space is available, there's one other odd bit that'll crop up in long mode - the 6th-level page-mapping table is only 128 entries long, each of which covers 128PB, to cover the full 16EB memory space - but that's still probably a good decade or so out.)
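For the 32-bit case being discussed, the 20/12 split looks like this in C (the helper names are mine, for illustration only):

```c
#include <stdint.h>

/* 32-bit x86 paging splits a virtual address three ways:
 *   bits 31..22 -> page directory index (1024 entries)
 *   bits 21..12 -> page table index     (1024 entries)
 *   bits 11..0  -> byte offset inside the 4 KiB page (untouched by paging) */
static uint32_t dir_index(uint32_t vaddr)   { return vaddr >> 22; }
static uint32_t table_index(uint32_t vaddr) { return (vaddr >> 12) & 0x3FF; }
static uint32_t page_offset(uint32_t vaddr) { return vaddr & 0xFFF; }
```

For example, virtual address 0x00402ABC lands in directory entry 1, table entry 2, at offset 0xABC within the page.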


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 7:32 am 
Offline
Member
Member
User avatar

Joined: Thu Jul 12, 2012 7:29 am
Posts: 723
Location: Tallinn, Estonia
octacone wrote:
The difference between pages and frames.


Frames are the physical memory backing the pages: the RAM that is assigned to a valid page table entry.

_________________
Learn to read.


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 1:15 pm 
Offline
Member
Member
User avatar

Joined: Fri Aug 07, 2015 6:13 am
Posts: 1134
Thank you all for trying to explain how it all works. Now I have a much clearer picture of it. At the moment I do not have any special plans for things such as swapping, extra protection, different access levels etc... I just want to enable paging properly without failing. Also I decided to use identity mapping (only the first 4 MB). I will need paging for some of my upcoming tasks (ELF executables, multitasking, security rings, kernel code memory protection).

Also I think you guys didn't quite understand me. I was talking about standard C allocation. For example: window_t* window = (window_t*) malloc(sizeof(window_t)); I desperately need that kind of memory allocation. Not having it really slows me down (a ton!). I need to be able to allocate a specific number of bytes (not 4 KB chunks), like give me 3 bytes, give me 20 megabytes, give me 17 bytes, give me 1008 bytes... I need it for allocating buffers, structures, lists, etc... My actual question is how to do it? How do I use lists with it? What am I incrementing - the kernel_end stuff? Is that a virtual address (I guess so)? What if, let's say, I do malloc(50) and it returns a pointer to 0xC0FFEE but my ELF stores its .bss at 0xC0FFEE - will I overwrite it? How do I keep a list of already allocated things without creating a billion-byte-long array?

Now let's get back to paging itself. One question remains: how do I translate addresses, e.g. 0xABC to 0xDEF? Where is 0xABC stored, and where does 0xDEF need to be mapped? Does page_directory -> page_table[0] -> page[0] contain 0x0000? And to map it, do I then do page_directory -> page_table[0] -> page[0] = 0x0001, to map 0x0000 to 0x0001? Where is the first address located (the first physical address inside my struct)? As far as I know the 20 bit frame address (inside the page struct) holds the physical address of... what? And where is that located inside my array of structs?

I hope that my questions make some sense.
Edit: do I also need kmalloc to allocate the paging structures?!

_________________
OS: Basic OS
About: 32 Bit Monolithic Kernel Written in C++ and Assembly, Custom FAT 32 Bootloader


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 2:31 pm 
Offline
Member
Member

Joined: Tue Mar 04, 2014 5:27 am
Posts: 1108
:shock:


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 2:37 pm 
Offline
Member
Member
User avatar

Joined: Mon Jul 18, 2016 2:46 pm
Posts: 170
octacone wrote:
Thank you all for trying to explain how it all works. Now I have a much cleaner image of it. At the moment I do not have any special plans for things such as swapping, extra protection, different access levels etc... I just want to enable paging properly without failing. Also I decided to use identity mapping (only first 4MB). I will need paging for some of my upcoming tasks (ELF executables, multitasking, security rings, kernel code memory protection).

Also I think you guys didn't quite understand me. I was talking about standard C allocation. For example: window_t* = (window_t*) Malloc(sizeof(window_t)); I desperately need that kind of memory allocation. Not having it really slows me down (a ton!). I need to be able to allocate a specific amount of bytes (not 4KB stuff) like give me 3 bytes, give me 20 megabytes, give me 17 bytes, give me 1008 bytes... I need it for allocating buffers, structures, lists, etc... My intentional question is how to do it? How do I use lists with it? I mean how do work it out? What am I incrementing, like kernel_end stuff, is that a virtual address (I guess so). What if lets say I do Malloc(50) and it returns a pointer to 0xCOFFEE but my ELF stores its .bss at 0xCOFFEE, will I overwrite it? How do I keep a list of already allocated things without creating 1 billion bytes long array?

Now lets get back to paging itself. One question remains: how do I translate addresses -> 0xABC to 0xDEF, where is 0xABC stored and where does 0xDEF need to be mapped? Does page_directory -> page_table[0] -> page[0] contain 0x0000??? And then to map it I need to do like page_directory -> page_table[0] -> page[0] = 0x0001, to map 0x0000 to 0x0001. Where is the first address located (first physical address inside my struct)? As far as I know, 20 bit long frame address (inside page struct) holds the physical address to what? Where is what located inside my struct array thing?

I hope that my questions make some sense.
Edit: also I need kmalloc to allocate paging structures.!?


They already told you what's the difference between the C malloc and the malloc you have to implement.
In Windows there are different APIs to allocate memory; let's start from usermode:
1. your program starts and the CRT (C runtime) allocates a heap for it using HeapCreate. This memory is one big virtually contiguous block (it doesn't have to be physically contiguous; paging makes it appear contiguous)
2. you call malloc; malloc is USERMODE only and implemented in the CRT, and it uses a linked list to manage memory regions
3. malloc calls HeapAlloc and gets a piece of the memory block allocated by HeapCreate
4. there is also VirtualAlloc, which is the function you actually want to implement for usermode
5. in kernelmode there is ExAllocatePool, which allocates contiguous memory; again this CAN be backed by paging, but normally those pools really are physically contiguous. This is your kmalloc, which you have to implement first

So in short, you need:
- (ring 0) kmalloc: allocates contiguous physical memory (You can use a bitmap for example http://wiki.osdev.org/Page_Frame_Allocation)
- (ring 0) vmap: maps physical memory as virtual memory to a specific context
- (ring 3) malloc: uses kmalloc and vmap to create a heap and then gets pieces of mem from it and builds a linked list of it
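The bitmap approach linked above can be sketched in a few lines of C: one bit per 4 KiB frame, set means "in use". The frame count here is arbitrary for illustration; a real kernel sizes the bitmap from its memory map:

```c
#include <stdint.h>

/* Bitmap frame allocator: one bit per 4 KiB physical frame.
 * 128 frames covers only the first 512 KiB - enough for a demo. */
#define NUM_FRAMES 128
static uint32_t frame_bitmap[NUM_FRAMES / 32];

static int alloc_frame(void)                 /* returns a frame number */
{
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (!(frame_bitmap[i / 32] & (1u << (i % 32)))) {
            frame_bitmap[i / 32] |= 1u << (i % 32);
            return i;
        }
    }
    return -1;                               /* out of physical memory */
}

static void free_frame(int i)
{
    frame_bitmap[i / 32] &= ~(1u << (i % 32));
}
```

Frame number times 0x1000 gives the physical address of the frame, which is what you would then hand to vmap.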

Paging translates a VIRTUAL address to a PHYSICAL address.
This means you can map one physical address to many different virtual ones.
For your kmalloc problem for the page directory, simply hardcode the base address where you put it (like Microsoft did until Windows 10) and map this memory manually later. Don't forget to mark all those areas as used in your kmalloc routine so you don't accidentally overwrite them when you call kmalloc.
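Putting this together, identity mapping the first 4 MiB takes one page directory and one page table. This is only a sketch: the flag values are the standard x86 present/writable bits, and the actual CR3/CR0 loads are left as comments so the fragment stays runnable off-target:

```c
#include <stdint.h>

#define PTE_PRESENT  0x1u
#define PTE_WRITABLE 0x2u

/* Both structures must be 4 KiB aligned for the MMU to accept them. */
static _Alignas(4096) uint32_t page_directory[1024];
static _Alignas(4096) uint32_t first_table[1024];

static void identity_map_first_4mib(void)
{
    /* page i -> frame i, so virtual == physical for 0..4 MiB */
    for (uint32_t i = 0; i < 1024; i++)
        first_table[i] = (i * 0x1000u) | PTE_PRESENT | PTE_WRITABLE;

    /* directory entry 0 covers virtual 0..4 MiB; it must hold the
     * PHYSICAL address of the table (identical here, since the table
     * itself lives inside the identity-mapped region) */
    page_directory[0] = ((uint32_t)(uintptr_t)&first_table[0] & ~0xFFFu)
                        | PTE_PRESENT | PTE_WRITABLE;

    /* On real hardware you would now load CR3 with the physical
     * address of page_directory and set CR0.PG, e.g. (GCC inline asm):
     *   asm volatile("mov %0, %%cr3" :: "r"(page_directory));
     *   ...read CR0, OR in 0x80000000, write it back...
     * Omitted here so the sketch compiles and runs on the host. */
}
```

Accessing any address above 4 MiB after this would page fault, which is why the kernel, the directory, and the table all need to live inside the mapped region before paging is switched on.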


 Post subject: Re: Troubles with understanding paging and malloc (free)
PostPosted: Mon Feb 20, 2017 2:42 pm 
Offline
Member
Member
User avatar

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
I assume for the moment that you are talking about memory allocation in the kernel, which is rather different from how you would handle it in userspace. There are some similarities, of course, and it is possible to use the same algorithms and data structure designs in both (though you probably wouldn't), but they would need to be handled separately.

So, for now, let's focus on how the kernel manages memory for itself.

The first thing to understand is that the majority of kernel designs in virtual memory systems use what is called a 'Higher Half Kernel', in which the kernel maps itself into each process' memory map at a logical address space starting at address 2^(n-1) of a 2^n address space; so for a 32-bit address space, it would be mapped to the half of memory starting at byte 2,147,483,648. Now, it doesn't have to use a full half of memory (recent 32-bit versions of Linux, for example, use the upper 1 GiB), but the 'higher half' approach simplifies a lot of things, because it can (among other things) simply test the high bit(s) of an address to see if it is in userspace or kernel space. Since most processes will never use anywhere close to 2 GiB of memory, setting aside half the 4 GiB memory space is usually not too burdensome. Each process has this upper memory mapped into its own private virtual address space, meaning that the kernel can access anything in each process as it needs to (and can selectively give the process access to shared buffers and so forth that are in the area otherwise reserved for kernel space). The kernel is always in the process space, and all the processes share the same kernel code (though not necessarily the same kernel data structures).
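The "test the high bit(s)" trick is literally one comparison. With an even 2 GiB / 2 GiB split the kernel base is 0x80000000 (the constant and function name here are illustrative, not from any particular kernel):

```c
#include <stdint.h>

/* Higher-half split for a 32-bit address space: everything at or
 * above 2^31 belongs to the kernel. */
#define KERNEL_BASE 0x80000000u

static int is_kernel_address(uint32_t vaddr)
{
    return vaddr >= KERNEL_BASE;    /* top bit set -> kernel space */
}
```

A kernel using a 3 GiB / 1 GiB split like 32-bit Linux would instead compare against 0xC0000000, at the cost of checking two bits rather than one.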

This, by itself, makes managing kernel memory much easier, as it means that there is almost always plenty of logical addressing for the kernel to use, and it doesn't have to compete with applications for memory. It does mean that the applications each have a smaller effective address space, but as I already said, this is rarely a major problem today. The issue now is how to keep track of how you are using that memory.

Note that this is, again, a separate issue from how you map the logical memory addresses to individual pages either in memory or on disk, though for the kernel memory manager you will be free to put a lot more attention into how it interacts with the paging manager, as they will both be part of the kernel itself. You can just have a "kernel malloc" that papers over those details, and might want to for the initial versions; but since the paging manager plays such a crucial role in the performance of everything else, you probably want to at least give some design thought to how you can integrate the two without driving yourself crazy (or crazier).

This brings us to the question of how you want to handle the memory allocation itself (still in the kernel). Unfortunately, allocation algorithms are still an open research topic, and while there are a lot of different methods to choose from, none of them is really optimal for all needs.

The first question is: how much memory do you need for the code, the static data (things which will be allocated once and never re-sized), and the kernel stack(s), and which parts of each need to be kept in physical main memory at all times (that is to say, never paged out)? The stack is the trickiest of these, as it actually changes size over time, whereas the other two have fixed image sizes, and for performance you never want to page out the top (say) four pages' worth of stack. It is probably wisest to allocate about double the amount of total stack space you expect to use, and have the pager keep a 'window' of half of that locked in memory at all times. You may find it easier still to simply keep all of the stack in physical memory, but that's up to you.

Anyway, whatever is left can be used as heap space. Now, as a rule, you want to avoid heap allocation in the kernel as much as possible, but that can lead to problems of overrunning fixed data structures; a classic example of this problem is in kernels which had a fixed process table with (say) 64 slots, and were then unable to spawn a 65th process. While some things (such as the memory free/allocated lists themselves) can be made 'big enough', trying to guess how big some others might need to be is likely to lead to giving them either too much memory (not a huge problem, but a problem) or too little (which can be a serious problem). So you want to identify which data structures absolutely cannot be precomputed, and use heap space for only those data structures.
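As a taste of the allocation strategies to come, here is about the simplest "list of blocks" heap: a first-fit allocator with a small header before each block recording its size and free/used state. It is a sketch only (no coalescing, no locking, fixed backing array, and it always splits blocks):

```c
#include <stddef.h>
#include <stdint.h>

/* Each allocation is preceded by this header; the headers form a
 * singly linked list covering the whole heap. */
struct block {
    size_t        size;
    int           free;
    struct block *next;
};

static _Alignas(16) uint8_t heap[8192];
static struct block *head;

static void heap_init(void)
{
    head = (struct block *)heap;
    head->size = sizeof(heap) - sizeof(struct block);
    head->free = 1;
    head->next = NULL;
}

static void *kmalloc_firstfit(size_t size)
{
    size = (size + 15) & ~(size_t)15;            /* round request up */
    for (struct block *b = head; b != NULL; b = b->next) {
        if (b->free && b->size >= size + sizeof(struct block)) {
            /* split: carve the request off the front of the block */
            struct block *rest =
                (struct block *)((uint8_t *)(b + 1) + size);
            rest->size = b->size - size - sizeof(struct block);
            rest->free = 1;
            rest->next = b->next;
            b->size = size;
            b->free = 0;
            b->next = rest;
            return b + 1;        /* usable memory starts after header */
        }
    }
    return NULL;                 /* out of heap */
}

static void kfree(void *p)
{
    if (p != NULL)
        ((struct block *)p - 1)->free = 1;       /* no coalescing */
}
```

Freeing a block and allocating something smaller reuses the same spot, which is exactly the behaviour the "linked list of memory regions" descriptions earlier in the thread are getting at.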

Also, you might want to start by pre-allocating some specific amount of space for the individual data structures, to act as a memory pool. This should make it easier to maintain locality of reference, and would allow you to apply more specific allocation methods to different ones later on, though at the cost of greater complexity.

I need to go take care of some things, but I'll try to post something about different allocation strategies later.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.

