OSDev.org

The Place to Start for Operating System Developers

All times are UTC - 6 hours




 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Tue Jun 27, 2017 4:53 am 
Joined: Sun Jul 14, 2013 6:01 pm
Posts: 442
LtG wrote:
The issue I have with the above is that the 10-30GB/s is not a limit of the CPU, it's the RAM bandwidth.


Now this is even more wrong.

It depends on the CPU and the current configuration: these days we have 2- or 4-channel DDR3 or DDR4 memory setups, and a 4-channel DDR4 configuration can reach around 70 GB/s, for example. Of course, you have to use a lot of threads to be able to saturate all of that bandwidth.

_________________
Operating system for SUBLEQ cpu architecture:
http://users.atw.hu/gerigeri/DawnOS/download.html


Last edited by Geri on Tue Jun 27, 2017 6:00 am, edited 1 time in total.

 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Tue Jun 27, 2017 5:06 am 
Joined: Fri Aug 19, 2016 10:28 pm
Posts: 360
zaval wrote:
I just remember that I read in the WDK documentation that the buddy mechanism is used for the pools.
True, but just a remark, because this can be a point of confusion. MS uses the term "buddy scheme" in the most general sense. Normally buddy allocators operate on chunk sizes that are either powers of 2 or follow a Fibonacci sequence. The size progression MS uses is linear: something like size(n) = n * 16 bytes on x64. What this means is that when a chunk of size(p) is freed next to a free chunk of size(q), the two are joined into a chunk of size(p+q+x), where x comes from the extra header (the header is 16 bytes, so actually x = 1). Both neighboring chunks are considered for coalescing. It is just not a power-of-2 approach, where identical chunks are coalesced into one twice their size.
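For what it's worth, here is a minimal sketch of that kind of linear coalescing, assuming one 16-byte header per chunk; the names (chunk_hdr, coalesce_with_next) are made up for illustration and are not the actual Windows pool internals:
Code:
#include <stdint.h>
#include <stddef.h>

#define UNIT 16u

/* Hypothetical chunk header occupying exactly one 16-byte unit. */
typedef struct chunk_hdr {
    uint16_t size_units;  /* total chunk size (header included), in 16-byte units */
    uint16_t prev_units;  /* size of the previous chunk, for backward coalescing  */
    uint8_t  is_free;
    uint8_t  pad[11];
} chunk_hdr;

static chunk_hdr *next_chunk(chunk_hdr *c)
{
    return (chunk_hdr *)((uint8_t *)c + (size_t)c->size_units * UNIT);
}

/* Linear (non power-of-2) coalescing: merging absorbs the second header, so in
 * payload terms size(p) + size(q) becomes size(p+q+1). The backward direction
 * works the same way via prev_units. */
static void coalesce_with_next(chunk_hdr *c, void *pool_end)
{
    chunk_hdr *n = next_chunk(c);
    if ((void *)n < pool_end && n->is_free) {
        c->size_units += n->size_units;
        chunk_hdr *after = next_chunk(c);
        if ((void *)after < pool_end)
            after->prev_units = c->size_units;  /* keep the back-link consistent */
    }
}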


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Wed Jun 28, 2017 9:23 pm 
Joined: Sat Feb 27, 2010 8:55 pm
Posts: 147
Octocontrabass wrote:
Microsoft has to support the OS. It's reasonable to expect that they would like to be able to test it themselves before claiming that it will work, especially when third-party drivers are involved.



LtG wrote:
As others have said, MS actually has to _support_ their customers (to varying degrees). If they don't have access to a machine with more than 2 TB of RAM, then they couldn't have tested their software on even a single such machine. The reality is that you will quite likely introduce limits into your code (as people have been doing by using uint32_t, as they did when they used two digits to encode the year, causing Y2K, and as they did with the 32-bit Linux epoch time, causing Y2K38).

So I would say that the only sane thing is to actually test it at least a little; getting 2 TB machines is probably a bit difficult, though, and I'm not sure what kind of servers are available these days.


To me, limiting support to physical RAM amounts you've actually tested is a bit like writing a file system that only allows file names you've actually tested. If you've stored one character, you've stored them all; if you've mapped one gigabyte, you've mapped them all.

I just can't help but feel MS is being unreasonably paranoid. Is it a combination of a multi-billion-dollar reputation on the line and the resources to obtain multi-terabyte machines that allows them to limit their OS's physical RAM usage to only what they've tested, or is there a legitimate reason for concern?

I suppose what I'm getting at -- and this is the reason I've posted this in Design & Theory -- is this: Should we limit our OSes only to that which we have tested on real hardware, or is it reasonably safe to assume once a PMM algorithm has been thoroughly tested it can scale as high as the CPU will support?


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Wed Jun 28, 2017 9:30 pm 
Joined: Sat Feb 27, 2010 8:55 pm
Posts: 147
Octocontrabass wrote:
The problem is that drivers need to use physical addresses for things like DMA. When the RAM being used for DMA comes from a physical address above 4GB but the driver tells the hardware to use a physical address below 4GB, bad things happen.


omarrx024 wrote:

The device drivers mainly communicate with the device itself using memory-mapped I/O and DMA, both of which use physical addresses, not linear addresses. With PAE, the linear addresses are indeed only 32 bits, but the physical addresses are normally 36 bits. So even if the memory is mapped at, for example, 3 GB, or any other address in the 32-bit linear address space, the actual physical address of that page may be 6 GB, or any other address in the 36-bit physical address space. A driver that is not aware of the existence of physical addresses larger than 32 bits may truncate a 36-bit address, taking only the lowest 32 bits. A buffer at physical address 4 GB (0x100000000), for example, would be sent to the device as address zero by a driver that is not aware of PAE. My instincts tell me that the device would not work properly that way. ;)


LtG wrote:
If drivers lived fully in the virtual address space, and thus had to request DMA (and other such things) from Windows, then this wouldn't be much of an issue driver-wise. However, if drivers deal with "raw" physical addresses and internally use uint32_t (or any pointer type, with the code compiled as 32-bit so that pointers are 32 bits), then larger addresses will be truncated and of course cause weird behaviour until something crashes.



I realize I wasn't clear in my initial post when I said the OS can "map" devices where it wants to; I wasn't referring to mapping physical addresses to virtual addresses via paging. I meant: isn't the OS able to dictate which physical addresses devices make use of? If it limits the devices' physical address space to below 4 GB, then even truncated addresses will work just fine.


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Wed Jun 28, 2017 10:48 pm 
Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
azblue wrote:
To me, limiting support to physical RAM amounts you've actually tested is a bit like writing a file system that only allows file names you've actually tested. If you've stored one character, you've stored them all; if you've mapped one gigabyte, you've mapped them all.

If you have a 10 TB drive and your FS is supposed to support files of any size, then I'd at least test creating a 10 TB file. I probably wouldn't limit it to 10 TB just because I didn't have access to a larger drive.

Of course with RAID and such you can "virtually" extend the drive and test even larger files.

I also would use unit tests to "prove" that my memory manager works under all (supported) conditions. So I probably wouldn't add a 2TB RAM limit, but that's just me.

azblue wrote:
I just can't help but feel MS is being unreasonably paranoid. Is it a combination of a multi-billion-dollar reputation on the line and the resources to obtain multi-terabyte machines that allows them to limit their OS's physical RAM usage to only what they've tested, or is there a legitimate reason for concern?

AFAIK there's nothing bad waiting around the corner after 2 TB, so in that sense MS is being "paranoid".

azblue wrote:
I suppose what I'm getting at -- and this is the reason I've posted this in Design & Theory -- is this: Should we limit our OSes only to that which we have tested on real hardware, or is it reasonably safe to assume once a PMM algorithm has been thoroughly tested it can scale as high as the CPU will support?

As said, I wouldn't (except potentially for licensing reasons, but I doubt that applies to any of us =)). However, I would use _extensive_ unit testing for all the core stuff, including the PMM and VMM, proving that it works under all supported conditions, and have boot give an error if, for example, there's too little memory to work with.
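To give an idea of what I mean, here is a minimal host-side sketch, assuming a toy bitmap PMM (pmm_init/pmm_alloc/pmm_free are made-up names, not anyone's real API). The point is that only the bookkeeping is exercised, so you can "pretend" the machine has 8 TiB without owning that much RAM:
Code:
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy bitmap PMM: one bit per 4 KiB frame. */
static uint8_t *bitmap;
static uint64_t total_frames;

static void pmm_init(uint64_t ram_bytes)
{
    total_frames = ram_bytes / 4096;
    bitmap = calloc((total_frames + 7) / 8, 1);
    if (!bitmap)
        exit(1);
}

static int64_t pmm_alloc(void)              /* returns a frame number, or -1 */
{
    for (uint64_t f = 0; f < total_frames; f++)
        if (!(bitmap[f / 8] & (1u << (f % 8)))) {
            bitmap[f / 8] |= 1u << (f % 8);
            return (int64_t)f;
        }
    return -1;
}

static void pmm_free(uint64_t f)
{
    bitmap[f / 8] &= (uint8_t)~(1u << (f % 8));
}

int main(void)
{
    pmm_init(8ULL << 40);                   /* pretend the machine has 8 TiB */
    int64_t a = pmm_alloc();
    int64_t b = pmm_alloc();
    assert(a == 0 && b == 1);               /* frames are handed out in order */
    pmm_free((uint64_t)a);
    assert(pmm_alloc() == 0);               /* a freed frame is reusable */
    /* A real suite would also cover exhaustion, double-free, the last frame, ... */
    printf("frames managed: %llu\n", (unsigned long long)total_frames);
    return 0;
}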


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Wed Jun 28, 2017 10:54 pm 
Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
azblue wrote:
I realize I wasn't clear in my initial post when I said the OS can "map" devices where it wants to; I wasn't referring to mapping physical addresses to virtual addresses via paging. I meant: isn't the OS able to dictate which physical addresses devices make use of? If it limits the devices' physical address space to below 4 GB, then even truncated addresses will work just fine.


As I understand it, MS actually ran into this issue and presumably there was nothing they could do, since the issue is in the drivers.

The OS could _potentially_ dictate the physical addresses, and then there would be no issue. My guess is that Windows drivers were allowed to handle their own DMA, and thus when doing DMA they need to deal with physical addresses. In DMA, the driver tells the hardware device to which _physical_ address it wants the hardware to write (or from which to read) data. Even if these addresses are provided by the OS, the driver needs to be able to pass them on, and if the physical address is 64-bit but the driver assumes (incorrectly) that it's 32-bit, then you're in trouble. It doesn't matter what the virtual address is for the driver: the MMU sits between the CPU and the RAM, not between the devices and the RAM*.
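As a toy illustration of that failure mode (the structure and driver_start_dma are made up, not any real driver API):
Code:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical legacy driver structure: its author assumed physical addresses
 * always fit in 32 bits, which was true on every machine it was tested on. */
struct dma_descriptor {
    uint32_t buffer_phys;   /* too narrow for physical addresses >= 4 GiB */
    uint32_t length;
};

static void driver_start_dma(struct dma_descriptor *d, uint64_t buffer_phys, uint32_t len)
{
    d->buffer_phys = (uint32_t)buffer_phys;   /* silent truncation */
    d->length = len;
}

int main(void)
{
    struct dma_descriptor d;
    uint64_t buf = 0x100000000ULL;   /* the OS hands the driver a buffer at exactly 4 GiB */

    driver_start_dma(&d, buf, 4096);
    printf("device will be told to DMA to %#x\n", (unsigned)d.buffer_phys);   /* prints 0 */
    return 0;
}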

I'm guessing an IOMMU could be used to solve this issue.

*) The MMU can't be between devices and RAM, given that the current CR3 (page directory) keeps changing all the time as the OS switches between processes; the mappings would change constantly and the device would write to random memory locations. The IOMMU, of course, isn't tied to the currently executing process, so there's no such issue with it.


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Thu Jun 29, 2017 7:52 am 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

azblue wrote:
To me, limiting support to physical RAM amounts you've actually tested is a bit like writing a file system that only allows file names you've actually tested. If you've stored one character, you've stored them all; if you've mapped one gigabyte, you've mapped them all.

I just can't help but feel MS is being unreasonably paranoid. Is it a combination of a multi-billion-dollar reputation on the line and the resources to obtain multi-terabyte machines that allows them to limit their OS's physical RAM usage to only what they've tested, or is there a legitimate reason for concern?

I suppose what I'm getting at -- and this is the reason I've posted this in Design & Theory -- is this: Should we limit our OSes only to that which we have tested on real hardware, or is it reasonably safe to assume once a PMM algorithm has been thoroughly tested it can scale as high as the CPU will support?


We should design our physical memory management to handle full 64-bit physical addresses (because Intel or AMD will increase the "current architectural 52-bit limit" eventually), and make sure it works for whatever we have, and not limit the maximum.

Note that this takes a little care, because it's tempting to pack other information into the upper (unused/reserved) bits of physical addresses, like "uint64_t value = other_stuff << 52 | physical_address". This is especially true for lockless algorithms on older 64-bit CPUs that don't support the CMPXCHG16B instruction (and only support CMPXCHG8B), where you want to (atomically) compare and exchange a physical address plus something else.
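A small sketch of that kind of packing, assuming the current 52-bit architectural limit (the helper names are made up for illustration):
Code:
#include <stdint.h>

/* Pack a physical address and a small tag/counter into one 64-bit word so the
 * pair can be swapped with a single 8-byte compare-and-exchange. This
 * hard-wires the assumption that physical addresses never exceed 52 bits. */
#define PHYS_BITS 52
#define PHYS_MASK ((1ULL << PHYS_BITS) - 1)

static inline uint64_t pack(uint64_t phys, uint64_t tag)
{
    return (tag << PHYS_BITS) | (phys & PHYS_MASK);
}

static inline uint64_t unpack_phys(uint64_t packed)
{
    return packed & PHYS_MASK;   /* any address bit above bit 51 is silently lost */
}

static inline uint64_t unpack_tag(uint64_t packed)
{
    return packed >> PHYS_BITS;
}

Combined with a compare-and-exchange on a single 64-bit word this gives a lock-free "address plus counter" pair, and it's also exactly the kind of code that quietly breaks the day physical addresses grow beyond the packed width.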

I would be tempted to assume that this is the problem Microsoft has. They have a lot of code written by many developers (and not just drivers, but in the kernel itself), and can't easily guarantee that nobody has done any "pack other information into the upper (unused/reserved) bits of physical addresses" hackery (without testing to see if something goes wrong). The difference between us and Microsoft is that we don't have code written by many developers.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Thu Jun 29, 2017 10:08 am 
Joined: Fri Aug 19, 2016 10:28 pm
Posts: 360
Brendan wrote:
Note that this takes a little care; because it's tempting to pack other information into the upper (unused/reserved) bits of physical addresses, like "uint64_t value = other_stuff << 52 | physical address". This is especially true for lockless algorithms on older 64-bit CPUs that don't support the CMPXCHG16B instruction (and only support CMPXCHG8B); where you want to (atomically) compare and exchange a physical address plus something else.
Lock-free pointer packing affected virtual memory only, because Windows has no linear mapping of physical memory the way Linux does. In Windows 8.1, MS simply dropped support for those early-generation AMD CPUs in order to get kernel ASLR working, and the limitation went away. The only thing I can imagine that could have had a similar effect on physical memory, aside from hardware limitations, is drivers packing bits into DMA-related structures. It's not something I am familiar with, but it seems like a rather pointless optimization to me (because it no longer facilitates lock-freedom).


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Thu Jun 29, 2017 10:54 am 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

simeonz wrote:
Lock-free pointer packing affected virtual memory only, because Windows has no linear mapping of the physical memory like Linux.


Pointer packing definitely affected some (earlier) lock-free algorithms Windows used for virtual memory management (but this has nothing to do with the "map all physical memory into kernel space" nonsense that Linux does, which is a completely unrelated issue).

Physical address packing may or may not have affected, and may or may not still affect, algorithms (that may or may not be lock-free) that Windows uses for physical memory management or anything else, or that device drivers use for anything.

The point I'm making is that there are so many possibilities that it's impossible for Microsoft to be sure.

simeonz wrote:
The only thing that I imagine could have had similar effect for the physical memory, aside from hardware limitations, is drivers packing bits in DMA related structures. It's not something I am familiar with, but seems a little retarded optimization to me (because, it does not facilitate lock-freedom anymore).


For one simple (contrived and fictitious) example, maybe the VFS uses physical pages to cache file data and the VFS has an array of 32-bit entries per file and does:
Code:
    file_structure->page_cache[entry_number] = (is_cached_flag << 31)
                                               | (currently_being_fetched_flag << 30)
                                               | (was_modified_flag << 29)
                                               | (physical_address_of_page >> 12);

...such that physical addresses have to fit in "29+12 = 41 bits", and the OS can't have more than 2 TiB of RAM because that'd need more than 41 bits.
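To spell out the arithmetic with a hypothetical decode helper for that same contrived layout: 3 flag bits leave 29 bits for the page frame number, and (2^29 - 1) * 4096 is just under 2^41 bytes = 2 TiB.
Code:
#include <stdint.h>

/* Decode the contrived 32-bit page_cache entry above (illustration only). */
static inline uint64_t entry_to_phys(uint32_t entry)
{
    return (uint64_t)(entry & 0x1FFFFFFFu) << 12;   /* low 29 bits are the PFN */
}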


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Physical Memory - Proper Maximum Limit
PostPosted: Thu Jun 29, 2017 11:29 am 
Joined: Fri Aug 19, 2016 10:28 pm
Posts: 360
Brendan wrote:
Pointer packing definitely affected some (earlier) lock-free algorithms Windows used for virtual memory management (but this has nothing to do with the "map all physical memory into kernel space" nonsense that Linux does, which is a completely unrelated issue).
What I meant was that on Linux the virtual memory embeds, and thus limits, the physical memory, and that this is not the case on Windows, where the FS cache manager can store data in physical memory without mapping most of it, at least not all at the same time. (It may then map some of it and thrash the TLB, but that is a different issue.) In other words, on Windows the physical memory and virtual memory limits are designed to be decoupled, whereas this would be a thornier condition with the current Linux design.

Brendan wrote:
For one simple (contrived and fictitious) example, maybe the VFS uses physical pages to cache file data and the VFS has an array of 32-bit entries per file and does:
Code:
    file_structure->page_cache[entry_number] = (is_cached_flag << 31)
                                               | (currently_being_fetched_flag << 30)
                                               | (was_modified_flag << 29)
                                               | (physical_address_of_page >> 12);
Fair enough. I was thinking about large values controlled by versioned pointers, not updating small values in-place atomically. Though if this was done by a third-party driver developer (as MS claims), not VFS code, it would be a bold move.

