devc1 wrote:
rdos wrote:
devc1 wrote:
What about a high-end server computer with 20 hard drives and 64 GB of RAM? What would you do in that case?
I think this is one of the limitations that 64-bit mode addresses?
Not at all. My new disc buffering scheme will use physical addresses, not linear ones. File systems will be run the "microkernel" way in their own processes, each with 2G of private linear memory. That's enough to cache metadata for any reasonable file system. Of course, each of the hard drives will have its own server process.
I don't think the issue is so much what cannot be done with protected mode, but rather how long protected mode will keep working and what the performance drawbacks might be as Intel and AMD optimize their processors for long mode.
That is not what I mean. I mean that a high-end server will use MMIO hard drive controllers (AHCI, NVMe, ...) because they are faster, and that you can't access the full address space or the full RAM because your OS can only see 64 GB of address space.
I have an AHCI driver, and I could write an NVMe driver (I probably will some day). My OS can see all physical RAM because PAE uses 64-bit page-table entries, and so it can address all physical RAM. As I wrote earlier in the thread, my spectrum analyzer has 128 GB of RAM, and I can use all of it in my analysis program by memory-mapping a 2M buffer at a time.
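A PAE page-directory entry is 64 bits wide, so a 2M page can be backed by physical memory anywhere above 4 GB even though linear addresses stay 32-bit. As a minimal sketch (illustrative C, not the actual RDOS code):

/* Build a PAE page-directory entry that maps one 2 MB page at 'phys'.
   'phys' must be 2 MB aligned and may lie above 4 GB, since the entry
   itself is 64 bits wide. */
#include <stdint.h>

#define PDE_PRESENT   (1ull << 0)
#define PDE_WRITABLE  (1ull << 1)
#define PDE_LARGE     (1ull << 7)   /* PS bit: entry maps a 2 MB page */

static uint64_t make_2mb_pde(uint64_t phys)
{
    return (phys & ~0x1FFFFFull) | PDE_LARGE | PDE_WRITABLE | PDE_PRESENT;
}

Each analysis process just installs an entry like this for its current 2M buffer and replaces it when it moves on to the next one.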
The analyzer hardware is a PCIe FPGA that uses BAR0 as an array of physical addresses that it should write sample data to. The FPGA uses 128-byte PCIe transactions to write directly to physical memory. On the OS side, the OS will allocate the desired number of 2M memory blocks, put them in BAR0 and start the data collection. Then the analysis program will start 12 processor cores that memory-map their current buffer and do the analysis in parallel with the sampling. I think this is a high-end application that I originally thought would require long mode, but this is not so. It works perfectly well in protected mode too, although a bit slower since I cannot use 64-bit registers.
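Roughly, the setup looks like this (a sketch with made-up helper names such as alloc_phys_2mb and start_sampling, not the real driver code):

/* Hand the FPGA a list of 2 MB physical blocks through BAR0 and start
   the capture. BAR0 is treated as an array of 64-bit physical addresses
   the FPGA writes sample data to with 128-byte PCIe transactions. */
#include <stdint.h>

#define BUF_COUNT 64                       /* example: 64 x 2 MB of sample buffers */

extern volatile uint64_t *bar0;            /* mapped BAR0 of the FPGA (assumed) */
extern uint64_t alloc_phys_2mb(void);      /* allocate one 2 MB physical block (assumed) */
extern void start_sampling(void);          /* tell the FPGA to begin (assumed) */

static void setup_capture(void)
{
    for (int i = 0; i < BUF_COUNT; i++)
        bar0[i] = alloc_phys_2mb();        /* each entry is a physical address */

    start_sampling();                      /* the 12 analysis cores then map one
                                              block at a time and process it */
}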
I have a limit on how much physical memory I support, but it is a software limit. I currently have a limit of 256 GB, but it can be increased if needed. The limit comes from the need to map the physical bitmap in kernel linear memory. Each byte can map eight 4 KB pages, 1 KB can map 32 MB, 1 MB can map 32 GB, and 1 GB can map 32 TB. So, there is some limit around 10-20 TB or so, but I have no hardware with that much memory, and so I don't reserve space for such a large bitmap. Actually, the largest kernel linear area is for the page-based allocator, which currently is mostly used for disc buffers. With the new filesystem structure, this area will no longer be used for disc buffers, and so it can be reduced considerably and the physical bitmap can be increased.
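The bitmap arithmetic as a sketch (one bit per 4 KB page, so the bitmap needs 1/32768 of the physical memory it covers):

/* Size of a 4 KB-page bitmap for 'phys_size' bytes of RAM:
   one bit per page, i.e. phys_size / (4096 * 8) bytes.
   Examples: 256 GB of RAM -> 8 MB of bitmap, 32 TB -> 1 GB of bitmap. */
#include <stdint.h>

static uint64_t page_bitmap_bytes(uint64_t phys_size)
{
    return phys_size / (4096 * 8);
}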
I think it is possible to increase the amount of supported physical memory further. One idea would be to use a page-based bitmap only up to 8 GB or so, and for higher addresses use a 2 MB bitmap instead. In that bitmap, 1 byte can map 8 x 2 MB, 1 KB can map 16 GB, 1 MB can map 16 TB and 1 GB can map 16384 TB. This configuration could support at least 5000 TB of physical memory.
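A sketch of how such a split could look (the 8 GB boundary, the names, and bit-set-means-free are all just illustrative assumptions):

/* Hybrid physical bitmap: 4 KB granularity below SPLIT, 2 MB above it. */
#include <stdint.h>
#include <stdbool.h>

#define SPLIT (8ull << 30)                 /* assumed 8 GB boundary */

extern uint8_t small_bitmap[];             /* 1 bit per 4 KB page below SPLIT */
extern uint8_t large_bitmap[];             /* 1 bit per 2 MB page above SPLIT */

static bool phys_page_free(uint64_t phys)
{
    if (phys < SPLIT) {
        uint64_t bit = phys >> 12;                 /* 4 KB page index */
        return small_bitmap[bit >> 3] & (1u << (bit & 7));
    } else {
        uint64_t bit = (phys - SPLIT) >> 21;       /* 2 MB page index */
        return large_bitmap[bit >> 3] & (1u << (bit & 7));
    }
}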
Long mode OSes that map all physical memory into the 48-bit linear address space can support at most 256 TB of memory, and in practice probably a lot less.
So, no, there is no practical limit on how much physical memory a 32-bit OS can support. It's only a matter of smart algorithms.