EDIT: I fired off my initial
mea culpa before I could take a closer look at the rest of Qbyte's post. Rather than making two posts in a row, I am editing this one instead.
Qbyte wrote:
schol-r-lea wrote:
(at the moment, something larger than, say, 256 GB, for a CPU in a very large HPC installation; while this will doubtless increase over time, in order to implement a physical 64-bit address space you would need to buy a bigger universe, as there are fewer than 2^64 baryons in all of visible space).
Firstly, your numbers are off by over 60 orders of magnitude. 2^64 roughly equals 1.84x10^19, while it's estimated that there are around 10^80 atoms in the universe, each of which is composed of 1 or more baryons. And since an average atom is around 0.1 nm wide, there are 10^7 atoms along a 1 mm length, 10^14 in a square mm, and 10^21 in a cubic mm. If we have a chip with dimensions 50x50x2 mm, that chip contains 5x10^24 atoms, which is enough to feasibly support 2^64 bits of physically addressable memory once technology reaches that level (that's 50,000 atoms per bit of information, and experimental demonstrations have already achieved far better than that).
Gaaah, I can't believe I said something so stupid. That was a serious brain fart, sorry.
Qbyte wrote:
Secondly, I'm well aware of the distinction between virtual address spaces and virtual memory, since memory virtualization doesn't imply that each process has its own virtual address space. Every process can share a single virtual address space, and the hardware then maps those virtual addresses to real physical ones. That can indeed make sense to do if there is a large disparity between the size of the virtual and physical address space, but it becomes frivolous once the physical address space is equal or comparable in size to the virtual one. Therefore, if the above mentioned engineering feat can be achieved, your concerns about VAT being essential for a practical SASOS evaporate.
Eventually? Perhaps. If your numbers are correct - and I did peek at Solar's post, so I am not convinced anyone's are, right now - then it might even be within the next decade or so, though I suspect other factors such as the sheer complexity of the memory controller this would require will at least delay it. Still, the assertion seems premature, at the very least.
Note that it isn't as if the memory is wired directly to the CPU in current systems, and IIUC - correct me if I am wrong - this is at least as much of a limiting factor right now as the memory densities are. Perhaps this is wrong, and even if it isn't, some breakthrough could change it, but I wouldn't want to count on that.
Qbyte wrote:
Schol-R-LEA wrote:
I gather that your idea of segments is about partitioning elements - whether processes or individual objects - within such a space, and relying on the sheer scope of the address space to give an effective partitioning (provided that you maintain spacings of, say, 16 TB between each object, something that is entirely workable in a 64-bit address space) - if so, I get your intent, as it is an approach I am considering for certain projects lately myself.
That is indeed a useful memory management scheme that a SASOS would make use of. Of course, it can do so in a much less naive manner by taking hints from programs and objects about the memory usage properties of each individual object and storing them accordingly.
Well, yes, of course; that was just a coarse-grained model, a starting point. An organizing principle rather than a system in itself, shall we say.
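To make that coarse-grained starting point concrete, here is roughly the kind of thing I have in mind - purely a sketch in C, with an arbitrary 16 TiB stride and made-up names, not code from any real system:

Code:
#include <stdint.h>

/* Naive sparse layout: every object gets its own 16 TiB slot in a
 * shared 64-bit virtual address space, so neighboring objects can
 * grow for a very long time before they could ever collide. */
#define SLOT_SHIFT   44ULL                   /* 2^44 bytes = 16 TiB      */
#define REGION_BASE  0x0000100000000000ULL   /* arbitrary starting point */

static uint64_t object_base(uint64_t object_index)
{
    /* Object n simply lives at REGION_BASE + n * 16 TiB; no free list,
     * no searching, just hand out the next index. */
    return REGION_BASE + (object_index << SLOT_SHIFT);
}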
There is a name for this approach, actually, as it has been discussed going back to the mid-1980s: 'sparse memory allocation'. According to See MIPS Run, the reason the R4000 series went to 64 bits was in anticipation of OS designers applying sparse allocation via VAT - keeping in mind that memories of the time were generally under 64 MiB even for large mainframes, the typical high-end workstation (which is what MIPS was mostly expected to be used for) had between 4 MiB and 8 MiB, and a PC with even 4 MiB was exceptional. Whether or not this claim is correct, I cannot say, but the fact that they felt it was worth jumping to 64 bits in 1990 does suggest that they weren't thinking in terms of direct memory addressability at the time.
Edit: minor corrections on the dates.
But I digress.
Qbyte wrote:
For example, things like music, video and image files are basically guaranteed not to change in size, so they would be marked as such, and the OS would be free to store them as densely as possible (no spacing between them and neighboring objects at all). Then, there are objects such as text files which are likely to change in size but still stay within a reasonably small space, so they would be stored with suitably sized gaps between them and neighboring objects. Finally, there would be large dynamic data structures like those used by spreadsheets, simulations, etc., which would be given gigabytes or even terabytes of spacing as you suggested. In an exceedingly rare worst-case scenario where a large data structure needs more contiguous space than is currently available, the amortized cost of a shuffling operation would be extremely low.
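In code terms, I take that to mean something along these lines - again just a sketch, with hint names I have made up for illustration:

Code:
#include <stdint.h>

/* Hypothetical growth hints a program could attach to an object. */
enum growth_hint {
    GROW_NEVER,   /* media files and the like: pack densely, no gap   */
    GROW_SMALL,   /* text files: likely to grow, but only modestly    */
    GROW_LARGE    /* spreadsheets, simulations: could grow enormously */
};

/* Pick how much empty address space to leave after an object. */
static uint64_t gap_after(enum growth_hint hint, uint64_t current_size)
{
    switch (hint) {
    case GROW_NEVER: return 0;                 /* store back to back      */
    case GROW_SMALL: return current_size < (1ULL << 20)
                          ? (1ULL << 20)       /* at least 1 MiB of slack */
                          : current_size;      /* or room to double       */
    case GROW_LARGE: return 1ULL << 40;        /* a full 1 TiB of slack   */
    }
    return 1ULL << 20;                         /* conservative default    */
}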
Qbyte wrote:
However, segments actually have no direct relation with the above at all. Memory protection is a mandatory feature of any general purpose system and the primary purpose of segments here is purely to restrict what parts of the address space (hence what objects) a given process can access and in what ways.
Name even one existing or historical system in which this was the primary purpose of segments. All of the ones I know of used them for two reasons: as variable-sized sections for swapping (basically paging by another name), or for extending the address space beyond what the native word size could reach directly (as with the x86 and Zilog). Even on the Burroughs 5000 series machines, which is apparently where segmentation originated, it was mostly used for simplifying the memory banking (a big deal on those old mainframes), IIUC. Indeed, the majority of segmented memory architectures built didn't have memory protection, at least not in their initial models.
This isn't to say that they can't be used for this purpose, but at that point, calling them segments is only going to create confusion - much like what we have in this thread.
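For what it's worth, the mechanism I understand you to be describing is something like the following range check - a sketch only, not any particular architecture's descriptor format:

Code:
#include <stdbool.h>
#include <stdint.h>

#define RIGHT_READ  1u
#define RIGHT_WRITE 2u
#define RIGHT_EXEC  4u

/* A per-process record of one address range it may touch, and how. */
struct access_range {
    uint64_t base;    /* first byte the process may access          */
    uint64_t limit;   /* one past the last byte it may access       */
    uint32_t rights;  /* some combination of the RIGHT_* bits above */
};

static bool access_ok(const struct access_range *r, uint64_t addr,
                      uint64_t len, uint32_t wanted)
{
    /* Allowed only if the access lies wholly inside the range and
     * every requested right is present. */
    return addr >= r->base && addr < r->limit &&
           len <= r->limit - addr &&
           (r->rights & wanted) == wanted;
}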
Qbyte wrote:
Schol-R-LEA wrote:
And I really do still think that capability-based addressing would be a closer fit to your goal (and rdos's) than segments, either as a general concept or in terms of existing implementations, but since that seems to be even more of a dead topic than hardware segments, well...
I'm no stranger to the idea of capability based addressing, but as intriguing as it is, I've personally arrived at the conclusion of it just being a more convoluted and contrived way of enforcing access rights in comparison to segments. At the end of the day, an object is just a region of memory, so it makes the most sense to just explicitly define the ranges of addresses that a process is allowed to access and be done with it. Anything beyond that just seems like fluff. I'd love to be convinced otherwise, so if you've got any original, concrete points in favor of a capability based scheme, I'm all ears.
They aren't really comparable, IMAO, because caps shift the burden of proof from the accessed resource to the one doing the accessing - if the process doesn't have a capability token, it can't even determine whether the resource exists, never mind access it - it has to request a token from the management system (whether implemented in hardware or in software), and if the system refuses the request, it is not given a reason why (it could be because the requester isn't trusted, or because the resource isn't present in the first place).
They aren't specifically about memory, either; that just happens to have been one of the first things people tried to apply them to, and the hardware of the time wasn't really up to it yet (it isn't clear whether current hardware could do it efficiently; many researchers such as Ivan Godard seem to think it could, but since in practice security has generally ended up costing more than insecurity in the minds of most users, no one seems to care to find out). It isn't even specifically a hardware solution; since the early 1980s, there has been a lot more work on software capabilities than hardware ones.
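To put the difference in concrete terms, the flavor of it - here in a software-capability form, with every name and detail invented for the example - is something like this:

Code:
#include <stddef.h>
#include <stdint.h>

/* A hypothetical capability: an opaque, unforgeable token handed out by
 * the manager; holding one is the only way to even name the resource. */
typedef struct { uint64_t token; } cap_t;

struct resource {
    uint64_t id;
    uint32_t allowed;   /* rights that may be granted to requesters */
    int      present;
};

/* Toy table standing in for the management system's bookkeeping. */
static struct resource table[] = {
    { .id = 7, .allowed = 1 /* read-only */, .present = 1 },
};

/* Request a capability.  On refusal the caller gets the same answer
 * whether it lacked the rights or the resource simply does not exist -
 * it cannot tell which, and that is the point. */
static cap_t cap_request(uint64_t id, uint32_t wanted)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (table[i].present && table[i].id == id &&
            (table[i].allowed & wanted) == wanted)
            return (cap_t){ .token = i + 1 };   /* opaque handle            */
    }
    return (cap_t){ .token = 0 };               /* refused, no reason given */
}

Everything after that point takes the token rather than an address or a name, so a process that was refused cannot even probe for the resource's existence.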
But I suspect you know this, from what you said.
I need to get going, but I will pick up on this later.