Hi,
Qbyte wrote:
Quote:
If the system is under memory pressure, you can end up swapping out a whole segment, possibly several megabytes in size, when the allocation that triggered the swap may only have been for a few tens of kilobytes. An application that allocates a huge sparse array in one segment and only touches small portions of it can end up causing the system to thrash on swap, or even starve the system for memory completely: every time the application accesses that segment, the whole segment is swapped in, even though most of it is empty, and any time any other application is scheduled, that whole segment has to be swapped back out to make room.
These points would only be true of a very naive implementation. Nothing mandates that segments need always be swapped in and out in full and there are numerous simple strategies which avoid that. Of course, segmentation fares worse than paging in this regard, but the situation isn't as fatal as you're implying.
If you're able to split a segment into pieces (and swap those pieces in/out individually), it's easier/cheaper/faster to attach attributes to those pieces (i.e. pages) and not bother with all the additional overhead of segments.
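For example (just a rough sketch, loosely modelled on x86-style page table entries; the exact bit positions and names are illustrative, not any particular CPU's): once the pieces are fixed-size pages, each one carries its own "present/accessed/dirty" attributes and can be evicted independently, which is exactly what you lose when the whole segment is the unit of swapping.
Code:
/* Rough sketch of a per-page descriptor; bit layout is illustrative only. */
#include <stdint.h>

#define PAGE_PRESENT   (1u << 0)   /* page is currently in RAM      */
#define PAGE_WRITABLE  (1u << 1)   /* writes allowed                */
#define PAGE_ACCESSED  (1u << 5)   /* set by hardware on any access */
#define PAGE_DIRTY     (1u << 6)   /* set by hardware on any write  */

typedef uint64_t pte_t;

/* With per-page attributes the swapper can evict individual cold pages,
 * instead of swapping an entire multi-megabyte segment in or out. */
static int page_is_evictable(pte_t pte)
{
    return (pte & PAGE_PRESENT) && !(pte & PAGE_ACCESSED);
}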
Qbyte wrote:
Schol-R-LEA wrote:
In any case, the rise of NVMe memory will probably lead to a more hierarchical memory structure, rather than less, especially since it is likely to lead to an abandonment of the current file-oriented models entirely in favor of persistent operating systems, meaning that the OS will have to be able to work with different kinds of memory fluently and transparently in a fine-grained manner. This will probably lead to a hybrid approach that differs from both paging and segmentation in significant ways.
Brendan wrote:
For a rough estimate, if you gathered the top designers/researchers in the world and threw billions of $$$ at them (every year for ten years) to create the best CPU that supports segmentation and the best possible OS that uses segmentation; then the resulting OS (for real world usage, not meaningless carefully selected micro-benchmarks) will probably be about 100 times slower than Windows or Linux simply because of segmentation alone.
I would conjecture that a project of that scale would most likely result in the development of high-speed, high-density, non-volatile RAM, the so-called "universal memory", and the integration of said memory onto the CPU die as per the Berkeley iRAM project. This would have rather profound implications for system design, one of which would be calling into question the usefulness of many aspects of paging, since secondary storage would be eliminated and everything would reside persistently in a "single-level store" as envisioned by the designers of Multics, who were well-known advocates of segmentation. With circa 2 TB of on-chip non-volatile RAM, segmentation would be well positioned to undergo a renaissance.
Let's design an OS based on persistent objects.
First, if you have 1000 text files there's no point duplicating the code for your text editor 1000 times, so at the lowest levels you'd split the object into "code" and "data" (in the same way that objects in C++ are split into "class" and "data", where the code is not duplicated for every object). Second, because hardware and software are always dodgy (buggy and/or incomplete, and needing to be updated), and because people like to be able to completely replace software (e.g. switch from Linux to Windows, or switch from MS Office to Libre Office, or ...), you'd want to make sure the data is in some sort of standardised format. Third, to allow the data to be backed up (in case hardware fails) or transferred to different computers, you'd need a little meta-data for the data (e.g. a name, some attributes, etc). Essentially, for practical reasons (so that it's not a fragile and inflexible joke), "persistent objects" ends up being implemented as files, with the code stored as executable files and the data stored as data files (with many file formats).
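To make that third point concrete, here's a rough sketch of the kind of meta-data header a "persistent object" would need before it can be backed up or transferred; every name and field size below is hypothetical, but the shape is just a file format:
Code:
/* Hypothetical meta-data header for a persistent object; once you need this,
 * "object" and "file" are the same thing. */
#include <stdint.h>

struct object_metadata {
    char     name[64];          /* human readable name (for backup/transfer)        */
    char     handler[64];       /* which executable ("class"/code) interprets it    */
    uint32_t format_version;    /* standardised data format, so code can be swapped */
    uint64_t data_length;       /* size of the raw data that follows the header     */
};
/* The raw data follows the header - i.e. a data file with a known format. */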
Fourth, when modifying data users need/want the ability to either discard all their changes or commit their changes. Fifth, there may be multiple users modifying the same file at the same time. Sixth, if/when software crashes you want to be able to revert the data back to the "last known good" version and don't want the data permanently corrupted/lost. Essentially, you end up with a "create copy, modify the copy, then commit changes" pattern; which is identical to "load file, modify file, save file".
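A minimal sketch of that "create copy, modify the copy, then commit changes" pattern using plain POSIX calls (error handling omitted, the ".tmp" naming scheme is made up): write the new version somewhere else, then rename() it over the original, so a crash leaves either the old version or the new version and never a half-written mix. Discarding changes is just deleting the copy.
Code:
#include <stdio.h>

/* Commit a modified copy: write it to a temporary file, then atomically
 * replace the original with rename(). */
int commit_changes(const char *path, const void *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    FILE *f = fopen(tmp, "wb");
    if (f == NULL)
        return -1;
    fwrite(data, 1, len, f);
    fclose(f);

    return rename(tmp, path);   /* "save file" - the old version is gone only now */
}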
So...
If all RAM is non-volatile; the first thing we need to do is partition that RAM so that some is used for "temporary stuff" and the remainder is used for file system/s.
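Purely as an illustration (all names and sizes below are made up), the boot code might record that split with something as simple as:
Code:
#include <stdint.h>

/* Hypothetical record of how the pool of non-volatile RAM was partitioned. */
struct nvram_partition {
    uint64_t ram_base, ram_pages;   /* "temporary stuff" (treated like normal RAM) */
    uint64_t fs_base,  fs_pages;    /* persistent file system storage              */
};

static const struct nvram_partition example = {
    .ram_base = 0x000000000ull, .ram_pages =   1ull << 20,  /* e.g. 4 GiB as "RAM as RAM" */
    .fs_base  = 0x100000000ull, .fs_pages  = 127ull << 20,  /* the rest as file system    */
};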
Note that we can cheat in some cases; primarily, we can use the equivalent of "memory mapped files" (where pages from the file system are mapped into objects so that "RAM as RAM" isn't used) with "copy on write" tricks, so that if a memory mapped file is modified we aren't modifying the file itself. We could also borrow "currently free file system RAM" and use it for "temporary stuff" if there's a way for the file system to ask for it back if/when it's needed, so that the amount of "RAM as RAM" is likely to be larger than the minimum that was reserved when partitioning the RAM.
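As an analogy for the first trick (not a complete design), POSIX mmap() with MAP_PRIVATE already gives exactly those "copy on write" semantics: reads come straight from the file system's pages, only the pages you actually modify consume extra "RAM as RAM", and the file itself is never touched:
Code:
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

/* Map an object's data copy-on-write; modifying the mapping never modifies the file. */
void *map_object_data(const char *path, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    close(fd);                      /* the mapping stays valid after close() */
    return (p == MAP_FAILED) ? NULL : p;
}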
What we end up with is something that is almost identical to what we already have - executable files and data files, objects/processes, and paging (and tricks to save memory). The only real differences are that "hibernate" is faster (you can turn the computer off without copying data from RAM to swap space and restore everything when the computer is turned back on without copying data from swap space back into RAM), swap space would be replaced with its opposite ("file system pages borrowed and used as RAM"), and there'd be no reason to have file system caches. These are all relatively superficial differences - they don't change how applications are written or how computers are used.
The main reason to want non-volatile RAM (rather than just very fast SSD) has nothing to do with the OS or software at all - it's power consumption. For DRAM, when there's lots of RAM (many GiB, where only a small amount is actually being read/written at any point in time) you end up with a lot of power consumed just to refresh the contents of RAM (and SRAM is significantly worse, always consuming power). Non-volatile RAM (with many GiB, where a large amount isn't being read/written at any point in time) could consume significantly less power.
Cheers,
Brendan