Hi,
rdos wrote:
Add that the paging structure (TLB cache) needs to be referenced for every part of an instruction that references memory, and that segmentation can work with a static copy of the current descriptor entry data, and it is a given that paging is many times less efficient than segmentation.
Wrong. You're ignoring all of the things that make segmentation bad (memory mapped files, swap space, physical address space de-fragmentation, etc) and focusing on a few negligible things in a deluded attempt at pretending segmentation doesn't suck badly.
rdos wrote:
The only reason all of today's major OSes uses paging is for legacy and compatibility reasons. It's certainly not because it is a good solution.
Wrong. In fact it's the exact opposite - OSs use segmentation for backward compatibility, and inevitably break compatibility to remove segmentation in later versions (once they realise the pain of keeping it is worse than the pain of breaking compatibility).
rdos wrote:
And a smart segmentation implementation would basically never require invalidating descriptors on other CPU cores. Those would be rare things. That's because the OS could avoid reusing selectors, instead allocating them on a random basis. Of course, the selector would be 64 bits, not 16 bits.
In that case you'd have multiple descriptors referring to the same data, and if the size or access permissions of the data need to be changed you'd be creating new segments on many CPUs rather than invalidating existing segments on many CPUs. It's the same problem.
The only alternative that avoids invalidation problems (including "multi-CPU TLB shootdown", "multi-CPU segment shootdown" and "multi-CPU descriptor recreation") is to refuse to share mutable data between threads/CPUs. Of course "no shared mutable data" has its own (different) problems.
Cheers,
Brendan