OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Sun Jul 08, 2018 7:31 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Qbyte wrote:
Schol-R-LEA wrote:
I'm sorry, but... uhm... one of us is confused as to how the terms 'segmentation' and 'single address space OS' are defined, and I am not convinced it is me.
The term "segment" has indeed been used in different ways by various manufacturers over the years, but fundamentally, a segment is nothing more than a contiguous range of addresses of arbitrary length, in contrast to a page which is a contiguous range of addresses of some fixed length.


Ah, I think I see your intent now. While (as I have said earlier) I tend to see segmentation as mostly relating to address line usage (as this has been the case for most implementations, not just Intel's), this is the 'official' meaning of the term, yes. However, I personally don't see an advantage to treating 'variable sized address ranges' and 'fixed size address ranges' as separate concepts, especially if (as you state below) you are talking about them primarily in terms of memory protection.

Qbyte wrote:
Whether or not these two schemes are used to implement memory virtualization (address translation) is a separate matter. Protection and virtualization are different concepts, but they are usually rolled into one in a given implementation.


Well, it is about time someone else here understood that they are separate. I've been arguing that these are separate concepts - and mechanisms - from the outset, but several people here keep insisting on confusing them. However, as I have pointed out, I think you are reversing the relationship - paging is always about address translation, and never primarily about security. Most memory protection schemes on systems with paging or segmentation work through those mechanisms, but the mechanisms aren't there for that purpose.

Qbyte wrote:
In a 64-bit single address space OS, virtualization is not required (but can still be included) since every physical address within the entire machine can be directly accessed with a 64-bit reference. This means that all code and data can reference each other using their absolute addresses and the practical necessity for each process to have its own virtual address space like in a 32-bit system no longer exists.


I think you are confusing the ideas of virtual address translation (mapping logical memory addresses to physical ones), which allows for automatic overlay of a larger address space than is physically present, with virtual address spaces, which is where each process is given its own mapping. SAS rejects the latter, but for a realistic system, still requires the former if you mean to exploit the full range of an address space larger than is physically implementable (at the moment, something larger than, say, 256 GB, for a CPU in a very large HPC installation; while this will doubtless increase over time, in order to implement a physical 64-bit address space you would need to buy a bigger universe, as there are fewer than 2^64 baryons in all of visible space).

Indeed, VAT is if anything more important in a SASOS than in a VASOS; a SASOS requires you to lay elements out very sparsely in the virtual space to ensure that everything has room to grow, but physical memory limitations mean you will generally need to map them into physical memory much more compactly.

(And despite what Embryo and some other Java aficionados might claim, garbage collection alone still won't allow you to put the proverbial ten gallons of manure in a five gallon bag. I am all for automatic memory management - I'm a Lisper, after all - but if your datasets are larger than physical memory, it can't make room where none exists.)

I gather that your idea of segments is about partitioning elements - whether processes or individual objects - within such a space, and relying on the sheer scope of the address space to give an effective partitioning (provided that you maintain spacings of, say, 16 TB between each object, something that is entirely workable in a 64-bit address space) - if so, I get your intent, as it is an approach I have been considering for certain projects of my own lately.

However, the point of such a separation isn't to protect from malicious access, nor is it about using variable-sized address mappings - it is just to ensure that there is ample space for dynamic expansion of each memory arena (which, just to be clear, is my own preferred high-level term for what you are describing as segments - it is, yet again, an orthogonal concept). Or at least it seems to be so to me.
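To make that layout concrete, here is a toy sketch - all of the names, spacings, and numbers below are mine and purely illustrative, not a description of any real system - with arenas spread 16 TiB apart in the virtual space while the physical frames backing them are handed out densely, on demand:

Code:
#include <stdint.h>
#include <stdio.h>

/* Toy single-address-space layout (all names and numbers invented for
 * illustration): each arena gets a 16 TiB slice of the 64-bit virtual
 * space, while physical 4 KiB frames are handed out densely by a trivial
 * bump allocator only when a page is actually touched. */
#define ARENA_SPACING (16ULL << 40)          /* 16 TiB between arena bases */
#define ARENA_FIRST   (1ULL << 40)           /* keep the low 1 TiB clear   */

static uint64_t arena_base(unsigned id) { return ARENA_FIRST + id * ARENA_SPACING; }

static uint64_t next_frame = 0x100;          /* first free physical frame number */
static uint64_t back_page(uint64_t vpage)    /* a real VMM would record vpage -> frame */
{
    (void)vpage;
    return next_frame++;                     /* dense physical, sparse virtual */
}

int main(void)
{
    for (unsigned id = 0; id < 4; id++)
        printf("arena %u: virtual base 0x%016llx backed by frame 0x%llx\n", id,
               (unsigned long long)arena_base(id),
               (unsigned long long)back_page(arena_base(id) >> 12));
    return 0;
}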

And I really do still think that capability-based addressing would be a closer fit to your goal (and rdos's) than segments, either as a general concept or in terms of existing implementations, but since that seems to be even more of a dead topic than hardware segments, well... :roll:

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Tue Jul 10, 2018 2:25 am 

Joined: Tue Jan 02, 2018 12:53 am
Posts: 51
Location: Australia
Schol-R-LEA wrote:
I think you are confusing the ideas of virtual address translation (mapping logical memory addresses to physical ones), which allows for automatic overlay of a larger address space than is physically present, with virtual address spaces, which is where each process is given its own mapping. SAS rejects the latter, but for a realistic system, still requires the former if you mean to exploit the full range of an address space larger than is physically implementable (at the moment, something larger than, say, 256 GB, for a CPU in a very large HPC installation; while this will doubtless increase over time, in order to implement a physical 64-bit address space you would need to buy a bigger universe, as there are fewer than 2^64 baryons in all of visible space).
Firstly, your numbers are off by over 60 orders of magnitude. 2^64 roughly equals 1.84x10^19, while it's estimated that there are around 10^80 atoms in the universe, each of which is composed of 1 or more baryons. And since an average atom is around 0.1 nm wide, there are 10^7 atoms along a 1 mm length, 10^14 in a square mm, and 10^21 in a cubic mm. If we have a chip with dimensions 50x50x2 mm, that chip contains 5x10^24 atoms, which is enough to feasibly support 2^64 bits of physically addressable memory once technology reaches that level (that's roughly 270,000 atoms per bit of information, and experimental demonstrations have already achieved far better than that).
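For reference, here is that arithmetic as a small program (the constants are just the round figures quoted above, so this is back-of-the-envelope only):

Code:
#include <stdio.h>

int main(void)
{
    /* Back-of-the-envelope: 0.1 nm atomic spacing, 50x50x2 mm chip. */
    double atoms_per_mm = 1e-3 / 0.1e-9;              /* 1e7 atoms along 1 mm  */
    double chip_mm3     = 50.0 * 50.0 * 2.0;          /* 5000 mm^3             */
    double atoms        = chip_mm3 * atoms_per_mm * atoms_per_mm * atoms_per_mm;
    double bits         = 18446744073709551616.0;     /* 2^64                  */
    printf("atoms in chip: %.1e\n", atoms);           /* ~5e24                 */
    printf("atoms per bit: %.0f\n", atoms / bits);    /* ~2.7e5                */
    return 0;
}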

Secondly, I'm well aware of the distinction between virtual address spaces and virtual memory, since memory virtualization doesn't imply that each process has its own virtual address space. Every process can share a single virtual address space, and the hardware then maps those virtual addresses to real physical ones. That can indeed make sense to do if there is a large disparity between the size of the virtual and physical address spaces, but it becomes frivolous once the physical address space is equal or comparable in size to the virtual one. Therefore, if the above-mentioned engineering feat can be achieved, your concerns about VAT being essential for a practical SASOS evaporate.
Schol-R-LEA wrote:
I gather that your idea of segments is about partitioning elements - whether processes or individual objects - within such a space, and relying on the sheer scope of the address space to give an effective partitioning (provided that you maintain spacings of, say, 16 TB between each object, something that is entirely workable in a 64-bit address space) - if so, I get your intent, as it is an approach I have been considering for certain projects of my own lately.
That is indeed a useful memory management scheme that a SASOS would make use of. Of course, it can do so in a much less naive manner by taking hints from programs and objects about the memory usage properties of each individual object and storing them accordingly. For example, things like music, video, and image files are basically guaranteed not to change in size, so they would be marked as such, and the OS would be free to store them as densely as possible (no spacing between them and neighboring objects at all). Then, there are objects such as text files that are likely to change in size but still stay within a reasonably small space, so they would be stored with suitably sized gaps between them and neighboring objects. Finally, there would be large dynamic data structures like those used by spreadsheets, simulations, etc., which would be given gigabytes or even terabytes of spacing as you suggested. In an exceedingly rare worst case scenario where a large data structure needs more contiguous space than is currently available, the amortized cost of a shuffling operation would be extremely low.
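As a rough illustration of such a policy - the hint names and gap sizes here are placeholders I've made up, not a proposal for specific values:

Code:
#include <stdint.h>
#include <stdio.h>

/* Placeholder policy: the gap reserved after an object depends on a hint
 * about how the object is expected to grow. */
typedef enum {
    GROWTH_NONE,    /* media files, read-only blobs: pack densely */
    GROWTH_SMALL,   /* text files, small documents: modest slack  */
    GROWTH_LARGE    /* spreadsheets, simulation state: huge slack */
} growth_hint;

static uint64_t reserve_gap(growth_hint h)
{
    switch (h) {
    case GROWTH_NONE:  return 0;            /* no spacing at all */
    case GROWTH_SMALL: return 1ULL << 20;   /* 1 MiB of headroom */
    case GROWTH_LARGE: return 1ULL << 40;   /* 1 TiB of headroom */
    }
    return 0;
}

int main(void)
{
    uint64_t next = 0x10000;                             /* arbitrary start  */
    growth_hint objects[] = { GROWTH_NONE, GROWTH_SMALL, GROWTH_LARGE };
    uint64_t sizes[]      = { 4096, 8192, 1 << 20 };     /* current sizes    */
    for (int i = 0; i < 3; i++) {
        printf("object %d placed at 0x%llx\n", i, (unsigned long long)next);
        next += sizes[i] + reserve_gap(objects[i]);      /* size plus its gap */
    }
    return 0;
}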

However, segments actually have no direct relation with the above at all. Memory protection is a mandatory feature of any general purpose system and the primary purpose of segments here is purely to restrict what parts of the address space (hence what objects) a given process can access and in what ways. Usually, only writes will be restricted, so that data and code from anywhere in the system can be read and executed by default; this reduces the number of segments a process needs to be allocated, reducing TLB pressure. The only processes that wouldn't be given universal reading rights would be applications like web browsers, untrusted third-party software, and programs that have no use for it (which could help catch bugs).
Schol-R-LEA wrote:
And I really do still think that capability-based addressing would be a closer fit to your goal (and rdos's) than segments, either as a general concept or in terms of existing implementations, but since that seems to be even more of a dead topic than hardware segments, well... :roll:
I'm no stranger to the idea of capability-based addressing, but as intriguing as it is, I've personally arrived at the conclusion that it is just a more convoluted and contrived way of enforcing access rights in comparison to segments. At the end of the day, an object is just a region of memory, so it makes the most sense to just explicitly define the ranges of addresses that a process is allowed to access and be done with it. Anything beyond that just seems like fluff. I'd love to be convinced otherwise, so if you've got any original, concrete points in favor of a capability-based scheme, I'm all ears.


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Tue Jul 10, 2018 6:25 am 

Joined: Thu Nov 16, 2006 12:01 pm
Posts: 7612
Location: Germany
OK, so my last post here was a shameful display of getting the math completely wrong, even with assistance from Wolfram Alpha, because I didn't pay enough attention and mixed up base 2 and base 10.

:shock: :oops: :roll:

So, second try.

I did some math to visualize address space capacities. Using a cube of pure carbon and storing one bit of information per atom (which is wildly optimistic for obvious reasons), we would get "storage cubes" of:

  • 1.3 mm³ for a 64-bit address space;
  • 24 km³ for a 128-bit address space (that's a cube with a 2.8 km edge);
  • 8.175 * 10^39 km³ for a 256-bit address space (that's a cube with an edge over two light-years long).

Just sayin'.
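For anyone who wants to check the arithmetic, here it is as a small program. It assumes graphite-density carbon (about 2.26 g/cm³), one bit per atom, and 2^N bytes of address space, which reproduces the figures above:

Code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumption: graphite-like carbon at 2.26 g/cm^3, 12.011 g/mol,
     * one bit stored per atom, and 2^N *bytes* of address space. */
    const double atoms_per_mm3 = 2.26 / 12.011 * 6.022e23 / 1000.0;
    const int address_bits[] = { 64, 128, 256 };

    for (int i = 0; i < 3; i++) {
        double bits    = pow(2.0, address_bits[i]) * 8.0;
        double mm3     = bits / atoms_per_mm3;
        double km3     = mm3 / 1e18;                 /* 1 km^3 = 1e18 mm^3 */
        double edge_km = cbrt(km3);
        printf("%3d-bit space: %.3g mm^3 = %.3g km^3 (cube edge %.3g km)\n",
               address_bits[i], mm3, km3, edge_km);
    }
    return 0;
}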

_________________
Every good solution is obvious once you've found it.


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Tue Jul 10, 2018 6:52 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
EDIT: I fired off my initial mea culpa before I could take a closer look at the rest of QByte's post. I don't want to have sequential posts, so I am editing this one.

Qbyte wrote:
Schol-R-LEA wrote:
(at the moment, something larger than, say, 256 GB, for a CPU in a very large HPC installation; while this will doubtless increase over time, in order to implement a physical 64-bit address space you would need to buy a bigger universe, as there are fewer than 2^64 baryons in all of visible space).
Firstly, your numbers are off by over 60 orders of magnitude. 2^64 roughly equals 1.84x10^19, while it's estimated that there are around 10^80 atoms in the universe, each of which is composed of 1 or more baryons. And since an average atom is around 0.1 nm wide, there are 10^7 atoms along a 1 mm length, 10^14 in a square mm, and 10^21 in a cubic mm. If we have a chip with dimensions 50x50x2 mm, that chip contains 5x10^24 atoms, which is enough to feasibly support 2^64 bits of physically addressable memory once technology reaches that level (that's roughly 270,000 atoms per bit of information, and experimental demonstrations have already achieved far better than that).


Gaaah, I can't believe I said something so stupid. That was a serious brain fart, sorry.

Qbyte wrote:
Secondly, I'm well aware of the distinction between virtual address spaces and virtual memory, since memory virtualization doesn't imply that each process has its own virtual address space. Every process can share a single virtual address space, and the hardware then maps those virtual addresses to real physical ones. That can indeed make sense to do if there is a large disparity between the size of the virtual and physical address spaces, but it becomes frivolous once the physical address space is equal or comparable in size to the virtual one. Therefore, if the above-mentioned engineering feat can be achieved, your concerns about VAT being essential for a practical SASOS evaporate.


Eventually? Perhaps. If your numbers are correct - and I did peek at Solar's post, so I am not convinced anyone's are, right now - then it might even be within the next decade or so, though I suspect other factors such as the sheer complexity of the memory controller this would require will at least delay it. Still, the assertion seems premature, at the very least.

Note that it isn't as if the memory is wired directly to the CPU in current systems, and IIUC - correct me if I am wrong - this is at least as much of a limiting factor right now as the memory densities are. Perhaps this is wrong, and even if it isn't, some breakthrough could change it, but I wouldn't want to count on that.

Qbyte wrote:
Schol-R-LEA wrote:
I gather that your idea of segments is about partitioning elements - whether processes or individual objects - within such a space, and relying on the sheer scope of the address space to give an effective partitioning (provided that you maintain spacings of, say, 16 TB between each object, something that is entirely workable in a 64-bit address space) - if so, I get your intent, as it is an approach I have been considering for certain projects of my own lately.
That is indeed a useful memory management scheme that a SASOS would make use of. Of course, it can do so in a much less naive manner by taking hints from programs and objects about the memory usage properties of each individual object and storing them accordingly.


Well, yes, of course; that was just a coarse-grained model, a starting point. An organizing principle rather than a system in itself, shall we say.

There is a name for this approach, actually, as it has been discussed going back to the mid 1980s: 'sparse memory allocation'. According to See MIPS Run, the reason the R4000 series went to 64 bits was in anticipation of OS designers applying sparse allocation via VAT - keeping in mind that memories of the time were generally under 64 MiB even for large mainframes, that the typical high-end workstation (which is what MIPS was mostly expected to be used for) had between 4 MiB and 8 MiB, and that a PC with even 4 MiB was exceptional. Whether or not this claim is correct, I cannot say, but the fact that they felt it was worth jumping to 64 bits in 1990 does say that they weren't thinking in terms of direct memory addressability at the time.
Edit: minor corrections on the dates.

But I digress.

Qbyte wrote:
For example, things like music, video, and image files are basically guaranteed not to change in size, so they would be marked as such, and the OS would be free to store them as densely as possible (no spacing between them and neighboring objects at all). Then, there are objects such as text files that are likely to change in size but still stay within a reasonably small space, so they would be stored with suitably sized gaps between them and neighboring objects. Finally, there would be large dynamic data structures like those used by spreadsheets, simulations, etc., which would be given gigabytes or even terabytes of spacing as you suggested. In an exceedingly rare worst case scenario where a large data structure needs more contiguous space than is currently available, the amortized cost of a shuffling operation would be extremely low.


Qbyte wrote:
However, segments actually have no direct relation with the above at all. Memory protection is a mandatory feature of any general purpose system and the primary purpose of segments here is purely to restrict what parts of the address space (hence what objects) a given process can access and in what ways.


Name even one existing or historical system in which this was the primary purpose of segments. All of the ones I know of used them for two reasons: as variable-sized sections for swapping (basically paging by another name) or for allowing a wider address space than physical address pins permitted (as with the x86 and Zilog). Even on the Burroughs 5000 series machines, which is apparently where segmentation originated, it was mostly used for simplifying the memory banking (a big deal on those old mainframes) IIUC. Indeed, the majority of segmented memory architectures built didn't have memory protection, at least not in their initial models.

This isn't to say that they can't be used for this purpose, but at that point, calling them segments is only going to create confusion - much like what we have in this thread.

Qbyte wrote:
Schol-R-LEA wrote:
And I really do still think that capability-based addressing would be a closer fit to your goal (and rdos's) than segments, either as a general concept or in terms of existing implementations, but since that seems to be even more of a dead topic than hardware segments, well... :roll:
I'm no stranger to the idea of capability-based addressing, but as intriguing as it is, I've personally arrived at the conclusion that it is just a more convoluted and contrived way of enforcing access rights in comparison to segments. At the end of the day, an object is just a region of memory, so it makes the most sense to just explicitly define the ranges of addresses that a process is allowed to access and be done with it. Anything beyond that just seems like fluff. I'd love to be convinced otherwise, so if you've got any original, concrete points in favor of a capability-based scheme, I'm all ears.


They aren't really comparable, IMAO, because caps shift the burden of proof from the accessed resource to the one doing the accessing - if the process doesn't have a capability token, it can't even determine whether the resource exists, never mind access it - it has to request a token from the management system (whether in hardware or in software), and if the system refuses the request, it is not given a reason why (it could be because the requester isn't trusted, or because the resource isn't present in the first place).
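In rough C terms - everything here is hypothetical, just to show the shape of the interaction - the requester only ever sees an opaque token or a bare denial:

Code:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical toy capability manager: it answers requests with either an
 * opaque token or a bare denial, never with a reason. "Doesn't exist" and
 * "not allowed" are indistinguishable to the requester. */
typedef uint64_t cap_token;
#define CAP_DENIED ((cap_token)0)

#define CAP_READ  0x1
#define CAP_WRITE 0x2

static cap_token cap_request(uint32_t requester, uint64_t resource, uint32_t rights)
{
    /* Toy policy: resource 42 exists and may be read by requester 1 only. */
    if (resource == 42 && requester == 1 && rights == CAP_READ)
        return 0xC0FFEE;             /* opaque token, meaningless to the holder */
    return CAP_DENIED;               /* same answer regardless of the reason    */
}

int main(void)
{
    printf("%llx\n", (unsigned long long)cap_request(1, 42, CAP_READ));  /* token  */
    printf("%llx\n", (unsigned long long)cap_request(2, 42, CAP_READ));  /* denied */
    printf("%llx\n", (unsigned long long)cap_request(1, 99, CAP_READ));  /* denied, identically */
    return 0;
}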

They aren't specifically about memory; that just happens to have been one of the first things people tried to apply them to, and the hardware of the time wasn't really up to it yet (it isn't clear whether current hardware could do it efficiently; many researchers such as Ivan Godard seem to think it could, but since in practice security has generally ended up costing more than insecurity in the minds of most users :roll:, no one seems to care to find out). It isn't even specifically a hardware solution; since the early 1980s, there has been a lot more work on software capabilities than hardware ones.

But I suspect you know this, from what you said.

I need to get going, but I will pick up on this later.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Last edited by Schol-R-LEA on Tue Jul 10, 2018 9:36 am, edited 3 times in total.

 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Tue Jul 10, 2018 7:18 am 

Joined: Fri Aug 19, 2016 10:28 pm
Posts: 360
Qbyte wrote:
Finally, there would be large dynamic data structures like those used by spreadsheets, simulations, etc., which would be given gigabytes or even terabytes of spacing as you suggested.
Aren't you going to waste space? How much? Who is going to pay for it?
Qbyte wrote:
In an exceedingly rare worst case scenario where a large data structure needs more contiguous space than is currently available, the amortized cost of a shuffling operation would be extremely low.
Is this going to be competitive with the performance of translation hardware? For what kinds of workloads - read-dominant or write-dominant?
P.S. If you move data around, how are the pointers to it going to reflect the change if not through a level of indirection?
Qbyte wrote:
Firstly, your numbers are off by over 60 orders of magnitude. 2^64 roughly equals 1.84x10^19, while it's estimated that there are around 10^80 atoms in the universe, each of which is composed of 1 or more baryons. And since an average atom is around 0.1 nm wide, there are 10^7 atoms along a 1 mm length, 10^14 in a square mm, and 10^21 in a cubic mm. If we have a chip with dimensions 50x50x2 mm, that chip contains 5x10^24 atoms, which is enough to feasibly support 2^64 bits of physically addressable memory once technology reaches that level (that's roughly 270,000 atoms per bit of information, and experimental demonstrations have already achieved far better than that).
To be honest, this is not my department, but wouldn't there be issues with creating the conducting matrix that leads the data out of the storage to the row buffer? In particular, wouldn't there be interference and stability issues at such densities?

All of this is indeed exciting and probably possible, but it has to be considered carefully. I personally dislike it when the software industry aims to reduce its own burden at the expense of the customer. Even considering the cost of the development effort, the technology still has to prove that its overall total cost of ownership is smaller.


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Tue Jul 10, 2018 9:12 am 

Joined: Thu Nov 16, 2006 12:01 pm
Posts: 7612
Location: Germany
I think Qbyte's mental model of such a high-capacity chip is basically the same as my "cube of carbon": it's not about actual feasibility, today or tomorrow or even in twenty years, or about being off by an order of magnitude or two, but about realizing that 2^64 is probably not the end of the road yet.

That "2^64 should be enough for anybody" might, at some point, sound as ludicrous, lacking imagination, and being a sorry excuse for shoddy design as the original "640k should be enough" does to us today.

_________________
Every good solution is obvious once you've found it.


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Tue Jul 10, 2018 12:25 pm 

Joined: Fri Aug 19, 2016 10:28 pm
Posts: 360
Solar wrote:
That "2^64 should be enough for anybody" might, at some point, sound as ludicrous, lacking imagination, and being a sorry excuse for shoddy design as the original "640k should be enough" does to us today.
No doubt. People are working with petabytes already. We are going to need more than 64 bits at some point, meaning we could end up hauling around some 16 bytes of pointer information. My doubt is whether the memory hierarchy can flatten as much as the earlier discussion suggested - so much so that the optimizations we use today would be superseded by brute-force hardware performance and excess storage capacity. Maybe. Who knows.

Btw. Thanks for the Wolfram Alpha reminder. It has become more intelligent than I seem to remember it being.


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Tue Jul 10, 2018 11:51 pm 

Joined: Tue Jan 02, 2018 12:53 am
Posts: 51
Location: Australia
Schol-R-LEA wrote:
Name even one existing or historical system in which this was the primary purpose of segments.
Arguably, the Honeywell 6180, which was specifically designed to support Multics, wherein memory protection was always considered to be of great importance.
Schol-R-LEA wrote:
This isn't to say that they can't be used for this purpose, but at that point, calling them segments is only going to create confusion - much like what we have in this thread.
Admittedly, the definitions I adhere to tend to originate from the academic literature more so than anywhere else, so there may be a few concepts that get muddied in translation. Perhaps a less conflicting term for the scheme would be partitioning, while referring to each contiguous address range as a region. Happy?
Schol-R-LEA wrote:
They aren't really comparable, IMAO, because caps shift the burden of proof from the accessed resource to the one doing the accessing - if the process doesn't have a capability token, it can't even determine whether the resource exists, never mind access it - it has to request a token from the management system (whether in hardware or in software), and if the system refuses the request, it is not given a reason why (it could be because the requester isn't trusted, or because the resource isn't present in the first place).
In what way is that different from partitioning, aside from being more complex to implement and manage? With partitioning, each process has a number of regions in memory that it is allowed to access, and the "burden of proof" is on the process to obtain access to those regions in the first place. All that's required to implement partitioning is a simple memory protection unit that contains the base, limit, and PID tag for each region it supports. That's all. It's simple, fast, efficient to manage, and achieves the exact same ends as what capabilities do. Despite my best efforts to do so in the past, I fail to see what meaningful advantages a capability based strategy brings to the table.
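A minimal sketch of such a protection unit's bookkeeping, with invented field names (real hardware would do the lookup associatively rather than with a loop):

Code:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-region entry in a simple memory protection unit:
 * a base address, a limit (last valid byte), the owning process, and
 * read/write/execute permission bits. */
typedef struct {
    uint64_t base;
    uint64_t limit;     /* inclusive upper bound */
    uint32_t pid;       /* process allowed to use this entry */
    uint8_t  perms;     /* bit 0 = read, bit 1 = write, bit 2 = execute */
} mpu_region;

#define PERM_R 0x1
#define PERM_W 0x2
#define PERM_X 0x4

/* An access is allowed if some region entry tagged with the process's PID
 * covers the address and grants the requested permission. */
bool mpu_access_ok(const mpu_region *regions, size_t count,
                   uint32_t pid, uint64_t addr, uint8_t wanted)
{
    for (size_t i = 0; i < count; i++) {
        if (regions[i].pid == pid &&
            addr >= regions[i].base && addr <= regions[i].limit &&
            (regions[i].perms & wanted) == wanted)
            return true;
    }
    return false;
}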


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Wed Jul 11, 2018 1:37 am 

Joined: Wed Mar 09, 2011 3:55 am
Posts: 509
Qbyte wrote:
I'm no stranger to the idea of capability-based addressing, but as intriguing as it is, I've personally arrived at the conclusion that it is just a more convoluted and contrived way of enforcing access rights in comparison to segments. At the end of the day, an object is just a region of memory, so it makes the most sense to just explicitly define the ranges of addresses that a process is allowed to access and be done with it. Anything beyond that just seems like fluff. I'd love to be convinced otherwise, so if you've got any original, concrete points in favor of a capability-based scheme, I'm all ears.


I actually started thinking about capabilities before I knew what they were called, while pondering what I didn't like about flat paging schemes, Intel pmode segmentation, etc.

Where I think the advantage of capabilities lies is that they can be more fine-grained than just "this process can access these bits of memory", which is what you get with a fixed page table or segment table. With capabilities, you can have something like "no matter which process it is running as part of, this library can access this shared memory region, but no other code in that process can access said region".

Depending on the CPU architecture, your User/Kernel bit or protection ring number provides this on a very coarse level, with a fixed and generally small number of possible privileged libraries (on many architectures just one, the kernel), but a capability architecture could be used to implement an arbitrary number of privileged libraries with disjoint private memory regions (rather than lumping them all into a monolithic kernel, or scattering them into separate processes and requiring IPC to request services from them, which are the alternatives on non-capability systems).

That said, many capability architectures seem to have failed by trying to be *too* fine-grained, often trying to do per-object capabilities (witness the iAPX 432). On the other hand, the 386 was almost a capability architecture with the right coarseness, but didn't quite go far enough (hardware tasks weren't reentrant, LTR was privileged, segmentation could have provided capabilities but didn't quite manage it without hardware tasks, etc.). A good capability architecture, I think, would support capabilities at approximately the coarseness of files, stacks, and heaps, and would switch between protection contexts without OS involvement on events like far jumps and stack switches.
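A sketch of that distinction - purely hypothetical structures, keyed by a protection domain (a code unit such as a library) rather than by process, so a library can hold rights that the rest of its host process does not:

Code:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t domain_id;   /* identifies a code unit: app code, a library, the kernel */

typedef struct {
    domain_id holder;         /* which code unit holds this capability  */
    uint64_t  base, len;      /* memory region it refers to             */
    uint8_t   rights;         /* bit 0 = read, bit 1 = write, bit 2 = execute */
} capability;

/* The "current domain" would change on control transfers (say, a far call
 * into a privileged library), not on process switches, so the same process
 * runs under different rights depending on which code unit is executing. */
static bool cap_allows(const capability *caps, size_t n,
                       domain_id current, uint64_t addr, uint8_t wanted)
{
    for (size_t i = 0; i < n; i++)
        if (caps[i].holder == current &&
            addr - caps[i].base < caps[i].len &&      /* unsigned range check */
            (caps[i].rights & wanted) == wanted)
            return true;
    return false;
}

int main(void)
{
    capability caps[] = {
        { .holder = 2, .base = 0x40000000, .len = 0x1000, .rights = 0x3 }, /* library only */
    };
    printf("%d %d\n",
           cap_allows(caps, 1, 2, 0x40000800, 0x2),   /* library writes: allowed  */
           cap_allows(caps, 1, 1, 0x40000800, 0x2));  /* app code writes: refused */
    return 0;
}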


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Wed Jul 11, 2018 8:38 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Qbyte wrote:
Schol-R-LEA wrote:
Name even one existing or historical system in which this was the primary purpose of segments.
Arguably, the Honeywell 6180, which was specifically designed to support Multics, wherein memory protection was always considered to be of great importance.

I'd need to look up the details on that, especially since it was (at least so far as the name is concerned) a member of an existing family. Admittedly, whether that last part is relevant or not isn't clear as a) common naming doesn't necessarily indicate a common architecture, especially with these older mainframes where the CPUs were built from individual components (usually a mix of individual transistors and SSI ICs with discrete resistors and capacitors as glue, with maybe some MSI ICs by around 1968 or so, but not necessarily), and b) it seems that segmentation was used on only this model or group of models, and the one source I've seen so far (the page you've linked to) doesn't indicate why it was added.

Still, I appreciate you mentioning it, as I wasn't aware of this example.

Qbyte wrote:
Schol-R-LEA wrote:
This isn't to say that they can't be used for this purpose, but at that point, calling them segments is only going to create confusion - much like what we have in this thread.
Admittedly, the definitions I adhere to tend to originate from the academic literature more so than anywhere else, so there may be a few concepts that get muddied in translation. Perhaps a less conflicting term for the scheme would be partitioning, while referring to each contiguous address range as a region. Happy?


Actually, yeah. I am sorry if it's pedantic, but it does help eliminate ambiguity. Especially since what you have been describing does not fit the academic definition of a segmented memory, namely "a memory architecture in which most or all effective addresses are generated from a combination of a base address value and an offset" - which you have been saying isn't what you have in mind.

Qbyte wrote:
Schol-R-LEA wrote:
They aren't really comparable, IMAO, because caps shift the burden of proof from the accessed resource to the one doing the accessing - if the process doesn't have a capability token, it can't even determine whether the resource exists, never mind access it - it has to request a token from the management system (whether in hardware or in software), and if the system refuses the request, it is not given a reason why (it could be because the requester isn't trusted, or because the resource isn't present in the first place).
In what way is that different from partitioning, aside from being more complex to implement and manage? With partitioning, each process has a number of regions in memory that it is allowed to access, and the "burden of proof" is on the process to obtain access to those regions in the first place. All that's required to implement partitioning is a simple memory protection unit that contains the base, limit, and PID tag for each region it supports. That's all. It's simple, fast, efficient to manage, and achieves the exact same ends as what capabilities do. Despite my best efforts to do so in the past, I fail to see what meaningful advantages a capability based strategy brings to the table.


I am by no means an expert on capabilities, but according to those who are, the main issue is that any system based on access-control lists - that is to say, RWX bits set on a per-group or per-priority basis, and similar mechanisms - is vulnerable to a class of exploits referred to as the 'confused deputy problem', while capabilities supposedly aren't. Again, I am no expert, but based on sources such as this one, the reasoning behind this assertion seems sound to me.

Also, capabilities are issued according to the process, rather than the user, and can even be issued per code unit. Thus, they are more fine-grained than the more common approaches - as linguofreak points out, it is possible to have a caps system in which some library code running in a process can have a capability, but the process as a whole doesn't.

Finally, since the system decides whom to give a capability token to dynamically and programmatically, on a per-request basis rather than according to a static set of rules, and can cancel a specific token or group of tokens at any time without changing the permissions held through any other capability tokens, the capability manager can react more flexibly to changes in the situation based on new information.

Or at least this is my understanding of the topic, based on those sources I've read. Comments and corrections welcome.

As I've said earlier, this isn't specific to memory hardware, even for a hardware based capability mechanism. Capability-based security is a broad category that applies to limiting access to any sort of resource. Indeed, the most common examples given when describing caps (and AFAICT the most common actual applications of caps in systems using them) are regarding file system security.

We may need to move this to a new thread, as it is going afield from the topic of segmentation. I know I am the one who inserted the topic of caps into this debate, so I am probably the one to do so.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Memory Segmentation in the x86 platform and elsewhere
PostPosted: Wed Aug 22, 2018 1:36 pm 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Schol-R-LEA wrote:
It just occurs to me that rdos might be conflating 'segmentation' with the concept of a 'modified Harvard architecture'.


Not at all. Banking or separating code & data is not a useful protection model. It doesn't enforce validity or limits on objects.

Schol-R-LEA wrote:
Paging? Segmentation? Separate matters entirely. They both solve a different set of problems from memory protection, as well as from each other. Segmentation, as I said before, is about stuffing an m-bit address space into n address lines when n < m. Paging is about moving part of the data or instructions from a fast memory storage to a slower one and back in a way that is transparent to the application programmers (that is, without having to explicitly use overlays and the like).


That's how some people have used it, and also how it was used in real mode. However, protected mode allows for mapping objects to segments and then doing exact limit checking on them.
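To illustrate what "exact limit checking" means here - this is just a sketch of the descriptor encoding, not RDOS code - a byte-granular data descriptor can bound an object exactly, as long as the object is under 1 MiB (beyond that the 4 KiB granularity bit has to be set and the bound becomes page-rounded):

Code:
#include <stdint.h>
#include <stdio.h>

/* Sketch only: encode a 32-bit protected-mode data-segment descriptor whose
 * limit exactly matches one object, so any access past the object's last
 * byte faults with #GP. Byte granularity (G=0) caps the limit at 1 MiB;
 * bigger objects need G=1 and lose the exact bound. */
static uint64_t make_data_descriptor(uint32_t base, uint32_t limit /* size - 1 */)
{
    uint64_t d = 0;
    d |= limit & 0xFFFFULL;                      /* limit bits 15:0          */
    d |= (uint64_t)(base & 0xFFFFFF) << 16;      /* base bits 23:0           */
    d |= 0x92ULL << 40;                          /* present, DPL 0, writable */
    d |= (uint64_t)((limit >> 16) & 0xF) << 48;  /* limit bits 19:16         */
    d |= 0x4ULL << 52;                           /* 32-bit (D/B=1), G=0      */
    d |= (uint64_t)((base >> 24) & 0xFF) << 56;  /* base bits 31:24          */
    return d;
}

int main(void)
{
    /* A 100-byte object at linear address 0x00200000: limit = 99. */
    printf("descriptor: 0x%016llx\n",
           (unsigned long long)make_data_descriptor(0x00200000, 99));
    return 0;
}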

Schol-R-LEA wrote:
To sum up: on the x86 in 32-bit protected mode, there is no difference whatsoever in the degree of protection one gets by actively using segments from that gotten from by setting the segments to a flat virtual space. None. Period.


That's incorrect. Drivers in RDOS will get a code selector and a data selector. Code that uses these will not be able to access data outside of its own area. It's effectively an isolated environment. Other memory can only be accessed by allocating memory (which will return a selector), or by passing 48-bit pointers in the API.

Schol-R-LEA wrote:
Segmentation only wins over paging if you are using separate segments for every individual data element - as in, every variable has its own segment. Even then, the only advantage is in how well the segment size matches the object's size; you can do the same thing with pages, but since the sizes are fixed it almost always has a size mismatch.


Right. However, due to how Intel implemented segmentation, an OS can only allocate selectors to objects that are not mass-created, because of the shortage of selectors.

Also, entry points for APIs can be dynamically allocated using call gates, and enforced - both from user level and from kernel level.

