nullplan wrote:
bloodline wrote:
I noticed that there doesn’t seem to be any discussion on this forum (at least as far as the search function is concerned) regarding Managed Code operating systems.
That would probably be because about half of the forum is trying to reinvent Linux (although some seem to be reinventing OpenBSD, as if that made any difference for this comparison), while the other half is trying to reinvent MS-DOS. That's for the established members. Other than that, we get a lot of newbies here who copied James Molloy's code and now need help. Am I being hyperbolic?
Lol, yes I did find James’s tutorial, and it helped my understanding of the GDT and IDT, but the rest of it diverged from my OS model, so I have no idea how good or bad it is. -edit- Also, I was annoyed that all his examples use Intel syntax and I’ve settled on AT&T syntax for now... it’s the little things
I’m not interested in making a UNIX clone, I want to explore a few ideas I’ve had knocking around for a few years, and one of those was thinking about moving memory protection from the hardware domain to the software domain.
Quote:
Things aren't really moving here, is my point. Every new dawn only provides more of the same. The same OSes and the same questions. The biggest innovation currently under consideration is UEFI, an invention from nigh-on two decades ago. UEFI came out right around the time the switch to 64-bit and to multi-CPU happened, and you will notice a staunch refusal in some parts of this community to acknowledge either trend.
I have yet to explore any of these three “innovations”... but I get that some people want to have a “tech ceiling”... I have friends who top out at the C64, and there isn’t really much more you can do without reinventing the wheel on that platform.
Quote:
I for one would welcome a managed code approach to kernel development, but I fear that it would lead to vendor lock-in. Once your kernel is on .NET and can only receive messages from .NET applications, all of those kind of have to use .NET as well. Whereas, if the kernel is written in C, it can usually receive messages from anything with raw processor access, and can run .NET applications or Java applications or native applications. Or whatever you want, really.
Well, you can’t blame M$ for going down a route that would lead to vendor lock-in... that is the business model they have spent 40 years trying to maintain.
But that doesn’t stop us exploring some of the ideas they were researching.
Quote:
nexos wrote:
I don't think it's a good idea, as that would make the kernel a lot slower.
I think that is too shallow an analysis, and one that the research paper goes out of its way to discredit. Indeed, since it seems to me that privileges are handled entirely in software on that system, it avoids a lot of the hardware problems you have with traditional operating systems. If I understand the paper correctly, they run everything in supervisor mode, but disallow user applications from doing certain things (by refusing to run code that attempts them). That means there are no system calls, only normal messages. Therefore, there are no system-call switching costs (which, depending on architecture, can be severe!). Also, they apparently use a single address space, which on a modern 64-bit CPU has no drawbacks to my knowledge (on a 32-bit system you would limit yourself to 4GB of address space for the whole system, and not all of that would be RAM). Therefore, a context switch is as costly as a setjmp()/longjmp(). How is that for speed?
So, this is what interests me, though I don’t like the “running everything in supervisor mode” concept. I would still expect two levels of privilege. But what is the minimum amount of hardware-assisted protection one can get away with (supplemented by software-enforced security), for maximum speed?
The tricky part for me is thinking about how (third party) hardware drivers would work...
Quote:
Plus, that OS settles my debate with bzt by requiring processes to be sealed, so no code can be added to a process after it starts. A process can start another process, yes, but then they are independent.
bloodline wrote:
My thinking is that only “user space” code would be managed, the kernel (a microkernel, or perhaps somewhat hybrid) would be coded more traditionally.
That is not what they describe in the paper. The kernel is written 90% in "Sing#" (which is modified C#, because as it turns out, C# does allow unsafe operations), the remaining tenth made up of assembly and C++.
Microsoft are not the be-all and end-all of software development.
I was first introduced to the idea with their Singularity project... but that just got me started thinking about the idea, not wanting to just reimplement their project.
Quote:
PeterX wrote:
It seems to me that managed code is simply interpreted bytecode.
Both IcedTea and Mono have been doing dynamic recompilation forever. That is a process by which native code is created dynamically from bytecode, and is then run at native speed. Since only one class at a time is translated, and only on demand, this approach is feasible even with Java 1.6 or later, and .NET with generics. Whenever a reference is made to a class that is not yet translated, the branch targets will point to the NULL page, causing an exception that is handled by translating that other class.
One might imagine a system where WebAssembly “binaries” are the only supported executable format...
Quote:
PeterX wrote:
But in real practice this doesn't seem like a great idea.
I concur, but not for speed reasons. I don't think the vendor lock-in mentioned above is a good thing. And I really don't want the world to be run by Microsoft. The world run by Google is already scary enough.
For now, I think we will have to live with these insecure systems, and bring our daily sacrifice to the altar of Address Sanitizer. Damn, that thing should be the default!