Hi,
Rusky wrote:
Brendan wrote:
Instead of getting rid of abstractions, an exokernel has the exact same abstractions implemented in a library (that no sane programmer ever bothers to change, and no sane person should ever be expected to change), plus an additional layer of abstractions in the kernel to "securely multiplex".
This is demonstrably false: on today's operating systems, people
already bother to change these abstractions, with user-space threading (both stackful and stackless), non-file-based object stores (databases, relational and otherwise; disk images for VMs), user-space network stacks, containers, etc. on top of the kernel's abstractions. This is not because they're insane, but because the kernel's abstractions are not a good fit for every application (which is not to say they're bad, just that they're too specific).
All of these things are built on top of abstractions that the kernel provides. For an exo-kernel, they'd be built on top of the "kernel library abstraction" (that is built on top of the extra "securely multiplex abstraction").
Now imagine if each process replaced/ignored the "kernel library abstraction" and did its own thing. 5 different processes try to read 5 different files at the same time (and a sixth process is trying to write to one of those files) - where is the file system cache, and what enforces IO priorities? It can't be in the kernel (it only securely multiplexes!) and it can't be in the "kernel library" because all processes replaced/ignored it.
For everything that requires centralised management (file caches, DNS caches, file systems, IO priorities, ..., figuring out which process should be allowed access to a received network packet, etc.) you need something to "centrally manage". You can put it in the kernel (monolithic), put it in a process of its own (micro-kernel), or put it in a library that can't be changed. You can't put it in a library that can be changed, because then there's chaos (all processes are free to ignore the policies that ensure correct management of shared resources); unless you have extremely strict rules like "you can replace the file system abstraction, but it must maintain the global file cache like <this> and must handle global IO priorities like <that> to ensure it cooperates with other processes properly", with no way to enforce those rules (and prevent chaos).
Rusky wrote:
So we already have the "twice as many abstractions" that you claim exokernels would introduce, though really it's even more than that because there are many different designs for the user-space abstractions. Further, we're starting to improve the situation by designing new APIs that multiplex hardware without imposing their own policies.
For example, graphics APIs like OpenGL and older versions of Direct3D do too much for their clients - memory management, synchronization, etc. - which leads to lots of pointless heuristics and nonsense like game-specific code in drivers. Newer graphics APIs like Vulkan, Direct3D 12, Metal, and even a newer OpenGL style called AZDO instead hand these responsibilities to the client, so there's less abstraction and fewer conflicting heuristics on both sides. This leads to real performance improvements with more straightforward code in, e.g., the recent game DOOM.
Graphics has always been bad (too much communication between game and driver, with too little opportunity for either to optimise). They've been trying to resolve this problem the wrong way (shifting half the driver into the game) for too long already. It will continue to cause problems: the OS being unable to recover when a game crashes; games that require a specific video card (and break when you install/use a different one); nothing working when you try to stretch the same game across 2 or more monitors driven by 2 or more video cards; and everything breaking when you expect a multi-tasking OS to actually multi-task (run 2 or more games at once).
Cheers,
Brendan