OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 3:17 pm 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
We have empirical evidence that suggests that, as a huge additional bonus, another benefit of async event loops is that it may keep a "vocal minority" of web developers (a group of people that I've always had a very low opinion of) who failed to get used to Node.js away from my OS.
The only people complaining about callback-style async IO in Node.js were the ones who were writing it anyway. So failing to get used to it didn't prevent them from using it, it just drove them to come up with the objectively-clearer async/await control structure.

And to get away from your nonsense about web developers, the same thing is true of several other languages- C# added async/await where it is mostly used in desktop applications, Python added async/await where it is mostly used in servers (web and otherwise), Go was written to support coroutines (which provide the same asynchrony and source-level benefit at a runtime cost), C++ has had several proposals for async/await and coroutines, etc.

The non-switch/loop style is useful for far more reasons than "lol web developers r so dumbb." It's no more a matter of "just get used to it" than taking away someone's if/while/etc and leaving them with nothing but goto.

_________________
[www.abubalay.com]


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 3:26 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Rusky wrote:
Brendan wrote:
We have empirical evidence that suggests that, as a huge additional bonus, another benefit of async event loops is that it may keep a "vocal minority" of web developers (a group of people that I've always had a very low opinion of) who failed to get used to Node.js away from my OS.
The only people complaining about callback-style async IO in Node.js were the ones who were writing it anyway. So failing to get used to it didn't prevent them from using it, it just drove them to come up with the objectively-clearer async/await control structure.

And to get away from your nonsense about web developers, the same thing is true of several other languages- C# added async/await where it is mostly used in desktop applications, Python added async/await where it is mostly used in servers (web and otherwise), Go was written to support coroutines (which provide the same asynchrony and source-level benefit at a runtime cost), C++ has had several proposals for async/await and coroutines, etc.


And?

All of these things (callbacks, async/await, coroutines, etc) are just convenience wrappers built on top of async event loops (to make things more comfortable for people that failed to get used to event loops). Do you think people are doing all this (in many different languages for many different environments) because they think "async" is bad?

Rusky wrote:
The non-switch/loop style is useful for far more reasons than "lol web developers r so dumbb." It's no more a matter of "just get used to it" than taking away someone's if/while/etc and leaving them with nothing but goto.


OOP is bad because it takes away everyone's if/while/etc and leaves them with nothing but goto (methods)!


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 4:29 pm 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
All of these things (callbacks, async/await, coroutines, etc) are just convenience wrappers built on top of async event loops (to make things more comfortable for people that failed to get used to event loops). Do you think people are doing all this (in many different languages for many different environments) because they think "async" is bad?
To be clear, I don't think async is bad, nor do I think e.g. node.js users do (they did sacrifice sane language design to use it for a while). I think it's a good thing the same way machine code is a good thing- as an efficient underlying semantics, but not something to "get used to" and write in directly most of the time. Async/await is only as much of a "convenience wrapper" as a while loop or function call.

Brendan wrote:
Rusky wrote:
The non-switch/loop style is useful for far more reasons than "lol web developers r so dumbb." It's no more a matter of "just get used to it" than taking away someone's if/while/etc and leaving them with nothing but goto.
OOP is bad because it takes away everyone's if/while/etc and leaves them with nothing but goto (methods)!
That... doesn't match what I said at all. Writing a method doesn't mean you suddenly can no longer write a while loop. On the other hand, writing an event loop manually does- you can't write "while (true) { do_something(); do_async_io(); do_something_else(); }" because you have to split up the code before and after the IO into different switch cases, and now the built-in while loop is useless. Async/await lets you write if/while/etc across async IO, while the compiler takes care of writing (and optimizing) the underlying switch statement.

_________________
[www.abubalay.com]


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 6:21 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Rusky wrote:
Brendan wrote:
Rusky wrote:
The non-switch/loop style is useful for far more reasons than "lol web developers r so dumbb." It's no more a matter of "just get used to it" than taking away someone's if/while/etc and leaving them with nothing but goto.
OOP is bad because it takes away everyone's if/while/etc and leaves them with nothing but goto (methods)!
That... doesn't match what I said at all. Writing a method doesn't mean you suddenly can no longer write a while loop. On the other hand, writing an event loop manually does- you can't write "while (true) { do_something(); do_async_io(); do_something_else(); }" because you have to split up the code before and after the IO into different switch cases, and now the built-in while loop is useless. Async/await lets you write if/while/etc across async IO, while the compiler takes care of writing (and optimizing) the underlying switch statement.


It wasn't supposed to make sense, it was supposed to help you realise that your "Async is no more a matter of "just get used to it" than taking away someone's if/while/etc and leaving them with nothing but goto." is equally nonsensical.

Note that:
  • "fully async" (where you have a thread receiving messages and handling them) is quite similar to the actor model (where you have an actor receiving messages and handling them)
  • the actor model is quite similar to what OOP was originally intended to be (where you have an object receiving messages and handling them)
  • what OOP was originally intended to be is related to what I call "hybrid OOP" (where "receiving messages" is replaced by direct method calls)


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 6:33 pm 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
Note that:
  • "fully async" (where you have a thread receiving messages and handling them) is quite similar to the actor model (where you have an actor receiving messages and handling them)
  • the actor model is quite similar to what OOP was originally intended to be (where you have an object receiving messages and handling them)
  • what OOP was originally intended to be is related to what I call "hybrid OOP" (where "receiving messages" is replaced by direct method calls)
And yet, async/await is still better than manually writing event loops. Not sure where you're pulling this non-sequitur from.

_________________
[www.abubalay.com]


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 7:45 pm 

Joined: Sat Sep 29, 2007 5:43 pm
Posts: 127
Location: Amsterdam, The Netherlands
Since I am tinkering with an idea that is related to this topic, I want to contribute a few of my thoughts to this thread. In my operating system, rather than taking the traditional approach of passing the arguments through registers and issuing a system call to switch to the kernel, I am experimenting with a command buffer that is mapped into userspace and to which multiple system calls can be written. For clarity, and as both mechanisms are used in this idea, I want to distinguish them by using the terms direct system calls and indirect system calls respectively.

The conventional system calls would end up being the indirect system calls that get written to the command buffer. The simplistic approach would be to write the system call number followed by its arguments in sequential order to the command buffer. Then, once all the indirect system calls have been scheduled, the operating system has to process the command buffer and schedule the next task. To indicate that we are done writing to the command buffer (either because the buffer is full or because there simply are no more indirect system calls to schedule), we have to issue a direct system call that tells the operating system that we want to give up our time slice, and optionally mark our task as idle until one of the indirect system calls has completed. Let's say these system calls are modelled a bit like the UNIX system calls, where you have open(), read(), write() and close(); then each of these system calls may end up returning a result (e.g. a file descriptor or a status code). To deliver these results, the kernel maps in an additional area, alongside the command buffer, that tells which system calls have completed.

Unfortunately, if we want to open() a file and then read() the results from it, we end up in the same situation as with synchronous system calls. We first have to issue open(), then wait for it to complete so that we have access to a file descriptor before we can actually read() anything from it. That's because the model is too simplistic: there is currently no way to describe dependencies between the system calls. To solve this issue I want to introduce a set of registers so that the result of each system call can be stored in a register. The format now ends up being something like: the register the system call uses to store the result, the system call number and the arguments (which can either be a register or a constant). Now the aforementioned situation can be written in a single pass as follows:

Code:
%0 = open("foo.txt");
%1 = read(%0, buf, 1024);


This solves one of the problems, but what if there is a dependency between two system calls where the result of one isn't used as an argument to the other? For instance, what if we want to open a file, read some data from it and then close it afterwards. The solution would be to extend the command buffer format a little bit so that we can add the dependencies of each system call as follows:

Code:
%0 = open("foo.txt");
%1 = read(%0, buf, 1024);
%2 = close(%0) waits on %1;
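
To make this a bit more concrete, here is a rough sketch of what a single entry in the command buffer could look like. The layout, field names and sizes are purely illustrative (nothing here is a settled ABI); the point is just that every argument is tagged so it can be either a constant or a reference to one of the virtual result registers, and that an entry can name another register to wait on:

Code:
#include <stdint.h>

#define ARG_CONST      0      /* args[i].value is an immediate value           */
#define ARG_REGISTER   1      /* args[i].value names a virtual result register */
#define NO_DEPENDENCY  0xffff /* entry does not wait on anything               */

struct syscall_arg {
    uint8_t  kind;            /* ARG_CONST or ARG_REGISTER                     */
    uint64_t value;           /* the constant, or the register number          */
};

struct syscall_entry {
    uint16_t result_reg;      /* virtual register that receives the result     */
    uint16_t syscall_nr;      /* which indirect system call to perform         */
    uint16_t waits_on;        /* register to wait on, or NO_DEPENDENCY         */
    uint16_t argc;            /* number of valid entries in args[]             */
    struct syscall_arg args[4];
};

The open()/read()/close() chain above would then take three such entries: the read's first argument is tagged ARG_REGISTER and points at %0, and the close entry sets waits_on to %1.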


Since I am still playing around with the idea, I don't know how well it will work in practice and what kind of problems remain to be solved. However, I can already see a few major benefits. One of them is that if you care about POSIX compatibility, it is fairly easy to introduce a compatibility layer that, at the cost of performance, simply issues a single system call at a time and waits for it to complete, which allows you to run a lot of existing applications. Functionality such as epoll() or asynchronous I/O is also easy to implement, and since multiple system calls can be issued and managed in userspace, such an implementation would end up with fewer context switches. Another benefit is that these system calls tend to be a lot more portable, in the sense that the command buffer can be formatted in a portable fashion using e.g. a variable-length encoding, rather than having a different ABI per architecture, which easily ends up being a mess (e.g. ptrace() on Linux for 32-bit/64-bit SPARC systems).
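
As a sketch of how thin such a compatibility layer could be, a blocking open() wrapper only has to submit a single entry and then block until its result register has been filled in. All of the helpers below (alloc_result_reg(), submit_entry(), flush_and_wait(), read_result()) are made up for the example and just stand for "write one entry to the command buffer and issue the direct system call":

Code:
#include <stdint.h>

#define SYS_OPEN 1   /* arbitrary number, for illustration only */

/* Hypothetical helpers over the command buffer, declared here only to make
 * the sketch self-contained. */
uint16_t alloc_result_reg(void);
void     submit_entry(uint16_t result_reg, int syscall_nr, const char *arg0);
void     flush_and_wait(uint16_t result_reg);   /* direct syscall: give up the
                                                   time slice until the result
                                                   register has been written  */
long     read_result(uint16_t result_reg);

/* POSIX-style blocking open() on top of the asynchronous interface. */
long compat_open(const char *path)
{
    uint16_t reg = alloc_result_reg();
    submit_entry(reg, SYS_OPEN, path);
    flush_and_wait(reg);
    return read_result(reg);        /* file descriptor, or a negative error */
}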

But to me one of the major benefits seems to be that this way of scheduling system calls easily allows for the support of green threads as used by programming languages like Erlang, where you essentially end up with a programming model that feels more synchronous and thus more natural to some people as the calls may block within the scope of such a thread. The userspace scheduler is simply a co-operative scheduler that gets called whenever a thread performs a blocking operation, so that it can handle the next completed operation by switching to the appropriate thread (or yield, if all threads are idling). Furthermore, the cost of a context switch in userspace is much cheaper: you essentially end up pushing registers on the stack, switch the stack and pop the registers from the stack.
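
To illustrate how little such a userspace switch involves, here is a minimal cooperative switch between the "scheduler" and one green thread, using POSIX ucontext purely as a stand-in; a real implementation would hand-roll the stack switch instead of calling swapcontext(), but the shape is the same:

Code:
#include <stdio.h>
#include <ucontext.h>

static ucontext_t sched_ctx, green_ctx;

static void green_thread(void)
{
    printf("green thread: issuing a blocking call\n");
    swapcontext(&green_ctx, &sched_ctx);  /* "block": save state, switch away */
    printf("green thread: call completed, resuming\n");
}

int main(void)
{
    static char stack[64 * 1024];

    getcontext(&green_ctx);
    green_ctx.uc_stack.ss_sp   = stack;
    green_ctx.uc_stack.ss_size = sizeof(stack);
    green_ctx.uc_link          = &sched_ctx;  /* where to go when it returns */
    makecontext(&green_ctx, green_thread, 0);

    swapcontext(&sched_ctx, &green_ctx);  /* run the thread until it "blocks" */
    printf("scheduler: thread is waiting, doing other work\n");
    swapcontext(&sched_ctx, &green_ctx);  /* operation completed: resume it   */
    return 0;
}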

Nevertheless, while the idea does sound promising, I do believe that there is no free lunch: the interface is obviously not as straightforward (except in terms of portability) and may end up consuming a lot more resources than synchronous system calls, but I do think that there are many cases where asynchronous system calls shine. Also, on the topic of ptrace(), the idea needs to be worked out a lot more; supporting something like system call tracing is not as straightforward with such an interface as it is with synchronous system calls.

So to answer the OP: yes, I think it can be worthwhile to support something like POSIX in the form of a compatibility layer offered, at the cost of performance, to the applications that require it, so that you can at least use an existing userspace on your system. But do keep in mind that you probably don't want your native applications to be POSIX-compliant, as that would mean that your microkernel design strictly depends on POSIX and the restrictions and complications it brings with it. To me that would mean that there aren't a whole lot of benefits to using your operating system over any other POSIX-compliant operating system, at least not performance-wise.


Yours sincerely,
Stephan.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 10:00 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Brendan wrote:
It wasn't supposed to make sense, it was supposed to help you realise that your "Async is no more a matter of "just get used to it" than taking away someone's if/while/etc and leaving them with nothing but goto." is equally nonsensical.


I believe I can clarify this discussion with one question: Brendan, is your objection regarding Node.JS's use of async and await primarily to
  1. using a higher-level abstraction such as async/await, regardless of either the underlying mechanism and/or the understanding of those underlying mechanisms by those using them;
  2. the costs of using such abstractions on software meant to operate at large scale in a stable and secure manner;
  3. providing such abstractions as a feature of either a programming language or a language's standard library;
  4. the fact that most developers using those abstractions are unfamiliar and/or uncomfortable with the use of the underlying primitives without the aid of said abstraction layer;
  5. the existence of a developer culture which you perceive as encouraging use of tools without understanding their mechanisms and/or costs, or which discourages deeper study;
  6. the specific implementation of async/await in Node.JS; or
  7. some other factor that did not occur to me.

I think that, as with so many other conversations here, those in the debate are talking at cross purposes, with one speaking about one thing, the other of something else, and neither seeing that they aren't communicating to the other what they think they are. As tedious as it can be, sometimes we need to step away from the immediate discussion and deliberately make our positions clear, even to the point of pedantry, before the discussion devolves into ad hominem attacks of the sort all too common on nearly every message board.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 11:03 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Schol-R-LEA wrote:
Brendan wrote:
It wasn't supposed to make sense, it was supposed to help you realise that your "Async is no more a matter of "just get used to it" than taking away someone's if/while/etc and leaving them with nothing but goto." is equally nonsensical.


I believe I can clarify this discussion with one question: Brendan, is your objection regarding Node.JS's use of async and await primarily to


Mostly, I wasn't objecting to (e.g.) Node.JS's use of async and await; I was objecting to "asynchronous" (the entire concept, regardless of how any language does/doesn't bury it under convenience fluff) being categorised as unreadable, complex and rarely beneficial (compared to synchronous things like most of POSIX).

For async/await specifically; it's over-complicated (e.g. increasing learning curve), obfuscating (hard to guess what's actually going on underneath or come to terms with "what happens when"), inefficient (capturing and restoring state, a generic "awaiter manager" hidden behind the scenes), inflexible (most implementations are so crippled you can't even have a timeout) and (in my experience) entirely unnecessary.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Fri Sep 30, 2016 11:27 pm 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
For async/await specifically; it's over-complicated (e.g. increasing learning curve), obfuscating (hard to guess what's actually going on underneath or come to terms with "what happens when"), inefficient (capturing and restoring state, a generic "awaiter manager" hidden behind the scenes), inflexible (most implementations are so crippled you can't even have a timeout) and (in my experience) entirely unnecessary.
Async/await has no runtime cost over the hand-written version- state capture/restore is precisely what the programmer would write in a switch statement (in fact, usually better because the register allocator is aware of it), and the only "awaiter manager" is just a message dispatcher like the hand-written switch. Every implementation I've used supports timeouts as well, even if only via "whenAny(async_io(), timeout(N))".

I disagree on how complicated or obfuscating it is- to me it much more clearly describes "what happens when" and is no harder to guess what's going on underneath than a for loop. But maybe it's just something you need to get used to. ;)

_________________
[www.abubalay.com]


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Sat Oct 01, 2016 2:06 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Rusky wrote:
Brendan wrote:
For async/await specifically; it's over-complicated (e.g. increasing learning curve), obfuscating (hard to guess what's actually going on underneath or come to terms with "what happens when"), inefficient (capturing and restoring state, a generic "awaiter manager" hidden behind the scenes), inflexible (most implementations are so crippled you can't even have a timeout) and (in my experience) entirely unnecessary.

Async/await has no runtime cost over the hand-written version- state capture/restore is precisely what the programmer would write in a switch statement (in fact, usually better because the register allocator is aware of it), and the only "awaiter manager" is just a message dispatcher like the hand-written switch. Every implementation I've used supports timeouts as well, even if only via "whenAny(async_io(), timeout(N))".


Pure bullshit. For the "switch()" in a message handling loop you modify a state machine's state; and you do not save the state machine's state or restore the state machine's state or save the thread's state or restore the thread's state.

Code:
    loading = true;
    sendMessage(VFS, OPEN_FILE_REQUEST);
    while(loading) {
        getMessage(message);
        switch(message.type) {
            case OPEN_FILE_REPLY:
                 sendMessage(VFS, READ_REQUEST);
                 break;
            case READ_REPLY:
                 handleData(message.data);
                 if(more) {
                     sendMessage(VFS, READ_REQUEST);
                 } else {
                     sendMessage(VFS, CLOSE_FILE_REQUEST);
                 }
                 break;
            case CLOSE_FILE_REPLY:
                 loading = false;
                 break;
        }
    }


The only state that changes here is a single "loading" boolean.

Rusky wrote:
I disagree on how complicated or obfuscating it is- to me it much more clearly describes "what happens when" and is no harder to guess what's going on underneath than a for loop. But maybe it's just something you need to get used to. ;)


I doubt you even know when it does/doesn't spawn an entire new thread.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Sat Oct 01, 2016 2:20 am 

Joined: Sun Feb 01, 2009 6:11 am
Posts: 1070
Location: Germany
StephanvanSchaik wrote:
For instance, what if we want to open a file, read some data from it and then close it afterwards. The solution would be to extend the command buffer format a little bit so that we can add the dependencies of each system call as follows:

Code:
%0 = open("foo.txt");
%1 = read(%0, buf, 1024);
%2 = close(%0) waits on %1;

Did you forget error handling here? Or are you assuming that you never need to check for errors because the next function would automatically fail, like read() when passed a -1 file descriptor?

You could of course add some kind of conditional execution here. And probably you'll soon find uses for loops (handling short reads maybe). Eventually it might turn out that what you just started to write is a VM. :)

Quote:
But to me one of the major benefits seems to be that this way of scheduling system calls easily allows for the support of green threads as used by programming languages like Erlang, where you essentially end up with a programming model that feels more synchronous and thus more natural to some people as the calls may block within the scope of such a thread. The userspace scheduler is simply a co-operative scheduler that gets called whenever a thread performs a blocking operation, so that it can handle the next completed operation by switching to the appropriate thread (or yield, if all threads are idling). Furthermore, the cost of a context switch in userspace is much cheaper: you essentially end up pushing registers on the stack, switch the stack and pop the registers from the stack.

Yes, coroutines are nice. And no, you don't need your asynchronous syscall interface if the kernel understands them. You already save and restore the register state when processing a syscall, so doing a context switch here comes for free. Even with a synchronous syscall interface, the kernel can just queue the syscall and switch to a different thread/coroutine until the operation has completed. The important part here is that the kernel is working asynchronously internally, but if you want the userspace to feel synchronous, there's little reason to change the traditional syscall interface. Essentially you get something that feels like blocking syscalls, except that they block only a single coroutine instead of the whole thread.
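
As a sketch of the kernel-side shape of that idea (every name below is invented for illustration; this only shows where the switch happens):

Code:
/* Hedged sketch: the syscall path queues the operation and switches to
 * another coroutine instead of blocking the whole thread. */
struct coroutine;

void              queue_request(struct coroutine *c, int op);  /* hand to a driver */
struct coroutine *next_runnable(void);
void              switch_to(struct coroutine *next);           /* registers were
                                                                   already saved on
                                                                   syscall entry    */

void sys_blocking_op(struct coroutine *cur, int op)
{
    queue_request(cur, op);      /* the kernel works asynchronously internally     */
    switch_to(next_runnable());  /* only this coroutine waits; the thread runs on  */
}
/* When the driver completes the request, it marks `cur` runnable again and the
 * original syscall returns as if it had blocked. */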

_________________
Developer of tyndur - community OS of Lowlevel (German)


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Sat Oct 01, 2016 2:35 am 

Joined: Sun Feb 01, 2009 6:11 am
Posts: 1070
Location: Germany
Brendan wrote:
Code:
    loading = true;
    sendMessage(VFS, OPEN_FILE_REQUEST);
    while(loading) {
        getMessage(message);
        switch(message.type) {
            case OPEN_FILE_REPLY:
                 sendMessage(VFS, READ_REQUEST);
                 break;
            case READ_REPLY:
                 handleData(message.data);
                 if(more) {
                     sendMessage(VFS, READ_REQUEST);
                 } else {
                     sendMessage(VFS, CLOSE_FILE_REQUEST);
                 }
                 break;
            case CLOSE_FILE_REPLY:
                 loading = false;
                 break;
        }
    }

That's a whole lot of code to describe a completely synchronous operation in async terms. Written synchronously, what you intend to implement here is:
Code:
vfs_open_file_request();
do {
    vfs_read_request();
    handle_data();
} while (more);
vfs_close_file();

Are you really going to say that this isn't simpler than your version?

Of course, you didn't quite implement the same thing as this because you neglected to actually implement a state machine. I can reply READ_REPLY to you even if you sent OPEN_FILE_REQUEST and your code will start processing uninitialised data. So real stable async code would still gain a little more complexity by checking whether the reply we just got was really expected at this point.
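
To make that concrete, here is roughly what your loop looks like once it tracks which reply it is actually willing to accept (same assumed sendMessage()/getMessage() primitives as in the quoted code, with an invented state enum):

Code:
    enum { OPENING, READING, CLOSING } state = OPENING;

    loading = true;
    sendMessage(VFS, OPEN_FILE_REQUEST);
    while(loading) {
        getMessage(message);
        switch(message.type) {
            case OPEN_FILE_REPLY:
                 if(state != OPENING) break;   /* reply we never asked for */
                 state = READING;
                 sendMessage(VFS, READ_REQUEST);
                 break;
            case READ_REPLY:
                 if(state != READING) break;
                 handleData(message.data);
                 if(more) {
                     sendMessage(VFS, READ_REQUEST);
                 } else {
                     state = CLOSING;
                     sendMessage(VFS, CLOSE_FILE_REQUEST);
                 }
                 break;
            case CLOSE_FILE_REPLY:
                 if(state != CLOSING) break;
                 loading = false;
                 break;
        }
    }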

And what's the benefit? We have zero parallelism here. And if we did want to have parallelism, we would have to add code here that handles replies to requests made within handle_data(). This is quickly becoming a mess. Manually programming global state machines like this doesn't make sense for more than Hello World programs.

_________________
Developer of tyndur - community OS of Lowlevel (German)


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Sat Oct 01, 2016 2:51 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

For me; normal kernel API functions pass parameters and return parameters in up to 4 registers (including the function number, and the status). On top of this I have 2 different "batch functions" where you give the kernel a table of entries; and where each entry represents a normal kernel API function, and has "input parameter values" that are replaced by "output parameter values". This allows the kernel to do a "load registers, call normal kernel function, store registers" loop over the table. For the 2 different "batch functions", one does the kernel functions in any order and doesn't stop if anything returns an error. The other does them in sequential order and stops if anything returns an error.

Of course the entire idea of this is to improve performance by avoiding "user-space -> kernel-space -> user-space" context switches.
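
Purely for illustration (the real field layout isn't specified here), each table entry is just the register values, reused for input and output, and the kernel-side processing is a trivial loop; assume for the example that the status comes back as the return value of an assumed kernel_call() helper:

Code:
#include <stdint.h>

struct batch_entry {
    uint64_t param[4];   /* in: function number + input parameters,
                            out: status + output parameters         */
};

/* Assumed helper: load param[] into registers, call the normal kernel API
 * function, store the registers back into param[], return the status. */
long kernel_call(uint64_t param[4]);

/* The "sequential" batch function: do entries in order, stop at the first
 * error (the other batch function would run them in any order and ignore
 * errors). */
long batch_sequential(struct batch_entry *table, int count)
{
    for (int i = 0; i < count; i++) {
        long status = kernel_call(table[i].param);
        if (status < 0)
            return i;    /* tell the caller how far we got */
    }
    return count;
}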

StephanvanSchaik wrote:
Code:
%0 = open("foo.txt");
%1 = read(%0, buf, 1024);
%2 = close(%0) waits on %1;


Since I am still playing around with the idea, I don't know how well it will work in practice and what kind of problems remain to solve.


This is too complex. Even assuming some kind of low level and very regular representation, the overhead of figuring out what the parameters are and which function depends on another will probably make it slower than the "user-space -> kernel-space -> user-space" switching, so it'd be faster to ignore it and just do separate/individual kernel API calls.

StephanvanSchaik wrote:
Another benefit is that these system calls tend to be a lot more portable in the sense that the command buffer can be formatted in a portable fashion using e.g. variable length encoding, rather than having a different ABI per architecture, which easily ends up being a mess (e.g. ptrace() on Linux for 32-bit/64-bit SPARC systems).


Variable length encodings and portability (e.g. endian swapping, etc) will make "overhead too high to be beneficial" even more likely.

StephanvanSchaik wrote:
But to me one of the major benefits seems to be that this way of scheduling system calls easily allows for the support of green threads as used by programming languages like Erlang, where you essentially end up with a programming model that feels more synchronous and thus more natural to some people as the calls may block within the scope of such a thread.


How does it make it easier for things like green threads than individual "async_open()", "async_read()" and "async_close()" kernel API functions?

StephanvanSchaik wrote:
The userspace scheduler is simply a co-operative scheduler that gets called whenever a thread performs a blocking operation, so that it can handle the next completed operation by switching to the appropriate thread (or yield, if all threads are idling). Furthermore, the cost of a context switch in userspace is much cheaper: you essentially end up pushing registers on the stack, switch the stack and pop the registers from the stack.


For purely cooperative, you only need to switch stacks - the caller can push/pop registers (and save/restore over 2 KiB of AVX512 state).

Note that the normal problem for user-space threading is that you'll have a low priority thread in one process wasting CPU time while a high priority thread in another process gets none. More fun is to forget user-space threading (that lacks the information needed to make efficient global decisions) and "piggy-back" kernel API calls for opportunistic thread switching ("Oh, you want to allocate some memory? You've used most of the CPU time you were given, so I'll do a thread switch early and give you a credit for next time").


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Sat Oct 01, 2016 2:53 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Kevin wrote:
Brendan wrote:
Code:
    loading = true;
    sendMessage(VFS, OPEN_FILE_REQUEST);
    while(loading) {
        getMessage(message);
        switch(message.type) {
            case OPEN_FILE_REPLY:
                 sendMessage(VFS, READ_REQUEST);
                 break;
            case READ_REPLY:
                 handleData(message.data);
                 if(more) {
                     sendMessage(VFS, READ_REQUEST);
                 } else {
                     sendMessage(VFS, CLOSE_FILE_REQUEST);
                 }
                 break;
            case CLOSE_FILE_REPLY:
                 loading = false;
                 break;
        }
    }

That's a whole lot of code to describe a completely synchronous operation in async terms.


Yes, the trivial example is trivial. Yes, you've failed to extrapolate from that. Yes, it's not much harder to have a single thread load 1234 files while transferring data to/from network while searching for prime numbers.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 
 Post subject: Re: the suitability of microkernels for POSIX
PostPosted: Sat Oct 01, 2016 2:59 am 

Joined: Wed Aug 17, 2016 4:55 am
Posts: 251
You know, for all the talk Brendan makes about asynchronicity, he still insists on showing an explicit loop to handle the events. Not only does that sound burdensome, it effectively forces the whole thing to become single threaded. If instead of having a loop the system called the callbacks directly, they could be run in separate threads and easily make the system very scalable without even trying - the only thing that would be done sequentially is events that by definition behave that way. In fact, doing things purely asynchronously would remove the need for a scheduler altogether: there is no task switching, callbacks get called when needed, and the concept of a process becomes more about isolation than anything else.

Yes, something like that would require completely breaking the standard way in which languages work (e.g. no main() function in C programs), but wouldn't that be the whole point anyway? In fact, this would lend itself to a very functional approach. Also really, this is how javascript normally works in browsers for the most part, timeout included ("a script got stuck" just means "it hit the timeout limit").

Not saying it's necessarily practical, just mentioning how a pure asynchronous approach should work in my opinion.

Kevin wrote:
StephanvanSchaik wrote:
For instance, what if we want to open a file, read some data from it and then close it afterwards. The solution would be to extend the command buffer format a little bit so that we can add the dependencies of each system call as follows:

Code:
%0 = open("foo.txt");
%1 = read(%0, buf, 1024);
%2 = close(%0) waits on %1;

Did you forget error handling here? Or are you assuming that you never need to check for errors because the next function would automatically fail, like read() when passed a -1 file descriptor?

To be fair, he did say it wasn't well defined yet. My first thought was that the chain would immediately terminate the moment a call fails. That seems like the obvious approach, especially since errors would be isolated to that chain and wouldn't affect other chains.

Brendan wrote:
This is too complex. Even assuming some kind of low level and very regular representation, the overhead of figuring out what the parameters are and which function depends on another will probably make it slower than the "user-space -> kernel-space -> user-space" switching, so it'd be faster to ignore it and just do separate/individual kernel API calls.

Honestly it'd be easier to just let the program say "this is a chain of commands" rather than wasting time figuring out dependencies implicitly. That chain in itself would be synchronous, but the program can meanwhile go do other stuff while it waits for it to finish (different chains are asynchronous towards each other).

EDIT: typo


 