Hi,
Kevin wrote:
Brendan wrote:
For "everything asynchronous", most of the time you end up with a kind of finite state machine (with "switch(message.type)" in a loop), and once you get used to it it's not harder at all. The problem is that you do need to get used to it first.
Yes, it is harder and results in less readable code. It's the difference between structured programming and gotos all over the place. And then you have some boilerplate code for state transitions between each asynchronous operation, you don't keep your stack contents across async ops, etc.
It's more readable because you're only focusing on one "event" at a time, boilerplate code exists regardless, and the stack contents are irrelevant ("state" becomes part of the finite state machine).
Kevin wrote:
Such event loops are great if your program is doing rather trivial things. It stops being fun when your code becomes a little more complex.
This is true for everything that involves programming.
Kevin wrote:
Quote:
For this trivial example; there is parallelism - you can process one piece and output it while you're waiting for the next piece to arrive (without the inconvenience of setting up a "worker thread"). More importantly, for "everything asynchronous" it's easier to do it this way than it is to do it the slower way.
My point (which you ignored by tactical quote splitting) was that with common directory sizes, you would always send one request and get one response, and splitting them into smaller pieces in order to allow things to actually run in parallel would hurt performance rather than improving it.
You're saying that a deliberately trivial example is trivial, while failing to extrapolate to anything that isn't.
Kevin wrote:
In other words, while there may be some programs for which this actually matters, using AIO would be premature optimisation for at least 95% of the cases where the directory entries are read and would come with costs, but no benefits.
And now the deliberately trivial example that you've failed to extrapolate from has become 95% of all cases. Awesome.
Kevin wrote:
Quote:
For POSIX where "asynchronous" is too painful for anyone to bother, most IO ends up being synchronous. For "everything asynchronous" where "synchronous" is too painful for anyone to bother, most IO ends up being asynchronous.
Please show me your API that makes emulating synchronous requests hard while making async requests easy. Unless you're rather inventive with artificial restrictions in the programming language, this seems rather unlikely to happen.
You really do have no idea what you're talking about.
For mine; you can receive a message from any sender at any time. You send an "open file request", you might get a "user pressed the cursor key" message, then a "kernel is running low on memory" message, then a "process you started has terminated" message, then 1234 more unrelated messages, then the "open file reply". There is no guaranteed order (e.g. X sends a message to Y and Z at the same time; when Y receives the message from X it sends a message to Z; Z can receive the later message from Y before it receives the earlier message from X). There is also no guaranteed delivery (successfully sent does not imply it will be successfully received).
For the API, there is only "sendMessage()", "getMessage()" and "getMessageWithTimeout()".
Cheers,
Brendan