eekee wrote:
@thewrongchristian: I don't see your objection as being related to the problems the article discusses. POSIX doesn't care about caches so long as reads which follow writes return the data just written. That's impossible to do in a networked filesystem if each machine caches the filesystem independently.
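The guarantee in question can be shown with a minimal single-machine sketch (filenames and values are illustrative): a read() that follows a write() to the same file must return the data just written, regardless of how the kernel caches pages internally. It's exactly this guarantee that becomes hard once each networked machine has its own cache.

```python
import os
import tempfile

# Demo of POSIX read-after-write consistency: the read below must
# observe the bytes just written, whatever caching happens underneath.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello")          # write through whatever cache exists
    os.lseek(fd, 0, os.SEEK_SET)    # rewind to the start of the file
    data = os.read(fd, 5)           # POSIX requires this to see b"hello"
finally:
    os.close(fd)
    os.unlink(path)

print(data)  # b'hello'
```

On a local filesystem the kernel enforces this through a single shared page cache; with independent per-machine caches there is no single authority to consult, which is the crux of the article's complaint.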
(This in turn reminds me that Plan 9 has a mount cache, but it's off by default; 9front's mount cache is on by default.)
I only got as far as the text I quoted, as I was under time pressure. Reading further, the article does explain the context around POSIX and distributed filesystems.
But the point stands: POSIX semantics are not much of a problem for 99% of use cases, and designing an OS around the remaining 1% makes no sense. That 1% can live without POSIX semantics, or split its data up so that each piece can be processed independently.
Based on the article, the argument seems to be that POSIX prevents scaling where data must be shared between distributed nodes. But if the data must be shared and updated concurrently in a consistent manner (and you do want the data to be consistent, else it is probably wrong), the barrier to scalability is in fact the distributed nature of the program and fundamentals such as the speed of light.
To be properly scalable, the problem being solved concurrently must be appropriately partitioned.