rdos wrote:
Most of the bloat of Windows and Linux are not useful. For a power user, it would be far better to be able to configure stuff like you want it and build your own applications (in C++). Instead, most of the bloat is to support script & interpreted languages, windowing systems, and complicated general solutions that are not needed.
Well, I'd say that A LOT of the complexity in modern kernels comes from the need to support containers. It's not only about supporting chroot(). Modern Linux has sophisticated support for namespaces that lets each container have a completely different and independent view of the system: independent PIDs, network stacks, UIDs, and even (since 2020) independent time. Containers are now almost VMs running on the same kernel, but faster.
Containers are not only used in the cloud, but on developers' workstations as well. I personally prefer not to work inside a container, but for many developers that's mandatory.
Also, security is much more complicated today. Back in the old days (the '90s), capability-oriented security wasn't a thing. Now, it's all about that. It's no longer a binary question of whether you have permission on some resource or not: there is a fine granularity of capabilities you may or may not hold.
Kernels also got more complicated for the sake of performance. In the good old days, we had just read(), write(), open(), etc., right? Now we have readv(), pread64(), preadv(), preadv2(). Vectored I/O really helps, but it's more complex to support. On top of that, the p* versions of read() take the file position as an argument on each call, so multiple threads can read from the same file descriptor without moving a shared current position on every read and synchronizing around it.
Also, high-performance software doesn't just call open() anymore. Now there is openat(), mkdirat(), fchownat(), unlinkat(), etc., which resolve paths relative to an already-open directory fd. That improves lookup time and avoids re-walking the same path prefix on every call.
Also, in the good old days (as in my OS today) we had just select() and poll(). Later came epoll(), which is a monster, but still simple compared to what we got in kernel 5.1: io_uring. That's all because big companies need software on Linux to run faster and faster.
Also, how do you copy a file? In the good old days, we just used read() to pull the content into a buffer and then write() for the other file. Today we have fancy syscalls like splice(), vmsplice(), etc. that allow copying directly inside the kernel, without userspace buffers.
Also, in the good old times, we had simple SYSV signals. Now we have real-time signals, which are A LOT more complicated to both support and use, but have their own advantages, of course. Along with that we got all the real-time Linux support, which internally complicates a lot of stuff, not only the scheduler.
Finally, older operating systems, not only Linux, had limited monitoring features compared to modern ones. Now we have ftrace, which is extremely powerful and useful for enterprise Linux users and kernel developers, but it introduced plenty of complexity.
I believe you get the point: people now demand much more from operating systems than they did in the past. And that doesn't apply only to Linux; I just know Linux much better than the other mainstream operating systems, so I'm using it as the example.
Don't get me wrong: I'm still a big fan of simpler and smaller operating systems, like the one I'm working on myself, but such a system will never be "good" for server or desktop machines given today's demands. That's why I'm thinking about small embedded devices, even MMU-less CPUs like the ARM Cortex-R series. Writing a small operating system for those machines is still feasible.