rdos wrote:
I wouldn't say these are the same concepts. For instance, CreateProcess in NT is much better than fork in Unix, and they are certainly not similar.
Now you're talking about implementations and not concepts again. The concept is separate address spaces, filled up with different segments (code and data), and that's the same for both. As far as the implementation goes, CreateProcess can be emulated with fork+exec, but fork can't easily be emulated with CreateProcess, so it is questionable which one is better. In my opinion the one that can easily mimic the other is the better one. Plus creating multiple processes connected with pipes is a hell of a lot easier with fork than with CreateProcess and its armada of arguments (see the sketch below).
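Just to illustrate (a minimal sketch assuming a POSIX system, error handling omitted for brevity): spawning two processes connected with a pipe takes only a handful of lines with fork+exec.

[code]
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* Sketch: run the equivalent of "ls | wc -l", i.e. two child
 * processes connected with a pipe. No error handling. */
int main(void)
{
    int fds[2];
    pipe(fds);                          /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                  /* first child: the producer */
        dup2(fds[1], STDOUT_FILENO);    /* its stdout goes into the pipe */
        close(fds[0]); close(fds[1]);
        execlp("ls", "ls", (char *)NULL);        /* fork+exec ~ CreateProcess */
    }
    if (fork() == 0) {                  /* second child: the consumer */
        dup2(fds[0], STDIN_FILENO);     /* its stdin comes from the pipe */
        close(fds[0]); close(fds[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
    }
    close(fds[0]); close(fds[1]);
    while (wait(NULL) > 0);             /* reap both children */
    return 0;
}
[/code]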
rdos wrote:
DLLs are not like shared libraries in Unix
Yes, they are. Both are dynamically loaded, and both provide a function library shared among multiple processes. The concept (shared portions of code loaded dynamically as needed) is exactly the same.
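A minimal sketch of that shared concept (using glibc's libm.so.6 and Windows' msvcrt.dll purely as example libraries; names may differ on your system, and on Linux you'd link with -ldl):

[code]
#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <dlfcn.h>
#endif

/* Sketch: load a shared piece of code at run-time and look up a symbol.
 * Only the spelling differs between the two systems; the concept is identical. */
int main(void)
{
    double (*fn)(double);
#ifdef _WIN32
    HMODULE lib = LoadLibraryA("msvcrt.dll");               /* DLL           */
    fn = (double (*)(double))GetProcAddress(lib, "cos");
#else
    void *lib = dlopen("libm.so.6", RTLD_NOW);              /* shared object */
    fn = (double (*)(double))dlsym(lib, "cos");
#endif
    if (fn) printf("cos(0) = %f\n", fn(0.0));
    return 0;
}
[/code]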
rdos wrote:
NT even doesn't use the same executable formats (PE vs ELF).
Again, implementation detail. The concept (that both file formats have a code segment and some data segments) is the same, so much so that objconv can convert between ELF and PE without problems. Actually the Linux kernel is capable of executing programs in the PE format with a module, proving that the file format is indeed just a small implementation detail.
rdos wrote:
The use of syscalls with ints is a pretty inefficient method that they should NOT have borrowed from Unix.
Again, implementation detail. One could have used the sysenter/syscall instructions instead. The point is, some functions require an elevated privilege level and are accessed by a special instruction rather than via "standard" calls. The concept here is separate user space and kernel space (in contrast to Amiga Exec and Singularity for example, where there's only one address space, and kernel functions are accessed the same way as any other library function).
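For example (a sketch, Linux on x86-64 assumed), the very same service can be requested regardless of whether the CPU enters the kernel via int 0x80, sysenter or syscall; the privileged transition is hidden behind one entry point:

[code]
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Sketch: a raw system call. How the kernel is entered (int/sysenter/syscall)
 * is an implementation detail; the concept is a privileged service request. */
int main(void)
{
    const char msg[] = "hello from user space\n";
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);   /* write(1, msg, len) */
    return 0;
}
[/code]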
rdos wrote:
The same goes for ioctl.
Same concept for both UNIX and WinNT. DeviceIoControl operates on open file handles (just like ioctl in UNIX), and they provide very similar functionality.
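A small sketch (POSIX assumed), asking the terminal driver for the window size through an open file descriptor; DeviceIoControl follows the exact same handle + request code + buffer pattern:

[code]
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Sketch: device-specific request on an open file descriptor. */
int main(void)
{
    struct winsize ws;
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)   /* handle + request + buffer */
        printf("%d rows x %d cols\n", ws.ws_row, ws.ws_col);
    return 0;
}
[/code]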
rdos wrote:
For applications, I almost exclusively use C++
I agree. C++ is good for user space applications and libraries, but not for a kernel IMHO. (But I must put the emphasis on "IMHO": I don't want to say nobody should ever use C++ for kernel development; all I'm saying is that it's very hard and not worth it, IMHO.)
rdos wrote:
They do. Things like memory-mapped IO don't work effectively with the file read/write API. They work best with physical addresses.
Again, you're confusing implementation with concept. Just take a look at fmemopen, or mmap with fd -1, and you'll see that efficient implementations do exist.
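Here's a minimal sketch (Linux/glibc assumed) showing both of these at once: the very same memory is reached through the stdio read/write API and through a plain pointer, no physical addresses involved.

[code]
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* mmap with fd -1: anonymous mapping, plain memory with no file behind it */
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* fmemopen: wrap that memory into a FILE* and use the normal file API on it */
    FILE *f = fmemopen(buf, 4096, "w+");
    fprintf(f, "hello, world");
    fflush(f);                       /* flush also NUL-terminates the buffer */

    printf("%s\n", buf);             /* read the very same bytes via the pointer */

    fclose(f);
    munmap(buf, 4096);
    return 0;
}
[/code]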
In fact, our machines are still von Neumann architectures, mimicking human society the way Neumann originally designed it:
- the CPU is the government, making decisions and creating laws (code);
- RAM is similar to the justice system (don't forget that in the USA there's precedent-based law), remembering the previous results and inputs (data) and storing laws (code);
- and the peripherals are the executive divisions, like the police, fire departments or the military, executing the orders from the two above and reporting back (providing feedback on the results) to them.
Just because there is new hardware (like GPUs specialized in matrix operations) doesn't mean the concept above has changed. It didn't. In the same way, the fact that CPUs now have no-execute protection doesn't change the concept of storing code as data (think about JIT compilers). It doesn't matter what interface the peripherals are using (IO ports or MMIO), or whether you read bits from a ferrite core, sectors from floppies or SSDs, or a pressed surface with a laser like in CD-ROMs: the concept of separate peripherals remains. Even if you replaced the mouse and keyboard with a mind-reading BCI helmet like in The Foundation, and storage devices with shiny crystals like in Stargate, that wouldn't change the concept.
Now going a step further into software land, you can see that all kernels use the same concepts too, even though their implementations and APIs differ: they all have files, directories, devices, processes, libraries etc. They all store code in files, and they all load those into address spaces to create processes. All the same.
rdos wrote:
I don't think so.
You can also think that the Earth is flat, but that won't make it true.
If you strip away the implementation details, you'll see the concepts under the hood are exactly the same.
Conclusion: we can only talk about whether a particular implementation is more effective or easier to use than the others, but as far as the concepts go, there's nothing new under the Sun.
Cheers,
bzt