Korona wrote:
Now, can you state in non-inflammatory ways what is wrong about the post?
I already did. For one, no serious dev would ever claim that preferring memory over storage is a misconception. Also, a serious dev would not declare that API concepts are bad when he is only comparing Rust libraries. A serious dev would know the difference between the standard POSIX API and the mmap interface, and would never confuse those with high-level, language-specific libraries. No Rust library can circumvent the kernel's syscall API, no matter how hard it tries. Etc. etc. etc.
For example, statements like these:
Quote:
When legacy APIs need to read data that is not cached in memory they generate a page fault.
This is absolutely not true. It might be true for certain Rust implementations that use mmap without MAP_POPULATE under the hood, but it is definitely not true for the legacy open/read/close syscall API in general.
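To make the distinction concrete, here is a minimal sketch of the legacy path (hedged: /etc/hostname is just a stand-in for any readable file). The kernel does the block I/O and the page-cache fill inside the read() call itself; the caller never takes a page fault for it, cached or not:

Code:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Legacy POSIX path: any block I/O and page-cache fill happen inside
       the read() syscall itself; the calling process takes no page fault
       for the file data, whether it was cached or not. */
    int fd = open("/etc/hostname", O_RDONLY);   /* placeholder file */
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n < 0) { perror("read"); close(fd); return 1; }

    printf("read %zd bytes via open/read/close\n", n);
    close(fd);
    return 0;
}

Contrast that with mmap() without MAP_POPULATE, where the first touch of each page does fault and the kernel fills it from the fault handler; that is a property of the mmap interface, not of "legacy APIs" in general.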
Quote:
Those operations [page fault, interrupts, copies or virtual memory mapping update] are now in the same order of magnitude of the I/O operation itself.
Absolutely wrong: a single memory copy is still orders of magnitude faster than any sector read, NVMe or not. (Leaving aside that there is controller and sector-access overhead too, and that NVMe/DMA needs an interrupt as well, the result of the I/O operation must be transferred into memory from a peripheral, so it cannot possibly be faster than a direct memory-to-memory transfer.) This is plain nonsense, not backed up by any measurements from the author.
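And nobody has to take my word for it, a rough microbenchmark is enough to show the gap. This is only a sketch with assumptions: ./testfile is a placeholder you create yourself (e.g. dd if=/dev/urandom of=testfile bs=4096 count=1), O_DIRECT is used so the read really hits the device instead of the page cache (your filesystem must support it), and the exact numbers obviously depend on your hardware:

Code:
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define SZ 4096   /* one page / typical sector size */

static long elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    struct timespec t0, t1;

    /* memory-to-memory copy of one page */
    static char src[SZ], dst[SZ];
    memset(src, 0xAA, SZ);
    memset(dst, 0x55, SZ);              /* touch both buffers up front */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, src, SZ);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (dst[0] != src[0]) return 1;     /* keep the copy from being optimized away */
    printf("memcpy   %d bytes: %8ld ns\n", SZ, elapsed_ns(t0, t1));

    /* one sector-sized read that actually hits the device */
    void *buf;
    if (posix_memalign(&buf, SZ, SZ)) return 1;   /* O_DIRECT needs aligned buffers */
    int fd = open("./testfile", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open ./testfile"); return 1; }
    clock_gettime(CLOCK_MONOTONIC, &t0);
    ssize_t n = read(fd, buf, SZ);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("O_DIRECT %zd bytes: %8ld ns\n", n, elapsed_ns(t0, t1));

    close(fd);
    free(buf);
    return 0;
}

Build it with gcc -O2 and run it on an NVMe-backed filesystem; typically the page-sized memcpy finishes in a few hundred nanoseconds while the device read takes tens of microseconds, which is exactly the orders-of-magnitude gap I'm talking about.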
Quote:
if modern NVMe support many concurrent operations, there is no reason to believe that reading from many files is more expensive than reading from one.
Files are handled in the VFS layer, which is completely independent of the block I/O layer where concurrent NVMe operations might make a difference. From the block I/O perspective it doesn't matter at all whether the sectors to be read belong to the same file or to different files. If there's an overhead to using many files, then that overhead is incurred in the VFS layer (and/or in the file system drivers), regardless of the block device's type.
Quote:
While the device is waiting for the I/O operation to come back, the CPU is not doing anything.
It looks like the author is stuck in the DOS era, before DMA was invented... Seriously, has nobody told this guy that Linux is a multitasking system? While one task is blocked on I/O the scheduler simply runs other tasks, and within a single task asynchronous interfaces let the CPU keep working while the transfer is in flight.
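Here is a tiny sketch of what actually happens instead, using plain POSIX AIO (assumptions again: the same placeholder ./testfile as above, O_DIRECT so the device is really involved, link with -lrt on older glibc; note that glibc implements POSIX AIO with helper threads, which only reinforces the point that the CPU stays busy):

Code:
#define _GNU_SOURCE
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* bypass the page cache so the device is really hit */
    int fd = open("./testfile", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open ./testfile"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) return 1;   /* O_DIRECT needs alignment */

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = 4096;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

    /* The calling thread is NOT stuck waiting for the device: it can do
       arbitrary other work until the request completes (and the kernel
       can schedule other processes in the meantime as well). */
    unsigned long other_work = 0;
    while (aio_error(&cb) == EINPROGRESS)
        other_work++;                 /* stand-in for useful computation */

    ssize_t n = aio_return(&cb);
    printf("read %zd bytes; did %lu units of other work in the meantime\n",
           n, other_work);

    close(fd);
    free(buf);
    return 0;
}

While the request is in flight the main thread keeps counting; in a real program that would be parsing, decompression, issuing the next batch of requests, whatever. And even a plain blocking read() doesn't idle the CPU: the scheduler just runs some other process until the DMA completes and the interrupt wakes the reader up.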
I could go on, but I'm sure the above is more than enough to see that the author is no expert.
Cheers,
bzt