Hi Korona, those are some nice observations.
Quote:
O(1) worst-case data structures. You will find many more linked lists and similar data structures in kernels than in user-space applications. This is a trade off of average case performance in favor of worst-case performance. If your user-space application needs 50 us to rehash a hash table, who cares; you know that hash tables are fast on average. If your IRQ handler does that, the whole system hangs for 50 us (e.g. cannot accept higher priority IRQs from real-time devices).
Constant-time worst-case data structures seem like an absolute must in real-time operating systems, but are they worth sacrificing better average-case performance in a general-purpose operating system? Even if we provide a constant-time data structure that answers a query in a few microseconds, we still have to contend with non-preemptive kernel sections, which may far more often be the source of high latencies. So do you think it's worth sacrificing better average-case performance for O(1) worst-case performance?