JamesHarris wrote:
On the contrary, it's a problem which has been known about for years - albeit that it's less recognised now. I've read many times of communication programs which had to reprogram the master 8259 to prioritise the com port interrupts or they could not support high baud rates.
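For context, the reprogramming those communication programs did was a one-byte OCW2 "set priority" command to the master 8259. This is a minimal sketch: the helper just computes the command byte (priorities on the 8259 rotate, so making IRQ n highest means declaring IRQ (n-1) mod 8 lowest); the actual `outb` is shown only in a comment since it needs port-I/O privileges.

```c
#include <stdint.h>

/* OCW2 "set priority" command byte for the master 8259 PIC (port 0x20).
 * 0xC0 selects the set-priority command; the low three bits name the IRQ
 * line that becomes LOWEST priority, which makes the next line (mod 8)
 * the highest. IRQ numbers are the classic PC assignments (IRQ4 = COM1). */
static uint8_t pic_priority_ocw2(uint8_t highest_irq)
{
    uint8_t lowest = (uint8_t)((highest_irq + 7) & 7); /* (n-1) mod 8 */
    return (uint8_t)(0xC0 | lowest);
}

/* A real driver would then issue the command, e.g.:
 *   outb(0x20, pic_priority_ocw2(4));   // service COM1 (IRQ4) first
 */
```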
I doubt that well-designed programs need to mess with IRQ priority at all (at least not if the OS takes high IRQ volumes into account -- the situation will likely be different on Windows 3.1 than on an OS designed from scratch). Before you resort to that, just switch from IRQ handling to polling.
IRQs only have an advantage over polling if the IRQ rate is not easily predictable (but in your scenario, we know that we want to read from the UART every few microseconds), or if they can save power (which they can't if we're running at almost 100% CPU utilization anyway).
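The polling alternative is just a spin on the 16550's line status register. A minimal sketch, with the port read injected through a function pointer so it runs without real hardware (the register offsets and the data-ready bit follow the classic PC COM1 layout; `fake_inb` is purely illustrative):

```c
#include <stdint.h>

#define COM1_BASE 0x3F8
#define REG_RBR   0     /* receive buffer register (read)     */
#define REG_LSR   5     /* line status register               */
#define LSR_DR    0x01  /* LSR bit 0: data ready              */

typedef uint8_t (*port_in_fn)(uint16_t port);

/* Spin on LSR until a byte arrives, then return it. A real driver
 * would bound the spin or yield; this is the bare idea. */
static uint8_t uart_poll_getc(port_in_fn inb, uint16_t base)
{
    while ((inb((uint16_t)(base + REG_LSR)) & LSR_DR) == 0)
        ;   /* busy-wait: no interrupt, no context switch */
    return inb((uint16_t)(base + REG_RBR));
}

/* Stand-in for real port input, for illustration only: reports
 * "data ready" on the third status read and then delivers 'A'. */
static int fake_calls = 0;
static uint8_t fake_inb(uint16_t port)
{
    if (port == COM1_BASE + REG_LSR)
        return (++fake_calls >= 3) ? LSR_DR : 0;
    return 'A';
}
```

At a byte every few microseconds the loop almost never spins more than once, which is exactly why polling beats paying the interrupt-entry cost per byte.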
The same strategy (polling instead of IRQs if you know that the I/O latency is going to be low) is also exploited by modern NVMe stacks.
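In the NVMe case the poll is on the completion queue's phase tag: the controller flips a bit in each new entry, so the host just re-reads the current slot until the bit matches the phase it expects. A simplified sketch (field names and layout are condensed from the spec, not a real driver structure):

```c
#include <stdint.h>
#include <stdbool.h>

/* Condensed NVMe-style completion queue entry; per the spec, bit 0 of
 * the status field is the phase tag the controller toggles on each
 * pass through the ring. */
struct cq_entry {
    uint32_t result;
    uint32_t rsvd;
    uint16_t sq_head;
    uint16_t sq_id;
    uint16_t cid;     /* command identifier */
    uint16_t status;  /* bit 0 = phase tag  */
};

/* Returns true and copies out the entry once the phase bit matches the
 * phase the host expects for this pass; otherwise the caller polls again. */
static bool cq_poll_one(volatile struct cq_entry *e, unsigned expected_phase,
                        struct cq_entry *out)
{
    if ((unsigned)(e->status & 1) != expected_phase)
        return false;               /* nothing new yet: keep polling */
    *out = *(struct cq_entry *)e;   /* snapshot the completed entry  */
    return true;
}
```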
Note that x86 is about the only architecture that supports IRQ priorities (= nested IRQs) at all. Other archs (e.g., classic ARM) use a single vector for all IRQs. The myriad embedded ARM devices running in real-time applications do not seem to require nested IRQs.