JamesHarris wrote:
Interesting idea. I have my doubts about the latency it would introduce but I'll keep it in mind.
I don't know about latency, but I can vouch for its stability. This is, in essence, what Linux does. I once had a system set up wrong: the interrupt trigger for a GPIO pin was configured as level-sensitive, and that pin was permanently held at the triggering level. So the system was in an interrupt storm, being interrupted constantly, and you would hardly notice. Most things still worked normally. Some timing-sensitive things were off, but others worked perfectly.
JamesHarris wrote:
A spinlock in an interrupt service routine?
Not a problem if the spinlocks disable interrupts while held. Spinlocks are not meant to be held for long. The holder of a spinlock must not sleep or return to userspace, so it will release the lock within a very short time. That is the difference between a spinlock and a mutex. And you are right: you must never take a mutex in an interrupt handler, because acquiring a contended mutex can sleep.
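To make that concrete, here is a minimal userspace sketch of the idea, assuming C11 atomics. The names toy_spin_lock_irqsave(), local_irq_save(), and the irqs_enabled flag are hypothetical stand-ins for what a kernel would actually do with the CPU's interrupt flag; they are not a real kernel API.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy test-and-set spinlock. In a real kernel the local_irq_* calls
 * would manipulate the CPU interrupt flag; here a bool simulates it. */

typedef struct {
    atomic_flag locked;
} toy_spinlock;

static bool irqs_enabled = true;           /* stand-in for the CPU IRQ flag */

static unsigned long local_irq_save(void)  /* placeholder: save + disable */
{
    unsigned long was = irqs_enabled;
    irqs_enabled = false;
    return was;
}

static void local_irq_restore(unsigned long flags)
{
    irqs_enabled = flags;
}

static unsigned long toy_spin_lock_irqsave(toy_spinlock *l)
{
    /* Disable interrupts first, so an ISR on this CPU cannot interrupt
     * us and spin forever on the lock we already hold (deadlock). */
    unsigned long flags = local_irq_save();
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        ;                                  /* spin: holder releases soon */
    return flags;
}

static void toy_spin_unlock_irqrestore(toy_spinlock *l, unsigned long flags)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
    local_irq_restore(flags);
}
```

The key point is the ordering: interrupts go off before the lock is taken and come back only after it is released, so the critical section can never be preempted by an ISR that wants the same lock.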
JamesHarris wrote:
In my time I've come across lots of software which is orders of magnitude slower than it should be because it is riddled with all kinds of inefficiencies (which are then executed in loops, compounding the problems) and ISTM better to design for performance from the outset.
But you can only do that once you have understood the problem. You need to build the inefficient but working system first to see where the inefficiencies really mount up, and to try to think of a better way. Yes, sometimes you can have great fun doing local optimizations on an inefficient design, hammering down the nails that stick out; other times no nails really stick out, but things still aren't great. Either way, it is better to first build a system that works and then improve on the design.
Also, if you have no hot code, it is possible you just have hot data, in which case an analysis of the data flow (and an optimization there) might be in order. You still never optimize preemptively; you measure the impact of your design changes. Programming can be very counter-intuitive.
rdos wrote:
I don't think that is a good idea. I think you should handle the cause of the interrupt in the interrupt handler, but then you can defer the actions needed to be done in the OS itself to a server thread.
This is consistent with previous statements from you (which boil down to "Linux is wrong" in so many words[1]), yet it does not address the actual problem. The problem is that interrupts arrive when they damn well please, the hardware has fixed the interrupt priorities, and there is nothing the software can do about either. That remains true under the proposed masking scheme, but the actual interrupt handlers are minimized, the approach is portable to other interrupt controllers, and it is generic across all devices. Your approach would have us calling driver code from the ISR, which is exactly what I was trying to avoid. The deferral I proposed lets the "ISR" run in kernel thread context, where it can do far more than a real ISR could.
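The masking scheme can be sketched roughly as below, simulated in userspace with pthreads. The names mask_line()/unmask_line() are hypothetical stand-ins for interrupt-controller register writes, and the worker thread stands in for a kernel thread; this is an illustration of the shape of the design, not any particular kernel's implementation.

```c
#include <pthread.h>
#include <stdbool.h>

/* The hard ISR only acknowledges and masks the line, then wakes a
 * thread; the thread does the device work and unmasks when done. */

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;
static bool pending     = false;
static bool line_masked = false;
static int  handled     = 0;

static void mask_line(void)   { line_masked = true;  }  /* placeholder */
static void unmask_line(void) { line_masked = false; }  /* placeholder */

/* The "hard" ISR: minimal, generic, identical for every device. */
static void hard_isr(void)
{
    mask_line();                   /* stop re-triggering at the controller */
    pthread_mutex_lock(&mtx);
    pending = true;
    pthread_cond_signal(&cv);      /* wake the deferred handler */
    pthread_mutex_unlock(&mtx);
}

/* The deferred handler, running in (kernel) thread context, where it
 * may take mutexes, sleep, or call arbitrary driver code. */
static void *irq_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mtx);
    while (!pending)
        pthread_cond_wait(&cv, &mtx);
    pending = false;
    pthread_mutex_unlock(&mtx);

    handled++;                     /* device-specific work happens here */
    unmask_line();                 /* re-enable the line once serviced */
    return NULL;
}
```

Note that while the line is masked, a level-triggered source that is still asserted cannot storm the CPU; it fires again only after the thread has actually serviced the device and unmasked it.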
[1] I do not know whether this is conscious on your part, but you seem to head in the other direction whenever I present anything from Linux's book here, especially the less intuitive parts.