nexos wrote:
Hello,
I am redesigning my scheduler in favor of a better system. Currently, I am a bit confused about one thing. Every CPU, IMO, should have lockless access to its own run queue. But what if a thread on a different CPU tries to unblock a thread on that CPU? It would then have to access another CPU's run queue! I have a few possible solutions that I would like suggestions on:
Make the waking CPU send an IPI to the CPU it is trying to unblock the thread on. The IPI handler would, with interrupts disabled, add that thread to the local run queue. The only issue is that interrupts have a large overhead.
Bite the bullet and protect each run queue with a lock. That seems inefficient and bug-prone, however.
Any other ideas?
nexos
I use a temporary queue (actually a 256-entry array) for threads that have become runnable, either from IRQs or from another core. The implementation is lock-free, so it doesn't need a spinlock, which keeps the scheduler simpler. The core owning the temporary queue empties it and moves the threads into its local scheduler queue. When a core activates a thread on another core, it adds the thread to that core's temporary queue and then sends an IPI to the target core.