immibis wrote:
Octocontrabass wrote:
But user mode programs call the kernel. It doesn't make sense to use "call" in both directions.
Your scheduler may be easier to understand, but you're trading that for additional complexity in your kernel's entry/exit points. Are you sure that's a good trade?
Yes, you really have two levels of calling. Imagine an algorithm like a DFS with an explicit stack. You call doSomeDfsWork and it does some processing using that stack. It makes sense to say that node processing "recurses", and it also makes sense to say you are calling doSomeDfsWork; the two kinds of calls are orthogonal, they happen on different stacks, and that's completely fine.
Does it make sense to implement a scheduler as that loop? Maybe not literally, but I think it's fun to imagine it that way. As a conceptual model it answers a bunch of questions about scheduling, like what state to create a process in, what happens if it destroys itself while running, and how scheduling interacts with syscalls.
I see the scheduler as an event handler, not as sequential code that runs in a loop. Additionally, the scheduler runs in the context of whatever thread happens to be running (via IRQs) or in the context of a thread requesting some scheduler-related action. I also have a load-balancer and an IRQ-balancer that run in a kernel thread, but those exist to optimize the system and are not directly connected to scheduling.
IMO, the scheduler should in no way interact with syscalls. The scheduler shouldn't care whether a thread runs in kernel mode or user mode, and it should not need to be informed about ring switches.
For debugging, it's far superior to load and store registers in the thread control block rather than relying on stack contexts. It also makes it easy to create a new thread: create the thread control block, initialize the registers, allocate a kernel stack, and put it into the ready list.