bloodline wrote:
linguofreak wrote:
bloodline wrote:
So I did ponder a semaphore system, which would put the task back into the waiting list, only to be scheduled back in when it received the signal that the lock was free. But since a task switch in my system is just a simple stack swap, keeping the task in the ready list has less overhead than the bookkeeping needed to track which task is waiting for which lock and then alter the task state accordingly. But this is definitely something I would like to add in future for higher-level resources!
You could do something like giving each task a function pointer for a callback "isWaiting()".
For the scheduler, the isWaiting() pointer is used as follows: When a new task is created, the pointer is initialized as a null pointer. When the scheduler selects a task to run, it checks to see if the isWaiting() pointer is null. If it is, it schedules the task. If it is not, it calls the function on the other end of the pointer. If the pointer is non-null when the function returns, the scheduler rejects the task as a candidate to run and moves on down the list of ready tasks. If it is now null, the scheduler runs the task.
For the task, the isWaiting() pointer is used like this: When the task needs to wait on a lock, it sets the isWaiting() pointer to point to a function that tests that lock. When the function is called, it performs the test, and if another task still holds the lock, it does nothing and returns. If the lock is free, the function nulls the isWaiting() pointer and returns.
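A minimal sketch of this mechanism in C. All the names here (task_t, lock_t, testLock(), waitOnLock(), pickNext()) are hypothetical, invented for illustration, and the ready list is assumed to be a simple singly-linked list:

```c
#include <stddef.h>

typedef struct task task_t;

struct task {
    /* NULL when the task is runnable; otherwise a test function that
     * nulls the pointer once the condition the task waits on clears. */
    void   (*isWaiting)(task_t *self);
    void    *waitObject;   /* whatever the task is waiting on */
    task_t  *next;         /* ready-list link */
};

typedef struct {
    task_t *owner;         /* NULL when the lock is free */
} lock_t;

/* Task side: the test function installed before blocking on a lock. */
static void testLock(task_t *self)
{
    lock_t *lk = self->waitObject;
    if (lk->owner == NULL) {    /* free: claim it and become runnable */
        lk->owner       = self;
        self->isWaiting = NULL;
    }                           /* otherwise stay blocked */
}

/* Task side: install the test, then yield back to the scheduler. */
static void waitOnLock(task_t *self, lock_t *lk)
{
    self->waitObject = lk;
    self->isWaiting  = testLock;
    /* yield() here */
}

/* Scheduler side: skip any task whose callback leaves isWaiting set. */
static task_t *pickNext(task_t *readyList)
{
    for (task_t *t = readyList; t != NULL; t = t->next) {
        if (t->isWaiting != NULL)
            t->isWaiting(t);            /* re-test the wait condition */
        if (t->isWaiting == NULL)
            return t;                   /* runnable */
    }
    return NULL;                        /* everything is blocked */
}
```

Note that testLock() both claims the lock and clears isWaiting in one step, so the lock is handed directly to whichever blocked task the scheduler happens to test first.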
I do like this idea! I think I would need a separate list of “lockWaiting” tasks, which the scheduler would scan every OS quantum, executing each callback and then moving any tasks that are now unblocked to the ready list. This would probably have a similar overhead to my existing locking system, but with the advantage that task priority is no longer an issue.
I’ll think about this some more, but I do really like the idea!
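Roughly what that per-quantum sweep might look like, reusing the hypothetical task_t from the sketch above; sweepLockWaiting() is a made-up name:

```c
/* Called once per OS quantum, before picking the next task. Walks the
 * lockWaiting list (every task here has a non-NULL isWaiting), runs each
 * task's callback, and moves any task that is now unblocked back onto
 * the ready list. */
static void sweepLockWaiting(task_t **lockWaiting, task_t **readyList)
{
    task_t **pp = lockWaiting;
    while (*pp != NULL) {
        task_t *t = *pp;
        t->isWaiting(t);                /* re-test the wait condition */
        if (t->isWaiting == NULL) {     /* unblocked: move to ready */
            *pp        = t->next;       /* unlink from lockWaiting */
            t->next    = *readyList;    /* push onto ready list */
            *readyList = t;
        } else {
            pp = &t->next;              /* still blocked: keep it */
        }
    }
}
```

The ready list then only ever holds runnable tasks, so a blocked task never burns a quantum just to discover its lock is still held.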
So I've thought more about this idea.
This is the new design. It's still just on paper at the moment (I'm working on something else right now).
1. The kernel will maintain a list of locks.
2. When a resource is locked by a task, the opaque lock object will record which task holds the lock.
3. If another task tries to obtain a lock on that object, the task is moved to the taskWait list, and the lock it wishes to obtain is added to the kernel's lockList with a record of the requesting task.
4. On every reschedule, the lockList is scanned and each lock tested; if a lock is now free, the associated task is moved back into the ready list, the resource is locked for that task, and the entry is removed from the lockList.
This list is strictly first come, first served.
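A sketch of steps 1 to 4 in the same hypothetical C as above (lock_t and task_t as before); the lockRequest record is invented for illustration, and appending requests while scanning front to back is what gives the first come, first served order:

```c
#include <stdlib.h>

/* One pending lock request: which lock, and which task asked for it.
 * The kernel keeps these in a singly-linked lockList in arrival order. */
typedef struct lockRequest {
    lock_t             *lock;
    task_t             *task;
    struct lockRequest *next;
} lockRequest_t;

/* Step 3: a task failed to take a lock, so park it on taskWait and
 * append its request to the lockList (appending preserves FCFS order). */
static void blockOnLock(lockRequest_t **lockList, task_t *t, lock_t *lk)
{
    /* allocation strategy is up to the kernel; malloc() stands in here */
    lockRequest_t *req = malloc(sizeof *req);
    req->lock = lk;
    req->task = t;
    req->next = NULL;
    while (*lockList != NULL)
        lockList = &(*lockList)->next;
    *lockList = req;
    /* ...also move t from the ready list to the taskWait list... */
}

/* Step 4: run on every reschedule. Scan the lockList front to back;
 * whenever a lock has come free, hand it to the earliest waiter, move
 * that task back to the ready list, and drop the request. */
static void scanLockList(lockRequest_t **lockList)
{
    while (*lockList != NULL) {
        lockRequest_t *req = *lockList;
        if (req->lock->owner == NULL) {
            req->lock->owner = req->task;   /* lock it for the waiter */
            /* ...move req->task from taskWait back to the ready list... */
            *lockList = req->next;
            free(req);
        } else {
            lockList = &(*lockList)->next;
        }
    }
}
```

The scan costs O(outstanding requests) per reschedule, which is the extra overhead mentioned below, but the handoff order falls out of the list order for free.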
This has a higher rescheduling overhead, but appears to be more elegant than my current priority-boosting solution.
When I implement it, I will no doubt find that it is flawed.