a. If an activity is accessing a private resource, it is not adversely affected (except performance-wise) by interruptions and simultaneous executions of other activities. The activity may complete after an unknown delay and with several interruptions, but its correctness is not affected.
b. If an activity is accessing a shared resource, it is affected by the interruptions and simultaneous executions of its peer activities, since the accesses, in the general case, are non-atomic with respect to those peers. A peer is an activity which accesses the same resource as the given activity. Such an activity can either (b.1) handle the interruption/simultaneous execution on its own (the bulk of context switching, e.g.), or (b.2) not handle the interruption/simultaneous execution on its own (synchronization with devices and other threads, e.g.).
Below I consider uniprocessor semantics alone.
Although type.a activities are not affected by interruptions and simultaneous executions (ISE), in order to provide fairness, they are run with their importance elevated to the importance of the resource being accessed.
If one looks at a stack which has been switched away, almost the entire stack (save for some frames at the top) consists of frames for activities of type.a. The top consists of the type.a and type.b.1 activities which carry out the actual switch.
An activity of a certain importance x, A(x), can be interrupted by an activity of a higher importance x + k, A(x+k). While the latter runs, the system must allow same-or-lower-importance activities to accumulate. When A(x+k) finishes, the system must stagger down in an ordered fashion, running any pending activities A(x+k), A(x+k-1), A(x+k-2), ..., in that order, before returning to A(x).
The flow is:
Code:
sync(to = x):
    raise-to-level-(x)

drop-sync(from = x, to = 0):
    perform-pending-activities-A(x)
    goto-level-(x-1)
    perform-pending-activities-A(x-1)
    goto-level-(x-2)
    perform-pending-activities-A(x-2)
    ...
    goto-level-(0)
    perform-pending-activities-A(0)
Regardless of their type, all activities share time as a resource. In the OP, the importance assigned to time is the same as that assigned to the deferred calls of activities with importance higher than that of time. The importance/level is labelled "dpc-level".
The activity which assigns time to other activities is the context switching activity, taskSwitch.
It runs either
- involuntarily upon the quota-expiration of the current activity, or
- voluntarily upon the request of the current activity.
Since it manages time, it runs at dpc-level, the importance assigned to time.
When the quota-expiration is signalled from a level higher than dpc-level, the switch-dpc activity is queued.
If the quota-expiration is signalled from a level lower than dpc-level, the voluntary switching of activity can be forced.
Regardless of the way the expiration is signalled, at some point in time the dpc-level is reached and the loop which retires the queued dpcs runs:
Code:
run-dpcs:    // an A(dpc-level) activity
    while (1) {
        sync(max)
        dequeue-dpc-from-listhead
        drop-sync(max, dpc-level)
        if-done-then-exit
        // Point X0
        else-call-dpc-routine    // taskSwitch runs here
        // Point Y0
    }
Voluntary switching is requested as:
Code:
voluntary:
    c = curr_level;
    sync(dpc-level)
    // Point X1
    taskSwitch
    // Point Y1
    drop-sync(dpc-level, c)
run-dpcs contains an activity which accesses the shared resource, the dpc-queue. Its importance is
Code:
max = maximum(importance of all activities which access it)
which may be higher than dpc-level. Hence, the sync/drop-sync pair is required.
Some definitions:
- out-going-involuntary is the current activity being forced to give up time.
- in-coming-involuntary is the new activity, which had earlier gone to sleep because it was forced to give up time.
- out-going-voluntary is the current activity voluntarily giving up time.
- in-coming-voluntary is the new activity, which had earlier gone to sleep voluntarily.
There are 4 cases to consider, based on the type of the activities involved in the switching:
(1) out-going-involuntary, in-coming-involuntary: taskSwitch enters at Point X0 as part of one activity, and emerges at Point Y0 as part of the same or a different activity. The loop still works as it does not maintain any stale references. If it did, refreshing them and deciding based on that is enough.
(2) out-going-involuntary, in-coming-voluntary: taskSwitch enters at Point X0 as part of one activity, and emerges at Point Y1 as part of a different activity. The drop-sync call ensures that pending dpcs are run.
(3) out-going-voluntary, in-coming-involuntary: taskSwitch enters at Point X1 as part of one activity, and emerges at Point Y0 as part of a different activity. The loop still works as it does not maintain any stale references. If it did, refreshing them and deciding based on that is enough.
(4) out-going-voluntary, in-coming-voluntary: taskSwitch enters at Point X1 as part of one activity, and emerges at Point Y1 as part of the same or a different activity. The drop-sync call ensures that pending dpcs are run.
As far as running the dpcs is concerned, changing threads from under the dpc-processing does not affect it.
About nesting of interrupts: If nested interrupts are allowed (or, even if they are not, they occur when running at lower levels such as the dpc-level), then multiple instances of the kernel's common interrupt handler activity (which also executes run-dpcs) will be on the stack.
Suppose IH0 is the very first instance. It records this fact atomically in the thread's context area. If, at any time, IH1 is pushed on top of IH0 (because of another interrupt), IH1 can know that it is not the 'primary' interrupt context, and so it can service its interrupt and let the IH0 instance deal with dpcs. IH1 knows that IH0 is still active because of the flag, and IH0 knows that, during its dpc-level processing, further dpcs can be queued.
Multiple instances of an activity can decide on the way to distribute the work. The weirdness of the stack stems from the task of deciding the freshness of the data which the interrupted activities were accessing around the switching boundary.
Suppose that each IHx instance is allowed to process dpcs, instead of only IH0. That would result in several instances of type.a and type.b.1 activities stacked on top of each other. The priorities of the dpcs would be inverted: if IH0 is about to run a dpc0, IH1 comes along, dequeues dpc1, which was later in the queue than dpc0, and begins running dpc1.
If, at level IHx, x > 1, the taskSwitch ran, the nested IH instances will be stashed away while the thread sleeps. Upon returning at Point Y0 after some amount of delay, if the IH instances refresh stale context, they can unwind their stack and return back to the original user activity which was interrupted. Such a scheme requires the IH instances and the dpcs to be of type.b.1, which makes it quite a complicated scheme.
About sub-priorities of DPCs: If dpcs themselves have sub-priorities, then regard them as several distinct levels.