Hello all, first time poster.
I'm starting a project to implement an OS on top of the
seL4 microkernel. Right now, I need to put together the timer driver, and I'm swimming a bit blind with regard to time resolution.
The hardware that I'm writing this for (just the one platform) has 1, 10, 100 and 1000us resolution on 16-bit countdown timers and a global 64-bit counter.
For the time being, I'm assuming 1us timer resolution (because I have it) and using an indexed min-heap as the priority queue (O(log n) insertion and removal, and the structure is easily de-cluttered). I was possibly thinking of another structure, but for now that's how I'm doing it.
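Just so the structure is concrete, here's a rough sketch of the sort of heap I mean (the names are placeholders, not my actual code, and I've left out the index bookkeeping that would let me cancel entries in O(log n)):
Code:
#include <stdint.h>
#include <stddef.h>

#define TQ_MAX 64                        /* arbitrary capacity for the sketch */

typedef struct {
    uint64_t target;                     /* absolute expiry time in us */
    void (*handler)(void *data);         /* callback to run on expiry  */
    void *data;
} timeout_t;

typedef struct {
    timeout_t slots[TQ_MAX];
    size_t len;
} timeout_queue_t;

/* O(log n) insertion: append, then sift up towards the root.
 * Returns 1 if the new entry became the head, 0 otherwise, -1 if full. */
static int tq_push(timeout_queue_t *q, timeout_t t)
{
    if (q->len == TQ_MAX)
        return -1;
    size_t i = q->len++;
    while (i > 0) {
        size_t parent = (i - 1) / 2;
        if (q->slots[parent].target <= t.target)
            break;
        q->slots[i] = q->slots[parent];
        i = parent;
    }
    q->slots[i] = t;
    return (i == 0) ? 1 : 0;
}

/* O(log n) removal of the head: move the last entry to the root and sift
 * it down. Caller must check q->len > 0 first. */
static timeout_t tq_pop(timeout_queue_t *q)
{
    timeout_t head = q->slots[0];
    timeout_t last = q->slots[--q->len];
    size_t i = 0;
    for (;;) {
        size_t child = 2 * i + 1;
        if (child >= q->len)
            break;
        if (child + 1 < q->len &&
            q->slots[child + 1].target < q->slots[child].target)
            child++;
        if (last.target <= q->slots[child].target)
            break;
        q->slots[i] = q->slots[child];
        i = child;
    }
    q->slots[i] = last;
    return head;
}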
At the moment, this is the sort of thing I have:
Register timer:
- get target with
Code:
read_timestamp() + delay
- push the timeout onto the priority queue
- if the timeout went to the front, set the timer with a
Code:
target - read_timestamp()
(re-reading the timestamp, because the time taken by the push makes the first read outdated)
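In code, I picture that registration path roughly like this (building on the heap sketch above; read_timestamp() and set_timer() are placeholders for whatever the driver actually exposes, with set_timer() being the dispatch function described next):
Code:
uint64_t read_timestamp(void);            /* global 64-bit counter, in us   */
void     set_timer(uint64_t delay_us);    /* dispatch function, see below   */

void register_timeout(timeout_queue_t *q, uint64_t delay_us,
                      void (*handler)(void *), void *data)
{
    timeout_t t = {
        .target  = read_timestamp() + delay_us,   /* absolute expiry time */
        .handler = handler,
        .data    = data,
    };

    if (tq_push(q, t) == 1) {
        /* New head: re-read the counter, since the first read is already
         * stale by the time the push has finished. */
        set_timer(t.target - read_timestamp());
    }
}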
For setting the timer, a function is called with a delay in us:
- if it's within 1ms, run the handler anyway
- if it's within 2^16us, set the us timer with that delay
- if it's less than 2^16us + 20ms, set the ms timer so that it fires an interrupt with about 30-40ms to go, so the us timer can finish it off
- if it's more than 2^16us + 20ms, I just do a full 16-bit tick on the 1ms timer (about 1m 5s)
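As a sketch, that dispatch is something like the following (run_head_handler(), set_us_timer() and set_ms_timer() are placeholders, and the -35 is just my reading of "about 30-40ms to go"):
Code:
void run_head_handler(void);              /* pop the head and run its handler */
void set_us_timer(uint16_t ticks);        /* arm the 16-bit 1us countdown     */
void set_ms_timer(uint16_t ticks);        /* arm the 16-bit 1ms countdown     */

#define US_TIMER_MAX   ((uint64_t)1 << 16)   /* 2^16 us, the 1us timer's range */
#define MS_TIMER_SLACK (20 * 1000)           /* the extra 20ms, expressed in us */

void set_timer(uint64_t delay_us)
{
    if (delay_us <= 1000) {
        /* Within the 1ms grace period: don't bother arming, run it now. */
        run_head_handler();
    } else if (delay_us < US_TIMER_MAX) {
        /* Fits in the 16-bit countdown of the 1us timer. */
        set_us_timer((uint16_t)delay_us);
    } else if (delay_us < US_TIMER_MAX + MS_TIMER_SLACK) {
        /* Arm the 1ms timer so it fires with roughly 30-40ms left, and let
         * the 1us timer finish it off from there. */
        set_ms_timer((uint16_t)(delay_us / 1000 - 35));
    } else {
        /* Too far out: take a full 16-bit tick of the 1ms timer (~65.5s)
         * and re-evaluate when it fires. */
        set_ms_timer(0xFFFF);
    }
}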
For handling an interrupt:
- I enter a loop: while the timeout queue is not empty and the head has no more than a ms to go, I pop it and run the registered handler.
- each iteration of the loop does a fresh timestamp read
- I then look at the difference between the new head's target and the timestamp, and call the set-timer function with it
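Roughly (same placeholder names as above):
Code:
void handle_timer_irq(timeout_queue_t *q)
{
    /* Drain everything that has already expired or is within the 1ms grace
     * period, re-reading the counter on every iteration. */
    while (q->len > 0) {
        uint64_t now = read_timestamp();
        if (q->slots[0].target > now + 1000)
            break;                        /* head is more than 1ms away */
        timeout_t t = tq_pop(q);
        t.handler(t.data);
    }

    /* Re-arm for whatever is now at the head of the queue. */
    if (q->len > 0) {
        /* This subtraction is where the underflow risk below comes in. */
        set_timer(q->slots[0].target - read_timestamp());
    }
}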
My problem is that I've given myself a 1ms grace period. If I take any longer than that, the subtraction will underflow. I realise I could reason that if it underflows, the result would be UINT_MAX - <small number>, but I'm not so sure that's the ideal way to deal with it.
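To make the failure mode concrete, this fragment shows the naive "check before subtracting" guard (assuming head points at the front of the queue); I'm not claiming this is the right fix, just illustrating the wrap:
Code:
/* Fragment: head points at the front of the queue (&q->slots[0]). */
uint64_t now = read_timestamp();
if (head->target <= now) {
    /* Already passed: head->target - now would wrap around to roughly
     * the top of the unsigned range minus a small number, so skip the
     * subtraction entirely and just run the handler late. */
    head->handler(head->data);
} else {
    set_timer(head->target - now);
}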
I've had a very quick and naive look at RIOT OS, and it looks like they reason about what the overhead should be, estimate the us ticks that have elapsed, and factor that into the code. That's too complicated for what I have in mind at the moment.
Or should I just do away with 1us timer resolution, work with the 10us timers, and settle for 1ms accuracy?