OSDev.org

The Place to Start for Operating System Developers

 Post subject: Theory on clocks in hardware and software
Posted: Mon Dec 26, 2022 6:45 am
Author: nullplan (Member, joined Wed Aug 30, 2017, 1593 posts)

Hello everyone, and belated merry Christmas.

I recently looked over the list of software clocks implemented in Linux and wondered how they buggered up the concept of "monotonic time" that badly. Looking at the clock_gettime manpage, I see no fewer than five clocks that ought to be the same as the monotonic clock, but aren't, because the abstractions keep leaking.

The objective of a program looking at the clock has always been one of two things: figuring out how much time has passed since the last time it looked at the clock, or figuring out what time to display to the user. Therefore it simply does not make sense to me to have a monotonic clock that doesn't count the time the system was suspended. My OS will therefore have only two clocks for telling the time, a monotonic and a realtime clock. This is the software interface presented to programs.

My OS kernel will have a class of device drivers called hardware clocks. These drivers will know the clock's frequency, its precision, some of its quirks, and of course provide a method to read it. That method returns a non-overflowing 64-bit counter (non-overflowing, of course, because a 64-bit counter starting at zero and running at a frequency of less than ca. 10 GHz will not overflow in any sensible human time frame). I would have to look into sensible implementations of that driver interface on the PC, but the TSC, the HPET, and the RTC all seem like they could serve as backends. If all else fails, the driver can count timer interrupts, but I really don't want to make that the norm.
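
A rough sketch of what such a driver could boil down to (names and fields are mine, nothing final):
Code:
#include <stdint.h>

/* Hypothetical hardware clock driver interface. */
struct hw_clock {
    const char *name;            /* e.g. "tsc", "hpet", "rtc"            */
    uint64_t    frequency;       /* ticks per second                     */
    uint64_t    precision_ns;    /* granularity the hardware can resolve */
    int         runs_in_suspend; /* keeps counting while suspended?      */
    uint64_t  (*read)(struct hw_clock *self); /* non-overflowing counter */
};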

At boot time the kernel will then select the best available hardware timer (probably the one with the best precision) and present as monotonic time the value of that timer after passing it through a linear function. Since the output is supposed to be the time in nanoseconds, the initial factor will be the period of the timer (the reciprocal of its frequency) in nanoseconds, and the initial offset will be zero.
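
Reading the monotonic clock is then just evaluating that linear function. A minimal sketch (names are made up; in a real kernel the factor would be kept in fixed point, since most periods are not a whole number of nanoseconds):
Code:
#include <stdint.h>

uint64_t hw_read_counter(void);      /* read() of the timer selected at boot */

static uint64_t mono_factor_ns = 1;  /* period of that timer in nanoseconds  */
static int64_t  mono_offset_ns = 0;  /* starts at zero                       */

uint64_t clock_monotonic_ns(void)
{
    /* y = T * x + offset */
    return mono_factor_ns * hw_read_counter() + mono_offset_ns;
}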

An NTP client might want to adjust the tick rate. I consider the frequency of the hardware timer to be fixed, but what can be done is to change the factor of the linear function. However, if the factor is lowered, this might cause the value seen by userspace to jump backwards. This must not happen; two reads of the monotonic clock that are ordered by some other means must have increasing values. Therefore, when the factor is changed, I will read the current values of the hardware timer and the monotonic clock into x0 and y0 respectively, and set the function henceforth to:
Code:
y = T' (x - x0) + y0 = T' x - T' x0 + y0

Oh, look at that. The new function is also a linear function, with the new period as factor and y0 - T' x0 as offset.

As long as T' is positive, the function will always increase this way. I may implement a T floor, so that it cannot be set nonsensically low; we'll see.
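
Continuing the sketch from above, the rate change could look like this (illustrative only; on SMP the factor/offset pair would need to be updated atomically, e.g. under a seqlock):
Code:
/* Change the tick rate without ever letting the monotonic clock step back. */
void clock_set_rate(uint64_t new_factor_ns)       /* T' */
{
    uint64_t x0 = hw_read_counter();              /* hardware counter now */
    int64_t  y0 = clock_monotonic_ns();           /* monotonic time now   */

    /* y = T'(x - x0) + y0 = T'x + (y0 - T'x0) */
    mono_factor_ns = new_factor_ns;
    mono_offset_ns = y0 - (int64_t)(new_factor_ns * x0);
}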

The real time clock presented to userspace is simply a constant offset from the monotonic clock. Setting the real time clock means setting a new offset from the monotonic clock. Leap seconds I really don't want to bother with too much, so the NTP client can simply set back the real time clock by one second at some point during the leap second. This is the most any application should ever need.
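
In other words, the whole real time clock is one extra offset (again just a sketch with made-up names):
Code:
static int64_t realtime_offset_ns;   /* real time minus monotonic time */

uint64_t clock_realtime_ns(void)
{
    return clock_monotonic_ns() + realtime_offset_ns;
}

/* Setting the clock (an NTP step, or stepping back 1 s during a leap
   second) only ever changes the offset, never the monotonic clock. */
void clock_set_realtime(uint64_t wall_ns)
{
    realtime_offset_ns = (int64_t)(wall_ns - clock_monotonic_ns());
}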

As for interrupting timers: I thought I would simply have a priority queue of all the things that need doing at some point, ordered by deadline. Whenever something changes about the priority queue, the timer interrupt is set for the next deadline, and all timers with expired deadlines are run and removed from the queue. This includes things like scheduling the next task: I would simply register a timer to do that when (re-)starting a user task while other user tasks are available for the CPU (otherwise no time limit is needed, right?).
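
One possible shape for the queue handling, with heap_peek/heap_pop/hw_timer_arm standing in for whatever the real implementations end up being:
Code:
#include <stdint.h>

struct timer {
    uint64_t deadline_ns;              /* in monotonic time             */
    void   (*expire)(struct timer *);  /* e.g. preempt the current task */
};

struct timer *heap_peek(void);         /* earliest deadline, or NULL    */
void          heap_pop(void);
void          hw_timer_arm(uint64_t deadline_ns);
uint64_t      clock_monotonic_ns(void);

void timers_reprogram(void)
{
    struct timer *t;

    /* run and remove everything that is already due... */
    while ((t = heap_peek()) && t->deadline_ns <= clock_monotonic_ns()) {
        heap_pop();
        t->expire(t);
    }
    /* ...then set the timer interrupt for the next deadline, if any */
    if ((t = heap_peek()))
        hw_timer_arm(t->deadline_ns);
}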

Oh yeah, suspend mode. When suspending the machine, if the selected hardware timer does not keep running in suspend mode, then I would look for another hardware timer that does; for that one, precision doesn't matter. If no such timer exists, then obviously I cannot gauge the time spent suspended. But if I have one, I can, and I can increase the offset of the monotonic clock function by the time spent in suspend mode.
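
Sketched out, with suspend_clock_read_ns() standing in for whichever always-running timer gets picked:
Code:
uint64_t suspend_clock_read_ns(void);  /* always-running timer, e.g. the RTC */

static uint64_t suspend_start_ns;

void clock_before_suspend(void)
{
    suspend_start_ns = suspend_clock_read_ns();
}

void clock_after_resume(void)
{
    /* credit the time spent asleep to the monotonic clock's offset */
    mono_offset_ns += suspend_clock_read_ns() - suspend_start_ns;
}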

OK, did I forget anything?

_________________
Carpe diem!


 Post subject: Re: Theory on clocks in hardware and software
Posted: Tue Dec 27, 2022 4:13 am
Author: Member, joined Wed Oct 01, 2008, 3191 posts

Most of it seems OK to me.

I don't have nanosecond (GHz) resolution in my monotonic clock; rather, I use the "legacy" frequency of the PIT, which is 1.193 MHz. I find it a bit problematic to have different representations of the tick period, so when I use the HPET as the source, I convert its ticks to the legacy format. If I redesigned this today, I might use nanoseconds as the base instead, but it would cause too much trouble in the current situation.
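
The conversion itself is just a rescaling of tick counts, along these lines (illustrative only; the 128-bit intermediate, a GCC/Clang extension, avoids overflow):
Code:
#include <stdint.h>

#define LEGACY_HZ 1193182ULL   /* PIT base frequency, ~1.193 MHz */

uint64_t hpet_to_legacy(uint64_t hpet_ticks, uint64_t hpet_hz)
{
    return (uint64_t)((unsigned __int128)hpet_ticks * LEGACY_HZ / hpet_hz);
}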

I think the RTC has good accuracy, so by enabling interrupts from it you can adjust the clock to run more smoothly. Using NTP will introduce a lot more jitter, so I'm unsure whether that is a good idea. I only use NTP to adjust the offset between the monotonic clock and real time.

I try to use different sources for monotonic time and timers if possible. On old hardware, I had to use the PIT for both, which introduces a bit of error in the monotonic clock.

I only use the RTC as the initial source of real time; its precision is too poor for anything else. At boot, I read out the RTC, set the monotonic time to the RTC time, and set the real-time offset to zero.

Another issue is how to handle timers on multicore systems. The best solution is to use a local timer (like the APIC timer), but sometimes I use a global timer, which means that timer events must be global. It's a lot easier and more effective to handle timer events per core. Using a global timer means all timer interrupts will go to the BSP, while using the APIC timer results in an interrupt on the core that started the timer.


 Post subject: Re: Theory on clocks in hardware and software
Posted: Fri Jan 20, 2023 3:48 pm
Author: eekee (Member, joined Mon May 22, 2017, 812 posts; location: Hyperspace)

nullplan wrote:
The objective of a program looking at the clock has always been one of two things: Figuring out how much time has passed since the last time it looked at the clock, or figuring out what time to display to the user. Therefore it simply does not make sense to me to have a monotonic clock that doesn't count the time the system was suspended.

What about timing how long a task takes? Would it be better to add up the lengths of the timeslices the program gets? Maybe not if those timeslices don't count blocking disk activity. I guess there's a reason for the "real time" column from the Unix `time` command. For that use, I'd want a timer which doesn't count when the system is suspended.

I'm planning on implementing a concept of experiential time in my OS -- time the computer has experienced. If a human adjusts a clock backward, the human knows time hasn't actually reversed, so why should a computer be that stupid? This is a question especially worth asking in matters of security, but it applies to `make` too. Given removable storage drives, I suppose each filesystem should have its own experiential time too. (Speaking as an old PC builder, any drive is ultimately removable, as is any partition if you're crazy enough.) The OS should simply maintain an appropriate offset for each filesystem, initialised each time the filesystem is mounted.
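
One way the per-filesystem offset could look (field names are made up on the spot):
Code:
#include <stdint.h>

uint64_t clock_monotonic_ns(void);   /* the OS's monotonic clock */

/* Hypothetical bookkeeping for per-filesystem experiential time. */
struct fs_experience {
    uint64_t experienced_ns;  /* total time this fs has "lived", written back on unmount */
    uint64_t mounted_at_ns;   /* monotonic time when it was mounted                      */
};

uint64_t fs_experiential_ns(const struct fs_experience *fs)
{
    return fs->experienced_ns + (clock_monotonic_ns() - fs->mounted_at_ns);
}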

_________________
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie


 Post subject: Re: Theory on clocks in hardware and software
Posted: Sat Jan 21, 2023 1:00 am
Author: nullplan (Member, joined Wed Aug 30, 2017, 1593 posts)

eekee wrote:
What about timing how long a task takes?
That's not really looking at the clock. That's asking the OS how many resources you consumed, and those questions are always much harder than they seem on the surface (see also: how much memory does this process consume?). You brought up a task being blocked on disk activity. It doesn't consume any CPU in that time, so whether that is part of the run time you are asking about is a matter of taste.

eekee wrote:
I guess there's a reason for the "real time" column from the Unix `time` command. For that use, I'd want a timer which doesn't count when the system is suspended.
Well I don't. If it took 5 hours to complete a task because the system was suspended for that long, then that's what the "real time" column should report, because that is how much time passed in reality.

_________________
Carpe diem!


 Post subject: Re: Theory on clocks in hardware and software
Posted: Sun Jan 22, 2023 6:40 am
Author: eekee (Member, joined Mon May 22, 2017, 812 posts; location: Hyperspace)

You're right, it is a matter of taste.

_________________
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie

