rdos wrote:
I typically use NTP, not Internet time.
Those two have absolutely nothing to do with each other. It's like saying you're not using a browser because you use SVG.
rdos wrote:
If you want to log the timing of events, you cannot have the sub-second part drifting. It will result in incorrect calculations of time between events. Also, the sub-second part naturally must go through zero as the second part increase, otherwise you have inconsistent time. What this means is that both must have the same precision.
You are wrong. First, when the second part increases, you simply zero the fractional part. Second, no matter how much you want it, no machine can give you true nanosecond precision. For example, on a machine with only an RTC, the fractional part might be stored in nanosec units, but in the best case it will be updated every 1/32768th of a second. Keeping the fractional part actually incrementing by 1 every nanosec would require such huge overhead that no mainline kernel does it (assuming the hardware could handle 1 billion IRQs per second in the first place, which is very unlikely).
rdos wrote:
I'm unsure if you understood the method. If I set system time from the RTC at boot up, but later discover that it runs five seconds behind, then I simply set the "offset" to five seconds. When I read out the wall clock I take system time and add the "offset" (five seconds) and now I have the correct wall clock, and still a consistent system time that increase properly.
No, I haven't misunderstood; your wallclock will be off, 100% guaranteed. What you are not keeping in mind: you say "discover that it runs five seconds behind" without realizing what that actually means and how it can be implemented. If you did, you would arrive at the offset-less solution I was mentioning.
rdos wrote:
This in no way affects the acuracy of system time or the wall clock.
Yes, it does. You said it yourself that you might "discover that it runs behind". That could not happen if it were accurate; getting behind happens precisely because it's not.
rdos wrote:
RDOS doesn't run on emulators.
That explains everything. That's why you're unaware of these things: you haven't tested RDOS properly under different circumstances. A properly written OS should have no issues running in an emulator.
nullplan wrote:
ext2/3/4 can record file times with nanosecond precision. Actually, most FSes can. Except FAT, which famously can only store times to a precision of two seconds. And that is important for make, which tries to figure out which of two files is newer, which is a problem when you only have second precision but an otherwise fast computer, so a lot of files are created at the same time.
Which is totally and completely irrelevant, because the VFS can handle timestamps with nanosec precision regardless of the actual FS used. If a process needs nanosec precision on file timestamps, then it will be communicating with the VFS node and its in-memory file records only; there's simply no time to write the files out to storage and read them back in less than a nanosec. That's just not possible.
nullplan wrote:
Syslog is another protocol that comes to mind.
RFC 5424 states it pretty clearly that the time format must comply with RFC 3339, which in turn, as I've already quoted, says: "A format which includes rarely used options is likely to cause interoperability problems. [...] The format defined below includes only one rarely used option: fractions of a second."
In other words, using fractions of a second with syslog is Undefined Behaviour; treat it as such. (Not to mention that if you're correlating two loglines, there's absolutely no guarantee their originating machines had the same time precision; most likely they didn't.) I was the IT development manager at the company that produced syslog-ng, so I know a thing or two about syslogs.
Sadly, a couple of years ago syslog-ng was sold to the sharks and went down the toilet.
Cheers,
bzt