On the goodness of gettimeofday()

Discussion of chess software programming and technical issues.

Moderator: Ras

User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: On the goodness of gettimeofday()

Post by sje »

bob wrote:
sje wrote:Well, actually I do know that the microsecond clock is good to one microsecond, although that is not necessarily proven from the sample output.
How do you solve the interrupt issue? When any interrupt occurs, the rest are disabled until they are explicitly re-enabled, and it doesn't take much to miss clock ticks. That's why things like NTP were developed: to correct the time slip that occurs naturally because of this...

I have _never_ seen a computer that could maintain a clock to even 1 second per day, which is millisecond accuracy...
The timer is not run by the CPU directly; instead, it keeps its own internal register containing the tick count. It's like the Old Days back in the 1960s when a CDC 6500 would check its rackmount (!) time-of-day clock each day at midnight. I remember watching this from the operator's console, and it was no surprise to see the $6,000,000 mainframe be off by more than a minute every day.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: On the goodness of gettimeofday()

Post by bob »

sje wrote:
bob wrote:
sje wrote:Well, actually I do know that the microsecond clock is good to one microsecond, although that is not necessarily proven from the sample output.
How do you solve the interrupt issue? When any interrupt occurs, the rest are disabled until they are explicitly re-enabled, and it doesn't take much to miss clock ticks. That's why things like NTP were developed: to correct the time slip that occurs naturally because of this...

I have _never_ seen a computer that could maintain a clock to even 1 second per day, which is millisecond accuracy...
The timer is not run by the CPU directly; instead, it keeps its own internal register containing the tick count. It's like the Old Days back in the 1960s when a CDC 6500 would check its rackmount (!) time-of-day clock each day at midnight. I remember watching this from the operator's console, and it was no surprise to see the $6,000,000 mainframe be off by more than a minute every day.
That's not the only problem I am talking about... You do a gettimeofday() call, which runs off to do a system call. The value can be read, but before _your_ process runs again to use the value returned, significant time can go by, making the time you get back off by a "quantum" (whatever scheduling timeslice your system uses). I don't see any reliable way to time to the microsecond level unless you somehow take control of the hardware yourself and disable interrupts so that you can't possibly get preempted, and that would turn into some _very_ interesting programming, to say the least...

But outside of that, how can you get accurate time on a system where multiple processes are running and you can be preempted at any instant for variable amounts of time?
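
To see the effect being described here on your own machine, a minimal sketch (POSIX assumed; the iteration count is arbitrary) is to read gettimeofday() back to back in a loop and record the largest gap between successive readings. Most gaps are a few microseconds at most; an occasional much larger spike is where the process was preempted between the two calls.

Code: Select all

/* Minimal sketch: measure the gap between consecutive gettimeofday()
 * readings.  Most gaps are tiny; an occasional large spike indicates
 * the process was preempted between the two calls.  POSIX assumed;
 * the iteration count is arbitrary. */
#include <stdio.h>
#include <sys/time.h>

static long usec_diff(const struct timeval *a, const struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
}

int main(void)
{
    struct timeval prev, now;
    long max_gap = 0;

    gettimeofday(&prev, NULL);
    for (long i = 0; i < 10000000L; i++) {
        gettimeofday(&now, NULL);
        long gap = usec_diff(&prev, &now);
        if (gap > max_gap)
            max_gap = gap;
        prev = now;
    }
    printf("largest gap between consecutive readings: %ld us\n", max_gap);
    return 0;
}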
User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: On the goodness of gettimeofday()

Post by sje »

bob wrote:That's not the only problem I am talking about... You do a gettimeofday() call, which runs off to do a system call. The value can be read, but before _your_ process runs again to use the value returned, significant time can go by, making the time you get back off by a "quantum" (whatever scheduling timeslice your system uses). I don't see any reliable way to time to the microsecond level unless you somehow take control of the hardware yourself and disable interrupts so that you can't possibly get preempted, and that would turn into some _very_ interesting programming, to say the least...

But outside of that, how can you get accurate time on a system where multiple processes are running and you can be preempted at any instant for variable amounts of time?
First, there may well be some jitter, but with the timer in a separate circuit that isn't affected by outside factors, there won't be any cumulative, long-term error.

The early Macintosh computers (ca. mid 1980s) had two timers. The first was a separate chip that had a constantly running clock/calendar with a one second resolution. The second timer was a four byte counter in RAM that was incremented by the CPU at a 60 Hz crystal controlled rate coincident with an unmasked periodic interrupt that synchronized the video output.

In the absence of a real time OS, I've found that having a spare processor core ready and waiting will minimize any latency issues. My tests support this. Any aperiodic latency seems to come not from the main process, but from the process running the terminal emulation that's handling the output.
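
As a rough illustration of the spare-core idea (a Linux-specific sketch; sched_setaffinity and the choice of core 3 are assumptions for illustration, not details from the tests mentioned above), the measuring process can be pinned to one core before running the same kind of gap-measuring loop:

Code: Select all

/* Rough sketch (Linux-specific): pin this process to one core so the
 * clock-reading loop isn't migrated, then watch the worst-case gap
 * between consecutive readings.  Core 3 is just a hypothetical
 * "spare" core; pick one your other work isn't using. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);                      /* hypothetical spare core */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");       /* carry on unpinned if it fails */

    struct timeval prev, now;
    long max_gap = 0;

    gettimeofday(&prev, NULL);
    for (long i = 0; i < 10000000L; i++) {
        gettimeofday(&now, NULL);
        long gap = (now.tv_sec - prev.tv_sec) * 1000000L
                 + (now.tv_usec - prev.tv_usec);
        if (gap > max_gap)
            max_gap = gap;
        prev = now;
    }
    printf("worst gap while pinned: %ld us\n", max_gap);
    return 0;
}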
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: On the goodness of gettimeofday()

Post by bob »

sje wrote:
bob wrote:That's not the only problem I am talking about... You do a gettimeofday() call, which runs off to do a system call. The value can be read, but before _your_ process runs again to use the value returned, significant time can go by, making the time you get back off by a "quantum" (whatever scheduling timeslice your system uses). I don't see any reliable way to time to the microsecond level unless you somehow take control of the hardware yourself and disable interrupts so that you can't possibly get preempted, and that would turn into some _very_ interesting programming, to say the least...

But outside of that, how can you get accurate time on a system where multiple processes are running and you can be preempted at any instant for variable amounts of time?
First, there may well be some jitter, but with the timer in a separate circuit that isn't affected by outside factors, there won't be any cumulative, long-term error.

The early Macintosh computers (ca. mid 1980s) had two timers. The first was a separate chip that had a constantly running clock/calendar with a one second resolution. The second timer was a four byte counter in RAM that was incremented by the CPU at a 60 Hz crystal controlled rate coincident with an unmasked periodic interrupt that synchronized the video output.

In the absence of a real time OS, I've found that having a spare processor core ready and waiting will minimize any latency issues. My tests support this. Any aperiodic latency seems to come not from the main process, but from the process running the terminal emulation that's handling the output.
Just unblocking a process is a microseconds-long affair by itself, and across a system call anything can happen, which makes trying to time to the microsecond level futile...
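
To put a rough number on that wakeup cost (a quick sketch, POSIX assumed; the 1 ms sleep length is arbitrary), ask for a short sleep and measure how much later than requested the process actually comes back; the overshoot is scheduler wakeup latency plus the cost of the system calls themselves.

Code: Select all

/* Quick sketch: request a 1 ms sleep and measure how long the process
 * actually waited.  The overshoot is the scheduler's wakeup latency
 * plus the cost of the system calls themselves; on a non-real-time
 * kernel it is typically tens of microseconds or more. */
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 0, 1000000L };   /* ask for 1 ms */
    struct timeval before, after;

    for (int i = 0; i < 10; i++) {
        gettimeofday(&before, NULL);
        nanosleep(&req, NULL);
        gettimeofday(&after, NULL);
        long elapsed = (after.tv_sec - before.tv_sec) * 1000000L
                     + (after.tv_usec - before.tv_usec);
        printf("asked for 1000 us, got %ld us (overshoot %ld us)\n",
               elapsed, elapsed - 1000L);
    }
    return 0;
}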