bob wrote: Carey wrote: sje wrote: CThinker wrote: "portable"?
gettimeofday() is not portable. Its resolution is also not guaranteed.
It's been around since 4.2BSD, and that's at least thirteen years ago. It's POSIX, from the IEEE specification. And the resolution is guaranteed to be either one microsecond or the best available if that's larger. It's also on every Mac OS X system and every Linux system in recent use.
I don't happen to have a copy of the POSIX specs. (And that it's from BSD is irrelevant. Lots of things are from back then and not done the same, or done right, etc.)
Anyway, does the POSIX spec *specifically* say that the accuracy must be microsecond, or does it just say the resolution must be microsecond?
That's not the same thing.
That is totally irrelevant. The PC clock ticks 18.206 times per second, or about once every 55 milliseconds. That's all the resolution the built-in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the Linux kernel and calibrate a delay loop that will work. But that's no good for determining the time of day.
Actually Bob, I think you are wrong there. I've heard you mention 18.2 before and haven't spoken up because it wasn't in a conversation I was in. But this time...
The 18.2 was a result of the PC XT's timer chip: a 1.19 MHz input clock (the 4.77 MHz system clock divided by four) cycling through a 16-bit counter set to trigger at roll-over. It could be reprogrammed for higher rates but caused too much load on the slow processors. (And yes, there were public articles that clearly showed how you could get a more reasonable rate than 18.2. Something like even 20 was more convenient and quite doable. But there were always side effects.)
But realistically, the 18.2 was all that was available on the PC XT.
By the time of the 386 ATs there were other hardware timers available. Things that programmers could use at higher resolutions without the side effects of reprogramming the PC's main timer.
They weren't used back in the days of DOS, unless you reprogrammed it yourself.
But since the days of Windows.... I can see Win3 using the 18.2 timer. Win95, not so much. WinXP & WinVista depending on the 18.2 timer... No.
Even on the 16 MHz 386s, 60 ticks per second was achievable without too much strain. From the 486s & Pentiums that Win9x used, 100 per second wouldn't be a problem.
I don't want to go look through Microsoft's hardware site to find out the specs for the required timers that it uses internally, but I am definitely willing to say that it's not 18.2 ticks per second, considering other timers are definitely available.
All this talk about milliseconds and microseconds is beyond funny. You do good under Windows or Linux to get within .1 seconds, and that is hardly a given; it is usually two or three times that at best...
Well, then I hope you had a good hearty belly laugh....
It started out as being the possibility of 1 second resolution.
It was too undefined to know for sure.
And then somebody asked about why not just use clock(), since it's a standard, official function. And it went from there.
The problem with many of these functions arises when threads are used. There are too many internal issues unless we want to start a separate thread, but worrying about millisecond or microsecond resolution is wasted time.
I don't think anybody is truly worried about millisecond or microsecond resolution.
What has mostly been discussed is that in all the functions that have been suggested, there is *no* portable, *reliable* way to do it.
There are common routines that are likely to work. But even for them, it's more a matter of hoping (and expecting) it will work rather than it being guaranteed.
When Steven brought up gettimeofday being POSIX etc., I was somewhat hoping he was right.
But he wasn't. POSIX doesn't require gettimeofday() to have any particular resolution. It could still be no better than 1 second.
It probably won't be, but it's not guaranteed or required. A library writer can still do whatever they want and have it be correct yet unusable.
I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.
Nobody is disagreeing with that, Bob.
It started out because it was unknown, unspecified, and talk about there being no portable *guaranteed* way to get reliable wall clock time.
gettimeofday() is like much of the classic Unix stuff still in most C libraries.... It's there. It's likely to work. But there is no real guarantee. And some implementations may not do it right, or may not return -1 if it's unavailable, etc.
That's the joy of non-standard stuff.
It may appear to be microseconds but in reality the numbers only change 18.2 times per second. (PC XT style) (Yes yes, that is *extremely* unlikely.)
I've seen compilers (or rather libraries) that lie about clock()'s CLOCKS_PER_SEC by just multiplying everything by some number to make it look like it's changing 1000 times per second. In other words, they lie and make it appear there is a higher resolution than there really is.
So, does POSIX specs actually require all one million microseconds to be present? Or can an implementation cheat and only provide 60 or 100 (or whatever) distinct values within those million?
This is a serious question. As I said, I don't have the POSIX specs. I did a quick look on Google and I just saw some pages talking about it, but no actual specs.
If you can quote the actual specs, I'd appreciate it.
Thanks.
Carey