Move generator

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Move generator

Post by bob »

Carey wrote:
bob wrote:
Carey wrote:
sje wrote:
CThinker wrote:"portable"?

gettimeofday() is not portable. Its resolution is also not guaranteed.
It's been around since 4.2BSD and that's at least thirteen years ago. It's POSIX from the IEEE specification. And the resolution is guaranteed to be either one microsecond or the best available if that's larger. It's also on every Mac OS/X system, and every Linux system in recent use.
I don't happen to have a copy of the POSIX specs. (And that it's from BSD is irrelevant. Lots of things are from back then and not done the same or done right, etc.)

Anyway, do the POSIX specs *specifically* say that the accuracy must be microsecond, or do they just say the resolution must be microsecond?

That's not the same thing.
That is totally irrelevant. The PC is accurate to 18.206 tics per second, or about 54 milliseconds. That's all the resolution the built in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the linux kernel and calculate a delay loop that will work. But that's no good for determining the time of day.
Actually Bob, I think you are wrong there. I've heard you mention 18.2 before and haven't spoken up because it wasn't in a conversation I was in. But this time...


The 18.2 was a result of the PC XT's limited 4.77 MHz clock cycling through a counter set to trigger at roll-over. It could be reprogrammed for higher rates, but that caused too much load on the slow processors. (And yes, there were public articles that clearly showed how you could get a more reasonable rate than 18.2. Something like even 20 was more convenient and quite doable. But there were always side effects.)

But realistically, the 18.2 was all that was available on the PC XT.

By the time of the 386 ATs there were other hardware timers available, things that programmers could use at higher resolutions without the side effects of reprogramming the PC's main timer.

They weren't used back in the days of DOS, unless you reprogrammed it yourself.


But since the days of Windows.... I can see Win3 using the 18.2 timer. Win95, not so much. WinXP & WinVista depending on the 18.2 timer... No.

Even on the 16 MHz 386s, 60 ticks per second was achievable without too much strain. On the 486s & Pentiums that Win9x used, 100 per second wouldn't be a problem.

I don't want to go look through Microsoft's hardware site to find out the specs for the required timers that it uses internally, but I am definitely willing to say that it's not 18.2 ticks per second, considering other timers are definitely available.
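
For what it's worth, the 18.2 number falls straight out of the PIT arithmetic: the 8253/8254 timer chip is fed a 1.193182 MHz clock, and left at its default 16-bit divisor of 65536 it rolls over roughly 18.2 times per second. A quick sketch of the arithmetic:

Code: Select all

#include <stdio.h>

/* The PC's 8253/8254 PIT runs from a 1,193,182 Hz input clock (the XT's
   4.77 MHz CPU clock divided by 4).  At the default 16-bit divisor of
   65536 it rolls over about 18.2 times per second. */
int main(void)
{
    const double pit_hz = 1193182.0;   /* PIT input clock */
    const double divisor = 65536.0;    /* default (maximum) reload value */

    printf("tick rate   = %.3f Hz\n", pit_hz / divisor);          /* ~18.2 Hz  */
    printf("tick period = %.2f ms\n", 1000.0 * divisor / pit_hz); /* ~54.9 ms  */
    return 0;
}

Reprogramming that divisor is exactly what those old articles were doing to get faster (or rounder) tick rates.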

All this talk about milliseconds and microseconds is beyond funny. You do
Well, then I hope you had a good hearty belly laugh....
well under Windows or Linux to get within .1 seconds, and that is hardly a given; it is usually two or three times that at best...
It started out as being about the possibility of 1-second resolution.

It was so undefined that you couldn't know for sure.

And then somebody asked about why not just use clock(), since it's a standard, official function. And it went from there.

The problem with many of these functions arises when threads are used. There are too many internal issues unless we want to start a separate thread, but worrying about millisecond or microsecond resolution is wasted time.
I don't think anybody is truly worried about millisecond or microsecond resolution.
That's where you are wrong. What about 1 0 games on ICC (game in one minute). We quickly get to sub-second target times, and if my apparent time usage and real time usage drift very far apart, I lose on time...

What has mostly been discussed is that, among all the functions that have been suggested, there is *no* portable, *reliable* way to do it.

There are common routines that are likely to work. But even for them, it's more a matter of hoping (and expecting) it will work rather than it being guaranteed.

When Steven brought up gettimeofday being POSIX etc., I was somewhat hoping he was right.

But he wasn't. POSIX doesn't require gettimeofday() to have any particular resolution. It could still be no better than 1 second.
You consider that "news"? :) When they could not even agree on whether "char x" produced a signed or unsigned x by default? :)


It probably won't be, but it's not guaranteed or required. A library writer can still do whatever they want, and it can be technically correct yet still unusable.
I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.
Nobody is disagreeing with that, Bob.

It started out because it was unknown and unspecified, and there was talk about there being no portable, *guaranteed* way to get reliable wall-clock time.

gettimeofday() is like much of the classic Unix stuff still shipped with most C compilers.... It's there. It's likely to work. But there is no real guarantee. And some compilers may not do it right, or may not return -1 if it's unavailable, etc.

That's the joy of non-standard stuff.




It may appear to be microseconds but in reality the numbers only change 18.2 times per second. (PC XT style) (Yes yes, that is *extremely* unlikely.)

I've seen compilers (or rather libraries) that lie about clock()'s CLOCKS_PER_SEC by just multiplying everything by some number to make it look like it's changing 1000 times per second. In other words, they lie and make it appear there is a higher accuracy than there really is.


So, do the POSIX specs actually require all one million microsecond values to be present? Or can an implementation cheat and only provide 60 or 100 (or whatever) distinct values within those million?

This is a serious question. As I said, I don't have the POSIX specs. I did a quick look on Google and I just saw some pages talking about it, but no actual specs.

If you can quote the actual specs, I'd appreciate it.

Thanks.

Carey
As to current timer resolutions, I've never seen any reference to any version of unix or windows, running on a PC, that used other than the usual 18+ tics per second. Sun machines used to use 100 tics. I've not seen anyone go finer-grained than that because of the cost of handling all those interrupts, however small it might be overall.
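
One way to settle the resolution-versus-accuracy question on any particular box is simply to measure it: call the clock in a tight loop and record the smallest non-zero step it ever takes. A rough sketch, assuming a POSIX-ish system where gettimeofday() exists at all; if the clock is really driven by an 18.2 Hz or 100 Hz tick, the minimum step will come out near 55 ms or 10 ms rather than 1 microsecond:

Code: Select all

#include <stdio.h>
#include <time.h>
#include <sys/time.h>

/* Spin on gettimeofday() and report the smallest increment ever observed. */
static long min_step_usec(void)
{
    struct timeval prev, now;
    long best = -1;
    int changes = 0;

    gettimeofday(&prev, NULL);
    while (changes < 1000) {
        long d;
        gettimeofday(&now, NULL);
        d = (now.tv_sec - prev.tv_sec) * 1000000L +
            (now.tv_usec - prev.tv_usec);
        if (d > 0) {                      /* the clock actually moved */
            if (best < 0 || d < best)
                best = d;
            prev = now;
            changes++;
        }
    }
    return best;
}

int main(void)
{
    printf("smallest gettimeofday() step: %ld us\n", min_step_usec());
    /* CLOCKS_PER_SEC only tells you the claimed scale of clock(),
       not how often the value actually changes. */
    printf("CLOCKS_PER_SEC claims %ld ticks/sec\n", (long)CLOCKS_PER_SEC);
    return 0;
}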
Carey
Posts: 313
Joined: Wed Mar 08, 2006 8:18 pm

Re: Move generator

Post by Carey »

bob wrote:
Carey wrote:
bob wrote:
Carey wrote:
sje wrote:
CThinker wrote:"portable"?

gettimeofday() is not portable. Its resolution is also not guaranteed.
It's been around since 4.2BSD and that's at least thirteen years ago. It's POSIX from the IEEE specification. And the resolution is guaranteed to be either one microsecond or the best available if that's larger. It's also on every Mac OS/X system, and every Linux system in recent use.
I don't happen to have a copy of the POSIX specs. (And that it's from BSD is irrelevant. Lots of things are from back then and not done the same or done right, etc.)

Anyway, do the POSIX specs *specifically* say that the accuracy must be microsecond, or do they just say the resolution must be microsecond?

That's not the same thing.
That is totally irrelevant. The PC is accurate to 18.206 tics per second, or about 54 milliseconds. That's all the resolution the built in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the linux kernel and calculate a delay loop that will work. But that's no good for determining the time of day.
Actually Bob, I think you are wrong there. I've heard you mention 18.2 before and haven't spoken up because it wasn't in a conversation I was in. But this time...


The 18.2 was a result of the PC XT's limited 4.77 MHz clock cycling through a counter set to trigger at roll-over. It could be reprogrammed for higher rates, but that caused too much load on the slow processors. (And yes, there were public articles that clearly showed how you could get a more reasonable rate than 18.2. Something like even 20 was more convenient and quite doable. But there were always side effects.)

But realistically, the 18.2 was all that was available on the PC XT.

By the time of the 386 ATs there were other hardware timers available, things that programmers could use at higher resolutions without the side effects of reprogramming the PC's main timer.

They weren't used back in the days of DOS, unless you reprogrammed it yourself.


But since the days of Windows.... I can see Win3 using the 18.2 timer. Win95, not so much. WinXP & WinVista depending on the 18.2 timer... No.

Even on the 16 MHz 386s, 60 ticks per second was achievable without too much strain. On the 486s & Pentiums that Win9x used, 100 per second wouldn't be a problem.

I don't want to go look through Microsoft's hardware site to find out the specs for the required timers that it uses internally, but I am definitely willing to say that it's not 18.2 ticks per second, considering other timers are definitely available.

All this talk about milliseconds and microseconds is beyond funny. You do
Well, then I hope you had a good hearty belly laugh....
well under Windows or Linux to get within .1 seconds, and that is hardly a given; it is usually two or three times that at best...
It started out as being about the possibility of 1-second resolution.

It was so undefined that you couldn't know for sure.

And then somebody asked about why not just use clock(), since it's a standard, official function. And it went from there.

The problem with many of these functions arises when threads are used. There are too many internal issues unless we want to start a separate thread, but worrying about millisecond or microsecond resolution is wasted time.
I don't think anybody is truly worried about millisecond or microsecond resolution.
That's where you are wrong. What about 1 0 games on ICC (game in one minute). We quickly get to sub-second target times, and if my apparent time usage and real time usage drift very far apart, I lose on time...
(sigh) I wish you'd make up your mind.

First you say that you think it's "beyond funny" that we are talking about milliseconds & microseconds and now you are saying that of course it's important etc. etc.

For the record, Prof. Hyatt, yes, I am aware of those situations. I think the games are pretty useless, but if you want to play them then that's certainly none of my business.

I do care somewhat that the time is done in at least centisecond resolution. Millisecond would be better.

However, more than that, I want the routines to be consistent from platform to platform. I don't want to have to resort to the "Try it and find out" mentality.

I'll admit that I misspoke a bit when I said "I don't think anybody is truly concerned...."

I probably should have said something like "In this particular conversation, I don't think anybody here is primarily concerned with whether it's millisecond or microsecond, but with whether or not they can get it portably and reliably."
What has mostly been discussed is that, among all the functions that have been suggested, there is *no* portable, *reliable* way to do it.

There are common routines that are likely to work. But even for them, it's more a matter of hoping (and expecting) it will work rather than it being guaranteed.

When Steven brought up gettimeofday being POSIX etc., I was somewhat hoping he was right.

But he wasn't. POSIX doesn't require gettimeofday() to have any particular resolution. It could still be no better than 1 second.
You consider that "news"? :) When they could not even agree on whether "char x" produced a signed or unsigned x by default? :)
News, no.

But Steven seemed pretty sure it did. And you chimed in with very similar sentiments.

I figured if the two of you were both agreeing about a POSIX function, then maybe I was wrong and I should check. I did ask Steven what the specs actually said but he didn't have a copy.

From what I see on the web, both of your attitudes were inappropriate. You were far, far too positive that gettimeofday() was going to work right, rather than hoping it would work right.

Both of you basically placed absolute faith in a routine that has no guarantee of working right.

It may often work right. Nobody is disagreeing with that.

But a lot of people do prefer some sort of guarantee, and POSIX is definitely not providing it.


And as for whether char should be signed or unsigned, that was a separate standardization process. That was ANSI C, not POSIX.

And the default char's undefined signedness was done for a very significant reason. Their charter *required* them to 'codify existing practice'. In other words, to take existing practice and implementations and try to find a good compromise while keeping most of the programs still in working condition.

It wasn't that they didn't want to make some changes, such as standardizing char to be signed or unsigned. It was that doing so would have broken too many existing programs, and their charter prevented them from doing that or from doing too much innovation in general.

That's why the C standard doesn't even require a char to be 8 bits. C can even allow bits in the middle of a signed 'int' to exist but not take part in the calculation. (That's why if you ever copy memory, only unsigned char is guaranteed to copy all the bits.)

There were enough existing C implementations and existing systems that making such requirements would have violated their charter.

In many areas they had to be very generic and sometimes had to say that it was 'implementation defined.'

It's unfortunate, because C has outlived many of the problem systems, but that's the way it is.
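
To make that last point concrete: the standard only guarantees that every bit of an object is visible through unsigned char, which is why a byte-wise copy is always written in those terms. A minimal sketch:

Code: Select all

#include <stddef.h>

/* Copy the object representation of src into dst, byte by byte.
   unsigned char is the one type the C standard guarantees has no padding
   bits and no trap representations, so every bit survives the copy. */
static void copy_bytes(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    while (n--)
        *d++ = *s++;
}

That is essentially how memcpy() behaves, which is why it is the usual way to move a value's representation from one type to another.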


It probably won't be, but it's not guaranteed or required. A library writer can still do whatever they want, and it can be technically correct yet still unusable.
I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.
Nobody is disagreeing with that, Bob.

It started out because it was unknown and unspecified, and there was talk about there being no portable, *guaranteed* way to get reliable wall-clock time.

gettimeofday() is like much of the classic Unix stuff still shipped with most C compilers.... It's there. It's likely to work. But there is no real guarantee. And some compilers may not do it right, or may not return -1 if it's unavailable, etc.

That's the joy of non-standard stuff.




It may appear to be microseconds but in reality the numbers only change 18.2 times per second. (PC XT style) (Yes yes, that is *extremely* unlikely.)

I've seen compilers (or rather libraries) that lie about clock()'s CLOCKS_PER_SEC by just multiplying everything by some number to make it look like it's changing 1000 times per second. In other words, they lie and make it appear there is a higher accuracy than there really is.


So, do the POSIX specs actually require all one million microsecond values to be present? Or can an implementation cheat and only provide 60 or 100 (or whatever) distinct values within those million?

This is a serious question. As I said, I don't have the POSIX specs. I did a quick look on Google and I just saw some pages talking about it, but no actual specs.

If you can quote the actual specs, I'd appreciate it.

Thanks.

Carey
As to current timer resolutions, I've never seen any reference to any version of unix or windows, running on a PC, that used other than the usual 18+ tics per second. Sun machines used to use 100 tics. I've not seen anyone go finer-grained than that because of the cost of handling all those interrupts, however small it might be overall.
I have never looked at the Linux kernel source and I don't intend to.

As for Windows, the source isn't available and I wouldn't look through it even if it was available. And I don't want to go through all of Microsoft's hardware docs to find out what they require.

But I do find it hard to believe that you would seriously expect modern systems to still be running at 18.2 ticks per second.

Windows certainly has timers with higher resolutions than that.

Also, I'm pretty sure that the old 386BSD back from 1992ish used a 60 Hz timer. And that was designed for the 386 systems of that time.

All a timer has to do is increment a counter. (Yes, more can happen, but that's the minimum. It doesn't have to do a task switch or something more.)

(And yes, I am aware that task switching, background processes and all sorts of stuff can interfere with getting consistent timing, but that's not what has been discussed, so there's no point in bringing it up.)
Carey
Posts: 313
Joined: Wed Mar 08, 2006 8:18 pm

Re: Move generator

Post by Carey »

CThinker wrote:
Carey wrote:I don't want to go look through Microsoft's hardware site to find out the specs for the required timers that it uses internally, but I am definitely willing to say that it's not 18.2 ticks per second, considering other timers are definitely available.
Straight from Microsoft support KB article:
http://support.microsoft.com/kb/q172338/

Code: Select all

Function                 Units                      Resolution
---------------------------------------------------------------------------
Now, Time, Timer         seconds                    1 second
GetTickCount             milliseconds               approx. 10 ms
TimeGetTime              milliseconds               approx. 10 ms
QueryPerformanceCounter  QueryPerformanceFrequency  same
Yes, it is definitely not 18.2 ticks per second. It is about 100 ticks per second.
Thanks!

I guess that answers it once and for all. Bob won't have to keep talking about 18.2 anymore....

I really didn't think that felt right.

I doubt even Win3 used 18.2, but considering it had to run on 8086, 80286 and 80386's it's quite possible it used whatever it could on that particular processor.
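
For anyone who wants to see those numbers for themselves, a small Win32 sketch along these lines (resolution will of course vary from machine to machine) compares the coarse tick counter against the performance counter:

Code: Select all

#include <stdio.h>
#include <windows.h>

/* Compare the ~10-16 ms GetTickCount() clock with the much finer
   QueryPerformanceCounter() clock across a short Sleep(). */
int main(void)
{
    LARGE_INTEGER freq, qpc0, qpc1;
    DWORD tick0, tick1;

    QueryPerformanceFrequency(&freq);        /* counts per second */

    tick0 = GetTickCount();
    QueryPerformanceCounter(&qpc0);
    Sleep(50);                               /* roughly 50 ms */
    QueryPerformanceCounter(&qpc1);
    tick1 = GetTickCount();

    printf("GetTickCount delta: %lu ms\n", (unsigned long)(tick1 - tick0));
    printf("QPC delta:          %.3f ms\n",
           1000.0 * (double)(qpc1.QuadPart - qpc0.QuadPart) /
           (double)freq.QuadPart);
    return 0;
}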
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Move generator

Post by bob »

Carey wrote:
bob wrote:
Carey wrote:
bob wrote:
Carey wrote:
sje wrote:
CThinker wrote:"portable"?

gettimeofday() is not portable. Its resolution is also not guaranteed.
It's been around since 4.2BSD and that's at least thirteen years ago. It's POSIX from the IEEE specification. And the resolution is guaranteed to be either one microsecond or the best available if that's larger. It's also on every Mac OS/X system, and every Linux system in recent use.
I don't happen to have a copy of the POSIX specs. (And that it's from BSD is irrelevant. Lots of things are from back then and not done the same or done right, etc.)

Anyway, do the POSIX specs *specifically* say that the accuracy must be microsecond, or do they just say the resolution must be microsecond?

That's not the same thing.
That is totally irrelevant. The PC is accurate to 18.206 tics per second, or about 54 milliseconds. That's all the resolution the built in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the linux kernel and calculate a delay loop that will work. But that's no good for determining the time of day.
Actually Bob, I think you are wrong there. I've heard you mention 18.2 before and haven't spoken up because it wasn't in a conversation I was in. But this time...


The 18.2 was a result of the PC XT's limited 4.77 MHz clock cycling through a counter set to trigger at roll-over. It could be reprogrammed for higher rates, but that caused too much load on the slow processors. (And yes, there were public articles that clearly showed how you could get a more reasonable rate than 18.2. Something like even 20 was more convenient and quite doable. But there were always side effects.)

But realistically, the 18.2 was all that was available on the PC XT.

By the time of the 386 ATs there were other hardware timers available, things that programmers could use at higher resolutions without the side effects of reprogramming the PC's main timer.

They weren't used back in the days of DOS, unless you reprogrammed it yourself.


But since the days of Windows.... I can see Win3 using the 18.2 timer. Win95, not so much. WinXP & WinVista depending on the 18.2 timer... No.

Even on the 16 MHz 386s, 60 ticks per second was achievable without too much strain. On the 486s & Pentiums that Win9x used, 100 per second wouldn't be a problem.

I don't want to go look through Microsoft's hardware site to find out the specs for the required timers that it uses internally, but I am definitely willing to say that it's not 18.2 ticks per second, considering other timers are definitely available.

All this talk about milliseconds and microseconds is beyond funny. You do
Well, then I hope you had a good hearty belly laugh....
well under Windows or Linux to get within .1 seconds, and that is hardly a given; it is usually two or three times that at best...
It started out as being about the possibility of 1-second resolution.

It was so undefined that you couldn't know for sure.

And then somebody asked about why not just use clock(), since it's a standard, official function. And it went from there.

The problem with many of these functions arises when threads are used. There are too many internal issues unless we want to start a separate thread, but worrying about millisecond or microsecond resolution is wasted time.
I don't think anybody is truly worried about millisecond or microsecond resolution.
That's where you are wrong. What about 1 0 games on ICC (game in one minute). We quickly get to sub-second target times, and if my apparent time usage and real time usage drift very far apart, I lose on time...
(sigh) I wish you'd make up your mind.

First you say that you think it's "beyond funny" that we are talking about milliseconds & microseconds and now you are saying that of course it's important etc. etc.

For the record, Prof. Hyatt, yes, I am aware of those situations. I think the games are pretty useless, but if you want to play them then that's certainly none of my business.

I do care somewhat that the time is done in at least centisecond resolution. Millisecond would be better.

However, more than that, I want the routines to be consistent from platform to platform. I don't want to have to resort to the "Try it and find out" mentality.

I'll admit that I misspoke a bit when I said "I don't think anybody is truly concerned...."

I probably should have said something like "In this particular conversation, I don't think anybody here is primarily concerned with whether it's millisecond or microsecond, but with whether or not they can get it portably and reliably."
What has mostly been discussed is that, among all the functions that have been suggested, there is *no* portable, *reliable* way to do it.

There are common routines that are likely to work. But even for them, it's more a matter of hoping (and expecting) it will work rather than it being guaranteed.

When Steven brought up gettimeofday being POSIX etc., I was somewhat hoping he was right.

But he wasn't. POSIX doesn't require gettimeofday() to have any particular resolution. It could still be no better than 1 second.
You consider that "news"? :) When they could not even agree on whether "char x" produced a signed or unsigned x by default? :)
News, no.

But Steven seemed pretty sure it did. And you chimed in with very similar sentiments.

I figured if the two of you were both agreeing about a POSIX function, then maybe I was wrong and I should check. I did ask Steven what the specs actually said but he didn't have a copy.

From what I see on the web, both of your attitudes were inappropriate. You were far, far too positive that gettimeofday() was going to work right, rather than hoping it would work right.

Both of you basically placed absolute faith in a routine that has no guarantee of working right.

It may often work right. Nobody is disagreeing with that.

But a lot of people do prefer some sort of guarantee, and POSIX is definitely not providing it.


And as for whether char should be signed or unsigned, that was a separate standardization process. That was ANSI C, not POSIX.

And the default char's undefined signedness was done for a very significant reason. Their charter *required* them to 'codify existing practice'. In other words, to take existing practice and implementations and try to find a good compromise while keeping most of the programs still in working condition.

It wasn't that they didn't want to make some changes, such as standardizing char to be signed or unsigned. It was that doing so would have broken too many existing programs, and their charter prevented them from doing that or from doing too much innovation in general.

That's why the C standard doesn't even require a char to be 8 bits. C can even allow bits in the middle of a signed 'int' to exist but not take part in the calculation. (That's why if you ever copy memory, only unsigned char is guaranteed to copy all the bits.)

There were enough existing C implementations and existing systems that making such requirements would have violated their charter.

In many areas they had to be very generic and sometimes had to say that it was 'implementation defined.'

It's unfortunate, because C has outlived many of the problem systems, but that's the way it is.


It probably won't be, but it's not guaranteed or required. A library writer can still do whatever they want, and it can be technically correct yet still unusable.
I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.
Nobody is disagreeing with that, Bob.

It started out because it was unknown and unspecified, and there was talk about there being no portable, *guaranteed* way to get reliable wall-clock time.

gettimeofday() is like much of the classic Unix stuff still shipped with most C compilers.... It's there. It's likely to work. But there is no real guarantee. And some compilers may not do it right, or may not return -1 if it's unavailable, etc.

That's the joy of non-standard stuff.




It may appear to be microseconds but in reality the numbers only change 18.2 times per second. (PC XT style) (Yes yes, that is *extremely* unlikely.)

I've seen compilers (or rather libraries) that lie about clock()'s CLOCKS_PER_SEC by just multiplying everything by some number to make it look like it's changing 1000 times per second. In other words, they lie and make it appear there is a higher accuracy than there really is.


So, do the POSIX specs actually require all one million microsecond values to be present? Or can an implementation cheat and only provide 60 or 100 (or whatever) distinct values within those million?

This is a serious question. As I said, I don't have the POSIX specs. I did a quick look on Google and I just saw some pages talking about it, but no actual specs.

If you can quote the actual specs, I'd appreciate it.

Thanks.

Carey
As to current timer resolutions, I've never seen any reference to any version of unix or windows, running on a PC, that used other than the usual 18+ tics per second. Sun machines used to use 100 tics. I've not seen anyone go finer-grained than that because of the cost of handling all those interrupts, however small it might be overall.
I have never looked at the Linux kernel source and I don't intend to.

As for Windows, the source isn't available and I wouldn't look through it even if it was available. And I don't want to go through all of Microsoft's hardware docs to find out what they require.

But I do find it hard to believe that you would seriously expect modern systems to still be running at 18.2 ticks per second.

Windows certainly has timers with higher resolutions than that.

Also, I'm pretty sure that the old 386BSD back from 1992ish used a 60 Hz timer. And that was designed for the 386 systems of that time.

All a timer has to do is increment a counter. (Yes, more can happen, but that's the minimum. It doesn't have to do a task switch or something more.)

(And yes, I am aware that task switching, background processes and all sorts of stuff can interfere with getting consistent timing, but that's not what has been discussed, so there's no point in bringing it up.)
I have been consistent. I _want_ high-resolution timer accuracy. For the very reason I gave. But I also realize that I am not going to get it on all platforms, which is OK at times, not OK for others. Normal chess games could easily get by with 1 second resolution or worse. But very fast games become problematic.

For the timer logic, handling 1,000 (or more) interrupts per second has a non-trivial effect on performance. It eats more CPU time than most are willing to give up, day in and day out...



If you look at the start of this thread, all I was saying is that one does not need to create a gettimeofday() function in Windows just so you can get time. Doing so is wasted code (bigger, slower).

Your own Crafty code is this:

Code: Select all

unsigned int ReadClock(void)
{
#if defined(UNIX) || defined(AMIGA)
  struct timeval timeval;
  struct timezone timezone;
#endif
#if defined(NT_i386)
  HANDLE hThread;
  FILETIME ftCreate, ftExit, ftKernel, ftUser;
  BITBOARD tUser64;
#endif

#if defined(UNIX) || defined(AMIGA)
  gettimeofday(&timeval, &timezone);
  return (timeval.tv_sec * 100 + (timeval.tv_usec / 10000));
#endif
#if defined(NT_i386)
  return ((unsigned int) GetTickCount() / 10);
#endif
}

Instead of implementing a gettimeofday() for NT, you simply called GetTickCount(). I think your own code validates my point.
Carey
Posts: 313
Joined: Wed Mar 08, 2006 8:18 pm

Re: Move generator

Post by Carey »

Big quote snipped since we aren't talking about most of that stuff anymore.
bob wrote: I have been consistent. I _want_ high-resolution timer accuracy. For the very reason I gave. But I also realize that I am not going to get it on all platforms, which is OK at times, not OK for others. Normal chess games could easily get by with 1 second resolution or worse. But very fast games become problematic.

For the timer logic, handling 1,000 (or more) interrupts per second has a non-trivial effect on performance. It eats more CPU time than most are willing to give up, day in and day out...
For the timer logic, some of that will depend on what the OS does with the timer. Whether it just does a basic timer increment and return, or whether it checks for possible task switches too.

Just servicing the interrupt does, of course, consume quite a few cycles.

I haven't kept track of stuff like that in years, but I remember back in the 486 days I read an article that I thought was pretty interesting at the time. Since that was the first x86 processor with built-in caches, a lot of people were uncertain how it really behaved. So, the article basically ran a bunch of tests to find out the cost of doing interrupts.

The Intel book was saying it should be about 100 cycles, but when he actually tested it, the best he got was 50% more, at about 150 cycles. Because of cache misses and other factors it sometimes took nearly 300 cycles.

With the caches turned off, things were more consistent but of course a lot slower.

The part that I found so interesting was when he installed various DOS memory managers (EMM386, QEMM, etc.). Because of the protected mode changes and other overhead, it went up to around 900 cycles worst case, with the center at around 600 cycles. (And he said Microsoft's EMM386 was even slower. I was using that, which is one of the reasons I remember the article.)


So, figure modern processors have streamlined things a little, and since we'd be staying in protected mode rather than switching between 16 and 32 bits, say 300 cycles just to do a basic hardware interrupt, a timer variable increment (with no protection), and a return from the interrupt.

So 100 timer interrupts per second would consume about 30,000 cycles per second.

I'll even say I was underestimating it and it was 500 cycles. So the total cost would be 50,000 cycles per second.

For processors made in the last 10 years, that's not too bad.

If we up it to 1,000 timer interrupts per second, then we are talking about some noticeable overhead, especially if the OS is a little slow and bloated. (cough.... Windows...)

And that's just talking about doing a timer. If the OS actually tries to service drivers and do task switches, etc., then the cost is most definitely going to increase enough to bog things down.
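
To put rough numbers on that (the 2 GHz below is just a stand-in for a reasonably modern processor, and the per-interrupt costs are the guesses from above; this only counts the bare increment-and-return path, not any scheduler or driver work piled on top):

Code: Select all

#include <stdio.h>

/* Back-of-the-envelope timer-interrupt overhead:
   fraction = (interrupts/sec * cycles/interrupt) / (CPU cycles/sec) */
int main(void)
{
    const double cpu_hz   = 2.0e9;                /* assumed 2 GHz CPU    */
    const double rates[2] = { 100.0, 1000.0 };    /* interrupts per sec   */
    const double costs[2] = { 300.0, 500.0 };     /* cycles per interrupt */
    int r, c;

    for (r = 0; r < 2; r++)
        for (c = 0; c < 2; c++) {
            double burned = rates[r] * costs[c];
            printf("%4.0f/sec at %3.0f cycles = %6.0f cycles/sec = %.4f%% of the CPU\n",
                   rates[r], costs[c], burned, 100.0 * burned / cpu_hz);
        }
    return 0;
}

Even at 1,000 ticks of 500 cycles each, the raw increment cost is a tiny fraction of a percent; the real expense is whatever else the kernel decides to do on every tick.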


However, since we are explicitly calling a routine to get the time (rather than it having to be available at a certain memory location at all times), you can actually do a hybrid approach and get higher resolution with more reasonable overhead.

Do the 100 per second, and then when you want the high-res time, check the timer hardware to see where it's at and combine that with the timer ticks you have been accumulating. There were articles on doing that at least 15 years ago.

Of course, I readily admit I have no idea whether Windows does anything like that. Probably not. I'm just talking here.
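
In sketch form, the hybrid scheme looks something like the code below. read_tick_count() and read_pit_remaining() are hypothetical stand-ins for however a system actually exposes the interrupt-maintained tick counter and the hardware down-counter, and the divisor assumes a classic 8254 programmed for a ~100 Hz tick:

Code: Select all

#include <stdio.h>

#define PIT_HZ      1193182.0   /* 8254 input clock                   */
#define PIT_DIVISOR 11932.0     /* reload value giving a ~100 Hz tick */

/* Hypothetical stand-ins: a real system would return the ISR-maintained
   tick count and latch the 8254's current down-counter value here. */
static unsigned long read_tick_count(void)    { return 12345UL; }
static unsigned int  read_pit_remaining(void) { return 5000U; }

/* Elapsed time in microseconds: whole ticks plus however far the
   down-counter has run inside the current tick. */
static double hybrid_time_usec(void)
{
    unsigned long ticks   = read_tick_count();
    double        partial = PIT_DIVISOR - (double)read_pit_remaining();

    return ((double)ticks * PIT_DIVISOR + partial) * (1.0e6 / PIT_HZ);
}

int main(void)
{
    printf("%.1f us since boot (with the fake sample values)\n",
           hybrid_time_usec());
    return 0;
}

A real implementation also has to guard against the tick interrupt firing between the two reads, otherwise the combined value can occasionally jump backwards by a tick.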


Anyway it goes, doing 1000 timer ticks per second is not unreasonable, provided the OS doesn't try to do normal OS stuff every single tick and it has a reasonably streamlined interrupt path for the timer interrupt.

(Let's play a game.... Everybody in here who thinks Windows Vista is efficient and can quickly service 1000 timer interrupts per second, raise your hand.... :lol: )