Move generator

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Move generator

Post by bob »

Carey wrote:
sje wrote:
CThinker wrote:"portable"?

gettimeofday() is not portable. Its resolution is also not guaranteed.
It's been around since 4.2BSD and that's at least thirteen years ago. It's POSIX from the IEEE specification. And the resolution is guaranteed to be either one microsecond or the best available if that's larger. It's also on every Mac OS/X system, and every Linux system in recent use.
I don't happen to have a copy of the POSIX specs. (And that it's from BSD is irrelevant. Lots of things are from back then and not done the same or done right, etc.)

Anyway, do the POSIX specs *specifically* say that the accuracy must be microseconds, or do they just say the resolution must be microseconds?

That's not the same thing.
That is totally irrelevant. The PC is accurate to 18.206 ticks per second, or about 54 milliseconds. That's all the resolution the built-in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the Linux kernel and calculate a delay loop that will work. But that's no good for determining the time of day.

All this talk about milliseconds and microseconds is beyond funny. You do good under windows or linux to get within .1 seconds and that is hardly a given, it is usually two or three times that at best...

The problem with many of these functions arises when threads are used. There are too many internal issues unless we want to start a separate thread, but worrying about millisecond or microsecond resolution is wasted time.

I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.

It may appear to be microseconds but in reality the numbers only change 18.2 times per second. (PC XT style) (Yes yes, that is *extremely* unlikely.)

I've seen compilers (or rather libraries) that lie about clock()'s CLOCKS_PER_SEC by just multiplying everything by some number to make it look like it's changing 1000 times per second. In other words, they lie and make it appear there is a higher accuracy than there really is.


So, does the POSIX spec actually require all one million microsecond values to be present? Or can an implementation cheat and only provide 60 or 100 (or whatever) distinct values within that million?

This is a serious question. As I said, I don't have the POSIX specs. I did a quick look on Google and I just saw some pages talking about it, but no actual specs.

If you can quote the actual specs, I'd appreciate it.

Thanks.

Carey
User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: Move generator

Post by sje »

Carey wrote:
Honestly, I don't give a damn about non-POSIX platforms. I use Linux and BSD, and never have I had a problem with gettimeofday() as the
Well, you should care. Not too many programmers want to dismiss 95% of the computers in the world.
You have missed the point. The gettimeofday() routine is the best way without doubt to handle fine resolution timing in a portable fashion. It has been standardized by a non commercial consensus process. It was not something defined by fiat by a corporation responsible to no one, or at least to no technical authority. People who understand this have taken the time and effort to port gettimeofday() and other POSIX calls to less capable systems. That's the way porting is supposed to work, not the other way around which would be polluting the better with the lesser.

It would be difficult for me to care less about 95 percent of computer users if all of what they are doing is surfing for porn or playing some idiot video game fit for chimpanzees.
CThinker
Posts: 388
Joined: Wed Mar 08, 2006 10:08 pm

Re: Move generator

Post by CThinker »

bob wrote:I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.
If you look at the start of this thread, all I was saying is that one does not need to create a gettimeofday() function in Windows just so you can get time. Doing so is wasted code (bigger, slower).

Your own crafty code is this:

Code: Select all

unsigned int ReadClock(void)
{
#if defined(UNIX) || defined(AMIGA)
  struct timeval timeval;
  struct timezone timezone;
#endif
#if defined(NT_i386)
  HANDLE hThread;
  FILETIME ftCreate, ftExit, ftKernel, ftUser;
  BITBOARD tUser64;
#endif

#if defined(UNIX) || defined(AMIGA)
  gettimeofday(&timeval, &timezone);
  return (timeval.tv_sec * 100 + (timeval.tv_usec / 10000));
#endif
#if defined(NT_i386)
  return ((unsigned int) GetTickCount() / 10);
#endif
}
Instead of implementing a gettimeofday() for NT, you simply called GetTickCount(). I think your own code validates my point.

Btw, I just realized that you have some dead code there that you might want to clean up (those local variables that don't get used; I'm sure the compiler will optimize them out anyway).
bob wrote: That is totally irrelevant. The PC is accurate to 18.206 ticks per second, or about 54 milliseconds. That's all the resolution the built-in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the Linux kernel and calculate a delay loop that will work. But that's no good for determining the time of day.
Try this code and you will see that the resolution on NT is 16ms. Search around the web and you will find a lot of postings saying the same. That's 66 ticks per second. If you use multimedia timers, you would get even finer resolutions. The situation gets better with Windows CE, because it was designed with RTOS in mind.

Code: Select all

#include <windows.h>
#include <stdio.h>
void main (void) {
    while (1) {
        DWORD d, c = GetTickCount();
        while ((d = (GetTickCount() - c)) == 0)
            ;
        if (d < (1000 / 18)) {
            printf ("%i", d);
            break;
        }
    }
}
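(Side note on the multimedia timers mentioned above: a similar probe can be run against timeGetTime() after asking for 1 ms granularity. This is only an illustrative sketch, assuming winmm.lib is linked; it is not code from this thread.)

Code: Select all

#include <windows.h>
#include <mmsystem.h>   /* timeBeginPeriod()/timeGetTime(); link with winmm.lib */
#include <stdio.h>

int main (void) {
    DWORD c, d;

    timeBeginPeriod (1);        /* ask the OS for 1 ms timer granularity */
    c = timeGetTime ();
    while ((d = timeGetTime () - c) == 0)
        ;                       /* spin until the reported time changes */
    printf ("smallest observed step: %lu ms\n", (unsigned long) d);
    timeEndPeriod (1);          /* restore the previous granularity */
    return 0;
}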
Carey
Posts: 313
Joined: Wed Mar 08, 2006 8:18 pm

Re: Move generator

Post by Carey »

bob wrote:
Carey wrote:
sje wrote:
CThinker wrote:"portable"?

gettimeofday() is not portable. Its resolution is also not guaranteed.
It's been around since 4.2BSD and that's at least thirteen years ago. It's POSIX from the IEEE specification. And the resolution is guaranteed to be either one microsecond or the best available if that's larger. It's also on every Mac OS/X system, and every Linux system in recent use.
I don't happen to have a copy of the POSIX specs. (And that it's from BSD is irrelevant. Lots of things are from back then and not done the same or done right, etc.)

Anyway, do the POSIX specs *specifically* say that the accuracy must be microseconds, or do they just say the resolution must be microseconds?

That's not the same thing.
That is totally irrelevant. The PC is accurate to 18.206 ticks per second, or about 54 milliseconds. That's all the resolution the built-in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the Linux kernel and calculate a delay loop that will work. But that's no good for determining the time of day.
Actually Bob, I think you are wrong there. I've heard you mention 18.2 before and haven't spoken up because it wasn't in a conversation I was in. But this time...


The 18.2 was a result of the PC XT's timer chip, clocked at about 1.19 MHz (derived from the same crystal as the 4.77 MHz CPU clock), counting through a 16-bit divisor that triggers at roll-over. It could be reprogrammed for higher rates, but that caused too much load on the slow processors. (And yes, there were public articles that clearly showed how you could get a more reasonable rate than 18.2. Something like an even 20 was more convenient and quite doable. But there were always side-effects.)
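(For concreteness, the 18.2 figure falls straight out of the timer-chip arithmetic. A quick check, assuming the standard 1,193,182 Hz PIT input clock and the default 16-bit divisor of 65,536:)

Code: Select all

#include <stdio.h>

int main (void) {
    double pit_hz  = 1193182.0;   /* PIT input: 14.31818 MHz crystal / 12 */
    double divisor = 65536.0;     /* channel 0 left at its maximum (2^16) count */

    printf ("ticks per second: %.3f\n", pit_hz / divisor);           /* ~18.207 */
    printf ("ms per tick:      %.3f\n", 1000.0 * divisor / pit_hz);  /* ~54.9   */
    return 0;
}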

But realistically, the 18.2 was all that was available on the PC XT.

By the time of the 386 ATs, there were other hardware timers available: things that programmers could use at higher resolutions without the side effects of reprogramming the PC's main timer.

They weren't used back in the days of DOS unless you reprogrammed them yourself.


But since the days of Windows.... I can see Win3 using the 18.2 timer. Win95, not so much. WinXP & WinVista depending on the 18.2 timer... No.

Even on the 16 MHz 386s, 60 ticks per second was achievable without too much strain. On the 486s and Pentiums that Win9x used, 100 per second wouldn't be a problem.

I don't want to go look through Microsoft's hardware site to find out the specs for the required timers that it uses internally, but I am definitely willing to say that it's not 18.2 ticks per second, considering other timers are definitely available.

All this talk about milliseconds and microseconds is beyond funny. You do
Well, then I hope you had a good hearty belly laugh....
good under windows or linux to get within .1 seconds and that is hardly a given, it is usually two or three times that at best...
It started out as the possibility of nothing better than 1-second resolution.

The point being that it's so loosely defined that you don't know for sure.

And then somebody asked why not just use clock(), since it's a standard, official function. And it went from there.

The problem with many of these functions arises when threads are used. There are too many internal issues unless we want to start a separate thread, but worrying about millisecond or microsecond resolution is wasted time.
I don't think anybody is truly worried about millisecond or microsecond resolution.

What has mostly been discussed is that in all the functions that have been suggested, there is *no* portable, *reliable* way to do it.

There are common routines that are likely to work. But even for them, it's more a matter of hoping (and expecting) it will work rather than it being guaranteed.

When Steven brought up gettimeofday being POSIX etc., I was somewhat hoping he was right.

But he wasn't. POSIX doesn't require gettimeofday() to have any particular resolution. It could still be no better than 1 second.

It probably won't be, but it's not guaranteed or required. A library writer can still do whatever they want and have it be compliant yet unusable.
I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.
Nobody is disagreeing with that, Bob.

It started out because it was unknown and unspecified, and there was talk about there being no portable, *guaranteed* way to get reliable wall clock time.

gettimeofday() is like much of the classic Unix stuff still in most C compilers.... It's there. It's likely to work. But there is no real guarantee. And some compilers may not do it right, or not return a -1 if it's unavailable, etc.

That's the joy of non-standard stuff.




It may appear to be microseconds but in reality the numbers only change 18.2 times per second. (PC XT style) (Yes yes, that is *extremely* unlikely.)

I've seen compilers (or rather libraries) that lie about clock()'s CLOCKS_PER_SEC by just multiplying everything by some number to make it look like it's changing 1000 times per second. In other words, they lie and make it appear there is a higher accuracy than there really is.


So, does the POSIX spec actually require all one million microsecond values to be present? Or can an implementation cheat and only provide 60 or 100 (or whatever) distinct values within that million?

This is a serious question. As I said, I don't have the POSIX specs. I did a quick look on Google and I just saw some pages talking about it, but no actual specs.

If you can quote the actual specs, I'd appreciate it.

Thanks.

Carey
Carey
Posts: 313
Joined: Wed Mar 08, 2006 8:18 pm

Re: Move generator

Post by Carey »

sje wrote:
Carey wrote:
Honestly, I don't give a damn about non-POSIX platforms. I use Linux and BSD, and never have I had a problem with gettimeofday() as the
Well, you should care. Not too many programmers want to dismiss 95% of the computers in the world.
You have missed the point. The gettimeofday() routine is the best way without doubt to handle fine resolution timing in a portable fashion. It has been standardized by a non commercial consensus process.
Steven,

You missed my point...

It *HAS NOT* been standardized. That's the point.

POSIX says the function has to exist and it has to return its results in microseconds but it says nothing about the granularity of that timer.

It can work in half seconds or even no better than full seconds, and still be fully POSIX. Fully compliant, but utterly useless.

Apparently it doesn't require it to return even the best available time the hardware has. Just a time specified in microseconds.


That's the point.

Whether it is likely to work on the systems you use isn't the point.

You were oh so eager to say that it has to work because it's POSIX!!! But POSIX doesn't seem to require it to work the way you thought.


I was hoping you were right. I honestly was. But it appears POSIX simply did a useless standardization for it.
It was not something defined by fiat by a corporation responsible to no one, or at least to no technical authority. People who understand this have taken the time and effort to port gettimeofday() and other POSIX calls to less capable systems. That's the way porting is supposed to work, not the other way around which would be polluting the better with the lesser.

It would be difficult for me to care less about 95 percent of computer users if all of what they are doing is surfing for porn or playing some idiot video game fit for chimpanzees.
Ahhh.... Arrogance... Elitism.... A true command line Unix person.
Dann Corbit
Posts: 12541
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: gettimeofday() for Windows

Post by Dann Corbit »

Carey wrote:
Dann Corbit wrote: The gettimeofday() function is useful because literally dozens of Unix-born Xboard engines use that call. So I made a port of it available to simplify porting.

If you want to do timing on Windows, there is a high resolution timer (and if you find it is not available you can use an alternative).
You might want to check to make sure it doesn't put too much overhead into your search.

Some months back, I retried several timers and the Windows specific 'high' res timers tended to have a much higher overhead than the more generic ones.

Calling it more than a few hundred times per second produced a noticeable slowdown in the search.
Most engines only call the timer once every few thousand nodes.
Typically, something like this:

if ((nodes % 26801) == 0) check_time();

Some people like to just stick the timer check directly into their search and let it check it every pass. I don't. I prefer to set up a self adjusting node counter that tries to do it no more than a hundred times per second. Any more than that and the Windows specific timers started slowing down the search.
How do you self-adjust?

It is also possible to use the rdtsc instruction or things of that nature.
Not recommended unless you are using it as some sort of 'unspecified' counter. (Meaning you know it counts something, but not what.)

With CPUs being able to slow themselves down when they get hot (or when running on batteries), it's not really consistent enough for anything but some sort of 'unknown' timer.
Timing and any other OS-specific function is always going to be inherently non-portable. So it is nice to have a way that can be used across systems. I would not suggest using the provided gettimeofday() function for a new Windows program. But for an existing Unix program being ported to Windows it's a slam dunk.
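(For reference, such a shim is usually built on GetSystemTimeAsFileTime(). The sketch below is only an illustration of the general idea, with made-up names; it is not necessarily the port Dann distributes.)

Code: Select all

#include <windows.h>

/* Illustrative gettimeofday()-style call built on the Win32 system clock.
   FILETIME counts 100 ns units since 1601-01-01; shift to the Unix epoch. */
struct win_timeval { long tv_sec; long tv_usec; };

static int win_gettimeofday (struct win_timeval *tv) {
    FILETIME ft;
    ULARGE_INTEGER t;

    GetSystemTimeAsFileTime (&ft);
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    t.QuadPart -= 116444736000000000ULL;   /* 1601..1970 offset, in 100 ns units */

    tv->tv_sec  = (long) (t.QuadPart / 10000000ULL);
    tv->tv_usec = (long) ((t.QuadPart % 10000000ULL) / 10ULL);
    return 0;
}

Note that the underlying clock still advances in the same coarse steps discussed above, so the microsecond field does not imply microsecond resolution.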
Assuming gettimeofday() actually works right, has reasonable accuracy, and has tolerable overhead, I don't see anything wrong with using it. I (and probably most people) don't really care about the origins of the function. It could date back to DOS or Unix or even before. As long as it works reliably and is in enough compilers and OSes, then... (shrug).

But I still say that whatever you choose, you should set up a little test at the start of the program to make sure that the timer actually changes and that its resolution is sufficient for what you need.

Pretty cheap insurance for somebody else compiling it on a system you haven't tested.
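(Along those lines, a minimal startup sanity check might look like the sketch below, assuming a POSIX gettimeofday(); the 20 ms threshold is an arbitrary illustration, not a recommendation from this thread.)

Code: Select all

#include <stdio.h>
#include <sys/time.h>

/* Rough startup probe: measure the smallest observable step of gettimeofday(). */
static int timer_looks_usable (void) {
    struct timeval a, b;
    long usec;

    gettimeofday (&a, NULL);
    do {
        gettimeofday (&b, NULL);
        usec = (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
    } while (usec == 0);               /* spin until the value changes */

    return usec < 20000;               /* smallest step under 20 ms? */
}

int main (void) {
    printf ("timer %s\n", timer_looks_usable () ? "looks fine" : "is too coarse");
    return 0;
}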
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Move generator

Post by bob »

CThinker wrote:
bob wrote:I am with Steven on the gettimeofday(). I've been using it far longer than 13 years. It worked in 1985 on 4.x BSD on our VAX at UAB. That's 23 years of continual functionality.
If you look at the start of this thread, all I was saying is that one does not need to create a gettimeofday() function in Windows just so you can get time. Doing so is wasted code (bigger, slower).

Your own crafty code is this:

Code: Select all

unsigned int ReadClock(void)
{
#if defined(UNIX) || defined(AMIGA)
  struct timeval timeval;
  struct timezone timezone;
#endif
#if defined(NT_i386)
  HANDLE hThread;
  FILETIME ftCreate, ftExit, ftKernel, ftUser;
  BITBOARD tUser64;
#endif

#if defined(UNIX) || defined(AMIGA)
  gettimeofday(&timeval, &timezone);
  return (timeval.tv_sec * 100 + (timeval.tv_usec / 10000));
#endif
#if defined(NT_i386)
  return ((unsigned int) GetTickCount() / 10);
#endif
}
Instead of implementing a gettimeofday() for NT, you simply called GetTickCount(). I think you own code validates my point.
Actually it doesn't. It just shows what happens when you ask people to help in making a program portable across a wide range of platforms. Many of the Unix "fixes" came from others as well, since we don't have one of every type of Unix box possible here either.

I would much prefer to see a gettimeofday() for all systems. Then there would be no portability issues, which would _really_ make life nicer. But then again, Windows has no intention of being compatible with anything, it seems, so two "forks" in development will always exist.



Btw, I just realized that you have some dead code there that you might want to clean up (those local variables that don't get used; I'm sure the compiler will optimize them out anyway).
I try not to modify that mess anyway, although I will clean it up at some point since it is ugly to wade through... A simple change in that code can have lots of unwanted side-effects that break compiles all over the place....

bob wrote: That is totally irrelevant. The PC is accurate to 18.206 ticks per second, or about 54 milliseconds. That's all the resolution the built-in clock in the PC hardware has. And, in fact, you can't even get near that in accurately timing things. The only way to time accurately is to do as is done in the Linux kernel and calculate a delay loop that will work. But that's no good for determining the time of day.
Try this code and you will see that the resolution on NT is 16ms. Search around the web and you will find a lot of postings saying the same. That's 66 ticks per second. If you use multimedia timers, you would get even finer resolutions. The situation gets better with Windows CE, because it was designed with RTOS in mind.

Code: Select all

#include <windows.h>
#include <stdio.h>
void main (void) {
    while (1) {
        DWORD d, c = GetTickCount();
        while ((d = (GetTickCount() - c)) == 0)
            ;
        if (d < (1000 / 18)) {
            printf ("%i", d);
            break;
        }
    }
}
You missed my point. The clock ticks every 16ms. But when I check the clock to see how much time has elapsed, I am +/- 16 ms from reality. The clock could have ticked immediately prior to my sampling the value, or it could tick just after I did the sample, resulting in up to 16 ms error...
CThinker
Posts: 388
Joined: Wed Mar 08, 2006 10:08 pm

Re: gettimeofday() for Windows

Post by CThinker »

Dann Corbit wrote:Most engines only call the timer once every few thousand nodes.
Typically, something like this:

if ((nodes % 26801) == 0) check_time();
Dann Corbit wrote:How do you self-adjust?
Interestingly, on some older versions of Thinker, I used a separate thread that sleeps for the duration of the target search time (or until some event). When it wakes up due to time expiration, it simply sets a global boolean variable indicating so.

The search function checks for this global variable instead of the node count + timer.

I did not gain anything, but the code is longer (maintenance of another thread). And for me, since smaller is more appealing, I just use the method that you described (check the node count, then check the time).

I might bring that back. Hmn...
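(A rough sketch of that arrangement, using the Win32 thread API since the discussion is Windows-centric; the names are made up for illustration and this is not Thinker's actual code.)

Code: Select all

#include <windows.h>

/* Watchdog thread: sleep for the allotted search time, then raise a flag
   that the search polls instead of reading the clock on a node counter. */
static volatile LONG time_is_up = 0;

static DWORD WINAPI timer_thread (LPVOID arg) {
    DWORD ms = (DWORD) (ULONG_PTR) arg;   /* target search time in milliseconds */
    Sleep (ms);
    InterlockedExchange (&time_is_up, 1);
    return 0;
}

void start_search_timer (DWORD ms) {
    time_is_up = 0;
    CreateThread (NULL, 0, timer_thread, (LPVOID) (ULONG_PTR) ms, 0, NULL);
    /* Handle leak ignored for brevity; a real engine would keep and close it. */
}

/* In the search loop:  if (time_is_up) stop searching.  */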
Carey
Posts: 313
Joined: Wed Mar 08, 2006 8:18 pm

Re: gettimeofday() for Windows

Post by Carey »

Dann Corbit wrote:
Carey wrote:
Dann Corbit wrote: The gettimeofday() function is useful because literally dozens of Unix-born Xboard engines use that call. So I made a port of it available to simplify porting.

If you want to do timing on Windows, there is a high resolution timer (and if you find it is not available you can use an alternative).
You might want to check to make sure it doesn't put too much overhead into your search.

Some months back, I retried several timers and the Windows specific 'high' res timers tended to have a much higher overhead than the more generic ones.

Calling it more than a few hundred times per second produced a noticeable slowdown in the search.
Most engines only call the timer once every few thousand nodes.
Typically, something like this:

if ((nodes % 26801) == 0) check_time();

Some people like to just stick the timer check directly into their search and let it check it every pass. I don't. I prefer to set up a self adjusting node counter that tries to do it no more than a hundred times per second. Any more than that and the Windows specific timers started slowing down the search.
How do you self-adjust?
I've done it several different ways over the years, but it usually works out to something like:

Code: Select all

Search.c

In the search itself:

if (NodeCount > TimerNodeCount)
  {
  ElapsedTime = CheckSearchTimer();
  ...etc...
  }


And elsewhere:

double CheckSearchTimer(void)
{
  double CurTime;

  NodeCount = 0;
  CurTime = ReadProgTimer();

  if ((CurTime - PrevSearchTime) < 0.01)
    TimerNodeCount += 1 + TimerNodeCount / 2;
  else if ((CurTime - PrevSearchTime) >= 0.1)
    TimerNodeCount -= TimerNodeCount / 5;

  PrevSearchTime = CurTime;

  return CurTime - StartSearchTime;
}
It varies a bit depending on how I want to do it, but usually along those lines. In this particular code snippet I was using a separate routine and using a 'double' to hold the timer (done as seconds, with fractional seconds), and it actually returned the elapsed time so the search could call another routine to see if it needed to adjust the specified search time.

I never liked hard-coding those kinds of things. Years and years ago I was writing a chess program with a friend. (This was before the internet, and we actually mailed disks back and forth. I was using an old K&R C compiler on Unix and he was using Turbo C, I think.)

Our systems were very different (his being much faster), so I decided to just let CheckSearchTimer() adjust how many nodes to wait between timer checks.

Just about any sort of adjustment works because it'll pretty quickly get into the 'sweet spot'. And with the > comparison, there's no slow division involved (which was nice because my old system back then didn't have a full-precision division instruction).

(Message edited a little for clarity.)
CThinker
Posts: 388
Joined: Wed Mar 08, 2006 10:08 pm

Re: Move generator

Post by CThinker »

Carey wrote:I don't want to go look through Microsoft's hardware site to find out the specs for the required timers that it uses internally, but I am definitely willing to say that it's not 18.2 ticks per second, considering other timers are definitely available.
Straight from Microsoft support KB article:
http://support.microsoft.com/kb/q172338/

Code: Select all

Function                 Units                      Resolution
---------------------------------------------------------------------------
Now, Time, Timer         seconds                    1 second
GetTickCount             milliseconds               approx. 10 ms
TimeGetTime              milliseconds               approx. 10 ms
QueryPerformanceCounter  QueryPerformanceFrequency  same
Yes, it is definitely not 18.2 ticks per second. It is about 100 ticks per second.