Artificial stupidity - making a program play badly

Discussion of chess software programming and technical issues.

Moderator: Ras

sparky

Re: Artificial stupidity - making a program play badly

Post by sparky »

Although I do not need this feature for Vicki (it plays badly already!), I have the following idea:

Humans play badly because they miss moves (duh!). The deeper a move lies in the game tree, the higher the probability that it will be missed. Therefore:

Code: Select all

alphabeta(int n, int alpha, int beta)
{
  moves = genmoves();
  for each move
  {
    if (rnd(n) == rnd(n)) continue;  /* randomly skip this move */
    ...
  }
}

where the rnd(n) function above generates a random integer in [0, n).
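The skipping idea can be checked numerically. Here is a minimal compilable sketch (the helper names are mine, not Vicki's code). Note that two independent draws from [0, n) agree with probability 1/n, so as posted the test skips only a 1/n fraction of moves, and skips *less* as n grows, which is what prompts the correction later in the thread:

```c
#include <stdlib.h>

/* rnd(n): uniform random integer in [0, n), as described in the post.
   rand() % n is slightly biased, but fine for this purpose. */
int rnd(int n) { return rand() % n; }

/* Fraction of moves the posted test rnd(n) == rnd(n) would skip,
   estimated over `trials` samples. Two independent draws from [0, n)
   agree with probability 1/n. */
double skip_rate(int n, int trials) {
    int skipped = 0;
    for (int i = 0; i < trials; i++)
        if (rnd(n) == rnd(n)) skipped++;
    return (double)skipped / trials;
}
```

For n = 4 the skip rate comes out near 1/4, for n = 10 near 1/10, and so on.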
Tord Romstad
Posts: 1808
Joined: Wed Mar 08, 2006 9:19 pm
Location: Oslo, Norway

Re: Artificial stupidity - making a program play badly

Post by Tord Romstad »

yoshiharu wrote:First of all, with all due respect to your frustration, I must admit this thread is one of the funniest I've ever read :-)
Great!

:)
Then, I am curious about the "game profile" of crippled-Glaurung games versus humans: I understand that there are many middlegame positions in your games where the human has a winning advantage and nonetheless manages to quickly lose the game; but apart from games of that kind, what are the stats when the game is balanced and the middlegame wears out? How do the endings of these games go?
I didn't see any specific pattern. Bad blunders seemed to be as common in the endgame as in the middle game, if not even more so, because of time trouble. Some games were also won on time in lost or clearly inferior positions.

I also think that Glaurung's score would have been much worse if the humans hadn't resigned so easily. Very often they resigned directly after blundering a piece early in the game. If they had played on, Glaurung would probably soon have made a blunder of its own.

Thanks a lot to you and everybody else for the suggestions! I now have a lot of things to try. It's a pity it seems difficult to just find a single parameter I can adjust continuously, though. This would have made everything far easier to tune.

Tord
sparky

Re: Artificial stupidity - making a program play badly

Post by sparky »

Actually, there's a bug in my code above: the if() statement should rather be

Code: Select all

  if (rnd(n*k) != rnd(n*k)) continue;

where n is the search depth, and k some constant.

This is untested, but should work... I'm sure you get the idea.
hgm
Posts: 28342
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Artificial stupidity - making a program play badly

Post by hgm »

Why not simply

if( rnd(n*k) ) continue;

to save one call to rnd()?
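The two tests are in fact equivalent in distribution: with m = n*k, rnd(m) != rnd(m) fires with probability 1 - 1/m (two independent draws differ unless they collide), and `if (rnd(m))` fires with probability (m-1)/m = 1 - 1/m as well, so the single call is a pure saving. A small sketch (helper names are my own) to confirm this empirically:

```c
#include <stdlib.h>

int rnd(int n) { return rand() % n; }   /* uniform in [0, n) */

/* Empirical skip rate of the two-call test: rnd(m) != rnd(m). */
double skip_two_calls(int m, int trials) {
    int s = 0;
    for (int i = 0; i < trials; i++)
        if (rnd(m) != rnd(m)) s++;
    return (double)s / trials;
}

/* Empirical skip rate of the one-call test: if (rnd(m)). */
double skip_one_call(int m, int trials) {
    int s = 0;
    for (int i = 0; i < trials; i++)
        if (rnd(m)) s++;
    return (double)s / trials;
}
```

Both come out near 1 - 1/m for any m, e.g. about 0.875 for m = 8.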
xsadar
Posts: 147
Joined: Wed Jun 06, 2007 10:01 am
Location: United States
Full name: Mike Leany

Re: Artificial stupidity - making a program play badly

Post by xsadar »

Tord Romstad wrote:
yoshiharu wrote:First of all, with all due respect to your frustration, I must admit this thread is one of the funniest I've ever read :-)
Great!

:)
Then, I am curious about the "game profile" of crippled-Glaurung games versus humans: I understand that there are many middlegame positions in your games where the human has a winning advantage and nonetheless manages to quickly lose the game; but apart from games of that kind, what are the stats when the game is balanced and the middlegame wears out? How do the endings of these games go?
I didn't see any specific pattern. Bad blunders seemed to be as common in the endgame as in the middle game, if not even more so, because of time trouble. Some games were also won on time in lost or clearly inferior positions.

I also think that Glaurung's score would have been much worse if the humans hadn't resigned so easily. Very often they resigned directly after blundering a piece early in the game. If they had played on, Glaurung would probably soon have made a blunder of its own.
That seems natural. People aren't used to having computers blunder, and when you blunder against a computer, it's generally quite unforgiving. Have you put anything in the finger notes about it blundering? If not, maybe you should, and put it right at the top to make sure people see it. That might at least remove some of the usual psychological effect of playing a computer.
Thanks a lot to you and everybody else for the suggestions! I now have a lot of things to try. It's a pity it seems difficult to just find a single parameter I can adjust continuously, though. This would have made everything far easier to tune.

Tord
sparky

Re: Artificial stupidity - making a program play badly

Post by sparky »

Alternatively, you can always register it under DrunkGlaurung or something :-)
Bill Rogers
Posts: 3562
Joined: Thu Mar 09, 2006 3:54 am
Location: San Jose, California

Re: Artificial stupidity - making a program play badly

Post by Bill Rogers »

The easiest way to reduce playing strength, in my opinion, is to limit the ply depth, with one ply being the lowest level any program could play.
Bill
Tord Romstad
Posts: 1808
Joined: Wed Mar 08, 2006 9:19 pm
Location: Oslo, Norway

Re: Artificial stupidity - making a program play badly

Post by Tord Romstad »

Bill Rogers wrote:The easiest way to reduce playing strength in my opinion is to limit the ply depth.
That's nowhere near continuous enough. There would be a large jump in strength between a one-ply and a two-ply search. It also wouldn't scale across different time controls (a one-ply search would do much better in blitz than in slow games) or across different game phases (the strength would be higher in the opening than in the endgame).

It would be better to limit the number of nodes, and let the number be proportional to the remaining thinking time.

Tord
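Tord's node-budget idea can be sketched in a few lines. The names and the nodes-per-second constant here are illustrative assumptions, not Glaurung's actual code:

```c
/* Node budget proportional to remaining clock time. Because the limit
   is in nodes rather than plies, strength varies smoothly, scales with
   the time control, and stays consistent across game phases. */
long node_budget(double remaining_seconds, double target_nodes_per_second) {
    long budget = (long)(remaining_seconds * target_nodes_per_second);
    return budget < 1 ? 1 : budget;   /* always allow at least one node */
}

/* In the search itself, something like:
 *   if (++nodes_searched >= budget) abort_search();
 * checked at every node, or every few hundred nodes. */
```

Lowering target_nodes_per_second then becomes the single continuous strength knob.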
hgm
Posts: 28342
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Artificial stupidity - making a program play badly

Post by hgm »

Bill Rogers wrote:The easiest way to reduce playing strength, in my opinion, is to limit the ply depth, with one ply being the lowest level any program could play.
Not at all! You could make the program play at 0 ply, on static move ordering only. If the program used SEE to order the moves, that wouldn't even be so bad. The next step up would be to play on QS only (i.e. static move ordering, but with the captures searched through quiescence search).

NEG 0.3d is a program that plays on static move ordering, but uses BLIND to sort the moves rather than SEE. And it still beats Pos. And Pos, in turn, beats a random mover, if the random generator is of high enough quality.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Artificial stupidity - making a program play badly

Post by Don »

Tord Romstad wrote:
mjlef wrote:I went through the same thing with Zillions and earlier chess programs. What I settled on is limiting search depth (using auto-play to determine a rating for a 1-ply search, 2-ply search, etc.).
That's similar to what Aleks suggested. It should work, but I'd like something more continuous. The difference in strength between a 1 ply search and a 2 ply search is probably huge. Another disadvantage is that the ratings would have to be calibrated again for each new time control. A 1 ply search at blitz will obviously do much better against humans than a 1 ply search at a tournament time control.
For even worse play, randomly toss out moves; don't score them based on how likely you think a human would be to overlook them, just toss x% of moves with a 1-ply search. You can then use autoplay to score that as well. People overlook moves all the time; even strong players miss mate in 1 sometimes.
Glaurung never misses a mate in 1, even at the lowest level. Perhaps that alone is worth a considerable number of Elo points?

Tord
Many years ago my program RexChess had a feature where you could set the ELO rating and it would try to play at that strength level.

The old programs played close to 2000 ELO with about 5-6 ply searches in the middlegame, depending on the particular program. The rating curve is also well understood, so it's simply a matter of setting the level appropriately. One really good way to do it is to set the number of nodes searched, and that's how Rex did it. It's dirt simple, your program already has this feature, and it's smooth, so you can calibrate it easily.

Here is the problem with this, which most people do not understand and which you will need to be aware of if you are not already: scalability with humans is BETTER than scalability with computers. If you double the thinking time, a computer may play 60 or 70 ELO stronger, but the gain is even bigger for humans.

This effect was more easily noticed 10 or 20 years ago because humans lost at speed chess, but did a little better at game in 10 minutes, and better still at game in 30 and so on. At tournament time controls, it was the humans that were superior despite the fact that computers play MUCH stronger at 40 moves in 2 hours than they do at game in 5 minutes.

So you cannot really have a fixed level where you can say the program plays at X strength without taking the time control into account. If you have a level you call 1800, it might easily win at fast chess and lose badly at serious long tournament games.

So in my opinion, what you need is something like this:

1. Determine a formula for converting some number of nodes searched to a rating, based on the assumption that each doubling of the node count gives you N ELO points.

2. Make an adjustment based on the time control by assuming the human follows a different formula (each doubling is worth N+20 ELO, or something like that). I don't know the actual number, but each doubling is worth MORE to a human.

3. 100 nodes should give you a pretty weak search, but please note that at speed chess, 100 nodes will play MUCH better (relative to the human) than it would at long time control games.

In Rex, we just used a fixed formula so that 1800 ELO always played the same on any computer (it would just take longer on a slower computer). The ELO was supposed to correspond to how strong it would play IF THIS WERE A NORMAL TOURNAMENT TIME CONTROL GAME. But if the computer can hold its own at speed chess with a 1500 ELO player, and you then give the human 8X more time to think without changing the computer's time control (and assuming you also don't take advantage of pondering), then the human is going to play 2 classes better!

This is often met with disbelief, probably due to human perception. Most humans realize they play better when given 2x or 4x more time on the clock, but they don't notice how much better they are playing, because their opponent is playing better too. But I think the evidence is clear: as you progressively increase the time control, humans progressively do better, so you cannot deny that extra time helps humans even more than it helps computers.

I think with a little work you can come up with a formula that returns the number of nodes you need to search to get roughly some ELO rating (against humans) at some TIME CONTROL - so time control needs to be one of the variables.

The nice thing about nodes searched, as you yourself pointed out, is that the program needs to play better in the endgame, and it will. For such a thing I would probably set a very small hash table size and perhaps cripple the quiescence search a bit, but it's probably not necessary, since you can get very weak play with a 50-node search.

Before I started playing tournament chess, I would play matches with Sargon II on my TRS-80, and it usually would do a 3- and sometimes 4-ply search in tournament games. I played many "serious" match games where I concentrated hard (and forbade myself takebacks, etc.). I didn't use a clock, but I'm guessing that I took about 2 hours or more per game; I recorded each one manually and really simulated tournament conditions. I played Sargon very close to even and had many close 10-game matches, winning some and losing some. I seriously doubt I was better than 1300 USCF ELO at the time, but I can only guess. I can tell you for sure that if I had been constrained to 10- or 15-minute games where Sargon played at the exact same level, I would have been crushed, perhaps rarely winning a game. I think that explains what you describe.