Quiescence Search doesn't improve strength


Ras
Posts: 2487
Joined: Tue Aug 30, 2016 8:19 pm
Full name: Rasmus Althoff

Re: Quiescence Search doesn't improve strength

Post by Ras »

mvanthoor wrote: Fri Feb 26, 2021 12:56 pm Don't sort the move list. You're doing unnecessary work, because some of the moves will never be tried due to cutoffs.
I'm doing a hybrid: swap the "best" move as per (dynamic) MVV-LVA to the top, try that, and only if that doesn't cut, use a Shellsort to sort the remaining move list.
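
Roughly, in C (a simplified sketch of that hybrid; the parallel score/move arrays and function names are illustrative, not the engine's actual code):

Code: Select all

/* Swap the highest-scored move (per MVV-LVA) to the front; only if
   that move fails to produce a cutoff, Shellsort the rest. */
static void bring_best_to_front(int score[], int move[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (score[i] > score[best])
            best = i;
    int ts = score[0]; score[0] = score[best]; score[best] = ts;
    int tm = move[0];  move[0]  = move[best];  move[best]  = tm;
}

/* Shellsort moves[1..n-1] by descending score, skipping index 0,
   which was already tried. */
static void sort_remaining_moves(int score[], int move[], int n)
{
    for (int gap = (n - 1) / 2; gap > 0; gap /= 2)
        for (int i = 1 + gap; i < n; i++) {
            int s = score[i], m = move[i], j = i;
            while (j - gap >= 1 && score[j - gap] < s) {
                score[j] = score[j - gap];
                move[j]  = move[j - gap];
                j -= gap;
            }
            score[j] = s;
            move[j] = m;
        }
}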

For the first few QS plies, I don't allow standing pat when a capture delivers check, and I generate check evasions (including quiet ones). That ends one ply before the recapture-only mode kicks in, i.e. the depth where capturing is only allowed on the square of the opponent's last capture.
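
A sketch of how those QS stages could fit together (heavily simplified; Position, MoveList and all helpers such as in_check(), generate_evasions() and last_capture_square() are assumed engine primitives, and the exact staging in the real engine differs):

Code: Select all

#define EVASION_PLIES 2   /* "first few QS plies"; illustrative value */

int qsearch(Position *pos, int alpha, int beta, int qs_ply)
{
    MoveList list;
    int in_chk = in_check(pos);

    if (!in_chk) {
        int stand_pat = evaluate(pos);
        if (stand_pat >= beta) return beta;      /* standing pat allowed */
        if (stand_pat > alpha) alpha = stand_pat;
    }
    /* No standing pat while in check: the check must be answered. */

    if (in_chk && qs_ply < EVASION_PLIES)
        generate_evasions(pos, &list);           /* quiet evasions too */
    else if (qs_ply >= EVASION_PLIES)
        generate_captures_to(pos, &list,
                             last_capture_square(pos)); /* recapture only */
    else
        generate_captures(pos, &list);

    for (int i = 0; i < list.count; i++) {
        if (!make(pos, list.move[i]))            /* illegal: already undone */
            continue;
        int score = -qsearch(pos, -beta, -alpha, qs_ply + 1);
        unmake(pos, list.move[i]);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}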
Rasmus Althoff
https://www.ct800.net
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Quiescence Search doesn't improve strength

Post by hgm »

mvanthoor wrote: Fri Feb 26, 2021 5:04 pm
lithander wrote: Fri Feb 26, 2021 4:22 pm What HGM means is that if you don't consider checks properly in your QS, it might search illegal moves. That doesn't actually cause you to play an illegal move unless your normal search depth were zero. Which of course it never is!
If that is indeed what he means: yes, you could consider an illegal capture, but it will obviously never be played, because make() will immediately call unmake() and return false. To do this, make() uses square_attacked() to see if the king is left in check.
That was indeed what I wanted to point out, in response to your question whether I thought you had been lucky. You seemed to relate the playing of illegal moves in games to something that happens in QS. But even if you changed your qsearch() such that it only searched illegal moves, it could never cause an illegal move to be played at the root, because the root node isn't processed by qsearch(). Of course you would play plenty of very nonsensical moves if qsearch() returned garbage scores.
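
For readers following along, the make()-filters-illegal-moves pattern described above looks roughly like this in C (do_move(), king_square() and the Position fields are assumed primitives, not the actual Rust engine's API):

Code: Select all

#include <stdbool.h>

/* Pseudo-legal make: play the move, then test whether the mover's own
   king can now be captured. If so, the move was illegal; take it back. */
bool make(Position *pos, Move m)
{
    do_move(pos, m);                               /* flips side_to_move */
    int own_king = king_square(pos, pos->side_to_move ^ 1);
    if (square_attacked(pos, own_king, pos->side_to_move)) {
        unmake(pos, m);                            /* roll back at once */
        return false;                              /* caller skips the move */
    }
    return true;
}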
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Quiescence Search doesn't improve strength

Post by mvanthoor »

hgm wrote: Fri Feb 26, 2021 5:27 pm […]
Right. I indeed didn't think that qsearch() would make the engine play illegal moves.

You -thought- I was relating the playing of illegal moves to QSearch.
It -seemed- (to me) that you thought I was lucky in my QSearch. (I can imagine being lucky once in a while, but being lucky billions of times in a row didn't seem very logical to me.)
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Quiescence Search doesn't improve strength

Post by hgm »

mvanthoor wrote: Fri Feb 26, 2021 2:01 pm I use a separate QSearch function, and my engine has never lost a game due to an illegal move. I don't do anything with regard to checks in QSearch. Are you saying that I have been _lucky_ for thousands of test games?

I'll have to set up a position specifically for this.
So what exactly did you observe for which luck could be an explanation? What does "losing a game due to an illegal move" mean to you, if it does not mean you actually played an illegal move and forfeited the game because of it? You can never lose because of a move you do not play; the rules do not allow for that. So you don't have to be lucky in any way for that never to happen.
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Quiescence Search doesn't improve strength

Post by mvanthoor »

hgm wrote: Fri Feb 26, 2021 7:12 pm […]
Your question made me think that you were of the opinion that I was lucky that my QSearch works, because I don't have a test for being in check in place in the QSearch.
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Quiescence Search doesn't improve strength

Post by hgm »

Ah, I see. But the indicated problem was not that you would play illegal moves in QS, but that you would not search the non-capture evasions, because the check extension that would allow you to search non-captures was not awarded in QS. This could for instance lead to scoring the position as 'checkmated', while in fact the King could simply step out of check. I don't see how it could ever lead to illegal moves being searched.

It also is not clear to me what 'QSearch works' means here. That it never crashed your engine? (but could illegal moves do that at all?) That it always produces the correct score? (But how would you know that?)
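
The guard this implies, sketched in C (a fragment of a qsearch in-check branch, with all helper names assumed): only after every evasion, quiet ones included, has been generated and tried may a 'checkmated' score be returned.

Code: Select all

/* In-check branch of a qsearch (fragment; helpers assumed). Generating
   only captures here is exactly what produces the false mate scores:
   the quiet king moves out of check are never seen. */
if (in_check(pos)) {
    int legal = 0;
    generate_evasions(pos, &list);        /* must include quiet evasions */
    for (int i = 0; i < list.count; i++) {
        if (!make(pos, list.move[i])) continue;
        legal++;
        int score = -qsearch(pos, -beta, -alpha, ply + 1);
        unmake(pos, list.move[i]);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    if (legal == 0)
        return -MATE_SCORE + ply;         /* only now is 'mated' correct */
    return alpha;
}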
lithander
Posts: 881
Joined: Sun Dec 27, 2020 2:40 am
Location: Bremen, Germany
Full name: Thomas Jahn

Re: Quiescence Search doesn't improve strength

Post by lithander »

Mike Sherwin wrote: Fri Feb 26, 2021 4:36 pm Alpha-beta search works best when it has more accurate scores to back up and prune against. And the better the evaluation function is, the better the pruning will be and the deeper the search will search in a given time.
I have now implemented support for PSTs and am happy to see the same synergy that Marcel reported: PSTs together with QSearch create stronger play than the sum of the parts would suggest. The strength is pretty much where I'd expect it at this point, as opposed to when I used QSearch with only a very simple evaluation.

But when I look at the runtime performance, I measure the opposite of what you predicted: when I search a set of test positions to a fixed depth, the PST version takes twice as long as the version that counts only material. To make the test really fair, both use the exact same code; for the material-counting version, the PSTs are just filled with zeros. So the speed difference must come from pruning. When using the PSTs, my metrics show that I generate and play more moves to reach the same depth. So the simple eval has the better pruning! :roll:

With PSTs (from Sunfish)

Code: Select all

Performance: 1135kN/sec
A total of 44,377,312 moves were generated. 4,010,561 were played. 9%
  Operation took 50.8739s
Counting Material

Code: Select all

Performance: 1152kN/sec
A total of 12,334,260 moves were generated. 1,352,387 were played. 10%
  Operation took 25.8926s
If you're right, I need to go looking for a problem. But to me the result seems logical:
Take the opening position for example: without PSTs, almost all lines will receive the same evaluation after quiescence: 0.

This leads to bad play, but it should mean a lot of early cutoffs where alpha == beta == 0. It just doesn't matter what move sequence you play as long as you don't lose a piece. And as long as the scores of positions are that homogeneous, it should prune better than if they are all different because they have the PST offsets applied, right?

Why should accurate predictions help? Maybe the effect you predicted, and I didn't observe, requires some other advanced technique, like the history heuristic, that I don't have yet? MVV-LVA doesn't consider evaluations, so I don't see where the alpha-beta search would benefit from accurate but diverse evaluations.
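
For context, a C sketch of the kind of material-plus-PST evaluation being compared here (names and values are illustrative, not MinimalChess's actual C# code): zeroing pst[][] reduces it to the material-only version, so both test runs really do execute the same code path.

Code: Select all

enum { EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };
static const int material[7] = { 0, 100, 300, 300, 500, 900, 0 };
static int pst[7][64];    /* e.g. filled from Sunfish's tables, or all 0 */

int evaluate(const Position *pos)
{
    int score = 0;
    for (int sq = 0; sq < 64; sq++) {
        int pc = pos->piece[sq];          /* assumed: piece type or EMPTY */
        if (pc == EMPTY)
            continue;
        if (pos->color[sq] == WHITE)
            score += material[pc] + pst[pc][sq];
        else
            score -= material[pc] + pst[pc][sq ^ 56];  /* mirrored rank */
    }
    return pos->side_to_move == WHITE ? score : -score; /* STM viewpoint */
}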
Minimal Chess (simple, open source, C#) - Youtube & Github
Leorik (competitive, in active development, C#) - Github & Lichess
Mike Sherwin
Posts: 860
Joined: Fri Aug 21, 2020 1:25 am
Location: Planet Earth, Sol system
Full name: Michael J Sherwin

Re: Quiescence Search doesn't improve strength

Post by Mike Sherwin »

lithander wrote: Mon Mar 01, 2021 2:58 am […]
Let's think about the example of the knight and the bishop. Tord had this problem with Glaurung. He could not understand why his search ran so slowly when both bishops and knights were in the position, and so much faster when only bishops or only knights were in the position. The problem was that knights and bishops were so close in evaluation that the search could not make up its "mind" whether it wanted to move the knight or the bishop, and was constantly thrashing the PV, switching between them at all depths of the search.

So alpha-beta should have an easy time with material only scores. But what happens when each new ply depth is searched and the material evaluation changes? There is nothing to guide the search to make better positional moves that have more stable material consequences. When I wrote RomiChess the eval was done before Qsearch was written. It ran slow. The node rate was great but it did not search very deep at all. As soon as I added Qsearch it was the deepest single thread searcher that I knew of at the time of its release. Even today it is not bad.

FEN: rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1

Code: Select all

RomiChess64P3n:
1 00:00 40 40 +0.44 c2c4
2 00:00 224 224 +0.36 e2e4 g8f6
3 00:00 377 377 +0.66 e2e4 g8f6
4 00:00 1k 1k +0.29 e2e4 g8f6 b1c3 e7e5
5 00:00 3k 3k +0.79 e2e4 g8f6
6 00:00 8k 8k +0.29 e2e4
7 00:00 15k 15k +0.50 e2e4 d7d5 b1c3 g8f6 e4d5 c8g4 f1b5
8 00:00 61k 6,062k +0.15 e2e4 g8f6 b1c3 d7d5 e4d5 c8g4 f1e2 g4e2
9 00:00 108k 5,396k +0.15 e2e4 g8f6 b1c3 d7d5 e4d5 c8g4 f1e2 g4e2
10 00:00 222k 7,393k +0.21 e2e4 d7d5 e4d5 g8f6 b1c3 c8g4 f1e2 g4e2 g1e2 b8d7
11 00:00 392k 6,529k +0.42 e2e4 d7d5 e4d5 g8f6 b1c3 f6d5 f1c4 d5c3 d2c3 d8d1
12 00:00 806k 6,718k +0.30 e2e4 d7d5 e4d5 g8f6 b1c3 f6d5 f1c4 d5c3 b2c3 b8d7 d1h5 g7g6
13 00:00 1,784k 7,134k +0.48 e2e4 d7d5 e4d5 g8f6 b1c3 f6d5 f1c4 d5c3 b2c3 b8d7 d1h5 g7g6 h5d5
14 00:00 2,642k 7,142k +0.50 e2e4 d7d5 e4d5 g8f6 d2d4 c8g4 f2f3 g4f5 f1b5 b8d7 b1c3 g7g6 c1f4 f8g7 g1e2
15 00:00 4,625k 7,227k +0.47 e2e4 d7d5 e4d5 g8f6 d2d4 c8g4 f2f3 g4f5 f1b5 b8d7 c2c4 g7g6 c1f4 f8g7 d1b3 b7b6
16 00:01 8,337k 7,313k +0.42 e2e4 d7d5 e4d5 g8f6 d2d4 f6d5 c2c4 d5b6 c1e3 e7e5 b1c3 c8e6 g1f3 e5d4 f3d4 e6c4
17 00:01 12,559k 7,344k +0.48 e2e4 d7d5 e4d5 g8f6 d2d4 f6d5 c2c4 d5f6 c1f4 c7c5 g1f3 d8b6 b2b3 c8d7 d4c5 b6c5 b1d2
18 00:03 25,110k 7,407k +0.41 e2e4 d7d5 e4d5 g8f6 d2d4 f6d5 c2c4 d5f6 c1e3 f6g4 e3g5 c7c5 f1e2 d8d4 d1d4
19 00:07 52,982k 7,452k +0.42 e2e4 d7d5 e4d5 g8f6 d2d4 f6d5 c2c4 d5f6 b1c3 e7e5 g1f3 e5d4 d1d4 d8d4 f3d4 c7c5 d4b5 c8g4 c1f4
20 00:33 253,957k 7,531k +0.32 e2e4 e7e6 d2d4 d7d5 b1c3 f8b4 d1g4 g8f6 g4g7 h8g8 g7h6 g8g6 h6h4 g6g4 h4h6 g4g6 h6h4 g6g4 h4h6 g4g6

The difference between SF 11 and SF 12 was that the node rate was almost cut in half for SF 12 NNUE, and yet it searched more deeply.
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Quiescence Search doesn't improve strength

Post by Sven »

lithander wrote: Mon Mar 01, 2021 2:58 am […]
Does not feel right to me ... but it is hard to say why it does not work for you without seeing your code. In general a PST should improve the search when applied correctly. Is it a problem of move ordering that might become worse with a more precise evaluation?
Sven Schüle (engine author: Jumbo, KnockOut, Surprise)
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Quiescence Search doesn't improve strength

Post by hgm »

lithander wrote: Mon Mar 01, 2021 2:58 am This leads to bad play, but it should mean a lot of early cutoffs where alpha == beta == 0. It just doesn't matter what move sequence you play as long as you don't lose a piece. And as long as the scores of positions are that homogeneous, it should prune better than if they are all different because they have the PST offsets applied, right?
This is exactly right. Most moves will not give away material. (Especially when your opponent-in-search is also stupid, and has no inclination to develop his pieces.) And then any random move you try will immediately be a cut-move. With a more discriminating evaluation you often have to try several moves before you can beat beta.

With more advanced move-ordering techniques, such as killers and history, you can reduce the time spent on searching inadequate moves in cut-nodes. The success of these techniques depends on there being some consistency in the quality of the moves, though, which you probably wouldn't have when most moves get a 0 score and it comes as a complete surprise whether a move will lead to disaster or gain.
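
A sketch of the bookkeeping behind those two techniques (the array sizes, the depth*depth bonus and an integer Move encoding are common choices, not any specific engine's code): on a beta cutoff by a quiet move, remember it as a killer for that ply and bump its history counter, so that move ordering can try such moves early.

Code: Select all

/* Killer/history bookkeeping, called when a move fails high.
   Move (an integer encoding), from_sq(), to_sq() and is_capture()
   are assumed primitives. */
static Move killer[MAX_PLY][2];
static int  history[64][64];                /* indexed [from][to] */

void on_beta_cutoff(Move m, int ply, int depth)
{
    if (is_capture(m))
        return;                             /* captures: MVV-LVA handles them */
    if (killer[ply][0] != m) {              /* keep two distinct killers */
        killer[ply][1] = killer[ply][0];
        killer[ply][0] = m;
    }
    history[from_sq(m)][to_sq(m)] += depth * depth;  /* deeper = stronger */
}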