lara wrote:
No problem
Thanks a lot for your help and links!
I'll have a look at it, perhaps there will be some more questions

The basic ideas of the search didn't change much:
- PVS, starting with the window (-infinite, +infinite)
- Double nullmove with R=3
- Hashtable is very important. Currently doing 8 sequential probes
- Efficient rules to know what to try in qsearch
- What they nowadays call LMR-type reductions is popular now, yet in the 90s everyone was already busy with stuff like that; I'd call my stuff Fischer Pruning. He can comment on it himself
- Extensions are important to get right; especially, don't do too much
- There are mixed experiences with forward pruning the last few plies, as nullmove is already really effective there
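For the hashtable probing, here is a minimal C sketch of an 8-way sequential probe. The entry layout and names are my own assumptions for illustration, not Diep's actual format:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical entry layout -- Diep's real format is not public. */
typedef struct {
    uint64_t key;   /* Zobrist hash of the position */
    int32_t  score;
    int8_t   depth;
    int8_t   flags;
} HashEntry;

#define NUM_PROBES 8

/* Probe up to 8 consecutive entries starting at the hashed slot,
   wrapping around the table. Returns the matching entry, or NULL
   on a miss (the caller then falls through to a normal search). */
HashEntry *probe_hash(HashEntry *table, size_t size, uint64_t key)
{
    size_t base = (size_t)(key % size);
    for (int i = 0; i < NUM_PROBES; i++) {
        HashEntry *e = &table[(base + i) % size];
        if (e->key == key)
            return e;
    }
    return NULL;
}
```

Scanning a handful of adjacent entries like this is cache-friendly: the probes land in one or two cache lines, which matters far more than the extra comparisons.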
There are a zillion other algorithms and every scientist claims they work, yet what I describe above is really in Diep. No lies there, unlike some others.
Vincent
Vincent, what is double null move?
- Don
Don't tell me you're seeing this for the first time, Don.

No, I know you have advocated it for many years as being zugzwang-proof.
But I do not know what your exact definition is for this. I have heard many variations including:
1. Do not allow more than 2 null moves in any line.
2. Do not allow 3 consecutive null moves, but otherwise always do null move.
Some programs only do null move if the static score >= beta, and some do not allow even 2 consecutive null moves. If you use the beta test, then you cannot do 2 consecutive null moves, let alone 3!
I just wanted to hear your definition. I assume you mean always do null move if it's legal, where legal means you are not in check and the last 2 moves were not null moves. Is that what you mean?
Let's go point by point.
Of course when in check you can't nullmove, as you'd lose your king.
Several programs indeed seem to do nullmove only when the static score is above beta. If I try that in Diep, I lose 1 ply of search depth, so I'm not sure why some engines do this. Note that most of those engines seem to have changed this since. The authors are not very talkative on this subject, as we know. Trying to draw statements out of SMK is kind of tough, even after you offer him a German beer of 0.5 liter (me having the advantage of never drinking).
Limiting nullmove in any way really hurts search depth badly, so one should always keep limitations on nullmove to a minimum. For example, if you want to detect zugzwang and simply don't nullmove when the side to move has 2 or fewer pieces on the board, then you'll lose a lot of plies, sometimes even in the middlegame, as with today's search depths you're hitting far endgames all the time.
So from a low-level optimization viewpoint, the best is to always nullmove. That, however, misses obvious zugzwangs. Let's now study when and where zugzwangs occur most.
Pawn endgames are all about zugzwangs, so any form of reducing the search by R=3+1 there is really risky. There are whole loads of zugzwangs in pawn endgames.
So in pawn endgames I'm NOT doing nullmove at all.
A pawn endgame is a pawn endgame ONLY when both sides have no pieces, just 0 or more pawns.
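Under that definition, the pawn-endgame check is a few piece-count comparisons. The struct and names below are illustrative assumptions, not Diep's code:

```c
#include <stdbool.h>

/* Hypothetical per-side piece counters (kings and pawns excluded). */
typedef struct {
    int knights, bishops, rooks, queens;
} PieceCount;

/* A pure pawn endgame: neither side has any piece other than king
   and pawns. Only then is nullmove switched off entirely. */
bool is_pawn_endgame(PieceCount white, PieceCount black)
{
    return white.knights + white.bishops + white.rooks + white.queens == 0
        && black.knights + black.bishops + black.rooks + black.queens == 0;
}
```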
Now, in order to detect zugzwang, I'm forbidding the 3rd nullmove if and only if the previous 2 moves were both nullmoves.
e4 null null null <== forbidden
e4 null null e5 null null Nf3 <== legal
e4 null d4 null Nf3 null null Nf6 e5 <== legal
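The legality rule these examples illustrate can be sketched in C as follows; the move encoding (0 marking a nullmove in the line array) is a hypothetical convention for the sketch, not Diep's:

```c
#include <stdbool.h>

#define NULL_MOVE 0  /* assumed encoding: 0 marks a nullmove */

/* Double-nullmove rule: a nullmove is forbidden only when the
   previous two moves in the current line were both nullmoves.
   line[] holds the moves played so far; ply is its length. */
bool nullmove_allowed(const int *line, int ply)
{
    if (ply >= 2 &&
        line[ply - 1] == NULL_MOVE &&
        line[ply - 2] == NULL_MOVE)
        return false;
    return true;
}
```

Note there is no global nullmove counter and no per-node state beyond the move path itself, which is why the rule costs almost no extra code.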
As Christophe Theron noticed years ago, it is a kind of implicit way of doing a reduced search. Yet it hardly requires extra code and it is very safe.
Also, it 'by chance' sometimes serves as an IID.
It DOES eat extra nodes compared to always nullmoving and just forbidding nullmove in pawn endgames. Yet with today's search depths, also the ones of a few years ago, double nullmove lets you detect a zugzwang in rook endgames easily, even at blitz.
Usually there is sometimes 1 zugzwang there that needs to get found, nearly never 2 in a single search line. In queen endgames there is sometimes 1 zugzwang, very very seldom, but you would hate to miss it, while of course you still want to always nullmove.
Doing a special search to detect zugzwangs is over the top IMHO if you can detect them in this implicit way.
And yes, there are test sets where double nullmove loses extra plies compared to simple tricks that limit nullmove here and there. Yet I found in games that this double nullmove works great.
I also know some who use double nullmove only when there are a few pieces left on the board, so they use no limitation on nullmove with a lot of pieces.
That's an alternative to it.
The most important reason to invent double nullmove was not all this, by the way. It was simply to turn a search using nullmove into a correct form of search. Correctness of search simply is important and might get more important in the future, even though of course for the last X years the dominating thing seems to be correctness of the evaluation function (limiting your number of bugs per second, to quote Johan de Koning).
Vincent
p.s. Double nullmove has been recognized by Frans Morsch. Not sure whether I should say that publicly though.
