Dylan Sharp Vs. Harvey Williamson (G4)

Discussion of computer chess matches and engine tournaments.

Moderators: hgm, Rebel, chrisw

Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by Zenmastur »

zullil wrote: Wed Jan 15, 2020 1:01 pm
After a day-long search with MultiPV=2, Stockfish suggests that 16. Re1 is sub-optimal, and that White is already in considerable distress:

-1.82 16. Rfc1 Rfe8 17. Qd3 Bxf3 18. Qxf3 Nh4 19. Qh5 Nxg2 20. Kxg2 Qe6 21. Qf3 Qxe2 22. Qxe2 Rxe2 23. Rd1 Rd7 24. Kf1 Re6 25. Re1 Rxe1+ 26. Kxe1 f6 27. gxf6 gxf6 28. Ke2 Kf7 29. Rh5 Kg6 30. Rh6+ Kf5 31. Rh4 Re7+ 32. Kd1 b6 33. b3 Rf7 34. Ke2 Rd7 35. Rh5+ Kg4 36. Rh6 Re7+ 37. Kf1 Kf5 38. Rh4 Ke5 39. Rh6 Ke4 40. Ke2 Kf5+ 41. Kf1 Rd7 42. Rh4 Ne5 43. Ke2 Ke6 44. h3 Kf5 45. f3 Ke6 46. a4 Ng6 47. Rh6 Rf7 48. Kd3 Kd5 49. Ke2 Re7+ 50. Kd1 Ke5 51. Rh5+ f5 52. Rh6 Rg7 53. h4 Kd5 54. Rh5 Rf7 55. a5 bxa5 56. Bxa5 Ke5 57. Bd2 Rb7 58. Kc2 Nf4 59. Rh6 d3+ 60. Kb2 Nd5 (depth 68, 23:49:39)

-1.98 16. Qd3 Bxf3 17. Qxf3 Nh4 18. Qe4 Nxg2 19. Kxg2 Rfe8 20. Qf5 Rxe2 21. Qxd7 Rxd7 22. Rd1 Re6 23. Bf4 f6 24. gxf6 Rxf6 25. Bg3 Kf7 26. b4 Kg6 27. Rd3 Rf5 28. Rc1 Kf6 29. h3 a6 30. Rcd1 Ke6 31. Rc1 Kf7 32. Rc2 Kf6 33. Re2 Re7 34. Red2 Ke6 35. f3 Rd5 36. Bf2 Kf5 37. Bg3 Red7 38. Kf2 Kf6 39. Re2 Re7 40. Rc2 Kf5 41. Rcd2 Red7 42. Kf1 Kf6 43. Re2 Re7 44. Rc2 Kf5 45. Rc1 Red7 46. Rcd1 Kf6 47. Kg2 Ke6 48. Bf2 Kf5 49. Kg3 Kf6 50. Kg2 Ke6 51. Kg3 Kf5 52. Kg2 (depth 68, 23:49:39)

Terminating this search now.
Any idea what the seldepth is on these searches?

Regards,

Zenmastur
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
zullil
Posts: 6442
Joined: Tue Jan 09, 2007 12:31 am
Location: PA USA
Full name: Louis Zulli

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by zullil »

Zenmastur wrote: Wed Jan 15, 2020 3:38 pm
zullil wrote: Wed Jan 15, 2020 1:01 pm
After a day-long search with MultiPV=2, Stockfish suggests that 16. Re1 is sub-optimal, and that White is already in considerable distress:

-1.82 16. Rfc1 Rfe8 17. Qd3 Bxf3 18. Qxf3 Nh4 19. Qh5 Nxg2 20. Kxg2 Qe6 21. Qf3 Qxe2 22. Qxe2 Rxe2 23. Rd1 Rd7 24. Kf1 Re6 25. Re1 Rxe1+ 26. Kxe1 f6 27. gxf6 gxf6 28. Ke2 Kf7 29. Rh5 Kg6 30. Rh6+ Kf5 31. Rh4 Re7+ 32. Kd1 b6 33. b3 Rf7 34. Ke2 Rd7 35. Rh5+ Kg4 36. Rh6 Re7+ 37. Kf1 Kf5 38. Rh4 Ke5 39. Rh6 Ke4 40. Ke2 Kf5+ 41. Kf1 Rd7 42. Rh4 Ne5 43. Ke2 Ke6 44. h3 Kf5 45. f3 Ke6 46. a4 Ng6 47. Rh6 Rf7 48. Kd3 Kd5 49. Ke2 Re7+ 50. Kd1 Ke5 51. Rh5+ f5 52. Rh6 Rg7 53. h4 Kd5 54. Rh5 Rf7 55. a5 bxa5 56. Bxa5 Ke5 57. Bd2 Rb7 58. Kc2 Nf4 59. Rh6 d3+ 60. Kb2 Nd5 (depth 68, 23:49:39)

-1.98 16. Qd3 Bxf3 17. Qxf3 Nh4 18. Qe4 Nxg2 19. Kxg2 Rfe8 20. Qf5 Rxe2 21. Qxd7 Rxd7 22. Rd1 Re6 23. Bf4 f6 24. gxf6 Rxf6 25. Bg3 Kf7 26. b4 Kg6 27. Rd3 Rf5 28. Rc1 Kf6 29. h3 a6 30. Rcd1 Ke6 31. Rc1 Kf7 32. Rc2 Kf6 33. Re2 Re7 34. Red2 Ke6 35. f3 Rd5 36. Bf2 Kf5 37. Bg3 Red7 38. Kf2 Kf6 39. Re2 Re7 40. Rc2 Kf5 41. Rcd2 Red7 42. Kf1 Kf6 43. Re2 Re7 44. Rc2 Kf5 45. Rc1 Red7 46. Rcd1 Kf6 47. Kg2 Ke6 48. Bf2 Kf5 49. Kg3 Kf6 50. Kg2 Ke6 51. Kg3 Kf5 52. Kg2 (depth 68, 23:49:39)

Terminating this search now.
Any idea what the seldepth is on these searches?

Regards,

Zenmastur
Unfortunately, this particular GUI doesn't show that info, and I'm not logging raw UCI output from Stockfish. Probably > 100, I suppose.
User avatar
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by Ovyron »

Zenmastur wrote: Wed Jan 15, 2020 8:34 am I'm not sure this explanation is clear enough to understand, but I don't know any other way to explain it. The point is, when you've found something deep in the tree that changes the evaluation of the position and you are backing up through a line of play the last thing you want is for all this information to be overwritten before you get back to the root position.
Yeah, so you're not familiar with the Learning engines I've mentioned. With those, you can reach any depth, at say, move 20, and then the engine will save the relevant info to disk. Then all you need to do is visit previous nodes and make the engine reach this depth. If it finds an improvement, it'll switch and suggest a different move for either side. If not, you can go back to the root node (the position after 1.g4, in this case), and the engine will show the PV that you had at move 20 and all the moves leading to move 20 in the PV, and will reach this depth within seconds (and if you unload the engine, and reload it, it'll still show this learned PV and score and depth within seconds.)

Nothing is overwritten, so I never need to give the engine more than 128 MB RAM for the TT; it'll just remember the refutations that I've shown it. After you know Black's winning plan you just feed it to the engine and backtrack, and it'll switch moves much earlier than if you let it analyze to depth 60. For this game I never needed to analyze any node for more than 10 minutes, and for >95% of the moves of the game, at the point one needed to be played, a freshly loaded engine would immediately find the best move to play (or the moves that Harvey and I played in the game) and show the PV of the best moves already found. No need for a huge TT or big depth at any point (well, except that I lost). For instance, the engine would immediately show a line that kills 16. Rfc1 and 16. Qd3 to a score below 16. Re1, without needing to wait for depth 68 or whatever, because I've already fed it the lines up to a Black win. If anyone wants to try to defend the game against me after a different move 16 they're welcome; I hold that Re1 was best.
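The learning scheme described above — persisting the deepest known score/depth/best move per position to disk, so a refutation found once survives restarts and transpositions instead of being flushed out of a TT — can be sketched roughly like this. This is a toy illustration only, not any real engine's format; the `LearningStore` class, the JSON file layout, and the position keys are all invented for the example:

```python
import json
import os

class LearningStore:
    """Toy persistent 'learning file': position key -> [depth, score, best move].

    An entry is only overwritten by an equal-or-deeper search result, so a
    refutation found deep in the tree is remembered on every later visit,
    including after the engine is unloaded and reloaded.
    """

    def __init__(self, path="learn.json"):
        self.path = path
        self.table = {}
        if os.path.exists(path):            # reload what was learned earlier
            with open(path) as f:
                self.table = json.load(f)

    def record(self, pos_key, depth, score, best_move):
        old = self.table.get(pos_key)
        if old is None or depth >= old[0]:  # keep only the deepest result
            self.table[pos_key] = [depth, score, best_move]

    def probe(self, pos_key):
        return self.table.get(pos_key)      # None if the position is unknown

    def save(self):
        with open(self.path, "w") as f:
            json.dump(self.table, f)
```

On a revisit, a probe hit lets the engine show the learned PV, score, and depth within seconds rather than re-searching, which is the behavior Ovyron describes.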
Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by Zenmastur »

Ovyron wrote: Wed Jan 15, 2020 9:14 pm
Zenmastur wrote: Wed Jan 15, 2020 8:34 am I'm not sure this explanation is clear enough to understand, but I don't know any other way to explain it. The point is, when you've found something deep in the tree that changes the evaluation of the position and you are backing up through a line of play the last thing you want is for all this information to be overwritten before you get back to the root position.
Yeah, so you're not familiar with the Learning engines I've mentioned. With those, you can reach any depth, at say, move 20, and then the engine will save the relevant info to disk. Then all you need to do is visit previous nodes and make the engine reach this depth. If it finds an improvement, it'll switch and suggest a different move for either side. If not, you can go back to the root node (the position after 1.g4, in this case), and the engine will show the PV that you had at move 20 and all the moves leading to move 20 in the PV, and will reach this depth within seconds (and if you unload the engine, and reload it, it'll still show this learned PV and score and depth within seconds.)
No, I'm not familiar with those engines. I have used a GUI that does basically the same thing. My problem with it isn't the concept, it's the speed of progress. It was slow, no doubt due to its use of the HD, and it sometimes crashed on long analysis sessions. This would corrupt the files it was using and you would have to start over. If I found one that actually worked, didn't crash, and wasn't as slow as molasses, I might give it a try. Until then, I know my method works and is relatively fast. It does, however, require a lot of memory, but it doesn't need any special software. Memory can be bought almost anywhere; those specialty programs can't.

Regards,

Zenmastur
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
jp
Posts: 1470
Joined: Mon Apr 23, 2018 7:54 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by jp »

Zenmastur wrote: Tue Jan 14, 2020 9:05 am The sub-tree that has to be stored in the TT is way too large for a normal-sized TT. So, if it is a draw, you will need a very large TT in order to find it and be somewhat sure that your analysis is accurate. Back when SF could have 1 TB of TT I think it was possible, but IIRC this has been reduced to at most 128 GB of TT. This will make it very difficult, as I'm not sure a drawing sub-tree will fit in a 128 GB TT. Trying to fit it into a MUCH MUCH smaller TT is futile, I think.
But shouldn't this just be a parameter in the SF code (which you can therefore change before recompiling)?

Why did they reduce it?
Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by Zenmastur »

jp wrote: Fri Jan 17, 2020 5:15 am
Zenmastur wrote: Tue Jan 14, 2020 9:05 am The sub-tree that has to be stored in the TT is way too large for a normal-sized TT. So, if it is a draw, you will need a very large TT in order to find it and be somewhat sure that your analysis is accurate. Back when SF could have 1 TB of TT I think it was possible, but IIRC this has been reduced to at most 128 GB of TT. This will make it very difficult, as I'm not sure a drawing sub-tree will fit in a 128 GB TT. Trying to fit it into a MUCH MUCH smaller TT is futile, I think.
But shouldn't this just be a parameter in the SF code (which you can therefore change before recompiling)?

Why did they reduce it?
I think the reason was that they implemented support for non-power-of-2 TT sizes. I'm not sure why this affects the maximum TT size that can be used, but apparently it did.
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
zullil
Posts: 6442
Joined: Tue Jan 09, 2007 12:31 am
Location: PA USA
Full name: Louis Zulli

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by zullil »

Zenmastur wrote: Fri Jan 17, 2020 12:08 pm
jp wrote: Fri Jan 17, 2020 5:15 am
Zenmastur wrote: Tue Jan 14, 2020 9:05 am The sub-tree that has to be stored in the TT is way too large for a normal-sized TT. So, if it is a draw, you will need a very large TT in order to find it and be somewhat sure that your analysis is accurate. Back when SF could have 1 TB of TT I think it was possible, but IIRC this has been reduced to at most 128 GB of TT. This will make it very difficult, as I'm not sure a drawing sub-tree will fit in a 128 GB TT. Trying to fit it into a MUCH MUCH smaller TT is futile, I think.
But shouldn't this just be a parameter in the SF code (which you can therefore change before recompiling)?

Why did they reduce it?
I think the reason was that they implemented support for non-power-of-2 TT sizes. I'm not sure why this affects the maximum TT size that can be used, but apparently it did.
For details, the interested reader can start here: https://github.com/official-stockfish/S ... 981a61ce32
jp
Posts: 1470
Joined: Mon Apr 23, 2018 7:54 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by jp »

zullil wrote: Fri Jan 17, 2020 1:18 pm For details, the interested reader can start here: https://github.com/official-stockfish/S ... 981a61ce32
This looks like the reason:
1) the patch is implemented for a 32-bit hash (so that a 64-bit multiply can be used); this effectively limits the number of clusters that can be used to 2^32, or to 128 GB of transposition table. That's a change in the maximum allowed TT size, which could bother those regularly using 256 GB or more.
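The quoted limit follows from how such a patch maps a 32-bit hash key to a cluster: multiplying the 32-bit key by the cluster count gives a 64-bit product whose high 32 bits are a uniform index in [0, cluster_count), which works for any non-power-of-2 table size — but a 32-bit key can address at most 2^32 clusters. A rough sketch of the idea (the function and variable names are mine, not Stockfish's):

```python
def cluster_index(key32: int, cluster_count: int) -> int:
    """Map a 32-bit hash key to [0, cluster_count) without a modulo.

    (key32 * cluster_count) is at most a 64-bit product; its high 32 bits
    equal floor(key32 / 2^32 * cluster_count), i.e. a scaled-down index.
    """
    assert 0 <= key32 < 2**32 and 0 < cluster_count <= 2**32
    return (key32 * cluster_count) >> 32

# With 32-byte clusters, 2^32 clusters is the ceiling:
MAX_TT_BYTES = 2**32 * 32   # = 137_438_953_472 bytes = 128 GiB
```

Because the index comes from a multiply rather than a mask, the cluster count no longer has to be a power of two, which is the feature the patch was after.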
User avatar
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by Ovyron »

Zenmastur wrote: Thu Jan 16, 2020 2:10 pm I have used a gui that does the does basically the same thing.
Hopefully you're not talking about Aquarium's IDEA, which is an abomination; you're basically wasting 90% of your resources when you use it. The effects of learning can't really be emulated by a GUI, which would rely on Exclude Moves in lines where it thinks "exploration" is necessary, or on having you check the nodes in MultiPV and go back to a previous position when the mainline's score falls below some previous line's score.

The engine needs to see the analysis, and when it sees that this doesn't work and this other thing doesn't work, it'll automatically find the best line and go deeper into it without you having to play the moves. The magic happens because the engine will automatically tell you whether another line is worth considering (as you revisit a previous node it'll switch to it) or not (it'll just repeat the same move with an updated score).

With IDEA, if the opponent has some plan that works against all your variations, it'll take a looong while to play all of them out until the tree is filled and the score is finally useful. With Learning, the engine only needs to see it once; then it'll see other tries transpose and will show the useful score on your second visit! So that's how I was managing to "deem positions as lost" after only visiting a single node...

Sadly, over the years very few users have grasped the concept, and they don't use it, so it has been removed from engines (Rybka 3, Houdini 4, and Shredder 13 all had it removed, etc.), and then the name "Learning" was reused for entirely unrelated features that fudge scores depending on game results, or just bring a TT back as if you hadn't unloaded the engine (but the engine forgets as positions are overwritten).

So one has to rely on private software, if one is lucky...
Zenmastur wrote: Thu Jan 16, 2020 2:10 pm Until then I know my method works and is relatively fast. It does however, require a lot of memory but doesn't need any special software. Memory can be bought almost anywhere, those specialty programs can't.
I get it, though apparently the private programs remain so because of Stockfish's licence. The programmers making them would freely share them, but they don't want their sources to be known; if Stockfish allowed people to create closed derivatives, who knows how many Learning Stockfishes we would have. But that's a different subject entirely.

But, hey, Jeremy Bernstein's open implementation of learning for Stockfish from 2014 is still here:

https://open-chess.org/download/file.ph ... 6bac1adc60

I still don't get why nobody has implemented it for the latest Stockfish and given an up-to-date engine public learning. But if my opponents had access to Learning, all my advantage against them would vanish, so the current situation is actually the best for me (the point is having a learning engine yourself, not that it's public) and I should shut up about it.
Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Dylan Sharp Vs. Harvey Williamson (G4)

Post by Zenmastur »

jp wrote: Sat Jan 18, 2020 7:20 am
zullil wrote: Fri Jan 17, 2020 1:18 pm For details, the interested reader can start here: https://github.com/official-stockfish/S ... 981a61ce32
This looks like the reason:
1) the patch is implemented for a 32-bit hash (so that a 64-bit multiply can be used); this effectively limits the number of clusters that can be used to 2^32, or to 128 GB of transposition table. That's a change in the maximum allowed TT size, which could bother those regularly using 256 GB or more.
2^32 * 64 bytes = 256 GB, not 128 GB, so it must be a signed multiply. So, I guess I get it, but I don't have to like it much, even though it probably saves me a lot of money, since I don't have to buy an Epyc CPU and a lot more RAM! :D :D :D

Edit: Oh, that's right, they use 32-byte clusters. I forgot they break the cache line into two 32-byte clusters. That seems like a waste. Theoretically it would be better to have one six-entry bucket than two three-entry buckets, I think.
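The arithmetic in the post above can be checked directly. In Stockfish of that era a TT entry was (from memory of the code, so treat the sizes as approximate) 10 bytes; three entries plus 2 bytes of padding make a 32-byte cluster, two of which share a 64-byte cache line. At the 2^32-cluster cap, 32-byte clusters give 128 GB, while the 64-byte assumption would indeed give 256 GB:

```python
ENTRY_BYTES = 10                  # key16 + move + value + eval + depth + genBound (approx.)
ENTRIES_PER_CLUSTER = 3
CLUSTER_BYTES = ENTRY_BYTES * ENTRIES_PER_CLUSTER + 2   # 2 bytes padding -> 32
CACHE_LINE_BYTES = 64

clusters_per_line = CACHE_LINE_BYTES // CLUSTER_BYTES   # two clusters per cache line
max_tt_bytes = 2**32 * CLUSTER_BYTES                    # 2^32 clusters * 32 B = 128 GiB
wrong_guess = 2**32 * CACHE_LINE_BYTES                  # 64 B clusters would give 256 GiB
```

So the 128 GB figure needs no signed multiply to explain; it falls straight out of the 32-byte cluster size.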

Regards,

Zenmastur
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.