2900 Elo points progress, 10 million games, 330 nets

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by lucasart »

crem wrote: Sun Nov 25, 2018 11:21 pm 1. So far in our tests we fail to reach A0 strength given the same number of training games (44 million).
2. We don't know why that is. Maybe we have a bug (probably), maybe we use the wrong FPU, maybe we guessed the wrong Cpuct, maybe we understood the paper incorrectly, maybe we don't shuffle training games well enough, maybe we release new networks too rarely, maybe something else.
3. I agree that the best (or rather the only) way to get consistent improvements is to run lots of small tests with different ideas.
4. Currently the way to do such tests is not developed (it's been discussed for 6 months already, but it's constantly being preempted by more urgent tasks [rushing a release for some CCCC/TCEC season, or changing Lc0 so that new features can be added in a more elegant way, or implementing some new Lc0 feature myself because it's more fun]).
5. Without an easy way of testing implemented, setting up and running a fresh test is a cumbersome task. Especially if it requires engine changes; then it currently takes weeks to roll out. The server-side part is not a one-click thing either; it requires some hours of wiring up training scripts, data transfer, typing some SQL, making sure that clients don't keep sending training data from the old test after a restart, etc.
6. Often things are not changed just because the changes needed for a new idea are not implemented yet. Or sometimes it's because all devs are too busy with their non-Lc0 life for a week or two, etc.
7. Yes, the current use of contributors' GPUs is not optimal. But to make it more optimal, things have to be implemented, and the devs just cannot keep up.
8. The current idea (from my perception) is "We'll do testing properly (with many small-scale experiments that anyone can submit, and statistically sound conclusions) when we have a framework. Until that's ready, let's run a full-size test with intuitively guessed params/ideas and hope it will be stronger than everything we had before."

So, yes we fail to reach A0 level, yes we should run well-designed experiments, yes we should have done lots of them, yes they should be small and frequent instead of rare and large (and largely based on intuitive guesses instead of some scientifically sound method), but there's really no infrastructure and very little dev time to implement this infrastructure. And even for doing it manually, the idea of starting a new small test every week is too time-consuming.

I totally agree that if some team of 2-3 full-time developers were to appear, they would leave the LCZero project behind within one month. I don't know what to do with that knowledge though.

PS. For "More resources were used than in DeepMind A0 project, not being at all near A0 level strength with 20xxx and 30xxx nets." I hope you mean one run of DM vs one run of Lc0. For the total amount of resources (for trial and testing), I'm sure DeepMind used hundreds if not thousands of times more resources than we did so far.
Instead of wasting so much electricity, why don't you ask Demis Hassabis for insights?

He may not give you the exact secret sauce for everything, but he can at least clarify what you've had to assume from his paper (where it is unclear), perhaps tell you what parameters he used, or at least give you ideas on how to estimate such parameters.

A simple email could save the planet a few GWh. Think of the polar bears, man…
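
For reference, the Cpuct and FPU parameters mentioned above enter the PUCT child-selection rule roughly as in the sketch below (assuming the AlphaZero-style formulation; the exact Lc0 formula, node statistics and default values may differ, and the numbers here are placeholders, not Lc0's actual defaults):

Code: Select all

import math

def puct_select(parent_visits, children, cpuct=3.0, fpu_value=-1.0):
    # Pick the child maximizing Q + U (AlphaZero-style PUCT).
    # children: list of dicts with 'prior' P, 'visits' N, 'value_sum' W.
    # cpuct scales the exploration term; fpu_value is the "first play urgency"
    # assigned to moves that have not been visited yet.
    best, best_score = None, -math.inf
    for child in children:
        if child['visits'] > 0:
            q = child['value_sum'] / child['visits']  # mean value Q
        else:
            q = fpu_value                             # FPU for unvisited moves
        u = cpuct * child['prior'] * math.sqrt(parent_visits) / (1 + child['visits'])
        if q + u > best_score:
            best, best_score = child, q + u
    return best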
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by Laskos »

crem wrote: Sun Nov 25, 2018 11:21 pm [full post quoted above]
Nice to hear from you, and you are probably the last person my pretty impolite (and maybe unjust) post should target. You developed the excellent engine, and the initial EXTREMELY successful 6x64 runs were mostly supervised by you.

First, those nets are 12-15 times faster, so 10 million games with the full-blown 20x256 nets are, in computing effort, equivalent to 120 million games of the 6x64 runs. Second, everybody knows that reaching the global optimum with DCNNs is some sort of "black magic"; this is acknowledged in serious journals like "Nature". Wasn't it better to have a "toy model" with 6x64 nets, which in the early runs reached some local optima using only 100-150 nets for training and 15 times faster games? And this "toy model" is not that "toyish": those nets were playing good chess, at some 3000+ CCRL 40/4 Elo level on a reasonable GPU, nothing really "toyish" about them. I often use toy models as a starting point in my research, and even in some posts on this forum, trying to gain insight into the "real life" situation or the final model. The 6x64 runs have a much simpler landscape in which to find the tricks for hitting the parameter sweet spots that lead to better global results.

The 20x256 runs are not only very slow, their learning landscape is extremely weird, and trying to figure out everything at once could be an almost unreachable goal. To use them as the "experimental bedrock", when they are 15 times slower and settle to _local_ optima after some 1000 nets instead of 100, is just a squandering of resources. Going from toy models to real life is the usual procedure of the scientific method.
Sorry again, but those 10 million wasted games with the full-blown slow net, trying to figure out everything at once in an extremely complicated landscape, irritated me; as far as procedure goes, you might end up learning nothing or very little from this 30xxx run.

Anyway, I am no expert on all this, and things like server maintenance could blow up with 6x64 nets coming out every 5 minutes or so.
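
As a rough illustration of the size gap behind that 12-15x figure, here is a back-of-the-envelope comparison of the two residual towers, counting only the 3x3 convolution weights (input convolution, heads and batch-norm ignored); the per-game speed ratio itself is the one quoted above, not derived from this count:

Code: Select all

def tower_weights(blocks, filters):
    # each residual block holds two 3x3 convolutions with filters x filters channels
    return blocks * 2 * 3 * 3 * filters * filters

big = tower_weights(20, 256)     # ~23.6M weights
small = tower_weights(6, 64)     # ~0.44M weights
print(f"tower weight ratio: {big / small:.0f}x")   # ~53x

# equivalent-games arithmetic using the 12-15x per-game speed ratio from the post
for ratio in (12, 15):
    print(f"10M 20x256 games ~ {10 * ratio}M 6x64 games at {ratio}x")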
User avatar
pohl4711
Posts: 2433
Joined: Sat Sep 03, 2011 7:25 am
Location: Berlin, Germany
Full name: Stefan Pohl

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by pohl4711 »

crem wrote: Sun Nov 25, 2018 11:21 pm
So, yes we fail to reach A0 level
I am not sure this is true. Leela with the late 11xxx nets (11250 or so) is at an Elo level around Fire 7.1. I doubt that A0 would score better if it had to play under valid test conditions - the competition vs. SF8 was a bad joke (fixed time per move, very small hash for SF, no openings). Under valid conditions, I believe SF8 would have played around 80-100 Elo better vs. A0.
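
For scale, under the standard logistic Elo model an 80-100 Elo edge corresponds to the expected scores below (a generic Elo calculation, not a measurement of what A0 or SF8 would actually score):

Code: Select all

def expected_score(elo_diff):
    # standard logistic Elo model: expected score of the stronger side
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

for d in (80, 100):
    print(f"+{d} Elo -> expected score {expected_score(d):.1%}")
# +80 Elo  -> ~61.3%
# +100 Elo -> ~64.0%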
duncan
Posts: 12038
Joined: Mon Jul 07, 2008 10:50 pm

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by duncan »

crem wrote: Sun Nov 25, 2018 11:21 pm [full post quoted above]
Any possibility of paying someone to create the framework, and fundraising with that specific objective in mind, since it is so important?

Btw, what would you charge?
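
For what it's worth, the statistical core of such a framework does not have to be exotic: engine projects like Stockfish's Fishtest gate changes with a sequential probability ratio test (SPRT). Below is a minimal sketch using the common normal-approximation log-likelihood ratio; it is a generic illustration, not Lc0's actual plan nor Fishtest's exact code:

Code: Select all

import math

def expected_score(elo):
    return 1.0 / (1.0 + 10.0 ** (-elo / 400.0))

def sprt_llr(wins, draws, losses, elo0=0.0, elo1=5.0):
    # approximate log-likelihood ratio for H1 (true gain = elo1) vs H0 (= elo0)
    n = wins + draws + losses
    if n == 0:
        return 0.0
    s = (wins + 0.5 * draws) / n        # mean score per game
    m2 = (wins + 0.25 * draws) / n      # mean squared score per game
    var = max(m2 - s * s, 1e-9)         # per-game variance
    s0, s1 = expected_score(elo0), expected_score(elo1)
    return (s1 - s0) * (2.0 * s - s0 - s1) * n / (2.0 * var)

def sprt_decision(wins, draws, losses, alpha=0.05, beta=0.05, **kw):
    llr = sprt_llr(wins, draws, losses, **kw)
    lower = math.log(beta / (1.0 - alpha))
    upper = math.log((1.0 - beta) / alpha)
    if llr >= upper:
        return "accept H1: likely an improvement"
    if llr <= lower:
        return "accept H0: likely not an improvement"
    return "keep playing games"

Each candidate change keeps playing games only until the LLR crosses one of the two bounds, so most weak ideas get rejected after relatively few games.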
jp
Posts: 1470
Joined: Mon Apr 23, 2018 7:54 am

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by jp »

crem wrote: Sun Nov 25, 2018 11:21 pm
The worst part of the 3xxxx testing is the TB rescoring, because that is not "zero". (We talked about this in another thread.)
The only good excuse for going non-zero is that you believe 100% that the "zero" approach has maxed out and no more improvement is possible.
chrisw
Posts: 4313
Joined: Tue Apr 03, 2012 4:28 pm

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by chrisw »

crem wrote: Sun Nov 25, 2018 11:21 pm [full post quoted above]
For such a brilliant piece of critical analysis of the project, I award Alexander the Order of Lenin.
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by Dann Corbit »

Can't we still collect the old net and use it, if we want to?
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
jp
Posts: 1470
Joined: Mon Apr 23, 2018 7:54 am

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by jp »

Dann Corbit wrote: Mon Nov 26, 2018 10:51 pm Can't we still collect the old net and use it, if we want to?
They're all available. By "use", do you mean for play or more training?
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: 2900 Elo points progress, 10 million games, 330 nets

Post by Dann Corbit »

jp wrote: Tue Nov 27, 2018 9:29 am
Dann Corbit wrote: Mon Nov 26, 2018 10:51 pm Can't we still collect the old net and use it, if we want to?
They're all available. By "use", do you mean for play or more training?
Either
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.