Revisiting GA's for tuning evaluation weights

Gian-Carlo Pascutto
Posts: 1243
Joined: Sat Dec 13, 2008 7:00 pm

Re: Revisiting GA's for tuning evaluation weights

Post by Gian-Carlo Pascutto »

hgm wrote: I don't see the problem. You know what terms are in your eval, so for any set of positions you could determine where the term contributes and where it doesn't. You can then divide the set of positions into two groups, depending on whether they include the term whose empirical value you want to determine. From each group you then pick a set of representatives, taking care that every other eval term is equally often present and absent in each set.

Of course you can combine this with orthogonal multi-testing, to use the same set of games you play from these starting positions to determine the value of multiple terms.
Extending what you say to the entire evaluation function, this to me boils down to forcing the matches to be played from a book that leads to positions that are as unbalanced as possible, rather than more symmetrical ones.
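
To make hgm's recipe concrete, here is a minimal Python sketch (not anyone's actual code). It assumes a hypothetical term_applies(position, term) predicate that says whether a given eval term contributes in a given position, and the balancing of the other terms is a simple greedy heuristic, not the only way to do it.

[code]
import random

def split_by_term(positions, term, other_terms, term_applies, n=100, seed=1):
    """Divide positions by whether `term` contributes, then pick n
    representatives from each group so that every other eval term is
    present roughly as often as it is absent (greedy heuristic)."""
    rng = random.Random(seed)
    groups = ([p for p in positions if term_applies(p, term)],
              [p for p in positions if not term_applies(p, term)])
    samples = []
    for group in groups:
        group = list(group)
        rng.shuffle(group)
        chosen = []
        counts = {t: 0 for t in other_terms}   # how often each other term fired so far
        for pos in group:
            # Accept a position only while no other term exceeds a ~50% share.
            if all(counts[t] + int(term_applies(pos, t)) <= (n + 1) // 2
                   for t in other_terms):
                chosen.append(pos)
                for t in other_terms:
                    counts[t] += int(term_applies(pos, t))
            if len(chosen) == n:
                break
        samples.append(chosen)
    return samples   # [representatives with the term, representatives without it]
[/code]

The two samples then serve as starting positions for the matches, and, as hgm notes, with orthogonal multi-testing the same games can be reused to measure several terms at once.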
Gian-Carlo Pascutto
Posts: 1243
Joined: Sat Dec 13, 2008 7:00 pm

Re: Revisiting GA's for tuning evaluation weights

Post by Gian-Carlo Pascutto »

Gian-Carlo Pascutto wrote:
michiguel wrote: I agree with what you say, except that I do not understand why you want to end up with "one" best player. It would be better to end up with a set of good players, not one. Otherwise, the evolutionary mechanisms are not really working; it would be more like a breeding process. What do you do after you end up with the best? I may not be understanding your setup.
Have it mate with everybody else.
(the above is for GA)

In the context of PBIL it would mean that you shift and narrow the PBIL probability vector in the direction of the best individual.
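
Concretely, one PBIL generation can be sketched as below in Python, assuming the eval weights are packed into a single bit string and that fitness(bits) is some hypothetical function returning e.g. the match score of the decoded weights; the learning rate and mutation settings are only placeholder values.

[code]
import random

def pbil_generation(prob, popsize, fitness,
                    lr=0.1, mut_prob=0.02, mut_shift=0.05, rng=random):
    """One PBIL step: sample a population of bit strings from the
    probability vector, evaluate them, and shift (and thereby narrow)
    the vector toward the best individual."""
    population = [[int(rng.random() < p) for p in prob] for _ in range(popsize)]
    best = max(population, key=fitness)

    # Move every probability a little toward the corresponding bit of the winner.
    prob = [(1.0 - lr) * p + lr * b for p, b in zip(prob, best)]

    # Occasionally perturb the vector itself so it does not collapse too early.
    prob = [(1.0 - mut_shift) * p + mut_shift * rng.randint(0, 1)
            if rng.random() < mut_prob else p
            for p in prob]
    return prob, best
[/code]

The vector starts at 0.5 everywhere; as it narrows, the sampled individuals concentrate around the best weights found so far.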
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Revisiting GA's for tuning evaluation weights

Post by diep »

Gian-Carlo Pascutto wrote: You should just go ahead and do it. I agree with you that nobody has really tried this, and it's not obvious how well it is going to work, if at all.

An interesting problem is where to put the threshold on the number of games used to establish the fittest individual (or the fitness of each individual), compared to just letting it run faster (and more randomly) and having evolution filter out the weaklings. I'm wondering whether there are any theoretical insights to be had there.

I wouldn't expect any reasonable weights the first week, but hey, it's unsupervised, so you can just let it go and go and go on as many cores as you have...

So, I'd just go ahead and try it, and report back what does and doesn't work for you.
Pardon me?

Just about everything has already been tried with GAs in computer chess, especially in the late 90s when GAs were very hot.

You have two advantages nowadays. First, because engines now get through the tactical barrier quickly, even at warp-speed time controls (faster than bullet), there is less tactical noise.

Second, there are more cores available than back then, allowing you to test more in a statistically significant manner.

Thanks,
Vincent
Gian-Carlo Pascutto
Posts: 1243
Joined: Sat Dec 13, 2008 7:00 pm

Re: Revisiting GA's for tuning evaluation weights

Post by Gian-Carlo Pascutto »

diep wrote: Pardon me?

Just about everything has already been tried with GAs in computer chess, especially in the late 90s when GAs were very hot.
Just read the post that actually started the thread you are replying to.

FWIW, I tried this for Go last year, and failed. The KO tournament-style selection from Don sounds like a good improvement that I missed.
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Revisiting GA's for tuning evaluation weights

Post by diep »

Gian-Carlo Pascutto wrote:
diep wrote: Pardon me?

Just about everything has already been tried with GAs in computer chess, especially in the late 90s when GAs were very hot.
Just read the post that actually started the thread you are replying to.

FWIW, I tried this for Go last year, and failed. The KO tournament-style selection from Don sounds like a good improvement that I missed.
Well, in GAs you always have survival of the fittest each generation, of course; the rest you bury in binary space.

The biggest advantage of GAs, and as far as I'm concerned the only one, is that it takes relatively little code to get them working; with some random bit flipping, so to speak, you will already be able to find some local optimum, allowing you to write a paper about how successful it is.
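
For scale, the "relatively little code" really is little. A minimal Python sketch of such a GA, assuming the eval weights are encoded as a bit string and a hypothetical fitness(bits) that returns e.g. the points scored in a short gauntlet at a fast time control:

[code]
import random

def tiny_ga(fitness, nbits, popsize=16, generations=50, mut_rate=0.02, seed=0):
    """Bare-bones GA: 2-way tournament selection, one-point crossover,
    random bit-flip mutation, full generational replacement."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(nbits)] for _ in range(popsize)]

    def pick_parent():
        a, b = rng.sample(pop, 2)                  # keep the fitter of two random picks
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < popsize:
            p1, p2 = pick_parent(), pick_parent()
            cut = rng.randrange(1, nbits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]  # random bit flips
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
[/code]

In practice you would cache the fitness values, since every call here means playing games.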

Like HGM, I see zero problems here; for his software, GAs will not fail.

As for DeepSjeng, you might want to limit your attempts to just checking CCC.

Vincent
michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Revisiting GA's for tuning evaluation weights

Post by michiguel »

Gian-Carlo Pascutto wrote:
michiguel wrote: I agree with what you say, except that I do not understand why you want to end up with "one" best player. It would be better to end up with a set of good players, not one. Otherwise, the evolutionary mechanisms are not really working; it would be more like a breeding process. What do you do after you end up with the best? I may not be understanding your setup.
Have it mate with everybody else.
Who's everybody else in that case if he ends up with one player? That is why I asked.

Miguel
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Revisiting GA's for tuning evaluation weights

Post by diep »

Gian-Carlo Pascutto wrote:
diep wrote: Pardon me?

Just about everything has already been tried with GAs in computer chess, especially in the late 90s when GAs were very hot.
Just read the post that actually started the thread you are replying to.

FWIW, I tried this for Go last year, and failed. The KO tournament-style selection from Don sounds like a good improvement that I missed.
A possible problem in Go is that you're not getting through the tactical barrier, which generates a lot of extra noise. In short, the selection mechanism gets tougher as there is more noise in the results.

I'd argue that's one of the reasons why, in the 90s, it all went so terribly wrong for chess as well.

Vincent
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Revisiting GA's for tuning evaluation weights

Post by Don »

michiguel wrote:
I agree with what you say, except that I do not understand why you want to end up with "one" best player. It would be better to end up with a set of good players, not one. Otherwise, the evolutionary mechanisms are not really working; it would be more like a breeding process. What do you do after you end up with the best? I may not be understanding your setup.
Why is this so difficult to understand? It's pretty obvious that you can end up with as many players as you want. If you want 2 players, don't play the final-round match. If you want 4, don't play the last 2 rounds.

And you do not have to fear that this is "too precise", because all you are doing is plucking N players from a population (from an infinite population if you use the more modern and efficient CGA or PBIL). These players will on average be very good players (which is what you want), but there is always a chance that some weak player will get "lucky" and win the tournament. You have perfect control over this by deciding how many rounds and how many games per match to play.
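
As a Python sketch of that selection step (not Don's actual code), assuming a hypothetical play_match(a, b) that plays a short match between two candidates and returns the winner; how many survive is simply a parameter:

[code]
import random

def knockout_select(players, n_survivors, play_match, rng=random):
    """Single-elimination (KO) selection: pair the candidates up, play one
    match per pair, keep the winners, and repeat until at most n_survivors
    remain (exact when both counts are powers of two)."""
    pool = list(players)
    while len(pool) > n_survivors:
        rng.shuffle(pool)
        winners = [play_match(pool[i], pool[i + 1])
                   for i in range(0, len(pool) - 1, 2)]
        if len(pool) % 2:                 # the odd player out gets a bye this round
            winners.append(pool[-1])
        pool = winners
    return pool
[/code]

Playing more games per match, or more rounds, makes it less likely that a lucky weak player survives, which is exactly the control knob described above.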
michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Revisiting GA's for tuning evaluation weights

Post by michiguel »

Don wrote:
michiguel wrote:
I agree with what you say, except that I do not understand why you want to end up with "one" best player. It would be better to end up with a set of good players, not one. Otherwise, the evolutionary mechanisms are not really working; it would be more like a breeding process. What do you do after you end up with the best? I may not be understanding your setup.
Why is this so difficult to understand?
I am asking a question that you are not answering. I understand what you are saying; I am just trying to find out the rationale behind your decision to end up with a few players rather than more.

It's pretty obvious that you can end up with as many players as you want. If you want 2 players, don't play the final-round match. If you want 4, don't play the last 2 rounds.
Of course, I know that you can end up with n players.

And you do not have to fear that this is "too precise", because all you are doing is plucking N players from a population (from an infinite population if you use the more modern and efficient CGA or PBIL). These players will on average be very good players (which is what you want), but there is always a chance that some weak player will get "lucky" and win the tournament. You have perfect control over this by deciding how many rounds and how many games per match to play.
Again, I am curious to know why you decided to end up with a few players, and I would like to know what the next step is. Do you mate them? With whom? Or do you mutate them and start again without mating them?

Miguel
Gian-Carlo Pascutto
Posts: 1243
Joined: Sat Dec 13, 2008 7:00 pm

Re: Revisiting GA's for tuning evaluation weights

Post by Gian-Carlo Pascutto »

diep wrote: The biggest advantage of GAs, and as far as I'm concerned the only one, is that it takes relatively little code to get them working
True, and it's also a zero-knowledge approach, i.e. the code has to know almost nothing about what it's actually doing.
As for DeepSjeng, you might want to limit your attempts to just checking CCC.
Letting others discover what doesn't work first is always a time saver :)