
TucaNNo: neural network research

Posted: Sun Nov 08, 2020 9:57 pm
by sedicla
Hi,

I'm doing some research on neural networks too. For now I'm more interested in learning, and hopefully it can add some Elo to my engine. That would be a nice side effect :)

Just wanted to share what I'm doing. It seems other developers are doing something similar and may want to share their experiences.

- I'm using a simple network for now, 768x192x1 (64 squares x 6 piece types x 2 colors). I looked at SF NNUE and plan to switch to something similar, depending on the results. I will have to write the code in tucano to forward propagate, so I decided to start with a smaller net. I saw the Halogen engine is using a small net too, but SF seems to be the desired model? (A sketch of the input encoding follows the list.)
- Using tensorflow/keras to train (see code below). To test the process I started with just 10M positions, but as a next step I plan to generate 100M. Also, if I have the motivation, I may write my own training code.
- To label the positions I'm using a depth-10 tucano search. I was wondering if it would be enough to just use the eval values from self-play games at 1+0.05s: those searches reach at least depth 9, so we could extract position + eval straight from the PGN with no need to search again at depth 10? (See the extraction sketch below.)
- Another idea I would like to test is to generate, from each position, all available moves and include these new positions and their evals as well. Some moves will make sense and others will be bad, but I wonder if this would augment the data? In this case I have to extract the positions and do the depth-10 search for each move. (A sketch of this expansion is below too.)
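
For reference, here is a minimal sketch of how the 768-feature input could be encoded with python-chess. The square/piece/color ordering is an arbitrary assumption; it only has to match the forward-propagation code in the engine:

Code: Select all

import chess
import numpy as np

def encode_board(board: chess.Board) -> np.ndarray:
    """Encode a position as 768 binary features:
    64 squares x 6 piece types x 2 colors."""
    x = np.zeros(768, dtype=np.float32)
    for square, piece in board.piece_map().items():
        # piece_type is 1..6 (pawn..king); color is True for white
        color_offset = 0 if piece.color == chess.WHITE else 384
        x[color_offset + (piece.piece_type - 1) * 64 + square] = 1.0
    return x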
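For the self-play labeling idea, a sketch of pulling position + eval pairs out of a PGN with python-chess. It assumes cutechess-style eval comments such as {+0.31/10 0.12s}; the regex would need adjusting for other game generators:

Code: Select all

import re
import chess.pgn

EVAL_RE = re.compile(r"([+-]?\d+\.\d+)/(\d+)")  # "score/depth" in the comment

def extract_labels(pgn_path):
    with open(pgn_path) as f:
        while True:
            game = chess.pgn.read_game(f)
            if game is None:
                break
            for node in game.mainline():
                m = EVAL_RE.search(node.comment)
                if m and int(m.group(2)) >= 9:  # keep depth >= 9 only
                    yield node.board().fen(), float(m.group(1))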
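And a sketch of the one-ply expansion; each child position would still need the depth-10 search to get its label:

Code: Select all

import chess

def expand_position(fen):
    """Return the FENs of all positions one legal move away."""
    board = chess.Board(fen)
    children = []
    for move in board.legal_moves:
        board.push(move)
        children.append(board.fen())
        board.pop()
    return children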

So far I'm building my process and learning about NNs; it is good when you have a motivation.
In terms of research, I wonder which terms or combinations of terms make the NN eval better than HCE; we can probably see this by comparing positions where the NN eval is good and the HCE is bad (sketched below).
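
A simple way to surface such positions, as a sketch: score both evals against the depth-10 label and sort by the gap. nn_eval and hce_eval are placeholders for the real engine calls:

Code: Select all

def nn_eval(fen: str) -> float:
    return 0.0  # placeholder for the engine's NN eval

def hce_eval(fen: str) -> float:
    return 0.0  # placeholder for the engine's hand-crafted eval

def disagreements(positions):
    """positions: (fen, search_label) pairs; returns FENs sorted so
    those where the NN beats HCE by the most come first."""
    scored = []
    for fen, label in positions:
        gap = abs(hce_eval(fen) - label) - abs(nn_eval(fen) - label)
        scored.append((gap, fen))
    return [fen for gap, fen in sorted(scored, reverse=True)]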

Alcides.

Code: Select all

import tensorflow as tf

# Each CSV row holds the 768 input features plus a "Result" column
# with the label for the position.
print("create train dataset...")
train_dataset = tf.data.experimental.make_csv_dataset(
    file_pattern="data.10m.csv", batch_size=10000,
    label_name="Result", num_epochs=1)
print("create validation dataset...")
valid_dataset = tf.data.experimental.make_csv_dataset(
    file_pattern="data.1m.csv", batch_size=10000,
    label_name="Result", num_epochs=1)

def pack_features_vector(features, labels):
    # make_csv_dataset yields a dict of columns; stack them into a
    # single (batch, 768) tensor for the Dense layers
    features = tf.stack(list(features.values()), axis=1)
    return features, labels

print("pack features...")
train_dataset = train_dataset.map(pack_features_vector)
valid_dataset = valid_dataset.map(pack_features_vector)

print("create save model callback...")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath="pesos.h5", save_weights_only=True, verbose=1)

print('building the model...')
# 768 -> 192 -> 1, the topology described above (Flatten is a no-op
# here since the input is already flat)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(768,)))
model.add(tf.keras.layers.Dense(192, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(1))

# note: learning_rate=1 is very aggressive for Adam (the default is 1e-3)
adam = tf.keras.optimizers.Adam(learning_rate=1)

print(model.summary())

# accuracy is not very meaningful for a regression target; the MSE loss
# is what matters here
model.compile(optimizer=adam, loss='mean_squared_error', metrics=['accuracy'])

print('model.fit...')
model.fit(train_dataset, use_multiprocessing=True, workers=8, epochs=10,
          callbacks=[cp_callback], validation_data=valid_dataset)
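
Since the engine needs its own forward-propagation code, here is a minimal sketch of reading the checkpoint back and dumping the raw weights for tucano to load. The flat float32 layout is just an assumption; it only has to match whatever the engine's reader expects:

Code: Select all

import numpy as np
import tensorflow as tf

# rebuild the same 768x192x1 topology, then load the checkpoint weights
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(768,)),
    tf.keras.layers.Dense(192, activation=tf.nn.relu),
    tf.keras.layers.Dense(1),
])
model.load_weights("pesos.h5")

# dump kernel and bias of each layer as flat float32 arrays; the
# engine's loader must read them back in exactly this order
with open("tucano.nn", "wb") as f:
    for layer in model.layers:
        for tensor in layer.get_weights():  # [kernel, bias] per Dense layer
            tensor.astype(np.float32).tofile(f)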

Re: TucaNNo: neural network research

Posted: Mon Nov 09, 2020 1:49 am
by mar
Nice!

I was thinking that a smaller NN probably won't be able to replace HCE completely; however, one might try to build a smaller "correction" network.
I'm not sure if it'd work, but the basic idea is to use a hybrid: HCE plus a smaller NN to correct its mistakes.
There's still valuable stuff in HCE, like endgame recognizers and so on.
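
One way such a hybrid could be trained, as a minimal sketch: fit a small net on the residual between the search label and the HCE score, so it only learns HCE's mistakes. The arrays here are random placeholders standing in for real encoded positions and evals:

Code: Select all

import numpy as np
import tensorflow as tf

# small correction net over the same 768-feature input
correction = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(768,)),
    tf.keras.layers.Dense(1),
])
correction.compile(optimizer="adam", loss="mse")

# placeholders: X = encoded positions, search_eval = search labels,
# hce_eval = the hand-crafted eval of the same positions
X = np.random.rand(1024, 768).astype(np.float32)
search_eval = np.random.randn(1024, 1).astype(np.float32)
hce_eval = np.random.randn(1024, 1).astype(np.float32)

# train on the residual; at play time the engine would use
# hce(pos) + correction(encode(pos))
correction.fit(X, search_eval - hce_eval, epochs=2, batch_size=128)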

Looking forward to seeing your progress, good luck.

Re: TucaNNo: neural network research

Posted: Mon Nov 09, 2020 11:19 am
by Henk
Isn't using a neural network similar to resignation? I can't solve the problem, so I use a neural network. And if it works, no one is able to explain the solution (in logic). Or maybe use it only to prove that a solution is possible.

Re: TucaNNo: neural network research

Posted: Mon Nov 09, 2020 2:14 pm
by sedicla
mar wrote: Mon Nov 09, 2020 1:49 am Nice!

I was thinking that a smaller NN probably won't be able to replace HCE completely; however, one might try to build a smaller "correction" network.
I'm not sure if it'd work, but the basic idea is to use a hybrid: HCE plus a smaller NN to correct its mistakes.
There's still valuable stuff in HCE, like endgame recognizers and so on.

Looking forward to seeing your progress, good luck.
Hi Martin,
Yes, I have this impression as well; I can test it as a replacement and also as a correction.
Regarding HCE, maybe one idea is to use the NN up to a certain point and switch to HCE in the endgame. In my case, I'm not training on positions with fewer than 6 pieces, hoping the EGTB will take care of them, so I think I have to do this (a filter sketch is below).
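
A minimal sketch of that filter with python-chess; the threshold should match the EGTB limit:

Code: Select all

import chess

def keep_for_training(fen: str) -> bool:
    """Skip positions the EGTB will handle (kings count toward the total)."""
    board = chess.Board(fen)
    return len(board.piece_map()) >= 6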

Alcides

Re: TucaNNo: neural network research

Posted: Mon Nov 09, 2020 2:28 pm
by sedicla
Henk wrote: Mon Nov 09, 2020 11:19 am Isn't using a neural network similar to resignation? I can't solve the problem, so I use a neural network. And if it works, no one is able to explain the solution (in logic). Or maybe use it only to prove that a solution is possible.
Good question Henk. It depends on you: if you think it is a resignation, maybe it is, since everybody will do it one way or the other, and you'll have to do it too.
For me, I see it as a motivation to learn something new. We have the ability to generate a lot of games, and NNs seem to be the tool to use them.

Today you don't see an engine without null move pruning, a transposition table, etc., and NN eval is possibly being added to that list. I think at some point people will start looking at ways to also incorporate NNs into the search. Sub-NNs that help decide whether to prune more or less? Not sure if this is possible.

As I mentioned before, I'm curious to see what makes a position look good to the NN eval and not so much to HCE. Maybe we'll never know :)

Re: TucaNNo: neural network research

Posted: Mon Nov 09, 2020 3:10 pm
by Henk
If I have to debug, I like to have my sources as transparent as possible, so if a neural network engine makes a bad move I would have a problem.
Was the chosen neural network architecture good enough? Were the training/test examples or parameters OK? But maybe there are standard solutions for that. I don't know.

I don't like tuning: starting and stopping a tuner and having to wait a very long time until there are results.

If you have slow hardware you can only use small networks.

Re: TucaNNo: neural network research

Posted: Tue Nov 10, 2020 5:45 am
by Kieren Pearson
"I was thinking that a smaller NN probably won't be able to replace HCE completely" not saying a hybrid approach is worse but in Halogen the HCE was completely replaced with a NN and subsequently became a much stronger engine

Re: TucaNNo: neural network research

Posted: Tue Nov 10, 2020 6:13 am
by mar
Kieren Pearson wrote: Tue Nov 10, 2020 5:45 am "I was thinking that a smaller NN probably won't be able to replace HCE completely" - I'm not saying a hybrid approach is worse, but in Halogen the HCE was completely replaced with an NN, and it subsequently became a much stronger engine.
Very interesting. Looking at your NN, the topology seems to be 768-128-1, even smaller than I thought.

Re: TucaNNo: neural network research

Posted: Tue Nov 10, 2020 8:08 am
by AndrewGrant
mar wrote: Mon Nov 09, 2020 1:49 am Nice!

I was thinking that a smaller NN probably won't be able to replace HCE completely; however, one might try to build a smaller "correction" network.
I'm not sure if it'd work, but the basic idea is to use a hybrid: HCE plus a smaller NN to correct its mistakes.
There's still valuable stuff in HCE, like endgame recognizers and so on.

Looking forward to seeing your progress, good luck.
This is exactly what I did in Ethereal a couple of months ago: an additional NN which adds its output to the existing eval. +40 or so Elo.

Re: TucaNNo: neural network research

Posted: Tue Nov 10, 2020 1:41 pm
by sedicla
Kieren Pearson wrote: Tue Nov 10, 2020 5:45 am "I was thinking that a smaller NN probably won't be able to replace HCE completely" - I'm not saying a hybrid approach is worse, but in Halogen the HCE was completely replaced with an NN, and it subsequently became a much stronger engine.
Hi Kieren,
Looking at your engine inspired me to go for a small network now; later I can test bigger ones.
How many positions did you use for training? I'm shooting for 100M.
Thanks.