Code: Select all
import tensorflow as tf
import numpy as np

# Read training data: one line per position with
# game result, game phase, and 7 material features.
f = open('testdata')
x_data, y_data, p_data = [], [], []
for l in f:
    a = [float(e) for e in l.split()]
    y_data.append(a[:1])   # game result
    p_data.append(a[1:2])  # game phase (0 = midgame, 1 = endgame)
    x_data.append(a[2:9])  # material features
f.close()

# tf.matmul needs float32 tensors, not Python lists of floats
# (which would become float64), so convert explicitly.
x_data = np.array(x_data, dtype=np.float32)
y_data = np.array(y_data, dtype=np.float32)
p_data = np.array(p_data, dtype=np.float32)

print "read %d (%d) records" % (len(x_data), len(y_data))
print "read %d inputs" % len(x_data[0])
# Separate weight vectors for midgame (WM) and endgame (WE).
WM = tf.Variable(tf.random_uniform([len(x_data[0]), 1]))
WE = tf.Variable(tf.random_uniform([len(x_data[0]), 1]))
xm = tf.matmul(x_data, WM)
xe = tf.matmul(x_data, WE)
P = tf.constant(p_data)
# Tapered evaluation: blend midgame and endgame scores by phase,
# then squash the score into an expected game result.
y = xm*(1-P)+xe*P
y = tf.sigmoid(y/2)
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.AdamOptimizer(0.1)
train = optimizer.minimize(loss)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
# Fit the weights.
for step in xrange(0, 1000000):
    sess.run(train)
    if step % 10 == 0:
        print step, sess.run(loss)
    if step % 100 == 0:
        l, m, e = [], sess.run(WM), sess.run(WE)
        for i in range(len(m)):
            l.append((int(m[i][0]*100), int(e[i][0]*100)))
        print step, l
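The tf.sigmoid(y/2) step is what turns a material score into an expected game result, so the squared loss compares probabilities rather than raw scores. A standalone sketch of that squashing (my reading of the model, not part of the training script):

```python
import math

def win_prob(score):
    """sigmoid(score / 2): maps a material score to an expected
    game result in (0, 1), as the model does before computing loss."""
    return 1.0 / (1.0 + math.exp(-score / 2.0))
```

A score of 0 maps to an expected result of 0.5, and the curve saturates for large material advantages, so extra material in won positions barely moves the loss.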
Code: Select all
1000 0.116354
1000 [(27, 19), (73, 188), (353, 407), (379, 452), (535, 738), (1212, 1379), (10, 58)]
Pawn = 73/188
Knight = 353/407
Bishop = 379/452
Rook = 535/738
Queen = 1212/1379
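To compare these with conventional centipawn scales, the learned pairs can be rescaled so the midgame pawn is worth exactly 100. A quick sketch, copying the five piece tuples from the output above (the first and last tuples in the printed list look like non-piece features, so I leave them out):

```python
# Learned (midgame, endgame) weights x100, taken from the run above.
learned = {
    'pawn':   (73, 188),
    'knight': (353, 407),
    'bishop': (379, 452),
    'rook':   (535, 738),
    'queen':  (1212, 1379),
}

# Rescale so the midgame pawn is worth 100 centipawns.
pawn_mg = learned['pawn'][0]
centipawns = {p: (mg * 100 // pawn_mg, eg * 100 // pawn_mg)
              for p, (mg, eg) in learned.items()}
```

With this normalization the endgame pawn comes out around 2.5x its midgame value, which matches the intuition that pawns gain importance as material comes off.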
Looks very promising. With all features enabled I can get the loss down to 0.109577. That's impressive considering I spent a total of 2 hours learning the framework and trying different optimizers. The best was AdamOptimizer, which converges wicked fast.
I tried using more than one layer, but the loss was no better than 0.109577.
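Once trained, the weights plug straight into a classic tapered evaluation. A minimal sketch of how an engine could use them, mirroring the model's y = xm*(1-P) + xe*P combination (function name and signature are my own, not from the script):

```python
def tapered_eval(features, wm, we, phase):
    """Blend midgame and endgame material scores by game phase.
    features: material feature vector for one position;
    wm, we: learned midgame/endgame weights;
    phase: 0.0 = pure midgame, 1.0 = pure endgame."""
    xm = sum(f * w for f, w in zip(features, wm))  # midgame score
    xe = sum(f * w for f, w in zip(features, we))  # endgame score
    return xm * (1.0 - phase) + xe * phase
```

At phase 0 this is the pure midgame evaluation, at phase 1 the pure endgame one, and it interpolates linearly in between, so a second layer would only help if piece values interact nonlinearly.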