Neural Networks weights type

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Fabio Gobbato
Posts: 156
Joined: Fri Apr 11, 2014 8:45 am
Full name: Fabio Gobbato

Neural Networks weights type

Post by Fabio Gobbato » Thu Aug 13, 2020 5:45 pm

I have seen that Stockfish NNUE uses integer types for the network weights instead of floating-point types. One advantage is surely speed, but there could also be drawbacks. What are the differences between integer and floating-point networks? Is it possible to build a good net that runs on a CPU with floating-point weights, or is it better to use integer weights?

Rémi Coulom
Posts: 436
Joined: Mon Apr 24, 2006 6:06 pm

Re: Neural Networks weights type

Post by Rémi Coulom » Thu Aug 13, 2020 6:53 pm

8-bit precision is often accurate enough, and faster than floating point.

The tensor cores of the most recent NVIDIA GPUs can do 4-bit calculation (in addition to 8-bit integer and 16-bit float). The next generation will also allow sparsity, which is another big potential performance improvement. Training sparse 4-bit neural networks is a bit tricky, though.
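To make the integer idea concrete, here is a minimal sketch of symmetric per-tensor int8 quantization with int32 accumulation, which is the general scheme low-precision inference uses. The function names are made up for illustration; real engines fold the scales into the network differently.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric quantization: map the largest |weight| to 127,
    # round everything else to the nearest integer step.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_dot(qa, sa, qb, sb):
    # Accumulate in int32 so int8*int8 products cannot overflow,
    # then rescale back to a float once at the end.
    acc = int(np.dot(qa.astype(np.int32), qb.astype(np.int32)))
    return acc * sa * sb

w = np.array([0.5, -1.0, 0.25])
x = np.array([1.0, 2.0, -0.5])
qw, sw = quantize_int8(w)
qx, sx = quantize_int8(x)
approx = int8_dot(qw, sw, qx, sx)
exact = float(np.dot(w, x))  # -1.625
```

The quantized result lands within a fraction of a percent of the exact dot product here; the drawback the original question asks about is exactly this rounding error, which grows with network depth unless the scales are chosen carefully.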

Some even do 1-bit neural networks:
https://jmlr.csail.mit.edu/papers/v18/16-456.html
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio; 18(187):1−30, 2018.

Abstract

We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
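The "binary matrix multiplication" the abstract mentions reduces to XNOR and popcount. A toy sketch of a 1-bit dot product in that spirit (the helper names are invented for this example, not from the paper's code):

```python
def binarize(values):
    # Encode +1 as bit 1 and -1 as bit 0, packed into one Python int.
    bits = 0
    for i, v in enumerate(values):
        if v >= 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    # XNOR: positions where the signs agree contribute +1, disagree -1.
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    matches = bin(xnor).count("1")
    return 2 * matches - n

a = [+1, -1, +1]
b = [+1, +1, -1]
result = binary_dot(binarize(a), binarize(b), 3)  # same as sum(x*y) = -1
```

One 64-bit XNOR plus a popcount instruction replaces 64 multiply-adds, which is where the claimed speedups come from.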

towforce
Posts: 10346
Joined: Wed Mar 08, 2006 11:57 pm
Location: Birmingham UK

Re: Neural Networks weights type

Post by towforce » Thu Aug 13, 2020 7:50 pm

It seems as though NNs don't require much accuracy. A couple of data types they tend to use:

* half precision (FP16)
* brain float (bfloat16)

TPUs are a bit like graphics cards, but they use low-precision arithmetic, which lets them do much more NN work with the same amount of hardware. Moving to integers seems a natural next step.
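The "low precision" trade-off is easy to see in NumPy, which supports FP16 directly (bfloat16 is not in stock NumPy, so it is omitted here):

```python
import numpy as np

# Half precision keeps roughly 3 decimal digits: its machine epsilon
# is about 1e-3, versus about 1e-7 for single precision.
eps16 = float(np.finfo(np.float16).eps)  # 2**-10 ~= 0.000977
eps32 = float(np.finfo(np.float32).eps)  # 2**-23 ~= 1.19e-07

# Above 2048, FP16 can no longer represent every integer:
a = np.float16(2049)  # rounds to 2048
```

With only 10 mantissa bits, weight values spanning a wide range lose digits quickly, which is why low-precision training usually keeps a higher-precision master copy of the weights.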
The future is more important than the past.
