Neural network quantization

Discussion of chess software programming and technical issues.

mar
Posts: 2220
Joined: Fri Nov 26, 2010 1:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: Neural network quantization

Post by mar » Tue Sep 08, 2020 11:12 pm

Linear quantization is fine only if you have enough resolution; if most of your weights are close to zero, you risk quantizing all of them to 0 (or, even worse, to some non-zero value if you quantize linearly over the min-max range).
Of course, everything depends on the structure of the data to be quantized, but generally speaking I'd pick a quantization scheme that lowers MSE any time.
Removing outliers seems dangerous, but I may be wrong.
Quantization-aware training seems like a great idea, though.
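A small sketch of the failure mode described above (an illustration, not code from the thread): with affine min-max quantization, a single outlier stretches the range so that all the near-zero weights land on the same grid point, which is not even zero. The weight values and bit width here are made up for the example.

```python
import numpy as np

def quantize_minmax(w, bits=8):
    # affine quantization over the full [min, max] range of the weights
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = np.round((w - lo) / scale)   # integer code in [0, levels]
    return q * scale + lo            # dequantized value

# mostly near-zero weights plus one large outlier stretching the range
weights = np.array([0.001, -0.002, 0.0005, 0.003, 5.0])
deq = quantize_minmax(weights)
# all four small weights collapse to the same non-zero value (the range minimum)
```

With the outlier, the step size is about 0.02, so every small weight rounds to code 0 and dequantizes to the range minimum, -0.002, rather than to its own value.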
Martin Sedlak

Fabio Gobbato
Posts: 157
Joined: Fri Apr 11, 2014 8:45 am
Full name: Fabio Gobbato
Contact:

Re: Neural network quantization

Post by Fabio Gobbato » Thu Sep 10, 2020 11:22 am

I have tried using quantized weights during training, but only when I calculate the error of the network, and it seems to work. I still have to try int8, but with int32 it works well.
Thank you!
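The scheme described above can be sketched roughly like this (my own minimal illustration, not the poster's code): keep full-precision master weights, but compute the network error through a fake-quantized copy, while gradient updates go to the master weights (a straight-through-style setup). The model, grid size, and learning rate are assumptions for the example.

```python
import numpy as np

def fake_quant(w, scale=0.05):
    # round weights to a fixed grid but keep float storage
    # (the grid stands in for an integer quantization step)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
true_w = np.array([0.3, -0.2, 0.1, 0.4])
y = X @ true_w

w = np.zeros(4)                      # full-precision master weights
lr = 0.05
for _ in range(200):
    wq = fake_quant(w)               # quantized copy used only for the forward pass / error
    err = X @ wq - y
    grad = X.T @ err / len(y)        # straight-through: gradient applied to the master weights
    w -= lr * grad

final_err = np.mean((X @ fake_quant(w) - y) ** 2)
```

Because the error is measured through the quantized weights, training drives the master weights toward values that still fit the data after rounding, which is the point of doing the error calculation with quantized weights.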
