
Google's bfloat for neural networks

Posted: Tue Apr 16, 2019 11:30 am
by smatovic
Hehe, funny, Google started using its own bfloat16 datatype in TPU gen 2 and gen 3 for neural networks,

https://en.wikipedia.org/wiki/Bfloat16_ ... int_format
https://www.nextplatform.com/2018/05/10 ... processor/

and now Intel is starting to implement it in its hardware. That's when you know you're a big player :)

https://venturebeat.com/2018/05/23/inte ... -training/

Wonder if Nvidia or AMD will join.

--
Srdja

Re: Google's bfloat for neural networks

Posted: Tue Apr 16, 2019 11:57 am
by mar
I misread it as "Google's bloat..." and thought that Google had open sourced yet another masterpiece :D

So this bfloat16 is basically a float where you throw away 16 bits' worth of mantissa.
Packing/unpacking to and from 32-bit float should be trivial, so probably clever, but hey, is only 7 bits of mantissa really enough?
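
A rough sketch in C of what that packing could look like (my own illustration, truncation only; real hardware conversions presumably round to nearest instead of just chopping the low bits):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef uint16_t bfloat16; /* raw bit pattern: upper half of a float32 */

static bfloat16 float_to_bfloat16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* reinterpret the float's bits */
    return (bfloat16)(bits >> 16);  /* keep sign, 8 exponent bits, top 7 mantissa bits */
}

static float bfloat16_to_float(bfloat16 b)
{
    uint32_t bits = (uint32_t)b << 16; /* discarded mantissa bits become zero */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159265f;
    float y = bfloat16_to_float(float_to_bfloat16(x));
    printf("%f -> %f\n", x, y); /* prints roughly 3.140625, i.e. ~2-3 decimal digits */
    return 0;
}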

Re: Google's bfloat for neural networks

Posted: Tue Apr 16, 2019 12:07 pm
by smatovic
mar wrote: Tue Apr 16, 2019 11:57 am
I misread it as "Google's bloat..." and thought that Google had open sourced yet another masterpiece :D

So this bfloat16 is basically a float where you throw away 16 bits' worth of mantissa.
Packing/unpacking to and from 32-bit float should be trivial, so probably clever, but hey, is only 7 bits of mantissa really enough?
Dunno :)

https://www.hpcwire.com/2019/04/15/bsc- ... -training/

"As training progresses and it hones the value of the weights, then greater precision becomes important in order to optimize the solution."

“We believe dynamic numerical precision approaches offer the best benefit to training and inferencing,”

--
Srdja