Definition: Bfloat16


(Brain FLOATing point 16 bits) A 16-bit floating point format created especially for calculations in AI machine learning. Compared with the standard 16-bit floating point format (fp16), Bfloat16 can represent a much wider range of numbers. Instead of using five bits to hold the exponent, Bfloat16 uses eight bits, the same exponent size as the 32-bit floating point format (fp32).
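As a rough illustration of that range difference, the largest finite value in each format can be computed from its exponent and mantissa sizes. The short Python sketch below (illustrative only, values approximate) shows that fp16 tops out near 65,504, while Bfloat16 reaches roughly 3.4 x 10^38, the same neighborhood as fp32.

 # Largest finite value = (2 - 2**-mantissa_bits) * 2**max_unbiased_exponent
 fp16_max     = (2 - 2**-10) * 2.0**15     # 5-bit exponent  ->  65504.0
 bfloat16_max = (2 - 2**-7)  * 2.0**127    # 8-bit exponent  ->  ~3.39e38
 print(fp16_max, bfloat16_max)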

Like all floating point formats, the mantissa holds the significant digits, and the exponent is the power of two by which the mantissa is scaled. Following are the binary layouts of Bfloat16, 16-bit floating point (fp16) and 32-bit floating point (fp32). Each letter in the examples below represents one bit (s is the single sign bit, e is one exponent bit and m is one mantissa bit). See floating point.

 Regular and Bfloat 16-Bit Floating Point

 fp16 (5e 10m)      s  eeeee  mmmmmmmmmm

 Bfloat16 (8e 7m)   s  eeeeeeee  mmmmmmm


 Regular 32-Bit Floating Point

 fp32 (8e 23m)
   s  eeeeeeee  mmmmmmmmmmmmmmmmmmmmmmm
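
Because Bfloat16 keeps the fp32 sign and exponent fields and simply shortens the mantissa, a value can be converted by taking the upper 16 bits of its fp32 bit pattern. The Python sketch below illustrates this with the standard struct module; it truncates the low bits for simplicity, whereas hardware conversion typically rounds to nearest even.

 import struct

 def fp32_bits(x):
     # IEEE 754 single-precision bit pattern of x (big-endian)
     return struct.unpack(">I", struct.pack(">f", x))[0]

 def to_bfloat16_bits(x):
     # Keep the top 16 bits: 1 sign + 8 exponent + 7 mantissa
     return fp32_bits(x) >> 16

 def from_bfloat16_bits(b):
     # Zero-fill the low 16 bits to get back an fp32 value
     return struct.unpack(">f", struct.pack(">I", b << 16))[0]

 x = 3.140625                      # exactly representable in 7 mantissa bits
 b = to_bfloat16_bits(x)
 print(format(b, "016b"))          # 0100000001001001
 print(from_bfloat16_bits(b))      # 3.140625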