From WikiChip

Brain floating-point format
Revision as of 10:25, 7 November 2018

Brain floating-point format (bfloat16) is a number encoding format occupying 16 bits that represents a floating-point number. It is equivalent to a standard IEEE 754 single-precision (32-bit) floating-point value with a truncated mantissa field: the sign bit and the full 8-bit exponent are retained, while the mantissa is shortened from 23 bits to 7 bits. Because the exponent width is unchanged, bfloat16 covers the same dynamic range as single precision at reduced precision. Bfloat16 is designed for use in hardware that accelerates machine learning algorithms.
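Because bfloat16 keeps the upper 16 bits of the single-precision layout, a float32 value can be converted by simply dropping the low 16 mantissa bits. The sketch below illustrates this with a plain truncating conversion (round-to-nearest variants also exist in practice); the function names are illustrative, not part of any standard API.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 pattern for a float, by truncation."""
    # Reinterpret the float as its 32-bit IEEE 754 pattern, then keep
    # only the upper 16 bits: sign, 8-bit exponent, top 7 mantissa bits.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand a 16-bit bfloat16 pattern back to a float."""
    # Pad the discarded low 16 mantissa bits with zeros.
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# 1.0 is exactly representable: its float32 pattern is 0x3F800000,
# so the bfloat16 pattern is the upper half, 0x3F80.
assert float32_to_bfloat16_bits(1.0) == 0x3F80
assert bfloat16_bits_to_float32(0x3F80) == 1.0
```

Values whose mantissa needs more than 7 bits lose precision under this conversion, but the sign and exponent, and therefore the representable range, survive intact.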