Brain floating-point format (bfloat16)
Revision as of 10:25, 7 November 2018 by David (talk | contribs) (bfloat16)

Brain floating-point format (bfloat16) is a 16-bit encoding format for floating-point numbers. It is equivalent to a standard single-precision (float32) value with the mantissa field truncated from 23 bits to 7 bits, while retaining the full 8-bit exponent and therefore the same dynamic range. Bfloat16 is designed for use in hardware that accelerates machine learning algorithms, where dynamic range tends to matter more than mantissa precision.
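Because bfloat16 shares the sign and exponent layout of float32, conversion can be as simple as dropping the low 16 bits of the float32 bit pattern. The sketch below illustrates this truncation in Python; the function names are illustrative, and note that truncation is only one conversion strategy (some hardware instead uses round-to-nearest-even).

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Convert a float32 value to a bfloat16 bit pattern by truncation."""
    # Reinterpret the float as its 32-bit IEEE 754 single-precision pattern.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Keep the sign bit, the 8 exponent bits, and the top 7 mantissa bits;
    # the low 16 mantissa bits are simply discarded.
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 (zero-fill the mantissa)."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# Values exactly representable in 7 mantissa bits round-trip unchanged.
print(hex(float32_to_bfloat16_bits(1.0)))          # bit pattern of bfloat16 1.0
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(-2.0)))
```

Values that need more than 7 mantissa bits (e.g. 3.14159) lose precision in the conversion, which is the trade-off bfloat16 makes for its halved storage cost.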