Brain floating-point format (bfloat16)
Brain floating-point format (bfloat16) is a 16-bit number encoding format for representing floating-point values. It is equivalent to a standard IEEE 754 single-precision floating-point value with the mantissa field truncated from 23 bits to 7 bits, keeping the sign bit and the full 8-bit exponent, so bfloat16 covers the same dynamic range as single precision at reduced precision. Bfloat16 is designed for use in hardware that accelerates machine learning algorithms.
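
The relationship to single precision can be made concrete with a small sketch (not from the original article): because bfloat16 is simply the top 16 bits of a float32 bit pattern, conversion can be done by dropping the low 16 mantissa bits. The C code below assumes plain truncation rather than round-to-nearest, which real hardware may use instead.

 #include <stdint.h>
 #include <string.h>
 #include <stdio.h>
 
 /* Convert an IEEE 754 single-precision float to bfloat16 by
    truncating the low 16 bits of the mantissa (no rounding). */
 static uint16_t float_to_bfloat16(float f) {
     uint32_t bits;
     memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bit pattern */
     return (uint16_t)(bits >> 16);   /* keep sign, exponent, top 7 mantissa bits */
 }
 
 /* Widen a bfloat16 back to single precision by zero-filling
    the 16 truncated mantissa bits. */
 static float bfloat16_to_float(uint16_t b) {
     uint32_t bits = (uint32_t)b << 16;
     float f;
     memcpy(&f, &bits, sizeof f);
     return f;
 }
 
 int main(void) {
     float x = 3.14159265f;
     uint16_t b = float_to_bfloat16(x);
     printf("float32: %f -> bfloat16: 0x%04X -> float32: %f\n",
            x, b, bfloat16_to_float(b));
     return 0;
 }

Round-tripping through bfloat16 loses only the low mantissa bits; the exponent is untouched, which is why values keep their magnitude but lose fine precision.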