Difference between revisions of "bitmain/sophon/bm1680"
Edit summary: (bm1680)

Line 1:
  {{bitmain title|Sophon BM1680}}
− {{mpu}}
+ {{mpu
+ |name=Sophon BM1680
+ |no image=No
+ |designer=Bitmain
+ |manufacturer=TSMC
+ |model number=BM1680
+ |market=Artificial Intelligence
+ |first announced=October 25, 2017
+ |family=Sophon
+ |process=28 nm
+ |technology=CMOS
+ |tdp=41 W
+ |temp min=0 °C
+ |temp max=125 °C
+ |package module 1={{packages/bitmain/fcbga-1599}}
+ }}
  '''Sophon BM1680''' is a [[neural processor]] designed by [[Bitmain]] capable of performing both network inference and network training.
Revision as of 23:41, 6 November 2017
Sophon BM1680 is a neural processor designed by Bitmain capable of performing both network inference and network training.