Machine Learning Unit (MLU) - Cambricon
MLU | |
Developer | Cambricon |
Manufacturer | TSMC |
Type | Neural Processors |
Introduction | Nov 7, 2017 (announced), May 3, 2018 (launch) |
ISA | MLU |
Word size | 64 bit (8 octets, 16 nibbles) |
Process | 16 nm (0.016 μm, 1.6e-5 mm) |
Technology | CMOS |
Clock | 1,000 MHz - 1,300 MHz |
Machine Learning Unit (MLU) is a family of neural processors designed by Cambricon.
Overview
Announced in late 2017, the MLU is a family of neural processors designed by Cambricon for cloud-based workloads, covering both inference and training. In contrast to Cambricon's mobile and edge computing IP cores, these processors have a higher power envelope and are designed for much higher performance.
Models
This section is empty.
See also
- Google's TPU