From WikiChip
== Overview ==
A neural processing unit (NPU) is a well-partitioned circuit that comprises all the control and arithmetic logic components necessary to execute [[machine learning]] algorithms. NPUs are designed to accelerate common machine learning tasks such as image classification, machine translation, object detection, and various other predictive models. NPUs may be part of a large SoC, a plurality of NPUs may be instantiated on a single chip, or they may be part of a dedicated neural-network accelerator.
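At their core, most NPUs spend the bulk of their silicon on large arrays of multiply-accumulate (MAC) units, typically operating on low-precision integers with wider integer accumulators. As a rough illustrative sketch (not any specific vendor's datapath), the dense-layer arithmetic such a MAC array performs might look like:

```python
import numpy as np

def int8_dense_layer(activations, weights, scale):
    """Illustrative sketch of the math an NPU's MAC array performs.

    activations: (n,) int8 input vector
    weights: (m, n) int8 weight matrix
    scale: float requantization factor back to the real-valued domain

    Accumulation is widened to int32, a common choice in integer
    NPU datapaths to avoid overflow across many MAC steps.
    """
    acc = weights.astype(np.int32) @ activations.astype(np.int32)
    return acc.astype(np.float32) * scale

# Toy example: two output neurons, three int8 inputs.
x = np.array([1, -2, 3], dtype=np.int8)
w = np.array([[1, 0, 2],
              [-1, 1, 1]], dtype=np.int8)
print(int8_dense_layer(x, w, 0.5))  # [3.5, 0.0]
```

The function name, shapes, and the 0.5 scale are hypothetical; real NPUs differ widely in supported precisions, array dimensions, and quantization schemes, but the MAC-heavy structure above is what the accelerators listed below have in common.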
=== Classification ===
* [[Amazon]]: {{amazon|AWS Inferentia}}
* [[Apple]]: Neural Engine
* [[Arm]]: {{arm|ML Processor}}
* [[Baidu]]: {{baidu|Kunlun}}
* [[Intel]]: {{nervana|NNP}}, {{movidius|Myriad}}, {{mobileye|EyeQ}}, {{intel|GNA}}
* [[Kendryte]]: K210
* [[Mythic]]: {{mythic|IPU}}
* [[NationalChip]]: Neural Processing Unit (NPU)
* [[NEC]]: {{nec|SX-Aurora}} (VPU)
* [[Nvidia]]: {{nvidia|NVDLA|l=arch}}, {{nvidia|Xavier}}
* [[Samsung]]: Neural Processing Unit (NPU)
* [[Rockchip]]: RK3399Pro (NPU)
* [[Amlogic]]: Khadas VIM3 (NPU)
* [[Synaptics]]: SyNAP (NPU)
* [[Tesla (car company)|Tesla]]: {{teslacar|FSD Chip}}
* [[Wave Computing]]: DPU
* [[Brainchip]]: Akida (NPU & NPEs)
}}
{{expand list}}