| Qualcomm Cloud AI | |
|---|---|
| Developer | Qualcomm |
| Type | System on chip |
| Introduction | April 9, 2019 (announced), March 2021 (launched) |
| Architecture | Multi-core neural processor |
| ISA | Custom |
| µarch | Cloud AI |
| Process | 7 nm |
| Technology | CMOS |
Cloud AI is a family of neural processors designed by Qualcomm for the edge and data center markets and introduced in early 2021.
Cloud AI 100
- Main article: Cloud AI
Launched in early 2021, the Cloud AI 100 series is Qualcomm's first series of AI inference processors. Fabricated on a 7-nanometer process, these processors range from 70 TOPS (INT8) / 35 teraFLOPS (FP16) to 400 TOPS / 200 teraFLOPS and are offered as both PCIe cards and M.2 modules (with and without heatsinks).
| Form Factor | PCIe (HHHL) | Dual M.2 | Dual M.2e (No Heatsink) |
|---|---|---|---|
| Size | 68.9 mm x 169.5 mm | 46 mm x 110 mm | 46 mm x 110 mm |
| TDP | 75 W | 15-25 W | 15-25 W |
| Peak Compute | | | |
| Int8 | 400 TOPS | 200 TOPS | 70 TOPS |
| FP16 | 200 teraFLOPS | 100 teraFLOPS | 35 teraFLOPS |
| Configuration | | | |
| SRAM | 144 MiB | 144 MiB | 72 MiB |
| DRAM (w/ECC) | 16 GiB LPDDR4x-4266 | 32 GiB LPDDR4x-4266 | 8 GiB LPDDR4x-4266 |
| DRAM B/W | 136.5 GB/s | 136.5 GB/s | 68.25 GB/s |
| Host Interface | PCIe Gen 4 (x8) | PCIe Gen 4 (x8) | PCIe Gen 3 (x4) |
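The DRAM bandwidth row follows directly from the memory configuration: LPDDR4x-4266 transfers 4,266 MT/s per pin, so peak bandwidth is the transfer rate times the bus width in bytes. Below is a minimal sketch of that arithmetic in Python; the bus widths (256-bit for the PCIe and Dual M.2 cards, 128-bit for the Dual M.2e) are assumptions inferred from the table's figures, not values stated on this page.

```python
# Sketch: reproduce the table's peak DRAM bandwidth figures.
# Assumption (not stated on this page): 256-bit LPDDR4x bus on the
# PCIe (HHHL) and Dual M.2 cards, 128-bit on the Dual M.2e.

TRANSFERS_PER_SEC = 4266e6  # LPDDR4x-4266: 4,266 MT/s per pin

def dram_bandwidth_gb_s(bus_width_bits: int) -> float:
    """Peak DRAM bandwidth in GB/s: transfer rate x bus width in bytes."""
    return TRANSFERS_PER_SEC * (bus_width_bits / 8) / 1e9

for form_factor, bus_bits in [("PCIe (HHHL)", 256),
                              ("Dual M.2", 256),
                              ("Dual M.2e", 128)]:
    print(f"{form_factor}: {dram_bandwidth_gb_s(bus_bits):.1f} GB/s")

# Output:
# PCIe (HHHL): 136.5 GB/s
# Dual M.2: 136.5 GB/s
# Dual M.2e: 68.3 GB/s  (68.256 GB/s exactly; the table lists 68.25)
```

Note also that in every configuration the peak INT8 figure is exactly twice the FP16 figure (e.g., 400 TOPS vs. 200 teraFLOPS), suggesting the same MAC datapath executes two INT8 operations per FP16 operation.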