From WikiChip
Cloud AI - Qualcomm
Revision as of 04:52, 15 September 2021 by David (talk | contribs) (Cloud AI 100)

Qualcomm Cloud AI
Developer: Qualcomm
Type: System on chip
Introduction: April 9, 2019 (announced); March 2021 (launched)
Architecture: Multi-core neural processor
ISA: Custom
µarch: Cloud AI
Process: 7 nm
Technology: CMOS

Cloud AI is a family of neural processors designed by Qualcomm for the edge and data-center markets. It was announced in April 2019 and launched in March 2021.

Cloud AI 100

See also: Cloud AI 100 μArch
[Photo: Qualcomm Cloud AI 100]

Launched in early 2021, the Cloud AI 100 series is Qualcomm's first series of dedicated AI inference processors. Fabricated on a 7-nanometer process, these processors range from 70 TOPS (Int8) / 35 teraFLOPS (FP16) up to 400 TOPS / 200 teraFLOPS and are offered as both PCIe cards and M.2 modules (with and without heatsinks).
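Across all three form factors the Int8 throughput is exactly twice the FP16 throughput, which is consistent with MAC units that perform two Int8 operations in place of one FP16 operation. Qualcomm has not publicly detailed the datapath, so this is only an inference from the published figures; the sketch below simply checks the ratio:

```python
# Peak-compute figures (Int8 TOPS, FP16 teraFLOPS) per form factor,
# taken from the table below. The 2:1 ratio suggests (but does not
# prove) a shared MAC array running Int8 at double rate.
sku_peak = {
    "PCIe (HHHL)": (400, 200),
    "Dual M.2":    (200, 100),
    "Dual M.2e":   (70, 35),
}

for name, (int8_tops, fp16_tflops) in sku_peak.items():
    ratio = int8_tops / fp16_tflops
    print(f"{name}: Int8/FP16 ratio = {ratio}")  # 2.0 for every SKU
```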

Form Factor       PCIe (HHHL)           Dual M.2              Dual M.2e (no heatsink)
Size              68.9 mm × 169.5 mm    46 mm × 110 mm        46 mm × 110 mm
TDP               75 W                  15-25 W               15-25 W
Peak Compute
  Int8            400 TOPS              200 TOPS              70 TOPS
  FP16            200 teraFLOPS         100 teraFLOPS         35 teraFLOPS
Configuration
  SRAM            144 MiB               144 MiB               72 MiB
  DRAM (w/ ECC)   16 GiB LPDDR4x-4266   32 GiB LPDDR4x-4266   8 GiB LPDDR4x-4266
  DRAM B/W        136.5 GB/s            136.5 GB/s            68.25 GB/s
  Host Interface  PCIe Gen 4 (x8)       PCIe Gen 4 (x8)       PCIe Gen 3 (x4)
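The DRAM bandwidth figures in the table follow directly from the LPDDR4x-4266 transfer rate once a bus width is assumed: a 256-bit interface at 4266 MT/s yields roughly 136.5 GB/s, and a 128-bit interface yields roughly 68.25 GB/s. The bus widths here are inferences from the table, not figures Qualcomm has published:

```python
# Peak DRAM bandwidth = transfer rate (MT/s) x bus width (bytes).
# Bus widths are assumptions back-calculated from the table's
# bandwidth figures; Qualcomm does not publish them.
def peak_bw_gbs(transfer_rate_mts: int, bus_width_bits: int) -> float:
    """Return peak bandwidth in GB/s (decimal gigabytes)."""
    return transfer_rate_mts * 1e6 * (bus_width_bits // 8) / 1e9

# 256-bit LPDDR4x-4266 -> ~136.5 GB/s (PCIe card and Dual M.2)
print(peak_bw_gbs(4266, 256))
# 128-bit LPDDR4x-4266 -> ~68.25 GB/s (Dual M.2e)
print(peak_bw_gbs(4266, 128))
```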

