
NNP-I 1300
[Image: spring hill package (front).png]

General Info
  Designer: Intel
  Manufacturer: Intel
  Model Number: NNP-I 1300
  Market: Server, Edge
  Introduction: November 12, 2019 (announced), November 12, 2019 (launched)

General Specs
  Family: NNP
  Series: NNP-I

Microarchitecture
  Microarchitecture: Spring Hill
  Process: 10 nm
  Transistors: 8,500,000,000
  Technology: CMOS
  Die: 239 mm²
  Cores: 24

Electrical
  TDP: 75 W

Packaging
  [Image: spring hill package (back).png]

NNP-I 1300 is an inference neural processor designed by Intel Nervana and introduced in late 2019. Fabricated on Intel's 10 nm process and based on the Spring Hill microarchitecture, the NNP-I 1300 comes in a PCIe Gen 3.0 accelerator card form factor with two NPU chips, each with all 12 ICEs enabled, for a combined 24 ICEs and a peak performance of 170 TOPS at a TDP of 75 W.

Peak Performance

The NNP-I 1300 has a peak performance of 170 TOPS (Int8).
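
As a rough sanity check, the headline figure can be broken down per ICE and per watt using only the numbers stated on this page; the per-ICE throughput and efficiency below are derived values, not published specifications, and the per-ICE MAC count and clock are not listed here.

    # Rough arithmetic on the published figures (170 TOPS Int8, 24 ICEs, 75 W TDP).
    # The derived per-ICE and per-watt numbers are illustrative, not official specs.
    peak_tops = 170        # Int8 peak, whole card (2 chips, 24 ICEs total)
    ices = 24              # total Inference Compute Engines across both chips
    tdp_w = 75             # card TDP in watts

    tops_per_ice = peak_tops / ices     # ~7.08 TOPS per ICE
    tops_per_watt = peak_tops / tdp_w   # ~2.27 TOPS/W at TDP

    print(f"{tops_per_ice:.2f} TOPS/ICE, {tops_per_watt:.2f} TOPS/W")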

Cache

Main article: Spring Hill § Cache
  • 3 MiB of tightly-coupled scratchpad memory
    • 12 x 256 KiB/core
  • 48 MiB Deep SRAM
    • 4 MiB/ICE
  • 24 MiB LLC
    • 3 MiB/slice
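
The per-chip totals above follow directly from the per-unit sizes; a minimal check is sketched below, with the 8-slice LLC count inferred from 24 MiB at 3 MiB per slice rather than stated on this page.

    # Per-chip cache totals implied by the per-unit figures above.
    KIB, MIB = 1024, 1024 ** 2

    scratchpad = 12 * 256 * KIB   # 12 ICEs x 256 KiB TCM      -> 3 MiB
    deep_sram  = 12 * 4 * MIB     # 12 ICEs x 4 MiB deep SRAM  -> 48 MiB
    llc        = 8 * 3 * MIB      # 8 slices x 3 MiB (slice count inferred) -> 24 MiB

    for name, size in [("scratchpad", scratchpad), ("deep SRAM", deep_sram), ("LLC", llc)]:
        print(f"{name}: {size / MIB:.0f} MiB")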

Memory Controller

Integrated Memory Controller
  Max Type: LPDDR4X-4200
  Supports ECC: Yes
  Max Memory: 32 GiB
  Controllers: 4
  Width: 16
  Max Bandwidth: 67.2 GB/s (62.585 GiB/s)
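
The quoted bandwidth is consistent with LPDDR4X-4200 driving a 128-bit aggregate interface; the exact channel split across the four controllers is an assumption in the sketch below.

    # Bandwidth check: LPDDR4X-4200 over an assumed 128-bit aggregate interface.
    transfer_rate_mts = 4200      # MT/s (LPDDR4X-4200)
    bus_width_bits = 128          # assumed total width across the 4 controllers

    gb_per_s = transfer_rate_mts * bus_width_bits / 8 / 1000   # decimal GB/s
    gib_per_s = gb_per_s * 1e9 / 2 ** 30                       # binary GiB/s

    print(f"{gb_per_s:.1f} GB/s ({gib_per_s:.3f} GiB/s)")      # 67.2 GB/s (62.585 GiB/s)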

Die

Main article: Spring Hill § Die
  • 8,500,000,000 transistors
  • 239 mm² die size
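
For scale, the two figures above imply an average density of roughly 35.6 million transistors per mm²; this is a derived number, not a published one.

    # Average transistor density implied by the stated die figures.
    transistors = 8_500_000_000
    die_area_mm2 = 239

    mtr_per_mm2 = transistors / die_area_mm2 / 1e6   # ~35.6 MTr/mm^2
    print(f"{mtr_per_mm2:.1f} MTr/mm^2")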
