Revision as of 02:55, 1 February 2020

NNP
Developer: Intel
Manufacturer: Intel, TSMC
Type: Neural Processors
Introduction: May 23, 2018 (announced), 2019 (launch)
Process: 28 nm, 16 nm, 10 nm
Technology: CMOS
Package: PCIe x16 Gen 3 Card, OCP OAM, M.2

Neural Network Processors (NNP) are a family of neural processors designed by Intel Nervana for both inference and training.

Overview

Neural network processors (NNP) are a family of neural processors designed by Intel for the acceleration of artificial intelligence workloads. The design originated at Nervana prior to the company's acquisition by Intel. Intel eventually productized those chips starting with their second-generation designs in late 2019.

The NNP family comprises two separate series - NNP-I for inference and NNP-T for training.

Learning (NNP-T)

Lake Crest

Main article: Lake Crest µarch

The first generation of NNPs was based on the Lake Crest microarchitecture. Manufactured on TSMC's 28 nm process, those chips were never productized. Samples were used for customer feedback, and the design mostly served as a software development vehicle for the follow-up design.

T-1000 (Spring Crest)

NNP T-1000
Main article: Spring Crest µarch

Second-generation NNP-Ts are branded as the NNP T-1000 series and are the first chips to be productized. Fabricated on TSMC's 16 nm process and based on the Spring Crest microarchitecture, those chips feature a number of enhancements and refinements over the prior generation, including a shift from Flexpoint to Bfloat16. Intel claims that these chips have about 3-4x the training performance of the first generation. Those chips come with 32 GiB of HBM2 memory in four stacks and are packaged in two forms - PCIe x16 Gen 3 Card and an OCP OAM.
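The Bfloat16 format mentioned above keeps float32's sign bit and full 8-bit exponent but truncates the mantissa to 7 bits, so its range matches float32 at reduced precision. A minimal sketch of the conversion (simple truncation of the low 16 bits; actual hardware typically uses round-to-nearest-even, which this does not implement):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping only the top 16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 by zero-filling."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# 3.14159 survives with ~2-3 significant decimal digits:
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
print(approx)  # 3.140625
```

Because the exponent field is unchanged, values like powers of two and small integers round-trip exactly; only mantissa precision is lost.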

Preliminary Data! Information presented in this article deals with future products, data, features, and specifications that have yet to be finalized, announced, or released. Information may be incomplete and can change by final release.

Inference (NNP-I)

I-1000 Series (Spring Hill)

NNP I-1000
Main article: Spring Hill µarch

The NNP I-1000 series is Intel's first series of devices designed specifically for the acceleration of inference workloads. Fabricated on Intel's 10 nm process, these chips are based on Spring Hill and incorporate a Sunny Cove core along with twelve specialized inference acceleration engines. The overall SoC design borrows a considerable amount of IP from Ice Lake. Those devices come in M.2 and PCIe form factors.

NNP-I Ruler
NNP-I Ruler Chassis
  • Proc: 10 nm process
  • Mem: 4x32b LPDDR4X-4200
  • TDP: 10-50 W
  • Eff: 2.0-4.8 TOPS/W
  • Perf: 48-92 TOPS (Int8)
List of NNP-I-based Processors
Main processor

Model       Launched          TDP   EUs  Peak Perf (Int8)
NNP-I 1100  12 November 2019  12 W  12   50 TOPS
NNP-I 1300  12 November 2019  75 W  24   170 TOPS

Count: 2
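As a quick sanity check (my own arithmetic, not a figure from the article), dividing each model's peak Int8 throughput by its TDP recovers per-device efficiency, and both results land inside the 2.0-4.8 TOPS/W range quoted above:

```python
# Back-of-envelope efficiency check using the table's TDP and peak-perf values.
chips = {
    "NNP-I 1100": {"tdp_w": 12, "peak_tops": 50},
    "NNP-I 1300": {"tdp_w": 75, "peak_tops": 170},
}
for name, spec in chips.items():
    eff = spec["peak_tops"] / spec["tdp_w"]  # TOPS per watt
    assert 2.0 <= eff <= 4.8, "outside the quoted efficiency range"
    print(f"{name}: {eff:.2f} TOPS/W")
# NNP-I 1100: 4.17 TOPS/W, NNP-I 1300: 2.27 TOPS/W
```

The lower-TDP M.2 part sits near the top of the efficiency range, while the 75 W card trades efficiency for absolute throughput.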

Intel also announced NNP-I in an EDSFF (ruler) form factor which was designed to provide the highest compute density possible for inference. Intel hasn't announced specific models. The rulers were planned to come with a 10-35 W TDP range. 32 NNP-Is in a ruler form factor can be packed in a single 1U rack.
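From the figures above (32 rulers per 1U, each with a planned 10-35 W TDP), the aggregate power envelope of a fully populated 1U chassis works out as follows; this is simple back-of-envelope arithmetic, not a published Intel figure:

```python
# Hypothetical 1U chassis power envelope, assuming all 32 slots populated.
rulers_per_1u = 32
tdp_min_w, tdp_max_w = 10, 35  # planned per-ruler TDP range

chassis_min_w = rulers_per_1u * tdp_min_w  # 320 W
chassis_max_w = rulers_per_1u * tdp_max_w  # 1120 W
print(f"1U chassis power envelope: {chassis_min_w}-{chassis_max_w} W")
```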

See also

  • Intel DL Boost
