Revision as of 02:04, 12 June 2019

NNP
Developer: Intel
Manufacturer: Intel, TSMC
Type: Neural Processors
Introduction: May 23, 2018 (announced), 2019 (launch)
Process: 28 nm, 16 nm, 10 nm
Technology: CMOS
Package: PCIe x16 Gen 3 Card, OCP OAM, M.2

Neural Network Processors (NNP) are a family of neural processors designed by Intel Nervana for both inference and training.

Overview

Neural network processors (NNP) are a family of neural processors designed by Intel for the acceleration of artificial intelligence workloads. The design originated at Nervana prior to the company's acquisition by Intel. Intel eventually productized those chips starting with its second-generation designs.

The NNP family comprises two separate series - NNP-I for inference and NNP-L for training.

Learning (NNP-L)

Lake Crest

Main article: Lake Crest µarch

The first generation of NNPs were based on the Lake Crest microarchitecture. Manufactured on TSMC's 28 nm process, those chips were never productized. Samples were used for customer feedback and the design mostly served as a software development vehicle for their follow-up design.

NNP L-1000 (Spring Crest)

NNP L-1000
Main article: Spring Crest µarch

Second-generation NNP-Ls are branded as the NNP L-1000 series and are the first chips to be productized. Fabricated on TSMC's 16 nm process and based on the Spring Crest microarchitecture, those chips feature a number of enhancements and refinements over the prior generation, including a shift from Flexpoint to Bfloat16. Intel claims that these chips have about 3-4x the training performance of the first generation. Those chips come with 32 GiB of HBM2 memory across four stacks and are packaged in two forms: a PCIe x16 Gen 3 card and an OCP OAM module.
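The Bfloat16 format mentioned above is simple enough to sketch: it keeps float32's sign bit and full 8-bit exponent but only the top 7 mantissa bits, so a value can be converted by dropping the low 16 bits of its float32 encoding. The following is a minimal illustration (not Nervana/Intel code; real hardware typically rounds to nearest-even rather than truncating as done here):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to a 16-bit bfloat16 pattern.

    bfloat16 = sign (1 bit) + exponent (8 bits) + mantissa (7 bits),
    i.e. the top half of a float32, so truncation is a 16-bit shift.
    """
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 by zero-padding."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Dynamic range matches float32 (same 8-bit exponent), so very large
# activations/gradients survive conversion...
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.0e38)))
# ...but precision drops to roughly 3 decimal digits (7 mantissa bits).
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.2345678)))
```

The identical exponent field is what makes Bfloat16 attractive for training relative to fixed-range formats such as Flexpoint: no per-tensor scaling is needed to avoid overflow or underflow.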


Preliminary Data! Information presented in this article deals with future products, data, features, and specifications that have yet to be finalized, announced, or released. Information may be incomplete and can change by final release.

Inference (NNP-I)

NNP I-1000 Series

NNP I-1000
Main article: Spring Hill µarch

The NNP I-1000 series comprises Intel's first chips designed specifically for the acceleration of inference workloads. Fabricated on Intel's 10 nm process, those chips are based on Spring Hill and incorporate a Sunny Cove core. Those devices come in an M.2 form factor.

