NVDLA - Microarchitectures - Nvidia
Revision as of 00:51, 3 September 2018 by David

NVDLA µarch
General Info
Arch Type: NPU
Designer: Nvidia
Manufacturer: TSMC
Introduction: 2018

NVDLA (NVIDIA Deep Learning Accelerator) is a neural processor microarchitecture designed by Nvidia. Originally developed for their own Xavier SoC, the architecture has since been made open source.

History

NVDLA was originally designed for Nvidia's own Xavier SoC. Following the Xavier implementation, Nvidia open sourced the architecture and made it more parameterizable, giving designers a choice of tradeoffs between power, performance, and area.

Architecture

Block Diagram

[NVDLA block diagram]

Overview

NVDLA is a microarchitecture designed by Nvidia for the acceleration of deep learning workloads. Since the original implementation targeted Nvidia's own Xavier SoC, the architecture is specifically optimized for convolutional neural networks (CNNs), as the main workloads involve images and videos, although other types of networks are also supported.

At a high level, NVDLA stores both the weights and the input activations in a convolution buffer. Both are fed into the convolution core, which consists of a large array of multiply-accumulate (MAC) units. The final result is sent to a post-processing unit, which writes it back to memory. The processing elements are wrapped by control logic as well as a memory interface (DMA).
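The dataflow above can be sketched in software. This is a minimal, illustrative model only, not NVDLA's actual interface or microarchitecture: `convolve` stands in for the MAC array computing a convolution over buffered data, and `post_process` stands in for the post-processing step (here a bias add plus ReLU, a common but assumed choice).

```python
# Illustrative sketch of the NVDLA-style dataflow (names and post-processing
# choice are assumptions, not NVDLA's real API): activations and weights are
# read from a buffer, a MAC loop accumulates partial products, and a
# post-processing step transforms the result before write-back.

def convolve(activations, weights):
    """2D 'valid' convolution via explicit multiply-accumulate loops."""
    ah, aw = len(activations), len(activations[0])
    kh, kw = len(weights), len(weights[0])
    out = [[0] * (aw - kw + 1) for _ in range(ah - kh + 1)]
    for oy in range(ah - kh + 1):
        for ox in range(aw - kw + 1):
            acc = 0  # one MAC unit's accumulator
            for ky in range(kh):
                for kx in range(kw):
                    acc += activations[oy + ky][ox + kx] * weights[ky][kx]
            out[oy][ox] = acc
    return out

def post_process(fmap, bias=0):
    """Bias add + ReLU, standing in for the post-processing unit."""
    return [[max(0, v + bias) for v in row] for row in fmap]

# Toy 3x3 input and 2x2 kernel; result is written "back to memory" (a list).
acts = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
wts = [[1, 0], [0, 1]]
result = post_process(convolve(acts, wts))  # → [[6, 8], [12, 14]]
```

In real hardware all MACs in the array run in parallel over tiles of the buffer; the nested loops here only show what each accumulator computes.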

Memory Interface

This section is empty; you can help add the missing info by editing this page.

Convolution Core

This section is empty; you can help add the missing info by editing this page.

Bibliography

  • IEEE Hot Chips 30 Symposium (HCS) 2018.