== Architecture ==
=== Block Diagram ===
:[[File:habana goya block diagram.svg|400px]]

== Overview ==
Goya is designed as a microarchitecture for the [[acceleration]] of inference. Since the target market is the data center, the [[thermal design power]] of these chips is relatively high, at around 200 W. Goya relies on [[PCIe]] 4.0 to interface with a host processor. Habana's software stack compiles models and their associated instructions into independent recipes which can then be sent to the accelerator for execution. The design itself takes a heterogeneous approach, comprising a large General Matrix Multiply (GMM) engine, Tensor Processor Cores (TPCs), and a large shared memory pool.
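As a rough functional illustration of the heterogeneous split described above, the following self-contained [[C]] sketch models a small fully-connected layer in two stages: an 8-bit integer matrix multiply with 32-bit accumulation standing in for the GMM engine, followed by an element-wise activation and requantization standing in for a TPC. This is plain host-side C for illustration only; it is not Goya device code, it does not use Habana's software stack, and all sizes and values are made up.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Functional sketch of the heterogeneous split: the matrix multiply
 * would map onto the GMM engine, while the following element-wise
 * activation would run on a TPC. Plain host C for illustration only,
 * not Goya device code. */

#define M 2   /* batch           */
#define K 4   /* input features  */
#define N 3   /* output features */

/* "GMM engine" stage: int8 x int8 matrix multiply, int32 accumulate */
static void gemm_i8(const int8_t a[M][K], const int8_t b[K][N], int32_t c[M][N])
{
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++) {
            int32_t acc = 0;
            for (int k = 0; k < K; k++)
                acc += (int32_t)a[i][k] * (int32_t)b[k][j];
            c[i][j] = acc;
        }
}

/* "TPC" stage: element-wise ReLU and requantization back to int8 */
static void relu_requant_i8(const int32_t c[M][N], float scale, int8_t out[M][N])
{
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++) {
            int32_t v = c[i][j] > 0 ? c[i][j] : 0;      /* ReLU            */
            int32_t q = (int32_t)(v * scale + 0.5f);    /* rescale to int8 */
            out[i][j] = (int8_t)(q > 127 ? 127 : q);
        }
}

int main(void)
{
    int8_t a[M][K] = {{1, -2, 3, 4}, {5, 6, -7, 8}};
    int8_t b[K][N] = {{1, 0, 2}, {0, 1, 1}, {1, 1, 0}, {2, 0, 1}};
    int32_t acc[M][N];
    int8_t out[M][N];

    gemm_i8(a, b, acc);               /* GMM-engine stage        */
    relu_requant_i8(acc, 0.1f, out);  /* TPC stage (activation)  */

    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            printf("out[%d][%d] = %d\n", i, j, out[i][j]);
    return 0;
}
</syntaxhighlight>

Splitting the work this way mirrors the division of labor between the GMM engine and the TPCs described above.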

=== Tensor Processor Cores (TPC) ===
[[File:habana hl-100.jpg|right|thumb|{{habana|HL|HL-100/102}} PCIe Card]]
 
There are eight TPCs. Each TPC also incorporates its own local memory but omits caches. The on-die caches and memory can be either hardware-managed or fully software-managed, allowing the compiler to optimize data residency and reduce [[data movement|movement]]. Each of the individual TPCs is a [[VLIW]] DSP design that has been optimized for AI applications, including [[AI]]-specific [[instructions]] and operations. The TPCs are designed for flexibility and can be programmed in plain [[C]]. The TPC supports mixed-precision operations, including 8-bit, 16-bit, and 32-bit SIMD vector operations for both [[integer]] and [[floating-point]] data. This allows the tolerance for accuracy loss to be controlled on a per-model basis by the programmer. Goya offers both coarse-grained precision control and fine-grained control down to the tensor level.
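The per-tensor precision control described above can be pictured with a small, self-contained [[C]] example that applies generic symmetric quantization: each tensor gets its own scale derived from its largest magnitude, values are rounded to 8-bit integers, and the dequantized results can be compared against the originals to judge the accuracy loss. This is a generic sketch of the concept rather than TPC intrinsics or Habana tooling, and the tensor contents are made up.

<syntaxhighlight lang="c">
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Generic symmetric per-tensor quantization sketch. Each tensor gets
 * its own scale, so precision can be decided tensor by tensor: keep
 * the data in 8-bit form where the accuracy loss is acceptable, or
 * fall back to 16-/32-bit types where it is not. Not Habana code.   */

#define LEN 8

/* Per-tensor scale: map the largest magnitude onto the int8 range. */
static float tensor_scale(const float *x, int n)
{
    float max_abs = 0.0f;
    for (int i = 0; i < n; i++)
        if (fabsf(x[i]) > max_abs)
            max_abs = fabsf(x[i]);
    return max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
}

/* Quantize a float tensor to int8 using the per-tensor scale. */
static void quantize_i8(const float *x, int n, float scale, int8_t *q)
{
    for (int i = 0; i < n; i++) {
        long v = lroundf(x[i] / scale);
        if (v > 127)  v = 127;
        if (v < -128) v = -128;
        q[i] = (int8_t)v;
    }
}

int main(void)
{
    float  x[LEN] = {0.02f, -1.3f, 0.75f, 2.4f, -0.6f, 0.0f, 1.9f, -2.2f};
    int8_t q[LEN];

    float scale = tensor_scale(x, LEN);   /* per-tensor scale          */
    quantize_i8(x, LEN, scale, q);

    for (int i = 0; i < LEN; i++)         /* compare with the original */
        printf("x=% .3f  q=%4d  dequant=% .3f\n", x[i], q[i], q[i] * scale);
    return 0;
}
</syntaxhighlight>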

== Scalability ==
{{empty section}}

== Bibliography ==
* {{bib|hc|31|Habana}}
* Habana, AI Hardware Summit 2019
* Habana, Linley Fall Processor Conference 2019
 
== See also ==
* {{\\|Gaudi}}
* {{habana|HL}} series
