Revision as of 06:30, 24 September 2017
Infinity Fabric (IF) is a system of transmissions and controls that underpins AMD's recent microarchitectures for both CPUs (i.e., Zen) and graphics (e.g., Vega), as well as any additional accelerators they might add in the future. The interconnect was first announced and detailed in April 2017 by Mark Papermaster, AMD's SVP and CTO.
The Infinity Fabric consists of two separate communication planes: the Infinity Scalable Data Fabric (SDF) and the Infinity Scalable Control Fabric (SCF). The SDF is the primary means by which data flows around the system between endpoints (e.g., NUMA nodes, PHYs). The SDF may have dozens of connecting points hooking together components such as PCIe PHYs, memory controllers, USB hubs, and the various computing and execution units. The SDF is a superset of what was previously HyperTransport. The SCF handles the transmission of the many miscellaneous system control signals, including thermal and power management, test, security, and third-party IP. With those two interconnects, AMD can efficiently scale up many of the basic computing blocks.
Inter-/Intra- communication
A key feature of the coherent data fabric is that it's not limited to a single die and can extend over multiple dies in an MCP as well as multiple sockets over PCIe links (possibly even across independent systems, although that's speculation). There's also no constraint on the topology of the nodes connected over the fabric: communication can be done directly node-to-node, by island-hopping in a bus topology, or as a mesh topology system.
- Dual-socket, 2-Die multi-chip package (diagram)
- Dual-socket, 4-Die multi-chip package (diagram)
- Rates assume DDR4-2666 is used.
Note that there is a maximum of two hops between any two physical dies, whether both are in the same package or in different sockets.
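The two-hop claim can be checked with a short sketch. The topology assumed here — the four dies within a package fully connected, and each die linked to its counterpart die in the other socket (four inter-socket links in total) — is an assumption consistent with the link counts described below, not something the text states explicitly.

```python
# Verify the "at most two hops" claim for a dual-socket, 4-die-per-package
# system, assuming a full mesh within each package and one link from each
# die to its counterpart in the other socket.
from itertools import combinations
from collections import deque

dies = [(s, d) for s in range(2) for d in range(4)]  # (socket, die)

adj = {n: set() for n in dies}
for s in range(2):                       # intra-package: full mesh
    for a, b in combinations(range(4), 2):
        adj[(s, a)].add((s, b))
        adj[(s, b)].add((s, a))
for d in range(4):                       # inter-socket: die to counterpart
    adj[(0, d)].add((1, d))
    adj[(1, d)].add((0, d))

def hops(src, dst):
    # Breadth-first search for the shortest hop count between two dies.
    queue, seen = deque([(src, 0)]), {src}
    while queue:
        node, h = queue.popleft()
        if node == dst:
            return h
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, h + 1))

max_hops = max(hops(a, b) for a, b in combinations(dies, 2))
print(max_hops)  # 2
```

The worst case is a die reaching a non-counterpart die in the other socket: one hop to the local die that shares a link with the target, then one hop across the socket boundary.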
With the implementation of the Infinity Fabric in the Zen microarchitecture, intra-chip (i.e., die-to-die) communication over AMD's Global Memory Interconnect has a bi-directional bandwidth of 39.736 GB/s per 4B link. AMD uses single-ended signaling (as opposed to a differential PHY) along with zero termination power in order to increase efficiency when transmitting idles. This allows the CPU cores to make use of the spare power when workloads are not utilizing the entire fabric's bandwidth. Note that this is exactly the bandwidth of the dual-channel DDR4 memory operating at 2666 MT/s that should be used in the system; the bandwidth of the fabric is therefore directly tied to the DRAM transfer rate. In AMD's EPYC server processor family, which consists of four dies, this gives a bisection bandwidth of 158.944 GiB/s. At 2666 MT/s, the fabric can transfer a bit between two dies at a cost of roughly 2 pJ/bit. Viewed from the outside, the control fabrics of the individual dies can also be seen as a single extended control fabric, allowing the multiple dies to coordinate various controls such as power management. AMD claims that the bisection bandwidth achieved helps the MCM design behave more closely to a monolithic design.
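The intra-package arithmetic can be spot-checked with a short sketch. The per-link figure comes straight from the text; the count of four links crossing the bisection of a 4-die EPYC package is an assumption inferred from the quoted totals.

```python
# Back-of-the-envelope check of the die-to-die (intra-package) figures
# quoted above. per_link is from the text; bisection_links = 4 is an
# inference from the quoted 158.944 total.
per_link_gbs = 39.736      # bi-directional bandwidth per 4B GMI link
bisection_links = 4        # links crossing the bisection of a 4-die MCM

bisection_gbs = per_link_gbs * bisection_links
print(f"bisection bandwidth: {bisection_gbs:.3f} GB/s")  # 158.944
```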
Inter-chip communication (i.e., chip-to-chip, such as in a dual-socket server) faces greater restrictions (e.g., the number of external signals available). AMD uses four wide, high-bandwidth links running between the dies of the two sockets, giving a maximum of two hops between any requester and responder. Those links use traditional differential SerDes techniques, operating at 10.6 GT/s, in order to cope with the greater physical distance between the sockets. This network has a bi-directional bandwidth of 35.3 GiB/s per link, for a bisection bandwidth of 141.2 GiB/s (slightly less than the maximum implied by the operating rate, due to the additional overhead of CRC error detection, which accounts for roughly 10% of the total bandwidth). This works out to around 9 pJ/bit.
The processor keeps track of how active each of the links is and makes use of a dynamic SerDes link-width management mechanism based on bandwidth and workload requirements, allowing it to conserve power when full link width is not necessary.
References
- AMD Infinity Fabric introduction by Mark Papermaster, April 6, 2017
- AMD EPYC Tech Day, June 20, 2017