From WikiChip
User talk:Cem
:::I mean, on the technical level those 12 lanes should be for whatever purpose the motherboard manufacturers want to use them. But AMD specifically designated 48 lanes to be dedicated to multiple GPUs; that was the intention behind the edit I made. The other 12 lanes are designated for NVMe in all of the AMD reference designs and spec guides that were given out to partners. I'm not aware of any motherboard that doesn't follow this, or that routes another x8 of lanes to the GPUs beyond the existing 48. From a design point of view, if you allocate all 56 lanes to the GPUs for a "deep learning beast", you're going to starve it with just x4 NVMe plus whatever the chipset delivers (which is less than x4), so you're not really gaining anything. I also doubt such a workload could consistently saturate 48 lanes anyway. --[[User:David|David]] ([[User talk:David|talk]]) 17:52, 4 September 2017 (EDT)
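:::A minimal sketch of the lane arithmetic being discussed above. The 64-lane total and the x4 chipset link are assumptions consistent with Threadripper/X399; the split names are illustrative, not AMD terminology:

```python
# Lane-budget sketch for a hypothetical 64-lane Threadripper-style CPU.
# ASSUMPTION: 64 total lanes with an x4 chipset link, leaving 60 usable,
# matching the 48 + 12 split described in the comment above.
TOTAL_LANES = 64
CHIPSET_LINK = 4                      # lanes reserved for the chipset link
usable = TOTAL_LANES - CHIPSET_LINK   # 60 lanes for slots and devices

# AMD reference split: 48 lanes for GPUs, 12 for NVMe drives (e.g. 3x x4)
reference_split = {"gpu": 48, "nvme": 12}
assert sum(reference_split.values()) == usable

# "Deep learning beast" split: push 56 lanes to GPUs,
# leaving only a single x4 NVMe drive
greedy_split = {"gpu": 56, "nvme": 4}
assert sum(greedy_split.values()) == usable

# The trade is symmetric: 8 lanes gained for GPUs, 8 lanes lost for storage
extra_gpu = greedy_split["gpu"] - reference_split["gpu"]
lost_nvme = reference_split["nvme"] - greedy_split["nvme"]
print(extra_gpu, lost_nvme)  # 8 8
```

:::The point of the sketch: the greedy split buys an extra x8 of GPU bandwidth only by giving up exactly as much storage bandwidth, which is why the reference split is the sensible default.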