User talk:Cem
From WikiChip

:::I mean, on the technical level those 12 lanes should be available for whatever purpose the motherboard manufacturers want to use them for. But AMD specifically designated 48 lanes to be dedicated to multiple GPUs; that was the intention behind the edit I made. The other 12 lanes are designated for NVMe in all of the AMD reference designs and spec guides that were given out to partners. I'm not aware of any motherboard that doesn't follow this or routes x8 more lanes to the GPUs beyond the existing 48. From a design point of view, if you allocate all 56 lanes to the GPUs for a "deep learning beast", you're going to starve it with just x4 NVMe + whatever the chipset delivers (which is less than x4), so you're not really gaining anything by doing this. I also doubt such a workload can consistently saturate 48 lanes anyway. --[[User:David|David]] ([[User talk:David|talk]]) 17:52, 4 September 2017 (EDT)
 
 
::I'm ok with a line like, "PCIe lane configuration is always x16+x16+x8+x8+x4+x4+x4 with the remaining x4 for the chipset." Not everyone uses PCIe only for graphics, though, so I do not like describing 48 of the lanes as dedicated to graphics. :-) [[User:Cem|Cem]] ([[User talk:Cem|talk]]) 12:02, 5 September 2017 (EDT)
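
For reference, both descriptions work out to the same budget. A minimal sketch of the arithmetic, assuming the 64-lane total implied by that configuration (the variable names are illustrative, not from the discussion above):

<syntaxhighlight lang="python">
# Sanity check of the PCIe lane budget discussed above.
# Assumption: a 64-lane package, with x4 of those lanes reserved for the chipset link.
TOTAL_LANES = 64
CHIPSET_LINK = 4

# Cem's wording: x16+x16+x8+x8+x4+x4+x4, with the remaining x4 for the chipset.
slot_config = [16, 16, 8, 8, 4, 4, 4]
assert sum(slot_config) + CHIPSET_LINK == TOTAL_LANES   # 60 + 4 = 64

# David's split: 48 lanes designated for GPUs and 12 for NVMe in the AMD
# reference designs, plus the same x4 chipset link.
gpu_lanes, nvme_lanes = 48, 12
assert gpu_lanes + nvme_lanes + CHIPSET_LINK == TOTAL_LANES   # 48 + 12 + 4 = 64
</syntaxhighlight>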
 
