Threadripper Pro build

Hi all,

I’m building a workstation, leaning towards Threadripper Pro, as I need PCIe lanes for an extra graphics card, mass storage and networking. As I’m not planning on upgrading yearly (I built my current dual-Xeon system in 2013), I need to leave room to upgrade.

Going through the available motherboards, there are basically two that the high-end integrators use, and I have seen videos here with both of them: the Asus Pro WS WRX80E-SAGE SE WIFI and the ASRock WRX80 Creator. Both tick all the boxes I need. I would appreciate any tips/suggestions/pros/cons from users who have used both: why choose one over the other?

I will start with the 3955WX, and if needed upgrade it to a 5975WX some years down the line, when used ones can be had for a reasonable price. And yes, I know about Lenovo locking their processors to their platform.

Custom water cooling, processor only, with a 3×120 mm radiator. Pump is an Iwaki RD-30 with the voltage dropped to limit flow. Looking at the Heatkiller IV PRO for Threadripper as the block.

Graphics card:
Nvidia 2080 Super or 3080.

ECC is a must here. Will start with 8×8 GB and upgrade later if needed.

Disks: M.2 PCIe 4.0

Power supply:
Corsair AX1200i, 1200 W. It has had a comfortable life in my current setup; the fan has most likely never had to spin up.

Proxmox. I would like the convenience of running the OS(es) virtually, mainly Windows, with at least GPU passthrough.
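For anyone landing here later, a minimal sketch of the usual Proxmox passthrough prep on an AMD platform. The PCI vendor:device IDs below are placeholders (find your own card's with `lspci -nn`), and paths assume a stock Proxmox install:

```shell
# 1. Enable the IOMMU on the kernel command line (AMD platform):
#    in /etc/default/grub set
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
#    then run: update-grub && reboot

# 2. Load the VFIO modules at boot:
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# 3. Bind the GPU's video and audio functions to vfio-pci instead of the
#    host driver (IDs here are placeholders; use your card's from lspci -nn):
echo 'options vfio-pci ids=10de:2206,10de:1aef' > /etc/modprobe.d/vfio.conf
update-initramfs -u
```

After a reboot, the GPU shows up as a raw PCI device you can attach to a VM from the Proxmox GUI.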


  1. Memory: should I choose modules with Hynix or Micron chips? And why one over the other?

  2. Any suggestions for setting up disks please?

  3. Any caveats or good-to-knows, like which slot I should put the graphics card in, etc.?

I can’t really provide input on all components, but I can list my choices for a 5965WX build, with an explanation of why, since I also decided to go with TR mostly due to the PCIe lanes.

ASRock WRX80 Creator by a faaaaaaar margin. Especially since I managed to obtain the R1.0 with the Intel X710 NIC, but if you’re not into virtualization, then the R2.0 with the Aquantia 10G should do just fine as well. This mobo is smaller (better case compatibility), has built-in VGA, IPMI, Thunderbolt, supports OC and so on. It’s imho a much more feature-rich platform than the Asus mobo. I was actually only considering Gigabyte and ASRock, but I believe the Gigabyte went EOL so…

The 5000 series has really strong single-core performance, and I kinda wanted a CPU that is also usable for consumer tasks and performs well all around; that’s why I went with the 5965WX. It boosts to around 4.25 GHz all-core for me, and I’m on air.

I went with Noctua air cooling. It’s just decent and simple. Also, air cooling helps with cooling down the VRMs and other stuff in the PC…

I got a bargain deal on a 3080, and it’s imho a decent choice considering GPU prices are more reasonable now.

I made the mistake of getting 2×32 GB, just to later realize that a 2-channel setup trashes TR performance. I went with Micron; Wendell also tested Micron, so I believe it’s a safe choice, but I wouldn’t really overthink it. You probably get more of a performance difference just from a different layout (1R vs 2R, smaller vs bigger chips, etc.). Remember to get 3200 MT/s.

idk, hard to tell. I decided to experiment with the FireCuda 530 because of its outstanding write performance.

I’ve got a Seasonic 1300 W. Believe it or not, this build is not as power hungry as the calculators and people will say it is. My setup draws like 800-900 W from the wall with both the Gigabyte 3080 Turbo and the 5965WX under 100% load, with two Intel X710-DA4 quad 10G cards, a Quadro RTX 4000 as the display GPU and a few hard drives…

OS: Linux

network cards:
It’s a bit of a caveat of the ASRock mobo: the R2.0 has Aquantia chips, and those don’t support SR-IOV; the X710 does. TL;DR: you can only pass through one Aquantia interface to one VM afaik, while the Intel card can present itself to the OS as up to 64 virtual PCIe devices that can be individually passed through to VMs, providing bare-metal performance to LOTS of VMs. I’ve got external X710 PCIe network cards like I said, so you may wanna get one of those as well if you happen to go with the R2.0 ASRock.
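For reference, spawning SR-IOV virtual functions on an X710 port is just a sysfs write. A rough sketch (the interface name `enp65s0f0` is an assumption; check `ip link` on your box):

```shell
# How many VFs this port supports (the X710 advertises up to 64 per port):
cat /sys/class/net/enp65s0f0/device/sriov_totalvfs

# Create 8 virtual functions; each one shows up as its own PCIe device
# that can be individually handed to a VM:
echo 8 > /sys/class/net/enp65s0f0/device/sriov_numvfs

# The new virtual functions are now visible as separate PCI devices:
lspci | grep -i 'virtual function'
```

From there each VF can be passed through in Proxmox like any other PCI device.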


Welcome to the forum!

Ok, for this one, if you need the PCIe lanes, then TR it is. I'm not that good with mobos; I have a 1950X with an X399 Taichi (bought from a fellow forum member, but if I could choose, I’d have gone for the Taichi anyway).

I suppose if you do it custom, you know what you are doing and are capable enough to maintain the loop. Otherwise, I’d advise you to just get a Noctua (I have one, I’m very satisfied with it).

Go with 4×16 GB. While 3rd gen is not as finicky as 1st gen (ask me how I know), you probably want 4 sticks and high frequency. Get 3200 MT/s, or if you can find any ECC at 3600, go with that.

Doesn’t matter.

Depends on how many you have. Assuming you are going full NVMe: a single disk or a ZFS mirror of 2 SSDs for the OS. Then either a striped mirror with 4 or 6 drives for everything else, or, if you want to save some capacity, RAID-Z2 with 6 drives. If you only use 1 disk for the OS, you could do a Z1 with 5 drives, but at this amount of money, it's probably worth adding 2 more SSDs.

In all honesty, for a single workstation, this amount of NVMe storage may be overkill. I’m using 2× NVMe in a ZFS mirror on my 1950X and they are far more than I need, and I run VMs and other stuff from them and everything is still wicked fast. I would save on some capacity and do RAID-Z2 on 6 drives; a striped mirror is definitely overkill. Personally, I’d even go with 6× SATA drives, but that’s just me (I have a humongous Antec P101 Silent which I love, but haven’t added any drives to it yet, besides some temporary 2× Ironwolfs that will go to my NAS build once I finish a data transfer).
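The layouts above, as a sketch. I'm assuming eight 2 TB NVMe drives here; the device names are placeholders (in practice use the stable `/dev/disk/by-id/` paths instead):

```shell
# OS: simple two-way ZFS mirror (survives one drive failure)
zpool create -o ashift=12 rpool mirror /dev/nvme0n1 /dev/nvme1n1

# Data, option A: striped mirrors over 6 drives
# (3 mirror vdevs -> ~6 TB usable of 12 TB raw, best IOPS)
zpool create -o ashift=12 tank \
  mirror /dev/nvme2n1 /dev/nvme3n1 \
  mirror /dev/nvme4n1 /dev/nvme5n1 \
  mirror /dev/nvme6n1 /dev/nvme7n1

# Data, option B: RAID-Z2 over the same 6 drives
# (~8 TB usable of 12 TB raw, any two drives may fail)
zpool create -o ashift=12 tank raidz2 \
  /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
  /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1
```

The trade-off in one line: the striped mirrors give better random IOPS and faster resilvers, the RAID-Z2 gives more usable space and tolerates any two failures.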



The 7950X is actually quite interesting for this space as well, especially once you realise what a PCIe 5.0 x8 GPU card can do… Well, let’s just say it’s doubtful we will ever need the full x16 lanes again. Ever. (Short version: PCIe 5.0 x8 is enough bandwidth to feed a [email protected] monitor.)

There are also SATA->m.2 connectors coming out nowadays, like this one:

It would not surprise me if future motherboards put fan controllers, RGB controllers, SATA ports, legacy USB 2.0/3.0 and so on, on m.2 expansion cards going forward.

There are also smart solutions like a combined Eth10 + m.2 storage card, like this one:

Sure, if you really need 128 PCIe 4.0 lanes, Threadripper is the way to go. But I am no longer certain you do need that, and some out-of-the-box thinking could save you a couple of thousand bucks.

That said… you will be close to hitting the AM5 limits. But Threadripper is at least $6500 vs $3500 for a decked-out AM5 system. At that kind of price difference… do you really feel you need it? It is fine if you do, but I’d think twice and thrice before deciding on TR.

Again, if you need it, I am not going to argue that you don’t. But a lot of people think they need ECC when in reality they do not. DDR5 has on-die ECC, which makes this less of a problem than with DDR4.

Do you need it? Maybe. Is the premium of $300-$600 extra for a motherboard that supports ECC worth it? Perhaps. But it is an extra cost of $300-$600, so you had better make sure it is necessary. It’s a risk/reward kind of thing. Is the data you are running through that system so important that someone could die, or that it could cause huge financial loss? Then yes, you need it. If the data is, say, tons of video streams, then no, not really; worst case, a pixel changes color from 0xabcdef to 0xabccef or something like that, for a single frame.
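And that hex example really is a single flipped bit; quick check in shell:

```shell
# 0xabcdef with bit 8 flipped: the 'd' nibble (1101) becomes 'c' (1100)
printf '0x%x\n' $((0xabcdef ^ 0x000100))   # prints: 0xabccef
```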


I enjoy my combination of the Asus Pro and AMD Pro 3XXX.

One lesson I learned the hard way: the MB is 12.2" × 13", so when you see that a computer case can handle E-ATX, make sure you read the details. Most of them mention E-ATX but then specify a maximum width of only 280 mm in the fine print. So this MB will not fit in most cases without modifications or covering up the grommet holes. You will need to plan for a large case.

What I am trying to do now is find a smaller case that can hold the MB, though I realize I will have to get it customized to fit. I want to cut down on my case’s size, so I plan to sell off my current case and find something smaller.

Perhaps a stupid recommendation, but… Perhaps this could be of interest?

Well, worst case is that you generate a private key for your crypto wallet, and a bitflip incorrectly generates a public key for the receive address, so now when you mortgage your house and buy crypto with it, you lose everything :wink:


ECC is not absolutely necessary, but it’s strongly advised for any reasonable filesystem like ZFS, ReFS or Btrfs, especially if you’re running bigger RAID configurations. Bit flips are way more common than people realize; it just happens that people don’t notice, because surprisingly many people don’t even have important data on their PCs, which is something I always struggle to understand: how is that even possible, and how can people value their data and their work so low? But when you do have files that CANNOT get damaged, and you don’t want to calculate the md5 checksum of all your files 7 times a day, then it’s nice to have ECC. Let’s put it that way…

Also, based on the fact that OP mentioned a Xeon build, I’d assume he has workstation habits like having multiple GPUs, RAID controllers, network cards, etc. Using 4x4 risers on AM4/AM5 boards to connect more PCIe cards is an absolute PITA and a borderline fire hazard… Also, please note that as of now GPUs don’t use PCIe 5.0, and there are many cards that just use PCIe 3.0 and will absolutely suffer from running at x4 PCIe 3.0, because that’s what you end up with when you try to use a consumer mobo to drive 4 PCIe x16 cards… plus the struggle with NVMe… I tried to plan some ghetto setup on a consumer board, but all the risers and external PCIe switches required to connect a reasonable number of PCIe x16 cards led me to the conclusion that it’s not a financially viable option.


Sure, I am not arguing that ECC is not nice to have. But is it worth spending, at the very least, $600-$700 for a motherboard when you can get a non-ECC motherboard for $200?

What is the risk here? How much data are we pushing, and in what way? If we are pushing 2 TB of data a day and get a bit flip on 0.0001% of it, that means 2 MB of corrupted data a day. Is that data OK with being corrupted?
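For what it's worth, the back-of-the-envelope math there holds up:

```shell
# 2 TB/day at a corruption rate of 0.0001% (i.e. one part in a million):
bytes_per_day=2000000000000               # 2 TB in decimal bytes
corrupted=$((bytes_per_day / 1000000))    # 0.0001% = 1/1,000,000
echo "${corrupted} bytes/day"             # prints: 2000000 bytes/day (= 2 MB)
```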

Sure, ECC is nice to have. But it is not essential, and while I would love to have ECC memory on my machine, I’m not paying what amounts to three times the cost of RAM for my personal rig just to have ECC. That’s stupid.

Does this mean ECC should not exist? No, just that you should be aware the premium on ECC is quite steep, and AM5 can do most of everything else Threadripper can do.

Sure, absolutely. The only one of those that is viable for 99% of everyone nowadays is network cards. Only a data scientist might want a dual GPU for extra number crunching, but even then a 4090 is pretty much godlike at data crunching already, and you can only fit a maximum of two 4090 FEs anyhow. Hardware RAID is pretty much dead; RAID can be run in software off NVMe drives these days, too.

Does this mean a full 7× x16 PCIe array is dead? No, definitely not. But a lot has changed since the last workstation upgrade for many people. I’m not standing in the way or anything; I’m just asking, do you really want to spend all that money when it is likely you do not have to?

If the answer is “Heck yeah” then pedal to the metal, bro! :slight_smile:

If not, I just saved you 50% or more of the cost on your new system. You’re welcome.


Good point for all of you thinking about using this board, thank you.

Forgot to mention that I have a custom case that hosts an EVGA Classified SR-2, which is 13.6 inches long and 15 inches wide, with some room to spare around it.

Current setup is an EVGA Classified SR-2, and I do have a RAID controller running RAID6, as RAID5 is not an option anymore (7×3 TB): if you have a RAID5 and one drive fails, you are effectively running a RAID0 for the entire rebuild. I only run one graphics card, and that will continue… Looking at a used 3080… I will need the extra slots for adding more M.2s, and as I plan on running Proxmox, I can set up a ZFS file system, which might benefit from an Optane cache… perhaps a network card, depending on how much throughput I can get out of the NAS I’m planning to build next…

Here is another thought. If you’re going to maximize the PCIe slots and use water cooling, find a GPU that you can watercool and that is a single-slot card. I learned the hard way: the cards take up MULTIPLE slots, depending on the brand. For me, it got frustrating to have two 3090s water-cooled front and back and then find out they each took up three slots. The only way I was able to regain a slot was if the additional card was slim enough to fit between the two 3090s. So to fix this, I pulled the 3090s and got a professional card with an EKWB professional-card waterblock. Now the one card I have gives me back all my slots, and if I so desired, I could fit up to seven single-slot cards with waterblocks, and no one is messing with anyone else. Plus, the cool thing about the professional cards is more computing power at only 250-300 watts. So from a power standpoint, I went from two 3090s at 400 watts each to a single pro card at 300 watts. The thing I am really learning is how to maximize the system. For example, the A5000 and the 3090 are almost in sync; the professional waterblock for the A5000 allows you to place six A5000s where the two 3090s once resided.


Or get a motherboard that has that thought out by its design, like the x399 Taichi.

I have:

  • RX 6400 at the top (2 slots, only occupies 1 PCI-E slot)
  • USB 3.0 adapter (1 slot)
  • Empty x1 slot (the adapters are x4, although I’m only using a KB and mouse on each, I might save up one x16 slot in the future)
  • RX 6600XT (2 slots, also only occupies just 1 PCI-E slot)
  • USB 3.0 adapter (1 slot)

You could make that argument for a server motherboard with all x16 slots right next to each other, though. But even with a 2-slot GPU, you can reclaim the slots if you use a PCI-E expansion ribbon and your case supports vertically mounted GPUs. I think this option should be cheaper than water cooling; you just need to know your requirements from the get-go, or buy a new case and sell your old one.


You would have a great point if he were getting the non-Pro Threadripper, but I believe the gentleman mentioned he would get the new Threadripper Pro 5975WX. The board you are showing is the TR4 socket. The 6-8 Threadripper Pro motherboards all have 6-7 PCIe slots, back to back to back. And the Pro version of Threadripper uses the sWRX8 socket; the two are not interchangeable.


Yes, that is the board I own. I was just making a point by giving an example. I don’t know what sWRX8 boards are out there, but there could be similarly designed boards that accommodate M.2 slots in the spaces occupied by the thick GPUs.


I got ya. I understand your point and side of it all. Great point, by the way.

All of the WRX80 Pro boards accommodate 2-3 M.2 slots for drives. I actually have the Asus board he is looking at, and its M.2 slots lie flat on the motherboard itself.

And to be super honest, I have been in touch with most of these motherboard manufacturers about why they don’t produce something like the board you suggested, or a smaller mATX or ATX one. The closest you get is a small-format AMD Epyc motherboard in mATX, BUT those server boards don’t have all the outputs you would desire. I considered one, and I would have had to add a Bluetooth/WiFi card right off the jump.

I have searched long and hard; the manufacturers are treating this processor as if it is not going to be around. Not sure why, but they do, and it keeps most of them from making something in a smaller format like the other boards. But then you would have manufacturers building motherboards where the board constrains the processor and you can’t maximize it. This is why all the Threadripper Pro boards have 6-plus PCIe slots.


I would like to thank everyone for their input/suggestions.