
Threadripper Build


Wonder if Zen 2 will come to X399 boards, or if they'll release a TR4 v2 (probably hackable to put new chips on, but I don't know about 64 cores on gen-1 X399 even with power savings).



Well, there will likely be some new TR CPUs;
my guess is that those will be backwards compatible with current X399 boards.
But I wouldn't be surprised if we also get a new chipset series of boards, like X499 or so maybe,
with upgraded VRM solutions.
At least if the new TR CPUs also get PCIe 4.0, that is.
Or Threadripper might stick to X399 with just a new revision of boards, I don't know.
Very little info is known about this atm.




But it's pretty likely that we are going to see that 16-phase Infineon PWM controller
used on more boards, likely next-gen TR boards as well.
It would be an ideal solution for a Threadripper VRM:
driving 16 phases straight from the PWM controller to 70A Infineon smart power stages,
or IR3555 60A stages.



Sorry for the false start. I was hoping to solicit advice on the RAM selection first but didn't make that clear.

I get excited thinking about building a 32-core, 128GB top-of-the-line monster, but as it stands now I have a lot of other projects chomping at the checking account, and I think the practical side of me is winning out. I know that I would find occasion to occupy all those cores and it would be sweet when I did, but 99% of the time 16 is going to be more than I need; same goes for the RAM. For the data processing research, it's better, if slightly less fun, to launch some temporary cloud instances anyway. So I'm downgrading the requirements to a 2950X and 64GB of RAM. I think this also lowers the stakes a lot, and it feels like the parts are probably a bit easier to match (VRMs, thermal load, RAM timings, etc.).

I’ll put together a parts list and post it in a little bit.


  • Case

  • CPU

  • Motherboard

    • MSI X399 MEG Creation
    • Thoughts: the Zenith Extreme Alpha costs way too much, and this one seems like a heck of a nice board. I'd have a better upgrade path if gen 3 is compatible, or if I try to snag a 2990WX later on. But the MEG might be overkill? Not sure here.
  • Ram

    • Crucial 16GB DDR4 SDRAM ECC CT16G4WFD8266
    • Reasoning: suggested earlier in the thread, and cheaper than the non-ECC “B-die” RAM I had been looking at.
    • Question: will I notice a difference while using the computer between this and faster, non-ECC RAM with those swanky heat spreaders?
  • Primary Storage

    • Reasoning: Saw a review saying it was just as good as the Samsung?
    • Use: Primary OS, home dir, workspace, scratch
  • Secondary Storage

    • 2TB SATA SSD (ADATA)
    • Use: Steam, downloads, test data sets, windows boot for games and/or VM (FreeNas holds media, archives, and additional space)
  • PSU

    • Seasonic 80+ Titanium 1000W
    • Reasoning: this just seems to be what everyone gets; 1000W gives me room to grow.
  • CPU thermal

    • Enermax Liqtech TR4 II 240 (ELC-LTTRTO240-TBP)
    • An AIO is necessary for this case. I want a 280mm, but there's a good chance it won't fit alongside eATX in this case (no hard confirmation); a 240 should be good for a 2950X, right? Ideally I'd get both and take one back, if that were possible.
  • GPU

    • Gigabyte Vega 64 GV-RXVEGA64GAMING
    • Linux compatibility is important to me, so I'm going AMD. This card seems to be a good price point. If this card isn't advisable, I wouldn't go much more than 400 without just wanting to jump up to a Radeon VII.
  • Fans



Primary Use: linux software development workstation

(In case you didn’t read the wall-o-text original post…)

Development means front end & back end, polyglot languages, databases for testing and evaluation, lots of Docker, R&D of all sorts, and data processing.

Gaming? I play games a few hours a week, and mostly play indie games off steam, but would like the option to play an A list title on occasion.



I'm doing a TR build too.

I'm going to be using Pop!_OS. My use case is a bit like yours but more towards devops (some polyglot development and a lot of Kubernetes/Docker use). I also do some gaming; my plan is to use VFIO to pass through a dedicated GPU to a Windows VM with 4/6 cores and 16GB of RAM.



What are your lessons learned with k8s? I'm just getting into k8s on bare metal. One project uses k8s in Azure, which has worked surprisingly well. I'm tinkering with local/DIY hosting because the production k8s cluster in Azure costs a lot, and another project that has its own devops team gets a much better deal out of a DIY k8s solution.

I am thinking of setting up a mini-cluster of two 32-core Threadrippers for quasi-permanent experimentation, but I'm not sure yet. Being limited to 256GB on each machine is kinda meh though.



Epyc 32 core zen 2 out yet?



I was one of the ones crazy enough to deploy a prod Kubernetes cluster to bare metal very early on, pre-1.0 release. It was a nightmare due to the lack of documentation and the complexities of kube on metal back then.

Configuring flannel and keeping it running without failing was a challenge, especially because we had the masters in AWS (HA setup) and all the kubelets were bare metal.

Those days were very tough on the team, as it took us a while to make it work and keep it reliable, but it was way faster to work with kube than, say, Swarm or Mesos/Marathon, and Nomad was too green back then.

Nowadays most of my clusters are in GCP or AWS (I have zero experience with Azure). GCP is awesome, and depending on your workflow, if you have stateless jobs running in Kubernetes, a cluster with preemptible instances is super cheap, especially when paired with autoscaling.

For bare metal I used to have a very complex Ansible deployment to manage everything, but nowadays I do everything with k8s-tew.

It does HA, deploys Calico, manages ingress with nginx plus cert-manager, uses Ceph for storage, manages backups, and also uses MetalLB; communications are encrypted by default.

If you already know Kubernetes by heart and don't need to learn all the individual components (kubelet, api-*, controller-*, etc.), this is one of the best ways I've found so far to manage metal.
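To give an idea, the basic flow on a fresh set of machines looks roughly like this. This is a sketch from memory of the k8s-tew README; the exact subcommands and flags may differ between versions, and the node names/IPs are placeholders, so check `k8s-tew --help` for your version:

```shell
# initialize a new cluster configuration in the current directory
k8s-tew initialize

# register each machine with its IP, index and roles (placeholders here)
k8s-tew node-add -n controller00 -i 192.168.1.10 -x 0 -l controller
k8s-tew node-add -n worker00 -i 192.168.1.20 -x 1 -l worker

# generate certificates, configs and unit files for all nodes
k8s-tew generate

# push everything to the nodes and start the services
k8s-tew deploy
```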

My biggest metal deployment was around 100 nodes; in the cloud I barely get that big in a single cluster, as I use multiple clusters for HA and disaster avoidance.

I would be more than happy to help out if you need more info, or to jump on a call some day; I do mentoring every once in a while.



I'm the k8s subject matter "expert" at work; I was the main devops guy for a long time. I've been running it in AWS via Kops and Ansible since maybe 1.5. Kops is pretty painless to get started with. There have been some hiccups with specific things in Kops, but I don't have much to compare it to. EKS wasn't out yet, or I probably would have considered it. I've never run k8s on bare metal.
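For anyone starting out, the Kops bootstrap is basically the following. The bucket name, domain, and instance sizes are placeholders, and the flags are from memory, so double-check them against `kops create cluster --help` for your version:

```shell
# Kops keeps cluster state in an S3 bucket (placeholder name)
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# build the cluster spec; nothing is created in AWS until --yes
kops create cluster \
  --name=dev.k8s.example.com \
  --zones=us-east-1a \
  --node-count=3 \
  --node-size=t3.large

# review the planned changes, then apply them
kops update cluster --name=dev.k8s.example.com --yes

# poll until the cluster comes up healthy
kops validate cluster --name=dev.k8s.example.com
```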

One thing I am interested in is leveraging Kubernetes for development workflows; currently developers work with local docker-compose setups depending on what they are doing, and it is getting increasingly unwieldy.



Docker for Mac and Windows now come with Kubernetes built in! I moved everyone to that, as all our devs here use MBPs.

I used Kops in AWS for a long time until we migrated to GCP (cheaper, and the managed k8s is awesome; RBAC being integrated into IAM is amazing).

I think EKS is subpar compared to the GCP k8s offering.

I have a small Ansible role that uses the k8s_raw module to set up minikube to work very similarly to prod, so devs can do everything there. No docker-compose, and they all got so familiar with k8s resources that it's now pretty nice seeing them optimize deployments without us (devops) involved. Everything started going a lot smoother once we did that.

We also use Ansible with k8s_raw + AWX to deploy to prod.
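As a rough idea of what those tasks look like (the resource names here are made up, and in later Ansible releases k8s_raw was merged into the k8s module, so adjust accordingly):

```yaml
# apply a namespace and a deployment against the current kube context
- name: Ensure the dev namespace exists
  k8s_raw:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: dev

- name: Deploy the example app
  k8s_raw:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example-app
        namespace: dev
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: example-app
        template:
          metadata:
            labels:
              app: example-app
          spec:
            containers:
              - name: app
                image: example/app:latest
```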



Let’s start a different thread about k8s - I’m a big fan and lots to talk about.

Meanwhile, I think I'm going to go ahead and start ordering the parts list above, maybe waiting to get the AIO and fans last, mostly on the small chance it turns out a 280 could fit. I don't think the 2950X needs it, but if it can fit, then why not?

Any further suggestions on parts I should consider or swap?



[quote="derekv, post:22, topic:143334"]
Any further suggestions on parts I should consider or swap?
[/quote]

Are you going to be running this 24x7? If the answer is no, go for non-ECC, faster RAM.

I'm building a TR system with the 2990WX and the ASUS Zenith Extreme (non-Alpha).

The Zenith Extreme is amazing. I haven't finished the build yet, but looking at the mobo and its features, it's impressive.

I went with 128GB of Corsair 3200 that I bought through a friend's employee discount for 450 euros. As this is a workstation for me and it won't be running 24x7, rebooting is a non-issue, so I see no benefit whatsoever in going ECC.

I don't think you will need a 1000W PSU even for growth, but one thing it will give you is the ability to run it at its sweet spot and keep it cooler. I went with a 1200W mostly because I'm trying to keep noise down; I expect the fan will barely spin with my usage.

It also looks like you can put fans on the bottom and front of the case; maybe consider that to keep the other components (mobo and GPU) cooler?



I'm on the fence about the ECC. I tend to leave the machine running 24x7, but rebooting is not a huge deal, just a preference. I have gotten ECC for workstation builds in the past, but always wondered whether it was actually doing anything for me.



I don't have a lot of time, but if you and @lacion want to do something interesting, I have two 32-core Threadripper machines with 64GB RAM I can wipe and set up with ~1TB of local flash and maybe up to ~8TB of spinning rust. If you want to do a write-up or step-by-step for k8s, I will turn it into a video with props to you guys and links back here.

I am not necessarily talking about k8s on bare metal; a few k8s VMs on KVM on Linux would be fine. The networking part is a bit tricky, but both of these machines have one 10-gig adapter and dual 1-gig adapters, and I can add more PCIe NICs.

One thing I DO like about bare-metal k8s is that I can give it access to GPUs like the V100s for jobs that can be GPU-accelerated. But I am thinking about maybe showing how you can use basically the same k8s cluster with ingress or something for both dev and production workflows.

git push remote master > k8s vm to check/link/composer install/whatever > new k8s VM pool and/or rsync changes to other pods or w/e.

I like this because mixed workflows are more possible. Up till now I’ve been struggling with some projects where the other people are really just awful terrible people that make their own lives easier at the cost of making everyone else miserable… and this new setup I’ve been working on means I can do my own thing in the k8s cluster then just use ssh to force-clobber things the way they need to be … no developers have to get their hands dirty in the terribleness outside our walls in other words…

But I’d love your insights/perspectives/etc on that because I think the context of my experience is a more narrow use case and I hadn’t considered k8s on bare metal till now.

Write up something for me, or give me some guidance, and I'll try to build it.

64c/128t across two machines would make a pretty decent k8s cluster, probably with some HA? (I could run the controller on another machine if need be, lol)

(Sorry, didn't mean to threadjack.)


Discuss: Deploying Production K8s Cluster to Baremetal/VMs guide/video

Yes please!! +1 for sure!!

@lacion, Have you looked at Kubespray?

Regarding the OP’s question -

I've been running 2x 1950X Threadrippers on 2x ROG Zenith Extremes (v1). One has a G.SKILL TridentZ RGB 32GB kit (4x8GB) of DDR4-3200 (PC4-25600, model F4-3200C16Q-32GTZR); the other has a 32GB kit of Corsair Dominator Platinums at 3000MT/s (originally used in an Intel X99 build).

Both setups are Fedora “desktops”, although I’ve just turned the second Threadripper into a Xen/XCP-ng server, to run my VM instances.



I am building a 2990WX system; it's going to be OCed and on water with 820mm of rad. Will 128GB of Corsair Dominator 3000MHz be comparable to the G.SKILL TR kit at 2966?



They should perform more or less the same. Do you know the timings on both of those kits?

Also, it's not very common to have a workload that fully uses RAM to its max speed.



In some workloads there could be a difference, but nothing major.