Threadripper 2990WX workstation

I have built several loops before; this is not my first PC or my first high-end hardware. My previous machine (the current one) is an X299 with a custom loop: https://builds.gg/builds/hellas-6288

What would be the advantages of an Optane drive over 3× 1TB NVMe drives?

I thought about ECC and frankly I can't see the ROI on that investment for my use case. The 128GB of DDR4 I already have I got through a friend who works at a distributor here in my city, using his employee discount (128GB for €500).

Optane has high endurance, low latency, and crazy IOPS.


https://pcper.com/reviews/Storage/Intel-Optane-SSD-900P-480GB-and-280GB-NVMe-HHHL-SSD-Review-Lots-3D-XPoint

There is the 905P now for larger capacities.
Also, I don't know how you are running your 3× NVMe drives, so I'm not sure how it would stack up at that point.

On the ECC side, if you already have the RAM then yeah, I wouldn't bother; I wasn't sure if you were building from scratch. Unless you need that feature set.

Awesome, thanks for that, I will look into Optane.

I'm not sure how I will set up the 3× NVMe drives.

They are stupidly fast in my X299 system (dual 500GB in RAID 0).
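
If I end up striping them like the X299 box, the rough idea is something like this — just a sketch that lists whatever NVMe namespaces the board exposes and prints the mdadm command I'd expect to run (the /dev/md0 name is only a placeholder, and nothing here touches the disks):

```python
#!/usr/bin/env python3
"""Rough sketch: enumerate NVMe block devices and print (not run) the
mdadm command for striping them, RAID-0 style."""
from pathlib import Path

# Whole NVMe namespaces (nvme0n1, nvme1n1, ...) show up directly under /sys/block.
nvme_disks = sorted(
    f"/dev/{p.name}" for p in Path("/sys/block").iterdir()
    if p.name.startswith("nvme")
)

print("Found NVMe devices:", nvme_disks)
if len(nvme_disks) >= 2:
    # /dev/md0 is only a placeholder array name -- review before running anything.
    cmd = (f"mdadm --create /dev/md0 --level=0 "
           f"--raid-devices={len(nvme_disks)} " + " ".join(nvme_disks))
    print("Candidate stripe command:", cmd)
```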

Yeah, I don't think I will be needing ECC; I don't foresee any time in the future where I will be running anything for long enough that ECC would help.

If I need to run anything like that, I would most likely do it in the cloud.

I am super lost on 2990WX NUMA and the Linux kernel; that's where I'm most lost. I read that it works perfectly, and I also read that people are having tons of issues, so I'm not sure what the current state is.

I don't have much experience on the Linux side (I can do basic stuff, but honestly I don't do anything crazy with my 1950X and have only ever booted it on Windows). I am pretty sure Wendell has done a lot of different things with the 2990WX, so dig around on the forum and his videos. He is around a lot and usually chimes in pretty frequently. There are a few other users on here doing similar stuff with similar hardware, so hopefully one of them will chime in on this.

I've been doing a bit of research and I think I'm going to give Pop!_OS a try.

The 2990WX can work fine on Windows; it depends on the app. It's universally fine on Linux. It really comes down to the apps.
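
If you want to see how the kernel lays out the nodes, a rough sketch like this (it just reads the standard Linux sysfs paths — nothing 2990WX-specific, and pinning to node0 is only an example) will print the topology and pin the current process to one die:

```python
#!/usr/bin/env python3
"""Minimal sketch: print the NUMA node -> CPU mapping the kernel exposes,
then pin the current process to node 0's CPUs (Linux only)."""
import os
from pathlib import Path

def parse_cpulist(text):
    """Turn a cpulist like '0-7,32-39' into a set of CPU numbers."""
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"))
topology = {n.name: parse_cpulist((n / "cpulist").read_text()) for n in nodes}

for name, cpus in topology.items():
    print(f"{name}: {sorted(cpus)}")

# Pin this process (pid 0 = self) to node0's CPUs, e.g. to keep a
# memory-bandwidth-sensitive job on a die with local memory.
os.sched_setaffinity(0, topology["node0"])
print("Now running on:", sorted(os.sched_getaffinity(0)))
```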

With 128GB of memory you will have an error rate of 1-2 flipped bits per month in the worst case. So ECC is good if you want your system to be up sans reboot for 6+ months at a time.
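
Roughly where a number like that comes from, as a back-of-envelope (the per-Mbit soft-error rate below is an assumed ballpark; measured rates vary a lot between DIMMs and environments):

```python
# Back-of-envelope soft-error estimate for 128GB of non-ECC DRAM.
# The FIT rate (failures per 10^9 device-hours per Mbit) is an ASSUMED
# ballpark figure, not a measured value.
capacity_gb = 128
fit_per_mbit = 1.5            # assumed soft-error rate
hours_per_month = 730

mbits = capacity_gb * 1024 * 8                      # 128 GB -> Mbit
flips_per_hour = mbits * fit_per_mbit / 1e9
print(f"{flips_per_hour * hours_per_month:.1f} expected bit flips per month")
# ~1.1/month with these assumptions, i.e. the 1-2/month worst-case ballpark above
```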

I don’t really care for Windows except for maybe a VM using VFIO for a bit of gaming. I would prefer to run Linux natively so development is easier.

I will mostly be working with things like Go, Python, Node.js, Terraform, Ansible, Kubernetes, Docker, PostgreSQL, MySQL, MongoDB, Redis, InfluxDB, Prometheus, PyCharm, GoLand, and browser apps.

I may get back to some hobby video editing using Resolve.

With what I paid already for my RAM, I think it's an acceptable tradeoff. This is not going to be a server, so it won't be on 24x7, and given that I will be testing and developing high-volume/high-scale apps on it, I expect I will fuck up at some point and have to restart anyway.

Thanks for the info, super useful.

Components are arriving.

Walletripper build confirmed

I always think the same and would never consider spending that much on a PC. Then I remind myself that I'm currently looking at spending 15k on a newish car. So depending on priorities, one could argue that this is a rather "cheap" hobby :grin:

Yeah, but cars last so much longer; my '05 is still plenty good and has no major performance loss compared to newer stuff, yet a computer from '05 would be cripplingly slow. So it really depends on how long you use your hardware whether they are comparable.

I’m pretty sure a car that was new in '05 will have lost more in value than a high-end PC from that time.

In both cases we need to assume it's somewhat of a hobby. Sure, if you don't care for cars, buying a 1000-buck '90s Golf is fine and works. But that's not on the same level as getting a 5k computer.

I agree, though, that a PC will reach a point where its value is close to zero. But so will most cars. You can only put so many miles on one before repairs get more expensive than what the car is worth.
So, if you choose between a cheap car and an expensive computer, or an "expensive" car and a cheap computer, the expensive computer will probably be cheaper in the long run.

Also, many people get a new 1000-dollar phone every two years. Over the lifespan of a high-end desktop, I think most people spend more on phones than is spent on even a killer gaming PC.

Bought it a few years ago (2014-ish; a 2005 Legacy GT with 66k miles) for about 10k, and I could probably get 7k for it now (115k miles — Subarus don't lose tons of value). So I've doubled the mileage and it's only down maybe 3-4k, with parts needed in that time beyond tires/oil/gas, i.e. normal stuff.

You never hold on to a car that long

Those people have more money than sense; you can have a need for a 1k phone, but not every 2 years. I usually aim for around $400-ish as the most I'll ever pay for a phone (Nexus 5, Moto X4, Pixel 3a).

If you need this level of power for work, then sure, it's not really a personal buy at that point (most people who buy at this level aren't just enthusiasts buying for e-peen). I would say the only part of this build that is not "needed" for work is the water loop, so maybe $600-ish in parts, and only maybe $300 of that isn't reusable in future builds.

So let's clarify: the OP clearly states this is a workstation used for business.

I'm a DevOps engineer and I do a lot of infra work. The idea behind this is not to waste several hours a day just waiting for cloud infrastructure to be created or destroyed (high-end cloud resources don't get created as fast as micro or nano instances in AWS or GCP). My calculation was that I was wasting about 2 to 3 hours a day just testing deployments and doing infra management.

Also, let me point out the absurdity of comparing a high-end rig used professionally to a 15k car; a high-end car, or even a mid-range one, will be way over 40k (at least here in Europe).

A local setup will decrease that wasted time to a few minutes a day.

Also, the custom loop being unnecessary is very relative. This is a workstation that will sit right next to me; a loop like that allows me to cool everything while keeping the fans pretty much at a minimum, reducing the noise level.

If by spending 6-8k I can ensure my business is able to take on a bit more work that lands my company 150k+ a year, or even allows me to get more clients before having to hire an extra engineer and go past 200k, I would say that's going to be the best 8k I have ever invested.
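
The back-of-envelope behind that looks roughly like this (the hourly rate and work days are placeholder assumptions for illustration, not my real figures):

```python
# Rough ROI sketch for the workstation. All inputs are placeholder
# assumptions for illustration, not actual rates.
hours_saved_per_day = 2.5      # cloud infra create/destroy wait time recovered
work_days_per_year = 220
hourly_rate_eur = 75           # assumed billable rate

hours_recovered = hours_saved_per_day * work_days_per_year
value_per_year = hours_recovered * hourly_rate_eur
build_cost = 8000

print(f"{hours_recovered:.0f} h/year recovered ≈ €{value_per_year:,.0f}")
print(f"Payback on a €{build_cost} build: {build_cost / value_per_year * 12:.1f} months")
```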

The photos were me winking at every other build guide or YouTube video around.

Yep, business need is usually what drives this level of rig, and it's a sound investment in that case, since time is money.

Oh yeah, perfectly fine. I just said that I often remind myself that spending that money on a PC (for business or private use) is not as outlandish as it might look at first sight. I didn't intend to make this into a discussion about value or such.

I also highly appreciate the pictures, as I like looking at hardware a lot. And the parts chosen here certainly are worth looking at.

I may end up using the photos to get back to my blog and do a write-up about a DevOps workstation.

Beautiful components. I wonder, though, how far below 5GHz it's worth going on the primary driver system versus splitting the budget between a faster workstation and a server solution that scales with use.

I'm kind of a newb, but what I ended up with is a 2700X main system (8 cores, 4.2GHz sustained boost, no problem with a 280mm rad), a file server (another 8 cores with a dual gigabit NIC), and an old R610 (dual hex-core with the memory downclocked to reduce power consumption).

What I wish:

I kinda wish my Ryzen system had another 500MHz and 4 more hyperthreaded cores for non-CUDA machine learning. When I run small-batch TensorFlow stuff (50 epochs on a 10KB file with, say, 10 parameters for cross-validation), it finishes pretty fast, but not quite fast enough for my taste. I also wonder if faster memory would be nice (I'm at 3200MHz, but that 4000MHz on Zen 2 looks tasty).

I kinda wish I had a 10gig internal network (just a small switch). When my R610's Proxmox hypervisor backs up the VMs every day, it saturates a 1gig stream to my file server, and I haven't figured out the load-balancing dual-port thing yet. Being a better sysadmin would greatly help that, but I would almost rather be dumb on a single 10gig connection and not worry about it so much.
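
Just to put rough numbers on it (the 200GB nightly backup is a made-up example, and this ignores protocol overhead and disk speed on either end):

```python
# How long a nightly backup spends on the wire at 1Gb vs 10Gb.
# 200 GB is just an example size.
backup_gb = 200
for link_gbit in (1, 10):
    seconds = backup_gb * 8 / link_gbit   # GB -> Gbit, divided by line rate
    print(f"{link_gbit:>2} Gbit/s: ~{seconds / 60:.0f} min")
```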

I REALLY wish I hadn't installed CUDA on my workstation. Workstation graphics drivers suck donkey for games — not really bad or anything, but certainly not optimized. Frankly, I loaded my workstation with dual NVIDIA GPUs to use tensorflow-gpu, and months on I just don't do it. If I need them, I rent a box online for a day after I proof the run on my CPU. Again, if I were a better config artist, less of a problem. But when you get those errors and you aren't sure where the bottlenecks are… I really wish I had wholly separate systems for some of the more complicated hardware configs, CUDA especially. I want to get a Linux box with a bunch of cards in it, spend a week figuring out how to make it work, and not touch it except to run scripts!
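
One way to proof a run on the CPU without ripping CUDA out is just to hide the GPUs from TensorFlow before it initializes — a minimal sketch (TF 2.x API):

```python
# Sketch: proof a training run on the CPU even with CUDA installed, by
# hiding the GPUs from TensorFlow before it initializes (TF 2.x API).
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""   # must be set before importing TF

import numpy as np
import tensorflow as tf

print("Visible GPUs:", tf.config.list_physical_devices("GPU"))   # expect []

# Tiny smoke-test model: if this trains cleanly on CPU, the real run can
# go to the rented GPU box with the same code.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(64, 10), np.random.rand(64, 1), epochs=2, verbose=0)
print("CPU smoke test done")
```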

I wish I had more PCIe lanes; 20 isn't enough. I get stuttering and weird balancing depending on how I plug in monitors and pass the GPUs around. Now I'm on a Windows main for the workstation for PTC Creo stuff, and that makes the problem even worse. Windows 10 Pro doesn't handle lots of peripherals well, and you can't pass them to VMs.

VNC Viewer and Windows Remote Desktop have made it work for me, though. I run critical software on Windows VMs (business-side QuickBooks, for example) and make a Remote Desktop link on my workstation desktop. Load times are way faster connecting to an already-open VM. With VNC Viewer, PuTTY, and monitors, I can mostly monitor everything at once should I want to. However, I can't pass data between them easily. I've mounted shared drives across the machines and can drag and drop, but it's clunky: I have to click 4 or 5 times, where on a single workstation I'd only have to click or cd once. I'm also highly reliant on an always-on file server and application server, which isn't super efficient…

Just a thought. If it were me, I would get the 16-core and spin the savings off into a used R710 with a couple of HDDs in a mirrored ZFS pool, running Proxmox or VMware.

My use case benefits a lot more from cores and threads than from raw single-core performance (VMs are not my primary use case).

I already have a 2700X file server that runs a handful of containers for my in-house use, with a 10Gb NIC on that one.

My ML use case is very heavy; my test datasets have millions of entries, so I need the GPU helping out.

When I'm done with tests and development, the long training sessions in prod or stage happen in the cloud.

The Zenith Extreme has a 10Gb PCIe card, and I have 2 other servers rocking a 10Gb NIC. For switching I'm using a Netgear XS505M with 4 RJ45 10Gb ports and 1 SFP+ port.

I've been doing that for my production servers for a while, so it's not an issue for me. Also, for gaming I plan to add a VM with one of the GPUs passed through.

I don't expect any problems here; the TR 2990WX has 60 usable PCIe lanes.

I don't think this will be an issue for me. I will only have 1 VM with a dedicated GPU, and if I have more they will most probably be server VMs with no X or window manager, all managed over SSH.

Go for it; your use case is a lot less extreme than mine. I've been using the 2600 and 2700X for a lot of small home servers for myself and my friends, and they work pretty well, especially with Unraid.

Sounds like your spec is about as good as it gets. I saw the pic of the parts coming in — super amazing. I am curious what kind of numbers it puts out. Do you plan to post any benchmarks?

If you ever want to start a thread on scheduling/loading GPUs in TensorFlow/Theano/PyTorch, I'd be happy to read it!