Workstation recommendation

Do you have an idea what stepping those are running? I’ve been checking those out, but they’re all lacking some basic info.

I’m either going to cut down on my PCIe requirements and go with the 7950X, or go all in on a Xeon w7-2475X and W790 system. I’m currently leaning towards the 7950X, as it will be about 40% of the cost of the W790 platform while the performance is comparable, and I won’t have to sell off my 5950X to offset the costs.

I think I might invest some time and effort into getting GPU sharing working on the 5950X, create a Kubernetes setup across my different machines, save up for an extra GPU (maybe AMD, for some ROCm development on the main desktop), and leave the consolidation for a future system :sweat_smile:

Maybe OEMs; I’ve never heard of a locked QS CPU, and I’m not sure they even work in OEM motherboards. My current workstation/server is a dual 7742 EPYC system.

They have been great value since EPYC Naples, but I have no experience with Genoa QS CPUs, and there’s not much info online.

Generally QS CPUs run slightly lower turbo speeds than retail, but in the past they have been unlocked, so overclockable to faster than stock. Not sure if that’s the case anymore though, so I wouldn’t bet on it. But it looks like they turbo just a few hundred MHz under retail.

Hopefully I can test one soon. I also considered the 7950X but need the PCIe lanes.

I got one of these 10Gb NICs for my Threadripper Pro build and ASUS mobo. It comes in an RJ45 variant as well ($86 on Amazon today) if you don’t want the SFP+. It was under $100 if memory serves. Works natively on Windows 10 and all the Linux distros I run. Aquantia chipset.

I use Twinax cables, as I could get three of them for the price of one SFP+ RJ45 converter or SFP+ fiber module.

If you are getting 7xxx-series CPUs, be aware that many are vendor locked. The way they become vendor locked is by putting them in a motherboard from a vendor who locks their CPUs. So don’t test the EPYC CPU for your future Supermicro build in your work Dell or Lenovo server.

Also, something to be aware of regarding second hand eBay CPUs…

When I was pricing EPYC systems this spring, the 7xx3 CPUs cost more than $1200.
The 7xx2 CPUs started at $400.

The PCIe 4.0 motherboards were the same price as the Genoa motherboards.

After testing the $1100 Genoa 16-core CPU, I found it has double the single-core performance of the 7xx2 CPUs.

I think the Genoa systems are superior to the 7xxx systems.

If someone is going to spend a few thousand on a new EPYC system, why get the one that is slower, less expandable, and more expensive?

Not everyone has those couple of thousands to spend. I got 2 full EPYC 7551P systems for €2k-ish. A Genoa CPU by itself is already more than double that price. And I don’t need the performance of a Genoa CPU; it’s way overkill for my use case. Heck, even the 7551Ps will be idling pretty much all of the time.

I got my CPU from here:

The motherboard is $800,
so about $2K.
Get the RAM on eBay; people part out boxes of 25 DIMMs, and 32GB DIMMs are about $80.

Then you are out the same amount as your pair of 7xx1 systems, except you have more than double the single-threaded performance, the Genoa 16-core multi-threaded is the same speed as the 7xx1 64-core multi-threaded, and the system has 4 times the IO bandwidth.

And the system uses much less power at idle, and much less power when running full bore. And if you need more CPU than that, there are CPUs available capable of much higher speeds.

I went TR Pro, though this was back when the first gen had just come out to retail (mid 2021-ish) - it was a no-brainer thanks to a deal that brought the MB+CPU to ~$1400 (M12SWA + 3955WX). I needed more cores, but figured the 3955WX would tide me over till the 5975WX came out, and thankfully it did.

My biggest gripe is frankly the aquantia NIC… And it’s not even that there’s anything wrong with it, just that I wanted something with SR-IOV so I wouldn’t have to use one of the slots for a NIC. Seemed like the only supermicro board I’d ever personally laid hands on that had 10Gb that wasn’t Intel lol. The other options that were available at the time were fairly limited, and I figured I’d eventually want 25/40Gb at some point, so I’d be using a slot down the line regardless.

The main reason I’d go TR Pro today is memory costs honestly. When you can pick up 64GB DIMMs for less than $100 a pop (FAR less if you’re patient), as long as you don’t need DDR5, it’s a good way to save some significant $$. Otherwise, I’d either wait for the 7000WX release (whenever the hell that’s gonna end up being), or more likely, pick up one of those shiny new Xeon W’s and likely save yourself a ton of coin on the processor side of things. With how many PCIe lanes you’re needing, along with the seeming need for high clocks, I’d be looking at the W3400 series and pairing it with something like supermicro’s X13SWA-TF. All depends on how much you wanna spend, and how pressing the need is (meaning, do you wait to see what AMD’s 7000WX shakes out like and buy ‘whomever wins out’)

My biggest hope is that Intel’s re-entry into the workstation market brings AMD back from their money-printing level pricing to something a bit more reasonable. It already seems to have had an effect, as the same M12SWA that was going for ~$1100-1200 6 months ago is once again selling for ~$750, so there’s hope!


Hello guys,
I am also considering building my own server/workstation for home usage (or when on vacation) for deep learning purposes. I am considering the Supermicro H13SSL-N motherboard and an EPYC 9124 CPU for a start, and I will pair it with several GPUs. I plan to add as many GPUs as I can, so my question is: can I go beyond 3 GPUs on this motherboard?

It has 3 PCIe x16 slots and 2 x8 slots. I have heard about riser cables, but can they be safely used alongside ‘normally’ plugged-in GPUs?

Is it worth going with a PCIe 5.0 build at this point at all for my purpose? I am planning on using two RTX 3090s to start and later adding an A6000 and so on.

I picked this option since EPYC provides more PCIe lanes than TR motherboards do.

Do I need DDR5 ECC memory for starters, or can I go with regular DDR5 memory?

What is the reason this motherboard is cheaper than the others? Is there some other detail I should watch out for?

Is the ‘cheaper’ build with the Milan generation better bang for buck at this point?
Thanks a lot!

You certainly can install more than 3 GPUs, but I think the question you’ll want to answer to start is “where the hell am I going to fit all this stuff” lol - look for a chassis that’ll fit everything you want to put into the build (generically I mean, x GPUs, etc), and then find the rest of the components that suit the need.

There are cables / adapters / risers for just about everything of course, but you’d likely be best suited to sort out your requirements prior to making hardware choices; just a little research will help get you on your way.

Once you’ve got the fundamentals ironed out, you’ll at least have enough information to know what questions might still be lingering / need help with, and you can start up a new thread with them laid out so others can try to chime in to help :+1:

Thank you for the answer, BVD. At this point, while writing, I came up with 5 additional questions… It does not seem like a lot of people build their own servers for AI specifically, so not much can be found online. I just don’t want to overspend on something which won’t be useful to me, e.g. going for Genoa if I can go with second gen with more cores, if I get more from additional cores than from the generational difference… There are just too many questions to answer, so I simply want to ask for a recommendation of, let’s say, low, mid, and high end (price-wise, CPU + mobo) up to my budget of about 2k (for those two components). Thanks a lot.

I’d recommend creating a new thread for this specifically, not just to keep from hijacking @vaxou 's thread (sorry about that!), but also so you can detail more specifics - it’d be difficult to make a recommendation without a better understanding of the needs you’re trying to fulfill and any ancillary requirements around them (e.g. noise level concerns, electricity costs, and so on).

Some specific things you can include in the post that’d help folks more effectively respond could include -

  • Workloads you plan to run - AI is quite generic in some ways, so are you looking to train new models? Or simply run inferencing on existing ones? No other workloads on the machine, dedicated solely to AI work? Each of these makes a significant difference in the amount of horsepower you need.
  • Storage / Network requirements - This somewhat depends on the answer to the above as to how important it is, but still could use some detail either way. If you’ve a 10Gb(+) network with a high-ish performing NAS, for instance, then there’s a lesser need to account for so much storage within the build cost. By the same token, if you’ve neither in place, you’re at the very least going to need some decent storage (and a lot of it possibly, depending on the size of your models), the lower latency / higher throughput the better.
  • Total budget - I know you mentioned 2k for the CPU and motherboard, but the rest of the components in the build can vary widely, to the point that they either dwarf or pale in comparison to that - meaning one could recommend a combo that fits the stated budget but makes the rest of the build financially impossible. Unfortunately, such budgets are rarely useful in situations like these for this reason. You can exclude GPUs from that budget, given you’d noted this was going to be a piece of the build you ‘grew in to’ over time.
  • Time constraints - is this something that has a timeline attached (need to buy before X date for tax purposes, for example)? I know you’d said this is for personal use, but that doesn’t necessarily rule out hobby businesses and the like.
  • (Anything else - again, noise levels, efficiency/electricity costs, whatever the case may be)

Hope this all makes some sense! Like I said, start a new thread requesting input while including the details above, and it’d be much easier for proper recommendations to be made - feel free to tag me in it if you’d like. The folks here are hugely helpful in a myriad of ways, you just gotta give ‘em the parameters necessary to inform those recommendations. Even the best chef’s meals depend on the quality of their ingredients!


I’m currently developing and training something which should be capable of scaling workloads by checking out some metadata of the 3D files and estimating how many resources each tool type will need to get a job processed within a certain time frame. That in itself is just some fancy statistics, but a second part of the package is failure prevention through sound and image. Think of the characteristic noise differences when a bit of a CNC machine is about to break, etc.

About the storage and network: I’m handling tons of images and sound clips/streams and about 5TB of historical workload data which is used on a daily basis. I think I can get away with a 10-gigabit network for now, as long as I make my local caches big enough and script a bit with ZFS send and receive.
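For the send/receive part, something like the rough sketch below is what I have in mind - the dataset names and the SSH target are placeholders, and it assumes passwordless SSH between the boxes and that the receiving dataset already exists:

```python
#!/usr/bin/env python3
"""Rough sketch: incremental ZFS send/receive from a local cache dataset to the NAS.
Dataset names and the SSH target are placeholders for illustration only."""
import datetime
import subprocess

SRC = "tank/cache"      # hypothetical local cache dataset
DST = "backup/cache"    # hypothetical dataset on the receiving box
REMOTE = "user@nas"     # hypothetical SSH target


def take_snapshot() -> str:
    """Create a timestamped snapshot of the source dataset and return its full name."""
    snap = f"{SRC}@sync-{datetime.datetime.now():%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    return snap


def last_remote_snapshot() -> str | None:
    """Return the newest snapshot suffix (after '@') already present on the remote."""
    out = subprocess.run(
        ["ssh", REMOTE, "zfs", "list", "-H", "-t", "snapshot",
         "-o", "name", "-s", "creation", "-r", DST],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    return out[-1].split("@")[1] if out else None


def sync() -> None:
    new = take_snapshot()
    base = last_remote_snapshot()
    # Incremental send if the remote already has a snapshot, full send otherwise.
    send_cmd = ["zfs", "send", "-i", f"{SRC}@{base}", new] if base else ["zfs", "send", new]
    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", DST],
                   stdin=send.stdout, check=True)
    send.wait()


if __name__ == "__main__":
    sync()
```

Dropped into cron, that keeps the NAS-side copy a snapshot or so behind the local cache without much babysitting.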

I found a local deal for 4x Intel 2650 v3s (10c/20t) and 512GB of memory (2133MHz) in 32GB DIMMs, and noticed a local leftover stock of ASUS X99-M WS boards which go for $4 a piece. That would give me 4 extra nodes for about $100 a piece (without storage, PSU, and case). I’ll scale out horizontally with slower/older hardware and keep my main and my new workstation as the GPU nodes. So in short, I’ll try out the 7950X with 96GB (2x48GB) 6000MHz RAM to keep some extra cash and squeeze out some extra $ on GPU(s).

As far as costs go, it’s a side project which can evolve into a product. The initial cost should be as low as possible, but if I can iterate 10x faster by spending 3x, I’ll take that any day of the week. I’ve been living out of a cellar the past few years in order to fund my out-of-proportion hobby, which is almost landing its first customer :smile:

Quick update:
I decided to wait out the new Threadrippers and price drops on the Intel or older parts, and started building out horizontally with cheap 2nd-hand servers.

  • I ordered 2 super cheap old Xeons (2650 v3s) and 128GB of 2133MHz DDR4 with 2 X99-M WS (ASUS) motherboards, which I received today, to do a temporary (most probably permanent) expansion of my compute.

Installed Proxmox on them and did some VFIO tests with an old Quadro P2200 I had lying around. Got everything working after some minor fiddling and created a bunch of VM templates.
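Stamping VMs out of those templates is basically just wrapping the qm CLI on the node. A rough sketch, where the template VMID, new VMIDs, and names are all made-up examples:

```python
#!/usr/bin/env python3
"""Rough sketch: cloning VMs out of a Proxmox template with the `qm` CLI.
Meant to run on the Proxmox node itself; VMIDs and names are placeholders."""
import subprocess

TEMPLATE_ID = 9000  # hypothetical VMID of a template created with `qm template`


def clone(new_id: int, name: str, full: bool = False) -> None:
    """Clone the template into a new VM and start it."""
    cmd = ["qm", "clone", str(TEMPLATE_ID), str(new_id), "--name", name]
    if full:
        cmd.append("--full")  # full copy instead of a linked clone
    subprocess.run(cmd, check=True)
    subprocess.run(["qm", "start", str(new_id)], check=True)


if __name__ == "__main__":
    for offset, name in enumerate(["dev-01", "dev-02"]):
        clone(9100 + offset, name)
```

Whether you want linked or full clones mostly depends on the underlying storage and how disposable the VMs are.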



Now the real fun begins :smile: To unload my current workstation, which also serves as the actual PC I use for work, programming, and everything else, I decided to use one of the Xeon machines as my daily driver (with Proxmox on it) and dedicate the 5950X machine to raw compute with the GPUs in it.

I’m now 3 Proxmox nodes deep (clustered), with VFIO VM templates for Win11 and NixOS (which I’ll use as my dev machines for different customers) and templates for Kubernetes nodes (3 types: gpu / high-perf-cpu / slow-cpu), sliced up as needed.
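To make those three node classes addressable, the plan is simply to label the nodes and have pods target them with a nodeSelector. A rough sketch using the official kubernetes Python client; the node names and the node-class label key are just examples, nothing standard:

```python
#!/usr/bin/env python3
"""Rough sketch: labelling the three node classes so workloads can target them.
Node names and the `node-class` label key are illustrative placeholders."""
from kubernetes import client, config

NODE_CLASSES = {
    "pve-gpu-01": "gpu",
    "pve-cpu-fast-01": "high-perf-cpu",
    "pve-cpu-slow-01": "slow-cpu",
}


def label_nodes() -> None:
    config.load_kube_config()  # reads ~/.kube/config
    v1 = client.CoreV1Api()
    for node, node_class in NODE_CLASSES.items():
        # Patch only the label we care about; everything else is left alone.
        v1.patch_node(node, {"metadata": {"labels": {"node-class": node_class}}})
        print(f"labelled {node} as {node_class}")


if __name__ == "__main__":
    label_nodes()
```

GPU workloads can then set `nodeSelector: {node-class: gpu}` in their pod specs (plus the usual device-plugin resource request) so they only land on the passthrough nodes.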

Everything is currently attached via the onboard 1GbE NICs, which suck, except for my 5950X, which has 1x 10GbE.

Todo (small sneak peek into the upcoming hw and sw stack):

  • find SFP+/SFP28 Intel cards and transceivers to bolt into my cheap Xeon systems
  • order 4x 1TB NVMe drives (PCIe 3.0) for the slow Xeon systems and do a RAIDZ1
  • order 8x 4TB SATA SSDs for the cheap Xeon systems
  • order 4x 20TB spinning rust for the cheap Xeon systems
  • deploy an HA TrueNAS SCALE setup (virtualized)
  • finalize my infra setup with some small Terraform workloads
  • get some build servers going for generating my daily Packer images (see the sketch below)
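On the Packer side, the nightly driver doesn’t need to be much more than the sketch below; the template path and the image_name variable are just assumptions about how I’d lay things out:

```python
#!/usr/bin/env python3
"""Rough sketch: nightly Packer build driver, cron/systemd-timer friendly.
The template path and `image_name` variable are hypothetical; `validate`,
`build`, and `-var` are standard Packer CLI usage."""
import datetime
import subprocess

TEMPLATE = "images/base.pkr.hcl"  # hypothetical template path


def build_daily_image() -> None:
    stamp = datetime.date.today().isoformat()
    var_args = ["-var", f"image_name=base-{stamp}"]
    # Fail fast on template errors before kicking off the actual build.
    subprocess.run(["packer", "validate", *var_args, TEMPLATE], check=True)
    subprocess.run(["packer", "build", *var_args, TEMPLATE], check=True)


if __name__ == "__main__":
    build_daily_image()
```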

I cheaped out (again) but diverted some of the saved money to decent enough storage, which will last me long enough. I’ve also managed to buy a bricked RTX 4090 (for $400) which has a slightly bent PCB from what I can tell, and which will hopefully only require reballing of the GPU die. Worst case there will be some more reballing of RAM chips, but I think I’ll manage.

Let me know if any of you are interested in a lab log, or if I should open-source some of my scripts and Packer builds for getting my environment going.


You may have a different idea than I do of what a TrueNAS HA setup is, because I don’t think it can be virtualized.

The HA deployment requires a pair of head units to be linked together with a heartbeat cable (maybe gigabit; I think it was RJ45, about 8 inches long) and to share a storage device. They used to share a battery- and HDD-backed DDR4 SAS drive. If one head unit went down, the other head unit would broadcast that it was the shortest path to the virtual network interface, and attached switches would then transition all communications to the second head unit. No sessions would be dropped. The longest delay in any ongoing communications was about 1.2 seconds.

For my own home lab, I am going to have a single ZFS SSD with 10-minute snapshots, then log-ship daily to an independent, larger HDD which will hold the current data set plus a year or more of daily snapshots.
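The 10-minute snapshot side is simple enough to script. A rough sketch, meant to run every 10 minutes from cron or a systemd timer; the dataset name and retention count are placeholders:

```python
#!/usr/bin/env python3
"""Rough sketch: 10-minute snapshot rotation for the primary SSD dataset.
Dataset name and retention count are placeholders for illustration."""
import datetime
import subprocess

DATASET = "fast/data"  # hypothetical dataset on the primary SSD
KEEP = 144             # roughly 24h worth of 10-minute snapshots


def snapshot_and_prune() -> None:
    stamp = f"auto-{datetime.datetime.now():%Y%m%d-%H%M}"
    subprocess.run(["zfs", "snapshot", f"{DATASET}@{stamp}"], check=True)

    # List our own snapshots oldest-first and destroy everything beyond KEEP.
    snaps = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-s", "creation", "-r", DATASET],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    ours = [s for s in snaps if "@auto-" in s]
    for old in ours[:-KEEP]:
        subprocess.run(["zfs", "destroy", old], check=True)


if __name__ == "__main__":
    snapshot_and_prune()
```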

Primary SSD = Intel D7-P5600 6.4TB, $369
There is also a 118GB Optane drive that I am going to partition for virtual memory, as it has high write endurance and I don’t want those transactions polluting the ZFS drive.
This is the SSD I bought for my primary drive:
