Consolidating Homelab into Threadripper Pro Workstation

Hello,

Currently, I am running a Dell R730 with dual E5-2680 v4s and 256GB of 2400MT/s DDR4 RAM. The server is running Proxmox and has served me well. For my desktop, I am running a Ryzen 5900x with 64GB of RAM, along with an RTX 2070 Super for when I occasionally game.

I am a DevOps Engineer by day and practically live in an exclusively Linux environment, but I have recently started tinkering with Hyper-V after seeing some of Wendell’s videos on Threadripper Pro and the neat virtualization scenarios it can handle. Honestly, I enjoy Hyper-V more than I would have previously thought.

I have tinkered with doing this “all in one” setup on my 5900x desktop, which has more than adequate CPU power, but it falls short on ECC memory support (a mixed bag, and UDIMM only) as well as the relatively small number of PCIe lanes it provides.

The PCIe limitation is honestly the biggest factor deterring me from going with the 5900x as my main “sworkstation”, since loading it with a 10 gig NIC, an HBA, NVMe, and my graphics card looks like far more lanes than the platform provides.

That being said, is anyone here able to speak to using Windows 10/11 Pro with Hyper-V in a Threadripper build as a sort of “all in one” machine? I’d love to throw a healthy amount of RAM in it and have the majority of my lab running on the one box, with a secondary box hosting redundant DNS, etc.

Am I crazy for wanting to do this? Should I stick with having a machine dedicated to VM duties, like my R730 currently is? I’d love to hear your input.

Same here. I looked at the cost of Threadrippers and found them too expensive for my mostly hobbyist usage. I evaluated the Ryzen platform very carefully to see if there was a way for me to have my cake and eat it, too.

Eventually, I convinced myself that I would be able to live within the boundaries of 24 PCIe Gen4 lanes combined with an X570 chipset. I found the ASUS Pro WS X570-ACE motherboard, which to my knowledge offers the best PCIe lane configurability of any AM4 board.
It offers three x16 slots that are each electrically at least x8 PCIe Gen4, and all slots can be bifurcated into x4 segments.

I just checked and it doesn’t seem to be offered new anymore. :frowning:

I have two of those boards configured with 128GB memory and a 5900X for my hobby workloads. I use them with varying combinations of M.2, U.2, and Optane NVMe storage, ConnectX-3 Pro NICs, SAS HBAs, and/or Nvidia 3080 graphics.

I am dismayed about the announcements from both AMD and Intel that their next gen consumer / enthusiast CPUs will only support 20/24 lanes of PCIe Gen5. That’s more bandwidth that I won’t be able to use.

3 Likes

Glad to hear that you made the AM4 setup work well for you! For my purposes, I think I’ll probably focus on Threadripper for the PCIe lanes. I just don’t want to feel limited in the future.

I agree with you that it’s ridiculous that AMD and Intel are still limiting PCIe lanes so aggressively in their consumer lineup. It seems you must have some sort of HEDT setup to get any reasonable number of PCIe lanes for anything more than gaming.

Hoping someone can chime in on their experience with a similar setup.

1 Like

Most trying to do this end up either staying on multiple systems, or going EPYC instead.

2 Likes

I guess I’m failing to see any major advantages to going the Epyc route. I was leaning toward the TR route because of its higher clock speeds, which would be advantageous for when I do want to game.

I’m not a big fan of the One Box to Rule Them All thing just in case that one box dies. Cattle not pets, and all that.

3 Likes

I would have the previously mentioned secondary box running failover services (secondary DNS, etc.), which isn’t far from my current setup, where the R730 runs most things and the whitebox server runs secondary instances.

1 Like

I quit drinking and spent the money I saved from not buying booze on a Threadripper Pro build :slight_smile:
At first I also balked at the price, but when I realized how much I had already saved by not drinking, I figured I’d rather spend the money on something fun that keeps me busy than on drinking.

8 Likes

How have you liked it? What’s your use case been?

Any virtualization?

I like it very much. My previous system was an ITX build from 2014/15, I think, that I had to get in a hurry. For work I had to use Windows to connect to the company network, and for personal use I was on Linux, so I was always rebooting. It was also a bit weak for virtualisation and I couldn’t expand it anymore.

So the new system was a very different approach: I had done a small system, and I wanted to do a big system again, where I can run Linux as the main OS and run Windows virtualized, plus a homelab where I can do some mockup test setups for work (there’s too much bureaucracy involved to do proof-of-concept stuff/test setups there) and hobby stuff/learning. Linux and Windows have their own dedicated GPUs.

1 Like

Higher lane count means bigger socket and far more traces on the board and much more expense.

99% of end users would be fine with half the lane count of the consumer platform to be honest.

So I don’t think it is AMD and Intel being stingy just because. It reduces cost a LOT, and most people (talking the wider consumer market) simply don’t use more than one M.2 drive and a single low-to-mid tier GPU. If that.

This is why you need a HEDT setup to do HEDT things.

Personally I went with multiple AM4 boxes due to cost; Threadripper here is ridiculously expensive and EPYC is not available outside of prebuilt servers.

It does give me more flexibility to try bare-metal things like Qubes, and if one box dies I’m not screwed. I can also do rolling upgrades at less cost. I have three generations of AM4 here, for example.

With regards to Hyper-V as a lab: I’ve been doing that for a few years now at work and at home, though not on Threadripper. For me the annoyances are shitty Linux GUI performance, and that VMs can sometimes get into a state where you can’t power them off/on (if something happens during a commanded shutdown, for instance).

Virtual disk/VM management is a bit more manual and painful than VMware Workstation, which has built-in template support.
But it works. And it’s a one-off payment (the upgrade from Windows Home to Pro) versus getting charged for new VMware versions.

1 Like

While higher clock speeds do indeed help with gaming, the market has moved on quite a bit. Nowadays buying a Threadripper for a gaming boost is akin to buying a Formula 1 car to go grocery shopping: way overkill for what you want to do, and in some ways even more inconvenient than ye olde budget build. :slight_smile:

While the idea of a “sworkstation” is interesting in theory (an always-on box used primarily as a workstation and secondarily as a server), in practice it is just so much hassle. I would instead recommend a three-box approach: Firewall, Home Server and Workstation.

Also, these days a server is… well, not what it used to be. You have ye olde NAS that is still around, and it fulfills 99% of the server needs at home. Beyond that, IT and server maintenance have moved so far away from the “closet server rack” that today there are standardised racks that provide redundancy; should one fail, the entire rack gets swapped out in a matter of minutes while IT does diagnostics and figures out what went wrong - and that usually only takes an hour or two. The entire industry has changed so much that a person like me, who trained in the early noughties, is completely ROFLstomped by the current state of the art. Back in those days people were just getting used to the fact that multiple redundant servers could be a thing, and that you could shut one down while letting the other run. We’ve come a long way since then, but I digress. :slight_smile:

Back to the topic: given that a TR and/or EPYC platform costs as much as two or more Ryzen systems combined, I see no reason whatsoever for most people to go TR. If you have extraordinary needs, sure, there is a place for it. But, for the most part, the only things TR offers over Ryzen are more cores and more PCIe lanes.

Given how the market is slowly shifting towards one or two PCIe x16 slots with the rest as M.2, and the fact that PCIe 5.0 is so blazingly fast it can almost imitate RAM speeds, with PCIe 6.0 bringing another doubling of speed… it is unlikely we will go back to boards full of x16 slots anytime soon. I think the new normal will be x8 and x4 slots in most regards. We just do not have anything that requires that much transfer speed. Not even 16k@300Hz gaming.

So, in a home setting… do you really need more than 16 cores and 20 PCIe lanes? Answer: probably not. Feel free to feed your ePeen, but you’re doing it mostly for bragging rights at this point. Nothing wrong with that, though!

3 Likes

While I won’t benefit massively from more than, say, 16 cores, the PCIe lanes are absolutely something I want and need in a build. That is unfortunately why the 5900x whitebox server just hasn’t worked well for me, why I have continued to use my R730 with its plethora of PCIe lanes, and why I’ve entertained Epyc/TR for my lab.

I don’t disagree, though, I’m sure 20 lanes is just fine for a lot of users.

Also, in reference to:

I would instead recommend a three-box approach: Firewall, Home Server and Workstation.

I would not be virtualizing any of my network appliances. My networking setup currently exists outside the scope of my virtualization environment.

This would simply be a consolidation of my Windows 10 host, which I use for occasional gaming, and the hypervisor running the majority of my VMs.

What I use as an actual workstation is my 16" M1 MacBook Pro. My desktop “workstation” is only utilized when I want to play some games a couple of times a month.

1 Like

Just curious, why? The only legit use cases I can think of are if you have a ton of expansion cards that you absolutely, positively cannot under any circumstances converge. Things like old music equipment that requires a specific capture card stuck at PCIe 2.0 x16, or similar.

It is either that, or you need a ton of M.2 storage; everything else today requires at most x4 Gen4 or x2 Gen5 lanes, except for the GPU, which needs twice that. X570 boards are ridiculously capable, especially boards like the X570 Aorus Master that has four M.2 PCIe 4.0 slots.

But yes, do what you feel is best; just pointing out that 24 lanes is a lot already :slightly_smiling_face:

2 Likes

The amount of NVMe I need to run, along with an HBA for SATA SSDs and HDDs, plus lanes for a 10 gig NIC, not to mention a graphics card, means that the 20 usable lanes on AM4 would be far too few.

The goal of this theoretical build is consolidation, and the Ryzen 5900x simply doesn’t offer the number of PCIe lanes required to support everything I need when moving off my R730.

Even if I were to settle for four NVMe drives, a boot SATA SSD, 2-4 SATA HDDs, an RTX 2070, and a 10 gig NIC, and assume zero future expansion, I’m not sure AM4 could support it.
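Roughly sketching out the lane budget (the electrical widths below are just my assumptions and could be off for specific cards):

```python
# Rough PCIe lane budget for the consolidated build (assumed widths, could be off)
wanted = {
    "RTX 2070":  16,     # would survive at x8, but x16 nominally
    "4x NVMe":   4 * 4,  # x4 per drive
    "SAS HBA":   8,      # most HBAs are x8 electrically
    "10GbE NIC": 4,
}
print(sum(wanted.values()), "lanes requested")  # 44
print("vs ~20 usable CPU lanes on AM4 (plus an x4 uplink to the chipset)")
```

Even with the GPU dropped to x8 and the NIC hung off the chipset, it still looks tight to me, which is what keeps pushing me toward TR/Epyc.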

Hmm, a lot of the things you write don’t make much sense. Why would you ever have a boot SATA drive if you can instead have a boot NVMe drive? It is also possible to cheat, like with this combined M.2 and 10GbE card that gives you a boot drive, a cache drive AND a 10 GbE port on the same x4 slot:

Not to mention m.2 SATA adapters:

It is amazing what one can dig up with a little patience. But if we remove NAS from the equation, the first thing I think you should look at is a couple of 18 TB SATA drives and some 2 TB NVMe drives to consolidate stuff.

Sure, it’s awesome to be able to spec your system out with ECC and SAS and twenty-three SATA drives… but is it reasonable to do so? I cannot answer this; only you can, for your own use case. Just trying to provide some creative options and make you think one extra time, but I’ll let it rest at that. :slight_smile:

Lastly, at least wait until Black Friday - by then the AM5 platform will be out with a whole heap of new and sexy motherboards/solutions, and you might snag a Threadripper on the cheap either way. :slight_smile:

You’d be oversubscribed on the chipset for sure. If you can get by with lower clock speeds, an 8-core Epyc might be a good choice. Gaming might be a bit of an unknown there, but you can get a 7232P for $400ish, so if you’re willing to DIY it and go off-script, as Wendell says, you might be able to save a bit. I’ve also seen the 7F32 on eBay recently at suspiciously low prices, which has clocks much closer to what you’d get with Threadripper Pro, but I worry they’re vendor locked or something.

The nice thing about threadripper pro is you can just buy the whole machine off the shelf and don’t even have to think about it, but you’ll be paying for the privilege.

1 Like

I think you may be misunderstanding a couple of things :slightly_smiling_face:

I wouldn’t be booting off of NVMe in a 5900x build because that would use even more PCIe lanes that could be better spent on other things.

This is also in no way a NAS build. The storage is purely for VMs. I already have a 100TB NAS set up separately from all of this, running TrueNAS Scale. This machine would be very focused on virtualization.

As for AM5, it will still be limited on PCIe lanes, which doesn’t lend favorably to what I’m trying to achieve.

Getting back to the original question: I have no experience using Hyper-V as the bare-metal hypervisor. It should work fine as long as you are not planning on passing through the GPU or part of it (or maybe even that will work); the impression from reading the docs is that you will have far fewer configuration options than running Proxmox/KVM…

Sorry, my bad, I was under the impression you were about to consolidate NAS + Services + Workstation into one.

Then I do not see the need for more than a few TB of storage locally, especially if…

Most virtual machines are between 50 and 100 GB in size since they tend to be single-purpose, and a 500 MB Linux core system booting a payload app is not very much. Even a Windows 11 VM could be shrunk down considerably, especially if the read-only part is shared or deduplicated across VMs.

A 2 TB drive would easily fit 10 of them. On a Ryzen 5950X you’d be able to fit 30 or so before running out of threads completely, and more realistically maybe 10 VMs before hitting performance issues. So one 1 TB OS drive, one 4 TB VM drive, and one 4 TB games drive would probably be everything you need for that entire computer.
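Napkin math on that, if you want it (the sizes are just the assumptions above, not measurements):

```python
# Napkin math on VM density (assumed sizes, not measurements)
drive_gb = 2000
vm_gb_min, vm_gb_max = 50, 100           # typical single-purpose VM size, per the estimate above
print(drive_gb // vm_gb_max, "-", drive_gb // vm_gb_min, "VMs per 2 TB drive")  # 20 - 40
threads_5950x = 32                       # Ryzen 5950X: 16 cores / 32 threads
print("~", threads_5950x, "single-vCPU VMs before the 5950X is out of threads")
```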

Not to mention most modern file systems, like ZFS and BTRFS, deal with logical volumes and have removed the concept of physical partitions. You have a pool, you create a logical volume from that pool, then the actual data can be on any drive in that pool, with redundancy if required.

You are aware a single PCIe 4.0 lane is enough to saturate a 10 GbE connection, yes? (each lane supports 2GB/s which is 16 Gb/s)

Since AM5 will be all PCIe 5.0, an x1 link would be able to feed two 10 GbE ports at ~90% saturation and x4 would feed six 10 GbE at 100%. Two lanes would saturate the fastest SSDs available, and eight lanes would be enough bandwidth for 16k @ 300 Hz. So this is why an AM5 build would make a LOT of sense for what you want to do: with four NVMe drives, a GPU and a single-port 10 GbE you’d still only occupy 17 lanes if everything were PCIe 5.0. Most X670 boards will probably have 30+ lanes to play around with, 20 from the CPU and another 12-16 from the chipset(s).
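If you want to sanity-check the per-lane numbers yourself, this is roughly the math I’m working from (raw link rate after 128b/130b encoding; real protocol overhead shaves off a bit more):

```python
# Approximate per-lane throughput in Gb/s after 128b/130b encoding
# (real-world protocol overhead reduces this further)
per_lane_gbps = {"gen3": 8 * 128 / 130, "gen4": 16 * 128 / 130, "gen5": 32 * 128 / 130}

def lanes_for(link_gbps, gen):
    """Smallest power-of-two lane width covering the requested throughput."""
    lanes = 1
    while lanes * per_lane_gbps[gen] < link_gbps:
        lanes *= 2
    return lanes

print(lanes_for(10, "gen4"))       # 1 -> a single Gen4 lane covers one 10 GbE port
print(lanes_for(2 * 10, "gen5"))   # 1 -> a Gen5 x1 link covers dual 10 GbE
print(lanes_for(7.4 * 8, "gen5"))  # 2 -> two Gen5 lanes cover a ~7.4 GB/s flagship NVMe drive
```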

That all said though, I think it will take a few years before PCIe 5.0 GPUs, NICs and NVMe drives become common enough to be affordable. Therefore, the 128 lanes of Threadripper/EPYC make much more sense for building what you want today, and it is probably a better fit for what you want to do now. If virtualisation of many systems is indeed your end goal, a 24 or 32 core actually does make sense, as does VFIO partitioning.

Sorry for being so long-winded, it’s just that I’ve seen quite a few people go Threadripper just because “It’s FAST!!111!111One!” and yes, it is fast, but if you don’t have a use for all that hardware, that’s just spending twice the money for bragging rights. That’s just dumb. :slight_smile: We live in a free country though; if paying a premium for things you don’t have a use for makes you happy, knock yourself out!

In your case you actually do seem to need a TR though.

[Edit] Let me quickly show you two example builds over at PC Part Picker to show you where I’m coming from. This is what you would need to get your current setup up and running on AM4 (5900X, X570S, 1+4+4 TB NVMe storage, 6700 XT for gaming + whatever else, $2,700):

And this is basically where Threadripper starts: more or less the same build, but ECC + TR makes it cost $1,000 more, and that is with a 2950X just to get even remotely similar IPC; a 3960X + motherboard would be at least $2,000 more expensive than this:

What I’m trying to show is that a TR/EPYC system core (mobo + RAM + CPU + CPU cooler) is a lot of money, and unless you are absolutely sure that you need the expanded capabilities of that platform, that money could be better spent elsewhere. Again though, it’s your situation, your money, your time, and you do as you please. If you need to go TR, go TR; just be aware it is a very significant step up in cost from what Ryzen provides, with very little benefit outside of a few niche cases.[/edit]

1 Like