Talk me out of Epyc (I was talked into Epyc)

So I’ve been planning my next TrueNAS Core build. I ordered the drives over a few months. I was hoping that while doing that, I would be able to locate and get my hands on a desktop/server motherboard that would use a desktop CPU, either a Ryzen 5/7 or the Intel i5-12500.

The TrueNAS usage is 6 drives in RAID-Z2 plus 2 drives mirrored and backed up online. I can’t afford to back up the larger array of media online; it’s also replaceable data. I’ll run Plex and Nextcloud for no more than 5 users, probably more like 1 to 2, which is why I thought a desktop CPU would be great.

I don’t think what I want in a board is a lot. I’d really like it to have IPMI, but I can work around it not having that. The things I consider a must are ECC and either 10 gigabit NICs or the room to add them. Aside from an HBA for TrueNAS, the only other thing I think I might really want to put in place is a GPU for Plex at some point.

The workstation/server boards for desktop CPUs that I like, I can’t find anywhere to actually purchase in the US. The ones that are available all seem to be missing something and cost just as much as a server board for an EPYC CPU.

At this point, if the board and the RAM cost the same, should I not just drop the extra $200-$300 for the EPYC CPU and get all the goodies that come with it? I was hoping it would save on the build cost as well as on power, but it’s not looking like it is really worth it anymore.

For reference these are the parts I’m considering currently.
ASRock Rack EPYCD8-2T
AMD EPYC Rome 7252

TLDR: If a Ryzen board and an i-series board cost as much as a server board, should I not just drop the extra dollars on a server CPU and have all the extra goodies too?

1 Like

The biggest drawbacks of using desktop gear for servers are a lack of ECC, a lack of IPMI, and a lack of PCIe expansion capability. For small scale home use, unless you are certain that you want/need these features, IPMI and ECC aren’t that important. You didn’t mention either of these in your post, so I’m assuming these don’t matter much to you. They’re nice to have for certain, but you are going to pay a premium for them.

So it really comes down to the PCIe expansion capabilities. What PCIe devices are you planning on using, and how many of them? Do you actually need the full bandwidth of all of them? Are you planning on expanding and growing over time or is what you have right now all you need for the foreseeable future?

1 Like

And that would be the other thing I forgot in that paragraph I rewrote. I’ll fix that.

The things I want in a board are ECC, either 10 gigabit NICs or the room to add one, and IPMI. IPMI is probably the easiest one to drop.

For PCIe, I want to be able to add a GPU in case I need one, as well as an HBA. I feel like if I had two x16 slots and an x1 or x8 slot, that would be all the expansion room I need. I want more, but I don’t know what I would do with it, so I’m not holding out for it. Most of the boards that are available are ASRock Rack micro-ATX, and they only tend to have one x16 slot.

In regard to ECC, that is a bit of a crapshoot on desktop hardware. You are also limited to UDIMMs, which tend to cost more than RDIMMs. Most vendors don’t care about providing support for ECC on desktop gear, so it’s not an advertised feature on anything other than server boards. The ASRock Rack Ryzen boards provide an awesome feature set for what they are, but even they are limited by the design of Ryzen.

If the goal is to save money, this is a question you should find a definite answer to before you pull the trigger on anything. Leaving a GPU running 24x7 costs a lot of power. And if your use case involves heavy GPU use, you definitely need a full x16 slot with 16 lanes, which is necessarily going to increase the cost.

You can get desktop mobos with 3 physical x16 slots, but desktop CPUs don’t have that many PCIe lanes to go around. A common situation with those boards is two slots running at x8, with the third running at x4 off the chipset instead of the CPU. You can put a GPU, an HBA, and a NIC in such a system, but not all of the devices are going to get their full bandwidth. But does that severely impact your use case?

For instance, take an x8 PCIe 3.0 HBA with 8 hard drives attached. A high-end 7200 RPM hard drive can only move data at about 200 MB/s, so 8 of them total 1.6 GB/s when all of the drives are running full-tilt, which is far short of the ~7.8 GB/s that 8 PCIe 3.0 lanes can move. Putting that card into a slot running at x4 doesn’t matter much.
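A quick sketch of that headroom math; the ~985 MB/s per PCIe 3.0 lane and ~200 MB/s per 7200 RPM drive are rough real-world figures, not exact specs:

```python
# Back-of-the-envelope check of the HBA bandwidth argument above.
PCIE3_MBPS_PER_LANE = 985   # approx. usable PCIe 3.0 bandwidth per lane
DRIVE_MBPS = 200            # approx. sequential speed of a fast 7200 RPM HDD

def slot_headroom(lanes: int, drives: int) -> float:
    """Ratio of slot bandwidth to aggregate drive throughput."""
    return (lanes * PCIE3_MBPS_PER_LANE) / (drives * DRIVE_MBPS)

print(round(slot_headroom(8, 8), 2))  # x8 slot, 8 drives: ~4.93x headroom
print(round(slot_headroom(4, 8), 2))  # x4 slot, 8 drives: ~2.46x headroom
```

Even at x4, the slot has well over twice the bandwidth the spinning drives can use, which is the point being made.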

Overall, if your goal here is to save money and only buy what you need, you need to specifically define your use case. This is crucial to determining what you need to buy. If you want wiggle room and greater PCIe connectivity than any desktop CPU can provide, then you are quickly going to find that all roads lead to EPYC. If you can deal with reduced capacity/speeds though, the ASRock Rack Ryzen boards are pretty attractive. The only way to know this for certain is by having a well-defined use case to guide your shopping.

1 Like

I’m here to talk you into EPYC… go for it. It’s going to be rock solid, take a lot of ECC RAM, and have plenty of lanes for an HBA, 10GbE, a GPU, and whatever else you might want. You know you really just wanted to be talked into EPYC, and that is what I’ve done for you. Enjoy…

8 Likes

Enterprise gear is known for holding high used prices, even on “cheap” sites like Aliexpress. Here are a few of the cheapest I found:

(choose the mainboard option)

(choose the cpu option)

ECC RAM is a bit hit 'n miss, even on Aliexpress, so I didn’t search for it too hard. YMMV!

A few more options to consider:

I have one! Neat board, a little limited in connectivity, but if you only have 8 SATA drives max, the MiniSAS SFF-8654-4i port offers 4 SATA-III ports next to the 4 SATA ports included on the board, without resorting to an HBA. That leaves the sole PCIe x16 slot (gen 3!) free for a 10Gb NIC, either copper or fibre. I paid 500€ here in EU-land; I expect a similar amount in the US.

HTH!

1 Like

+1: I went the EPYC desktop route (wanted Threadripper Pro 5xxx, but only EPYC Milan was available at the time) for similar reasons, but it’s a no-brainer for a server.

Pros:

  • Rock-solid stable: check out the thread on here of people trying to get 128GB of RAM working on non-server boards; no such issue on EPYC.
  • Huge package surface area makes cooling simple: 1) Apply big Noctua 2) Enjoy never going over 60C
  • No head-scratching trying to figure out which PCIe lanes come from the CPU and which come from the chipset, only to discover some slots are actually 1/2/4/8 lanes: my board has seven 16-lane PCIe 4.0 slots that all support bifurcation. Just slap in a PCIe device wherever it fits!
  • Expandability: use all 128 PCIe lanes? No problem. Swap in a 64-core Milan-X when they get cheap on auction sites? No problem. Upgrade to 2TB of RAM? No problem.

Cons:

  • Takes 3 minutes to start the EFI loader.

On-board 10GbE NICs are nice, but consider that a separate NIC may be more flexible - e.g. one with two SFP+/SFP28/QSFP ports gives you fibre options and can still do 10GBASE-T if you want.

3 Likes

@AbsolutelyFree I really appreciate the well thought out advice. You went into good detail and made it very clear.

I absolutely should do this. I’ve been struggling with it a bit, as I will be moving soon and am not sure if the use case will change any once I’ve moved. The biggest reason I might decide to drop in a GPU is that I will be converting my media to H.265 or AV1, and that might mean it’s worth putting a GPU in the system instead of handling it on the viewing end for Plex.

This is not completely wrong, lol. Yes, EPYC is what I want. But if it could be accomplished at a lower price point, I was for that as well.

Interesting, though I’m not sure how I feel about used parts from China. I don’t mind used, but when I look on fleabay the first thing I do is restrict to US/Canada. My paranoia might be too high to trust those boards. I could be full of crap too.

I would actually be OK with an SoC if it had more expandability. As it is, that doesn’t leave room for me to do mirrored boot drives or any special data devices later.

I’m beginning to concur. Because I don’t have an exactly defined use case at this time, I’m leaning more towards EPYC.

I appreciate all of the input. You’ve helped clear some things up. I’m not in a hurry to finish it yet, but I think I will start shopping for EPYC parts on fleabay. EPYC will also give me room to tinker in the future. It is a home lab after all; I need some room to play.

The only problem with EPYC is there is no low-power, high-connectivity solution other than this one:
https://www.asrockrack.com/general/productdetail.asp?Model=EPYC3451D4U-2L2T2O8R

But yes, EPYC is the answer.

1 Like

If you want to do some transcoding, a 7700X could theoretically save you some money on a GPU thanks to its built-in graphics; this, combined with a 10 GbE + NVMe card, could potentially be an option.

Apart from the lower cost and power draw of such a solution, server Ryzen 7xxx just costs too much at the moment in terms of functionality lost vs. money saved. It needs to become cheaper first.

The problem with EPYC is the baseline power consumption: all that connectivity takes a lot of power, and most of the time your Plex/Nextcloud server won’t be doing much, but I suspect it will still be drawing 100 W in the process.

There isn’t really a good solution for your need right now, imho. Perhaps one of the ASRock Rack AM4 motherboards? Perhaps a refurb/second-hand workstation?

DDR5 ECC seems a very new and expensive thing to find.

W680 Intel motherboards, which support ECC, seem ridiculously hard to find and ridiculously expensive.

I’m still running a Lenovo TS140 from many years ago, with a quad-core Xeon and 8GB of ECC. It was crazy cheap, and although I’d be happy to replace it with something better, it sips power (a little over 20 W most of the time) and actually still does what I need; it’s very rare that I need Plex to transcode.

2 Likes

That’s the beauty of IPMI: switch the system on remotely when you need it, shut it down when you’re done. So at idle it should only use a few watts for the IPMI/BMC to run.

Mind, that EPYC3451 board had a Newegg price of 2k USD when I looked it up last summer! :exploding_head:

1 Like

No one wants to wait for their Plex server to boot up before they can watch something, especially if they’re sharing it with several users.

Did I mention my TS140 has vPro, so I can turn it on remotely, remotely install an OS, change BIOS settings, etc.? Not enough love for vPro, imho.

You can make most any PC turn on remotely with a smart plug and a BIOS tweak, or by properly setting up Wake-on-LAN. I had it so I could boot my desktop remotely over ten years ago; I would much rather have an always-on server.
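For the Wake-on-LAN route, a minimal sketch of what the setup actually sends; this assumes the NIC/BIOS have WoL enabled, and the MAC address below is a placeholder, not a real machine:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL 'magic packet' is 6 bytes of 0xFF followed by the target
    MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9 is the convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))

# send_magic_packet("aa:bb:cc:dd:ee:ff")  # wake the server from another box
```

Tools like `etherwake` or `wakeonlan` do the same thing; the point is the packet is trivial, so anything on the LAN (even a Pi) can be the "power button".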

1 Like

This is why I was looking into desktop CPUs. I’m not in a situation where I need to count watts or dollars that much, but if I could make it more efficient, why not do it?

This works for tinkering servers that I’m the only one using. For things like data, Plex, and Nextcloud, I want those when I want them. I don’t want to have to VPN in, boot the server, and wait 5 minutes for it all to be available.

Exactly. I’m good at planning ahead, but that’s a level I don’t want to have to worry about.

Never messed with that one. I’ve used Dell’s, Supermicro’s, and HP’s IPMI tools, but not Intel’s.

How are electricity prices in your area? An EPYC system is going to use significantly more power at idle than a desktop system. It adds up over the life of the system, so make sure to add that to the cost.

3 Likes

So I did the math. According to PassMark and the average kWh rate for my area, EPYC will cost me ~$5.55 more per month at 100% usage. Considering it’s likely to be running at 20% usage most of the time, I think EPYC will be fine for my use case. I’m willing to double the CPU cost for the extra features and expandability.
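For anyone wanting to redo this math for their own area, the formula is simple; the 50 W delta and $0.15/kWh rate below are assumptions for illustration, not the actual figures from the post:

```python
# Extra monthly cost of a higher-draw system, given the wattage delta
# between the two builds and the local electricity rate.
def extra_monthly_cost(extra_watts: float, rate_per_kwh: float,
                       hours_per_day: float = 24, days: float = 30) -> float:
    kwh = extra_watts / 1000 * hours_per_day * days  # extra energy per month
    return kwh * rate_per_kwh

print(round(extra_monthly_cost(50, 0.15), 2))  # ~5.4 $/month at full load
```

Scale `extra_watts` down for a mostly idle box; at a 20% duty cycle the real-world delta is closer to a dollar or two a month.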

5 Likes

Having built my first server about 2 years ago, its uses just keep growing. I have quadrupled the amount of digital data (yes, I’m a data hoarder). I thought what I got at first was going to be “the right amount”; boy, was I wrong. Once you discover all the things you can self-host and do, it’s too easy to just keep tacking tasks onto your server until it’s buried in a mountain of work.

Mine started as backups. Then it was backups and steamcache. Then backups, steamcache, and Plex… add a VPN… add Nextcloud… add a Minecraft server… add Home Assistant… the list goes on.

I think my point is that since you don’t seem to be able to define your exact use case, you’re probably like me and will keep tinkering and adding on over time.

I am on my second revision of my server, and I’m OK with having to upgrade it again in a year if I need to. To be honest, there was no way I would have known what I need now when I first built it. Also, my first build was completely not optimal for my first task, and I learned a lot.

My point is: buy what you can afford today and what is necessary for the tasks you know about today, and look forward to the fun of the upgrades you will make along the way. There is no future-proof.

3 Likes

Well, your description is a bit open-ended, but some thoughts.

My guess is your 6x drives are spinning. Not sure what your use case is for the mirrored drives, or whether they are also spinning. You have not mentioned M.2 drive(s). You have not mentioned a budget.

You can keep your spend down quite a lot.

I use LVM rather than ZFS. With LVM I can throw all the similar disks into one pool (volume group), then allocate striped and/or mirrored volumes from the pool. I expect ZFS has something similar. Not sure if you need two drives dedicated to mirroring.

It’s easy to find desktop motherboards with 6-8 SATA ports. Use an M.2 drive for fast storage, and perhaps as cache for the spinning drives. You can skip the HBA.

If you go with the current AMD 7000-series, you get a GPU on the chip. Not needing a graphics card frees up PCIe lanes.

You might find a motherboard with built-in 10Gbps Ethernet, but this is going to limit your choices quite a lot. My suggestion would be to snag a decent/cheap 10Gbps Ethernet card, as I did.

Some(?) of the Asus / ASRock AMD motherboards support ECC… at least that is the rumor.

I want to say I saw one recent review of an AMD desktop motherboard that had IPMI.

You can get a lot of performance for not very much. Allocate a striped volume across your 6x spinning disks, and you can get pretty close to 1 GB/s over 10Gbps Ethernet.
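The 1 GB/s figure checks out as a simple bottleneck calculation; the ~200 MB/s per drive is an assumption for a fast spinning disk, and 10 Gb/s works out to ~1250 MB/s raw before protocol overhead:

```python
# Which limits first: the striped array or the network link?
def bottleneck_mbps(drives: int, drive_mbps: float, link_gbps: float) -> float:
    array = drives * drive_mbps       # aggregate striped throughput, MB/s
    link = link_gbps * 1000 / 8       # Gb/s -> MB/s (raw, pre-overhead)
    return min(array, link)

print(bottleneck_mbps(6, 200, 10))  # 1200.0 -> the drives cap out just under the link
```

With 6 drives the array and the link are almost perfectly matched, which is why a 10GbE NIC is a sensible pairing for this pool.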

For talking you into EPYC: Newegg currently has a Tyan open-box for $550. That is a motherboard with a 7313P (16 cores, 3 GHz).
Dual 1G Ethernet
Dual 10G Ethernet via an Intel controller (better for FreeNAS)
14 SATA channels
2 M.2 slots
2x dual M.2 via $50 adapters
5 x16 PCIe slots, 2 or 3 double-wide
I got 256GB of ECC for about $500 from Newegg.

Thanks for the heads up. I actually got an open-box Supermicro H12SSL-i
from Newegg and an EPYC 7282 from fleabay for a good price.

So I have officially gone Epyc. :grin:

4 Likes