Ideas for converting my AMD Ryzen 9 5900X 12-Core Processor to a home server

I have an AMD Ryzen 9 5900X 12-Core Processor, 32.0 GB DDR4, an ASUS ROG Strix Mini ITX board, in a huge Phanteks case. I am building a new creator PC and want to convert this one to a home server. I will also need to decide on a GPU. I want 10Gb speeds, a lot of storage, the ability to edit videos using the drives in the server, a media server, and a few virtual machines (each dedicated to one of my business brands). Ideas? My biggest issue is the lack of expansion on the mobo. I only have one x16 slot, which I use for the GPU. I will use one M.2 to expand to 6 SATA; I have 2 SATA ports on the mobo.

The good news is, I already converted your CPU to a home server CPU with my mind powers. No need to thank me.
The bad news is, you’ll probably want to replace the board if you want a lot of connectivity. One of the reasons I aim for ATX everything, and avoid micro-ATX and mini-ITX, is future expansion. When I started plugging in HBAs and NICs, those free slots vanished very quickly.
I recommend getting something with 3x x16 slots if you can, and if you aren’t using your GPU for some major task, you might want to consider adapting it down to a 1x slot at some point to free up precious wider slots. It doesn’t sound like you’re really going to use that GPU for more than display output anyway, right?
A single 8x slot and some mining 1x risers can be used to turn a single 8i SAS HBA into something like 32 ports with SAS expander cards, I believe? Since all the expanders need the PCIe slot for is power, you shouldn’t even need to use up your precious 1x slots; just plug in power and the card. I haven’t tried this myself yet though, so I can’t confirm whether it actually works.
10Gb speeds, I don’t think you can get from M.2 adapters right now. But 10GbE cards aren’t that pricey on eBay for an 8x slot model.

I see an Aorus Elite on eBay, for instance.

GIGABYTE X570S Aorus Elite AX, Socket AM4 AMD Motherboard (Please Read) 889523028827 | eBay

Three slots, but it has some odd bandwidth contention. The bottom two slots share 4x, so it’s 4x/0x or 2x/2x. Running them at 2x may slightly limit a PCIe 2.0 10Gb card, since 2.0 is 5GT/s per lane (roughly 4Gb/s usable after 8b/10b encoding), so 2x gives about 8Gb/s. Should be relatively minor, and still plenty of bandwidth for storage HBAs imo.

At least, that is what Ershin tells me.
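If you want to double-check that math yourself, here’s a back-of-the-envelope sketch (it assumes PCIe 2.0’s 5GT/s per-lane rate with 8b/10b encoding, and ignores protocol overhead):

```python
# Rough check: can a PCIe 2.0 x2 link feed a 10GbE card?
# PCIe 2.0 signals at 5 GT/s per lane; 8b/10b encoding leaves
# about 4 Gb/s of usable bandwidth per lane.
usable_gbps_per_lane = 5.0 * 8 / 10  # 4.0 Gb/s

for lanes in (1, 2, 4):
    usable = lanes * usable_gbps_per_lane
    verdict = "fine for 10GbE" if usable >= 10 else "caps a 10GbE card"
    print(f"x{lanes}: ~{usable:.0f} Gb/s usable -> {verdict}")
# x1 -> ~4 Gb/s, x2 -> ~8 Gb/s (the slight limit mentioned above),
# x4 -> ~16 Gb/s
```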


I agree with the comment above: it is probably worth it to go for a larger motherboard.

  • Do you need 10Gb just point-to-point to your new workstation? In that case you might go straight for 25Gb; these cards can be had for under 300 $/€ nowadays. You probably want a dedicated NIC anyway, since onboard 10Gb NICs usually don’t support RDMA etc.
  • If the GPU is there mostly for media server purposes, Intel is probably a great choice. For encoding, Intel is second to none IMO. Very happy with Quick Sync on my Intel iGPU for this purpose. If SR-IOV support ever makes it into the mainline kernel (or you are willing to tinker, see the forum threads on this) you can even have virtualized GPUs for your VMs.
  • If you’re getting a new board anyway, you could consider ECC support (though you have RAM already, so it would add some cost again?)

Turning the AM4 platform into a home server can be challenging. Having a mobo built with add-on capabilities in mind can help a lot.
You didn’t mention what mobo you are currently using.

IMHO the best mobo for this purpose is the ASUS Pro WS X570-ACE. The main reason is its three x16 PCIe 4.0 slots that can be configured to run x8/x8/x8, all of which support bifurcation. The main con is that it’s pretty hard to find nowadays, and expensive on top.

You’ll need quite a beefy storage backend to saturate 10Gbit or anything near it; spinning rust won’t do unless the data is cached. AM4 is on its way out, so availability of “suitable” motherboards is sparse and they’re quite expensive, and you also have an issue with a potential lack of available PCIe lanes.
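For a sense of scale, a rough sketch (the ~250 MB/s per-drive figure is an assumed sequential rate for a modern 7200 rpm 3.5"; real workloads, especially random I/O, will be far slower):

```python
import math

# How many spinning drives does it take to saturate a given link?
link_gbps = 10                     # 10GbE
link_mb_s = link_gbps * 1000 / 8   # ~1250 MB/s of payload, ignoring overhead
hdd_mb_s = 250                     # assumed sequential MB/s per drive

drives = math.ceil(link_mb_s / hdd_mb_s)
print(f"~{drives} drives striped to saturate {link_gbps}GbE, best-case sequential")
# ~5 drives, and only for sequential reads; random I/O is an order of
# magnitude worse, hence the need for caching (or flash) at 10Gb.
```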

Given the above, I’d make use of what you already have, as it makes little to no economic sense to start replacing hardware on such an aging platform.

You can easily go with a ~20TB mirror setup on your current motherboard over Gbit, however (you wouldn’t be able to saturate 2.5Gbit using such a setup anyway).

For video, go with the cheapest (simplest) Arc video card you can find.

I recommend the ASRock Taichi X570.
I bought one and a Ryzen 3950X when they were first released, and used it as my primary workstation (dual-booting Linux / Windows) for a few years.

Once the AM5 platform was released, I switched my workstation to AM5 and repurposed the Taichi/3950X as my primary Proxmox server… with 128 GB of 3200 MT/s ECC DDR4, and it’s been great.

PCIE slots - I have an HBA, 10 Gb Ethernet, and a gfx card (in the x4 slot).

Rock solid stability - uptimes of almost a year with Proxmox (neighborhood power loss put an end to that!)

  • 3 physical PCIE Gen4 x16 slots that can run x8/x8/x4 and support bifurcation
  • 2 PCIE Gen4 x1 slots
  • Multiple Gen4 M.2 slots
  • 8 SATA ports

Currently just over $200 USD at MicroCenter and NewEgg

I don’t see why you’d need to saturate the network to justify the faster speeds. It’s useful any time you’re transferring at more than ~115MB/s (about the ceiling of gigabit). For most modern drives in a single-drive config, 2.5G is about right. For dual-actuator drives, 5G is probably where you want to be. If you’re using any kind of RAID5, you could probably see data flying at about three times those rates, so 10Gb may actually be worthwhile.
Also, if someone is talking about editing video off it, I would think they’re expecting to have at least some solid state.
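Putting rough numbers on those tiers (a sketch; the per-config throughput figures are assumptions consistent with the single / dual-actuator / RAID5 reasoning above):

```python
# Match sustained drive throughput to the smallest link that won't cap it.
def smallest_link(mb_per_s):
    """Smallest common Ethernet speed (Gb/s) whose payload rate covers mb_per_s."""
    for gbps in (1, 2.5, 5, 10, 25, 40):
        if gbps * 1000 / 8 >= mb_per_s:
            return gbps

configs = {
    "single modern HDD":           270,  # assumed sequential MB/s
    "dual-actuator HDD":           520,
    "4-drive RAID5 (~3x a drive)": 800,
}
for name, mb_s in configs.items():
    print(f"{name}: ~{mb_s} MB/s -> {smallest_link(mb_s)}GbE")
# single -> 2.5GbE, dual-actuator -> 5GbE, RAID5 -> 10GbE,
# matching the tiers above.
```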

I think the point of 10Gb is to not have your transfer rates limited by the link speed quite so badly.
It is worth looking into higher speeds though, as some high-speed cards are very cheap right now. Cables are costly AF, but the cards themselves, very cheap. 40Gb I think? Might be worth getting just to never have to think about the network speed of your storage server again. That’s enough for a Gen3 NVMe or SATA RAID config.


@alkafrazin I totally agree.

I recently replicated a 28 TB dataset between TrueNAS servers, all with spinning rust - one with 4 mirrored pairs, the other with a 7-drive Z2 - and it sustained 4.5 Gb/s the entire time (over 10 Gb Ethernet).

With 2.5 GbE it would have taken twice as long, and 1 GbE more than four times longer…
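The arithmetic checks out (a quick sanity check, treating the dataset as 28×10¹² bytes and assuming typical real-world rates for the slower links):

```python
# Time to move a 28 TB dataset at various sustained rates.
dataset_bits = 28e12 * 8  # 28 TB in bits

for label, gbps in (("10 GbE at the 4.5 Gb/s sustained above", 4.5),
                    ("2.5 GbE (~2.35 Gb/s real-world)",        2.35),
                    ("1 GbE (~0.94 Gb/s real-world)",          0.94)):
    hours = dataset_bits / (gbps * 1e9) / 3600
    print(f"{label}: ~{hours:.1f} h")
# ~13.8 h vs ~26.5 h vs ~66.2 h: roughly 2x and ~4.8x longer,
# as stated above.
```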

Actually, funny thing… 1Gb is actually 1.28Gb, it’s just called 1Gb to simplify things. Sometimes you even see “2.5G” cards or “5G” switches with 1.28G ports, advertising “total bandwidth” instead of per port, or advertising the full duplex of 1G as 2.5G total.

So, probably less than 4x longer, but still, yeah. Much nicer to have more than gigabit, even with spinning rust.

Interesting, but I don’t think that’s what I’ve seen historically from iperf3 tests over my lan… Gigabit is ~940 Mb/s, and 10G is ~9.4 Gb/s…


I must be mistaken then.
It’s strange that I couldn’t escape listings for 1.28Gb nics a year ago, and now I can’t even find anything about it. Did I slide into another parallel again?

I agree as well, based on multi-terabyte transfers both over 2.5 Gb and over dual 1 Gb SMB multichannel. To add to your points, a RAID10 of vaguely current-ish 3.5s supports ~1 GB/s read, and an eight-actuator RAID10 ~2 GB/s. While it’s not exactly for media, some of my essentially unoptimized code reads 2 GB/s single threaded. So dual 10 Gb isn’t necessarily unrealistic for keeping a processor from bottlenecking on data residing on remote 3.5s.
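A quick check on that last point (a sketch; the array read rates are the rough figures quoted above):

```python
# Can a dual 10 Gb link keep up with reads from remote 3.5" arrays?
link_gb_s = 2 * 10 / 8  # dual 10 Gb -> 2.5 GB/s of payload, ignoring overhead

arrays = {"RAID10 of current-ish 3.5s": 1.0,  # GB/s, per the figures above
          "eight-actuator RAID10":      2.0}
for name, gb_s in arrays.items():
    headroom = link_gb_s - gb_s
    print(f"{name}: {gb_s} GB/s vs {link_gb_s} GB/s link "
          f"({headroom:+.1f} GB/s headroom)")
# Both fit under dual 10 Gb, the eight-actuator case only just.
```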

I too think ticking all your boxes requires changing motherboards. There’s a fair number of X570 boards with eight SATA ports and, at least here, they’re not hard to get (contrary to the claim above). I’d second @edge-case on ASRock, though mind the specs as sometimes ports are provided with ASM1601s (particularly on Taichis, though the one linked uses the chipset). If you want to stay in mITX, consider the X570D4I-2T but, given the apparent need for a case with eight drive bays, I’d also look at PG Velocita. I haven’t seen good characterizations of X570 SATA performance, however.

As you’re probably aware, M.2 ASM1164s are mechanically fragile. If you don’t need six ports out of M.2, there is a more robust JMB585 implementation. Alternatively, with a well chosen ATX board you can get an ASM1164 3.0 x2 PCIe card and a 10 Gb NIC in without obstructing GPU airflow too badly.

There’s at least a couple 10 Gb M.2 options around, though I’ve no information on whether they’re any good and 10GBASE-T might be asking too much thermally from NGFF. Also I think OCuLink is a possibility. Might be a way to do it with MCIO, too.

Considering what most AM4 boards offer for slots and integrated NICs a 4.0 x1 or 3.0 x4 NIC seems easier, though.

Yep. They are covered here.

I recently found and ordered one for ~$85 USD on Amazon, and it’s on its way via a reasonably fast boat from China. Should get it within next week or so.

Hopefully it works OK, with good stability.
( I want it for my main Linux workstation - an ASRock X670E Steel Legend - so I can free up the PCIE x4 slot for a Thunderbolt 4 or 40 Gb Mellanox card).
Like most/all (reasonably priced) X670 'boards, it has way too much PCIE Gen5 and M.2, and too few physical x16 PCIE slots…

If you don’t go for a new board with more ports, get a bifurcation adapter (e.g. those sold by c-payne.com or their eBay user) to split the x16 slot into x8/x8 or x4/x4/x4/x4.

K3n.

I really want to use my Ncase M1 for the small form factor, with an external hard drive array like the Yottamaster PS500U3 Aluminum 5 Bay 2.5"/3.5" connected via the USB 3 port on the board, and bifurcation on one of my 2 M.2 slots to add a 10Gb NIC. It’s a pretty board, case, and CPU. I don’t want to have to get a whole new case and board; at that point I am shelling out for pretty much a new system.


I don’t think bifurcation – which would mean the chipset takes the x4 lanes of the M.2 socket and treats them as 2× x2 or 4× x1 – is available or configurable for the M.2 slots in any of the X570 ITX boards I’ve seen (I compared a few MB manuals for a build last year). I use a converter like this example at AliExpress with a cable extender to wire a PCIe 3.0 x4 twin 10G NIC to an M.2 port, preferring the Intel X550-T2 (for RJ-45) over the X550-DA2.

K3n.

Given the state of DAS enclosures, a case and board swap is likely more cost effective and potentially lower in total cost. Be mindful of reliability, thermals, transfer rates, latency, implications for the value of 10 GbE, and the total physical volume of the solution.

(In some ways your objectives appear better met by a NAS, though those don’t fit the project definition or budget.)

Yeah, I can’t recall coming across an M.2 slot that’s user-bifurcatable in this way either, on any board, whether X570 or ITX or not. There might be something like a single-to-dual NVMe adapter with a PCIe switch, but some quick googling isn’t turning up anything.