Full background here, but I’m shopping for a machine to run an Unraid file server and host a number of containers:
- MSSQL for small point of sale (3 concurrent users)
- Caddy reverse proxy
- Static business website (max 5 concurrent users)
- Discourse forum for a niche CAD community (20 concurrent users)
- Facial recognition (4 IP cameras).
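Since Caddy is fronting both the static site and the forum, a minimal Caddyfile sketch for that layout could look like the following. The domain names and the `discourse` upstream name are placeholders, not anything from the actual setup:

```
# Hypothetical domains; swap in the real ones.
example-business.com {
    root * /srv/www
    file_server
}

forum.example-business.com {
    # Discourse container listening on port 80 inside the Docker network
    reverse_proxy discourse:80
}
```

MSSQL wouldn't go through Caddy at all; point-of-sale clients would talk to it directly over TDS on 1433.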
I was thinking about copying Wendell’s Unraid GN build, but I don’t think the ASRock Rack X470D4U2-2T has enough PCIe lanes (x8 GPU, x8 SAS HBA, and x1/x1 Coral M.2 E).
Things I already have:
Things I need to buy:
Sounds decent. Have you looked at decommissioned enterprise gear? It often has more PCIe lanes and more expansion for RAID, and it usually comes with all the extra parts you need for a server.
A lot of what you want comes included, and at the same price point.
I had not thought of that. It’s a little out of my wheelhouse, but I was looking at them until I realized the 3800X I already have matches roughly $2k worth of 2016-era dual-Xeon compute.
If you download the manual from ASRock, there’s a block diagram that shows which slot connects to what.
The two long slots and the network controller connect directly to the CPU.
The M.2 and x1 slots go through the chipset.
I’m not sure why you think that’s not enough.
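To make that concrete, here’s a quick sanity check of the lane budget as described: the two long slots run x8/x8 off the CPU, and the Coral’s x1/x1 rides the chipset-connected M.2. The slot widths are read off the block-diagram summary above and should be treated as assumptions, not a spec sheet:

```python
# Assumed slot widths on the X470D4U2-2T (from the block diagram summary).
slots = {
    "slot6 (CPU)": 8,           # long slot, GPU target
    "slot4 (CPU)": 8,           # long slot, SAS HBA target
    "M.2 E-key (chipset)": 2,   # exposed as x1/x1 for the dual-TPU Coral
}

# Card -> (slot it goes in, lanes it wants)
cards = {
    "GPU": ("slot6 (CPU)", 8),
    "SAS HBA": ("slot4 (CPU)", 8),
    "Coral dual TPU": ("M.2 E-key (chipset)", 2),
}

for name, (slot, need) in cards.items():
    ok = need <= slots[slot]
    print(f"{name}: needs x{need}, {slot} provides x{slots[slot]} -> {'OK' if ok else 'NO FIT'}")
```

On those numbers, everything fits: the HBA and GPU each get a full x8, and the Coral never needed more than x1/x1 to begin with.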
PiKVM might be a decent alternative to built-in IPMI for small home servers - it opens up a lot more options for building servers out of desktop parts.
Yeah, it could be, but I’m not sure. The Coral card’s spec says it needs an M.2 E-key slot with two PCIe lanes exposed as x1/x1.
I also don’t know how a Coral card differs from my old Quadro. It’s something that I want to play with and Coral was the only option listed for hardware acceleration.
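For what it’s worth, if the NVR software in question is something like Frigate (a common pairing with the Coral; this is an assumption, the post doesn’t name it), the dual-TPU card shows up as two PCIe devices and gets addressed as two separate detectors in the config:

```yaml
# Hypothetical Frigate detector config for a dual PCIe Coral.
detectors:
  coral1:
    type: edgetpu
    device: pci:0
  coral2:
    type: edgetpu
    device: pci:1
```

The big difference from a Quadro is that the Coral only runs small quantized TensorFlow Lite models for inference; it isn’t a general-purpose accelerator, which is why it gets by on x1 per TPU.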
I went ahead and got a used SuperMicro server for the time being. Hopefully this works until all the supply chain issues are fixed…
I saw that it had support for an x8/x8/x4/x4 config but didn’t really think about there not being an x16 slot for the GPU. Googling for an x8 GPU, I saw people trimming down x16 cards to fit an x8 slot. I went the other way with it:
Yeah, it used to be a pretty common thing to have open ends on x8 and smaller PCIe slots just in case you wanted to use a larger card in them. I don’t know why board makers stopped doing that.
Usability has been remarkably absent from this whole project. I’m down two drives until I get a new case. One drive blocked the PCIe power connector on the GPU, and the GPU blocked the SATA connector on the other.
There’d be plenty of space if the slots on the drive bay let me seat them forward by an inch.