Yeah, my recommendation would be (as long as you’re not using SSDs): an HBA with port multipliers, some sexy Norco case, a mid-range Ryzen and a GT 710 or so.
Would a Ryzen 7 3800X be over the top? 8 cores (and not too expensive a jump from a 3700X), decent enough if I could get VMs to boot in bhyve?
In my mind the 3800X is an in-between: the 3700X is almost the same performance for quite a bit less money, and the 3900X is a ton more performance for not that much more money. Cores and clocks are the obvious specs, but also take a look at the cache.
Hi all, I’ve done a bit of research to ensure that ECC works with the chosen mainboard.
The LSI controller chosen is a bit of ‘new territory’ for me, as it may require flashing to IT mode (not sure), and I’ll only know if things pan out once I reach that stage of the build.
- Ryzen 9 3900X
- Asus Pro WS X570-Ace mainboard
- x2 Crucial 16GB UDIMM ECC CT16G4WFD8266 (Crucial QVL)
- Noctua NH-U9S CPU cooler (Thanks to @noenken)
- GPU: Using a spare Nvidia card.
- 1x Product Model: IPC-4424 (4U Server Case with 24 Hot-Swappable SAS/SATA Drive Bay, MINI-SAS backplane) @ SGD 598
- 1x Short Depth Sliding Rails (Sliding Rail for 1U to 4U rackmount server case) @ SGD 66
LSI HBA Card + 10G Nic
- x1 LSI Logic Controller Card 05-25699-00 9305-24i 24-Port SAS 12Gb/s PCI-Express 3.0 (https://www.provantage.com/lsi-05-25699-00~7LSIG0VL.htm)
- x7 StarTech.com Internal Mini-SAS Cable - SFF-8087 to SFF-8643 - 1 m (Manufacturer Part# SAS87431M) (https://www.provantage.com/startech-sas87431m~7STR943H.htm)
- x1 Intel X550-T2 Converged Network Adaptor (https://www.provantage.com/intel-x550t2~7ITEN0LA.htm)
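On the IT-mode question from above: if the 9305-24i does turn out to need reflashing, the usual Broadcom/Avago procedure with the sas3flash utility looks roughly like this, run from a UEFI shell or FreeDOS. The firmware filename below is a placeholder — use whatever is in your card’s actual firmware package — and erasing flash on the wrong controller can brick it, so treat this as a sketch, not gospel:

```
sas3flash -listall                       # identify the controller and current firmware
sas3flash -o -e 6 -c 0                   # advanced mode: erase flash on controller 0
sas3flash -o -f SAS9305_24i_IT.bin -c 0  # write the IT-mode firmware image
sas3flash -listall                       # confirm the new firmware version took
```

Worth double-checking first whether the 9305-24i even ships with an IR mode; some of the newer Broadcom HBAs are IT-only out of the box, in which case no flashing is needed at all.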
With regards to choosing HDDs, I am considering
- 12x Western Digital 10TB Ultrastar DC HC510 SATA HDD - 7200 RPM Class, SATA 6 Gb/s, 256MB Cache, 3.5" - HUH721010ALE604 @ $293.50 each.
A single RAID-Z2 vdev of 12 drives, after accounting for slop space and a 20% free-space limit, turns the 120TB raw into about 70TB of usable space (~73% of raw is practically usable prior to the free-space limit). (Ref: https://www.wintelguy.com/zfs-calc.pl)
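If you want to sanity-check the calculator’s numbers, here’s a rough back-of-the-envelope version in Python. The slop and free-space fractions are my assumptions, and real ZFS allocation overhead differs slightly with recordsize and pool layout, so treat it as an estimate:

```python
# Rough RAID-Z2 usable-capacity estimate. Not ZFS's exact accounting:
# real allocation overhead varies with recordsize and pool layout.
def raidz2_usable(drives, drive_tb, slop=0.02, keep_free=0.20):
    data_drives = drives - 2                   # RAID-Z2 spends two drives on parity
    raw_tb = drives * drive_tb                 # vendor TB (decimal)
    tib_per_tb = 1000**4 / 1024**4             # convert: ZFS reports binary TiB
    data_tib = data_drives * drive_tb * tib_per_tb
    before_free = data_tib * (1 - slop)        # minus ZFS slop-space reserve
    practical = before_free * (1 - keep_free)  # keep 20% free for performance
    return raw_tb, before_free, practical

raw, before_free, practical = raidz2_usable(12, 10)
print(f"{raw} TB raw -> {before_free:.1f} TiB usable, {practical:.1f} TiB practical")
```

For 12x 10TB that lands around 89 TiB usable and ~71 TiB after the 20% free-space rule, the same ballpark as the wintelguy calculator’s ~70TB.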
The reason I’m considering 12x drives is because this means I can expand in future by adding another 12x into a single Norco case.
70TB is ~3x my current used space (at least on my single FreeNAS box, but I have data offloaded to my older Synology units).
I want to be able to re-organise my data in terms of:
- better organisation
- deleting old archives that aren’t really needed, etc.
- Should I go for the cheaper 8TB drives?
- Are 12x drives overkill compared to, say, 8x drives?
Put a U9S in there instead. You get a better fan, more heatpipes and at least the same amount of surface area. Not to mention it fits the airflow path a lot better.
The 4U height is ~177mm vs 125mm for the U9S, so I guess 52mm of clearance should be OK. Thanks!
More than fine. I’m about to test out a NH-U12A in a 4U pretty soon. Will let you know.
I think the case can be made for different things depending on the work load you plan to use them for.
E.G. more vdevs for better I/O might be preferred …
I assure you there is QA/QC, but it’s not necessarily a high priority to make sure it works on desktop platforms. I don’t think they have a Threadripper system lying around to test every FreeNAS release on. Send me a TR, I’ll work on it
FWIW I just briefly looked into it and I don’t see any commit messages mentioning Threadripper on FreeBSD stable/11, so you might have to wait for FreeNAS 12. Note that FreeBSD development is currently on 13, and 11.2 EOL is October 31.
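For anyone who wants to repeat that check, it’s just a grep over the branch’s commit log; this assumes a local checkout of the FreeBSD sources (e.g. the GitHub mirror) with a stable/11 branch available:

```shell
# List commits on stable/11 whose message mentions Threadripper
# (case-insensitive); an empty result means no such fix landed there.
git log --oneline -i --grep='threadripper' origin/stable/11
```

Commit messages aren’t a guarantee either way, of course; a generic AMD topology fix wouldn’t show up in this search.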
You joke but I could…the motherboard is the harder part
Every TR system of every generation dies at the place in my screenshot, with every major mobo brand. I can try it on EPYC, but I’ll be surprised if it doesn’t die the same way.
Edit: oh, and FreeNAS based on FreeBSD 12 was fine for booting, but a lot of the GUI is broken.
OK, that answers what my next question was going to be.
FreeNAS 12 is nowhere near finished at the moment, but it’s good that it at least boots. Just this week I’ve been struggling to track down a boot loader or firmware bug that’s preventing me from even booting the installer on my hardware, haha. Tedious, painful work.
I could probably get you in via IP KVM. I think there are some fixes for TR backported by FreeBSD devs that may be missing. One installer bug is that there is no alternative kernel anymore.
OK, of course I don’t have to do this but it’s just too tempting not to call you out on your own forum for being off topic. xD
Why pick hardware that high-end? The FreeNAS Mini XL+ runs an octa-core Intel C3758 and can do 10Gb, though it’s only an 8-bay.
My FreeNAS storage is a Dell R710 with 72GB RAM and an X5675 @ 3.07GHz, plus an MD1200.
The CPU is overkill; I don’t come close to pinning it even with encryption and compression turned on over iSCSI.
I’m only running this equipment because I picked it up for next to nothing, maybe $400 with drives and upgrades.
If you want Dell, look into an R720xd or something like that.
If I were building something, I would get that Norco case; I’ve used it and it’s not bad. I upgraded the fans because the stock ones were poor on airflow, causing the drives to run warmer than I liked. And a Ryzen 5 2600? Or a Threadripper 1900X? Both are a fraction of the cost of a Ryzen 9 3900X. Grab a motherboard with onboard 10G, an LSI SAS9220-8i flashed to IT mode, and a SAS expander.
For spinning rust that’s more than enough.
Currently it doesn’t boot the FreeNAS installer.
If it’s just pure storage without encryption or compression, then you really won’t need an incredibly new or beefy CPU.
Though you’ll still want an enterprise platform for ECC and all the extra DIMM slots.
That’s a real bummer.
TL;DR: is the LSI 9305-24i backwards compatible with SAS2?
Something just came up: does anyone here know whether the LSI 9305-24i HBA card is compatible with a SAS2 expander backplane (like on the Norco cases)?
The spec sheet says the card is designed for SAS3, and this incredible build by Jason uses this card, but he’s pairing it with Supermicro head units (which the Supermicro site confirms are SAS3).
FYI, the Norco 4224 has 6x SFF-8087 (Mini-SAS) connectors on the backplane. It’s pretty hard to find out whether the 4224’s backplane is SAS2 or SAS3, though chances are it’s the former. Neither the Norco site nor the sales rep will confirm or deny this.
@SgtAwesomesauce would you perhaps have any idea?
TBH I would have preferred to go with older X99 (as those mobos have more than just 3x full-length PCIe slots), which means I could run a GPU, a 10G NIC, and 3x LSI 9211-8i cards, with ECC support via Xeon; however, many here have pointed out that it’s much older tech in general.
I don’t want to go Xeon scalable generally due to the high cost, and even though the Ryzen 9 3900X is a fairly high-end CPU, it does provide the option of running VMs in bhyve should the need arise.
My main concern right now is being bottlenecked (physically) by the 3x PCIe slots on X570 boards.
X470 Taichi Ultimate has the Aquantia 10Gb RJ45 onboard. If you can use that in your network, it would free up one of the PCIe slots.