Copypaste of my reddit thread, but L1T is more technical so I guess this should have been the place I started with…
Skip to the next part if you wanna get a shorter version of the rambling below.
I’m considering my options for expanding my NAS. I currently run TrueNAS SCALE, but with the way the charts keep breaking previous setups on updates, I’m considering swapping to Unraid.
Which of course leads me to reconsider other aspects of the NAS that I’m not quite satisfied with. Thus, my first step to remedy the situation: I’ve ordered one of those Rosewill L4500U chassis so I can fit 15 3.5" drives, but I also want room for other PCIe-heavy expansion.
Cue the topic of the title. I’ll need to find an HBA, I think (an LSI card, probably), to support the 15+ disks I expect to plug in (plus a few 2.5" SSDs snuck into the sides), but the cheap cards only seem to support 8 drives each. So that’s at least two cards.
I’d also like to slap on a few Optane P1600Xs as a special metadata vdev. (I think slow metadata lookups are why Windows Explorer keeps lagging out when I try to save images to a folder with a lot of unmanaged files at the moment?)
Thinking 3 or 4 of them, in either a RAID 10-alike or, more likely, a RAIDZ1. And maybe 2~4 NVMe SSDs for an application pool, or just a really fast pool. Apps can survive on the SATA SSDs fine, right?
Additionally, would possibly grab an Arc GPU for media transcoding.
And if 10Gbit NICs ever come down in price, I might grab one of those too. SFP+ or RJ45, I don’t care at the moment, but my switch only has the one 10Gbit SFP+ uplink as it is, so I’m not in a hurry on that. Probably just a 2.5Gbit card for now.
Anyways. Adding all that up, there’s at least 40 lanes being used on just the Optanes (4 × x4), NVMes (2~4 × x4), and the Arc (x16?), and I don’t know how many lanes the LSI cards would take, plus the extra x1 for the 2.5Gbit, not accounting for future expansion.
So. Physically, that’s around 5~6 PCIe slots already, and way more lanes than a desktop platform provides.
Am I overallocating things here? Overthinking stuff? Or is the only way to run a decently kitted-out NAS to abandon any notion of AM4/AM5 and go Threadripper with the higher TDP, or EPYC, or some sort of Xeon, to get the number of lanes required?
On a side note too: should I go with two 6-wide RAIDZ2 vdevs and three hot spares, or three 5-wide vdevs…?
So, adding onto the initial post, this is what I’m currently thinking after some of Reddit’s suggestions.
A recap, though. The NAS currently runs TrueNAS SCALE, but with how often TrueCharts stuff breaks on updates, I’ll either move to Unraid (still on ZFS), or stick with TrueNAS but spin up a VM and start learning Ansible for the apps. It handles media playback, document serving, and game server hosting.
Firstly, the special metadata vdev. Wendell’s video on Optane drives is what planted the initial idea to add that to the plans.
File Explorer is actually kind of sluggish, lurching forward when I try to save into a folder that contains many images. I’m hoping this would help with that issue.
The plan is a RAIDZ1 of 3 or 4 Optane P1600Xs, split between two carrier boards, to get the benefits of Optane with redundancy in case one fails. (I’d have much preferred the smaller M10s for the much cheaper cost, but the same video showed their performance makes them practically pointless, not to mention the capacity.) However, there was apparently some concern that a special metadata vdev of that size may not be big enough.
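For reference, a hedged sketch of what adding the Optanes as a special vdev could look like; the pool name and device names here are hypothetical. Mirrors are the commonly recommended layout for special vdevs (recent OpenZFS accepts raidz here too, which would match the RAIDZ1 plan):

```shell
# Sketch only: "tank" and the nvme* device names are placeholders.
# Two mirrored pairs of Optanes as the special allocation class.
zpool add tank special mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1

# Optionally route small file blocks to the Optanes as well:
zfs set special_small_blocks=32K tank
```

Worth remembering that a special vdev is pool-critical: lose it and the pool is gone, hence the redundancy.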
As I understand it, I should aim for roughly 0.3% of the main pool’s capacity, and I’m currently running the zdb -Lbbbs poolname
command to check the actual block size distribution. As it stands, I have a single 5-wide RAIDZ2 vdev of 14TB drives, with the possibility of 10 more drives. I can either expand the current vdev to 6-wide and add a second 6-wide vdev to the same pool, keeping the remaining 3 as hot spares, or leave the 5-wide vdev alone and add the other 10 as two more 5-wide vdevs, making a pool of three 5-wide RAIDZ2 vdevs.
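Using that ~0.3% rule of thumb, the special vdev target for the two layouts I’m weighing works out roughly like this (decimal TB/GB, 14 TB drives assumed; real ZFS accounting will differ a bit):

```shell
# Rough special-vdev sizing: usable = (width - 2) * drive_TB per RAIDZ2 vdev.
size_special() {
  # $1 = vdev width, $2 = vdev count, $3 = drive size in TB
  awk -v w="$1" -v n="$2" -v d="$3" 'BEGIN {
    usable_tb = (w - 2) * d * n
    printf "%d-wide x%d: %.0f TB usable -> ~%.0f GB special\n", \
      w, n, usable_tb, usable_tb * 1000 * 0.003
  }'
}

size_special 6 2 14   # two 6-wide RAIDZ2 vdevs
size_special 5 3 14   # three 5-wide RAIDZ2 vdevs
```

Either way that lands in the 330~380 GB range, so if the P1600Xs are the 118 GB model, a 4-wide RAIDZ1 (~354 GB usable) is right on the edge.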
I have also grabbed an LSI 9305-16i as recommended by Reddit, though I may run some SATA SSDs off the motherboard’s built-in SATA ports too.
I’m now on the fence for a dedicated gpu for transcoding in Jellyfin.
For the time being at least, I’d like to stay on DDR4-era hardware, and what I have on hand is what I’m working with, so I’m sticking with the unbuffered ECC from my current system. That’s also why I can’t use an Intel iGPU for transcoding, and am instead using the Vega iGPU in the 4650U, though I’m not confident it’s actually working.
I’ve also gotten a 2.5Gbit network card for now, and that only takes a single PCIe Gen 2 lane anyway.
There’s no intention to add a SLOG or an L2ARC. The general consensus seems to be that an L2ARC only helps if the ARC miss rate is over ~10%, which mine isn’t, and a SLOG is kind of useless outside sync-heavy workloads like NFS (though the charts do use NFS to access the vault).
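For what it’s worth, the ARC hit ratio can be read straight out of the kstats on Linux/SCALE; a small sketch (the hit_ratio helper name is mine):

```shell
# Compute the ARC hit ratio from a kstat-style arcstats file.
# On TrueNAS SCALE / Linux the live file is /proc/spl/kstat/zfs/arcstats.
hit_ratio() {
  awk '$1 == "hits" {h = $3} $1 == "misses" {m = $3} END {
    printf "%.1f%%\n", 100 * h / (h + m)
  }' "$1"
}

# On a live system:
# hit_ratio /proc/spl/kstat/zfs/arcstats
```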
Still, that does mean at least 4 PCIe cards, maybe 5 if I get an SSD vdev/separate pool for super-fast stuff as well.
This is as follows:
Two x16 PCIe Gen 3/4 slots, bifurcated x4/x4/x4/x4, for the 4 Optane disks + 4 NVMe drives on 2 carrier cards = 32 lanes
One x8 PCIe Gen 3 slot for the LSI 9305-16i = 40 lanes running total
One x8 or x16 (?) PCIe Gen 3/4 slot for the transcode GPU = 48/56 lanes
One x1 PCIe Gen 2 slot for the 2.5Gbit RJ45 NIC = 49/57 lanes (though if I do get a 10Gbit RJ45/SFP+ card in the future, it’s still x1 at Gen 4)
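That running total can be sanity-checked with quick shell arithmetic (worst cases assumed: four NVMe drives and an x16 GPU):

```shell
# Raw lane counts only; PCIe generation ignored.
optane=$((4 * 4))   # four P1600X at x4 each
nvme=$((4 * 4))     # four NVMe SSDs at x4 each (upper bound)
hba=8               # LSI 9305-16i is PCIe 3.0 x8
gpu=16              # transcode GPU, worst case x16
nic=1               # 2.5GbE NIC at x1
echo "total lanes: $((optane + nvme + hba + gpu + nic))"
```

Which lands at 57 lanes, matching the tally above, and well past what AM4/AM5 exposes.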
So, given this admittedly and knowingly rather overkill setup to begin with, what would be my options for the platform? Xeon? Threadripper? EPYC? Those should be the only options capable of handling that many lanes, but I’d also like to keep power draw down, obviously.
Any suggestions? Places I can whittle down to save on lanes/money/power that don’t make sense as specced? I’d like to hear the criticisms, since this is obviously the “I’d love this, but is it actually sensible?” kind of build, and I could use some help grounding myself back in reality.
I’m space-constrained too, which is why I’m not splitting the transcoding/charts off to a separate device. That’s certainly open as a potential option in the future, but it’s quite far away, and this project is going to be piecemeal upgrades as parts get cheaper and I try to snipe them.
Additionally, I’m in Canada, so suggestions to just get an R730XD for cheap ($300~$400 range) aren’t exactly viable, unless I’m missing something local, since the shipping cost to get a box like that here runs 2~3x the cost of the box itself.