16x SATA disk NAS for luggable backup

I recently came into possession of 20 2TB SSDs and want to turn them into a 16-disk NAS, probably running TrueNAS SCALE. However, I'm very unsure which disk controller or motherboard I should use for this kind of build, as I'd like to keep it physically as small as possible so I can make it a portable backup system.

All of this is so that I can easily update a backup that will be stored off-site, so that when something goes horribly wrong, I will at least have a backup to return to.

10Gb networking would be nice, but it's not required.

I was looking at mini-ITX boards, but most of them seem to be very limited in PCIe lanes or cost a lot. I could use a normal ATX-sized one, but then the size pretty much doubles. Most NAS appliances are built for 3.5" disks, usually don't even fit 8 disks let alone 16, and cost way more than a DIY build would.

I did see a NAS build on Reddit using Dell parts (a Dell H310 in IT mode plus a Dell T710 SAS backplane): https://www.reddit.com/r/homelab/comments/h10m1q/my_covid_woodworking_project_is_finished_8_bay_nas/

Which left me wondering: can I just get whatever motherboard, slap some disk controller on it, attach a backplane to that, and end up with a working solution? I can't seem to find out whether there are any limitations, like Dell controllers only working with Dell backplanes, because normally no one would even try to do that.

So in short:

  • Can I mix and match backplanes and controllers from different manufacturers and expect it to work? (I mean, I expect this to be a no, that's dumb.)
  • Are there any limitations in TrueNAS regarding the above?
  • Are there some go-to parts I should be looking at for a project like this?

I am so in on this…

A Broadcom 9600-16i will give you the bandwidth necessary to push the drives; it requires one x8 PCIe 4.0 slot.
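
For a quick sanity check on that bandwidth claim, here's some back-of-the-envelope Python (the per-lane and per-port figures are the usual theoretical ceilings, not numbers from this thread):

```python
# Rough check: does one PCIe 4.0 x8 slot have headroom for 16 SATA III ports?
PCIE4_GBPS_PER_LANE = 1.97   # ~usable GB/s per PCIe 4.0 lane after encoding overhead
SATA_GBPS = 0.6              # SATA III ceiling per drive (6 Gbit/s line rate)

slot = 8 * PCIE4_GBPS_PER_LANE    # ~15.8 GB/s for an x8 slot
drives = 16 * SATA_GBPS           # ~9.6 GB/s if every port is maxed out at once
print(f"slot: {slot:.1f} GB/s, 16x SATA worst case: {drives:.1f} GB/s, "
      f"headroom: {slot - drives:.1f} GB/s")
```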

Anything with a PCIe 4.0 x8 slot and ECC RAM.

A lot of industrial boards in mini-ITX are available.
More reasonably, there are tiny cube cases that take micro-ATX.

find a board with ECC support and send it

Add-in NIC so you can upgrade later.
16 drives will want 5-6 drives of parity:
2 drives of parity per 6 drives,
1 drive of parity for the last 2, or use them as hot spares if the drives are used.

That's 10-11 SATA SSDs running full tilt: 3-6 GB/s of transfers.
25-40Gb interfaces would be desired for true I/O enjoyment (rough math sketched below).
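
To put rough numbers on that, here's a back-of-the-envelope sketch; the ~550 MB/s per-drive figure is an assumption for a healthy SATA SSD, and the vdev widths are just a few possibilities in the spirit of the layouts above, not a recommendation:

```python
# Usable space and best-case streaming throughput for 16 x 2 TB SATA SSDs.
DRIVE_TB = 2.0
DRIVE_MBPS = 550   # assumption: sequential speed of a healthy SATA SSD

layouts = {
    # name: (data drives, parity drives, spares)
    "2x raidz2, 6 wide + 2 spares": (8, 4, 2),
    "2x raidz2, 8 wide":            (12, 4, 0),
    "1x raidz3, 16 wide":           (13, 3, 0),
}

for name, (data, parity, spares) in layouts.items():
    usable_tb = data * DRIVE_TB
    gbytes_s = data * DRIVE_MBPS / 1000   # aggregate best case
    print(f"{name:30s} ~{usable_tb:4.0f} TB usable, "
          f"~{gbytes_s:.1f} GB/s (~{gbytes_s * 8:.0f} Gbit/s on the wire)")
```

Even the conservative layout lands well past what a 10Gb link can move, which is where the 25-40Gb suggestion comes from.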

High-density 2U servers have 24+ hot-swap bays on the front, but that's not really what you're describing here.

Yep

prolly not

You will likely have to use daisy-chained SATA power plugs from Molex to power all the drives in a backpackable format.

The biggest supplier to OEMs is Broadcom (formerly LSI, but they got gobbled up).

You haven't even come close; I've seen setups saturate 100 gigabit in prod without flinching. Disk enclosures and shelves totaling hundreds of drives: not a problem.

The above-mentioned Broadcom card, and a good power supply, as you're talking 400 watts just in drives.


That's actually where I was considering getting the backplane from, the same way that Reddit user did, but using the 2.5" 2U chassis ones, which come in 8-, 16- and 24-disk varieties. Those have both the power and SAS/SATA connections nicely in the back, but you need to power them using some non-standard Molex connector, which is fine: just some soldering and sleeving connectors.

I was thinking more about the disk controller talking to the backplane: do those use some proprietary per-manufacturer protocol, or are they just SAS, so that you only need to run, for example, a mini-SAS cable from the disk controller to the backplane? Because if I use a backplane, I also get the drive sleds, which means less pain when I need to swap disks. I learned that lesson from my old spinning-rust builds, where no disk is labeled and only half of them are in hot-swap bays.

Also, about the case: I'll just build the entire thing into a Pelican-clone case so that it's easy to move around, which is the main reason to keep it small.

Also, why 5-6 drives of parity? Shouldn't a RAIDZ3 pool suffice, since SSDs that aren't being written to should be less likely to die than spinning rust, and rebuilds should take less time and be far less demanding on them than on traditional drives? The only thing I know I don't know about them is how long they will retain data after being powered off.


Just SAS,
and most OEMs use LSI controller cards, so you may even score a card in the process.
I think a drive array shoved in the front of a narrow case would be sick.

An order of magnitude simpler, and if you can use tool-less caddies, EVEN BETTER.

I hate putting 4 screws into 12 drives, much less 24

You need to scale parity with the total number of drives in the pool.

In RAIDZ3: if 1 drive dies and another fails while you are rebuilding (which is most often how this happens, and why RAID 5 is not allowed in enterprise deployments above 3-drive arrays), you can fall back to a second parity drive, or even a third.

But since all the drives are likely from the same manufacturer, the same model, and even the same lot with the same run time, the chance of multiple drives dying simultaneously past the initial 3-month mulligan window is an order of magnitude higher.

When we're talking 16 drives all running at once: if 1/8 of the drives, or just 0.125 of the drives remaining, die,
the pool is irreversibly lost and gone.

You can begin a forensic recovery, but that involves a pro and a badass GPU to brute-force the missing data, as metadata hashes are not sufficient to recover lost data.

The rule of thumb is 2 drives of redundancy per 6 drives.
That scales beautifully: 12 drives - 4 parity = 8 usable drives,
and likewise for 24, 36, 48, 60, 96…
That is why enclosures come in multiples of 12, beyond packaging constraints.
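
To put numbers on the "2 per 6" rule versus a single wide RAIDZ3, here's a small brute-force sketch. The two layouts are just the options being debated in this thread, and it assumes failures strike uniformly at random, which understates the correlated-failure risk described above:

```python
from itertools import combinations
from math import comb

def survival_probability(vdevs, failed):
    """P(pool survives `failed` simultaneous drive losses, chosen at random).

    `vdevs` is a list of (width, parity) tuples; the pool is lost as soon as
    any single vdev loses more drives than its parity level.
    """
    total = sum(width for width, _ in vdevs)
    survived = 0
    for dead in combinations(range(total), failed):
        losses = [0] * len(vdevs)
        for drive in dead:
            offset = 0
            for i, (width, _) in enumerate(vdevs):
                if drive < offset + width:
                    losses[i] += 1
                    break
                offset += width
        if all(lost <= parity for lost, (_, parity) in zip(losses, vdevs)):
            survived += 1
    return survived / comb(total, failed)

layouts = {
    "2x raidz2, 8 drives each": [(8, 2), (8, 2)],
    "1x raidz3, 16 drives":     [(16, 3)],
}
for name, vdevs in layouts.items():
    for dead_drives in (2, 3, 4):
        print(f"{name}: survives {dead_drives} random dead drives "
              f"{survival_probability(vdevs, dead_drives):.0%} of the time")
```

What the sketch shows: RAIDZ3 is guaranteed to ride out any 3 failures but is dead at exactly 4, while the split RAIDZ2 layout can sometimes survive 4 at the cost of occasionally dying at 3.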

When dealing with used drives (you haven’t specified), you take the precautions and trade upfront cost for total usable storage.
BUT you gain peace of mind.

Hot spares are really only used in mission-critical, high-uptime deployments, and should really just be added to the pool as redundant drives.

A master list of serial numbers and their locations inside the case is ideal for recoveries, as backplanes tend to only blink when teamed with their corresponding OEM's motherboard and, more specifically, its BMC.

MAN!!! Wish I was that lucky! All I got today was a medical bill for $135.00 for an office visit and some otc drugs!


“management has even provided you with screws, why are you complaining”

RAIDZ3 is 3 disks of redundancy, so it would take 4 dead disks to actually lose data; 1/4 of the drives need to die for data to be gone, if I'm counting correctly.

Also, thankfully the drives have wildly different manufacturing and usage dates, so that should prevent a cascading failure right out of the gate; but because some of them are heavily used, some will die sooner rather than later.

I assure you, I’m going to be more in the hole than you are after I’m done with this project.


Every time.
The one time I received proper countersunk screws, they were the wrong thread for the HDDs.

Perfect… though you should note the array will be limited by the speed of the slowest drive (no free lunch).

Definitely recommend doing some testing on the drives to see if any are notably worse than others.
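
One quick-and-dirty way to do that (just a sketch, not anyone's official procedure): time a cold sequential read from each drive and flag the outliers. The device names below are hypothetical, it needs root, and smartctl wear/error attributes or a proper fio run with --direct=1 are the more rigorous checks.

```python
import time

# Sequential read test to spot unusually slow drives.
DEVICES = [f"/dev/sd{c}" for c in "abcdefghijklmnop"]   # hypothetical 16 drives
CHUNK = 16 * 1024 * 1024        # 16 MiB reads
TOTAL = 4 * 1024**3             # read the first 4 GiB of each drive

for dev in DEVICES:
    try:
        with open(dev, "rb", buffering=0) as f:
            start = time.monotonic()
            read = 0
            while read < TOTAL:
                buf = f.read(CHUNK)
                if not buf:
                    break
                read += len(buf)
            elapsed = time.monotonic() - start
        print(f"{dev}: {read / elapsed / 1e6:.0f} MB/s over {read / 1e9:.1f} GB")
    except OSError as exc:
        print(f"{dev}: skipped ({exc})")
```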

Yea … your project is a hell of a lot more fun than what I spent my money on!


Are these SSDs 2.5" SATA drives? If so, any chassis that accommodates external 5.25" bays should work. I have 24 x 2TB SATA SSDs running in three backplanes. If you're interested, I have the parts and pictures over on my post: FrenziedManbeast's HomeLab


Be sure to run DiskFresh, and schedule it to run 1-2 times a year, to keep the cell voltage where it should be. You don't want data sitting on NAND for years only to realize it's corrupted when you go to read something way down the line.
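
DiskFresh itself is a Windows tool; on a TrueNAS box the rough equivalent would be a scheduled scrub to catch decay, plus an occasional rewrite pass so the copy-on-write filesystem lays the data down in fresh blocks. A very rough sketch of the rewrite idea follows, with a hypothetical dataset path; it ignores snapshots (which will balloon in size) and must not be pointed at files that are actively being written:

```python
import os

ROOT = "/mnt/tank/backups"      # hypothetical dataset to refresh
CHUNK = 8 * 1024 * 1024

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "r+b") as f:
                while True:
                    pos = f.tell()
                    buf = f.read(CHUNK)
                    if not buf:
                        break
                    f.seek(pos)
                    f.write(buf)      # same bytes, but freshly written blocks
                f.flush()
                os.fsync(f.fileno())
        except OSError as exc:
            print(f"skipped {path}: {exc}")
```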


Nope. Those are gifted 2TB drives; they most likely top out around 60-100 MB/s because of their age.
Adjust the best-case numbers accordingly.

Did you carry all 20 drives in one go as you received them? If you did, you don't need any additional workout.
Even 8-16 HDDs are HEAVY. Not impossible to lug around, but not desirable.

Also, if you lug this around, will all the sites you plan on connecting to have 10Gb networking? Or 25/40Gb?
If not, a better luggable NAS consists of a single 20TB HDD that connects via 1Gb Ethernet. :stuck_out_tongue:

I know - it’s not as much fun though. Go ahead - I’ll watch from afar.


The SSDs are less than 4 pounds total. I’m pretty sure even an SFX PSU will weigh more.


I was wondering what all the excitement was about - I somehow missed the SSD bit.


What’s your budget for the mobo/CPU/HBA/NIC combo?

ASRock Rack has the ROMED4ID-2T. While the board is pricey, second-hand Rome/Milan 8-cores are relatively cheap, and you get 16 SATA ports (from 2x SlimSAS 8i ports) and dual 10G LAN included, so you don't need to buy an extra HBA/backplane and NIC.

As well as the 16-lane PCIe 4.0 slot, there are an extra 32 PCIe 4.0 lanes you can break out from the MCIO connectors, all supporting x4 bifurcation, so you could split them into 8x NVMe if you wanted.

I use one of these with an EPYC 7313P in a NAS; it idles at 30W and 25°C with the smallest SP3 cooler Noctua makes.


Yeah, this is gonna get DUMB with the IOPS and throughput.
Definitely tuned in.

@purkkaviritys post pics of the drives so we can see this nastiness

They actually look far less impressive in person than in text, as 2.5" drives don't really take up much space.

After doing some thinkening, I'm starting to lean towards getting one of those AliExpress X99 systems for this (as if this build wasn't already going to go places), so that I can keep the cost in check. I already run 2 Proxmox hosts and 2 TrueNAS instances, so there is little to no need for extra compute power. Also, since my fastest network at home is 10G, there is little point in going faster than that: while it would be fun, it would be of little use and surprisingly expensive compared to 10G.

860 or 850 evos?

40 gig to the switch,
so all 4 hosts can dig in at 10 gigs at a time.

I am in the market for a switch with 6+ SFP28 25-gig ports, but it isn't looking promising.


yes

