NVMe Thunderbolt ZFS

Long-time ZFS user here, looking for self-inflicted trouble.
I have a rackmount system with 10 spindles and a bit of flash that is as bulletproof, and as clunky to start, as a tank. It only runs a couple of hours a month, but I really want a more everyday data dump location - with ZFS. 10TB of usable capacity should do; I’m not the data hoarder I used to be (relative to available HDD capacities). I’m also not demanding high speeds: any storage with SSD-like access times and SATA3 sequential transfer speeds will absolutely do for me. Furthermore, I don’t plan on running this 24/7 - more like an hour a couple of times a week when I feel like it - so power consumption and noise aren’t really deciding factors for me.
Main machine is a Thinkpad 25 Anniversary that has one TB3 40Gb/s port. If needed in the network, I’ll probably SMB it via an ac-WiFi or 2.5G direct ethernet link.

So I was thinking about NVMe storage via Thunderbolt recently.
I guess there are four ways to do it:

  1. The “my dad pays for it” way - buy an iodyne Pro Data device at the desired capacity and be done with it. A 12-bay NVMe case, not sold as a barebone, but it can be had in a 12TB flavour at 5k (US), 24TB at 7.5k, or 48TB at a cool 17.5k. Unfortunately, my dad wouldn’t pay for that even if he were still alive.

  2. The correct way - buy a quad-bay NVMe DIY TB3 case. Like the OWC 4M2 or StarTech M2E4BTB3 ones, which are somewhat readily available. Or get the obscure ones, like the Netstor (Taiwanese) NA622TB3 that seems EOL, or the JEYI (Chinese) ThunderRate-4*NVME that’s probably also EOL, or maybe the Trebleet (Malaysian) TRE-8145PLUS or 8145PRO that aren’t sold here but apparently can be shipped directly by the manufacturer. Either way, fill 'em up with new 4TB drives and there you go. If more capacity is needed, these can be daisy-chained, or one could move to 8TB drives one day, once their per-GB price settles down to what 1/2/4TB units currently go for. I’d prefer two cases for 8x2TB on Z2 instead of 4x4TB on Z1, but that’s just me. They’re like 350€-400€ each and apparently terribly noisy due to the form factor. Maybe there are low-power NVMes that could reduce the need for cooling; I don’t know yet.

  3. The clearly-not-correct way - there are 4-way NVMe USB 10Gb/s docking stations available on AliExpress for about 60 bucks delivered. I don’t really know how they do that - maybe a 10Gb/s USB hub and four individual 5Gb/s converters attached to it on a single PCB - but I’m really curious about this. Still, the open construction with four NVMe sticks flapping in the breeze is likely not the mark of a quality ZFS setup, but it will be nice for cooling, gathering dust, and contact issues. There are also 4-bay NVMe duplicators in the same form factor, but these are in the 350€ ballpark and it’s not entirely obvious what makes them that expensive. Not that a hardware cloning tool is in any way helpful here, but those things exist and are, according to Amazon reviews, sometimes confused with regular JBOD-ish devices without any processing capabilities - which are what we want for ZFS.

  4. And the please-don’t-do-that way - individual NVMe-USB converter cases (5Gb/s) for <10 bucks each, bundled into groups of four to however wide your 10Gb/s USB hub is (those start at 20€ for classic 4-port units), and two (or three!) of those meet at one Club3D CSV-1580 TB4 hub that offers three downstream ports at USB 10Gb/s speeds each, at the price of just 150€. That solution could house dirt-cheap garbage 480GB/500GB/512GB/960GB/1TB drives from all the shady corners of eBay; it’ll be an unsellable rat’s nest of single cases, wiring, hubs and fans, and boy will it not be portable any more. But as someone who has been running packs of 2TB, 3TB, 4TB drives for quite some time now, that would be fully in line with my penny-pinching agenda when it comes to drives. As much as I hate the consequences, I quite like the concept.
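For what it’s worth, here’s the back-of-the-envelope math I keep doing for options 2-4 against my own targets (10TB usable, roughly SATA3 sequential speeds). The function names are mine, the RAIDZ figures ignore ZFS metadata/slop overhead, and the ~20% USB protocol overhead is an assumption, not a measurement:

```python
# Rough sanity check of the options above.
# Assumptions (mine): RAIDZ usable capacity ~= (drives - parity) * drive size,
# ignoring ZFS metadata/slop; USB throughput ~= link rate minus ~20% overhead.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Rough usable capacity of a single RAIDZ vdev, ignoring ZFS overhead."""
    return (drives - parity) * drive_tb

def usb_mb_per_s(link_gbps: float, drives_sharing: int, overhead: float = 0.20) -> float:
    """Per-drive sequential MB/s when `drives_sharing` drives saturate one uplink."""
    return link_gbps * 1000 / 8 * (1 - overhead) / drives_sharing

# Option 2: one case with 4x4TB RAIDZ1 vs two cases with 8x2TB RAIDZ2 -
# same usable capacity, but Z2 survives any two drive failures.
print(raidz_usable_tb(4, 4, parity=1))  # 12 TB usable, one drive of redundancy
print(raidz_usable_tb(8, 2, parity=2))  # 12 TB usable, two drives of redundancy

# Options 3/4: four drives behind one shared 10Gb/s USB uplink.
print(usb_mb_per_s(10, drives_sharing=1))  # ~1000 MB/s - one busy drive clears SATA3
print(usb_mb_per_s(10, drives_sharing=4))  # ~250 MB/s each when all four are busy
```

So both Z1 and Z2 layouts land comfortably above the 10TB target, and the cheap USB route only falls below SATA3-class sequential speed when several drives are hammered at once - which ZFS will happily do during a scrub or resilver.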

Soo…has anyone actually set up one of those options and wants to share their pain?

I personally would run it 24/7. It shouldn’t use much while idling, and having it up all the time allows for automatic maintenance and failure detection. Bringing it up and down adds wear, plus letting the system sit powered off for long periods isn’t good for it.

Best to set it and mostly forget it. I think some sort of NAS is the right idea.


Have you considered building an iSCSI server?
That’s what I’m currently considering myself, so I might be misled in what I’m suggesting.

Great question!

I have bought two used OWC 4M2 units so far, plus three used SSDs (970 Evo Plus units with onboard DRAM, so they shouldn’t clog up the TB link), a short USB4/TB-compatible cable, and a Y cable for powering everything from a single PSU.
Used-market pricing, however, is going up rather than down at the moment, so I’m stuck with three drives for now. Hope it’ll work out in the end; I’m busy with other things anyway.
