Recommended multi-bay HDD enclosure for ZFS

I’ve made a DIY NAS with a Raspberry Pi 4 and a couple of external USB 3.5" drives. I am planning on adding more drives, but I fear the mess of cables that comes with that. To avoid it, I bought an external enclosure with 5 bays (Orico 9558U3), but I had to return it: when trying to import the pool, it didn’t recognize the pool’s drives by their /dev/disk/by-id names, only as sdX devices.

I know I could import the pool anyway, but good practice for ZFS is to use by-id, both to avoid headaches when identifying damaged disks and to be able to insert the drives in any order.

With this enclosure, the output of ls /dev/disk/by-id was “serial number of the enclosure.DISK1”, and commands like udevadm could only see the enclosure’s data instead of the drives’ data.
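For reference, this is a quick way to check whether an enclosure passes the drives through transparently (the /dev/sda name is just an example):

    # a transparent bridge shows the drive's own model and serial here
    ls -l /dev/disk/by-id/

    # ask udev what identity it sees behind a given node
    udevadm info --query=property --name=/dev/sda | grep -i id_serial

    # smartctl can often talk through a USB-SATA bridge with -d sat
    smartctl -d sat -i /dev/sda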

I have an HDD docking station with 2 bays (generic brand from Amazon) that does not have this problem, so my guess is that it must be something related to the brand.

Does anybody know of a multi-bay enclosure that gives me direct access to the drives?

Thanks in advance

I didn’t know the Raspberry Pi had enough RAM to satisfy the recommended requirements for OpenZFS? Even the 8GB model isn’t enough given how big hard drives have gotten these days.

Did you mod the USB 3 port to expose the PCIe x1 lane and connect a SAS or SATA HBA?


Nope. I have 2x 14TB WD Elements connected through USB + 1x 1TB Samsung T5 (an SSD, but not in the pool), and it’s been working wonderfully.

OpenZFS runs fine on my 4GB Pi 4.
More RAM would for sure allow for larger caches, but it is not required.

At the moment, I’m only running a mirror of 2TB drives and a mirror of 4TB drives.
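If RAM is really tight, the ARC can also be capped; a minimal sketch, with 1 GiB as an arbitrary example value:

    # cap the ZFS ARC at 1 GiB at runtime (value in bytes)
    echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

    # and make the cap persistent across reboots
    echo "options zfs zfs_arc_max=1073741824" | sudo tee /etc/modprobe.d/zfs.conf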

@SgtAwesomesauce runs (or ran) a 5- or 8-bay HDD enclosure over USB 3. IIRC, speed is a little bottlenecked, but it works. Not sure if he uses ZFS though. :man_shrugging:

@Picatoste if you only have one pool, using /dev/sdX probably won’t be a problem in itself, as the drives could all be mixed up and ZFS would just deal with it, unless a USB boot drive is listed as a /dev/sdX device (SD cards list as /dev/mmcblk rather than /dev/sdX).

To get around it, you might have tried labelling the partitions and creating the pool with partition labels (/dev/disk/by-partlabel?), but I would be more concerned about whether the enclosure is doing anything “clever” with the drives instead of passing them through raw.
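That labelling approach would look roughly like this; a sketch only, with made-up device paths and labels, and note that sgdisk --zap-all wipes the disks:

    # give each drive one GPT partition with a human-readable name
    sgdisk --zap-all -n 1:0:0 -c 1:tank-a /dev/sdb
    sgdisk --zap-all -n 1:0:0 -c 1:tank-b /dev/sdc

    # then build the pool from the labels instead of sdX names
    zpool create tank mirror /dev/disk/by-partlabel/tank-a /dev/disk/by-partlabel/tank-b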

And I much prefer to use /by-id/ too, even if I have to use the WWN, which gets screwy with some systems flipping characters.

Yep, I’m on ZFS, it’s an 8-bay, but frankly, I wouldn’t recommend it.

Get thunderbolt if you can.


Thunderbolt? for a pi4? :gigathink:

Sounds like Apple talk to me… :apple:

But thanks for the heads-up not to go 8-bay.

What enclosure are you using? Does it allow you to see the drives’ data?

The one I bought is unavailable, but this is in stock:

yeah, it’s JBOD.

Thank you. I am going to take a look at it then.

I have also seen the lack of /by-id/… links with new drives attached via USB.

Simply hand the whole drive(s) to “zpool create …” to create the pool, and ZFS lays down its own partitioning on the drives you give it (a GPT label with a data partition plus a small reserved partition).
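For example (pool and device names made up for illustration):

    # hand zfs the whole drives; it lays down its own GPT label
    zpool create tank mirror /dev/sda /dev/sdb

    # each drive should now show a large data partition plus a small reserved one
    lsblk /dev/sda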

Next, you can force the new “by-id” references to be used by ZFS if you:

  1. export the pool, and
  2. re-import the pool like this:

zpool import -d /dev/disk/by-id/ my_pool_name

You only have to do the above once, since ZFS remembers the last device references used…
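Putting it together as one runnable sequence, with my_pool_name as a placeholder:

    zpool export my_pool_name
    zpool import -d /dev/disk/by-id/ my_pool_name

    # the vdevs should now be listed by their stable ids
    zpool status my_pool_name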
