ZFS with HBA or Onboard SATA

I do not understand; 0.8.4 is the version that is currently out. Wouldn't new changes come in the next version, 0.8.5?

Sure, but I am not hesitant about the setup itself anymore. I am hesitant for the following reasons:

  1. It was mentioned that an encrypted dataset can leak metadata, and I do not know what that includes. I was still unable to find a source that explains in detail what information can leak. I would like to know this in advance.

  2. The fact that I can only use either a passphrase or a keyfile. When I use a keyfile, I need to somehow make and manage secure backups of it outside of my PC, because if I ever fuck up my system and have to reinstall, I would need the keyfile to mount the encrypted dataset again. A passphrase means I always need to enter it after every boot. Not gonna happen. (A sketch of both variants follows below.)
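For concreteness, here is a minimal sketch of the two key setups, assuming a pool named tank (names and paths are illustrative, not from this thread):

```sh
# Passphrase-based: prompts interactively whenever the key is loaded
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secret

# Keyfile-based: a 32-byte raw key stored outside the dataset;
# this key file is exactly what would need secure off-machine backups
dd if=/dev/urandom of=/root/tank-secret.key bs=32 count=1
zfs create -o encryption=aes-256-gcm \
    -o keyformat=raw \
    -o keylocation=file:///root/tank-secret.key \
    tank/secret2

# After a reboot (or a reinstall), the key must be loaded before mounting
zfs load-key tank/secret      # asks for the passphrase
zfs load-key tank/secret2     # reads the keyfile
zfs mount -a
```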

Sure enough. If I decide not to migrate to ZFS, I might give mdadm another try. I am with you; I also assume it might have been the slow SATA ports. That is exactly the reason why I asked the same question about ZFS in my initial post here. I would like to throw out the hardware RAID controller and replace it with a 10Gb Ethernet card if my onboard SATA has enough throughput.
I can faintly remember that back in the day when I tested mdadm, there were people who described the same thing as you, that their mdadm setup was just fast as hell, and then there were a lot of other people who set up mdadm, benchmarked around 50 MB/s throughput, and were happy with it. That, however, was unacceptable to me. I gave up, since the people who reported acceptable speeds never mentioned how they achieved them.

Actually, I am not too bothered about it. My hardware RAID controller is from 2011 and I can get replacements for 30-50 EUR on eBay. Getting the firmware is also still possible, but maybe I should make a local copy in case the manufacturer stops hosting it.

But I will have to think very hard now about how I want to proceed, since none of the solutions is optimal for me.

Why not plug a similar HDD into one of the SATA ports and test the performance of the block device itself, without a file system or even partitioning, then re-plug it into the next port, to make sure all ports are performing as expected?
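Something like this gives raw sequential read numbers straight off the block device (/dev/sdX is a placeholder for the disk under test):

```sh
# Quick read timing of the raw device
sudo hdparm -t /dev/sdX

# Or a sequential read that bypasses the page cache
sudo dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct status=progress
```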
Also, if you try to set up either ZFS or mdadm, make sure that you either use the entire disk block device (like /dev/sdX) or, if you want to partition, make sure the partitioning does not break the HDD's internal sector alignment. I think some fdisk versions assume ancient 512-byte sectors and hence break alignment for modern HDDs that use 4K as their internal sector size, which would devastate performance. Use parted for partitioning (it has an option to check alignment, shown below) or don't partition at all.
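For example, to create and verify an aligned partition (partition number 1 assumed):

```sh
# Create a single partition on optimal (typically 1 MiB) boundaries
sudo parted -s -a optimal /dev/sdX mklabel gpt mkpart primary 0% 100%

# Verify the alignment of partition 1
sudo parted /dev/sdX align-check optimal 1
```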

What is and isn’t encrypted is in human readable format here: https://blog.heckel.io/2017/01/08/zfs-encryption-openzfs-zfs-on-linux/#What-s-encrypted

So file metadata isn't leaked, just some basic information regarding datasets. Don't name your dataset “Illegal hacked government Docs password=12345” and you're good.
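You can check this yourself: dataset names and properties stay readable even while the key is not loaded (pool name tank is just an example):

```sh
# Both work without the encryption key being loaded
zfs list -r tank
zfs get -r encryption,keystatus,keyformat tank
```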

And you are correct, OpenZFS 0.8.4 is what is currently out. The next major version will be 2.0, which is waiting primarily on BSD compatibility issues. If a 0.8.5 version comes out, it will basically contain bug and regression fixes only.

Yeah, the 0.8.3/0.8.4 mix-up is my bad; I'm on an older OS with 0.8.3.

I am not really worried about the performance of a single port. I am fairly certain that most onboard controllers will handle one or two devices without problems; after all, we live in the age of SSDs. I am worried that if I use all 6 available ports, have moderate IOPS, and read/write to 4 disks simultaneously, which can happen with ZFS or RAID, the controller will have a hard time and become a bottleneck.
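To probe exactly that scenario, I could run a parallel direct read across several disks at once and compare the total against a single-disk run (a sketch with fio; device names are placeholders):

```sh
# Read from four disks in parallel, bypassing the page cache.
# If the combined throughput is far below 4x a single disk,
# the controller or its uplink is the likely bottleneck.
sudo fio --name=global --rw=read --bs=1M --direct=1 \
    --runtime=30 --time_based --group_reporting \
    --name=d1 --filename=/dev/sdb \
    --name=d2 --filename=/dev/sdc \
    --name=d3 --filename=/dev/sdd \
    --name=d4 --filename=/dev/sde
```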

Thanks for the reminder! Correct sector alignment is one of those things that is already burned into my mind to take care of when handling anything more than a single disk.

Oh, OK, that is reassuring to hear.

I checked GitHub, and it does not seem like version 2.0 will fix the issues I criticized.

What I will do now: first, I will try out mdadm once more and see if I can get it to perform the way I intend. That way, at least I might get rid of the RAID controller and gain one more free PCIe slot while maintaining the same feature set.
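For reference, the basic shape of that mdadm setup would be something like this (6 disks assumed; device names illustrative):

```sh
# Create a 6-disk RAID6 array
sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

# Watch the initial resync, then inspect the array
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```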

I will play around with ZFS in a VM setting and see if I can find a satisfactory solution, but since I am looking more for hassle-free solutions, ZFS currently seems like just one more thing I need to keep an eye on.
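For those experiments, file-backed vdevs are enough to play with pool layouts without touching real disks (a sketch; pool name and file sizes are arbitrary):

```sh
# Four sparse 2 GiB files as stand-in disks
for i in 1 2 3 4; do truncate -s 2G /tmp/vdev$i; done

# A raidz2 test pool on top of them
sudo zpool create testpool raidz2 /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/vdev4
zpool status testpool

# Tear it down when done
sudo zpool destroy testpool
```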

Thank you all for your input!

I just wanted to give a short update, because something happened that was really unexpected:
I made a backup of the contents of my RAID6 and then removed the hardware RAID controller to create a RAID6 with mdadm. Then I connected all the drives to the onboard SATA controller. Before formatting the drives I wanted to check if they were all healthy, and my disk manager showed a RAID entry in the drives section. So I clicked on it and it asked me for my LUKS password. I entered it and I could not believe my eyes. Linux is fucking able to mount the RAID that I created on the hardware controller. That is some next-level magic!

P.S. I will recreate the RAID with mdadm nonetheless, since I don't know how well this would work, but that Linux is even able to mount the old RAID… just wow.
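In case anyone is curious what Linux actually detected there, this is how I would inspect it; my guess (not confirmed) is that the controller wrote a metadata format mdadm understands, such as DDF or IMSM:

```sh
# Inspect the RAID metadata on each member disk
sudo mdadm --examine /dev/sd[b-g]

# Assemble any arrays found from that metadata
sudo mdadm --assemble --scan
cat /proc/mdstat
```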
