Let's fix Arch!

Then I’m screwed, because I’ve been using this setup for 6 months. Not to mention this German guy :joy:

I hope people give him some views and likes, because my German is a bit rusty and it would motivate him to make more English videos.

1 Like

Maybe not for workstations, but being able to make a zfs clone read/write cheaply after a botched upgrade has saved my bacon more than a half-dozen times. (Then cleanup after hours for various reasons: https://jrs-s.net/2017/03/15/zfs-clones-probably-not-what-you-really-want/ )
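For context, the recovery trick looks roughly like this (dataset and snapshot names are made up; this is a sketch, not a recipe):

```shell
# Clone the last-known-good snapshot as a cheap writable dataset:
zfs clone rpool/ROOT/default@pre-upgrade rpool/ROOT/recovery

# Boot into the clone; if you decide to keep it, promote it so it no
# longer depends on the origin snapshot (this is the cleanup situation
# the linked article covers):
zfs promote rpool/ROOT/recovery

# Then the botched root can go once nothing references it:
zfs destroy -r rpool/ROOT/default
```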

I don’t know that btrfs is the best choice for my level of paranoia and number of disks, though. I thought it was pretty lol that every RAID-in-a-box NAS I’ve used puts btrfs on top of md arrays.

I hate how btrfs slows down significantly once you hit about 200 subvolumes (and that’s trivial to hit if you’re using Docker at all).

I don’t have the cash for a separate workstation and dev machine, so… yolo

That’s why ZFS snapshotting would be nice for me. Not to mention, I run ZFS on my NAS, it’d be nice to be able to zfs send to back up all my machines.
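That replication flow is roughly this (pool, snapshot, and host names below are all hypothetical):

```shell
# Take a recursive snapshot of the workstation's datasets...
zfs snapshot -r rpool/sys@backup-2020-01-01

# ...and send it incrementally (against last month's snapshot) to the NAS,
# receiving unmounted (-u) under the backup hierarchy:
zfs send -R -i rpool/sys@backup-2019-12-01 rpool/sys@backup-2020-01-01 | \
    ssh nas zfs recv -du tank/backups/workstation
```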

3 Likes

I was planning on building and trying to break a btrfs RAID this spring, but my plans are delayed because of an unsold house.

As a private consumer, it would be quite beneficial for me to be able to grow a RAID over the years, as filling up an 11-drive case with 8 TB+ drives is not for the faint of heart.
The space usage would be limited by the 3 largest drives in a RAID 6, so it would allow me to buy drives in increments of 3.

And btrfs has everything I want from ZFS except for the ability to set different properties on each dataset.
I got the idea when, for a short moment, raid5/6 was not marked unstable on btrfs. Last I checked, it was marked unstable again.
And from looking at the kernel, openSUSE is still pushing btrfs forward.
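(The per-dataset properties bit, for anyone unfamiliar, is ZFS letting you tune each dataset independently; pool and dataset names here are made up:)

```shell
zfs set compression=lz4 tank/media     # cheap compression for big files
zfs set recordsize=16K  tank/postgres  # smaller records for database workloads
zfs set atime=off       tank/home      # skip access-time updates
```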

The only downside is that I would need to install btrfs on my backup server as well if I want to send datasets.

2 Likes

I think I adopted btrfs too early, when I made a Rockstor NAS and lost everything. I like the idea of extensibility vs. how I’m building my FreeNAS now (lots of drives, trying to foresee max growth so I don’t have to mess around with rebuilds). Here’s to hoping btrfs matures.

1 Like

17 posts were split to a new topic: ZFS vs EXT

Throwing my hat in the ring to do code review if you guys ever get anywhere

1 Like

It is a thing; someone just needs to port libbe to Linux :smiley:

1 Like

libbe:


bectl:

1 Like

hmmmm this might be the right solution.

1 Like

Good (late) news folks.

image

:confetti_ball: :partying_face:

4 Likes

I really didn’t want a contest over whether this was worthwhile or not; I just wanted to mention the possible issues.

Okay, so I’ve been doing some reading and it appears most of the heavy lifting for ZFS for Arch has been handled and it’s well documented here:

From what I gather here, it should be pretty simple to get ZFS partitions to work in Arch.

I’m not so sure about getting a root partition running it. Have you been able to get this to boot successfully? Would it be better to have the ZFS partitions be separate and have the root/boot be standard?

Don’t have a snapshot tool on the dev laptop yet, so I had to SSH in:

So, yes. 100% working ZFS root on Arch.

4 Likes

Update:

image

I’ve successfully switched datasets via grub.

this is happening

6 Likes

This is quite impressive. How fast is it to switch? Have you seen any I/O errors?

1 Like

Manual at the moment. Just getting the required actions down.

Zero. Running on a 970 Pro. When I had to send | recv a dataset for testing, I was getting 1GB/s. :smiley:

2 Likes

wowzers, that’s fast.

I’d be interested in some hdparm numbers.

Are you going to build this into a separate branch?

1 Like

This will be a separate project entirely.

Standby for numbers. EDIT: it’s not happy with my NVMe disk. I’ll play with it, but it might take some time.

2 Likes

Alright, so the way I see this, we’re going to need a few things to automate it:

  • ALPM hooks (for monitoring pacman)
  • script to handle the dataset creation (I propose the name znapper for the time being; please suggest something better)
  • some way to notify GRUB of the additional datasets.
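For the ALPM part, a hook along these lines should fire before every transaction (the filename, `Exec` path, and `Depends` package are all placeholders, not a working config):

```ini
# /etc/pacman.d/hooks/00-znapper-pre.hook (hypothetical)
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Snapshotting root dataset before pacman transaction...
When = PreTransaction
Exec = /usr/local/bin/znapper pre
Depends = zfs-utils
```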

The best way I see for notifying GRUB would be a hook. How we determine this, though, will be interesting. I’m thinking we need a standard dataset configuration.

I propose the following:

We name the boot pool something reasonable and have that in an env somewhere. (People can have multiple pools, after all.)

We make a dataset off the pool called sys. In there go your home and root dataset control groups.

We can also create a dataset off the pool called data for user data that we don’t want to get versioned in relation to package updates.

IE:

${POOL}/sys/${HOSTNAME}/ROOT/default  # Root datasets
${POOL}/sys/${HOSTNAME}/home  # Home datasets.  Typically dotfiles don't like being messed with too much.  We'll snapshot this with the root partition, but we might want different ZFS options on /home.
${POOL}/sys/${HOSTNAME}/var  # Not sure about separate usr, var, opt, etc... datasets for now.  (I don't have them at the moment)
${POOL}/data/* # User Data datasets.  These don't get messed with
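Creating that layout would look something like this (pool name, hostname, and the options are just placeholders):

```shell
POOL=tank
HOST=$(hostname)

zfs create -o mountpoint=none  "${POOL}/sys"
zfs create -o mountpoint=none  "${POOL}/sys/${HOST}"
zfs create -o mountpoint=none  "${POOL}/sys/${HOST}/ROOT"
zfs create -o mountpoint=/     "${POOL}/sys/${HOST}/ROOT/default"
zfs create -o mountpoint=/home "${POOL}/sys/${HOST}/home"
zfs create -o mountpoint=/var  "${POOL}/sys/${HOST}/var"
zfs create -o mountpoint=none  "${POOL}/data"
```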

From there, we can write a wrapper script for GRUB that fixes this.
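A rough idea of what that wrapper could look like as an /etc/grub.d/ snippet (completely untested sketch; the pool name, kernel paths, and the `root=ZFS=` convention understood by the initramfs zfs hook are assumptions):

```shell
#!/bin/sh
# Hypothetical /etc/grub.d/45_znapper: emit one menu entry per root dataset.
POOL=tank
zfs list -H -o name -r "${POOL}/sys" | grep '/ROOT/' | while read -r ds; do
  cat <<EOF
menuentry "Arch Linux (${ds})" {
    linux /vmlinuz-linux root=ZFS=${ds} rw
    initrd /initramfs-linux.img
}
EOF
done
```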

We will also need a dataset that holds control data for the management script, since it’s really not a good idea to keep multiple versions of the data. For that, I propose creating a ${POOL}/znapper dataset. I think we’ll just use sqlite for now, unless someone has a better option.
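As a starting point, the control data could be as simple as one table; this is a hypothetical schema, shown via the sqlite3 CLI for illustration:

```shell
# Hypothetical znapper control table.
db=/tmp/znapper-demo.db
rm -f "$db"
sqlite3 "$db" 'CREATE TABLE snapshots (
    id       INTEGER PRIMARY KEY,
    dataset  TEXT NOT NULL,
    snapname TEXT NOT NULL,
    reason   TEXT,
    created  DEFAULT CURRENT_TIMESTAMP
);'
# Record a snapshot taken before a pacman transaction:
sqlite3 "$db" "INSERT INTO snapshots (dataset, snapname, reason)
               VALUES ('tank/sys/arch/ROOT/default', 'pre-pacman-1', 'pacman -Syu');"
# Reconstruct the full snapshot name for a later rollback/destroy:
sqlite3 "$db" "SELECT dataset || '@' || snapname FROM snapshots;"
# -> tank/sys/arch/ROOT/default@pre-pacman-1
```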

I’m going to write the initial script in Python, since there are ZFS bindings for it, so we don’t have to do a bunch of string parsing.

1 Like

For the GRUB bits, did you look at the previous posts here from other distros? Maybe there’s something reusable there. This is aaawweesooomeeeeeeee :smiley:

3 Likes