Allan Jude Interview with Wendell - ZFS Talk & More | Level One Techs

FreeBSD Mastery Advanced ZFS:
Allan Jude's Podcast:

This is a companion discussion topic for the original entry at

That moment when Google Chrome uses more RAM than ZFS


Chrome eats RAM for breakfast. I have a whopping 6 tabs open and Chrome is using like a gig and a half by itself.


Whoop whoop!

Allan is my man!

Except I don't really use BSD, but don't tell him that.

Content like this is what makes L1T great. Long, informative videos are a rarity these days, and ZFS is a very interesting subject. For sure I'm going to watch the whole thing and it's going to be great :+1:


Thank you so much for that content! So glad it is there.

An update on next week's guest appearance of yours would be great!


Use Vivaldi

Tried it before, didn't particularly care for it.

Really enjoyed this video. Will probably read the book when I finally reach the point of setting up ZFS on my network. I suspect I could have listened to a week-long seminar on ZFS without much overlap.


5:39 "... and my database needs this much space. And if you were ever wrong about that, you had to like move files to a different partition..."
My personal biggest beef with ZFS is the above: they made adding storage easy, except you cannot shrink your pool. (Last time I checked.)
So you still get to move all your files to a different partition if you want to shrink, which happens quite a few times in desktop moves.
I really like LVM here: `pvcreate /dev/newdisk; vgextend vg /dev/newdisk; pvmove /dev/olddisk /dev/newdisk; vgreduce vg /dev/olddisk`, potentially adding a filesystem resize in there. But other than that, the system just keeps running without any downtime.
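For anyone wanting to replicate it, the LVM one-liner above expands to something like this (a sketch; `/dev/sdb`, `/dev/sdc`, the volume group `vg`, and the logical volume `vg/data` are placeholder names, and the filesystem is assumed to be ext4):

```shell
# Bring the new disk into the volume group
pvcreate /dev/sdc            # label the new disk as an LVM physical volume
vgextend vg /dev/sdc         # add it to the existing volume group "vg"

# Migrate all extents off the old disk -- the LV stays mounted the whole time
pvmove /dev/sdb /dev/sdc     # online move; can be interrupted and resumed

# Retire the old disk
vgreduce vg /dev/sdb         # drop the now-empty disk from the volume group
pvremove /dev/sdb            # wipe its LVM label

# Optionally grow the LV and filesystem into any extra free space
lvextend -l +100%FREE /dev/vg/data
resize2fs /dev/vg/data       # ext4 grows online; shrinking needs an unmount
```

Shrinking goes the other way: `resize2fs` to a smaller size (offline) first, then `lvreduce`, and only then pull the disk.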


Oh how lovely. You know, I was going to comment "my two most favorites have come together to solve the world's problems and have a laugh." I do warn you, however, that the Stephen Fry & Hugh Laurie reunion popped up on my feed just then, SO ... I'm afraid you've both got your work cut out for you.

Absolutely fantastic addition to Level1. Let us plan for many more. Thank you @wendell and indeed a gratuitous thank you to Mr Allan Jude.

Good talk with someone very knowledgeable on the subject. I learnt quite a few things.
This would be a good addition to the podcast library!

Though I use it daily and love it, in terms of RAM it doesn't make a difference since it's still Chromium underneath...

The nice thing though (not sure if Chrome has that now too), is that you can hibernate background tabs. Just wish there was an option to automatically hibernate background tabs after X minutes :confused:

Absolutely my two most favorite nerds!


Yes. Learned some stuff :slight_smile: Excellent vid


Been waiting for this interview! Awesome job, guys. Hope there's more to come, and I can't wait to watch the BSD Now episode. Also @wendell, when the video was edited it sometimes looked like you cut out Allan Jude digging deeper into the question at hand. If that was the case, and not Skype issues or something, would you mind uploading the entire interview for download?

Watching it now and getting confused somewhere around 8:10. I can see why disks in adjacent slots may fail consecutively within a short time period (or even fail together): they're subjected to more or less the same environment (airflow, temperature, vibration, etc.). But saying that the FS is able to compensate for that is either total bullshit or black magic. Even if you have these disks in a shelf, even if the disks are stacked in a single row and not something like 3x4, even if you have their bus IDs, it does not necessarily mean you know which disk is where. What kind of sorcery is this?
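For what it's worth, ZFS doesn't divine the physical layout on its own; the usual non-magic way to get this protection is for the admin to encode the topology when building the pool, so that no mirror (or RAID-Z group) is made entirely of adjacent drives. A sketch, assuming two HBAs and persistent by-path device names (the PCI paths below are made-up examples):

```shell
# Each mirror pairs one disk on HBA 1 with one on HBA 2, so correlated
# failures within a single enclosure/row never hit both sides of a mirror.
zpool create tank \
  mirror /dev/disk/by-path/pci-0000:01:00.0-sas-phy0-lun-0 \
         /dev/disk/by-path/pci-0000:02:00.0-sas-phy0-lun-0 \
  mirror /dev/disk/by-path/pci-0000:01:00.0-sas-phy1-lun-0 \
         /dev/disk/by-path/pci-0000:02:00.0-sas-phy1-lun-0
```

Whether that's exactly what's meant at 8:10 I can't say, but it's the only version of the claim I know of that doesn't require sorcery.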

10:18 OMG THAAAANK YOU! I've been telling people for fsking years not to trust their battery backed RAID controller. There are tons and tons of stats out there talking about how many businesses (small, mid-market, and enterprise) actually test their UPS's. Y'know, the things that are big bricks and constantly in the way, all the way up to the things that look like fscking refrigerators and are taking up significant floor space in your server room or datacenter. People STILL forget to check the batteries on these things that are actively in their way, and are easy to test.

What do you think the stats look like for RAID controller batteries? How often do you think they, the tiny, out-of-the-way battery that no one thinks about, get tested? How often do you think a failure of the RAID controller battery is completely written off as part of some larger failure? Given how often people don't actually do root cause analysis (I have literally been laughed at, finger pointed at me, stopping just shy of "neener neener neener," for doing root cause analysis, because I thought it was not only necessary but part of the job of being a sysadmin), I'm going to guess the numbers are higher than people might think.


This was interesting.

Since books were brought up, @wendell, got any absolute must-have useful books?
Besides the obvious one in this video.

I've got an SSD coming, set aside for Linux. This is food for thought. If it's an option from the installer, I'll probably use ZFS. If there are PPAs involved or other hoops, probably not. Thanks for a thorough and interesting look at ZFS! :slight_smile:

Also a question:
can I still use grsync for backup? (I know there are better solutions, but I live with this comfortably.)
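grsync is just a GUI front end for rsync, which works fine on ZFS. If you want the backup to be consistent, you can point it at a snapshot instead of the live filesystem. A sketch (the dataset `tank/home` and target `/mnt/backup/home` are placeholder names):

```shell
# Snapshot first so rsync reads a frozen, consistent view of the data
SNAP="backup-$(date +%F)"
zfs snapshot "tank/home@$SNAP"

# Every snapshot is browsable under the hidden .zfs directory
rsync -a "/tank/home/.zfs/snapshot/$SNAP/" /mnt/backup/home/

# Drop the snapshot once the copy is done
zfs destroy "tank/home@$SNAP"
```

In grsync you'd just enter the `.zfs/snapshot/...` path as the source.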