Level1Linux - Fireside Chats

Let us work on accomplishing something with the Level1Linux community. What can we tackle?

Anything from career how-tos to writeups like the “Why ZFS metadata devices” etc?

TODO: video coming



Here’s one

Benchmarking drive speeds in a more intelligent way. Currently, people copy-paste commands they find on Google, completely ignorant of the caveats, gotchas, and workload considerations that you only come across after really burying yourself in the subject.

Basically, try to organize a set of consumer/prosumer-oriented latency and throughput tests that can help more realistically compare performance across ZFS, Btrfs, ext4, XFS, VM storage, etc., keeping in mind the pitfalls each might throw at you: the different caching layers used by Linux and KVM/QEMU, or ZFS’s ARC turning your read test into a test of RAM rather than of your drive if you aren’t careful. Or things like how sync writes behave in certain situations, which can greatly complicate whether your test is meaningful.
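To make that concrete, here is a sketch of what a more honest fio job file might look like; the path and sizes are placeholders, not recommendations. One caveat: direct=1 defeats the Linux page cache but has historically not bypassed ZFS’s ARC, so on ZFS the surer move is `zfs set primarycache=none` on a throwaway test dataset.

```ini
; hypothetical fio job: measure the drive, not RAM
[global]
ioengine=libaio
direct=1            ; bypass the Linux page cache (see the ZFS/ARC caveat above)
size=4g             ; larger than whatever cache you are trying to defeat
runtime=60
time_based
filename=/mnt/pool/fio-testfile   ; placeholder path

[randread-4k]
rw=randread
bs=4k
iodepth=32

[syncwrite-1m]
rw=write
bs=1m
iodepth=1
sync=1              ; O_SYNC writes; compare against sync=0 to see the difference
```

Running the random-read job before and after a cache flush (or with direct=0) is a quick way to see whether you were measuring the drive at all.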

What also goes along with this is a tutorial on how to figure out what kind of workload you are actually running.

A secondary thought is a tutorial on how to dig into potential system performance issues in general, for someone completely unfamiliar with such things: generating flame graphs and zooming in on source code to figure out where an issue is. A very inspiring example is the story where ZFS had no pools and wasn’t being used, yet was eating over 30% CPU: “ZFS Is Mysteriously Eating My CPU”.
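On the flame-graph point, the usual workflow is Brendan Gregg’s perf + FlameGraph pipeline. A hedged sketch, assuming the FlameGraph scripts are cloned into ~/FlameGraph and that perf is installed with the needed permissions (both are assumptions; the script degrades gracefully where perf is missing):

```shell
#!/bin/sh
# Sketch of the perf + FlameGraph workflow. Requires linux-tools (perf) and the
# FlameGraph scripts; usually needs root or a relaxed perf_event_paranoid.
if ! command -v perf >/dev/null 2>&1; then
  echo "perf not installed; install linux-tools/perf first"
  exit 0
fi
cd "$(mktemp -d)"
perf record -F 99 -a -g -- sleep 2   # sample all CPU stacks at 99 Hz (30-60 s is more typical)
perf script > out.perf               # dump the raw samples as text
~/FlameGraph/stackcollapse-perf.pl out.perf > out.folded   # fold stacks (assumed clone path)
~/FlameGraph/flamegraph.pl out.folded > flame.svg          # render the interactive SVG
echo "wrote flame.svg"
```

Open flame.svg in a browser; wide frames are where the CPU time goes, which is exactly how investigations like the linked ZFS story narrow in on a culprit.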



  1. I would very much appreciate a home-user-oriented ZFS NAS setup that can go to sleep (S3) and use WoL to reactivate on demand, without having to wait five minutes (dramatization) for a complete boot sequence.

  2. The current state of SMB Multichannel and Linux - yes, it’s probably still “experimental”, but come on, it’s been 10 years since its introduction with Windows 8/Server 2012, and it would be such an easy way to multiply network speeds without having to go 10 GbE and beyond. You can also use SMB MC with two 40 GbE connections, for example - fine af even without RDMA :stuck_out_tongue:

On the other hand, I have never been able to get SMB MC to work with anything other than Windows Server (it works between 2012, 2016, 2019, and 2022; no luck with 8, 8.1, or 10 until 21H2), even on the very same hardware with just the boot drive swapped.


How to untar a file without googling it would be nice

  1. Also, if anybody wants to donate two Gen4 SSDs (for a mirror) with > 4,000 MB/s write and power-loss protection for a home-user edge case - a high-performance SLOG/cache/L2ARC - I could shoot 4K B-roll of my open experimental X570 home server system, and Wendell could do the setup via KVM IPMI :wink:

It would be interesting to watch someone competent push AM4/X570 to its absolute limits with PCIe Gen4 NVMe switches etc. :kissing:


I wouldn’t be able to contribute anything directly at this point, but count me as one of the level0’s who would appreciate “career how-tos” or writeups for different projects that someone wanting to get into the Linux side of things can dig into and learn a lot from. Looking forward to seeing what this community does in the future!


What can be done with Linux is a very broad topic, and beginners can get stuck immediately on something as simple as which Linux distro they should go with.
That was a huge reason why I worked/work on the “A Linux Distro Guide” wiki on the forum. I wanted to have something people can use as a point of reference for the different distros they could look into.

To that end, I think it might be a good idea to go over some basic things you can do in Linux which are really cool and neat, like:

Copying a Mass Amount of Data to Another Hard Drive Using the Command Line

Using Linux to Troubleshoot your Windows Computer Issues


The basic idea is to give people stuff they can use to build a foundation at the start of their Linux journey.


Thank you! I will check it out


The next ‘itch’ in my head is to get XCP-ng to use vGPU with my NVIDIA cards. I want to split one GPU into multiple VMs: one VM with Docker containers with GPU acceleration (Linux host), another with Windows (gaming), and another Linux one (virtual workstation). My mind was blown after watching the interview with Liqid about their ThinkTank, but I do not have the resources to purchase a system from them. I was wondering if it would be possible with open source tools.

There are videos on YouTube about this for Proxmox ([Link](https://www.youtube.com/results?search_query=proxmox+vgpu+nvidia)), but the last time I had Proxmox as my GPU host I ran into weird issues with Windows not seeing my GPU (this was after the Code 43 check was removed from the driver). I installed XCP-ng and never had any issues. Perhaps this is down to the Xen hypervisor, or Proxmox 7 had just come out? Not sure, but I don’t really have the time right now to move all my VMs and try it out again.

Additionally, I would like more general or future-technology talks. I really enjoyed this one by ChrisTitusTech exploring blockchain domains (YouTube link). I have not been extremely up to date with the industry, but learning about new areas to explore (or if anyone wants to link me some resources) would be much appreciated.



I’d be interested in your thoughts related to keeping a system maintained over months and years:

  1. Keeping your install “clean” - I’ve definitely needed to clean up the crap that builds up over time: unused config files accumulate, in multiple places, from libraries and dependencies you don’t use anymore, etc. I’m familiar with Arch’s tips, thanks to their wiki. I’m curious how other distros and package managers handle maintenance, and whether you have any general tips for keeping your system in tip-top shape.

  2. Debugging errors and system instability - I’d appreciate your thoughts/methods for problem-solving any issues that pop up. I’m thinking of tips utilizing journalctl, /var/log, or other programs to diagnose problems. I generally find myself grepping the journal, checking for a log file, and then sleuthing the internet for similar issues, but many times I’m just out of luck. I know a lot of programs are snowflakes and have their own way of error handling/logging, so maybe it should be scoped to more system-level things - especially when they cause kernel panics, etc.

That said - I’m always happy to see more Linux-specific content!


Also healthy data hoarder habits

To add on: show us how to make the config backup you mentioned on the news. I used Clonezilla, but that takes a long time and copies the whole disk. Maybe make a template git repo and leave instructions for how to dump a config, then have a thread where the community can post their configs, then a follow-up reviewing anything that caught your eye.

Suggestions for Linux-specific content:

  1. A review of the skills and tools we need to contribute to Linux or FOSS projects. This can also include skills outside of programming.
    • Maybe soft skills such as people management or delegating tasks for FOSS projects
  2. A guide to your thought process while debugging and solving problems, elaborated with questions:
    • Would love to see in depth how you restored LTT’s data, or the debugging of a Looking Glass issue
    • How do you know which rabbit hole to dig into? (I frequently dig into the incorrect rabbit hole)
    • Which parts of error messages do you key in on, and which parts do you ignore?
  3. Bug-reporting etiquette: how to save developers time when reporting issues.
    • Alternatively, an interview with gregkh, gnif, or a project lead, getting the perspective of someone who has to review bugs and having them explain what they look for.
    • Alternative to the alternative: get an expert to give insights into setting up a bug-reporting system, with best practices they’ve picked up maintaining large projects
  4. Interviews with rising contributors about how they got their start contributing to FOSS projects
  5. Coverage of projects that need contributors
  6. Cool underrated tools people sleep on, like the PowerToys video. Alternatively, a list of the apps/config files you use
  7. A review of tools suggested in this thread

My suggestions for how-to guides:
A video similar to this, but for storage

Updating the home media server series. Are there better solutions? LTT did stuff with TrueNAS.

Or pointing to someone else’s series


is `tar xvpf $filename --xattrs-include='*.*' --numeric-owner` difficult for you? :troll:

I’ve just aliased it to “yeet”


I think there are sooo many possibilities here, just depending on how much people would like to contribute.

I would love to contribute in any way possible as a less tech-educated person who is self-teaching to the best of my abilities, short of college classes. I would be happy to review or attempt to follow guides, to see whether I can make whatever topic work, or where I needed more instruction or clarification.

I find myself bouncing from one thing to another: from networking, to Linux CLI usage for basic tasks (automation, email notifications, etc.), to more advanced “scripting”/programming. I am all over the place there, and here lol.

I am also retired at 40 and have all the time in the world to even just transcribe and do subtitles for your videos lol… Of course I would love to do something more technical… but maybe that’s the equivalent of needing to clean kennels when you volunteer at a shelter, before you get to play with the dogs… lol

I believe not too long ago Ryan did a video series on basic Linux CLI usage. I’ll have to check my bookmarks. If those are older, maybe it’s a good time for a refresh or revisit?

With this, if we do tackle the CLI, it would be nice to clarify the distro used, for distro-specific commands or differences.

These two interest me a lot.
Debugging would be great. I know Linux keeps logs, and there are ways to view them, but finding them and using them seems to be beyond me for the moment. As a Windows user I am very familiar with Event Viewer… there has to be something similar in Linux, either through the CLI or a GUI.

I know you must have some useful open-source tools in your toolbox that you use for testing and fixing issues.

Lastly, back to @wendell’s original question…
Are there open source projects we can get behind? Help them develop by using them daily, testing, and reporting errors to improve quality, safety, and maybe usefulness?

Till I found this site, “open source” was just a word; now that I know what it is, I really believe in it and would like to help make ease of use and availability much better for everyone.

I have some machines running all the time and would like to safely let them contribute compute power to open source projects, by compiling or however that works. It’s still a bit of a mystery to me.


I think this is a pretty good one, but I guess my interests are very diverse.

Maybe also polling /proc/device/whatever to find status like CPU temp, USB controller state, or whatever one can glean from the devices for troubleshooting?
But perhaps that, again, is not worth going over, or is too large a topic.
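That kind of polling is mostly just reading files; a small sketch (thermal zone paths vary by hardware and may be absent in VMs, so that part is best-effort):

```shell
#!/bin/sh
# Poll a few status files from /proc and /sys for troubleshooting.
awk '{print "load averages:", $1, $2, $3}' /proc/loadavg
grep MemAvailable /proc/meminfo                  # available memory, in kB
# CPU temperature in millidegrees C, if the kernel exposes a thermal zone
for z in /sys/class/thermal/thermal_zone*/temp; do
  [ -r "$z" ] && echo "$z: $(cat "$z")"
done
exit 0
```

Tools like lm-sensors wrap the same sysfs files with nicer labels, but raw reads are handy when scripting alerts.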


I’m going to be famous! Besides my Easy-to-follow not-so-easy-to-follow guides on the forum, I’m currently tackling reproducible self-hosting home infrastructure for the layman. Well, it has been a WIP for a few months now, because I don’t have the time to finish it up, but I’m getting there. Once my journey into SBCs (Biky in ARM-land) ends, I’ll start with the guides on self-hosting stuff.

In an environment of techtubers going balls-to-the-wall with server hardware, it paints a bad image of self-hosting hardware in most people’s eyes. Everyone still believes the old, debunked idea that ZFS needs 1 GB of RAM per TB of storage, and everyone suggests L2ARC, ZIL, and special metadata devices, when a simple SBC with 4 GB of RAM and a mirror should be more than plenty for any household of fewer than 5 people.

Besides, I see no guides online on how to do backups properly. Nobody is taking care of backups. There are thousands of guides on how to set up different services, but nobody shows how to maintain a backup of them. I want to change that myself, and will do so when I start laying out the steps in the self-hosting guides on ARM SBCs.

Literally `man tar`. I can't believe I'm in a situation where I have to unironically tell someone to RTFM. Expand to read further; I'm not saying it in a mean way.


  • -j = bzip2 (.tar.bz2)
  • -J = xz (.tar.xz)
  • -z = gz (.tar.gz)
  • -x = extract
  • -c = create
  • -v = show output / verbose mode
  • -f = file
  • -C = extract to directory

Except for -C, everything goes right after tar. The file name must come right after the -f option, so you cannot do -xzfv; you have to do -xvzf or any other combination ending with f, followed by the file name. Then -C goes at the end, after the file, to point to a directory. Note that the directory needs to exist first. If you don’t use -C, the implicit target is ., the directory you are currently located in.


# extract gzip file to a folder
mkdir megusta
tar -xzvf archive.tar.gz -C megusta/

# extract non-compressed tars to the folder you are in
tar -xvf archive.tar

# same, but to a directory
mkdir -p /path/to/newfolder
tar -xvf archive.tar -C /path/to/newfolder/

# create tar archive and compress it with xz
tar -cJvf newarchive.tar.xz file1 file2 /path/to/file3 fileN

yes, thanks! also

not just man tar

info tar

often I find that when I can’t quittteeee remember syntax, info is muuuuch more helpful than the man page.

Kinda agree with this; I’m having a bit of mixed luck with a ZFS setup that sleeps and has power management.

I am having better-ish luck on the Alder Lake platform, using a pair of 1 TB NVMe drives with a pair of 20 TB spinning rust, metadata on the NVMe, for maximal sleepy-time low power management.

Docker for the infrastructure, ZFS for the datasets; the TrueNAS GUI is a bit in the way, except that it’s making me lean toward a separate VM for Portainer.


man needs to be a T-shirt, a default wallpaper, a global ad campaign.
The one saviour command.


When to stop using more or less unmaintained software and move on? :wink:

You can also install libarchive and get bsdtar (which is also the default in macOS) =)


Was thinking about this thread earlier and I had a semi-intelligent thought pop up.

There’s a lot of push for “check out the Level1 forums for tutorials/etc.”, but the community here can tend to be a disorganized mess sometimes. I don’t mean that in a critical way; it’s more that Discourse just doesn’t have the framework to be a knowledge repository in the way I think @wendell wants it to be. Just my perception; maybe I’m wrong here.

At the same time, I’ve been working on my MediaWiki deployment and had an idea. I’m not saying a wiki would be the best format, but what if L1T were to host some sort of official repository of tutorials/guides/etc.? I’ll keep using a wiki as an example here: there would have to be an approval/moderation process for what gets put into the official L1 repository, but a contributor could have a page dedicated to each project/guide, with a standard header (Topic, Subtopic, Created, Last Updated, Newest Verified Versions, etc.) and edit permission granted only to themselves, admins, and anyone they invite to collaborate.

It would be an awesome resource, but I also acknowledge it would carry overhead in approvals and moderation (you’d need some sort of “Report this page” function for things abandoned or gone bad).

Edit for other thoughts as they roll in -

  • Maybe you could even outsource part of the approval to the community here; I don’t know if there would be a way to integrate with Discourse, but imagine a queue of ‘Yea/Nay’ polls open to the forums that close at some arbitrary number of votes and decide whether something is worthy of the repo or not.

  • Expanding on the above, I actually really like this idea… It would take care of moderation too: after a certain age since the last update, or a number of community flags, you could put a page back into the moderation/approval poll queue. Done. Although you’d still need some moderation for things that are actively malicious.


There isn’t much to document regarding aarch64 and/or backups; it’s pretty much transparent, like any other platform, as expected. As for the backup part: use rsync or zfs send/recv over a tunnel or VPN to another box.

If anything, it makes more sense to educate on why some hardware is less ideal and why upstream support is important.
