Level1Linux - Fireside Chats

I wouldn’t be able to contribute anything directly at this point, but count me as one of the level0s who would appreciate “career how-tos” or write-ups for different projects that someone wanting to get into the Linux side of things can dig into and learn a lot from. Looking forward to seeing what this community does in the future!

5 Likes

What can be done with Linux is a very broad topic, and beginners can be overwhelmed right from the start by something as simple as which Linux distro they should go with.
That was a huge reason why I worked/work on the A Linux Distro Guide Wiki on the forum. I wanted to have something people can use as a point of reference for the different distros they could look into.

To that end, I think it might be a good idea to go over some basic things you can do in Linux which are really cool and neat, like:

Copying a Mass Amount of Data to Another Hard Drive Using the Command Line (a quick sketch follows this list)

Using Linux to Troubleshoot your Windows Computer Issues

etc.
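
For the mass-copy one, something like rsync is probably what I’d reach for; a rough sketch with placeholder paths:

# copy everything from one drive to another, preserving permissions,
# ownership, and timestamps; re-running it later only copies what changed
rsync -a --progress /mnt/old-drive/ /mnt/new-drive/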

The basic idea is to give people stuff they can use to build a foundation at the start of their Linux journey.

4 Likes

Thank you! I will check it out

2 Likes

The next ‘itch’ in my head is to get XCP-NG to use vGPU with my NVIDIA cards. I want to split one GPU across multiple VMs: one VM with Docker containers with GPU acceleration (Linux host), another with Windows (gaming), and another Linux one (virtual workstation). My mind was blown after watching the interview with Liqid about their ThinkTank, but I do not have the resources to purchase a system from them. I was wondering if it would be possible with open-source tools.

There are videos on YouTube about this for Proxmox ([Link](https://www.youtube.com/results?search_query=proxmox+vgpu+nvidia)), but the last time I had Proxmox as my GPU host I ran into weird issues with Windows not seeing my GPU (this was after they removed the Code 43 from their driver). I installed XCP-NG and never had any issues. Perhaps this is down to the Xen hypervisor, or maybe Proxmox 7 had just come out? Not sure, but I don’t really have the time right now to move all my VMs and try it out again.

Additionally, I would like more general or future-technology talks. I really enjoyed this one by ChrisTitusTech exploring blockchain domains (YouTube Link). I have not been extremely up to date with the industry, but learning about new areas to explore (or if anyone wants to link me some resources) would be much appreciated.

Thanks!

3 Likes

I’d be interested in your thoughts related to keeping a system maintained over months and years:

  1. Keeping your install “clean” - I’ve definitely needed to clean up the crap that builds up over time: unused config files pile up in multiple places, along with leftovers from libraries and dependencies you don’t use anymore, etc. I’m familiar with Arch’s tips, thanks to their wiki. I’m curious how other distros and package managers handle maintenance and whether you have any general tips for keeping your system in tip-top shape.

  2. Debugging errors and system instability - I’d appreciate your thoughts / methods for problem-solving any issues that pop up. I’m thinking tips for using journalctl, /var/log, or other programs to diagnose problems (a few journalctl starting points are sketched right after this list). I generally find myself grepping the journal, checking for a log file, and then sleuthing the internet for similar issues, but many times I’m just out of luck. I know a lot of programs are snowflakes and have their own ways of error handling / logging, so maybe it should be scoped to more system-level things - especially when they cause kernel panics, etc.
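
For point 2, a few journalctl starting points I tend to use (the unit name is just an example):

# only errors and worse from the current boot
journalctl -b -p err

# follow one unit's log live (nginx.service is just an example)
journalctl -u nginx.service -f

# list previous boots, then read the one that crashed
journalctl --list-boots
journalctl -b -1 -p warning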

That said - I’m always happy to see more Linux-specific content!

3 Likes

+1
Also healthy data hoarder habits

+1
To add on, show us how to make the config backup you mentioned on the news. I used Clonezilla, but that takes a long time and copies the whole disk. Maybe make a template git repo and leave instructions for how to dump a config, then have a thread where the community can post their configs, then a follow-up reviewing anything that caught your eye.
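
Something like this is roughly what I imagine the template repo boiling down to (paths and file names are just examples):

# one repo holding copies of the configs you care about
mkdir -p ~/config-backup && cd ~/config-backup
git init

# copy in whatever you want tracked (examples only)
cp /etc/fstab etc-fstab
cp ~/.bashrc dot-bashrc

git add -A
git commit -m "initial config dump"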

Suggestions for Linux-specific content:

  1. A review of the skills and tools we need to contribute to Linux or FOSS projects. This can also include skills outside of programming.
    • Maybe soft skills such as people management or delegating tasks for FOSS projects
  2. A guide to your thought process while debugging and solving problems, elaborated with questions:
    • Would love to see in depth how you restored LTT’s data or debugged a Looking Glass issue
    • How do you know which rabbit hole to dig into? (I frequently dig into the wrong rabbit hole)
    • Which parts of error messages do you key in on, and which parts do you ignore?
  3. Bug-reporting etiquette: how to save developers time when reporting issues.
    • Alternatively, an interview with gregkh, gnif, or a project lead, getting the perspective of someone who has to review bugs and can explain what they look for.
    • Alternative to the alternative: get an expert to give insights into setting up a bug-reporting system, with best practices they’ve picked up maintaining large projects
  4. Interviews with rising contributors on how they got their start contributing to FOSS projects
  5. Coverage of projects that need contributors
  6. Cool underrated tools people sleep on, like the PowerToys video. Alternatively, a list of the apps/config files you use
  7. A review of tools suggested in this thread

My suggestions for how-to guides:
A video similar to this, but for storage

Updating the home media server series. Are there better solutions? LTT did stuff with TrueNAS.

Or pointing to someone else’s series

3 Likes

Is `tar xvpf $filename --xattrs-include='*.*' --numeric-owner` difficult for you? :troll:

I’ve just aliased it to “yeet”

5 Likes

I think there are sooo many possibilities here, just depending on how much people would like to contribute.

I would love to contribute in any way possible, as a less tech-educated person who is self-teaching to the best of my abilities short of college classes. I would be happy to review or attempt to follow guides to see if I can make whatever topic work, or to point out where I needed more instruction or clarification.

I find myself bouncing from one thing to another, from networking to Linux CLI usage (automation of tasks, email notifications, etc.) for basic tasks, to more advanced “scripting”/programming. I am all over the place there, and here lol.

I am also retired at 40 and I have all the time in the world to even just transcribe and do subtitles for your videos lol… Of course I would love to do something more technical… but maybe that’s the equivalent of needing to clean kennels when you volunteer at a shelter before you get to play with the dogs… lol

I believe not too long ago Ryan did a video series on basic Linux CLI usage. I have to check my bookmarks. If these are older, maybe it’s a good time for a refresh or revisit?

With this, if we do tackle the CLI, it would be nice to clarify which distro is used for distro-specific commands or differences.

These two interest me a lot.
Debugging would be great. I know Linux keeps logs and there are ways to view them, but finding them and using them seems to be beyond me for the moment. As a Windows user I am very familiar with Event Viewer… there has to be something similar with Linux, either through the CLI or a GUI.

I know you must have some useful open-source tools in your toolbox that you use for testing and fixing issues.

Lastly, back to @wendell’s original question…
Are there open source projects we can get behind? Help them develop by using them daily, testing, and reporting errors to improve quality, safety, and maybe usefulness?

Till I found this site, “opensource” was just a word; now that I know what it is, I really believe in it and would like to help make it much easier to use and more available to everyone.

I have some machines running all the time and would like to safely allow them to contribute compute power to help open-source projects, by compiling or however that works. It’s still a bit of a mystery to me.

3 Likes

I think this is a pretty good one, but I guess my interests are very diverse.

Maybe also polling /proc/device/whatever to find status like CPU temp, USB controller state, or whatever one can glean from the devices for troubleshooting?
But perhaps that again is not worth going over / too large a topic.
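
For what it’s worth, on most systems those readings seem to live under /sys rather than /proc; a minimal sketch (exact paths vary by hardware):

# thermal zones exposed by the kernel; temp values are in millidegrees C
cat /sys/class/thermal/thermal_zone*/type
cat /sys/class/thermal/thermal_zone*/temp

# or, if lm-sensors is installed
sensors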

6 Likes

I’m going to be famous! Besides my Easy-to-follow not-so-easy-to-follow guides on the forum, I’m currently tackling reproducible self-hosting home infrastructure for the layman. Well, it has been a WIP for a few months now, because I don’t have the time to finish it up, but I’m getting there. Once my journey into SBCs (Biky in ARM-land) ends, I’ll start with the guides on self-hosting stuff.

In an environment of techtubers going balls-to-the-wall with server hardware, it paints a bad image of self-hosting hardware in most people’s eyes. Everyone still believes the old, debunked idea that ZFS needs 1GB of RAM per TB of storage, and everyone is suggesting L2ARC, ZIL, and special metadata devices, when a simple SBC with 4GB of RAM and a mirror should be more than plenty for any household of fewer than 5 people.
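
For reference, the kind of simple mirror I’m talking about is basically a one-liner (pool, dataset, and device names below are placeholders):

# two-disk mirror pool named "tank" (device names are examples)
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# a dataset for shared files, with lightweight compression on
zfs create -o compression=lz4 tank/share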

Besides, I see no guides online on how to do backups properly. Nobody is taking care of backups. There are thousands of guides on how to set up different services, but nobody shows how to maintain a backup of them. I want to change that myself and will do so when I start laying out the steps in the self-hosting guides on ARM SBCs.

Literally `man tar`. I can't believe I'm in a situation where I have to unironically tell someone to RTFM. Expand to read further; I'm not saying it in a mean way.

tar:

  • -j = bzip2 (.tar.bz2)
  • -J = xz (.tar.xz)
  • -z = gz (.tar.gz)
  • -x = extract
  • -c = create
  • -v = show output / verbose mode
  • -f = file
  • -C = extract to directory

Except for -C, everything goes right after tar. The file name must come right after the -f option, so you cannot do -xzfv; you have to do -xvzf or any other combination where f comes last and is followed by the file name. Then -C goes at the end, after the file, to point to a directory. Note that the directory needs to exist first. If you don’t use -C, the implicit default is ., i.e. the directory you are currently in.

Examples:

# extract gzip file to a folder
mkdir megusta
tar -xzvf archive.tar.gz -C megusta/

# extract non-compressed tars to the folder you are in
tar -xvf archive.tar

# same, but to a directory
mkdir -p /path/to/newfolder
tar -xvf archive.tar -C /path/to/newfolder/

# create tar archive and compress it with xz
tar -cJvf newarchive.tar.xz file1 file2 /path/to/file3 fileN
6 Likes

yes, thanks! also

not just man tar

info tar

often I find when I can’t quittteeee remember syntax, info is muuuuuch more helpful than the man page.

Kinda agree with this, and I’m having a bit of mixed luck with a ZFS setup that sleeps and has power management.

I am having better-ish luck on the Alder Lake platform, using a pair of 1TB NVMe drives with a pair of 20TB spinning rust, metadata on the NVMe, for maximal sleepy-time low power management.

Docker for the infrastructure, ZFS for the datasets; the TrueNAS GUI is a bit in the way, except that it’s making me lean toward a separate VM for Portainer.
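
Roughly what that layout looks like as a plain zpool create, in case anyone wants to replicate it outside the TrueNAS GUI (pool and device names are placeholders):

# mirrored 20TB spinners for data, mirrored NVMe as the special vdev for metadata
zpool create tank \
  mirror /dev/disk/by-id/ata-20TB-A /dev/disk/by-id/ata-20TB-B \
  special mirror /dev/disk/by-id/nvme-1TB-A /dev/disk/by-id/nvme-1TB-B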

7 Likes

`man` needs to be a T-shirt, a default wallpaper, a global ad campaign.
The one saviour command.

5 Likes

When to stop using more or less unmaintained software and move on? :wink:

You can also install libarchive and get bsdtar (which is also the default in macOS) =)

2 Likes

Was thinking about this thread earlier and I had a semi-intelligent thought pop up.

There’s a lot of push for “Check out the Level1 Forums for tutorials/etc,” but the community here can tend to be a disorganized mess sometimes. And I don’t mean that in a critical way, but more along the lines that Discourse just doesn’t have the framework to be a knowledge repository in the way I think @wendell wants it to be. Just my perception; maybe I’m wrong here.

At the same time, I’ve been here working on my MediaWiki deployment and had an idea. Not saying a wiki would be the best format, but what if L1T were to host some sort of official repository of tutorials/guides/etc? I’ll keep using a wiki as an example here - there would have to be an approval/moderation process for what gets put into the Official L1Repository, but a contributor could have a page dedicated to each project/guide with a standard Topic, Subtopic, Created, Last Updated, Newest Verified Versions, etc. header, and then edit permission granted only to themselves, admins, and anyone they invite to collaborate.

It would be an awesome resource but I also acknowledge it would have overhead with approvals and moderation (you’d need some sort of “Report this page” function for things abandoned/gone bad).


Edit for other thoughts as they roll in -

  • Maybe you could even outsource part of the approval to the community here; I don’t know if there would be a way to integrate with Discourse, but you could have a queue of ‘Yea/Nay’ polls open to the forums that close at some arbitrary number of votes and decide whether something is worthy of the repo or not.

  • Expanding on the above, I actually really like this idea… It would take care of moderation too: after a certain age since the last update, or after a number of community flags, you could put a page back into the moderation/approval poll queue. Done. Although you’d still need some moderation for things that are actively malicious.

6 Likes

There isn’t much to document regarding aarch64 and/or backups; it’s pretty much transparent compared to any other platform, as expected, and the backup part is: use rsync or zfs send/recv over a tunnel or VPN to another box.
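
For anyone new to it, the send/recv part is just a short pipe; a minimal sketch with placeholder pool, dataset, snapshot, and host names:

# snapshot locally, then replicate it to another box over ssh
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | ssh backup-host zfs recv -F backuppool/data

# later, send only the delta between two snapshots
zfs send -i tank/data@snap1 tank/data@snap2 | ssh backup-host zfs recv backuppool/data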

If anything, it makes more sense to educate on why some hardware is less ideal and why upstream support is important.

1 Like

tar is too slow :frowning:

At least in some cases I had, when there were GBs of data in a huge number of small files.
tar and untar took hours for a backup and restore during a hardware migration.

Next time I will try to start early and use rsync in the days before and then just do a final sync on the migration date.
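
Roughly what I have in mind (paths and the hostname are placeholders):

# days before the migration: bulk copy while the old system is still live
rsync -aHAX --progress /data/ newhost:/data/

# migration day: re-run with --delete so the target exactly mirrors the source
rsync -aHAX --delete --progress /data/ newhost:/data/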

But I think this could be an interesting topic:
What other tools/practices are there for creating backups and restores that could perform better than tar?
Snapshots (if the filesystem supports them), squashfs, rsync, a multithreaded / recursive / map-reduce-like tar, others?

2 Likes

I am using ARM as an easy and cheap way to get into this. Imagine if home routers used x86: the costs would be pretty high and you wouldn’t see home routers so widespread. The same goes for the case I am envisioning.

Regarding backups, this is another story. I am saying that there are setup tutorials, like “how to configure apache,” “how to setup gitlab,” and “how to install and use postgres,” but none of those shows you how to back them up. GitLab has its own backup tool; if you show someone how to set it up, also show them how to back up as part of the setup process. If you configure a DB, show people how to dump it, or how to enable transaction logs and back those up.
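
As a concrete example of what I mean, dumping a Postgres database is a single command (user and database names are placeholders):

# dump one database to a compressed, custom-format file
pg_dump -U appuser -Fc mydb > mydb.dump

# restore it later (the target database has to exist first)
pg_restore -U appuser -d mydb_restored mydb.dump

# GitLab's own wrapper does the equivalent for a whole instance (Omnibus installs)
gitlab-backup create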

What I was saying was that in my tutorials to come, I will include backup methods towards the end, after the setup is complete. I will likely go through my previous posts and add this towards the end if required. For example, setting up the road-warrior VPN router doesn’t need a backup scheme; just copy the two scripts and your keys / certificates and you’re done. But for something like a DB, at least a few words need to be said about how to back it up.

3 Likes

bzip2 is the fastest I’ve tried when it comes to compression and decompression, and bzip2 does -9 by default anyway. I found that rsync was unreliable when sending big files (200GB+) over the ocean, so what worked for me was to bzip2 to standard output, pipe it over ssh to another box, and run bzip2 there to extract from standard input. Not sure why that worked and rsync didn’t.
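
Roughly, the pipe looked like this (file and host names are placeholders):

# compress one big file to stdout and decompress it on the far side
bzip2 -c bigfile.img | ssh remote-box "bzip2 -dc > /backups/bigfile.img"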

1 Like

zstd and lz4 are much faster, lzma2 is more efficient but also slower :wink:
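
For reference, recent GNU tar can drive zstd directly; a quick sketch (archive and path names are placeholders):

# recent GNU tar understands --zstd directly
tar --zstd -cvf archive.tar.zst /path/to/data

# or pass a custom compressor for multithreading / a higher level
tar -I 'zstd -T0 -19' -cf archive.tar.zst /path/to/data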

2 Likes

The problem isn’t necessarily the compression, but in this case the sequential reading/processing of each file. If you have a huge number of files, the sequential processing takes a lot of time. If all the files are on a single HDD, parallel processing probably doesn’t make sense (because of the read head). But if the files are on SSD(s) or a storage array, I think there could be some speedup if you do it in multiple threads, considering the IOPS limit and the number of threads to start (like when you do filesystem performance tests with fio using multiple threads, different queue depths, etc.). I don’t know if such a tool already exists, but I haven’t really dug into this.

1 Like