The small Linux problem thread

(beating my anti-UUID drum again, sorry) IMO, if one is manually maintaining /etc/fstab, one should use labels, not UUIDs. One has to label partitions carefully (that’s a good idea anyway), but the result is a readable /etc/fstab, and thus less error-prone. For example:

 LABEL=EFI      /boot/efi   vfat    umask=000,noatime,noauto,user  0   1

I have a coder’s perspective on this. /etc/fstab is essentially code, and good code doesn’t have strings of hexadecimal strewn through it.
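
For what it’s worth, labels can be set (or fixed) after the fact with the normal tools; a quick sketch, with placeholder device names (run as root, and adjust to your own disks):

    # label the vfat ESP (fatlabel comes from dosfstools)
    fatlabel /dev/sdX1 EFI

    # label an ext4 filesystem
    e2label /dev/sdX2 fedora_root

    # check what the kernel sees before editing /etc/fstab
    ls -l /dev/disk/by-label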

1 Like

SkyNet enters the chat…

1 Like

ls -l /dev/disk/by-uuid
Get the ID of the partition (so that would be /dev/sdc1). 32-character UUIDs are usually ext4; 8-character ones are usually FAT32 or exFAT. Not sure why you’re using exFAT on an SSD, though. Also, not all kernels have exFAT support built in.
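
Another way to see all of the identifiers at once, if that’s easier to read:

    # one table: device, filesystem type, label, UUID, PARTUUID
    lsblk -o NAME,FSTYPE,LABEL,UUID,PARTUUID

    # or just the one partition
    blkid /dev/sdc1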

For an ESP partition for UEFI boot loader binaries?

This actually makes sense to me. Readability is very important. My only concern with a dual-boot system is that it’s way too easy to change volume labels in Windows (intentionally or by accident) and then get dumped to an emergency shell when you try to boot Linux.

Honestly I don’t know. Whichever is the default method of exFAT support in Fedora 33 is the most likely answer. I do not remember deliberately installing fuse, but this root partition has been around the block a few times.

/dev/sdc(1) is not the SSD. It was displaced from /dev/sdb(1) by the SSD. sdc is actually spinning rust on which I wanted to use a filesystem accessible from both Windows and Linux, but that’s a whole other rant.

All I want is SSH, BitTorrent, Samba, and btrfs support. I think that’s it.

1 Like

Output of findfs with the UUIDs for /dev/sdc1. It seems to be working properly…

    $ findfs PARTUUID=d0bbf865-0be0-4d70-9a80-4f79dc56599a
    /dev/sdc1
    $ findfs UUID=A640-FFC0
    /dev/sdc1

Okay, try:
ldd /usr/sbin/mount.exfat | grep -i fuse

If it says it’s linked against fuse, you need to upgrade to rpmfusion’s exfat-utils instead (not sure about Fedora 33, specifically).

If not, you could try going the other way and set the filesystem type in /etc/fstab to exfat-fuse. Just a thought…

Seems others have had problems similar to yours, so it may well be a bug in systemd that doesn’t get along with the shiny new exFAT code. It might work better with more recent releases.
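
A couple of other quick checks that might narrow down which exFAT path is actually in play (the package names below are the usual Fedora / RPM Fusion ones, so adjust if yours differ):

    # is the in-kernel exfat driver registered?
    grep exfat /proc/filesystems
    lsmod | grep exfat

    # which userspace tools are installed?
    rpm -q exfatprogs exfat-utils fuse-exfat

    # if the partition is currently mounted, what actually handled it?
    findmnt -o TARGET,FSTYPE,OPTIONS /dev/sdc1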

Here are the results of the ldd command:

[root@localhost ~]# ldd /usr/sbin/mount.exfat | grep -i fuse
ldd: /usr/sbin/mount.exfat: No such file or directory

EDIT: Weird, it mounted with the PARTUUID in /etc/fstab for a change. I’ll try a couple of reboots and see if that remains true. I’ve been seeing a LOT of updates lately. Something may have changed…

EDIT2: No, it didn’t automount /dev/sdc1 a 2nd time. And exfat-fuse is not recognized as a valid fstype.

You could try adding _netdev to the mount options… see if mounting later in the boot process works better at picking up the UUIDs. Just a shot in the dark, though.
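
Something like this, keeping your existing options (the UUID is the one from your findfs output; the mount point is just a placeholder, and nofail is an extra guess on my part so a failed mount doesn’t drop you into the emergency shell):

    UUID=A640-FFC0  /mnt/exfat  exfat  defaults,_netdev,nofail  0 0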

The thing to do is boot up with init=/bin/bash, or even drop into the initrd shell, and try some commands there, instead of after the system has booted.

That, or upgrade to Fedora Rawhide, and file a bugreport if it hasn’t already been fixed.
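
For example, something along these lines from the minimal shell (the UUID is the one from your findfs output; /mnt is just a convenient empty mount point):

    # does the UUID resolve at all this early in boot?
    blkid /dev/sdc1
    findfs UUID=A640-FFC0

    # try the mount by hand and watch for errors
    mount UUID=A640-FFC0 /mnt
    dmesg | tail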

Hi,
Here is a small Linux solution :smiley: I thought I’d share something I didn’t find a solution for online, so Google can pick it up.

I am running Fedora 33 Workstation and was having an issue running a Podman container:
error while loading shared libraries: libc.so.6: cannot change memory protections

I did some re-installation of Podman and the SELinux tooling, and reset some rights, but no luck.
My containers folder was in its default location, /home/vlycop/.local/share/containers/, and I didn’t want to move it to /var/lib/containers.

The issue was obviously with SELinux: the wrong context type. So I fixed it by doing:

sudo semanage fcontext -a -t container_home_t "/home/vlycop/.local/share/containers"
sudo semanage fcontext -a -t container_file_t "/home/vlycop/.local/share/containers/(.*)?"
sudo restorecon -R -v /home/vlycop/.local/share/containers

It’s maybe not the perfect type, but it works, and I am just starting with SELinux.

Also, I had to add :Z to the volume mapping in order to access it:
-v /toto:/toto:Z
see Using docker volumes on SELinux-enabled servers -- Prefetch Technologies
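
If you want to double-check the result on your own system, the contexts can be verified like this (the path is mine, adjust the username):

    # the context actually applied on disk
    ls -Zd /home/vlycop/.local/share/containers

    # the rules semanage recorded
    sudo semanage fcontext -l | grep containers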

Have fun!

2 Likes

Here is a problem I recently had and a solution:

I run a NAS with Fedora OS and btrfs on the HDD array. I also use a small 250 GB Samsung M.2 as the boot and OS device.

I was having a lot of problems with the root partition running out of space overnight. All kinds of services would crash and leave things in a bad state.

I finally tracked it down to two things: the default install of mlocate and the new btrfs subvolume and snapshots I had added last week.

I use snapper to make timeline snapshots of important directories and I had recently moved my /home/zlynx/Backups into its own subvolume and snapshot setup. This is where my other devices write their backup files.

It turns out that mlocate’s database builder likes to recurse into everything, including each snapshot. It was bad before, but I hadn’t noticed. Once it started trying to build 20 GB database files, which it does in a temporary copy (so that’s 40 GB in total), things were no longer sustainable.

The solution? Rather than figure out exclusion masks for mlocate I removed the package. I never use it anyway.

If you have a LOT of files on a system, watch out for this.
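
If you want to see whether it’s biting you, the check is quick (paths are the Fedora defaults):

    # how big has the locate database grown?
    du -h /var/lib/mlocate/mlocate.db

    # my fix: just remove the package
    sudo dnf remove mlocate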

1 Like

I was in the habit of using locate, a habit gained in the days of spinning drives. When I adopted btrfs and snapper, /var/lib/mlocate/mlocate.db quickly became a problem. In /etc/updatedb.conf, PRUNE_BIND_MOUNTS was no good, because updatedb didn’t distinguish a subvolume mount from a bind mount and so skipped /home. So I’ve had to keep /etc/updatedb.conf revised, with PRUNENAMES including the names of directories typically used for snapshots (like snapper’s “.snapshots”) and PRUNEPATHS listing the places I put them manually, plus other problematic mounts such as the btrfs top-level mount.
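
Roughly what the relevant lines in my /etc/updatedb.conf end up looking like; “.snapshots” is snapper’s default name, and /mnt/btrfs-top-level is only a stand-in for wherever the btrfs top level happens to be mounted:

    PRUNE_BIND_MOUNTS = "no"
    PRUNENAMES = ".snapshots"
    PRUNEPATHS = "/tmp /var/tmp /mnt/btrfs-top-level"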
IMO if btrfs is used distros should not install problematic packages like mlocate by default.

1 Like

Finally got around to learning Ansible.

Several examples of include_vars use "{{ item }}" as the value. What does this mean?

 pre_tasks:
    - include_vars: "{{ item }}"
      with_first_found:
        - "{{ ansible_os_family }}.yml"
        - "default.yml"

Say I have foo: bar in default.yml. How do I access it later? I assume that without "{{ item }}", I would just use "{{ foo }}", or if I used - include_vars: my_vars, I would use "{{ my_vars['foo'] }}" or "{{ my_vars.foo }}". But I’m not sure what it means to put a variable after include_vars: that doesn’t appear to have been defined yet. Or does the item variable have some implicit value?

@nx2l

2 Likes

On phone so going to be short rn…

item refers to the current element of a looped task.

with_first_found is a type of loop.

item is the placeholder for the variable in the loop.
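
Back at a keyboard: a minimal sketch putting it together. with_first_found runs the task once with item set to the first file in the list that exists, and since there is no name: parameter, the vars from that file land at the top level, so later tasks can just use "{{ foo }}" (the debug task is only there to show that):

    pre_tasks:
      - include_vars: "{{ item }}"
        with_first_found:
          - "{{ ansible_os_family }}.yml"
          - "default.yml"

    tasks:
      - debug:
          msg: "foo is {{ foo }}"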

1 Like

Thanks, that was what I was missing.

1 Like

Does RHEL/CentOS 7 not have a python3 library for SELinux? I installed python3 and set the Ansible interpreter to it, but it’s erroring out with No module named 'seobject'. I’ve installed every python/selinux/policy package I can think of or find in yum, and still no luck.
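
If nothing turns up, the fallback I’m considering is to point Ansible back at the platform Python for just that host, since the seobject bindings EL7 actually ships are the python2 ones (from policycoreutils-python); a sketch, with a made-up hostname:

    # inventory
    centos7-box ansible_python_interpreter=/usr/bin/python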

1 Like

I had the same thing happen to me last night. I don’t suppose you made any progress in getting to the bottom of the issue?

System specs:

OS: Proxmox VE 6.2

Ryzen 2700x
ASRock x470d4u
2x 32GB Samsung M391A4G43MB1-CTD
Seasonic Prime GX-850

Here are the messages from syslog:

Jul 22 03:25:07 pve01 kernel: Uhhuh. NMI received for unknown reason 3d on CPU 15.
Jul 22 03:25:07 pve01 kernel: Do you have a strange power saving mode enabled?
Jul 22 03:25:07 pve01 kernel: Dazed and confused, but trying to continue

I recently had cause to run Memtest86 anyway (my CMOS battery died, and you get memory errors with my RAM and mobo combo unless you manually set the memory speed to 1333). So the RAM has recently had 10+ full passes of a mix of Memtest86 8.4 and 9.1.

I also ran another three passes of Memtest86 9.1 overnight, last night, without errors. I’m now running more passes with 9.1, but with just tests 5 and 8 enabled, as I’ve read they’re the tests that can find errors with the CPU’s memory controller.

The errors occurred when the system was just going about its daily business of taking ZFS snapshots and running a handful of LXCs, so system load would have been light. Which makes me wonder whether the issue is to do with power saving… I found one or two users with similar-looking syslog messages who said that disabling C-states stopped the error.

There was no apparent problem with the server itself. It didn’t reboot or freeze (unless it unfroze by the time I happened to see the errors about an hour after they happened).
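
If I do end up chasing the power-saving angle, the OS-side version of “disable C-states” that those threads describe looks roughly like this on Proxmox (not something I’ve confirmed fixes anything, and the other common suggestion is the BIOS “Power Supply Idle Control = Typical Current Idle” option):

    # /etc/default/grub - keep the CPU out of deep idle states
    GRUB_CMDLINE_LINUX_DEFAULT="quiet processor.max_cstate=1 idle=nomwait"

    # then regenerate the config and reboot
    update-grub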