That’s good. Now that I think about it, the repurposed NUC I put in the home gym still runs Fedora. I think I just told it to update to 41 a few days ago.
It’s not possible to just take the string as-is, because you want the string to do two things: first there’s an arg, separated by a space from the file name, and then you want it NOT to use the spaces in the file name as field separators.
It’s true that other languages handle this better, because they aren’t primarily shells that need to guess what the user wanted. When your language is strongly typed, it’s easy to know exactly what the programmer wants.
A few thought-provoking options to fix this:
- Take the spaces out of your file names.
- Use TWO variables… one for the option, and a second for the file name.
- Set IFS to something that isn’t commonly found in file names… like a caret (^), then put carets in your variables between the various args instead of spaces…
But that’s kind of my point. It shouldn’t be treating the space as a separation.
When I have a string like:
--chapter "file name.mkv"
… then that is one string, not an option and its argument. It should take that string literally as it is, and not make assumptions about how it’s going to be used and change it based on them.
Ofc for it to be one string I’d have to quote that, and that’s what I did:
"--chapter \"file name.mkv\""
The only thing it should do with this is unescape the \" and that’s it. But instead it’s inexplicably doing the opposite, double- and triple-escaping the escape characters…
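For what it’s worth, here’s a minimal sketch of what actually happens to that string (the variable name is just for illustration):

```shell
# After the assignment, the \" escapes are already resolved:
# VAR literally contains:  --chapter "file name.mkv"
VAR="--chapter \"file name.mkv\""

# Expanding it unquoted then word-splits on spaces; the embedded
# quote characters are plain data by now, not grouping operators:
set -- $VAR
echo "$#"   # 3 words: --chapter / "file / name.mkv"
```

So the quotes do survive the unescape; it’s the later word splitting that cuts the file name apart.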
I would, but I have limited control over them.
Hadn’t thought of that, but seems a bit clunky.
Hmmmm that’s… interesting. I’ve seen IFS mentioned before but it’s one of those things that I never think of to apply on my own (kinda what we touched on last time with the manual lol)
Think I’ve got some mildly borked networking… Every now and then (a few times a day) I’ll just momentarily lose my connection to a remote machine. Enough to lose my remote desktop (Sunshine/Moonlight) and any SSH sessions (in which I have done things such as starting Sunshine, which then dies when the terminal dies).
Where do I start digging into this? I’ve got another machine on the same switch that I haven’t had any issues losing my xRDP or SSH sessions. Don’t see anything of interest in the logs on either the host or the client, but I may not be looking in the right place, or maybe don’t have network manager verbose enough or something?
You’ve got both a command arg and a file name (with spaces) in your single variable. If you had IFS="" it would treat them both as a single thing, and not work.
i.e. it would pass the single argument --chapter file name.mkv, which isn’t a valid arg (and isn’t the file name you meant, either).
It uses the space between “--chapter” and “file” to know that they’re two different things. But then you’ve got a space in your file name, too, so it thinks there’s three things…
One way to make it understand what you want:
IFS=^
VAR="--chapter^file name.mkv"
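Spelled out a bit more, the caret trick looks like this (a sketch; printf just stands in for whatever command would consume the args):

```shell
IFS='^'                       # split only on carets now
VAR='--chapter^file name.mkv'
set -- $VAR                   # two fields; the space survives
printf '<%s>\n' "$@"          # <--chapter> then <file name.mkv>
unset IFS                     # restore default word splitting
```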
> It should take that string literally like it is
That’s not bash you’re complaining about, but how running commands works in Unix-like operating systems. Programs don’t get a long string with the command line in it, they get an array of pointers to strings, one pointer for each argument. You must have seen in C int main(int argc, char* argv[]). And, conventionally, --option value is two arguments; if bash did that differently it would break most commands.
My sympathy, though, quoting issues in shell scripts can be awful.
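A quick demonstration of that argv model in the shell itself (the file name here is made up):

```shell
FILE='file name.mkv'

# Unquoted expansion: the shell splits $FILE before the program
# ever runs, so argv gets three entries after the command name
set -- --chapter $FILE
echo "$#"    # 3

# Quoted expansion: "$FILE" stays one argv entry, spaces included
set -- --chapter "$FILE"
echo "$#"    # 2
```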
Right I got that from your previous post. But that’s what I’m saying. I’m defining a single string, not 2 parts of a string. It doesn’t need to “know” these are two things because to the interpreter it should not matter what I’m going to end up using the string for.
I understand why it’s the default but that’s why I’m saying I’d like a “dumb” mode where it doesn’t make these assumptions and when I fuck up the quoting that’s on me.
I guess that is what the IFS method could do in a roundabout way.
bash is already not POSIX compliant so an option for it would not make a difference. I’m not talking about defaults here.
It is supposed to have a POSIX-compliant mode when run as /bin/sh instead of /bin/bash, but when I tested this some time ago it would still let me do bash-specific things so IDK what’s up with that.
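One way to poke at this (a sketch; note that bash’s POSIX mode only changes behaviors POSIX actually specifies, so bash-only syntax like [[ still works there, which may be what you ran into):

```shell
# shopt -oq posix exits 0 when the posix option is on
bash --posix -c 'shopt -oq posix && echo "posix mode: on"'
bash         -c 'shopt -oq posix || echo "posix mode: off"'

# ...but [[ ]] is a bash extension and keeps working either way:
bash --posix -c '[[ -n x ]] && echo "[[ still accepted"'
```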
EDIT: I forgot to check which of my drives had each OS and made a stupid mistake. I used to run the small SSD as my everyday Linux installation and forgot I swapped them a year ago. Ignore this post.
Hi there! I’m currently using two SSDs in my ZorinOS installation: a 256GB one which I installed ZorinOS to (nvme0n1), and a second, 512GB Windows 10 Pro one (nvme1n1).
I thought my bootloader was installed on the ZorinOS one, but as soon as I removed the W10 one to give it another purpose, it wouldn’t boot into Linux anymore. This is weird, since I specifically installed each operating system on a different physical drive, but I guess Windows, which got installed afterwards, is predatory enough to have taken the bootloader partition somehow.
This is how it currently looks in Disks:
Could someone let me know how to fully transfer whatever partition is making my 256GB SSD depend on the Windows SSD to boot? I tried booting a Live USB of ZorinOS and running a Boot Repair utility, but the Live USB
Probably the boot sector didn’t get installed on that drive because there was already a drive in there flagged for it. So you would have to carve out a boot sector, I guess - I don’t know any other tricks, but I’ve not had to deal with many problems on this front.
https://wiki.archlinux.org/title/EFI_system_partition
edit: except it looks like the 256gb disk has windows, and the 512 has linux? and the boot sector is on the 512?
My bad, that is right, it was the other way around. I’m so dumb.
I’ll read the documentation, I will need to figure all this out once I upgrade from the 512GB SSD anyways, but that’s a story for another day.
My god, I can’t believe I just spent the whole afternoon trying to troubleshoot this. I originally removed the 512GB SSD and left the 256GB in, so I guess Windows couldn’t boot because the 512GB (Linux) drive had the boot sector on it.
Shame on me
Hello everyone!
I was unsure whether or not to open a new thread, so I figured I may as well ask here first.
I’m trying to get my HP Reverb G2 to work (Fedora 41). I found Envision and managed to get the WMR profile installed (after hours of tinkering, manually compiling and installing monado+basalt, only to find I was just missing the boost-devel package). Now my issue is that when I try to start it, I have permission failures when accessing the device:
ERROR [p_open_hid_interface] Failed to open device '/dev/hidraw3' got '-13'
ERROR [wmr_create_headset] Failed to open HoloLens Sensors HID interface
I tried making a udev rule, but it didn’t help. Maybe I did it wrong?
12:45:41 zvir@fedora ~ → cat /etc/udev/rules.d/98-reverb.rules
SUBSYSTEM=="usb", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="0580", ACTION=="add", MODE="0666", TAG+="uaccess"
12:47:23 zvir@fedora ~ → lsusb|grep HP
Bus 001 Device 021: ID 03f0:0580 HP, Inc QHMD A85V
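For reference, the error is on a /dev/hidrawN node, which belongs to the hidraw subsystem rather than usb, so a rule like this might be closer (an untested guess based on the IDs above; ATTRS{} still matches up the parent USB device):

```
# /etc/udev/rules.d/98-reverb.rules (hypothetical variant)
KERNEL=="hidraw*", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="0580", TAG+="uaccess"
```

After editing the rule you’d reload with sudo udevadm control --reload-rules && sudo udevadm trigger, then replug the headset.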
When testing monado with Vulkan, it encounters the same error unless I invoke it with sudo (running this: VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_validation ./build/src/xrt/targets/service/monado-service).
For the record, SteamVR does not recognize the headset either, but I’m assuming it’s also related to the USB permissions, as I did add the monado folder to the Steam external drivers.
Any ideas on what I should try next? Should I try Envision/Monado support?
Thank you.
So I have a weird problem where Virtual Machines aren’t able to connect to the internet, specifically the NAT virtual network, when firewalld is running on the system. They seem to connect to the isolated virtual network and macvtap devices just fine. I cannot figure out what configuration is causing this. Here’s my firewalld configuration for the libvirt interfaces:
$ sudo firewall-cmd --zone=libvirt --list-all
libvirt (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: virbr0 virbr1
sources:
services: dhcp dhcpv6 dns ssh tftp
ports:
protocols: icmp ipv6-icmp
forward: no
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule priority="32767" reject
I wonder if it could be a problem with the underlying nftables or iptables rules, but I am not knowledgeable enough in those to diagnose it; I feel like I am only just now getting comfortable with firewalld itself. I do wonder if the problem here is also why my firewalld port forwards haven’t been working, though my trying to forward ports to a particular virtual machine using firewalld rules could have been what caused this, too.
P.S. Earlier today, I upgraded the system, and upon boot I got the message: unmaintained driver detected: nft_compat, which led me to this article: The ipset and iptables-nft packages have been deprecated - Red Hat Customer Portal
This could be related, too, but when I try to uninstall ipset and iptables-nft it tries to uninstall a bunch of other things that I do not want it to uninstall.
P. P. S. I just realized that the isolated network seems to only connect when I configure the VM’s interface to obtain a static IP instead of one from the host’s DHCP server.
P. P. P. S. Well, I just tried to set a static IP for the NAT interface, and while I have a connection between the host and the VM as on the isolated network, the VM cannot access the open internet
OK, so I am starting to think that I have way too many computers at home and that I need to somehow centralize accounts and identities, because file permissions get weird for the admin account.
So how does one centralize identities for logging in and file-ownership purposes? Do I have to set up my “adminz” across all my devices at home? Should I make things easier by using a NAS instead, with no file ownership?
Easiest way to start is FreeIPA
I’ve botched an update and had to chroot into my system to regenerate the kernel files in the /boot directory.
I used some automation (manjaro-chroot -a) and ended up in the chrooted environment, but without the /home directory mounted.
Invoking pacman -Suyv made the system bootable again, but with quirks.
Lost icons on login screen and in launcher / status bar. Some windows got a black frame around (pic, right).
Many programs won’t start: Terminal, Gedit. Firefox didn’t want to start either, but somehow it recovered data from the ‘old’ profile.
Any way to repair it with less hassle than a new installation?
Run them from the command-line to see what error messages they’re spitting out. Probably some library version mismatch/missing.
I realize that might be a bit of a chicken-and-egg problem when Terminal doesn’t work… You can try installing a couple other terminals (xterm, rxvt, kitty, etc.) first, to get one that’s working.
Or you could log in to a virtual TTY (Ctrl+Alt+F4), set DISPLAY=:0, then run Terminal from there.
Thank you. I think I found the problem, still not sure about the solution, though. It is GTK-related:
I’ve tried to reinstall all packages, but I think I failed it.
Can you do me a favor? Go into Network Manager, disable the connection, then re-enable it and see if the VMs are able to make a connection. At least that has been an issue for me lately.
Missing expected Papirus-Dark theme/icons…
Have you tried: pacman -S papirus-icon-theme