ThatGuyB's rants

I can’t believe how usable the forum is with just a keyboard. Ain’t the finest experience, but definitely workable.

My wireless KB+touchpad combo suffered brain water damage and kept typing the letter “n,” so I shut it off, pulled its batteries and I’m letting it sit out for the night. I’ve been using one of my trusty SK-8115s on the forum, with no cursor at all. I have 2 mice, 2 touchpads and a trackball waiting to be used; I was just too lazy to unpack one of the touchpads. My trackball is being used on my TR Windows VM, and although its dongle is on a USB extension, so I could easily move it over, the TB is giBALLtic.

1 Like

I’ve been using my keyboard+touchpad again, works like brand new.

The forum could have been a lot more keyboard-friendly than it is if the custom top bar / banner that says “Return to level1tech” was not there. Even when it doesn’t show on a thread, it is still there, just hidden away. If you hit F6 to focus the URL bar, then hit Tab a bunch of times, the first thing highlighted on the page will be that custom banner.

The bugs I encountered: in a long thread, like Post What You Are Working On or whatever it’s called, when I was scrolling with j and k to go through the comments and then hitting Tab to give an upcummie to someone, sometimes Tab would just jump to an older comment at random, not necessarily one I had upcummied before, and then I had to scroll down to the right one again. I made it easier on myself by using Page Down instead.

Another weird bug, also caused by the custom banner: when I was deep in a long thread and went from F6 through a bunch of Tabs to reach my icon at the top and check notifications, the banner to return to L1T would get revealed, but the forum would also jump from the comment I was at to the top of the page. Completely asinine. Without the banner, I doubt I would be pushed to the OP; focus would just jump to the L1T icon near the thread title.

Yesterday I finished Hitman 5. I didn’t really enjoy the game, especially not like the previous episodes, mostly because of the challenges: I just don’t feel satisfied until I’ve played a game a few times and completed the challenges, but here it was such a chore. The game got boring a quarter of the way to completion. I took a break of a month or more from it, because I really didn’t feel like playing it anymore, and left it at the last mission.

Yesterday I wanted to finish it. I didn’t care that I wasn’t getting all the challenges or that I couldn’t be a silent assassin. I just killed anyone who stood in the way.

Then I played DOOM. I got the latest version of FreeDOOM and the latest gzdoom, grabbed BrutalDOOM v21 and played Going Down on Ultra-Violence with pistol starts. Boy, did I have a lot of fun. Even with all the deaths (it is pretty brutal), I had fun conquering levels 1 through 3.

Why are games these days so overly complicated? Game devs just squeeze the last drop of enjoyment out of their games. I remember playing Thief 3, Dishonored, DOOM 3, Command and Conquer 2 and 3 and, obviously, the original DOOM. All of them were pretty simple games; if you wanted a challenge, you played on a higher difficulty. I usually would start a game on medium AI, then, after finishing it and getting the hang of it, go on hard. And I would replay games quite often.

But modern games? I finish them and never look back. I played Hitman 2 more than 10 times. Hitman 4 I only played once, but I managed to complete it. Hitman 5? I could barely get through it. I don’t feel like buying the new versions of Hitman 1, 2 and 3, and I have Hitman: Sniper Challenge, which I haven’t even started.

Am I just being nostalgic for old games, or is the game industry possessed by demonic managers who only think about making money and not leaving enough leeway to the devs to actually make enjoyable games?


No, you are not just being nostalgic for old games; I have noticed that since 2011, while game visuals have steadily improved, gameplay and the desire to finish a game have steadily declined. I believe the fault for the decline in games falls on the publishers and poorly trained devs.


I just had a weird bug with OpenVPN or something. My rkpr64 (router) was pinging my VPN’s outside IP just fine (I had an mtr open), but another mtr towards my VPN’s internal address showed a lot of packet loss, and my OpenVPN tunnel was reconnecting every 2 minutes or so. Not sure what the problem was.

In all fairness, weirdness with OpenVPN is nothing new to me. On my previous router (an RPi 3), my VPN would disconnect sometimes, but it always worked well after I reconnected it. It happened on the rkpr64 too, so I just opened a new terminal (with abduco, so the session stayed on the rkpr64) and brought the VPN back up. That worked for a few days. But now it just refuses to stay up for more than 2 minutes.

Killing and reconnecting did not fix it.

Interestingly enough, the USB device (my wifi card) was on usb7, and after the reboot it’s on usb1. I did not notice any disconnects. I would understand the problem if the mtr to the VPN’s public IP also showed lost packets, but there were 0 packets lost out of a few hundred, so the wifi card had nothing to do with it. A reboot fixed the issue with the VPN (I have an @reboot cron job to connect to my VPN - I should probably convert it to a runit service, so I don’t have to connect manually if it dies).
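That @reboot cron job could be replaced with a small runit service along these lines (a sketch for a Void-style runit layout; the service name and the openvpn config path are assumptions, adjust to taste):

```shell
# create the service directory (name is hypothetical)
mkdir -p /etc/sv/openvpn-client
cat > /etc/sv/openvpn-client/run <<'EOF'
#!/bin/sh
# runit restarts this automatically if the tunnel process ever dies
exec openvpn --config /etc/openvpn/client.conf
EOF
chmod +x /etc/sv/openvpn-client/run
# enable it; on Void, symlinking into /var/service starts supervision
ln -s /etc/sv/openvpn-client /var/service/
```

With that in place, a dead tunnel comes back on its own instead of waiting for the next reboot.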

Never mind, I just hit the same problem a few minutes after the reboot. Leaving mtr running a bit longer, it looks like I lose about 3% of packets to my VPN’s public IP address, with 9 hops (out of 13) showing 0% loss, while the VPN’s internal IP addr sits at about 60 to 90% loss, never dropping below 50%. It’s pretty bad. It is definitely not the internet or the router on this side; likely problems on the other side.

Hope the internet will be fine tomorrow morning.

1 Like

I remembered that OpenVPN sometimes needs to be updated. I’m bad at keeping my router updated, but I did update pfsense 51 days ago (that was the uptime, and I only reboot on update - proxmox had 115 days of uptime, but I update it more frequently, I just don’t reboot)… Anyway, given the horrible disconnects, I knew I could not update pfsense from the CLI over the flaky tunnel.

I SSH’ed to proxmox on the other side, entered a screen session and SSH’ed to the pfsense box from there. And a good choice it was: I had to reconnect to screen 4 times during the update. I didn’t see what was updated, though. After a reboot of pfsense, the packet loss to OpenVPN’s internal IP started going lower than 50%. Currently at 1750 packets sent and 38% loss, and still going down.

Seems like the problem was with pfsense, which is not something you see often. I’m glad I took the time to fix it, instead of hoping it would go away by tomorrow and being met with the same problem.

1 Like

No, the issue persists. It’s definitely network. Weird how it worked for a bit.

Now I can’t even connect to OpenVPN; it complains about certificates. Kinda BS. Moved to WireGuard to my other site. Works for now. But I still have issues with some websites. Tested duckduckgo just to see if the page loads: no dice through WireGuard, but it works through OpenVPN. I don’t really use it; it was just an example.

Now I feel kinda stupid. The problem is worse now tho’. I get TLS handshake errors saying I should check my network connection. From what I can tell, the network is fine up to the OpenVPN server’s router. I fear the issue may be something local, like a port going bad or the switch misbehaving. I just hope it’s nothing like that and it’s just a network problem with my ISP over at my other site, but from the looks of it, I highly doubt it.

Too tired to try to troubleshoot further, I’ll try tomorrow.

1 Like

Today I unplugged my router’s RJ45 cable for about 5 seconds to move a cable around. I was prepared to restart all my SSH sessions and relaunch my zfs send. When I did so, the router did not even detect that its port had been unplugged for a while: the ethernet port’s LEDs on the router stayed constantly lit, while on the switch they were off. When I plugged the cable back in, the LEDs on the router went dark first, then the switch LEDs turned on, followed by the router’s. Yes, the LEDs on the router only turned off after I plugged the cable back in.

My SSH connections did not flinch, my zfs send did not bat an eye and my VPN connection showed no events happening. The outage was so short that, to everything on my network, it looked like just a few normal lost packets, which happens more often than I’d like (about 1.3% out of 15k packets, according to mtr to the gateway’s IP, and about 3.4% to the VPN’s internal IP address, tho I’m not noticing that at all during my daily activity).

It is quite fascinating how resilient the software that we are using is to the unreliability of the internet. And the internet is unreliable. The bad part is that most people nowadays believe the internet is not only reliable, but that it will always be up. We see things like “whatever goes on the internet, stays on the internet,” but that is only true because there are still people who treat the internet as the unreliable mess that it is and make surprise decentralized backups of online data.

If more people treated the internet as it should be treated, like the unreliable medium that it is, we would not have such difficult times.

> strong men create good internet
> good internet creates weak men
> weak men create bad internet
> bad internet creates strong men

The software protocols we have today were created by absolute chads. Today’s soyboys are making horrible software that becomes unresponsive the moment the connection from the browser to the server gets severed. I can’t wait for the day when people start decentralizing again and get rid of the abominations that are internet 2.0 and 3.0.

For real, if people planned for the day the internet goes down, nobody would care if a hurricane hit and took out the infrastructure. Back in the day, people just used to turn on their radios and pick up broadcast waves from all around. In the modern day, everyone should have solar-powered routers that form a mesh network with all the other routers around, through protocols like B.A.T.M.A.N. or BMX6.

But we need discovery protocols for the services around us. Sure, you can nmap around and probe ports 80 and 443, but that ain’t really gonna cut it. We should have a protocol in which you broadcast a packet with a small TTL, then get unicast replies back listing what services are available nearby. Of course, the responder would be a small server set up by each admin who wants their services to be public. It would, for the most part, get rid of search engines for discovering nearby stuff.
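Something along those lines can be mocked up in a few lines of UDP. This is a toy sketch, not a real protocol: the port number, the DISCOVER probe and the JSON reply format are all made up here.

```python
import json
import socket
import threading

DISCOVERY_PORT = 48900  # hypothetical port choice

def serve_announcements(services, stop):
    """Answer broadcast probes with a unicast JSON list of local services."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(0.2)
    while not stop.is_set():
        try:
            data, addr = sock.recvfrom(1024)
        except socket.timeout:
            continue
        if data == b"DISCOVER":
            sock.sendto(json.dumps(services).encode(), addr)  # unicast reply
    sock.close()

def discover(dest="255.255.255.255", timeout=1.0):
    """Broadcast a probe with a small TTL and collect unicast replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 2)  # keep it local
    sock.settimeout(timeout)
    sock.sendto(b"DISCOVER", (dest, DISCOVERY_PORT))
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            found.append((addr[0], json.loads(data.decode())))
    except socket.timeout:
        pass
    sock.close()
    return found
```

To try it on a single machine, run serve_announcements() in a thread and point discover() at 127.0.0.1 instead of the broadcast address.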

Well, I can dream. I am too much of a noob to write such a good piece of software; I could probably only write insecure exploitware™. But I know that, even if I tried, at least it would not be soyftware™.

1 Like

SSH in particular will hang onto a connection for much longer than a 5 second interruption. Definitely the product of strong men (and women?).


I was using the old meaning of the word, as in human.


I was literally wondering if any women work on SSH/OpenBSD. Seems like mostly curmudgeonly European men.


I find it so cringe that I bought an RX 6600 XT and I’m still mostly playing DOOM on it. I mean, at least I’ve got the potential for better, but still… Well, I haven’t managed to get the second GPU, the RX 6400, to work in passthrough on Linux, but it might be a me thing.

1 Like

I kind of felt like ranting about this anyway. The RockPro64 official case has some issues, with the included SATA power cables and SATA data cables putting a lot of strain on the ports on the SSDs. Maybe it’s just the cheap plastic that Crucial uses on the MX500, but the tolerances in the case are very tight.

I chose to grab new cables and do my own runs. I got SATA power cables that are crimped with metal teeth (kind of like punch-down RJ45 sockets) and covered with a cap.

The included power converters, from the 12v rail to 5v on the cables themselves, are only officially sanctioned to power 2 drives at a time. The official case has 4 drive slots, 2x 3.5" and 2x 2.5", yet with the included cable you are only supposed to power 2 drives. Maybe you could split each lead between an HDD and an SSD and it might be fine, but I decided not to trust it; Pine64 doesn’t have the best track record with power stuff, as some people on the forum can attest.

It might work with 4 SSDs, but I’ve got 2 IronWolf 10TB spinning-rust drives along with the 2 MX500s. So I went with a 19v brick that needs to be stepped down to 12v to power the RockPro64 and the drives. The drives also need 5v, so I got another one of the same adjustable step-down converters and will connect the two in parallel off the 19v brick, giving me 2x 12v wires and 2x 5v wires (hot and ground for each).
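To sanity-check the 2-drive limit and size the brick, here is the rough math. The current figures below are my assumptions based on typical drive behavior (spin-up peaks and worst-case SSD draw), not measured or datasheet-quoted values:

```python
# Rough power-budget sketch; all current figures are assumptions, not measurements.
AMPS = {
    "hdd_12v_spinup": 2.0,  # per 3.5" drive on the 12v rail, spin-up peak (assumed)
    "hdd_5v": 0.7,          # per 3.5" drive on the 5v rail (assumed)
    "ssd_5v": 1.7,          # per 2.5" SSD on the 5v rail, worst case (assumed)
    "board_12v": 1.5,       # the RockPro64 board itself (assumed)
}

def budget(hdds: int, ssds: int):
    """Return (12v watts, 5v watts, total watts) for a given drive mix."""
    w12 = (hdds * AMPS["hdd_12v_spinup"] + AMPS["board_12v"]) * 12
    w5 = (hdds * AMPS["hdd_5v"] + ssds * AMPS["ssd_5v"]) * 5
    return w12, w5, w12 + w5

w12, w5, total = budget(hdds=2, ssds=2)
print(f"12v rail: {w12:.0f} W, 5v rail: {w5:.0f} W, peak total: ~{total:.0f} W")
```

With these guesses, the 2 HDD + 2 SSD mix peaks somewhere around 90 W at spin-up, and the 5v rail alone wants nearly 5 A, which is exactly why I don’t want to lean on a little in-cable converter sanctioned for 2 drives.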

And by removing the heat-producing elements from the case, like the 5v converters that Pine64 provides, the components inside should run a bit cooler. I will also have an 80mm Noctua fan on the case pulling air out.

But because of my design choices, I will need to be careful with how I connect things, label each input and output voltage, and make some shrouds for the step-down converters, as they have exposed electronics. :skull: And in the shrouds, I need to leave some space for the heatsinks to dissipate heat to the surroundings, as the MOSFETs need some serious cooling.

I don’t mind having to go through all this, but I wish I had known about it before I hopped on this endeavor. Had I known, I would probably have bought a 3x 5.25" to 4x 3.5" hot-swap bay from Chieftec, IcyDock or StarTech and made a support on the side for the board. I might have used less space that way and had better cable management options. I’m no master designer and I wouldn’t be able to come up with anything in metal, but I’m a fan of transparent cases and seeing tidy cables.

1 Like

After a long break, I finally built my RockPro64 NAS. The delay was a combination of lacking tools, electronics wiring, waiting for parts, going away for a while, and not wanting to fry my components.

I will be posting the update, hopefully with the pictures I took during the build, in the ARMLand thread. That official case needs a serious guide somewhere, because the component sizes / fitment are unforgiving: get the wrong component and you have to wait to buy another one.


Woot woot, I’ll be looking out for that, or include me in the tag when you post. I’ve finally been able to get on a little more with the hectic holiday schedule.

1.) Hope you got your passthrough working. I had a heck of a time… but once I figured it out, it’s been much easier, well, for me on Proxmox at least.
2.) Hope all’s going well in your neck of the woods and you’re settling in, i.e. able to tinker some more and relax a bit.


I mean, it’s been working fine on Windows for months now. But on Linux, I can’t figure it out.

Somewhat, I can’t wait to finish the setup on the rkpr64.

1 Like

There is nothing more permanent than a temporary rapid test prototype.

1 Like

I just finished setting up an iSCSI target in FreeBSD and the iSCSI initiator in Linux, using CHAP authentication. It’s not difficult at all; I’m actually surprised how easy it was. Why didn’t I use this before?

FreeBSD part

zfs create -V 8G tank/test-target
echo 'ctld_enable="YES"' >> /etc/rc.conf
  • edit /etc/ctl.conf
auth-group ag0 {
  chap user1 password1
  chap user2 password2
}
# in freebsd, the CHAP secret needs to be at least 12 characters long, at most 16

auth-group ag1 {
  chap user1 password1_new
  chap user2 password2_new
}
# not the same passwords as ag0

portal-group pg0 {
  discovery-auth-group ag0
  listen <interface-ip>
}
# only listening on a single interface; you may want to add more interfaces, or just use 0.0.0.0

target iqn.20221228. {
  auth-group ag1
  portal-group pg0
  lun 0 {
    path /dev/zvol/tank/test-target
  }
}
  • change the permissions of the file and start the service
chmod 600 /etc/ctl.conf
service ctld start

Linux part

  • install open-iscsi (on rhel, it’s “iscsi-initiator-utils”)
  • edit /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = user1
node.session.auth.password = password1_new
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = user1
discovery.sendtargets.auth.password = password1
  • edit /etc/iscsi/initiatorname.iscsi
  • enable the iscsid service, or create your own service file in case it doesn’t exist, like so:
mkdir /etc/sv/iscsid
cd /etc/sv/iscsid
cat > run <<EOF
#!/bin/sh
[ -r conf ] && . ./conf
exec iscsid -f
EOF
chmod +x run
ln -s /run/runit/supervise.iscsid supervise
ls -lh /etc/sv/iscsid/

# -rwxr-xr-x 1 root root   49 Dec 28 00:21 run
# lrwxrwxrwx 1 root root   27 Dec 28 00:22 supervise -> /run/runit/supervise.iscsid
  • start the process
sv start iscsid
# or
# systemctl enable --now iscsid
# with systemd, you’re on your own, lol, although I would be surprised if any distro that has open-iscsi in its repo doesn’t ship a systemd unit file
  • discover the targets and log in
doas iscsiadm --mode discovery --type sendtargets --portal <target-ip> --discover --login
  • happy formatting
ls -lh /dev/sd*
fdisk -l
  • to log out of the iSCSI session
doas iscsiadm --mode node --portal <target-ip> --logout
1 Like

I have an iSCSI zvol target that I formatted as ext4 on the initiator. This way, I get the benefits of zfs without needing to run the DKMS module on my other Linux boxes, and there is no need for fsck, as zfs can do the scrubbing underneath. It would technically be better to use a non-journaled fs, because a journal gives you no benefit on top of zfs, which already does everything you need (at least AFAIK), so you’d just get a bit more performance (probably negligible anyway, unless you do some really heavy stuff).
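For the non-journaled route, ext4 can simply be created without its journal (a sketch; /dev/sdX stands in for whatever device node the iSCSI disk shows up as):

```shell
# ext4 minus the journal, since zfs underneath already handles consistency
mkfs.ext4 -O ^has_journal /dev/sdX
# or go classic ext2, which never had one:
# mkfs.ext2 /dev/sdX
# check the feature list afterwards; has_journal should be absent
dumpe2fs -h /dev/sdX | grep -i features
```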

It appears that I can take a zfs snapshot, modify the fs, unmount it, roll back the snapshot, then mount the fs again, and the changes will be gone. Bonus points for this approach: if hypothetical ransomware hits one of my containers, or the entire host, I can just roll back the snapshots after I remove the ransomware, without needing to restore from backups (that doesn’t invalidate backups though; please do backups, people).

Interestingly, if I do it in a different order (take a snapshot, modify stuff, roll back the snapshot, modify some more stuff, then unmount the fs and mount it back), all the changes are still there, even when using the -r and -f options of zfs rollback. The fs must either be unmounted before the rollback, or at least not be modified afterwards; so only the mount, change, revert, unmount, mount order and the mount, change, unmount, revert, mount order work. The odds of the fs not being modified after the revert are low, unless I stop the container, so it’s safer to just unmount first, then roll back.
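Spelled out as commands, the safe order looks like this (a sketch; the snapshot name and mount point are made up, and note the zfs commands run on the FreeBSD target while the mounts happen on the Linux initiator):

```shell
# 1. snapshot the zvol while everything is known-good
zfs snapshot tank/test-target@clean
# 2. ...time passes, the ext4 fs gets modified (or ransomware'd)...
# 3. stop all writes BEFORE rolling back: unmount on the initiator first
umount /mnt/iscsi-vol
# 4. roll the zvol back underneath the fs
zfs rollback -r tank/test-target@clean
# 5. mount again; everything since @clean is gone
mount /dev/sdX /mnt/iscsi-vol
```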

1 Like

Inspired by Chain’s migration to wireplumber, I took the jump too.

My first attempt was a failure: audio would not load at startup. Little did I know my system was a mess. I either installed pulseaudio again at some point, or just disabled it and forgot to uninstall it, but when I stopped all the pipewire and wireplumber services, I found out pulse was still running on my system. Special thanks to mpv for showing which audio output it was using.

This time, the switch was relatively straightforward, it just needed a bunch of reboots. All I had to do was:

  • uninstall pulseaudio
  • comment out the current context.exec entry for pipewire-media-session in /etc/pipewire/pipewire.conf
  • add the following lines:
context.exec = [
    { path = "/usr/bin/wireplumber" args = "" }
    { path = "/usr/bin/pipewire" args = "-c pipewire-pulse.conf" }
]
  • make sure the services for pipewire, pipewire-pulse and wireplumber are enabled

And that was it. I have nothing in ~/.config/autostart, nor in /etc/xdg/autostart (related to wg; I do have some lxqt remnants). It works as expected (I only have HDMI out on my PC, and if I plug in headphones, it’s through my monitor’s 3.5mm output). I’m keeping pipewire-pulse because I like pulsemixer (I could probably use amixer and replace sway’s command shortcuts with amixer, but meh).
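A quick way to confirm that the pulse shim really is pipewire now (assuming pactl is installed; the exact version string will differ):

```shell
pactl info | grep 'Server Name'
# on a working pipewire-pulse setup this reports something like:
# Server Name: PulseAudio (on PipeWire 0.3.x)
```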

The reason for my jump was a message from the package manager mentioning that pipewire-media-session will eventually be removed.