What you are asking for is: make -n install
Bit of a hassle to pick through the screens of output, but it absolutely tells you where the files are going.
A stupider trick is to sudo to a non-root user and do make -k install; then you’ll have a list of all the files it FAILED to install.
But I heartily agree with others who have recommended rpmbuild. Go search for your software on rpmfind.net or rpm.pbone.net and then go download and build the SRPM. If there are no RPMs, grab the SPEC from some very simple package and customize it. Takes a little time up-front, but it’ll save you time and hassles for years.
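To make the dry-run trick concrete, here’s a minimal sketch using a throwaway Makefile (the program name and paths are made up for illustration):

```shell
tmpdir=$(mktemp -d)
# Recipe lines in a Makefile must start with a tab, hence the printf '\t'
# (the "myprog" target and its paths are examples, not a real package)
printf 'install:\n\tinstall -m 755 myprog /usr/local/bin/myprog\n' > "$tmpdir/Makefile"
# -n prints the commands make *would* run without executing them,
# so nothing is actually written to /usr/local
make -C "$tmpdir" -n install
rm -rf "$tmpdir"
```

On a real project the output can run to screens, so piping through something like `grep -E 'install|cp|mkdir'` helps cut it down to the interesting lines.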
Sometimes on Linux I wonder why GUI tools even exist…
So there is the KDE Partition Manager. Just shrunk my old Windows partition (worked) and created a new ext4 partition (worked). Then I edited the mount point (by UUID; I gave it a path), and it even flashed a warning that it will change /etc/fstab and that the process is not reversible. Go ahead and click save aaaannnddd… no change to the file.
I mean… really? Of course I can go and research what I need to put in it manually, but why is there a GUI option that doesn’t even work? And it’s not like that’s the first thing I ran across that was the same way… just
/rant
Also, not sure if ext4 was the best choice? There’s just going to be games on it so I guess it’s fine? I don’t really get the intricate differences between the filesystems anyway but some people seem to be a big fan of xfs too?
also what I don’t quite get:
Why is the size in the properties (in Dolphin) smaller than in the partition manager, and why are there more than 20 GB used right after formatting? I mean that’s almost a 30 GB difference from what I formatted it at…
Also properties in Partition Manager:
That one at least matches the list in it, but I still don’t get why these numbers don’t match up…
Yes, it asks for them automatically (also fairly certain that without them the formatting wouldn’t have worked either).
I edited the fstab manually and mounting works, but I have a small issue with it. The drive is mounted read-only, although I used the defaults setting in the fourth value. According to the docs:
defaults
use default options: rw, suid, dev, exec, auto, nouser, and async.
I don’t know what all these options do, but I am fairly certain rw is read-write. So why is it mounted read-only?
edit:
even setting rw explicitly doesn’t make it writable…?
Uh OK, seems the drive is owned by root and only root has read/write. I chowned it, but is that the best thing to do?
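Chowning it is the standard fix here: for Linux-native filesystems like ext4, ownership and permissions live in the filesystem itself, not in fstab, and a freshly made filesystem’s root directory is owned by root. A sketch, assuming a hypothetical mount point `/mnt/games`:

```shell
# A freshly created ext4 filesystem's top directory belongs to root.
# Hand the mounted tree over to your user (the path is an example):
sudo chown "$USER":"$USER" /mnt/games
# Verify the owner and the permission bits:
stat -c '%U %a' /mnt/games
```

For a single-user games drive this is perfectly reasonable; the alternative is group-based access with `chmod g+w` and a shared group, which only pays off with multiple users.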
For flexibility and reliability, ext4 is best in most cases.
XFS can grow but not shrink, whereas ext4 can be resized either way. XFS may have slightly higher performance under heavy loads and a bit more flexibility in inode counts, but it also has a history of zeroing files after crashes, though nothing near Btrfs’ data-loss issues.
XFS now has reflink support, so “offline” dedupe tools made for Btrfs work on XFS with recent kernels; no sign of that coming to ext4 yet. ZFS is good, but I wouldn’t recommend it for average home users.
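Reflinks can be tried directly with cp; with `--reflink=auto` it falls back to a regular copy on filesystems without reflink support (ext4, for now), so this sketch is safe to run anywhere:

```shell
# Create a test file, then copy it. On XFS (with reflinks enabled) or
# Btrfs, the clone shares data extents until one side is modified;
# elsewhere, --reflink=auto silently does an ordinary copy.
dd if=/dev/urandom of=big.img bs=1M count=4 status=none
cp --reflink=auto big.img big-clone.img
# Either way the two files are byte-identical:
cmp big.img big-clone.img && echo "identical"
```

Using `--reflink=always` instead makes cp fail outright when extents can’t be shared, which is a quick way to check whether a given filesystem supports reflinks.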
Having an interesting issue. I broke my .zshrc up into parts to isolate various parts in their own config files. All .zshrc does is source the individual config files. This is working great, with one exception. I have tmux automatically attach to a daemonized session via a tmux.zsh file that is sourced in .zshrc. If I source it manually, it works as expected, but when it’s sourced by .zshrc, it can’t find the tty.
open terminal failed: not a terminal
Not sure what I can do about this…
Looks like the problem is being caused by sourcing from within a while loop.
Solved the problem. I was piping ls to a while read ... loop to source the config files. read was holding stdin, which prevented tmux from running. Using a for loop now and all is good. Here is my .zshrc if anyone is curious.
# ZSHRC
# sort and source sh (generic) and zsh rc configurations
for RC in $( { ls -d "${XDG_CONFIG_HOME:-$HOME/.config}/sh/rc.d/"*
ls -d "${XDG_CONFIG_HOME:-$HOME/.config}/zsh/zshrc.d/"*
} 2>/dev/null |
sort |
tr '\n' ' '
); do
source "${RC}" ||
printf 'ZSHRC: An error occurred in %s\n' "${RC##*/}" >&2
done ||
{ printf 'ZSHRC: An error occurred in .zshrc\n' >&2
return 1
}
return 0
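The stdin capture is easy to reproduce: inside a piped `while read` loop, the loop body’s stdin *is* the pipe, so anything that reads stdin (a stray `read`, or tmux looking for its terminal) grabs the pipe instead of the tty. A minimal demonstration:

```shell
# Every command in the loop body inherits the pipe as stdin,
# so the inner read steals the next line meant for the outer loop:
printf 'one\ntwo\n' | while read -r line; do
    echo "outer: $line"
    read -r stolen
    echo "stolen: $stolen"
done
# prints:
#   outer: one
#   stolen: two
```

A for loop iterates over words without touching stdin at all, which is why the terminal stays available to whatever the loop runs (tmux, in this case).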
This isn’t exactly a problem, but this is almost the Linux Lounge and I found this incredibly interesting (undoubtedly more so than it actually is):
I changed one of my server’s ssh ports around a month ago and enjoyed almost zero automated brute-force attempts on the host since then… Then, they started back up.
It’s almost as if the attacker(s) re-scan hosts periodically (monthly?). That time frame between port scans is itself interesting, but so is the decision to even bother: you’d figure anyone with the wherewithal to change their listening port isn’t going to be susceptible to brute-force attacks, so why code up the re-scan (trivial as it is), or expend resources attacking such hosts when you could be targeting likely more susceptible servers?
I wonder if something like Shodan does hourly/every few hours a simple ping, and a rolling monthly deeper scan?
Like, ping the whole lot once and check the full name, splitting all the addresses over the month/between its servers?
I just upgraded from Ubuntu 18.04 LTS to 20.04.1 and my mounted smb shares aren’t showing up on the desktop now.
My smb shares auto-mount with fstab, and do so as normal. But on 18.04 they popped up on the desktop, since they mount under /media. They don’t now. The auto-mount is set up exactly as it was on 18.04.
I can’t seem to find a setting to change, so I’m a bit miffed.
Any ideas?
I encountered this as well when I started playing with fail2ban and ssh tarpits. Some sort of “try again” conditional, I’m guessing. With cloud architecture they probably assume a ton of people change IPs, so after a while there will be different security measures.
I suck at thinking like an attacker, but that would be my line of thinking if I were one.