Before making the switch to Linux, I have several questions. I’m not a complete noob, but am definitely still a novice. I’ve played around with a lot of distros in VMs, but have never daily driven on bare metal.
Firstly, I feel I should lay out how I have my Windows system set up.
OS on its own drive (Disk 0) with most user directory folders symlinked to a separate drive (Disk 3)
Programs that allow changing install location on a separate drive (Disk 1)
Games installed on a separate drive (Disk 2)
Personal storage on a separate drive (Disk 3)
What Linux folders do I need to mount separately at install to achieve this same type of setup? Also, is formatting the entire system as btrfs a good idea, or JUST the OS drive? I like all of the features of btrfs, but am unsure if it is a good idea for the whole system. Suggestions are welcome.
Secondly, I have an external direct-attached QNAP device for RAID 1 backup with two Seagate Exos 16TB drives. I currently use Macrium Reflect for daily, full-system automatic backups. I do Synthetic Full backups with one Full and 60 Incrementals (one per day), all while using Intelligent Sector Copy, image verification, and e-mail notifications on success or failure.
What open-source program (preferably with a GUI) can I do all of this with? I already plan on using Timeshift for OS snapshots, and as I understand it, that is ALL that Timeshift does.
I sincerely appreciate any and all help and insights.
I plan on learning more cli (noobish) in a VM after the system is fully operational, hence preferring a GUI.
First, I can’t find a reason for your setup, other than having a very small capacity disk, particularly for the OS. But that’s fine, you’ll have your reasoning and I’m sure it’s valid for your circumstances.
Anyway, I’d recommend delving into the Linux directory tree hierarchy, as it’ll answer your questions by itself once you understand how the tree works.
(TL;DR: you can, particularly for games and personal data. Read up if you want to know how)
On file system choice: BTRFS is not fully mature yet and it may have issues if used for boot partitions. Good alternatives, though not offering the feature set of BTRFS, are Ext4, JFS and XFS. I’ve standardized my systems on JFS until such time as BTRFS is production-ready (it isn’t yet). Having said that, BTRFS is stable enough for daily use on normal, non-boot partitions like /home, etc. Just make sure you have the support package with the appropriate BTRFS tools (btrfs-progs on most distros) installed before rebooting! (A plain fsck won’t check BTRFS, which is extremely problematic when the filesystem is checked at boot!)
The btrfs-assistant package provides a GUI tool. Major changes to disk configuration, like adding disks to a volume, are beyond the scope of the tool, but it’s great for managing snapshots and scheduled maintenance.
There is no reason to dedicate specific disks to a directory. With access to BTRFS, ZFS or even LVM, those old-school Windows-style drives are just more management, less performance and more wasted space. BTRFS is excellent at handling RAID 0 with different disk capacities, one of the really cool advantages of BTRFS. RAID 1 and 10 are still governed by the smallest drive, and RAID 5/6 are still experiencing problems. And if you need special directories for special purposes, just pump out those subvolumes.
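To make the “pump out subvolumes” point concrete, here is a rough sketch of creating and mounting subvolumes; the pool path, subvolume names, device and mount point are all placeholders, not anything from the original post:

```shell
# Assuming the BTRFS pool is mounted at /mnt/pool (placeholder path):
sudo btrfs subvolume create /mnt/pool/@games
sudo btrfs subvolume create /mnt/pool/@data

# Each subvolume can then be mounted on its own, with its own options:
sudo mount -o subvol=@games,compress=zstd /dev/nvme0n1p2 /home/user/Games

# List what exists on the pool:
sudo btrfs subvolume list /mnt/pool
```

Subvolumes give you separate mount points and snapshot boundaries without carving up physical disks.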
Since BTRFS is a copy-on-write filesystem, I don’t see the use of fsck.
Linux is not Windows. The partitioning scheme that makes sense for you there, makes no sense at all on Linux.
Modern Linux systems will do fine with one good-sized root (/) partition of maybe 30GB, a small swap partition of maybe 1GB, and the rest dedicated to /home where ALL your data will be stored. A UEFI/BIOS boot partition is likely necessary as well.
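That layout could look something like this in /etc/fstab; every UUID here is a placeholder, and the pass value of 0 for the BTRFS /home reflects that BTRFS isn’t checked by fsck at boot:

```shell
# /etc/fstab sketch (placeholder UUIDs -- substitute your own from `blkid`)
# <device>        <mount>    <type>  <options>       <dump> <pass>
UUID=AAAA-BBBB    /boot/efi  vfat    umask=0077      0      2
UUID=placeholder1 /          ext4    defaults        0      1
UUID=placeholder2 /home      btrfs   compress=zstd   0      0
UUID=placeholder3 none       swap    sw              0      0
```

The installer will normally generate this for you; it’s shown only so the mapping from the partition plan above to actual mounts is visible.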
For backups, stick with Borg unless you find a good reason that it won’t work for you.
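For a rough idea of how Borg maps onto the Macrium routine described above (daily runs, 60-day retention, verification), here is a command sketch; the repository path on the QNAP mount is hypothetical:

```shell
# One-time setup of an encrypted repository (path is a placeholder):
borg init --encryption=repokey /mnt/qnap/borg-repo

# Daily backup, run from cron or a systemd timer:
borg create --stats --compression zstd \
    /mnt/qnap/borg-repo::'{hostname}-{now:%Y-%m-%d}' /home /etc

# Roughly equivalent to keeping 60 daily incrementals:
borg prune --keep-daily=60 /mnt/qnap/borg-repo

# Consistency check, analogous to Macrium's image verification:
borg check /mnt/qnap/borg-repo
```

Note that Borg does file-level backups rather than full-disk images like Macrium; Vorta is a commonly used GUI front-end if you’d rather not drive it from the command line.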
Others have already given a few good answers, and while I agree it is a good idea separating /home and / and this should be sufficient for most use cases, other interesting candidates are:
/usr - All system programs are installed here. Useful if you, for instance, want to put all programs on a network drive. rcxb rightly pointed out that modern Linuxes are dependent on this directory. It is still possible to relocate most of /usr, but it is probably more trouble than it is worth.
/opt - This directory is supposed to contain self-contained third-party software, e.g. proprietary applications that ship their own directory tree. (Flatpaks actually install under /var/lib/flatpak by default, and AppImages can live anywhere.)
/boot - Having boot on a separate partition makes it much easier to swap out various distros, especially with UEFI shenanigans.
If you do need more storage, I recommend creating a permanent mount point in /mnt for bulk storage, and symlink that from home, e.g:
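The mount-plus-symlink pattern can be tried out like this; the /tmp paths are stand-ins so it can be run without root, but on a real system the target would be the permanent mount point under /mnt:

```shell
# Stand-in for a bulk drive mounted under /mnt:
mkdir -p /tmp/bulk/Videos
mkdir -p /tmp/fakehome

# Link the bulk folder into the (fake) home directory.
# -sfn so re-running the command is harmless:
ln -sfn /tmp/bulk/Videos /tmp/fakehome/Videos

# The symlink now behaves like a normal folder:
touch /tmp/fakehome/Videos/clip.mkv
ls /tmp/bulk/Videos   # shows clip.mkv
```

On the real system you’d mount the drive via /etc/fstab at, say, /mnt/storage and then `ln -s /mnt/storage/Videos ~/Videos`.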
I suppose I should clarify that all of the disks are not partitions, but actual physical disks. I have a 500GB NVMe for OS, another 500GB NVMe for programs (so the OS doesn’t f*ck them up if I have to reinstall) and a few misc things, a 6TB HDD for game installs and an 8TB HDD for misc storage. If one disk goes, I still have a computer that works while waiting for a replacement drive to arrive to restore data to.
Thank you. I appreciate you taking the time to spell it out. I have been reading up this morning and my brain feels a little fried. I’ll get there. I’m willing to learn.
I was trying something similar in a VM and the install kept failing on EndeavourOS, but I was trying to put /home, /opt, /usr, /var on one partition. It’s fine if it doesn’t work like that. My logic was keeping those extra directories on a separate disk to not lose settings or logs if something happened to root by my fault or disk failure, but still be able to access them with a live-cd. Make sense? I suppose the extra directories are unnecessary if I can have file level access in my backups.
That’ll be my approach too. Squeezes some more juice out of the rust (the spinning disks). I would also go so far as doing RAID 0 for the 2x NVMe as well. You can always do subvolumes if you need more mountpoints and/or different mount options. It’s not ZFS-dataset-level customization of properties, but it usually does the job.
Not really, unless you sometimes have >90% usage (page cache or similar doesn’t count). I always keep a swap because you need it for hibernate, and out of tradition really.
Nooooooooo! /usr and / are inseparable these days. Absolutely do not do this on a modern (systemd) Linux system unless you’re an expert and know how to deal with the huge can of worms you’re opening…
Though it’s soft-pedaling, here’s what RedHat had to say:
If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting.
It is either that, or a swap file on the root partition. With 64 GB RAM you need at least 100 GB of storage on your root (a hibernation swap file has to be roughly the size of your RAM, plus room for the OS itself). I guess you could make a 400 GB+ root partition, but in that case I’d just have multiple partitions on the root drive for experimenting with new Linux versions and so on.
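Setting up a swap file sized for hibernation on that root partition would look roughly like this; the 64G size matches the RAM figure above, and this sketch assumes an ext4 root (on BTRFS the file needs copy-on-write disabled first, e.g. via `chattr +C` on an empty file):

```shell
# Create and secure the swap file (size matches installed RAM):
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile

# Format and enable it:
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

For hibernation to actually resume, the kernel also needs `resume=` (and on a file, `resume_offset=`) parameters, which is distro-specific and beyond this sketch.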
… Why? The rest of the system can only go 100 mph, which is plenty fast regardless, and the drives go 150 mph without the RAID. Going 300 mph makes very little sense except for extremely niche circumstances. If you really need it, sure, but the only RAID I would do on NVMe is RAID 1 or RAID 5, for the redundancy. Everything else makes zero sense these days.
Right you are, I’ll edit that. Sorry, I learned Linux with the five inseparable directories (/etc, /bin, /sbin, /lib and /var). I guess that is now the three inseparable directories instead, since /bin, /sbin and /lib have since been symlinked into /usr.
I don’t have the money or extra space for a separate PC, but I had considered it and it’s not a bad idea. I have the Phanteks P500 and I plan on getting a SAS HBA 16i for storage expansion in the future. I already have fans that do both great airflow AND static pressure, but as long as I keep the drives mounted above the video card it should get plenty of air anyway.