I’m looking to move my instrument/plugin collection to a separate disk, and it’s on the order of hundreds of gigabytes.
I want it to be an SSD, but I don’t want Linux to keep writing to it, journaling, etc. It’s just static data, except when I install a new plugin, and there’s no need for the OS to write anything to it, ever, unless I copy files to that disk myself. And I don’t want to mount it as ‘ro’.
What filesystem/configuration should I use for that disk?
My concern is to have fast loading of the (mostly large) files, without the degradation of the SSD caused by the constant writing that the OS does all on its own.
Once the files are installed on the disk, there’s no reason for any writes to occur, unless I install new files on it.
If journaling is an issue, then the only FS left is ext2. JFS and XFS perform well on large and not-so-large files, but both are journaling filesystems. So is Btrfs, but as it’s a COW filesystem, the original data set is never overwritten in place. Still, it’ll probably write to the disk at some point, and apparently you’re quite adamant about preventing that. No idea why; modern SSDs are perfectly capable of handling journaling filesystems w/o ever running low on spare cell capacity.
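If you do go the ext2 route, here’s a rough sketch. I’m using a loopback image file so nothing real gets formatted; you’d point mkfs at your actual device once you’re sure of the target (mkfs is destructive!):

```shell
# Sketch: format as ext2 (no journal) and verify. The image file is a
# stand-in for the real disk.
truncate -s 64M ext2-demo.img
mkfs.ext2 -q -F -L samples ext2-demo.img
tune2fs -l ext2-demo.img | grep 'Filesystem features'
# The feature list will NOT contain "has_journal" -- that's the point.
```

The same `tune2fs -l` check works on a real device, so you can always confirm a disk really has no journal.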
I don’t know what you’ve experienced in the past to make you worry about this. I don’t see writes to my drives unless I write something.
One exception is “atime”, which is short for “access time”. That writes an updated timestamp every time a file is read. That’s why I always mount my drives with “noatime” as a mount option. There’s also “relatime” if you like that better; it only updates atime when the file has been modified since it was last read, or when the stored atime is more than a day old.
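You can watch this happen with a throwaway file. On an atime/relatime mount the second stat shows a newer timestamp; with noatime it stays the same:

```shell
# Reading a file bumps its atime on atime/relatime mounts;
# with noatime the "after" timestamp matches the "before" one.
touch atime-demo.txt
stat -c 'before: %x' atime-demo.txt
sleep 1
cat atime-demo.txt > /dev/null
stat -c 'after:  %x' atime-demo.txt
```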
Then there’s the SELinux relabel, which happens whenever your operating system updates its SELinux policy packages. That’s only relevant if you use SELinux; I think AppArmor just matches on file paths, so it isn’t a thing there.
Some file indexers write extended attributes to the files as they are processed. That helps them know if the file was already indexed and if so, what data was found in it.
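As a sketch of what those indexers do (the attribute name here is made up; this needs the setfattr/getfattr tools from the “attr” package and an xattr-capable filesystem):

```shell
# Mark a file the way an indexer might, then read the mark back.
# "user.indexed" is a hypothetical attribute name, not any real indexer's.
touch track01.wav
setfattr -n user.indexed -v "yes" track01.wav
getfattr -n user.indexed track01.wav
# getfattr -d track01.wav would dump all user.* attributes on the file
```

Each of those setfattr calls is a small metadata write to the disk, which is exactly the kind of background write you’re asking about.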
I happen to like using btrfs as my usual filesystem, and I’d recommend it to you as well. Since you’re mostly doing reads, its sometimes-slow write performance won’t bother you. And the CRC checksums that catch data errors are really great.
For the rest of the potential writes I mentioned, I don’t think they’re a big deal. I personally don’t care if Fedora wants to relabel every file now and then for SELinux updates.
But definitely try adding “noatime” to your /etc/fstab lines.
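For example (the UUID and mount point below are placeholders for your setup):

```
UUID=<uuid-of-your-sample-ssd>  /mnt/samples  ext4  defaults,noatime  0  2
```

Then “sudo mount -o remount /mnt/samples” picks up the new option without a reboot.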
Manufacturers program (fairly) arbitrary values into various SMART attributes, and the SMART tools out there just react to whatever value is programmed. I have an old Crucial SSD that exceeded its rated lifespan about 3x over and still worked fine. It was an OS disk (so lots of R/W action!) in a server. I’ve since replaced it, but IIRC it had a cumulative uptime of 65–70k hrs; the replacement SSD has already clocked 20,000+ hrs in a different setup.
I’m using an NVMe boot drive in my home NAS and it’s a funny thing: its statistics are mostly writes! Because a boot drive is small and servers have a ton of RAM, it mostly gets cached on boot, so the disk is essentially read one time at boot.
But then every time there’s a package update it gets written again. And of course the various log files which are mostly write-only.
Yep, that drive is 30 TB read and 35 TB written.
The reads are even inflated because I run a “btrfs scrub” every week on all of the drives. Without that, the read total would be even lower.
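For reference, the weekly job boils down to something like this (the mount point is an example from my setup; it needs root and btrfs-progs):

```shell
sudo btrfs scrub start /mnt/pool   # read and verify every checksummed block
sudo btrfs scrub status /mnt/pool  # progress plus any error counts found
```

A scrub reads every allocated block, which is why it shows up so clearly in the read counters.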
Oh yeah, here’s from a workstation with a 500 GB NVMe boot drive and using the XFS filesystem, so no weekly scrubbing:
Data Units Read: 1,003,677 [513 GB]
Data Units Written: 5,779,452 [2.95 TB]
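Those counters come straight from smartmontools; you can pull them on any NVMe drive like so (the device name is an example, and it needs root):

```shell
sudo smartctl -a /dev/nvme0 | grep -E 'Data Units (Read|Written)'
```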
Nothing is wrong with LVM. The question as I understood it was how to eke out every last bit of performance for large-file reads and minimize writes. The starting point for that is XFS with no additional layers, and then you’d tweak from there (turn off atime, etc.).
If you want volume management, snapshots, etc, then LVM is what you’d typically use for most Linux distributions, esp for boot/os drive.
There, 4TB SSD installed, with just plain old Ext4.
It just struck me, though - I forgot to partition it before formatting, but it still worked - the disk is now one big partitionless /dev/sdb.
This means it does not show any partition flags in gparted. It seems to be working ok, though - is this a problem, except for the fact that it’s now outside of LVM?
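A filesystem directly on the whole disk (sometimes called a “superfloppy” layout) is perfectly legal; there’s just no partition table for gparted to show flags from. You can confirm what’s actually on it with (device name taken from your post):

```shell
lsblk -f /dev/sdb    # shows ext4 on sdb itself, with no sdb1 child
sudo blkid /dev/sdb  # reports the filesystem signature on the whole disk
```

The practical caveats are that some tools (and other operating systems) expect a partition table, and you can’t add a second filesystem to the disk later without reformatting.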