I am currently consolidating a lot of my data from multiple older drives and filesystems onto a newer, single, larger drive. Some of this data is on an HFS+ volume. These days I’m dual-booting Linux and Windows, so I can only read this data from Linux. Ideally this data would be accessible from either OS, so I was hoping to use exFAT. However, the source filesystem (HFS+) allows special characters in file names (like colons) that exFAT does not. I’m also seeing problems copying symlinks to exFAT.
I’ve been trying to do this with rsync, and I had kind of resigned myself to losing the symlinks. However, it’d be nice to keep the files with colons in their names. I found the tool rdiff-backup, but it doesn’t like having exFAT as a destination.
Is there a way to change the file names to something supported during the copy, so it all goes over to exFAT reliably? Should I just give up on exFAT and use ext4?
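To illustrate what I mean by changing the names during the copy: something like this Python sketch is what I have in mind (the mount points and the reserved-character set here are assumptions of mine, not anything I’ve tested):

```python
#!/usr/bin/env python3
import os
import shutil

SRC = "/mnt/hfsplus"  # hypothetical mount point of the old HFS+ volume
DST = "/mnt/exfat"    # hypothetical mount point of the new exFAT drive

# Characters exFAT won't accept in a name; each gets mapped to "_".
# (This set is my assumption based on Microsoft's documentation.)
TABLE = str.maketrans({c: "_" for c in '<>:"\\|?*'})

def sanitize(name: str) -> str:
    return name.translate(TABLE)

for root, dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    parts = [] if rel == "." else [sanitize(p) for p in rel.split(os.sep)]
    out_dir = os.path.join(DST, *parts)
    os.makedirs(out_dir, exist_ok=True)
    for name in files:
        src_path = os.path.join(root, name)
        if os.path.islink(src_path):
            # exFAT can't store symlinks, so log them instead of failing.
            print("skipping symlink:", src_path)
            continue
        shutil.copy2(src_path, os.path.join(out_dir, sanitize(name)))
```

One thing I’d have to watch for is collisions: two source names that differ only in a bad character end up identical after sanitizing.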
Disclaimer: the last time I used Windows for something other than playing a game in a VM was, I think, in 2013.
When I need a drive to be mountable under both Windows and Linux, I use NTFS most of the time.
I don’t know if Windows can mount ext4 (my guess is that there are third-party drivers for this filesystem), but that’d be my second choice.
You can also go overkill and use ZFS, but AFAIK Windows doesn’t have any tools that support it. If you go with ZFS, you could probably install zfsonlinux in WSL and then attach the drive to the WSL VM, I guess(?) (I’ve never used WSL.)
Yeah. I’m hesitant to use NTFS because I know it’s proprietary and could give me issues in the future. I don’t want the same problem I have now with HFS+ cropping up again down the line. I suppose NTFS is tried and tested, so it’s probably decently reliable.
If I went the ZFS route, could I introduce more drives later to get RAID 3 or RAID 5 without having to rebuild the entire thing from scratch? In other words, going from a single drive to an array “dynamically”.
NTFS has a fairly decent FUSE driver (ntfs-3g), and I haven’t noticed any problems copying files between virtual drives, physical drives, and USB sticks.
On ZFS - no, you can’t add more drives to an existing vdev. However, you can replace all of them one by one with larger ones, and once the last one is swapped the pool should grow to match the smallest of the new drives, I think (I’ve never done this; you might want to read some ZFS admin tutorials).
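For what it’s worth, the one-by-one replacement boils down to a zpool replace per disk plus the autoexpand property. A rough sketch (pool name and device pairs are made up):

```python
import subprocess

POOL = "tank"  # hypothetical pool name
SWAPS = [      # hypothetical old -> new device pairs
    ("/dev/sda", "/dev/sdd"),
    ("/dev/sdb", "/dev/sde"),
]

# Let the pool grow automatically once all members are larger.
subprocess.run(["zpool", "set", "autoexpand=on", POOL], check=True)

for old, new in SWAPS:
    subprocess.run(["zpool", "replace", POOL, old, new], check=True)
    # Wait for the resilver to finish before touching the next disk
    # (zpool wait needs a reasonably recent OpenZFS).
    subprocess.run(["zpool", "wait", "-t", "resilver", POOL], check=True)
```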
On Btrfs - yes, you can even migrate between RAID levels “on the fly” (see the sketch below).
However, please keep in mind that Btrfs has a “write hole” bug with RAID 5 and 6 (I read somewhere, a long time ago, that it occurs very rarely and you just have to be unlucky). The same “bug” occurs with any RAID controller and with Linux mdadm, where I think it can be somewhat mitigated with a dedicated journal device.
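The “on the fly” part is literally a device add followed by a balance with convert filters; roughly like this (the mount point, device name, and target profile are just examples):

```python
import subprocess

MNT = "/mnt/big"  # hypothetical mount point of the Btrfs filesystem

# Add the new disk to the mounted filesystem...
subprocess.run(["btrfs", "device", "add", "/dev/sdX", MNT], check=True)

# ...then rewrite existing data and metadata into the new profiles.
# raid1 is just an example target; swap in whatever profile you want.
subprocess.run(["btrfs", "balance", "start",
                "-dconvert=raid1", "-mconvert=raid1", MNT], check=True)
```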
Thanks for the suggestions! Here’s what I decided: I’m going to use ext4 for the time being because it’s reliable and simple. I really don’t need to get at it from Windows anyway; that’s a nice-to-have but not essential. If I ever want to upgrade to more drives, I can convert the ext4 filesystem in place to Btrfs and add them. Windows does have a Btrfs driver, and if all else fails there’s always WSL2.
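In case anyone lands here later: the ext4 → Btrfs step I’m counting on is btrfs-convert, which as I understand it works in place on an unmounted filesystem, roughly like so (the device name is a placeholder):

```python
import subprocess

DEV = "/dev/sdb1"  # hypothetical partition holding the ext4 filesystem

# The filesystem must be unmounted and clean before converting.
subprocess.run(["e2fsck", "-f", DEV], check=True)

# btrfs-convert keeps the original ext4 image in an ext2_saved
# subvolume, so the conversion can be rolled back until you delete it.
subprocess.run(["btrfs-convert", DEV], check=True)
```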
On a single disk, Btrfs gives you multiple copies of metadata and checksumming of both data and metadata. You also get built-in snapshots (read-only or writable), and you can send/receive snapshots and snapshot deltas.
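In practice the snapshot and send/receive workflow looks something like this (all the paths here are made up):

```python
import subprocess

SUBVOL = "/mnt/big/data"                # hypothetical subvolume
SNAP = "/mnt/big/.snaps/data-20240101"  # hypothetical snapshot path

# Read-only snapshot (-r); send/receive requires read-only snapshots.
subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, SNAP],
               check=True)

# Pipe the snapshot to another Btrfs filesystem; adding "-p <parent>"
# to btrfs send ships only the delta against an earlier snapshot.
send = subprocess.Popen(["btrfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(["btrfs", "receive", "/mnt/backup"], stdin=send.stdout,
               check=True)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("btrfs send failed")
```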
If you need to share it to Windows → Samba (either from another machine or from a VM).