mpv "Big Business (1988).mkv"
Failed to recognize file format.
Exiting... (Errors when loading file)
The second (Alice) does play, but it eventually starts vomiting errors, and about halfway through it just nopes out entirely.
I guess the first is indeed a sparse file; less just shows it as zeroes. The second one is filled in, but that’s expected given that it plays (partially, anyway).
rsync shows the full size:
rsync -hP "Alice in Wonderland (2010) BORKED/Alice in Wonderland (2010).mkv" "Big Business (1988) BORKED/Big Business (1988).mkv" ~/Videos
Alice in Wonderland (2010).mkv
7.99G 100% 107.33MB/s 0:01:11 (xfr#1, to-chk=1/2)
Big Business (1988).mkv
8.53G 100% 104.28MB/s 0:01:17 (xfr#2, to-chk=0/2)
ls -gohl ~/Videos | grep mkv
-rw-r--r--. 1 7.5G Mar 29 01:56 Alice in Wonderland (2010).mkv
-rw-r--r--. 1 8.0G Mar 29 01:57 Big Business (1988).mkv
Interestingly du now shows both as the full size too:
du -h ~/Videos/"Big Business (1988).mkv"
8.0G /home/tarulia/Videos/Big Business (1988).mkv
du -h ~/Videos/"Alice in Wonderland (2010).mkv"
7.5G /home/tarulia/Videos/Alice in Wonderland (2010).mkv
But then again I didn’t use the --sparse option for rsync so it makes sense (same result with cp too).
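For reference, rsync’s flag for this is --sparse (-S), and GNU cp and util-linux fallocate can recreate the holes too. A quick sketch with a stand-in file of zeros (zeros.bin and the copy names are made up):

```shell
# A stand-in file full of zero blocks (fully allocated)
dd if=/dev/zero of=zeros.bin bs=1M count=50 status=none

# cp can turn runs of zeros back into holes on the destination
cp --sparse=always zeros.bin sparse-copy.bin
du -h zeros.bin sparse-copy.bin   # 50M allocated vs ~0 for the sparse copy

# Or punch the holes out of an already-made copy, in place
cp zeros.bin plain-copy.bin
fallocate --dig-holes plain-copy.bin
du -h plain-copy.bin
```

fallocate --dig-holes (-d) is handy exactly when the copy already happened: it scans the file for zero runs and deallocates them without changing the apparent size.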
I don’t really get the point of this then, though… I can create a 1TB sparse file on a drive that only has 100 gigs of space left. I mean, that’s cool, but why?
If I were to start writing actual data to that file I’d run into issues anyway, because the space obviously isn’t being “reserved” for later use… so I don’t really get why this is a thing. Why have a different apparent size when it’s inaccurate either way?
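That 1TB-file-on-a-small-drive experiment is easy to reproduce, by the way (GNU coreutils; sparse-demo.img is a made-up name):

```shell
# Create a 1 TB sparse file: instant, and no blocks are allocated
truncate -s 1T sparse-demo.img

ls -lh sparse-demo.img                  # apparent size: 1.0T
du -h sparse-demo.img                   # allocated size: 0
du -h --apparent-size sparse-demo.img   # 1.0T again

rm sparse-demo.img
```

Only the metadata says “this file is 1 TB”; until something writes real data, no blocks exist, which is why du and ls disagree.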
You run Clonezilla or Partclone on a 10TB disk that only has 50GB of data on the file system. (Partclone skips over the empty-space portion of the drive and creates a sparse file.)
You want to get some files off the disk image…
Extracting that 10TB disk to an image file will only use about 50GB of space, and only take a couple of minutes to write out as a sparse file. As a non-sparse file it would need 10TB of free disk space (if you even have it) and take maybe 6 hours to write…
Other examples include downloading torrents, or using ddrescue to make a disc image… You don’t need to use up all the disk space and wait for it all to be allocated up front before you can even start writing anything.
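The imaging tools have explicit knobs for this: ddrescue has -S/--sparse, and even plain GNU dd has conv=sparse, which seeks over all-zero output blocks instead of writing them. A small sketch with made-up file names:

```shell
# A 100 MB "disk" that is mostly empty (zeros), with a bit of real data at the end
dd if=/dev/zero of=disk.img bs=1M count=99 status=none
printf 'real data\n' >> disk.img

# Image it without writing out the all-zero blocks
dd if=disk.img of=image.img bs=1M conv=sparse status=none

du -h disk.img image.img   # ~99M allocated vs ~0 for the sparse image
```

Same apparent size on both, but the image only occupies blocks where there was actual data.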
OK, the use case of cloning or creating a disc image I get, but what’s the advantage for a torrent (or any download, for that matter)? I don’t really get it.
Say I have 100 gigs left on my drive and I download a 200 gig torrent. Now normally the client would check available space, but let’s say it doesn’t. It creates the 200 gig sparse file, and then what…? As soon as the download crosses the 100 gig threshold we’ll get I/O errors, so what’s the point then?
It could just not create any file and write the data as it comes in, which would have the same effect, no?
The zeroed sparse file isn’t actually allocating/reserving the disk space, so what’s the advantage here?
A) Your system doesn’t have to freeze up and wait (several seconds) for the whole 200GB of disk space to be allocated before it can even start downloading.
B) If you’ve got 20 torrents running, but most of them only downloaded 10MB before the seeders each dropped off, why would you waste disk space on all of them?
C) You can check the difference between du and ls to see how much progress has been made.
Right, but it doesn’t have to allocate it beforehand anyway, is what I mean. When I download a file, whether via torrent or in a browser, the file can start at 0 and just grow as data comes in; it doesn’t need to allocate the whole 200 gigs right away, right?
I’m not sure about this, and it may depend on the file system too. But when the header says “the next 200GB belong to me” and then the file ends after 10MB, that could cause problems.
It’s exactly how downloads (or even file copies in progress) work though.
Download a file in Chrome and check the download directory: the file size shown climbs as the download progresses. Although I haven’t checked what du shows in that case.
Well, the difference between downloading a file in the browser and downloading a torrent is that file downloads in the browser work in a linear fashion, while torrent downloads are chunked, and the chunks may come down the line in random order. Hence the need to preallocate the complete file size, sparse or not, and fill it in using memory-mapped file access.
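That out-of-order part is easy to see with dd: writing at an offset past the current end of a file just leaves a hole behind it, so a “chunk” can land anywhere before the earlier ones exist. A sketch with made-up names and sizes:

```shell
# Pretend this is a 200 MB torrent payload and chunk ~150 arrived first
truncate -s 200M torrent-demo.bin
dd if=/dev/urandom of=torrent-demo.bin bs=1M seek=150 count=1 conv=notrunc status=none

ls -lh torrent-demo.bin   # apparent size: 200M
du -h torrent-demo.bin    # allocated: ~1M, just the one chunk that landed
```

Each arriving chunk fills in its own hole; the file’s apparent size never changes while it downloads.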
That makes sense, yeah… although, as noted, the sparse file isn’t allocating anything, no? Since it doesn’t actually occupy the space. Unless that’s some filesystem-internal wizardry for optimisation or something.
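For what it’s worth, a hole really does reserve nothing; a client that wants a genuine reservation (so a too-full disk fails immediately instead of mid-download) would use fallocate()/posix_fallocate() rather than a sparse truncate — torrent clients often expose that as a “preallocate disk space” setting. The difference, with made-up names (assumes a filesystem that supports fallocate, e.g. ext4/XFS):

```shell
truncate -s 100M sparse.bin     # apparent 100M, 0 blocks: nothing reserved
fallocate -l 100M prealloc.bin  # 100M of blocks actually reserved on disk

du -h sparse.bin prealloc.bin   # ~0 vs 100M
df -h .                         # free space drops only for prealloc.bin
```

So the “wizardry” is just two different syscalls: one only sets the file length, the other reserves real blocks.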
I’m using log_fmt=sub because it makes it easy to check low-scoring frames (because they get loaded as a subtitle in mpv).
Anyway, when I use this without the redirection I get what I want:
Isn’t \r just a carriage return? Why is it translating spaces too? Or am I having a misconception of what a carriage return is or does? I thought it goes back to the start of the line, which clearly spaces don’t, so… confused.
col -b isn’t very smart; it works in the simpler cases, and it’s convenient because it happens to be included in the base of every Unix system. One of the other tools (which you’ll have to find and install) is sure to do better.
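A carriage return does only what you think: it moves the cursor back to column 0 without erasing anything, and overwriting happens only because the next characters are printed on top of the old ones. A quick demo, plus two cleanup options (col ships with util-linux/bsdmainutils; log.txt is a made-up name):

```shell
# The terminal shows only the final state of the line:
printf '10%% done \r100%% done\n'        # displays as: 100% done

# In a file, both versions are still there, separated by a literal CR:
printf '10%% done \r100%% done\n' > log.txt
cat -v log.txt                           # 10% done ^M100% done

col -b  < log.txt   # replays the overwrites, keeps the last char per column
tr -d '\r' < log.txt   # or just delete the CRs outright
```

Note the difference: col -b simulates the terminal (later characters replace earlier ones in the same columns), while tr -d '\r' keeps every intermediate redraw glued together on one line.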
A while ago, before ZFS supported kernel 6.13, I ran sudo grubby --set-default /boot/vmlinuz-6.12.15-200.fc41.x86_64 to keep my ZFS pool available, and now 6.12 is the only kernel that boots; anything else listed in GRUB results in a kernel panic.
Ohhh, I think I got you wrong. I thought you had installed Fedora on ZFS, but you only wanted to make your ZFS pool available on Fedora. I meant to ask why you went through the trouble of installing Fedora on ZFS when it comes natively with Btrfs, a filesystem with largely the same features, but I misunderstood you.