To make fast large writes to a ZFS server, do I use a SLOG with funky settings, or mount an SSD via NFS and just copy the data from the SSD to the ZFS pool once it's on the server?
Background:
I have set up my first dedicated Jellyfin/storage server with a ZFS storage pool (2x14TB HDD, mirrored) shared to a client over 10-gig fiber, and I want to increase write speed to the pool. This is all on a bit of a shoestring budget with mostly stuff I already have, so sane ideas like "add more vdevs to get more write IOPS" or "just use all SSDs" aren't going to fly (for the time being, at least). I do have a decent 1TB NVMe drive lying around, and a friend who might give me a deal on a 960GB Optane 905P.
What I want to do:
Ideally I'd like to add either the NVMe or the cheap Optane drive to my server, so that when I go to send 200-500+ GB to the server, the data gets quickly written to the SSD cache drive and the server then migrates it over to the ZFS pool at its leisure. To my inexperienced pea brain, that sounds like a SLOG device, if not for the fact that it flushes every five seconds. I also thought of just mounting the SSD (or Optane) directly to the client via NFS, doing my transfer from the client to the server, and, once the data is there, moving it off the SSD and onto the ZFS pool.
My only concerns are speed and freeing up the client device as quickly as possible/making the best use of the fast LAN. The data is usually going to be Blu-ray rips, and I'll likely keep a copy on my local machine's storage, so potential data loss/redundancy on the server is not a big concern. (That said, I would prefer to keep the 14TB drives mirrored.)
Anyone have any thoughts or third options or winning lottery numbers to help improve my write speeds?
"I would prefer to keep the 14TB drives mirrored." Good idea with the hardware available to you.
"A friend that might give me a deal on a 960GB Optane 905P" - not sure where you're at, but in the US you can get one for ~$200 right now at Newegg. Keep that in mind.
Writes to an NFS share (you mentioned NFS) are almost always synchronous, meaning ZFS will force each write to disk as it receives the data, bypassing all the fancy optimizations it is capable of in asynchronous mode. Also, the writes happen at vdev speed, which in a two-drive HDD mirror is only as fast as a single drive can handle. In the best case that is the maximum sequential bandwidth of the HDD (~250 MB/s); the worst case happens when writing small record sizes or writing in parallel (<1 MB/s for HDDs). This is the root cause of the slow speeds you're encountering.
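You can see how a dataset is handling those sync requests with the ZFS `sync` property (the pool/dataset name below is a placeholder):

```shell
# Check the sync policy on the dataset behind the NFS export.
# "tank/media" is a placeholder -- substitute your own pool/dataset.
zfs get sync tank/media

# With the default "standard", ZFS honors the client's sync requests,
# and NFS issues them constantly, so every write waits on the HDD mirror
# before the client gets an acknowledgment.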
Generally, ZFS write speed is limited by the maximum write speed of the pool's vdevs, so in your current config you'll never go beyond ~250 MB/s.
With ZFS tweaks you can mostly avoid the lower speeds (SSD-based special vdevs that handle small record sizes much faster than HDDs, SLOG devices that absorb synchronous writes quickly), but these features are really designed for zpools with (many) more devices than you have at your disposal. Cool tech, but I would understand if you say it's not for you.
The idea of using an existing SSD as a staging area for network writes is not a bad one.
Exporting the SSD as an NFS share allows relatively quick writes from your home network (it needs to be an NVMe SSD to have any chance of saturating a 10Gb link).
A cron-based script then moves any incoming files onto the ZFS pool (rsync?). ZFS would treat those writes as asynchronous, which is exactly what benefits your setup.
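A minimal sketch of that setup, assuming the staging SSD is mounted at /mnt/staging and the pool dataset at /tank/media (the paths and subnet are placeholders):

```shell
# /etc/exports on the server -- export the staging SSD, not the pool:
#   /mnt/staging  192.168.1.0/24(rw,async,no_subtree_check)
# then reload the export table with: exportfs -ra

# Cron entry (crontab -e on the server), e.g. every 15 minutes:
#   */15 * * * *  /usr/local/bin/offload.sh

# /usr/local/bin/offload.sh -- sweep landed files onto the pool.
#!/bin/sh
# flock -n skips this run if the previous sweep is still going;
# --remove-source-files frees the SSD after each verified copy.
flock -n /tmp/offload.lock \
    rsync -a --remove-source-files /mnt/staging/ /tank/media/
```

One caveat: a sweep can catch a file that is still uploading, so you may want to upload into a temp name and rename when done, or only move files untouched for a few minutes (e.g. with find -mmin).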
Thanks for the quick response. That is one incredible deal on Optane; I had no idea they had gotten that cheap. I'll be using my shiny new Optane drive as an NFS share to transfer over the network, and offloading from that onto the ZFS pool.
For anyone curious, I ended up exporting a folder on the Optane drive as an NFS share. That lets me get files off my client and onto the server as quickly as possible, as described above.
To offload from the Optane to my ZFS pool, I SSH into the server, prefix the mv command with nohup, and disown the PID. Nice and simple: I can close out of SSH, power off my client, and go to bed while the server's rust spins away.
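For reference, the detach step looks something like this (paths are just examples):

```shell
# On the server, over SSH: start the move, log its output, background it.
nohup mv /mnt/staging/* /tank/media/ > /tmp/offload.log 2>&1 &
disown   # detach the job so it survives the SSH session closing
```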