Increase ZIL Size - FreeNAS

I purchased a Samsung 970 Pro NVMe drive and the performance is not fantastic using it as a ZIL (SLOG device). Is there a way to increase the amount of writes that happen (write to the drive for longer)?

ZIL is pretty specialized… ZFS itself doesn’t cache a lot of writes, so if you’re not seeing a lot of hits on your ZIL device, it’s probably because your disk writes aren’t cacheable. That’s typically why 16-32 GB Optane SSDs are recommended for the SLOG: they don’t need to be very big, but you want something that’s durable and low-latency.


Also, the SLOG only operates on synchronous writes, IIRC.
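One way to check whether the SLOG is actually being hit is to watch per-vdev I/O while a workload runs. A minimal sketch, assuming a pool named `tank` (substitute your own pool name):

```shell
# Show per-vdev statistics, refreshing every second.
# If the "log" section shows little or no write activity during your
# transfer, the workload is not issuing synchronous writes and the
# SLOG device is simply never used.
zpool iostat -v tank 1
```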


How much memory do you have? To literally answer the question about the ZIL: I think the tunable you’re looking for is vfs.zfs.dirty_data_max, but I’m not sure whether that’s still current in the latest OpenZFS. It defaults (or defaulted) to 10% of RAM, capped at 4 GB.
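On FreeBSD-based systems you can inspect and adjust that tunable with sysctl. A sketch, assuming the sysctl exists under this name on your OpenZFS version (on FreeNAS/TrueNAS it would normally be set through the Tunables UI rather than from the shell):

```shell
# Read the current dirty data limit (bytes).
sysctl vfs.zfs.dirty_data_max

# Raise it to 8 GiB for this boot; persists only until reboot
# unless also added to /etc/sysctl.conf or the Tunables UI.
sysctl vfs.zfs.dirty_data_max=8589934592
```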

The ZIL is not a cache, it’s a journal… the data that was just written would be cached in RAM (then maybe in L2ARC), and subsequent reads would come from there.

What particular performance issue are you hitting?

It currently has 192 GB of memory.

I’ll get numbers this weekend, but I was hoping to hit 500 MB/s writes or higher on regular large file transfers. I currently have an SSD pool exported over NFS to my ESXi hosts for VM storage. I have NFS sync off and would like to use the 970 Pro as the SLOG.
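For context, the sync behavior is controlled per dataset with the `sync` property. A sketch, assuming a hypothetical dataset `tank/vmstore` backing the NFS export:

```shell
# Show the current sync policy for the dataset.
zfs get sync tank/vmstore

# "standard" honors the client's sync requests (ESXi over NFS issues
# sync writes, so these would land on the SLOG); "disabled" acknowledges
# writes immediately and bypasses the ZIL entirely, which is fast but
# risks losing in-flight VM writes on power loss.
zfs set sync=standard tank/vmstore
```

With `sync=disabled`, a SLOG device sees no traffic at all, which may explain the underwhelming numbers from the 970 Pro.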

Large file transfers are sequential writes, those are not cached.

How many spindles and what’s the layout? 500MB/s shouldn’t be too hard to hit with conventional drives.

A 10-drive RAIDZ2 pool is what I tested on.

Server:
Dell R720
192GB Memory
2x E5-2620
LSI SAS9207-8E - Connected to two DS4246 NetApp shelves
Pool1 - 8x 1TB SSD Raidz2
Pool2 - 8x 2TB Raidz1 & 8x 3TB Raidz1
Pool3 - 8x 5TB Raidz2 & 8x 5TB Raidz2
Pool4 - 8x 10TB Raidz2
Pool5 - 10x 12TB Raidz2

If you wanna go for speed, mirrors might be a better way? Speed scales more with the number of vdevs in a pool than with the number of drives in a vdev.

Much less space-efficient, but a few spinners will saturate a gigabit link, and they’re cheaper than SSDs?

The ZIL stores a few seconds of writes, so the faster the pool, the larger the ZIL can usefully be. But even if it were larger, it would just stop accepting more data until the current transaction group was written out?

ZFS pools differ from conventional RAID in topology. Instead of putting eight or ten spindles in a single vdev, it is strongly recommended to use multiple vdevs to increase throughput. Instead of a single ten-disk RAIDZ2 vdev, you could have a pool made up of several smaller vdevs of three or four drives each in a RAIDZ configuration, with pool data spread across the vdevs. It’s not strictly striped like conventional RAID; there are more mechanics in play.

This link may be helpful https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
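The multi-vdev layout described above can be sketched as follows, with hypothetical device names (`da0`…`da7`; adjust for your shelves):

```shell
# Single wide vdev: all I/O funnels through one RAIDZ2 group.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Same eight disks as two RAIDZ vdevs: ZFS spreads writes across
# both groups, roughly doubling the pool's write throughput at the
# cost of some usable capacity.
zpool create tank \
    raidz da0 da1 da2 da3 \
    raidz da4 da5 da6 da7
```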


Furthermore, you want to avoid the single point of failure of your SLOG drive dying.

You need at least two of them in a mirror.

If the ZIL device goes, your pool can stop working altogether.
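Adding the log as a mirrored pair is a one-liner. A sketch, assuming a pool named `tank` and two hypothetical NVMe devices `nvd0` and `nvd1`:

```shell
# Attach a mirrored SLOG so that losing one device does not lose
# any committed-but-unflushed sync writes.
zpool add tank log mirror nvd0 nvd1

# Verify the new "logs" section appears in the pool layout.
zpool status tank
```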