NTFS-formatted zvol over iSCSI

I'm planning on making a new ZFS pool, and one of the things I want to do with it is export a zvol over iSCSI to a Windows 10 machine that will format it as NTFS and use it for game storage. What I'd like some help with is the configuration that gets the best performance. Specifically, should I stick with the default 4k block size for NTFS, and if so, I assume the block size of the zvol (volblocksize, in ZFS terms) should be set to match? The disks I'll be using are 4TB HGST NAS disks, which use 4K sectors, and iSCSI will be running over a 10Gb network with jumbo frames enabled.
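
For reference, I believe the property in question is volblocksize, which can only be set when the zvol is created. Something like this is what I have in mind (pool layout and names here are just placeholders, not a recommendation):

```
# ashift=12 tells ZFS the disks use 4K physical sectors.
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# Sparse 2T zvol whose block size matches the default NTFS 4k allocation unit.
zfs create -s -V 2T -o volblocksize=4K tank/games
```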

I'm not sure this solution will offer optimal performance, but I'm no ZFS expert.

If this is on Linux, I'd recommend using the md driver to provide RAID/striping/whatever and then give the raw md device to iSCSI to export. In other words, I'm not sure what advantages ZFS gives you for this use case. I think it'll just add overhead for no real benefit.
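
A rough sketch of that route, assuming four disks at /dev/sdb through /dev/sde and the Linux LIO target driven by targetcli (device names and the IQN are made up; ACL/portal setup omitted):

```
# Stripe the disks into one md device (RAID 0 shown; pick the level you want).
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Export the raw md device over iSCSI.
targetcli /backstores/block create name=games dev=/dev/md0
targetcli /iscsi create iqn.2025-01.org.example:games
targetcli /iscsi/iqn.2025-01.org.example:games/tpg1/luns create /backstores/block/games
```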

The most important thing for performance will be to ensure the NTFS partition is aligned on a 4k boundary on the drive, so that filesystem blocks (normally 4k for NTFS) line up with physical sectors. Otherwise, when you modify something in the middle of a filesystem block, that block might actually span two 4k sectors on the disk (because, for example, the partition starts 1k into a sector). The disk then has to do a read/modify/write cycle on two sectors instead of one, which is dire for performance.
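
On the Windows side, checking that is just a matter of confirming the partition's starting offset divides evenly by 4096 (modern Windows aligns partitions at 1 MiB by default, so it usually does). The drive letter below is an example:

```
# PowerShell: the Offset of each partition should be a multiple of 4096.
Get-Partition | Select-Object DiskNumber, PartitionNumber, Offset

# When formatting, the allocation unit size can also be set explicitly.
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 4096
```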

One would still get the benefit of snapshots. It's just that in order to make use of them, one would have to either roll back the entire volume, or clone it, set up a new iSCSI extent on the target, access the needed data from there, then destroy the extent and the clone. So, more legwork. But still: snapshots, and replication if it's wanted/needed.
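
In command form, that clone round-trip looks roughly like this (names are hypothetical, and the extent steps depend on your target software):

```
zfs snapshot tank/games@before-patch            # cheap point-in-time copy
zfs clone tank/games@before-patch tank/games-restore
# ...create an iSCSI extent backed by tank/games-restore, copy the data off...
zfs destroy tank/games-restore                  # remove the clone when done
zfs destroy tank/games@before-patch             # and the snapshot, if no longer wanted
```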

I've done a lot of performance testing recently. Here are some things I've discovered.

  • 4k random read/write is the most intensive operation regardless of your transport. For gigabit, expect 10-15MB/s of 4k random read/write. For untuned 10Gb, expect 40-50MB/s. I have not figured out how to tune 10Gb in any meaningful way that has led to better performance, but I've only had access to 10Gb for a relatively short period of time.

  • I have yet to see Jumbo Frames actually increase performance on gigabit or 10Gb. The limiting factor for 10Gb has thus far been my disks, even when I set all 4 of them (7200 RPM SATA II, fwiw) into a RAID 0 and performed an easy, large sequential write. It topped out at ~350MB/s with NFS, iSCSI, and SCP. I eventually tested locally, and sure 'nuff, they topped out at ~350MB/s there too. If SATA II is ultimately the limiting factor here, then with SATA III you should hit a wall at about 600MB/s.

  • I have been pretty obsessed with lining up 4k sectors on disk with a 4k file system over iSCSI with 4k block sizes in ZFS, trying to figure out, academically, whether it would be more efficient that way. I've recently had the hardware to actually test this, and was interested in doing so after the interview with Allan Jude, where he mentioned that a database writing a single 4k block of data to a ZFS dataset set to a 128k block size would take a pretty big performance hit. I tried creating datasets with different block sizes (see the sketch after this list for how I set that up), and what I found was that as the block size on the dataset went down, so did performance. This may have to do with the fact that the parent dataset always had the default 128k block size, but I don't know. I have not yet experimented with increasing NTFS's allocation unit size to see whether that has a similarly bad effect, no effect, or a positive effect.
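
For anyone who wants to reproduce that block-size sweep, the setup amounts to creating sibling datasets that differ only in recordsize and then benchmarking each one (pool and dataset names are placeholders):

```
# One dataset per record size under test; everything else inherits defaults.
for rs in 4K 16K 64K 128K; do
    zfs create -o recordsize=$rs tank/bench-$rs
done
```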

CrystalDiskMark is my go-to for fairly casual performance testing. I used to use sqlio, but it has since been replaced by diskspd.exe, which is apparently what CrystalDiskMark uses.
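
If you'd rather drive diskspd.exe directly than go through CrystalDiskMark, a typical 4k random read/write run looks something like this (the path and sizes are just examples):

```
# 60s of 4K random I/O, 50% writes, 4 threads, 32 outstanding I/Os per thread,
# with software and hardware caching disabled (-Sh) so the disks do the work.
diskspd.exe -b4K -d60 -r -w50 -t4 -o32 -Sh -c4G E:\testfile.dat
```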

Yeah, I guess testing it is really the best way to go. I have seen improvements using jumbo frames on another 10Gb link which shares several disks over iSCSI between two machines. I wasn't sure if there is some kind of ideal relationship between block size and frame size when using iSCSI, but it sounds like there isn't.
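
For what it's worth, when testing jumbo frames it's worth verifying the MTU actually works end to end first, since a mismatch anywhere on the path hurts more than it helps (interface name and address below are examples):

```
# Linux: raise the interface MTU to 9000.
ip link set dev eth0 mtu 9000

# Confirm 9000-byte frames survive the whole path without fragmenting:
# payload 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
ping -M do -s 8972 192.168.10.50
```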

I have read that it's recommended to set the ZFS recordsize for a dataset to whatever block size is expected to be read from and written to it: for virtual disks, the block size of the guest file system; for databases, the block size the database uses; and so on. I guess you just have to find a balance between the increased IOPS and the overhead of partial read/write operations.
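
Concretely, that tuning is recordsize on datasets (changeable at any time, though it only applies to newly written data) and volblocksize on zvols (fixed at creation). For example, with hypothetical names:

```
# Dataset holding a database that does 16K I/O:
zfs set recordsize=16K tank/db

# zvols take volblocksize instead, and only at creation time:
zfs create -V 1T -o volblocksize=8K tank/vm-disk
```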

When I get the disks and set it up I'll do some testing and post the results.