Guide: iSCSI Target/Server on Linux with ZFS for Windows initiator/clients

Today I set up an iSCSI target/server on my Debian Linux server/NAS to be used as a Steam drive for my Windows gaming PC. I found that it was much more confusing than it needed to be so I’m writing this up so others with a similar use case may have a better starting point than I did. The biggest hurdle was finding adequately detailed documentation for targetcli-fb, the iSCSI target package I’m using.

I only figured this out today and I’m not a professional. Please take my advice as such. I pieced a lot of this information together from other places but have not referenced all of it.

ZFS zvol Setup for iSCSI Storage

If you’re not using ZFS you can skip this and use the file-based iSCSI backend to store your iSCSI disk as one big file, or research applicable block-based backends for your filesystem. The file-based backend may be very slow, so review your options before using it.

I’m using a zvol as the backend for my iSCSI drive. A ZFS volume (zvol) is a dataset that represents a block device, rather than a dataset/filesystem like we normally use ZFS for. ZFS volumes are identified as devices in the ‘/dev/zvol/’ directory.

I have an existing ZFS zpool with sufficient free space for me to fill my iSCSI drive if I wanted to. I created the zvol like so:

zfs create -s -V 1tb -o volblocksize=4096 -o compression=lz4 h/iscsi-hafxb

Where:

*-V creates a zvol instead of a dataset, and requires a fixed size to be specified. 1tb sets the size to 1 TiB (1024 GiB).

*-s creates a “sparse” zvol. This is NOT recommended by the Oracle documentation. I assume that if your zpool fills up, data loss, corruption, or write failures could occur on your iSCSI drive. However, without the -s flag, whatever size you give the zvol will immediately be permanently deducted from your free space on the zpool. This could be a “do as I say but not as I do” situation if there’s important data on your zpool.

*-o volblocksize=4096 specifies a 4K block size. 4096 is the maximum block_size supported by targetcli when using the block backend, so it’s a sensible minimum here. However, setting this to larger values is advisable to reduce overhead. Feel free to create multiple zvols with different volblocksize values and experiment. I’ve found that 256k works well for a Steam drive on a pool of HDDs in mirrored vdevs.

*-o compression=lz4 specifies that the data will be compressed using LZ4 compression. There are a few different compression algorithms to pick from, but lz4 is what I use. The benefits of enabling compression on a Steam game drive will be mixed because many games may not be very compressible, but ZFS seems fairly clever about when it gives up on compressing an incompressible file.

*h is the name of my zpool

*iscsi-hafxb is the name of the zvol being created

If you don’t get an error message, you should now have a zvol at /dev/zvol/{zpoolname}/{zvolname} (e.g. /dev/zvol/h/iscsi-hafxb).
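A few quick sanity checks after creation may be worthwhile (paths assume the pool and zvol names from my example; substitute your own):

```shell
# List all zvols; the new one should appear
zfs list -t volume

# Confirm the properties we set at creation time
zfs get volblocksize,compression h/iscsi-hafxb

# The block device symlink should exist
ls -l /dev/zvol/h/iscsi-hafxb
```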

targetcli-fb Configuration

Backstore

This is the annoying part. targetcli-fb is the iSCSI target (i.e. server) package I picked. The man pages and help are very minimal and I figured out what I was supposed to do mostly from trial and error. To make things more complicated, targetcli-fb is a fork of targetcli and the commands aren’t all compatible, so you have to watch what guides you follow. On Debian, targetcli-fb doesn’t seem to ship with a systemd service file or equivalent to get it to start on boot, so we have to make one ourselves.

On Debian Linux, install targetcli-fb with a simple apt install targetcli-fb, then enter the interactive shell with targetcli. Some fundamental commands include:

*ls / works similarly to ls in *nix. The targetcli-fb shell is set up like a small filesystem, which ls / will show.
*cd {path} works similarly to other shells.
*saveconfig saves the current configuration to the default directory.
*exit exits the interactive shell. By default it may also run saveconfig on the way out.
*delete is the rm/delete/remove/destroy command.
*clearconfig true is the factory reset option that will delete everything you’ve set up in the targetcli-fb config. Use when something went wrong and it’s easier to start over.
*get {category} {setting} is how you query the current value of a setting. You can run just get if you don’t know the category to get a list of the applicable categories. You can run just get {category} to list all settings in that category.
*set {category} {setting}={value} is how you change a setting.

It’s important to note that when you’re specifying a subdirectory to run a command in, a trailing slash is needed. For example, in a subdirectory containing tpg1, running tpg1 {command} is wrong while tpg1/ {command} is right.
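Worth knowing: targetcli can also run a single command non-interactively from your normal shell, using the same paths as the interactive shell, which makes it easy to script. For example:

```shell
# Show the whole configuration tree, then save it, without entering the shell
targetcli ls /
targetcli saveconfig
```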

Create Backstore

zvol Block Backstore

Create a block-based backstore using your zvol as follows:
/backstores/block create name=iscsi-hafxb dev=/dev/zvol/h/iscsi-hafxb
Where:
*name is the name you want to give your backstore. It doesn’t have to match your zvol name, but mine does.
*dev is the full path to the block device (for a zvol, under /dev/zvol/).

fileio Backstore

If you don’t use ZFS then this should be a good catch-all approach. However, the performance may be very slow. This is the command:
/backstores/fileio create disk1 /disks/disk1.img 1T
Where:
*disk1 is the name of the backend being created.
*/disks/disk1.img is the full path to the file you want it to create. I don’t know what it does if this file already exists.
*1T specifies that the file will be 1024 GB in size. By default this should create a sparse file, so it doesn’t immediately eat up 1024 GB of space.

Backstore Tweaks

After creating your backstore, there are a couple settings to change, so run the following:
cd /backstores/block/iscsi-hafxb/ if you followed the zvol directions.
cd /backstores/fileio/disk1 if you followed the fileio directions.

set attribute block_size=4096
set attribute emulate_tpu=1
set attribute is_nonrot=1

The block size could arguably be higher for a Steam drive, but this backend doesn’t seem to support anything higher. This should match, or evenly divide, the volblocksize of your zvol, which is why 4096 is the sensible minimum volblocksize for the zvol.

emulate_tpu=1 enables the UNMAP command which is functionally similar to TRIM for SSDs. This is good to enable even for HDD zpools/filesystems because it also allows you to reclaim the space from deleted files in your iSCSI drive. This is especially important for zvols created with the -s flag (which is not recommended by Oracle, please see above).

is_nonrot specifies whether the pool is composed of HDDs or SSDs. Windows uses this to decide whether or not to make the iSCSI drive defragmentable. With ZFS, even if you are using HDDs, chances are you do NOT want this enabled.
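For reference, the same three tweaks can be applied non-interactively in one go (block backstore path shown; substitute the fileio path if that’s what you created). In my experience `set attribute` accepts multiple key=value pairs in one command:

```shell
targetcli /backstores/block/iscsi-hafxb set attribute block_size=4096 emulate_tpu=1 is_nonrot=1
targetcli saveconfig
```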

iSCSI Target Setup

Now we set up the actual iSCSI target that will serve access to the backstore to initiators (i.e. clients).

/iscsi create

ls /

Under iscsi you will see the name of the iSCSI target that was created. Mine was iqn.2003-01.org.linux-iscsi.dn2.x8664:sn.327f56103dbb. cd to it:

cd /iscsi/iqn.2003-01.org.linux-iscsi.dn2.x8664:sn.327f56103dbb

Now create the LUN that links the target with your backstore:

tpg1/luns/ create /backstores/block/iscsi-hafxb if you followed the zvol directions.
tpg1/luns/ create /backstores/fileio/disk1 if you followed the fileio directions.
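One note on portals: targetcli-fb normally auto-creates a portal listening on 0.0.0.0:3260 when the target is created. If ls doesn’t show a portal under tpg1, you can add one yourself (still from the target’s directory):

```shell
tpg1/portals/ create 0.0.0.0 3260
```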

CHAP Authentication

If you don't want any authentication you might be done. I would suggest you at least set a username and password. This post covers two approaches:
  1. A simple username and password. This is helpful when you don’t want to whitelist each OS/device.
  2. A whitelist of initiators, each with their own associated username and password. This is helpful when you want to add an extra (probably weak) factor of security, and don’t want to remember a username.

I prefer #1 for simplicity.

1. Simple Username & Password Authentication

Start in the path of your iSCSI target:
cd /iscsi/iqn.2003-01.org.linux-iscsi.dn2.x8664:sn.327f56103dbb

tpg1/ set attribute generate_node_acls=1

Using =1 makes targetcli-fb use the auth settings associated with tpg1, rather than tpg1/acls.

tpg1/ set attribute authentication=1
tpg1/ set auth userid=username
tpg1/ set auth password=passwordword

userid can be composed of 1 to 12 letters and numbers, and password can be composed of 12 to 16 letters and numbers [REF]. I’m not sure the userid limit really is 12, because with the ACL approach your userid is much longer than 12 characters. Windows’ initiator would freeze and fail to connect when the password was under 12 characters, though, so stay within 12 to 16.
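For reference, the whole of option #1 can also be scripted non-interactively. This is a sketch using the example IQN from above; substitute your own target name and credentials:

```shell
# Hypothetical one-shot version of the interactive steps above
TPG=/iscsi/iqn.2003-01.org.linux-iscsi.dn2.x8664:sn.327f56103dbb/tpg1
targetcli "$TPG" set attribute generate_node_acls=1 authentication=1
targetcli "$TPG" set auth userid=username password=passwordword
targetcli saveconfig
```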

2. ACL Initiator Whitelist & Password Authentication

By default Windows' initiator assumes the username matches the initiator name, so if you set the username this way in `targetcli-fb` then you only have to remember the password, not the username.

tpg1/ set attribute generate_node_acls=0

Using =0 makes targetcli-fb use the auth settings associated with tpg1/acls, rather than tpg1.

Get your initiator name and create the ACL using it. On Windows, you can find your iSCSI initiator name by running the “iSCSI Initiator” tool and looking in the Configuration tab. The one on my gaming rig is “iqn.1991-05.com.microsoft:hafxb-w10”.

tpg1/acls create iqn.1991-05.com.microsoft:hafxb-w10
cd tpg1/acls/iqn.1991-05.com.microsoft:hafxb-w10
set auth userid=iqn.1991-05.com.microsoft:hafxb-w10
set auth password=passwordword

password can be composed of 12 to 16 letters and numbers [REF].

Last Steps for iSCSI Target

cd /
saveconfig
exit

Now we have to get targetcli-fb to start on boot. I have systemd so I used that. As root:
nano /etc/systemd/system/target.service

Enter the following [REF]:

[Unit]
Description=Restore LIO kernel target configuration
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network.target local-fs.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/targetctl restore
ExecStop=/usr/bin/targetctl clear
SyslogIdentifier=target
[Install]
WantedBy=multi-user.target

Save and exit (CTRL+X then y)

chmod 644 /etc/systemd/system/target.service
systemctl enable target
systemctl restart target
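To confirm the configuration comes back properly, a few checks (the last line assumes the default port 3260):

```shell
systemctl status target    # service should be active (exited)
targetcli ls /             # tree should match what you saved
ss -tln | grep 3260        # target should be listening on TCP 3260
```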
Now the iSCSI target configuration should be done.

Windows iSCSI Initiator Setup and Unmounting

There’s nothing special that has to be done here, but just for completeness I’ll go over the basic steps:

  1. In the Windows start menu, search and open the “iSCSI Initiator”.
  2. In the “Discovery” tab, click “Discover Portal” and add the IP address of your Linux server. Port 3260 is the default. Click OK to close “Discover Target Portal”.
  3. In the “Targets” tab, you should see the iSCSI target name appear. Click on it and hit “Connect”, then “Advanced Settings”.
  4. In “Advanced Settings” check “Enable CHAP log on” and enter userid and password as “Name” and “Target Secret”, respectively. If you picked authentication option #2, the default name should match. Click OK to close “Advanced Settings” then click OK to close “Connect to Target” and connect.
  5. If the target shows “Connected”, then in the Windows start menu search and open “Disk Management”.
  6. You should be asked to initialize a new disk (the iSCSI target). Pick GPT unless you have a good reason not to. If you’re not able to initialize the disk, reopen “iSCSI Initiator”, go to the “Volumes and Devices” tab, and click “Auto Configure”.
  7. In “Disk Management”, create a new simple volume on (i.e. format) the initialized iSCSI drive. You can keep all the defaults, except the “Allocation unit size” should be a multiple of 4096 (i.e. 4K) to stay consistent with the iSCSI backstore and whatever volblocksize you selected. My Steam drive uses a 256k volblocksize, so I selected a 256k NTFS allocation unit size to match.
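The alignment rules from step 7 can be sanity-checked with a little shell arithmetic. This sketch uses my 256k example values; substitute your own:

```shell
#!/bin/sh
# All sizes in bytes; the values below match the 256k example.
block_size=4096       # targetcli backstore block_size
ntfs_aus=262144       # NTFS allocation unit size (256k)
volblocksize=262144   # zvol volblocksize (256k)

# aligned <small> <large>: succeeds if large is a clean multiple of small
aligned() {
  [ $(( $2 % $1 )) -eq 0 ]
}

aligned "$block_size" "$ntfs_aus"     && echo "NTFS AUS aligned to block_size"
aligned "$block_size" "$volblocksize" && echo "volblocksize aligned to block_size"
aligned "$ntfs_aus"   "$volblocksize" && echo "volblocksize aligned to NTFS AUS"
```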

I set up something similar too. But Initiator is a Linux system for me, so I wasn’t forced to run the share via iSCSI. Turned out my steam library works better/faster via NFS share. But for Windows, iSCSI is the only option left.

I was rather disappointed in the ZVol performance as it can’t really benefit from my special vdevs and I don’t want to run the library just on spinning rust alone.

I didn’t check on compression while using ZVols, but I get like 1.3x compression with zstd on a normal dataset with 3T of data.

Thanks for sharing this valuable resource I can reference in future excursions into ZVol/iSCSI


Thanks a lot for this guide, and for actually working this out.
This seems to be legitimately the only good guide (I could find) for how to set this up that bothered explaining anything, and not just rattling off a few commands.


I just wanted to say thank you for documenting the attributes
block_size, emulate_tpu, and is_nonrot!

The official targetcli documentation is basically non-existent. No mention in the man page, or anywhere else. And http://linux-iscsi.org is offline/broken. The docs are so bad, it made me furious!

If I hadn’t found posts like yours via hour long Google search I would have never known how to configure this. So thanks again! :heart:

I just wanted to add that in my humble opinion, using such a low volblocksize to match the sector size is not practical on raidz2-backed ZVOLs, as it can easily 3x the required disk space!

I am running a multiple of the exposed sector size (4k) as volblocksize (128k) and benefit from the improved compression and much less wasted space.

The write amplification wasn’t a problem so far for me (my use is read mostly).
I have an Optane logs special device, maybe that helps.
And I am using an NTFS cluster size of 128k initiator side.

I’ve modified the post to present a volblocksize and NTFS allocation unit size of 4096 as a minimum rather than as a recommended value. Thank you for the feedback, and I’m glad you found the guide helpful!


Can anybody verify that trim/unmap works properly?
I delete some files from iscsi. The backstore file or zvol never shrinks. I tried both fileio and zvol. Neither of them worked. I am able to run Optimize-Volume -DriveLetter <DiskLabel> -ReTrim -Verbose inside Windows, but it does nothing to shrink the file or zvol.

Update: after removing the lun and recreating it again, the trim is working now. The order of configuration matters.
This guide is excellent. Thank you.


I can also verify that it works for me with a Windows client if I am using emulate_tpu and is_nonrot in the backend on the targetcli iSCSI target. After an optimize run on the client, I am getting lower referenced, usedbydataset, written, logicalused and logicalreferenced values on the ZVOL. It’s not instant as it finishes in Windows, instead it takes a few seconds or minutes to affect the ZVOL.

In any case this is much, much better and safer than regularly manually overwriting free space with zeros on the client, so it gets picked up as sparse or squashed by compression, as I had done before.

Greatly appreciated this thread. And agreed with kwinz on the explanation for is_nonrot.

I am currently digging through all these block_size shenanigans, and what I should set where. I am running some Windows build servers on ZFS over iSCSI, and each build target touches a mixture of 10,000 cpp/c# files, 1,000 ~20MB images, and some 50 video files (just to give perspective on the workload type).

How do I determine what I set my block size to?
I have Micron 7450 drives in the ZFS pool, which according to their technical specs support 512- and 4096-byte sector sizes.

And I guess I have a few variables that I need to align correctly?

  1. zpool ashift=12 (I believe this one is correct)
  2. zfs block volblocksize=4096 ?
  3. ZFS over iSCSI storage setup blocksize (found in /etc/pve/storage.cfg), blocksize is default 64k, should this be 4096?
  4. Block device attribute block_size in targetcli is default 512, that sounds low? And apparently targetcli supports max 4096 here.
  5. Windows VM filesystem block size? 4096 as well? 64k?

Doing benchmarking for this is kinda massive as I have a 5-axis matrix to test, so I would love to get some hints on how I should at least try to set these.

In general, my understanding is:

  1. Yes, for 4k minimum sectors which should allow you to mix and match 512b and 4k logical sector size drives.
  2. Set to a multiple of #4. For databases and other heavy small I/O, you generally set this to equal the I/O or logical unit size of your database to minimize “wasted” data in each I/O. Larger than the default (8k?) should improve sequential throughput and reduce L2ARC overhead (L2ARC indexing eats some of your ARC), but there is probably a happy medium somewhere that depends on your use case. I wouldn’t go below 4k unless you’re absolutely sure it’s better than the default for your workload.
  3. I’m not familiar with this file or what it means. Based on the pattern for #4 and #5, my first guess would be to set it equal to #2.
  4. Set to the largest value that divides into #2 with no remainder. Generally, that means 4096 for targetcli.
  5. Set to #2.

So really, #2 is the only free variable. Your test matrix is now a vector :slight_smile:. I would consider testing 4k to 256k (inclusive), and sprinkle in however many intermediate values you want to test. You may not find much difference in performance vs the default size.
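If you do end up benchmarking #2, the test zvols are easy to stamp out in a loop. Sketch below, where the pool name “tank” and the 100G size are placeholders:

```shell
# Create one sparse test zvol per candidate volblocksize
for vb in 4k 16k 64k 128k 256k; do
  zfs create -s -V 100G -o volblocksize="$vb" -o compression=lz4 "tank/iscsi-test-$vb"
done
```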

If you have an L2ARC (obligatory reminder: don’t bother with L2ARC until you can’t install or free up any more RAM for ARC), higher values will result in fewer L2ARC blocks which reduces RAM usage. If your L2ARC is >~10x your ARC size, this is something worth considering.

Hey! I just tried to do this on Proxmox via fileio, and when I go to try to initialize the disk in Windows, it fails, complaining about an I/O error. I assumed using fileio won’t cause issues as far as zvol blocksize vs fileio blocksize, so where am I going wrong? I followed your entire guide and was able to get it connected to the initiator on Win11, but then in disk management it fails to initialize.

I just want to add that if you don’t use ZFS you can still use LVM2 or normal partitions to create block devices and still use the block driver instead of fileio.
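For example, a sketch with LVM2 (the volume group name “vg0” is hypothetical):

```shell
# Create a 1 TiB logical volume, then expose it via the block backstore
lvcreate -L 1T -n iscsi-disk vg0
targetcli /backstores/block create name=iscsi-disk dev=/dev/vg0/iscsi-disk
```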