Homelab iSCSI questions

I have run a TrueNAS VM for years and it’s been great. I’m looking to get a little more advanced, both for fun/learning and potentially to edit photos directly off the NAS.

I have a TrueNAS dataset shared via SMB, and it’s been great as an archive/network location I can hit while remote over WireGuard. I want to keep this SMB share so I can still access the data remotely or from other machines, while also adding a direct 10Gb fiber link from my PC to the NAS using ConnectX-2s. Ideally I would set up iSCSI mount points for my desktop, but I’m curious if there is a standard way folks “mirror” data from an iSCSI mount to a TrueNAS dataset shared via SMB. The reason I’m considering iSCSI is the improved block-level performance.

Any recommendations here?

My iSCSI is not faster than my SMB shares. You can install any program without issue on an iSCSI drive, but that’s about it.

I have a 16TB SATA SSD ZFS block iSCSI drive as a game library, and an SMB share on the same array.

SMB speeds are much more consistent.


iSCSI speeds will depend heavily on your network hardware. If the NICs in your NAS and PC don’t support iSCSI and RoCE acceleration, then speeds won’t be great.

iSCSI does like a lot of bandwidth, of course: the more you have, the greater the throughput can be. A 10-gigabit network can transfer files at around a gigabyte per second, and it is about the minimum you should have for implementing this sort of thing. If you are on gigabit, you will get no real benefit, since SMB can saturate gigabit just fine. But iSCSI is also very latency-sensitive, since it is block-level networking rather than file-level; block level is closer to how hard drives themselves access data. That is why it is important to have NICs meant for that traffic, such as Chelsio’s T5 series, which gets iSCSI traffic latency down to about 1.5 microseconds. The T540-LP-CR is their most commonly found NIC, but the T520-BT is the only T5 model with RJ45 ports. Without NIC acceleration, latency climbs up toward the milliseconds, which is just too much for a block-level protocol to be good at. SMB or other file-level protocols are going to be faster at that point.
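The bandwidth figure above is easy to sanity-check: divide the link rate in megabits by 8 to get megabytes per second of raw line rate, before protocol overhead:

```shell
# 10 Gbit/s link: 10000 Mbit/s / 8 bits per byte = raw line rate in MB/s
echo $(( 10000 / 8 ))   # 1250 MB/s, i.e. ~1 GB/s after overhead

# 1 Gbit/s for comparison -- a rate SMB already saturates easily
echo $(( 1000 / 8 ))    # 125 MB/s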

edit: Actually, the Mellanox/Nvidia ConnectX-5 series might be a better choice than the Chelsio T5. I know the Chelsio cards support iSCSI/TCP acceleration, and I assume they support iSCSI/RDMA acceleration as well, but I didn’t see that specifically mentioned. I do know the ConnectX-5 series specifically supports iSER acceleration, which is the name for iSCSI-over-RDMA traffic. The MCX512A-ADAT would be the model you want, as it is the PCIe 4.0 variant and uses standard SFP28 cages instead of moving up to QSFP28. The “ACAT” model is far more common and cheaper to find on eBay, though; it is the PCIe 3.0 variant.
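For what it’s worth, on a Linux initiator you can tell open-iscsi to use the iSER transport on a given interface with `iscsiadm`. A sketch, assuming the RDMA-capable NIC is already up; the interface name, portal address, and target IQN here are made-up examples:

```shell
# Create an iSCSI interface record and switch its transport to iSER
# (the name "iser0" and everything below are example values)
iscsiadm -m iface -I iser0 --op=new
iscsiadm -m iface -I iser0 --op=update \
    -n iface.transport_name -v iser

# Discover targets through that interface, then log in
iscsiadm -m discovery -t sendtargets -p 10.0.0.2 -I iser0
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:photos \
    -p 10.0.0.2 -I iser0 --login
```

Both ends have to speak iSER for this to work; whether the TrueNAS target side supports it depends on the version and NIC.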

edit again: oh, and you will also want to enable jumbo frames on your LAN to greatly increase your speeds.
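On a Linux client that usually looks something like the following (the interface name is an example; check `ip link` for yours). The 8972-byte ping payload is the 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header:

```shell
# Raise the MTU on the 10Gb interface (example name)
ip link set dev enp3s0 mtu 9000

# Verify end to end: -M do forbids fragmentation,
# -s 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 10.0.0.2
```

Every hop has to agree: the NAS interface and any switch in between need jumbo frames enabled too, or the oversized packets get dropped.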


This is good info! Thanks.

I already have ConnectX-2s, which I will be using. If they aren’t ideal for this, I suppose I can just stick with SMB.

ConnectX-2 may support the necessary protocols; I’m not really sure how far back support goes, or whether the drivers have those features enabled. I know the mlx5 Linux driver supports it all, and that goes back to ConnectX-4.
This page may interest you for checking what you can get enabled. I’m not sure how much of it TrueNAS supports, though given that it is purpose-built for network file storage and sharing, I would expect it to support everything necessary to accelerate such workloads.
https://docs.nvidia.com/networking/pages/viewpage.action?pageId=58764560
https://enterprise-support.nvidia.com/s/article/How-To-Enable-Verify-and-Troubleshoot-RDMA
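Along the lines of that second link, the usual sanity checks on the Linux side look roughly like this; the tools come from the rdma-core/iproute2 packages, and device names and addresses will differ per system:

```shell
# List RDMA-capable devices and their port state (rdma-core)
ibv_devinfo

# Newer iproute2 equivalent
rdma link show

# RDMA ping between two hosts (librdmacm-utils), example address:
#   on the server:  rping -s -a 10.0.0.2 -v
#   on the client:  rping -c -a 10.0.0.2 -v -C 5
```

If `ibv_devinfo` shows no devices or the port state isn’t ACTIVE, RDMA (and therefore iSER) isn’t going to work regardless of what the NIC nominally supports.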


This is the correct answer. Enigma is totally right.
