Has anyone here tried to create an iSCSI target in Proxmox?

I am in the middle of a massive NAS server consolidation project for my home lab where I am consolidating 4 or 5 NAS systems down to a single, monolithic system.

Since pretty much all of my storage is shared throughout the house anyway, my plan is to carve out something like a 10 TB iSCSI target that I can install Steam games on (and then enable compression and deduplication to reduce the total disk space used).

The host OS is going to be Proxmox 7.3-3.

I see that you can mount an iSCSI target that’s provided by another system, but I don’t see a direct way to CREATE an iSCSI target, hosted by the system itself, that you can “export” to the VMs.

Is there a way of doing that?

The only other way that I can think of doing something like that would be to create however many virtual disks (as qcow2 files) on the host storage itself, pass those along to a VM running TrueNAS (so it would be iSCSI-on-ZFS(VM)-on-ZFS(host)), and then let TrueNAS deal with presenting an iSCSI target for other systems to use.

One of the biggest downsides that I can see with setting it up this way is that the VM is an extra layer on top, rather than being able to interact directly with the block storage on the ZFS pool.

(If that’s a terrible way of setting that up, please let me know if there is a better way. Because that’s how I have my current TrueNAS server set up.)

So, I wasn’t sure if anybody has set up something like this before.

Thanks.

there is no built-in way to use Proxmox as an iSCSI target. it is Debian so something can be added, but…

reading through your other posts i must add that i fail to see why you are trying to force proxmox into an ‘advanced NAS role’ instead of using, say, TrueNAS Scale?

everything you state seems a lot more NAS-like, as opposed to needing a real VM host.


To be honest, I haven’t given TrueNAS scale a shot yet.

It might be worth looking into.

I think that this was because, when TrueNAS Scale was introduced, even TrueNAS Scale’s (original) download page said something to the effect of “not for production use”.

edit
This was the post that I originally read which alluded to TrueNAS SCALE being not (ready) for production use (yet). link

I might have to give that a shot, since presumably a lot of the lessons that I’ve learned with virtio-fs and GPU passthrough should transfer over to TrueNAS Scale.

I was originally looking at Proxmox mostly because of the VMs and its ability to run virtio-fs. If TrueNAS Scale can do all of that, plus all of the other stuff that I am trying to get the system to do, then it’s definitely worth taking a look at.

Thank you.

It’s both.

The idea with the VMs is that I can power down as many systems as I can and migrate them over onto the single system via virtualisation (which includes being able to game on the server via a VM with a GPU passed through).

And for the things that are more difficult to virtualise (e.g. 4K video playback/AV1 decode and then sending that video stream over GbE), that’s where the mini PC clients come into play (which is what is driving the demand for an SMB/CIFS and/or iSCSI target).

I haven’t researched how to set up an iSCSI target on a ZFS pool yet (and how to do that in Debian/Proxmox).

But there’s the interaction between the VMs and the NAS where the NAS is the centralised storage and the VMs push/pull data to/from said centralised storage.

But that’s a good point re: TrueNAS Scale.

I might have to try that out to see how well that may or may not work for this.

i build and run enterprise environments and have a full DC and VLANs at my house. i have used PCIe passthrough for nearly every possible device type at some point.

my point here is, i have a VM server that runs nearly everything in my house, it is based on Proxmox. and i have a fully separate gaming PC that only interacts with said environment over the network.


Soo…does that mean that you have separate servers for your SMB and/or iSCSI or is it all a part of your VM server?

yes! in fact I just finished creating an iSCSI target in proxmox last week; it contains a 1TB volume piped over a 10GbE connection to my desktop

you just follow the debian instructions. My proxmox install is on zfs so I created a 1TB dataset and used that as the target backing device.

it works completely fine

edit:

Linux pve 5.15.74-1-pve #1 SMP PVE 5.15.74-1 (Mon, 14 Nov 2022 20:17:15 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Jan 31 19:17:42 2023 from 10.0.0.55
root@pve:~# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2023-01.pve:tw
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 13
            Initiator: iqn.2016-04.com.open-iscsi:b76a35e41461 alias: tw-optane
            Connection: 0
                IP Address: 10.0.0.55
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 1099512 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /dev/rpool/data/tw_iscsi
            Backing store flags:
    Account information:
    ACL information:
        ALL
root@pve:~#

Stupid question – you wouldn’t happen to have a link to the Debian instructions that you followed, would you?

I found one earlier today, during my lunch break from tecmint.com, but it was for Debian 9 and was also using lvm rather than ZFS.

Thank you.

no, but allow me to get you started

  1. you need to install tgt
 apt update && apt install tgt
 systemctl enable --now tgtd
  2. create a ZFS volume (zvol) of your desired size
 zfs create -V 1T rpool/data/<dataset_name>

where 1T is your desired volume size. This will create a block device for you underneath /dev/rpool/data/<dataset_name> which you will use for your backing store.

  3. create a new iSCSI target with tgtadm
 tgtadm --lld iscsi --op new --mode target --tid 1 --targetname <iqn>

where iqn is a string in the format iqn.<year>-<month>.<hostname>:<volume name>. Mine is iqn.2023-01.pve:tw

  4. Now you need to create a LUN and attach it to your backing store
 tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/rpool/data/<dataset name>

where dataset name is the name of the dataset you created earlier

  5. Bind the tgt listener on all interfaces, and disable access control
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

it’s just me on my home network, so i don’t care about access control, I just want it to work

the final step is to save this shit into a file so it doesn’t get lost when you reboot

tgt-admin --dump |grep -v default-driver > /etc/tgt/conf.d/my-targets.conf

you can see how you’re going with the targets at any step by issuing:

tgtadm --lld iscsi --op show --mode target

and it should dump out something like the excerpt i pasted above
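to put it all together, here’s roughly what the whole run might look like with made-up values (a pool called export, a 10T volume called steamlib, and an example IQN), so adjust names and sizes to suit. since you mentioned compression and dedup in your first post, i’ve shown those as ZFS properties on the volume too:

 apt update && apt install tgt
 systemctl enable --now tgtd
 # 10T volume on the 'export' pool (pool/volume names are just examples; add -s for a sparse/thin volume)
 zfs create -V 10T export/steamlib
 zfs set compression=lz4 export/steamlib
 # dedup is optional and very RAM-hungry, so only turn it on if you really want it
 zfs set dedup=on export/steamlib
 # create the target, attach the zvol as LUN 1, then open it up to all initiators
 tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2023-02.pve:steamlib
 tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/export/steamlib
 tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
 # persist across reboots and double-check
 tgt-admin --dump | grep -v default-driver > /etc/tgt/conf.d/my-targets.conf
 tgtadm --lld iscsi --op show --mode target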

hope this helps mate

cheers


I was running TrueNAS Core virtualized with an HBA passed through. There is nothing wrong with this, but i recently wanted more drives so now i am running a separate proxmox host and a TrueNAS Core box.

Thank you!

I appreciate you taking your time to help this n00b out (me).

for the rpool/data/<dataset_name>, I am guessing that it doesn’t have to reside on the root pool, correct? (I am assuming that’s what rpool means?) Please educate me if I am wrong.

So for example, in my case, what I have (in Proxmox) is a ZFS pool that I manually created using the command:

zpool create -o ashift=12 export /dev/sda

So when I do zpool list, it shows the export pool.

From there, I created a folder called myfs, which is what I am using for all of the virtio-fs passthrough, bypassing the VM NIC <=> host interface so that I can get access to the files on /export/myfs directly (on VMs that support it).

If I want to create the steamlibrary cache, would I change the /dev/rpool/data/<dataset_name> to /export/myfs/steamlibrary?

(The /dev/rpool(/data) is throwing me for a loop a bit.)

I’m still learning.

My apologies for my dumb questions, given what I am trying to do.


nope, it can be anywhere, that’s just where i had mine cause it’s easy and convenient

correct, i just used the root pool that came with proxmox cos it’s installed on a 3 nvme array

edit: the /dev paths just correlate with the volumes created with zfs create -V

no, /export/myfs/steamlibrary would be a path on the mounted filesystem; you need to use the block device exposed underneath /dev


Yeah, the system that I just bought is a Supermicro 36-bay 4U server.

This is big enough to house all of the drives that I currently have, all in a single “box”, and it has the added fringe benefit that, instead of having to buy all this 10 GbE networking gear (NICs, switch(es), cables, etc.), the virtio NIC apparently presents itself as a 10 Gbps NIC.

So yay!!!

And then on top of that, the virtio-fs allows for immediate and direct access to the host system’s files, without the need for a NIC, which, in theory, should be even better than having to go through the virtio 10 Gbps NIC. (Yes, I know that I’m going to be using spinning rust, so chances are, it probably doesn’t matter THAT much anyways, but where direct access is possible, it should be better (or at least marginally faster). Or so goes the theory anyways.)
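(On the guest side, that direct access is literally just a one-line mount with no NIC in the picture; the tag name below is just an assumption based on what I call my share:)

 mount -t virtiofs myfs /mnt/myfs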

It just didn’t make sense to me to run all this networking stuff when I can “stuff my VMs all into the same box” and have them access the shared data directly.

My current TrueNAS server is a dual Xeon E5310, I think, which launched in November 2006.

I think that it’s time for an upgrade (moving to dual Xeon E5-2697A v4, or at least that’s the plan anyways). (The system that I just bought comes with I think 2x Xeon E5-2690 v4.)

So that should be a very nice performance boost.

Again, I am really grateful to you for taking the time to write out the instructions for me.

Not a lot of people are necessarily willing to do that, so thank you!

re: “to expose the underlying block device”
So…I would need to pass on /dev/sdX then?

This is where I am a little bit confused.

So let me tell you what I am planning on doing and you can educate me in terms of how I might actually go about executing your instructions:

So the system that I just bought is going to have 36 3.5" drive bays (It’s a Supermicro 6048 4U server, dual Xeon, X10 motherboard, 128 GB of RAM, etc.)

I’m planning on having four raidz2 vdevs: two of them with 6 TB drives (16x 6 TB drives total) and the other two with 10 TB drives (16x 10 TB drives total).

Total raw capacity should be 16x6 + 16x10 = 256 TB.

Total usable raidz2 capacity will be 192 TB (each 8-wide raidz2 vdev gives 6 drives’ worth of usable space, so 2x6x6 TB + 2x6x10 TB = 72 + 120 = 192 TB), if I did my math right.

That would all be in one giant zpool called export.

So, I would think that if I were to do a zpool status, it should show the /dev/sdX ID for each of those drives (or maybe the /dev/disk/by-id path or something like that), so that it would be more resilient if the drives get enumerated in a different order.

So, that is the plan.

At the end of it, I don’t think that there will be any free block devices left because they will all be members of the raidz2 vdevs, which are part of the export pool.

Hopefully that helps to explain what I am thinking of doing.

And if you think that this is a terrible way to go about it, I am open to being educated.

(In TrueNAS Core, that’s how I had the system set up, and via the iSCSI wizard that they have, I was able to carve out a 10 TB chunk from the export pool and use it as an iSCSI target. So that’s the background for where I am coming from with this approach.)

Thank you.

not exactly

when you create a volume with zfs create -V <size> <name>, you get a block device under /dev/<name>. the /dev/sdx devices are also block devices, but they refer to the physical disks attached to the computer, whether it be SATA or whatever

the same goes for /dev/nvme(x) devices

you’re creating a sub-volume on ZFS, which still creates a “block device”, but the block device refers to the space carved out of your big zfs volume, which is a layer chucked on top of the physical disks located at /dev/sd(x)

scenario:

given i have a zfs filesystem called export,
and i create a volume with zfs create -V <size> export/steam
then i will have a block device at /dev/export/steam which i can either mount or use as the backing store for an iscsi target
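so to make it concrete (sizes and names here are just examples, and a non-sparse 10T volume will want that much free space up front):

 zfs create -V 10T export/steam
 ls -l /dev/export/steam        # symlink to the zvol block device
 ls -l /dev/zvol/export/steam   # same device, under the zvol tree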

you are very welcome mate :+1:


zfs will assemble itself and you won’t have to worry
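for what it’s worth, if you want the pool config to be completely indifferent to enumeration order, you can build it against the stable by-id paths, or just re-import an existing pool by id later. disk names below are placeholders, and i’ve only written out the first raidz2 group:

 # repeat the raidz2 group for the other three vdevs
 zpool create -o ashift=12 export \
     raidz2 /dev/disk/by-id/<6tb-disk-1> /dev/disk/by-id/<6tb-disk-2> /dev/disk/by-id/<6tb-disk-3> /dev/disk/by-id/<6tb-disk-4> \
            /dev/disk/by-id/<6tb-disk-5> /dev/disk/by-id/<6tb-disk-6> /dev/disk/by-id/<6tb-disk-7> /dev/disk/by-id/<6tb-disk-8>
 # or, for a pool that already exists, re-import it pointing at the by-id directory:
 zpool export export && zpool import -d /dev/disk/by-id export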

Ahhh…okay.

Gotcha.

I’ve never done nor knew/realised this before.

Thank you!

Yeah, the “issue” that I found with Proxmox was that if you create the ZFS pool via the GUI, then you can only store certain types of content on it, whereas if you create the zpool via the command line and then add it as a directory, you can store more types of content on it.

Don’t know why, but that’s how it apparently works.

So I was worried that I was going to have to type out some big, complicated CLI command to get all of the drives added to the big storage pool (in my case, called export), and to set it up so that if I ever have to take the drives out of the server and put them back in out of order, ZFS wouldn’t freak out because the physical locations of the drives changed.

So…I just tried this and unfortunately, TrueNAS Scale doesn’t have virtio-fs access in the same way that Proxmox does.

(It was WAYYYY easier to pass through a GPU though.)

For virtio-fs, the TrueNAS forums say to file a JIRA issue for it, which, I am guessing, means that this feature/functionality might not make it into TrueNAS Scale any time soon, unfortunately.

That’s such a bummer, because if the VMs were controlled by a config file like they are in Proxmox, then this would have been a potential solution to what I was trying to accomplish. Such a shame really. :frowning:

nah, that’s a proxmox concept, disregard the UI

i created the volume in the console and it works fine and doesn’t interact with the UI in any way


Yeah, on the test system that I am currently using (where I am testing the software, GPU passthrough, etc.), as I mentioned, I created the ZFS pool outside of the GUI and then pointed the GUI at it so that I can tell it to store the VM disk images, etc.

I don’t have to, but enabling it via the GUI means less typing/copying-and-pasting of commands.
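(For reference, I believe the CLI way to add that directory as storage is something along these lines, with a made-up storage ID; the GUI just does the same thing for you:)

 pvesm add dir myfs-dir --path /export/myfs --content images,iso,backup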