So I’m just doing some reading here in my downtime at the office. I’ll skip to the question first, since this is probably going to be a wall of text: how exactly is an iSCSI target server configured under Windows Server 2016? Specifically, does it need a unique logical volume for each initiator it serves, or can the WinSvr target server role be configured to serve a disk image to each initiator, with all of the images residing on a single logical volume?
I’m away from my server at home, so I can’t just go poke around and find out at the moment.
Now the story, for anyone who cares. I’m trying to separate my gaming and design/development stuff into two separate systems, but with the storage centralized in one place (my SuperMicro server in the basement). I was originally thinking of just having my single massive workstation host two VMs that I could switch between, but this is sounding like a better solution. I’ve been looking at iSCSI and Fibre Channel (I thought that died years ago, but I guess I was wrong) just because they’re the only two block-level network protocols I’m aware of… in my head, I would have an HBA in each initiator/client, and each client would run entirely off an image hosted by the target server, which I could back up/restore as needed. I have no idea if that’s actually how any of this works, though.
I’ve never really kept up with this part of the server world, so I’m not even sure what else might be out there. Anyone have recommendations other than booting over a block-level network protocol? There are three PCs that would need to boot from this server: one running Win10, one Win8.1 Embedded, and one Arch Linux. Maybe a fourth running WinXP, but I can live without that one.
Booting a Linux machine off an iSCSI SAN is relatively easy and supported at the kernel level, so while it’s not trivial, it can definitely be done… Windows clients… that is asking for a lot of trouble; avoid it if at all possible. If you have a workstation with spare RAM/cores, I would run a hypervisor there, still serve iSCSI LUNs from the SAN to the hypervisor, and boot whatever you need using virtual disks…
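For the hypervisor route, attaching the LUNs is the easy part. As a rough sketch, if the hypervisor happened to be a Windows/Hyper-V box, the software initiator side is basically two cmdlets — the portal address and IQN below are made-up placeholders, not real values:

```powershell
# Rough sketch of the initiator side on a Windows/Hyper-V host --
# the portal address and IQN are placeholders, not real values.
Start-Service MSiSCSI                                   # software initiator service
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.10"  # point at the SAN
Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.example:san-lun0" `
    -IsPersistent $true                                 # reconnect after reboot
# The LUN then shows up as a local disk on the hypervisor; carve it up
# into virtual disks for the VMs like any other storage.
```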
No experience with using Windows as an iSCSI target, so I can’t comment on that…
I use iSCSI here at work for RHEL and MS Windows systems. I can probably walk you through it, but there are a lot of things that need to be set up on the MS Windows side.
Essentially, you need a logical volume, and on it you store a LUN (a backing disk image) for each machine you intend to boot. You then define an IQN/target that maps to each LUN. From there, in DHCP, you set up a reservation for each machine that will boot and give it a boot parameter that points to its IQN (LUN), set up the PXE boot payload (if it’s going to run diskless), and define its DNS servers. After that it’s a process of jiggling the handle until all the pieces work.
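If the target is a Windows box, a rough PowerShell sketch of that flow might look like the following — every name, path, address, and IQN here is a made-up placeholder, and the actual target IQN is whatever the target server generates (check with Get-IscsiServerTarget):

```powershell
# Sketch only -- assumes the iSCSI Target Server role is installed
# (Install-WindowsFeature FS-iSCSITarget-Server) and that E:\ is the
# single logical volume holding all of the boot images.

# One virtual disk (the LUN's backing file) per machine, all on one volume
New-IscsiVirtualDisk -Path "E:\iSCSI\gaming.vhdx" -SizeBytes 256GB
New-IscsiVirtualDisk -Path "E:\iSCSI\devbox.vhdx" -SizeBytes 256GB

# One target per machine, restricted to that machine's initiator IQN
New-IscsiServerTarget -TargetName "gaming" `
    -InitiatorIds "IQN:iqn.2000-01.org.example:gaming-pc"
Add-IscsiVirtualDiskTargetMapping -TargetName "gaming" -Path "E:\iSCSI\gaming.vhdx"

# DHCP side: reserve the machine's IP, then point option 17 (Root Path,
# RFC 4173 iSCSI format) at the target so the NIC's boot ROM can find it
Add-DhcpServerv4Reservation -ScopeId "10.0.0.0" -IPAddress "10.0.0.50" `
    -ClientId "AA-BB-CC-DD-EE-FF" -Name "gaming-pc"
Set-DhcpServerv4OptionValue -ReservedIP "10.0.0.50" -OptionId 17 `
    -Value "iscsi:10.0.0.10::3260:0:iqn.1991-05.com.microsoft:srv-gaming-target"
```

Same idea regardless of the target OS: one backing file per machine on a shared volume, one target/IQN per machine, and DHCP tells each client which target is theirs.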
If you’re doing it just to try it, then by all means — it’s something interesting to learn. If you just need a VM, set up a VM instead.
Fibre Channel is still alive, but it’s on life support due to the cost and vendor lock-in. It only makes sense in niche use cases now that general networking equipment is as fast or faster at doing what Fibre Channel specialized in, while also functioning as standard networking infrastructure.
With that said, iSCSI can be finicky/flaky at times. If you don’t have prosumer-grade equipment or better, I really would not trust anything critical to it if you value your data, in flight and at rest.
Yeah, that pretty much sums it up - I don’t really need to do this, but it’s just something new I’ve never tried before.
Okay, now that is EXACTLY what I’m looking for on the server side of things. That answers one question, although I don’t have a license for 2022… I’ll have to see whether the old iSCSI Target service in 2016 can serve a virtual volume the same way when I get back home.
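For my own notes, the quick sanity check I’m planning to run on the 2016 box (the role has shipped in-box since Server 2012, so I’d expect the same cmdlets to be there):

```powershell
# Untested on 2016 on my end -- just confirming the role and cmdlets exist
Get-WindowsFeature FS-iSCSITarget-Server   # role available/installed?
Get-Command -Module IscsiTarget            # should list New-IscsiVirtualDisk, etc.
```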
I think my next search, then, is for an inexpensive 10G+ iSCSI hardware HBA. I have a handful of Mellanox ConnectX-3’s, but the manual only describes how to do a PXE boot from them which then deploys iSCSI, not an actual iSCSI boot. I see something about flashing an iSCSI initiator EFI to the ConnectX-3, but I can’t find shit for documentation from Mellanox ever since NVIDIA bought them.
I might not even need to do anything to these ConnectX-3’s. Never had a reason to try this before, will confirm/deny once I get back and stick one in some PC.
Confirmed, the Mellanox ConnectX-3 (CX354A-QCBT) I tried supports both BIOS and UEFI booting from an iSCSI target.
Well, that just made life a lot easier. Never had a reason to try that before this point… I think these things are my favorite piece of hardware; they’ll do basically anything.
I’ll probably make a new thread when I get around to actually putting this project into service, for anyone else that’s interested, because I’m gonna need:
Faster storage for the server, to replace the local NVMe SSD
Another small 2U system with a handful of ConnectX-3’s to run the network, since 40G infrastructure is still out of my budget and I can just add more as I need ports.
Lotta fiber, MTP/MPO transceivers, wall plates and conduit to run it down into the basement
…none of which I have the budget for right now unless @wendell decides to be uncomfortably generous with the Devember prize pool lol