iSCSI HBAs - Do they work like this?

If I put an iSCSI HBA into my system, would I be able to boot into some config/BIOS screen, log in to the iSCSI target, and then have a hard drive appear within the guest OS? I don’t care about booting from it; I just want the networked drive, without the networking, to use with Qubes OS.

In other words, would it appear like normal storage to my Fedora/Qubes host OS?

Afaik there’s no such thing as an iSCSI HBA …
You have Fibre Channel HBAs, which need dedicated Fibre Channel-compatible switches and SANs, or you have iSCSI SANs that provide storage over a standard IP network, and you use ordinary network adapters to connect to them.
Some servers and some network adapters have iSCSI-enabled firmware that can access the iSCSI SAN during BIOS POST, but I don’t know whether they can map the remote LUNs and make them appear local to the BIOS, especially under UEFI …
Can you clarify what you have in mind?
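
For reference, on a plain Linux host the usual way to make a remote LUN show up as a local block device is the open-iscsi software initiator (the iscsiadm tool), which talks to the SAN over the normal network stack. Here’s a rough sketch driving that CLI from Python; the portal address and target IQN are made-up placeholders, so substitute your SAN’s values:

```python
# Rough sketch: attach an iSCSI LUN with the open-iscsi software initiator
# so it appears as a normal /dev/sdX block device on the host.
# The portal IP and target IQN below are placeholders.
import subprocess

PORTAL = "192.168.1.50"                      # hypothetical iSCSI portal (SAN address)
TARGET = "iqn.2001-04.com.example:storage"   # hypothetical target IQN

# 1. Discover the targets the portal exposes (sendtargets discovery).
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# 2. Log in to the target; the kernel then creates a SCSI disk for each LUN.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True,
)

# 3. Show active sessions to confirm the login worked.
subprocess.run(["iscsiadm", "-m", "session"], check=True)
```

After the login, the LUN can be partitioned and formatted like any local disk, but keep in mind this is still ordinary IP networking underneath, which may be exactly what you’re trying to avoid with Qubes.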

[Sorry, I had an incorrect reply here earlier.]

That’s a Broadcom NetXtreme II BCM5709, a 4-port 1Gbps NIC with some iSCSI offload support — but like TCP offload, it’s for use by drivers, not an independent operating feature. No boot environment support.

Also after that title, I wouldn’t trust the seller…

My dude.
Let’s try and decode the question a sec.

Before we talk about iSCSI, let’s talk about SCSI. Let’s also talk about ATA. At the surface level, SCSI is an inherently better protocol: it handles error recovery natively and helps ensure that the data sent is the data received. There is overhead in that extra error handling, and part of why SCSI drives were more expensive than ATA/IDE drives is the controller that handles the extra commands on both sides.

Current generations of products are either SATA or SAS. Without diving too deep, they are just faster serialized versions of the parallel technologies of the past.
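
A quick way to see that all of these look the same once the OS has them: on Linux, SATA, SAS, and iSCSI disks are all presented through the SCSI disk (sd) driver. A minimal sketch, assuming a typical sysfs layout:

```python
# Minimal sketch: list block devices with their SCSI vendor/model strings.
# SATA, SAS, and iSCSI LUNs all surface here as plain "sd" disks.
# Assumes a typical Linux sysfs layout.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    vendor = dev / "device" / "vendor"
    model = dev / "device" / "model"
    if not (vendor.exists() and model.exists()):
        continue  # skip loop, ram, dm-* and other devices with no SCSI identity
    print(f"{dev.name}: {vendor.read_text().strip()} {model.read_text().strip()}")
```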

When we are talking about iSCSI, we are talking about literally running SCSI on top of the TCP/IP stack. Back when Gigabit Ethernet was first widely adopted, people needed iSCSI offload cards because the extra overhead required to encapsulate SCSI commands inside network packets would otherwise nuke server performance. The same trend continued when 10GbE first became a thing.
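
To underline that point: an iSCSI portal is just a TCP listener (the registered port is 3260), so a plain socket connection is enough to check that a SAN is answering. The address below is a made-up example:

```python
# Sketch: check that an iSCSI portal answers on the standard TCP port 3260.
# No iSCSI login happens here; it's an ordinary TCP connection, which is
# the point -- iSCSI rides on top of plain TCP/IP.
import socket

PORTAL = "192.168.1.50"   # hypothetical SAN address
ISCSI_PORT = 3260         # registered iSCSI port

try:
    with socket.create_connection((PORTAL, ISCSI_PORT), timeout=3):
        print(f"{PORTAL}:{ISCSI_PORT} is accepting TCP connections")
except OSError as exc:
    print(f"Could not reach {PORTAL}:{ISCSI_PORT}: {exc}")
```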

The offload cards literally do what the name suggests: they offload a task from the general-purpose CPU. Since iSCSI is built on top of TCP, additional error handling exists. Unlike UDP, TCP is two-way communication. When data corruption occurs during a network transfer, TCP will try to correct it by requesting that the data be sent again. It will also try to “scale” how much data is in flight at a given moment using something called windowing.
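
If you’re curious, a couple of the knobs behind that windowing are visible from userspace on a Linux box; a small peek, nothing authoritative:

```python
# Peek at two things related to TCP windowing on a Linux host:
# whether window scaling is enabled, and the default receive buffer
# a freshly created TCP socket starts with.
import socket
from pathlib import Path

scaling = Path("/proc/sys/net/ipv4/tcp_window_scaling").read_text().strip()
print("TCP window scaling enabled:", scaling == "1")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"Default receive buffer: {rcvbuf} bytes")
```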

All of this “stuff” adds up. People have tried to mitigate the CPU overhead by increasing their MTU from the default of 1500 bytes to 9000 (jumbo frames). That helps to some degree, and in conjunction with offload cards, it’s how 10-gigabit iSCSI networks worked in the enterprise for a long time.
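
For what it’s worth, checking whether jumbo frames are actually in effect per interface is easy to script; a quick look at sysfs, Linux paths assumed:

```python
# Quick check of per-interface MTU: 1500 is the Ethernet default,
# 9000 or so means jumbo frames are in effect. Linux sysfs assumed.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    mtu_file = iface / "mtu"
    if not mtu_file.exists():
        continue  # skip stray non-interface entries like bonding_masters
    mtu = int(mtu_file.read_text())
    label = "jumbo" if mtu >= 9000 else "standard"
    print(f"{iface.name}: MTU {mtu} ({label})")
```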

But as we fast forward in time, CPUs have gotten way faster and those things aren’t really necessary for 1 or 10 Gigabit networking anymore. They just add unnecessary complexity. When you’re talking about things faster than that, shit gets crazy and way more complicated.

But seriously, I asked in the other thread, whatcha trying to do?? :slight_smile:

My dude, what I was trying to accomplish was an iSCSI HBA. But MadMattt answered my question in one paragraph. Thanks to him!

K thanks.

Just trying to help.
