ZFS HA with ctl(4)

I have been thinking about how to do this. I have seen a post on this forum about the HA option of the ctl(4) driver, and I have followed the papers by the user mezantrop on his project The BeaST Classic. I have also looked at solutions from other users on other forums, but I do not want to focus on external solutions that are not part of ctl(4) by default.

Basically, my base configuration comes from the following paper:

First look at the renewed CTL High Availability implementation in FreeBSD

I just replace the disks with zvols and use two machines, each containing a zvol, that communicate with two nodes acting as a cluster with CTL. The two machines that contain the backend itself are str0 and str1. Both have the same configuration, so I will only show the output from one of them. I expose those two volumes on str0 and str1 for the nodes, called node0 and node1, to use:

str1:~ # zfs list -t volume -r z9
NAME      USED  AVAIL  REFER  MOUNTPOINT
z9/rad1   510M  1.64G   115M  -
str1:~ # cat /etc/ctl.conf
portal-group pg0 {
       discovery-auth-group no-authentication
       listen 10.10.1.2
}
target iqn.2020-01.local.zfs-1:target0
{
                auth-group no-authentication
                portal-group pg0

                lun 0 {
                         path /dev/zvol/z9/rad1
                }
}
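For completeness, the zvol served by each str machine could be created and ctld brought up with something like the following (the 512M size is an assumption based on the listing above; adjust as needed):

```shell
# Create the zvol that will back the LUN (size is an assumption)
zfs create -V 512M z9/rad1

# Enable and start the CTL iSCSI daemon so /etc/ctl.conf is picked up
sysrc ctld_enable="YES"
service ctld start
```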

I connect the two zvols exposed by str0 and str1 on both nodes and create a mirror on each node, so that the service stays up if a zvol or a str machine fails:

node0:
kern.cam.ctl.ha_id=1
kern.cam.ctl.ha_mode=2
kern.cam.ctl.ha_role=0
kern.cam.ctl.iscsi.ping_timeout=0
sysctl kern.cam.ctl.ha_peer="connect 10.200.1.1:8181"

node1:
kern.cam.ctl.ha_id=2
kern.cam.ctl.ha_mode=2
kern.cam.ctl.ha_role=1
kern.cam.ctl.iscsi.ping_timeout=0
sysctl kern.cam.ctl.ha_peer="listen 10.200.1.2:8181"

(Note that the "connect" side must point at the address the peer actually listens on; in the paper both ha_peer settings reference the same address.)

gmirror create -v -b round-robin gm0 /dev/da0 /dev/da1
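The iSCSI initiator step that produces /dev/da0 and /dev/da1 on each node is not shown above; it would look roughly like this (str1's target name and portal are taken from the ctl.conf above, str0's are assumptions mirroring them):

```shell
# Log in to the LUNs exposed by str0 and str1
# (str0's portal address and target IQN are assumed here)
iscsictl -A -p 10.10.1.1 -t iqn.2020-01.local.zfs-0:target0
iscsictl -A -p 10.10.1.2 -t iqn.2020-01.local.zfs-1:target0

# The sessions show up as da0/da1; verify them before mirroring
iscsictl -L
gmirror status gm0
```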

I expose the mirrors on both nodes so that the client can connect and set up multipathing; the mirrors are exposed as follows:

node0:~ # cat /etc/ctl.conf
portal-group pg0 {
       discovery-auth-group no-authentication
       listen 10.10.1.5
}
target iqn.2025-01.local.ha:target0
{
                auth-group no-authentication
                portal-group pg0

                lun 0 {
                         path /dev/mirror/gm0
                }
}

On the client I just create a multipath device, in case one of the nodes goes down:

gmultipath create -A MIRROR /dev/da0 /dev/da1
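Once created, gmultipath can confirm which path is active and whether both providers are healthy, e.g.:

```shell
# Show the state of each path of the MIRROR multipath device
gmultipath status MIRROR
gmultipath list MIRROR
```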

So far so good. As indicated in the paper and in the ctl(4) manual, if the primary node fails, data transfer is paused until the secondary node changes its role to primary, and this has to be done manually:

sysctl kern.cam.ctl.ha_role=0
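In principle the role change could be automated with a small watchdog on the secondary node that promotes itself when the HA peer stops answering; this is only a sketch of the idea (the peer address, interval, and threshold are assumptions), not a tested fencing solution:

```shell
#!/bin/sh
# Naive failover watchdog for the secondary node (sketch only).
# Promotes this node to primary when the HA peer stops answering pings.
# WARNING: without proper fencing this risks a split-brain scenario.

PEER=10.200.1.1      # assumed HA interconnect address of the primary
INTERVAL=5           # seconds between checks
FAILS_NEEDED=3       # consecutive failures before promoting

fails=0
while :; do
    if ping -c 1 -t 2 "$PEER" > /dev/null 2>&1; then
        fails=0
    else
        fails=$((fails + 1))
        if [ "$fails" -ge "$FAILS_NEEDED" ]; then
            sysctl kern.cam.ctl.ha_role=0   # take over as primary
            break
        fi
    fi
    sleep "$INTERVAL"
done
```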

But if you don't change the role of the secondary node in time, the client connections are lost due to a timeout. Here is my question: is the configuration shown here viable in any way, or in any specific environment? For example, I don't know how ESXi and its VMs would respond to that wait until the secondary node can access the backend; I don't have a lab right now to check it.

Or how a Windows server would respond, that’s the question.

Thanks.