We inherited a very trash VMware setup and things are going wrong.
We have 5 ESXi hosts that were participating in a vSAN cluster with one another.
Most of the hosts were just contributing their internal disks to the vSAN; however, the beefy server had a JBOD plugged directly into it, which was its local datastore/vSAN contribution.
The beefy server's OS is fried and unrecoverable. When I plug the JBOD into a Linux machine it just sees all of the individual disks instead of one single volume, which makes sense.
I installed the VMFS package on my Debian machine hoping to read the JBOD and migrate the data off, but it still can't read it.
Plugging the JBOD into another ESXi host doesn't surface the datastore either.
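For anyone following along, the Debian attempt looked roughly like this. This is a sketch, not the exact commands used: it assumes the `vmfs-tools`/`vmfs6-tools` packages, and `/dev/sdb1` is a placeholder for one of the JBOD disks.

```shell
# Userspace VMFS readers (vmfs-tools handles VMFS5, vmfs6-tools handles VMFS6)
sudo apt-get install vmfs-tools vmfs6-tools

# See what disks and partitions the JBOD actually exposes
lsblk -o NAME,SIZE,FSTYPE,TYPE
sudo fdisk -l /dev/sdb

# Try to mount a VMFS partition read-only via FUSE
sudo mkdir -p /mnt/vmfs
sudo vmfs-fuse /dev/sdb1 /mnt/vmfs    # or vmfs6-fuse for a VMFS6 volume
```

As it turned out later in the thread, this was never going to work here, because the disks were not VMFS-formatted in the first place.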
My hope would be that this JBOD has nothing to do with any vSAN configuration. In any case, how is it connected to the server? I’m assuming there is some kind of HBA or RAID card in use. Any idea if there was a RAID volume configured on these disks? Can you tell what kind of filesystem/formatting is on them?
You’ll need to know how it was set up originally in order to replicate the configuration on a new server since the original is dead.
So when we rebooted the ESXi host it actually booted into Oracle Linux. It's possible the previous admin had set this up as a ZFS pool, but when I checked, no pools existed. After digging further and finally repairing vCenter, I've concluded that the JBOD either has nothing to do with vSAN or was mounted as an NFS share.
Will treat this thread more like a diary because this is jank.
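For the record, "no pools existed" was based on a scan roughly like this (the pool name was unknown at this point, so the import scan is the useful part):

```shell
# Pools currently imported on this host
zpool list
zpool status

# With no pool name, zpool import only scans and reports importable pools
sudo zpool import
sudo zpool import -d /dev/disk/by-id    # scan a specific device directory
```

Of course, this only tells you anything if the ZFS kernel module actually loads on the running kernel, which turned out to be its own problem.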
Update here: I was able to restore access to the JBOD by switching Oracle Linux to an earlier kernel version and having DKMS build ZFS against the loaded kernel. I was then able to see that the L2ARC cache was configured wrong. Since I was just trying to obtain VM backups, I removed the disks from the cache, and then I was able to import the pool successfully. I now have access to the JBOD.
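Roughly the sequence that worked, as a sketch with placeholder names (`tank` for the pool, `<cache-device-id>` for the bad L2ARC disks; I'm assuming the cache disks were pulled or detached before the import, since L2ARC devices hold no pool data and a pool will import without them):

```shell
# After booting the earlier kernel from GRUB, rebuild ZFS for it via DKMS
sudo dkms autoinstall -k "$(uname -r)"
sudo modprobe zfs

# With the misconfigured cache disks out of the way, scan and import
sudo zpool import -d /dev/disk/by-id
sudo zpool import tank

# Once imported, drop the cache devices from the pool config for good
sudo zpool remove tank <cache-device-id>
zpool status tank
```

Importing read-only (`zpool import -o readonly=on tank`) is also worth considering when the only goal is copying backups off.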