And ZFS shut down the pool to save your stuff after all. Being able to import it read-only was a very good sign. Things can get way more complicated than this.
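For anyone who lands here in the same situation, the read-only import is usually just (pool name hypothetical):

    zpool import -o readonly=on tank

so ZFS won't try to replay or write anything while you check what's salvageable.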
Were you able to identify why the entire vdev was ejected from the pool in the first place? I mean, two disks failing at the same time is unlikely to be a drive problem. Especially since it was the last vdev that was added. That smells fishy. And all things being equal, it may happen again.
Pretty much what my connection looks like too. The upload will do the job for sending incremental snapshots, but restoring an entire pool, yeah that's probably weeks. I don't have an off-site backup (I run with cold disks for replication). But I'd probably just load the server into my car and drive to my friend. Good opportunity for a backup & beer party.
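The incremental side really is cheap; roughly what I do (dataset and host names hypothetical):

    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs recv tank-backup/data

Only the blocks changed since @snap1 go over the wire, which is why a slow upload is fine day-to-day while a full send of the whole pool would take weeks.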
I'm glad you got your stuff back. Having a storage array with disks failing beyond the level of redundancy... that's usually game over for most arrays.
Appreciate the feedback. You're not wrong, it just didn't really matter; the ACLs were pretty basic. I've just been watching the cp run in top and tracking space utilization.
It's definitely something wrong with one of the controllers in one of my shelves. I've been writing an insane amount of I/O using only a single path on the "B" controllers and everything's fine.
For sure, that would have been "easier", but a full-size ATX tower and two 3U disk shelves aren't small things to move around. We're all here to learn from each other, so it's all good.
I'll split my mirrors across different controllers. I couldn't see any noticeable performance impact, and if one controller dies or is "acting strangely" I've got the other side of the mirror on the other controller, with its own SFF-8643 connector, cable, and backplane.
This obviously is more difficult with RAIDZ or if you don't have multiple controllers.
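For anyone doing the same, it's just a matter of handing zpool create one disk per controller for each mirror, something like (device paths hypothetical):

    zpool create tank \
      mirror /dev/disk/by-path/pci-ctrlA-slot0 /dev/disk/by-path/pci-ctrlB-slot0 \
      mirror /dev/disk/by-path/pci-ctrlA-slot1 /dev/disk/by-path/pci-ctrlB-slot1

Each mirror then survives the loss of a whole controller/cable/backplane path.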
That's how I designed it, but my action of "replacing" the failed disk after the system had automatically started to use my hot spare, coupled with the fact that I didn't properly document which drive was where, is what led to the failure.
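For future reference I'm now keeping a map of serials to bays, checked against something like (pool name hypothetical):

    zpool status -v tank
    ls -l /dev/disk/by-id/

so next time a disk drops I can confirm exactly which physical drive ZFS is complaining about before I pull anything.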
Hey, good work. There's a little-known feature of ZFS: if you have a remote zpool, you can actually pull down the missing bits from there, so restoring just the damaged bits whose local replicas are all gone is still possible! Glad this worked out though.
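If I'm thinking of the same thing, that's the corrective ("healing") receive added in recent OpenZFS (2.2+, if I remember right): you send a stream of the same snapshot from the healthy remote copy and receive it with -c, roughly (names hypothetical):

    ssh backuphost zfs send tank-backup/data@snap | zfs recv -c tank/data@snap

and ZFS uses the stream to repair the corrupted blocks in place instead of rolling anything back.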