Zpool Offline. Any Hope of recovering anything?

Yeah, RAID is not a backup.

And ZFS shut down the pool to save your stuff after all. Being able to import in read-only was a very good sign. Things can get way more complicated than this.
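
For anyone finding this thread later, the read-only import looks roughly like this (pool name and mountpoint are just examples):

```
# Import the pool read-only under a temporary root so nothing gets
# written while you copy data off
zpool import -o readonly=on -R /mnt/recovery tank
```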

Were you able to identify why the entire vdev was ejected from the pool in the first place? I mean, two disks at the same time is unlikely to be a drive problem. Especially since it was the last vdev that was added. That smells fishy. And with all things being equal, it may happen again.
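
If you haven't already, these are the usual places I'd look (pool name is a placeholder):

```
# Per-vdev error counters and any permanent errors
zpool status -v tank

# The ZFS event log often shows the exact moment the disks dropped out
zpool events -v | less

# Kernel side: link resets / SAS errors around the same timestamps
dmesg | grep -iE 'sas|reset|i/o error'
```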

Pretty much what my connection looks like too. The upload will do the job for sending incremental snapshots, but restoring an entire pool, yeah that's probably weeks. I don't have an off-site backup (I run with cold disks for replication). But I'd probably just load the server into my car and drive to my friend's. Good opportunity for a backup & beer party :slight_smile:
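
For the incremental sends, something like this is all my link realistically handles (host, dataset, and snapshot names are placeholders):

```
# Send only the delta between the last replicated snapshot and the new one
zfs send -i tank/data@2024-05-01 tank/data@2024-05-08 | \
  ssh backup-host zfs receive -F backup/data
```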

I'm glad you got your stuff back. Having a storage array with disks failing beyond the level of redundancy… that is usually game over for most arrays.

Appreciate the feedback. You're not wrong; it just didn't really matter since the ACLs were pretty basic. I've just been watching the cp run in top and tracking space utilization.

It's definitely something wrong with one of the controllers in one of my shelves. I've been writing an insane amount of I/O using only a single path on the "B" controllers and everything's fine.

For sure, that would have been "easier", but a full-size ATX tower and two 3U disk shelves aren't a small thing to move around. We're all here to learn from each other, so it's all good.

I'll split my mirrors across different controllers. I couldn't see any noticeable performance impact, and if one controller dies or is "acting strangely", I've got the other side of the mirror on the other controller with its own SFF-8643 connector, cable and backplane.
This is obviously more difficult with RAIDZ or if you don't have multiple controllers.
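
To make it concrete, I pair one disk per HBA in each mirror using stable by-path names, roughly like this (the PCI addresses and phy numbers are just illustrative for my hardware):

```
# Each mirror gets one leg on HBA A and one on HBA B, so a dead
# controller, cable, or backplane only takes out half of every mirror
zpool create tank \
  mirror /dev/disk/by-path/pci-0000:01:00.0-sas-phy0-lun-0 \
         /dev/disk/by-path/pci-0000:02:00.0-sas-phy0-lun-0 \
  mirror /dev/disk/by-path/pci-0000:01:00.0-sas-phy1-lun-0 \
         /dev/disk/by-path/pci-0000:02:00.0-sas-phy1-lun-0
```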

That's how I designed it, but my action of "replacing" the failed disk after the system had automatically started using my hot spare, coupled with the fact that I didn't properly document which drive was where, is what led to the failure.
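
Lesson learned for me too: these days I map serial numbers to bays before pulling anything (device names here are examples):

```
# Record which serial lives behind which device node before touching disks
lsblk -o NAME,SERIAL,SIZE,WWN

# Cross-check a specific drive's serial against the bay label
smartctl -i /dev/sda | grep -i serial
```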

Hey, good work. There's a little-known ZFS feature: if you have a remote zpool, you can actually pull the missing bits back down from it, so data whose local replicas are all gone is still recoverable! Glad this worked out though.
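
If I'm thinking of the same thing, that's the corrective receive added in OpenZFS 2.2: you send the affected snapshot from the remote pool and receive it with -c to heal the damaged blocks in place. A rough sketch, with placeholder host and dataset names, and assuming the snapshot was previously replicated so it exists on both sides:

```
# The backup box sends a snapshot that still has good copies of the data;
# the corrective receive heals the bad blocks without rolling anything back
ssh backup-host zfs send backup/data@snap | zfs receive -c tank/data@snap
```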
