[ZFS] Zpool unmountable although healthy [solved]

Hello,
I have a bit of a problem with ZFS.
I created a zpool on LUKS-encrypted drives, and now I can't get the ZFS filesystem mounted after reboot.
Upon importing the zpool, explicitly mounting it, or changing certain properties, the command just reports:

filesystem 'Data' can not be mounted due to error 1
cannot mount 'Data': Invalid argument

I had created a smaller test zpool first and verified that my setup works without problems (storing data, exporting, importing, re-mounting the encrypted drives after reboots).
After testing I created the actual zpool I want to use, copied all of my data from my mass storage to it, and sorted most of it. I still have most of the original data on the source drive, but I spent half a day sorting, deleting, and moving data, and it would be a pain to redo all of that, including creating the zpool, getting the UUIDs, writing mount/unmount scripts, and re-downloading lost media files.
What I did, in order, before the problem occurred:
- Creating LUKS-formatted drives
- Creating a mirrored zpool on both drives (automatically mounted to /Data)
- Copying data from various sources to it
- Sorting some data with a file manager
- Unmounting the zpool
- Exporting the zpool
- Removing the LUKS encryption mappings (luksClose)
- Reboot
- Opening the encryption mappings (luksOpen)
- Importing the zpool
- Noticing the mounting problem
- zpool scrub
- Checking zpool status and properties (e.g. the canmount property)
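The mount/unmount scripts mentioned above boil down to this sequence. This is only a sketch: the device UUIDs, mapper names, and function names are placeholders, not my actual script.

```shell
#!/bin/sh
# Sketch of the unlock-and-import / export-and-lock sequence described above.
# UUID1/UUID2 are placeholders; use your own LUKS partition UUIDs.

open_and_import() {
    # Unlock both LUKS containers to /dev/mapper/Data1 and /dev/mapper/Data2
    cryptsetup luksOpen /dev/disk/by-uuid/UUID1 Data1
    cryptsetup luksOpen /dev/disk/by-uuid/UUID2 Data2
    # Import the pool by scanning the decrypted mapper devices
    zpool import -d /dev/mapper Data
}

export_and_close() {
    zpool export Data
    cryptsetup luksClose Data1
    cryptsetup luksClose Data2
}

# Only attempt the import when running as root with the tools installed;
# otherwise just define the functions.
if [ "$(id -u)" -eq 0 ] && command -v zpool >/dev/null 2>&1 \
    && command -v cryptsetup >/dev/null 2>&1; then
    open_and_import
fi
```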

Other troubleshooting steps I took:
- Booting a Linux ISO, but failing to load the ZFS modules after installing them
- Reducing fstab to just mounting root and /boot
- Upgrading the zpool (already up-to-date)
- Checking dmesg (no relevant info found)
- Stracing the mount command (problem not found)
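For the strace step, the invocation was along these lines (a sketch; the full output is in the pastebin linked below):

```shell
#!/bin/sh
# Trace only the mount(2) syscalls made by the zfs mount command; the
# EINVAL return there should show which argument the kernel rejects.
traced=no
if command -v zfs >/dev/null 2>&1 && command -v strace >/dev/null 2>&1; then
    strace -f -e trace=mount zfs mount Data 2> zfs-mount.strace
    traced=yes
fi
```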

Current zpool setup/status:
# zpool list -v

NAME       SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
Data       3.62T  2.39T  1.24T  -         40%   65%  1.00x  ONLINE  -
  mirror   3.62T  2.39T  1.24T  -         40%   65%
    Data1      -      -      -  -            -    -
    Data2      -      -      -  -            -    -

# zpool status -v

pool: Data
state: ONLINE
scan: scrub repaired 0 in 5h43m with 0 errors on Mon May 23 05:04:47 2016
config:

NAME         STATE   READ WRITE CKSUM
Data         ONLINE     0     0     0
  mirror-0   ONLINE     0     0     0
    Data1    ONLINE     0     0     0
    Data2    ONLINE     0     0     0

As you can see, there are no signs of corruption, errors, or any other reported problems, just the inability to mount.
Since "scrub" and "zdb Data" complete successfully (and the latter outputs files), I think there's nothing wrong with the zpool itself and the data is readable. I'm probably just doing something wrong, but I have no idea what, since I could mount my test zpools without issues using the same name and the same mount/unmount scripts.
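Concretely, the read-only checks behind that statement were (sketch; Data is the pool name from above):

```shell
#!/bin/sh
# Read-only sanity checks: scrub re-reads every block and verifies its
# checksum on disk, and zdb dumps the pool's metadata from userspace,
# so both succeeding suggests the on-disk state is intact.
checked=no
if command -v zpool >/dev/null 2>&1 && command -v zdb >/dev/null 2>&1; then
    zpool scrub Data
    zpool status -v Data   # watch scrub progress/result here
    zdb Data
    checked=yes
fi
```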

Maybe one of you can spot the problem; here is some more information:
Setup: 4.5.4-1-ARCH, system specs
Complete log of what I've done and the output of various status commands, zpool history, strace output, and package versions:
https://pastebin.com/Cwwtksuv
Guide I followed (mostly): ZFS on Linux with LUKS encrypted disks | make then make install

Any help is appreciated!

Edit:
Got the zpool working and mountable under Arch without errors now, with downgraded ZFS packages and Linux kernel (LTS versions).
I'm going to file a bug report about this situation on the ZFS GitHub page, though.
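For anyone hitting the same thing, the downgrade was essentially the following. Treat the package names and VERSION as placeholders; the exact names and versions depend on your repos (e.g. archzfs) and what is in your pacman cache.

```shell
#!/bin/sh
# Placeholder package names/versions; pick the real ones from your
# repos and /var/cache/pacman/pkg. Only runs on a system with pacman.
if [ "$(id -u)" -eq 0 ] && command -v pacman >/dev/null 2>&1; then
    # Switch to the LTS kernel and headers
    pacman -S linux-lts linux-lts-headers
    # Install the matching (older) ZFS package from the local cache
    pacman -U /var/cache/pacman/pkg/zfs-linux-lts-VERSION-x86_64.pkg.tar.xz
fi
done=yes
```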

More info in my post on their subreddit.