ZFS - Pop_OS 19.10 - I/O Errors

New to this ZFS thing. I set up a raidz1 with 5x8 TB drives and things were going well for the most part. But three times now I've had to restart the array due to I/O faults after mass-copying files onto it from other hard drives.

After a restart things go well for a while before more I/O errors appear.
I suspect a few of the individual drives are going to sleep? Is that a thing?
Do I need to turn off some power management setting in Ubuntu/Pop_OS for the ZFS member drives?

Or any other input on why I'm getting I/O faults after 10-30 minutes of spamming the drives with new data at 200 MB/s would be appreciated.

Tnx

Pool config?

How’d you install zfs?

Anything of interest in dmesg?

Smart status on disks?

Memtest results?


Without any more detailed info, I’m stabbing in the dark.
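For what it's worth, something like this should gather most of that (assuming the pool is named tank and the disks show up as /dev/sd[a-e]; adjust to match your system):

zpool status -v tank                              # vdev layout plus read/write/checksum error counters
zpool get all tank                                # pool-level properties
sudo dmesg | grep -iE 'ata[0-9]|sd[a-z]|zfs'      # kernel messages touching the disks or ZFS
sudo smartctl -a /dev/sda                         # full SMART report for one drive; repeat for each member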

Not very likely; drives don't sleep while doing IO, and a ZFS array is always doing something. For a server, all forms of sleeping should be turned off anyway.

If the drives are dropping out during high IO, it's most likely a bad connection (a cheap hot-swap cage, for example), a bad cable, or bad power. Less likely: a bad drive or a hardware fault on the controller.
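If you want to rule out drive power management anyway, hdparm can show and change the APM/standby settings per drive (the values below are just an example; some drives ignore APM entirely):

sudo hdparm -B /dev/sda        # show the current APM level
sudo hdparm -B 254 /dev/sda    # highest-performance APM level, no aggressive power saving
sudo hdparm -S 0 /dev/sda      # disable the standby (spin-down) timer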

Tnx for the replies!

I did not modify any pool configs.
I created the pool with ashift=12 and -m /mnt/tank, using the /dev/disk/by-id names (rough create command below), then:
zfs set compression=lz4 tank
zfs set atime=off tank
zfs create tank/datasets
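The create was along these lines; the disk IDs here are placeholders for the full ata-WDC by-id names:

sudo zpool create -o ashift=12 -m /mnt/tank tank raidz1 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
    /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5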

ZFS was installed with simply: sudo apt install zfsutils-linux
SMART data reads OK, with no values changing.
Have not run memtest yet. Brand new system.
I can't rule out the SATA cables even though they are brand new; I just bought a set of eight specifically to hopefully avoid issues like this.

Got some dmesg of the issue below:

[ 6679.593626] ahci 0000:29:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x7fffff00000 flags=0x0000]
[ 6679.593636] ahci 0000:29:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x7fffff00780 flags=0x0000]
[ 6710.900139] ata3.00: exception Emask 0x0 SAct 0x8800 SErr 0x40000 action 0x6 frozen
[ 6710.900143] ata3: SError: { CommWake }
[ 6710.900146] ata3.00: failed command: WRITE FPDMA QUEUED
[ 6710.900150] ata3.00: cmd 61/90:58:d0:98:bc/07:00:c4:01:00/40 tag 11 ncq dma 991232 out
res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900152] ata3.00: status: { DRDY }
[ 6710.900153] ata3.00: failed command: WRITE FPDMA QUEUED
[ 6710.900157] ata3.00: cmd 61/00:78:20:a6:bc/03:00:c4:01:00/40 tag 15 ncq dma 393216 out
res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900158] ata3.00: status: { DRDY }
[ 6710.900161] ata3: hard resetting link
[ 6710.900185] ata4.00: exception Emask 0x0 SAct 0x800a0094 SErr 0x40000 action 0x6 frozen
[ 6710.900190] ata4: SError: { CommWake }
[ 6710.900192] ata4.00: failed command: WRITE FPDMA QUEUED
[ 6710.900197] ata4.00: cmd 61/b0:10:b0:9a:bc/07:00:c4:01:00/40 tag 2 ncq dma 1007616 ou
res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900199] ata4.00: status: { DRDY }
[ 6710.900200] ata4.00: failed command: WRITE FPDMA QUEUED
[ 6710.900203] ata4.00: cmd 61/a8:20:08:93:bc/07:00:c4:01:00/40 tag 4 ncq dma 1003520 ou
res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900205] ata4.00: status: { DRDY }
[ 6710.900206] ata4.00: failed command: WRITE FPDMA QUEUED
[ 6710.900209] ata4.00: cmd 61/58:38:20:aa:bc/05:00:c4:01:00/40 tag 7 ncq dma 700416 out
res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900211] ata4.00: status: { DRDY }
[ 6710.900212] ata4.00: failed command: WRITE FPDMA QUEUED
[ 6710.900215] ata4.00: cmd 61/c0:88:60:a2:bc/07:00:c4:01:00/40 tag 17 ncq dma 1015808 ou
res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900217] ata4.00: status: { DRDY }
[ 6710.900218] ata4.00: failed command: READ FPDMA QUEUED
[ 6710.900222] ata4.00: cmd 60/08:98:68:a0:bb/00:00:76:01:00/40 tag 19 ncq dma 4096 in
res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900223] ata4.00: status: { DRDY }
[ 6710.900224] ata4.00: failed command: WRITE FPDMA QUEUED
[ 6710.900228] ata4.00: cmd 61/88:f8:00:f9:ba/07:00:c4:01:00/40 tag 31 ncq dma 987136 out
res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 6710.900229] ata4.00: status: { DRDY }
[ 6710.900232] ata4: hard resetting link
[ 6720.900535] ata4: softreset failed (1st FIS failed)
[ 6720.900539] ata4: hard resetting link
[ 6720.900630] ata3: softreset failed (1st FIS failed)
[ 6720.900635] ata3: hard resetting link
[ 6730.900224] ata4: softreset failed (1st FIS failed)
[ 6730.900229] ata4: hard resetting link
[ 6730.900826] ata3: softreset failed (1st FIS failed)
[ 6730.900831] ata3: hard resetting link
[ 6765.900445] ata4: softreset failed (1st FIS failed)
[ 6765.900452] ata4: limiting SATA link speed to 3.0 Gbps
[ 6765.900453] ata4: hard resetting link
[ 6765.900656] ata3: softreset failed (1st FIS failed)
[ 6765.900662] ata3: limiting SATA link speed to 3.0 Gbps
[ 6765.900664] ata3: hard resetting link
[ 6770.900559] ata3: softreset failed (1st FIS failed)
[ 6770.900566] ata3: reset failed, giving up
[ 6770.900568] ata3.00: disabled
[ 6770.900601] sd 2:0:0:0: [sdb] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900603] sd 2:0:0:0: [sdb] tag#11 Sense Key : Not Ready [current]
[ 6770.900606] sd 2:0:0:0: [sdb] tag#11 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900608] sd 2:0:0:0: [sdb] tag#11 CDB: Write(16) 8a 00 00 00 00 01 c4 bc 98 d0 00 00 07 90 00 00
[ 6770.900611] blk_update_request: I/O error, dev sdb, sector 7595661520 op 0x1:(WRITE) flags 0x700 phys_seg 16 prio class 0
[ 6770.900614] ata4: softreset failed (1st FIS failed)
[ 6770.900619] ata4: reset failed, giving up
[ 6770.900623] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=2 offset=3888977649664 size=991232 flags=40080c80
[ 6770.900626] ata4.00: disabled
[ 6770.900637] sd 2:0:0:0: [sdb] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900638] sd 2:0:0:0: [sdb] tag#15 Sense Key : Not Ready [current]
[ 6770.900640] sd 2:0:0:0: [sdb] tag#15 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900641] sd 2:0:0:0: [sdb] tag#15 CDB: Write(16) 8a 00 00 00 00 01 c4 bc a6 20 00 00 03 00 00 00
[ 6770.900642] blk_update_request: I/O error, dev sdb, sector 7595664928 op 0x1:(WRITE) flags 0x700 phys_seg 6 prio class 0
[ 6770.900646] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=2 offset=3888979394560 size=393216 flags=40080c80
[ 6770.900651] ata3: EH complete
[ 6770.900697] sd 3:0:0:0: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900700] sd 3:0:0:0: [sdc] tag#2 Sense Key : Not Ready [current]
[ 6770.900702] sd 3:0:0:0: [sdc] tag#2 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900704] sd 3:0:0:0: [sdc] tag#2 CDB: Write(16) 8a 00 00 00 00 01 c4 bc 9a b0 00 00 07 b0 00 00
[ 6770.900707] blk_update_request: I/O error, dev sdc, sector 7595662000 op 0x1:(WRITE) flags 0x700 phys_seg 17 prio class 0
[ 6770.900713] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=3888977895424 size=1007616 flags=40080c80
[ 6770.900727] sd 3:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900729] sd 3:0:0:0: [sdc] tag#4 Sense Key : Not Ready [current]
[ 6770.900730] sd 3:0:0:0: [sdc] tag#4 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900732] sd 3:0:0:0: [sdc] tag#4 CDB: Write(16) 8a 00 00 00 00 01 c4 bc 93 08 00 00 07 a8 00 00
[ 6770.900733] blk_update_request: I/O error, dev sdc, sector 7595660040 op 0x1:(WRITE) flags 0x700 phys_seg 17 prio class 0
[ 6770.900736] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=3888976891904 size=1003520 flags=40080c80
[ 6770.900741] sd 3:0:0:0: [sdc] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900742] sd 3:0:0:0: [sdc] tag#7 Sense Key : Not Ready [current]
[ 6770.900744] sd 3:0:0:0: [sdc] tag#7 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900745] sd 3:0:0:0: [sdc] tag#7 CDB: Write(16) 8a 00 00 00 00 01 c4 bc aa 20 00 00 05 58 00 00
[ 6770.900746] blk_update_request: I/O error, dev sdc, sector 7595665952 op 0x1:(WRITE) flags 0x700 phys_seg 13 prio class 0
[ 6770.900749] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=3888979918848 size=700416 flags=40080c80
[ 6770.900754] sd 3:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900756] sd 3:0:0:0: [sdc] tag#17 Sense Key : Not Ready [current]
[ 6770.900757] sd 3:0:0:0: [sdc] tag#17 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900758] sd 3:0:0:0: [sdc] tag#17 CDB: Write(16) 8a 00 00 00 00 01 c4 bc a2 60 00 00 07 c0 00 00
[ 6770.900760] blk_update_request: I/O error, dev sdc, sector 7595663968 op 0x1:(WRITE) flags 0x4700 phys_seg 16 prio class 0
[ 6770.900762] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=3888978903040 size=1015808 flags=40080c80
[ 6770.900767] sd 3:0:0:0: [sdc] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900769] sd 3:0:0:0: [sdc] tag#19 Sense Key : Not Ready [current]
[ 6770.900770] sd 3:0:0:0: [sdc] tag#19 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900772] sd 3:0:0:0: [sdc] tag#19 CDB: Read(16) 88 00 00 00 00 01 76 bb a0 68 00 00 00 08 00 00
[ 6770.900773] blk_update_request: I/O error, dev sdc, sector 6286975080 op 0x0:(READ) flags 0x700 phys_seg 1 prio class 0
[ 6770.900775] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=1 offset=3218930192384 size=4096 flags=180880
[ 6770.900781] sd 3:0:0:0: [sdc] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 6770.900782] sd 3:0:0:0: [sdc] tag#31 Sense Key : Not Ready [current]
[ 6770.900783] sd 3:0:0:0: [sdc] tag#31 Add. Sense: Logical unit not ready, hard reset required
[ 6770.900785] sd 3:0:0:0: [sdc] tag#31 CDB: Write(16) 8a 00 00 00 00 01 c4 ba f9 00 00 00 07 88 00 00
[ 6770.900786] blk_update_request: I/O error, dev sdc, sector 7595555072 op 0x1:(WRITE) flags 0x700 phys_seg 16 prio class 0
[ 6770.900788] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=3888923148288 size=987136 flags=40080c80
[ 6770.900795] ata4: EH complete
[ 6770.900814] sd 3:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 6770.900818] sd 3:0:0:0: [sdc] tag#29 CDB: Read(16) 88 00 00 00 00 00 00 00 0a 10 00 00 00 10 00 00
[ 6770.900820] blk_update_request: I/O error, dev sdc, sector 2576 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 6770.900825] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=1 offset=270336 size=8192 flags=b08c1
[ 6770.901029] sd 3:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 6770.901031] sd 3:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 03 a3 80 e4 10 00 00 00 10 00 00
[ 6770.901032] blk_update_request: I/O error, dev sdc, sector 15628035088 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 6770.901034] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=1 offset=8001552916480 size=8192 flags=b08c1
[ 6770.901039] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=3888981610496 size=65536 flags=180880
[ 6770.901040] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=1 offset=8001553178624 size=8192 flags=b08c1
[ 6770.901063] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=2 offset=3888980852736 size=823296 flags=40080c80
[ 6770.901075] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=2 offset=3888979853312 size=999424 flags=40080c80
[ 6770.901086] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=1 offset=270336 size=8192 flags=b08c1
[ 6770.901089] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=1 offset=8001552916480 size=8192 flags=b08c1
[ 6770.901091] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=1 offset=8001553178624 size=8192 flags=b08c1
[ 6770.901107] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=2754595639296 size=118784 flags=40080c80
[ 6770.901166] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00M9AA0_VAGLBL5L-part1 error=5 type=2 offset=3888980619264 size=991232 flags=40080c80
[ 6770.901179] zio pool=tank vdev=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7HK2ANTN-part1 error=5 type=1 offset=3218930192384 size=4096 flags=180880
[ 6770.977087] WARNING: Pool 'zfs16tb' has encountered an uncorrectable I/O failure and has been suspended.

[ 6889.040711] INFO: task txg_sync:13072 blocked for more than 120 seconds.
[ 6889.040716] Tainted: P OE 5.3.0-22-generic #24+system76~1573659475~19.10~26b2022-Ubuntu
[ 6889.040717] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 6889.040718] txg_sync D 0 13072 2 0x80004000
[ 6889.040721] Call Trace:
[ 6889.040728] __schedule+0x2b9/0x6c0
[ 6889.040730] schedule+0x42/0xb0
[ 6889.040732] schedule_timeout+0x152/0x2f0
[ 6889.040792] ? spa_taskq_dispatch_ent+0x4f/0x70 [zfs]
[ 6889.040795] ? __next_timer_interrupt+0xe0/0xe0
[ 6889.040797] io_schedule_timeout+0x1e/0x50
[ 6889.040803] __cv_timedwait_common+0x15e/0x1c0 [spl]
[ 6889.040805] ? wait_woken+0x80/0x80
[ 6889.040810] __cv_timedwait_io+0x19/0x20 [spl]
[ 6889.040865] zio_wait+0x11b/0x230 [zfs]
[ 6889.040918] ? __raw_spin_unlock+0x9/0x10 [zfs]
[ 6889.040963] dsl_pool_sync+0xbc/0x410 [zfs]
[ 6889.041011] spa_sync_iterate_to_convergence+0xe0/0x1c0 [zfs]
[ 6889.041052] spa_sync+0x312/0x5b0 [zfs]
[ 6889.041099] txg_sync_thread+0x279/0x310 [zfs]
[ 6889.041143] ? txg_dispatch_callbacks+0x100/0x100 [zfs]
[ 6889.041149] thread_generic_wrapper+0x83/0xa0 [spl]
[ 6889.041152] kthread+0x104/0x140
[ 6889.041157] ? clear_bit+0x20/0x20 [spl]
[ 6889.041159] ? kthread_park+0x80/0x80
[ 6889.041161] ret_from_fork+0x22/0x40
[ 7009.873466] INFO: task pool-org.gnome.:20659 blocked for more than 120 seconds.
[ 7009.873470] Tainted: P OE 5.3.0-22-generic #24+system76~1573659475~19.10~26b2022-Ubuntu
[ 7009.873471] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7009.873473] pool-org.gnome. D 0 20659 14714 0x80000000
[ 7009.873475] Call Trace:
[ 7009.873482] __schedule+0x2b9/0x6c0
[ 7009.873522] ? arc_space_return+0xa0/0x120 [zfs]
[ 7009.873525] schedule+0x42/0xb0
[ 7009.873526] io_schedule+0x16/0x40
[ 7009.873532] cv_wait_common+0xdc/0x180 [spl]
[ 7009.873534] ? wait_woken+0x80/0x80
[ 7009.873539] __cv_wait_io+0x18/0x20 [spl]
[ 7009.873597] txg_wait_synced+0x88/0xd0 [zfs]
[ 7009.873639] dmu_tx_wait+0x1b5/0x210 [zfs]
[ 7009.873677] dmu_tx_assign+0x49/0x70 [zfs]
[ 7009.873731] zfs_write+0x425/0xd50 [zfs]
[ 7009.873733] ? __switch_to_asm+0x40/0x70
[ 7009.873735] ? __switch_to_asm+0x34/0x70
[ 7009.873737] ? __switch_to_asm+0x40/0x70
[ 7009.873738] ? __switch_to_asm+0x34/0x70
[ 7009.873740] ? __switch_to_asm+0x40/0x70
[ 7009.873741] ? __switch_to_asm+0x34/0x70
[ 7009.873742] ? __switch_to_asm+0x40/0x70
[ 7009.873744] ? __switch_to_asm+0x40/0x70
[ 7009.873746] ? __switch_to_asm+0x34/0x70
[ 7009.873748] ? __switch_to+0x110/0x470
[ 7009.873749] ? __switch_to_asm+0x40/0x70
[ 7009.873752] ? find_get_entry+0x58/0x140
[ 7009.873803] zpl_write_common_iovec+0xad/0x120 [zfs]
[ 7009.873804] ? pagecache_get_page+0x2d/0x2f0
[ 7009.873807] ? touch_atime+0x33/0xe0
[ 7009.873853] zpl_iter_write_common+0x8e/0xb0 [zfs]
[ 7009.873898] zpl_iter_write+0x56/0x90 [zfs]
[ 7009.873901] new_sync_write+0x125/0x1c0
[ 7009.873903] __vfs_write+0x29/0x40
[ 7009.873904] __kernel_write+0x54/0x110
[ 7009.873907] write_pipe_buf+0x6a/0x90
[ 7009.873908] ? wakeup_pipe_readers+0x50/0x50
[ 7009.873909] __splice_from_pipe+0x8d/0x1a0
[ 7009.873911] ? _cond_resched+0x19/0x30
[ 7009.873912] ? wakeup_pipe_readers+0x50/0x50
[ 7009.873914] splice_from_pipe+0x5f/0x90
[ 7009.873915] default_file_splice_write+0x19/0x24
[ 7009.873917] do_splice+0x23f/0x640
[ 7009.873918] ? __do_sys_newfstat+0x61/0x70
[ 7009.873920] __x64_sys_splice+0x131/0x150
[ 7009.873923] do_syscall_64+0x5a/0x130
[ 7009.873925] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 7009.873926] RIP: 0033:0x7ff6c4ec4883
[ 7009.873931] Code: Bad RIP value.
[ 7009.873932] RSP: 002b:00007ff6ba7fb440 EFLAGS: 00000293 ORIG_RAX: 0000000000000113
[ 7009.873933] RAX: ffffffffffffffda RBX: 0000000000100000 RCX: 00007ff6c4ec4883
[ 7009.873934] RDX: 000000000000002a RSI: 0000000000000000 RDI: 000000000000002b
[ 7009.873935] RBP: 0000000000000000 R08: 0000000000100000 R09: 0000000000000004
[ 7009.873935] R10: 00007ff6ba7fb580 R11: 0000000000000293 R12: 000000000000002a
[ 7009.873936] R13: 0000000000000000 R14: 000000000000002b R15: 00007ff6ba7fb590

The drives were white-label ones shucked from 8 TB Easystores?
Does anyone know if it might be something like TLER?
In the meantime, do you have a shot of zpool status showing the pool errors?
And did you do a "zfs clear label" or whatever the command is before doing a scrub?
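(For reference, the command is zpool clear, and SCT ERC/TLER on the shucked drives can be checked with smartctl; the ERC values below are just an example:)

sudo zpool clear tank                     # clear errors / try to resume a suspended pool
sudo smartctl -l scterc /dev/sda          # show the SCT Error Recovery Control (TLER) settings
sudo smartctl -l scterc,70,70 /dev/sda    # set read/write recovery to 7.0 s, if the drive supports it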

I did try zfs clear tank and even tried clearing the drives individually. Nothing brought them back except a reboot.

But as an update: I found a single post about someone else having issues with an ASMedia SATA chipset and ZFS, so I moved the drives over to a different controller. I've mashed on them for another 2 TB so far and no longer have I/O errors.

However, it should be noted that all the SATA and power connections are different now, so it's not exactly an apples-to-apples comparison.
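(If anyone wants to check which controller a given disk is hanging off before and after moving it, something along these lines works; the device names are just examples:)

lspci | grep -i sata                    # list the SATA controllers, e.g. chipset ports vs the ASMedia one
readlink -f /sys/block/sdb              # full sysfs path showing which PCI device sdb sits behind
ls -l /dev/disk/by-id/ | grep ata-WDC   # map the by-id names used by the pool back to sdX nodes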

So for now the problem is gone, but you're not sure of the original cause?

Well, a minor victory is still a win?

I've been using ZFS as a home user for a couple of years, and 8 TB drives in a raidz2 or less scares me; the resilver when full must take some time?

Yip, victory for now. Not sure of the cause. I have three other drives I need to consolidate data from, so I will put those on the previously used SATA/power connections, see if issues crop up, and maybe try to narrow it down.

This pool is mostly just a working pool; everything on it is backed up to two other sources.
When I can, I plan to order a few more drives and move to an 8x8 TB raidz2.
But after this testing, I now know that I will need a good HBA.
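When that happens, the create will look roughly like this (pool name, mountpoint, and disk IDs are placeholders):

sudo zpool create -o ashift=12 -m /mnt/tank2 tank2 raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8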

OP, I hope you are doing this just to play and learn; you never explicitly said it was only for testing. ZFS is cool, but it's not ready for prime time in Ubuntu land yet. Wait for a proper release to prevent future headaches.

I swapped to Fedora 31 because every 19.10 release from Ubuntu, Pop_OS, etc. (anything using GNOME 3.34 on those distros) kept freezing up my machine. Fedora using Wayland and GNOME 3.34 has been stable for two weeks.

Other than having to compile ZFS from scratch, I've had no issues. The ZFS pool is a media pool and Samba share for a Windows 10 passthrough VM. Everything on the pool is regularly backed up, but it seems fine so far.
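For anyone else going that route, the build from an OpenZFS release tarball is roughly this (0.8.2 is just an example version; install the build dependencies and a kernel-devel matching your running kernel first, per the OpenZFS docs):

tar xf zfs-0.8.2.tar.gz && cd zfs-0.8.2
./configure
make -j$(nproc)
sudo make install
sudo depmod
sudo modprobe zfs && zfs version   # confirm the module loads and report the version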