I’ve got a Dell R720 with an HBA running in IT mode, passing 6 Samsung MZ-6ER400T/0C3 400GB SAS SSDs through to Proxmox. The drives came with 520-byte sectors, and I successfully reformatted them to 512 bytes so Proxmox could use them. I had all 6 together in a striped (RAID0-style) ZFS pool, since this is purely for homelab experimentation and won’t be housing anything critical. Originally I was getting poor performance, but I noticed ZFS threw a warning that the pool’s configured block size was 4096 while the drives’ native block size was 8192, so I destroyed and remade the pool with ashift=13 to align the two. I’m still getting absolutely godawful performance, though.
I just ran a fio benchmark on the pool and the numbers are dismal, even though I made sure to set the fio block size to match the block size of the pool. The server itself has plenty of RAM and cores, so I’m not sure why the performance is so poor. How is it that a ZFS pool on 6 enterprise SSDs performs on par with a USB 3 thumb drive? I’d like to stick with ZFS because of its snapshot capabilities, which come in handy when trying to break and fix things, but I’m starting to think I should just switch to mdadm or something.