Hi there,
I have a dual-socket 2011 v2 server, with an LSI RAID card flashed to IT mode (HBA) driving a 24-drive NetApp disk shelf over one of those QSFP-to-mini-SAS cables. So I have 24 small SSDs, all fairly new. I started copying VMs to the ZFS pool, and suddenly I started getting ZFS errors, and maybe some kernel panics - not sure about the kernel side.
I saw that when Wendell helped Bitwit (Kyle), he mentioned those RAID cards get super hot. I opened my case, gave the card an extra slot of space from the next card, and boosted my fan profile to 100%. And poof, it seems my errors went away.
However, on one of these servers, it seems like the drives are heat soaking. I run dd tests: the first run cranks out about 800 MB/s, the second about 200 MB/s less, and all subsequent runs fall to about 300 MB/s. Heat soak, maybe?
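If it is heat, it should show up in the drive temperatures. Here's a minimal sketch for checking them mid-test - it assumes smartmontools is installed and that the shelf's disks appear as /dev/sd* (adjust the glob for your system; some SSDs report the temperature under a different attribute name like Airflow_Temperature_Cel):

```shell
#!/bin/sh
# Hedged sketch: dump SMART temperatures for every disk so you can re-run
# this during a dd test and see whether the slowdown tracks temperature.
# The /dev/sd? glob is an assumption -- adjust it to match your shelf.
for d in /dev/sd?; do
  temp=$(smartctl -A "$d" 2>/dev/null | awk '/Temperature_Celsius/ {print $10}')
  echo "$d: ${temp:-n/a} C"
done
```

Running it once before the test and once after the throughput drops should make it obvious whether temperature is the variable that moved.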
The first time I ran my dd test I used this block size:
dd bs=1M count=10000 if=/dev/zero of=test conv=fdatasync
That's the test where throughput falls off.
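To separate heat soak from other effects, it may be more telling to repeat the identical command back-to-back (and again after a cool-down pause) than to change block size. A sketch, where the test-file path is a placeholder that should live on the ZFS pool; note that /dev/zero data compresses away almost entirely if compression is enabled on the dataset, which can inflate the numbers:

```shell
#!/bin/sh
# Hedged sketch: run the same dd write several times in a row; if throughput
# decays run-over-run but recovers after a long pause, heat soak is a suspect.
# TESTFILE is a placeholder -- point it at the pool, not the root disk.
TESTFILE=${TESTFILE:-./heatsoak-test}
for run in 1 2 3; do
  printf 'run %s: ' "$run"
  dd bs=1M count=256 if=/dev/zero of="$TESTFILE" conv=fdatasync 2>&1 | tail -n 1
done
rm -f "$TESTFILE"
```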
Then I tried a different block size:
dd if=/dev/zero of=/tmp/output bs=8k count=100k;
and with that second option, I get slightly less throughput, because my block size is small (makes sense to me), but I don't see that potential heat-soak issue.
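One possible wrinkle worth noting: the second command writes to /tmp and omits conv=fdatasync, so it is largely measuring the page cache rather than the disks (and on many distros /tmp is tmpfs, i.e. RAM) - which by itself could explain why the heat-soak pattern disappears. To compare block sizes fairly, the sync behavior should stay identical between runs; a sketch, with the output path as a placeholder on the pool:

```shell
#!/bin/sh
# Hedged sketch: same 8k block size as the second test, but with
# conv=fdatasync so data is flushed before dd reports a rate --
# directly comparable to the 1M test above.
dd if=/dev/zero of=./output-8k bs=8k count=100k conv=fdatasync
rm -f ./output-8k
```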
Any ideas?
Again, I'm using 24 drives split into two raidz2 vdevs in one pool. I didn't change the block size at all. BTW, I'm planning on using this for Proxmox and MinIO object storage, so any tuning advice for raw VM images would be appreciated; that's the workload where I discovered the errors.
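For the Proxmox/raw-image side, the usual starting knobs are the dataset recordsize (for file-backed images), the zvol volblocksize (for Proxmox zvols), and lz4 compression. A hedged sketch - the pool/dataset names and the 16k/64K values are placeholders to benchmark against your own workload, not gospel:

```shell
# Hedged sketch: common ZFS starting points for VM storage.
# tank/vmstore and the sizes below are assumptions -- benchmark them.
zfs set compression=lz4 tank/vmstore
zfs set recordsize=64K tank/vmstore        # file-backed raw/qcow2 images
# volblocksize is fixed at zvol creation time and cannot be changed later:
zfs create -V 32G -o volblocksize=16k tank/vm-100-disk-0
```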