Return to Level1Techs.com

Proxmox ZFS storage pool: slow writes, fast reads

So, a little about this system:

AMD FX-8150 at stock speeds, 32 GB DDR3, five 8 TB WD white-label drives and two 6 TB WD white-label drives on a Dell LSI controller, plus one SSD.

I have Proxmox set up with a ZFS pool, and it is working fine. I have created a Samba share and an NFS share, and I can write to both.

But when writing something to the Samba share, it writes at around 5 MB/s while it can read at around 100 MB/s. The write speed has degraded over time, but reads have been fast as always. I need help diagnosing the ZFS system: the Samba share and the NFS share have both had the same issue, degrading over time. It is usually being sent from a Windows machine, and sometimes it is just copying from zpool to zpool, over both the NFS share and the Samba share. I have tried changing configs.

I can resilver drives at 600 MB/s, so I know it is not a raw-throughput problem but something in the network-share implementation or config.

Please let me know what commands you would like a paste of and I will post the results.
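For reference, the kind of first-round diagnostics usually requested in threads like this can be gathered in one go. This is just a sketch, not an official checklist; the pool name "media" is taken from the iostat paste later in the thread, and the unquoted `$cmd` relies on word splitting on purpose:

```shell
# Collect common ZFS write-speed diagnostics under banners so the whole
# log can be pasted into the thread in one block.
log=/tmp/zfs-diag.txt
: > "$log"
for cmd in "zpool status media" \
           "zpool list media" \
           "zfs get sync,compression,recordsize,atime media"; do
  # $cmd is intentionally unquoted so the shell splits it into words.
  { echo "== $cmd =="; $cmd 2>&1 || true; echo; } >> "$log"
done
cat "$log"
```

The `|| true` keeps the loop going even on a box where a command fails, so the log always shows every banner.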

I would also like to put in my P2000 and dedicate it to a Plex LXC container.

I have yet to figure that one out.
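For what it is worth, the common community recipe for handing an NVIDIA card to a Plex LXC container is a handful of lines in the container's config file. A hedged sketch only: `<CTID>` is a placeholder, the NVIDIA driver must already be installed on the Proxmox host, the device major numbers (195 for the main devices, and the `nvidia-uvm` major, which can change between boots) should be checked with `ls -l /dev/nvidia*`, and older Proxmox releases use `lxc.cgroup.` instead of `lxc.cgroup2.`:

```
# /etc/pve/lxc/<CTID>.conf -- allow the container to use the host's NVIDIA
# devices and bind-mount the device nodes into it.
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

Verify the major numbers on your own host before copying this; `509` for `nvidia-uvm` is only an example.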

Try writing directly to the disks, not over SMB.
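A minimal local write test, sketched below. `conv=fdatasync` forces the data to disk before `dd` reports, so the number is real write speed rather than the page cache, and `/dev/urandom` defeats ZFS compression, which would make `/dev/zero` look artificially fast. `TESTDIR=/tmp` is only a stand-in so the command runs anywhere; point it at the pool's mountpoint:

```shell
# Sequential write test: 128 MiB of incompressible data, synced to disk
# before dd prints its throughput figure.
TESTDIR=/tmp   # stand-in; use the pool's mountpoint, e.g. /media
dd if=/dev/urandom of="$TESTDIR/zfs-writetest.bin" bs=1M count=128 conv=fdatasync
```

If this is fast locally while SMB sits at 5 MB/s, the bottleneck is in the share layer or sync-write handling rather than the disks. Delete the test file afterwards.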

Resilver is read-intensive, not write.

How full is the pool?


36.25 TiB total, 26.14 TiB free, 10.11 TiB allocated, 2% fragmentation.

Currently in the middle of resilvering a larger drive

[email protected]:~# zpool iostat 1
capacity operations bandwidth
pool alloc free read write read write


media 10.1T 26.1T 78 49 11.3M 3.38M
media 10.1T 26.1T 2.16K 615 523M 101M
media 10.1T 26.1T 2.14K 602 485M 98.2M
media 10.1T 26.1T 2.06K 727 499M 98.7M
media 10.1T 26.1T 2.11K 568 376M 59.0M
media 10.1T 26.1T 1.70K 606 488M 96.9M
media 10.1T 26.1T 1.95K 626 493M 106M
media 10.1T 26.1T 1.97K 598 450M 89.6M
media 10.1T 26.1T 2.11K 716 498M 111M
media 10.1T 26.1T 3.11K 794 430M 84.2M
media 10.1T 26.1T 4.11K 766 576M 144M
media 10.1T 26.1T 3.34K 898 598M 150M
media 10.1T 26.1T 4.07K 823 566M 142M
media 10.1T 26.1T 2.82K 574 576M 144M
media 10.1T 26.1T 1.86K 374 302M 80.1M
media 10.1T 26.1T 2.78K 759 538M 131M
media 10.1T 26.1T 3.62K 636 534M 134M
media 10.1T 26.1T 3.10K 754 568M 143M
^C
[email protected]:~# zpool iostat 1
capacity operations bandwidth
pool alloc free read write read write


media 10.1T 26.1T 78 49 11.3M 3.39M
media 10.1T 26.1T 0 1.84K 0 456M
media 10.1T 26.1T 1.99K 1.06K 422M 179M
media 10.1T 26.1T 3.12K 738 547M 137M
media 10.1T 26.1T 3.81K 719 556M 139M
media 10.1T 26.1T 607 1.45K 142M 242M
media 10.1T 26.1T 0 1.80K 0 466M
media 10.1T 26.1T 0 1.77K 0 480M
media 10.1T 26.1T 1.56K 1.24K 291M 256M
media 10.1T 26.1T 2.47K 1.10K 477M 120M
media 10.1T 26.1T 4.28K 2.00K 569M 142M
media 10.1T 26.1T 1.91K 1.26K 233M 63.9M
media 10.1T 26.1T 0 5.45K 0 468M

So the first test was a copy from the zpool to /, and the second was a copy to the zpool, all while resilvering.
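A paste like the one above can be summarized with a short awk filter. A sketch only: it assumes the default seven-column `zpool iostat` layout and the M-suffixed bandwidth values seen here, and it skips the first matching line because `zpool iostat`'s first sample is a cumulative average since boot, not a one-second reading:

```shell
# A few rows copied from the paste above, saved so the filter can run:
cat > /tmp/iostat.txt <<'EOF'
media 10.1T 26.1T 78 49 11.3M 3.38M
media 10.1T 26.1T 2.16K 615 523M 101M
media 10.1T 26.1T 0 1.84K 0 456M
EOF

# Average the bandwidth columns ($6 read, $7 write). awk's numeric
# coercion turns "523M" + 0 into 523, so the suffix drops away.
awk '$1 == "media" && NF == 7 {
       if (seen++) { r += $6 + 0; w += $7 + 0 }   # skip cumulative line
     }
     END { if (seen > 1)
             printf "avg read %.1f M/s, avg write %.1f M/s\n",
                    r / (seen - 1), w / (seen - 1) }' /tmp/iostat.txt
```

On the three sample rows this prints `avg read 261.5 M/s, avg write 278.5 M/s`; swap in a full saved paste for real numbers.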

Slow ‘synced’ writes are a common problem with hardware RAID.

Does your RAID card have the option of a BBU?

And is the HBA in IT mode, so ZFS has direct disk access, or in RAID mode passing the drives through?
Just in case the HBA/RAID card is trying something “clever”.
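One quick way to test the sync-write theory, sketched below. The dataset name `media` is assumed from the paste above, and `sync=disabled` trades crash safety for speed, so it is strictly a temporary diagnostic:

```shell
# Capture the current sync setting under a banner for pasting back.
log=/tmp/zfs-sync-check.txt
{ echo "== zfs get sync media =="; zfs get sync media 2>&1 || true; } > "$log"
cat "$log"

# Diagnostic only -- rerun the SMB copy between these two, then revert:
#   zfs set sync=disabled media
#   zfs set sync=standard media
```

If throughput jumps with sync disabled, the pool is starved for synchronous-write IOPS, and the usual fix is a fast SSD as a SLOG device (or a properly configured BBU-backed write cache), not leaving sync off.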

So the LSI card is not in IT mode; it is still flashed with RAID firmware but passing the hard drives through to the system. I would love some help flashing it to IT mode, but every time I try, it fails.

For your info, it is not set up as a hardware RAID; it is all software in Linux/Proxmox/ZFS.


It might have the option, but it does not have the battery.