Custom NAS performance issues, or is it the client Windows PC?

I have a custom-built Arch Linux ZFS NAS which I've had to use more than usual lately.
I wasn't aiming for maximum performance, so it doesn't have much RAM or anything fancy.

Short story: I had a dying SSD in a gaming PC and backed up my Steam folder to the NAS, which worked perfectly at 100 MB/s plus over many, many hours.

I'm currently copying the data back onto a Samsung 980 Pro NVMe "clone" (OEM version) on a Windows 11 system.

NAS system specs:
Ryzen 5 1600 (2600 rebrand)
16 GB ECC memory (one stick seems to have stopped working…)
cheap B450 board
6× 12 TB WD drives shucked from externals
a very old 120 GB SLC SanDisk SSD for the system and swap (I never added the extra swap; it's still 2 GB for some reason)

Here is a screenshot from my glances setup, together with FastCopy doing pretty much nothing. The numbers are misleading; most of the time it's doing nothing. (The default Windows copy never stops "discovering", and TeraCopy does about the same.)

The HDD temps are high; the case is an ugly armoured tank with active coolers in the front, which I'd expect to be fine…

The zpool stats seem to be broken, and I have no idea if the CPU_IOWAIT is normal/to be expected or really bad.

I expected bad write performance, but not bad reads. If someone knowledgeable notices something wrong on the screenshot, I would really appreciate the help.

CPU_IOWAIT means something is wrong with your RAM or you don't have enough RAM. The processor is basically waiting for resources to free up, or it's moving a lot of stuff into and out of the swap file.
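A quick way to check whether the iowait really comes with swap activity is to watch vmstat on the NAS while a copy is running (a rough sketch, nothing NAS-specific assumed):

vmstat 5
# "wa" is CPU time spent waiting on I/O; "si"/"so" are pages swapped in/out per second.
# High wa with si/so near zero points at the disks rather than at swap.

iostat -x 5
# from the sysstat package; the per-device %util and await columns show
# which disk(s) the waiting is actually attributable to.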

Thanks, I will try to fix the memory stick yet again.

Is it to be expected with ZFS that the memory gets heavily loaded even for read operations?

I'm totally aware that I have far less RAM than is recommended for a ZFS pool of this size.

I'm also confused that the swap file is not being used.

Would the 100+ GB still free on the SLC SSD help if I added it as a swap zvol or as L2ARC?
https://wiki.archlinux.org/title/ZFS#Swap_volume
Those warnings are not encouraging, but they're quite dated.
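For what it's worth, attaching the spare SSD space as L2ARC is a one-liner either way (the device path below is just a placeholder, use the real /dev/disk/by-id/ entry):

# add a cache (L2ARC) device to the pool
zpool add gundam2 cache /dev/disk/by-id/ata-SanDisk-SSD-example-part2

# it can be removed again at any time without risk to the pool
zpool remove gundam2 /dev/disk/by-id/ata-SanDisk-SSD-example-part2

Note that L2ARC headers themselves consume ARC (RAM), so with 16 GB it may not buy much.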

I'm not up to date on this, and I have no real experience with it…

I think ZFS does a bucketload of caching into RAM, so your RAM situation might be a giant bottleneck, and I think (I'm not sure about this) that it doesn't use your swap file at all.

I don't agree with @NotNot. Your RAM should not be the problem (my NAS currently has 8 GB and works without problems), and neither should the IO_Wait. I have heard stories about ZFS running on low-RAM systems with 2 GB without issues. ZFS likes to cache, but if it can't, it just serves you directly from the disks! My guess about the IO_Wait is that you just passed the threshold for the warning to trigger.

My guess is that something else is the problem here. Please check the pool performance, e.g. by copying from a local SSD to the pool, and the network performance with something like iperf3.
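A minimal iperf3 run would look like this (the IP is a placeholder for the NAS address):

# on the NAS
iperf3 -s

# on the Windows client (official iperf3.exe builds exist)
iperf3 -c 192.168.1.50
# add -R to also test the NAS-to-client (read) direction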

Uff, I'm out of the loop.
I used fio for this before:

But I don't have a clue how to use it anymore; I even looked for the job file and could not find it.
The iperf3 package is just giving me a 404.
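In case it helps, a minimal fio sequential-read run against the pool could look like this (all values are just a starting point; the size is deliberately larger than the 16 GB of RAM so the ARC can't serve the whole file back from cache):

fio --name=seqread --directory=/mnt/raid --rw=read --bs=1M --size=32G --ioengine=psync
# fio lays out the 32G test file first, then reads it back sequentially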

The old numbers are in this thread: Arch linux kernel 5.5 and zfs... ruining my nas plans? - #32 by Huhn_Morehuhn
So I even have something to compare to.

The RAM issue is getting more and more confusing; it happened back then too, and currently both sticks are showing…

I don't have time to look into the problem here, but it occurred to me that if swap is being used, it might possibly be generating sync writes. If a VM is using a ZVOL as its storage, that can rapidly create massive problems, because every sync write to a ZVOL will force all async writes to flush as well if there is no SLOG. In a similar way, journaled filesystems used in a VM, like XFS, can cause the same problems (even with an SLOG) if their external journal isn't placed on a different ZVOL.

At this time I'm unclear on the exact implications of this regarding datasets, but supposedly it's just per file and isn't as big of a problem (the sync writes themselves are still not great to have). So it could still be an issue if using a qcow2 file.
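Whether that scenario applies at all is quick to check:

# list any zvols on the pool
zfs list -t volume

# show active swap devices/files
swapon --show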

Neither a ZVOL nor swap is used here.
I guess I limited the RAM ZFS can use back in the day.
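If such a limit is still in place, it should show up here (value in bytes; 0 means the default, which on Linux is roughly half the RAM):

cat /sys/module/zfs/parameters/zfs_arc_max

# a persistent limit would typically live in /etc/modprobe.d/zfs.conf
# as a line like the following (hypothetical 8 GiB cap):
# options zfs zfs_arc_max=8589934592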

The problem seems to be limited to reads only.

What are ashift and relatime set to? You probably want to post the pool/dataset property settings.
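Getting those is quick; note that ashift lives at the pool/vdev level, not on the dataset:

zpool get ashift gundam2
zfs get all gundam2
# or, per vdev: zdb -C gundam2 | grep ashift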

Also, as Sapiens said, you want to confirm that:

  1. You can read from the pool locally on the NAS (a quick sketch follows this list).
  2. Your network can do what you want, using iperf.
  3. Whatever file-sharing method you use (Samba, NFS) can serve files from a plain filesystem like ext4 at the speed you need.
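For point 1, something like the following works (the file path is a placeholder; ideally pick a large file that hasn't been read recently, so the ARC can't serve it from RAM):

# watch real per-disk throughput in a second terminal
zpool iostat -v gundam2 5

# sequential read of a big file, discarding the data
dd if=/mnt/raid/some-big-file of=/dev/null bs=1M status=progress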

Sorry, well, this is not going to advance for a while now.
I placed the RAM in the not-recommended DIMM 0 and DIMM 2 slots to get my 16 GB back, and now GRUB cannot be found ("reboot and select proper boot device"), and in one out of six boots GRUB lands in recovery mode ("unknown filesystem").

Which makes absolutely no sense when all I did was move the RAM around, but that's the situation I'm facing now.

The system is back up and running.

I can't get iperf3; I guess I have to update the mirror list to get it, but there were warnings on the Arch wiki about doing that, so I don't know if it's a good idea. Whatever the case, I did a simple cp of the same game locally, and it copied at over 100 MB/s as far as I could see in glances, with the same iowait warning showing.

I was done with that copy before my client Windows PC had even started its transfer; it was still "discovering" (and its copy failed in the end because the SSD was too small). So the local copy was also massively faster.

This should be the information that was asked for:

[root@gundam2 raid]# zfs list -r gundam2
NAME      USED  AVAIL     REFER  MOUNTPOINT
gundam2  21.7T  29.0T     21.7T  /mnt/raid
[root@gundam2 raid]# zfs get all gundam2
NAME     PROPERTY              VALUE                  SOURCE
gundam2  type                  filesystem             -
gundam2  creation              Wed Feb 19  1:46 2020  -
gundam2  used                  21.7T                  -
gundam2  available             29.0T                  -
gundam2  referenced            21.7T                  -
gundam2  compressratio         1.00x                  -
gundam2  mounted               yes                    -
gundam2  quota                 none                   default
gundam2  reservation           none                   default
gundam2  recordsize            1M                     local
gundam2  mountpoint            /mnt/raid              local
gundam2  sharenfs              off                    default
gundam2  checksum              on                     default
gundam2  compression           off                    default
gundam2  atime                 on                     default
gundam2  devices               on                     default
gundam2  exec                  on                     default
gundam2  setuid                on                     default
gundam2  readonly              off                    default
gundam2  zoned                 off                    default
gundam2  snapdir               hidden                 default
gundam2  aclinherit            restricted             default
gundam2  createtxg             1                      -
gundam2  canmount              on                     default
gundam2  xattr                 on                     default
gundam2  copies                1                      default
gundam2  version               5                      -
gundam2  utf8only              off                    -
gundam2  normalization         none                   -
gundam2  casesensitivity       sensitive              -
gundam2  vscan                 off                    default
gundam2  nbmand                off                    default
gundam2  sharesmb              off                    default
gundam2  refquota              none                   default
gundam2  refreservation        none                   default
gundam2  guid                  5167254796260552112    -
gundam2  primarycache          all                    default
gundam2  secondarycache        all                    default
gundam2  usedbysnapshots       0B                     -
gundam2  usedbydataset         21.7T                  -
gundam2  usedbychildren        328M                   -
gundam2  usedbyrefreservation  0B                     -
gundam2  logbias               latency                default
gundam2  objsetid              54                     -
gundam2  dedup                 off                    local
gundam2  mlslabel              none                   default
gundam2  sync                  standard               default
gundam2  dnodesize             legacy                 default
gundam2  refcompressratio      1.00x                  -
gundam2  written               21.7T                  -
gundam2  logicalused           22.5T                  -
gundam2  logicalreferenced     22.5T                  -
gundam2  volmode               default                default
gundam2  filesystem_limit      none                   default
gundam2  snapshot_limit        none                   default
gundam2  filesystem_count      none                   default
gundam2  snapshot_count        none                   default
gundam2  snapdev               hidden                 default
gundam2  acltype               off                    default
gundam2  context               none                   default
gundam2  fscontext             none                   default
gundam2  defcontext            none                   default
gundam2  rootcontext           none                   default
gundam2  relatime              off                    default
gundam2  redundant_metadata    all                    default
gundam2  overlay               off                    default
gundam2  encryption            off                    default
gundam2  keylocation           none                   default
gundam2  keyformat             none                   default
gundam2  pbkdf2iters           0                      default
gundam2  special_small_blocks  0                      default

Even after a long time idle, the memory will not be freed.
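That is most likely just the ARC holding on to cached data, which is normal and harmless; it gives memory back when other programs ask for it. A quick way to confirm (a sketch; the exact output wording varies by version):

arc_summary | head -n 25
# the current ARC size reported here should account for most of the "used" memory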

I have to use a different client PC; the other one broke down. This one is now Win 10.