Some possibilities.
Is the 660p mostly full? It's a QLC drive, and those do tend to drop to 100–300 MB/s once writes spill past the SLC cache and hit the QLC directly.
Are you using NFS, SMB, or SSHFS? I've had a lot of problems with NFS locking up or limiting speeds, and SSHFS is thread-bound and struggles even on my 2600X to get more than 400–500 MB/s. I've always found Samba performs well, but it has security problems so often. Last time I tried to use it, it wouldn't even start the server, basically telling me "this is currently too insecure, please stop using samba".
Have you tried making a 48 GB RAM disk and writing to it with dd, to see if it performs better?
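Something like this — the mount point and size are just examples, and creating the tmpfs needs root:

```shell
# RAM-disk sketch; /mnt/ramdisk and the 48g size are placeholders:
#   sudo mkdir -p /mnt/ramdisk
#   sudo mount -t tmpfs -o size=48g tmpfs /mnt/ramdisk
# Then write to it with dd and read the throughput off the last line.
# Shown here against /tmp with a small file so it runs anywhere; point
# TARGET at the tmpfs mount and raise count for the real 48 GB test.
TARGET=${TARGET:-/tmp/ddtest.bin}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -1
```

conv=fdatasync makes dd flush before reporting, so the number isn't just the page cache absorbing the write.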
I tried a different approach. Ditched the QNAP and made a new config.
The new server is a Supermicro X10 mobo with a 6-core Xeon 2620 and 64 GB of RAM. The Intel 10 GbE NIC is mounted in a PCIe 3.0 x8 slot.
Pool 1 (fastPool): 2 x 4 TB WD NAS SSDs, striped
Pool 2 (slowPool): 4 x 4 TB IronWolf Pro NAS HDDs, striped
The SSDs are connected to a Supermicro SAS controller because I didn't have enough SATA3 cables.
Each pool contains just one 30 GB file.
Two SMB shares were created, one per pool.
Both pools top out at about 3 Gbit/s (~375 MB/s) during copy/paste operations.
If you run iostat -x 1 or atop while transferring (on the server, the client, or both), what do you see? What's your average queue size, average read/write size, how many reads/writes per second, and how many requests are in flight at any given point in time?
Are you perhaps not running SMB3[1], but using an older protocol for some reason, and/or otherwise forcing syncs or limiting the parallelism, pipelining, and buffering that's normally required at higher speeds?
[1]: there are debug flags you can set in smb.conf that'll log connection/request details; they're documented on the Samba website.
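For reference, it'd be something along these lines in smb.conf — the exact debug class names are worth double-checking in the smb.conf man page:

```ini
[global]
    # raise overall verbosity to 1, and the smb2 debug class to 3
    # to see per-request detail in the logs
    log level = 1 smb2:3
    log file = /var/log/samba/log.%m
    max log size = 1000
```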
Don't know what version of Samba TrueNAS SCALE ships with, but I will check.
Maybe the Intel 660p on the client isn't fast enough … I was looking at some benchmarks, and its write speed of 291 MB/s seems like what I'm getting. On the other hand, I was expecting the upload to be bigger than that.
Hmmm, so client storage / nvme0n1 is basically idle 80%+ of the time; it could handle more of whatever it's been doing.
Can you try iotop / atop or something similar in a TrueNAS shell?
You can probably do docker run --privileged --rm --name=temp_debug_storage_perf -it alpine /bin/sh … and then, in that Alpine shell, apk add --no-cache atop sysstat to get atop and iostat going.
Hmm, that tells me that something's off with that TrueNAS Docker setup … or at least it's configured differently from regular vanilla Docker, and I don't have a TrueNAS system around to play with right now.
The Alpine image comes with BusyBox, which has basic network tools built in. If apk is hanging, maybe the container's networking is just broken… can you try pinging alpinelinux.org? If that fails, try giving it --network=host when creating/running the Alpine container.
iperf3 also has a mode where it can write/read the data to/from disk (not just transfer it over the network). So you can try running that in both directions, and it should reveal where the problem is…
Either this will tell you that the bottleneck is storage on one of the systems, or, if it still performs well, that the bottleneck is in the software stack (probably TrueNAS SCALE…).
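The iperf3 disk-mode runs would look roughly like this — hostname and paths are placeholders, and -F/--file is the flag that makes iperf3 use a file on disk as the payload instead of generating it in memory:

```shell
# On the server:
#   iperf3 -s
# Client -> server, payload read from the client's disk:
#   iperf3 -c nas.local -F /path/to/big_testfile
# Server -> client (-R reverses direction), received data written to disk:
#   iperf3 -c nas.local -R -F /tmp/received.bin
#
# Compare both against a plain run without -F: if the -F numbers drop,
# the disk is the bottleneck; if they stay high, suspect the software stack.
```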
Your bottleneck could be the SMB protocol. You either need to run multi-channel SMB (if available) or NFS. In my experience, a regular single-stream SMB transfer will never saturate a 10 Gbit network.
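If the Samba build on SCALE is new enough, multi-channel is a one-line toggle in smb.conf (recent Samba versions on Linux enable it by default; the client also has to negotiate it — Windows does, and Linux cifs mounts need the multichannel mount option):

```ini
[global]
    # allow clients to open multiple TCP connections per SMB session
    server multi channel support = yes
```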
We ran a battery of tests and confirmed SMB is never going to approach the speed you'd expect. We run a 25 Gbit/s backbone.
We ran tests with NFS and it wasn't just faster, it was orders of magnitude faster. SMB isn't network-friendly.
If you can use NFS instead, try it. We now run DaVinci Resolve on Linux and it kicks ass. Windows supports NFS too if Linux isn't an option.
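A quick sketch of the NFS route — server name, export path, and mount point are all placeholders:

```shell
# On TrueNAS you'd normally create the NFS share in the UI; on a plain
# Linux server it's a line in /etc/exports:
#   /mnt/fastpool  192.168.1.0/24(rw,async,no_subtree_check)
# followed by: sudo exportfs -ra
#
# On the Linux client (needs nfs-common / nfs-utils installed):
#   sudo mount -t nfs -o vers=4.2 nas.local:/mnt/fastpool /mnt/nas
```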
Sorry for the late reply. Yes, I use a Samba share, but I can easily try NFS since both PCs run Linux (Pop!_OS and TrueNAS SCALE).
Tomorrow I will install some spare Samsung 980 Pro NVMe drives and test with those … one in the server, one in the client. Will let you know my progress.
Maybe it's that you are running an old CPU! A 4-core DDR3-era CPU at 2.6 GHz will not be enough if you have many programs running. Also, an SSD rated at 550 MB/s doesn't actually write at 550: in my tests I get 510 MB/s reads but only 170 MB/s writes. An M.2 drive with 5000 MB/s reads might do it! Or the drive may have a 4 GB cache that fills up when writing to the disks.