Whenever I read from or write to my CIFS share, smbd uses 100% of my server's CPU, and whatever task is doing the write on the client uses far more CPU than it has any reason to.
Running dd if=/dev/zero of=file.txt count=10 bs=1G from the client is a reliable way to reproduce this: dd sits at about 12% CPU on the client while smbd pegs 100% on the server for the duration.
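For anyone wanting to reproduce this on their own setup, here is a smaller, self-contained sketch of the same test. The paths are placeholders (it writes to /tmp so it runs anywhere; point TESTFILE at your CIFS mount to generate the real load), and pidstat comes from the sysstat package:

```shell
# Placeholder path: substitute your CIFS mount point to reproduce the real load.
TESTFILE=/tmp/cifs_cpu_test.bin

# Write 64 MiB of zeros; conv=fdatasync forces the data to storage so the
# write cost is actually paid before dd exits.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>/dev/null

# Confirm the file landed at the expected size (64 MiB = 67108864 bytes).
stat -c %s "$TESTFILE"

# Meanwhile, on the server, watch smbd's per-second CPU usage in another
# terminal (assumes a single smbd worker; from the sysstat package):
#   pidstat -p "$(pidof -s smbd)" 1

rm -f "$TESTFILE"
```

Scaling count back up (e.g. bs=1G count=10 as above) against the share should make the smbd CPU spike obvious in pidstat or top.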
My client is a 4-core VM running on the server itself. The server's CPU is a Threadripper 1950X.
The CIFS share is a RAID 5 array built from four Hitachi/HGST H7280A520SUN8.0T HDDs.
The RAID card is a Dell PERC H830.
I understand that tasks using the array have to deal with I/O wait, but this CPU usage is affecting other tasks on the system that don't touch the array at all.
This sounds about right. I saw the same thing when I had NTFS mounts in Linux. I could be incorrect, but from my brief research, it has to do with the fact that those processes (smbd) run in user space. So you're seeing the CPU usage as they churn through the I/O, whereas writing directly to ext4 or similar happens in kernel space.
Ok cool. Did turning off encryption help your CPU? At some point Samba introduced AES-NI support, but I'm not sure whether that completely offloads encryption on the server or not. It's also possible that the Debian package predates the feature.
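For reference, the setting in question is smb encrypt in smb.conf (newer Samba versions also accept the alias server smb encrypt). A minimal sketch of turning data encryption off to benchmark the difference — treat this as a starting point, not a recommended production config:

```
[global]
    # Don't negotiate SMB3 transport encryption for share data.
    # Authentication itself is still protected; only the file payload
    # goes over the wire in the clear.
    smb encrypt = off
```

Run testparm and restart smbd after changing this, then rerun the dd test to compare CPU usage.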
Note that authentication is still secure and encrypted; it's only the file-sharing data itself that goes unencrypted. If that's a concern for you, isolating the traffic on your LAN with a VLAN/PVLAN/firewall is the best option if you need performant file sharing. Encrypting on the fly is inherently computationally expensive.